Dataset column summary (name, dtype, value range / string length range):

  Unnamed: 0         int64    0 - 350k
  level_0            int64    0 - 351k
  ApplicationNumber  int64    9.75M - 96.1M
  ArtUnit            int64    1.6k - 3.99k
  Abstract           string   lengths 1 - 8.37k
  Claims             string   lengths 3 - 292k
  abstract-claims    string   lengths 68 - 293k
  TechCenter         int64    1.6k - 3.9k
Row 8,100: ApplicationNumber 15,139,524, ArtUnit 2,421
Various embodiments of systems, apparatus, and/or methods are described for identifying a preferred sporting event. A receiving device receives viewing preferences from a user, sports data from a sports data provider, and programming information for candidate sporting events from a content provider. The receiving device then analyzes the programming information for one or more preferred sporting events based at least in part on the user's viewing preferences and the sports data. The user may then be notified of the preferred sporting event.
1. A method, comprising: receiving viewing preferences on a receiving device; receiving sports data from a sports data provider, wherein the sports data includes a likelihood of an occurrence of at least one content characteristic; receiving programming information for a plurality of candidate sporting events that are available to be viewed; analyzing the programming information to filter the plurality of candidate sporting events based at least in part on the viewing preferences and the sports data to determine one or more preferred sporting events with the likelihood of the occurrence of the at least one content characteristic; outputting a notification of the one or more preferred sporting events; determining whether the one or more preferred sporting events is selected for viewing within a predetermined time window; and in response to determining that the one or more preferred sporting events is not selected for viewing within the predetermined time window, automatically recording the one or more preferred sporting events. 2. The method of claim 1, further comprising: receiving a selection of the one or more preferred sporting events; and displaying the selected sporting event on a presentation device. 3. The method of claim 1, further comprising: determining multiple preferred sporting events with the likelihood of the occurrence of the at least one content characteristic; and automatically recording a set of the multiple preferred sporting events that is not being viewed. 4. The method of claim 1, wherein the viewing preferences comprise one or more of a sports team, a player, a division, a conference, a league, and a geographic region. 5. The method of claim 1, wherein the sports data comprises statistics related to the plurality of candidate sporting events available to be viewed. 6. 
The method of claim 5, wherein the statistics comprises active team statistics, active player statistics, a game score, a likelihood of a comeback, a rivalry, a likelihood of an exciting event, or a combination thereof. 7. The method of claim 1, wherein the notification comprises one or more of a visual notification, an audio notification, and a tactile notification. 8. The method of claim 1, wherein the notification is output by a presentation device. 9. The method of claim 1, wherein the notification indicates one or more player positions, a likelihood of an exciting event, a description of a game's status, a game score, or a combination thereof. 10. The method of claim 1, wherein the sports data provider is a crowd-sourced data source. 11. A receiving device, comprising: a user communication module to receive viewing preferences; a communication module to receive sports data and programming information for a plurality of candidate sporting events that are available to be viewed, wherein the sports data includes a likelihood of an occurrence of at least one content characteristic; a control logic to analyze the programming information to filter the plurality of candidate sporting events based at least in part on the viewing preferences and the sports data to determine one or more preferred sporting events with the likelihood of the occurrence of the at least one content characteristic; and a rendering module to output a notification of the one or more preferred sporting events, wherein the control logic is further configured to determine whether the one or more preferred sporting events is selected for viewing within a predetermined time window, and in response to determining that the one or more preferred sporting events is not selected for viewing within the predetermined time window, automatically record the one or more preferred sporting events. 12. 
The receiving device of claim 11, wherein the user communication module receives a selection of the one or more sporting events, and wherein the rendering module outputs the selected sporting event to a presentation device. 13. The receiving device of claim 11, wherein the control logic is configured to: determine multiple preferred sporting events with the likelihood of the occurrence of the at least one content characteristic; and automatically record a set of the multiple preferred sporting events that is not being viewed. 14. The receiving device of claim 11, wherein the viewing preferences comprise one or more of a sports team, a player, a division, a conference, a league, and a geographic region. 15. The receiving device of claim 11, wherein the sports data comprises statistics related to the plurality of candidate sporting events available to be viewed. 16. The receiving device of claim 15, wherein the statistics comprises active team statistics, active player statistics, a game score, a likelihood of a comeback, a rivalry, a likelihood of an exciting event, or a combination thereof. 17. The receiving device of claim 11, wherein the notification comprises one or more of a visual notification, an audio notification, and a tactile notification. 18. The receiving device of claim 11, wherein the notification is output by a presentation device. 19. The receiving device of claim 11, wherein the notification indicates one or more player positions, a likelihood of an exciting event, a description of a game's status, a game score, or a combination thereof. 20. The receiving device of claim 11, wherein the sports data provider is a crowd-sourced data source.
abstract-claims: verbatim concatenation of the Abstract and Claims texts above (duplicate omitted).
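The filtering and auto-record flow described in claims 1-3 can be sketched as follows. All names, data shapes, and the choice of "excitement likelihood" as the content characteristic are illustrative assumptions, not part of the patent text:

```python
from dataclasses import dataclass

@dataclass
class CandidateEvent:
    title: str
    teams: tuple                   # e.g. ("Hawks", "Lions")
    excitement_likelihood: float   # 0.0-1.0, supplied by the sports data provider

def find_preferred_events(candidates, preferred_teams, min_likelihood):
    # Filter candidates by viewing preferences and by the likelihood of
    # the at-least-one content characteristic (here: an exciting event).
    return [
        e for e in candidates
        if any(t in preferred_teams for t in e.teams)
        and e.excitement_likelihood >= min_likelihood
    ]

def auto_record_unselected(preferred, selected_titles):
    # Any preferred event not selected for viewing within the time
    # window is recorded automatically.
    return [e.title for e in preferred if e.title not in selected_titles]

candidates = [
    CandidateEvent("Game 1", ("Hawks", "Lions"), 0.9),
    CandidateEvent("Game 2", ("Bears", "Wolves"), 0.8),
    CandidateEvent("Game 3", ("Hawks", "Bears"), 0.7),
]
preferred = find_preferred_events(candidates, preferred_teams={"Hawks"},
                                  min_likelihood=0.5)
recorded = auto_record_unselected(preferred, selected_titles={"Game 1"})
# preferred: "Game 1" and "Game 3"; recorded: ["Game 3"]
```

The sketch treats "not selected within the predetermined time window" as a set-membership check; a real receiving device would track the window with a timer.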
TechCenter: 2,400
Row 8,101: ApplicationNumber 11,479,751, ArtUnit 2,465
A method of operating a service provider system comprises receiving a state message from an access system indicating state information for access wherein the access system provides a device with the access, receiving a service request from the device identifying the device and the service, determining a plurality of service options for the service based on the state message, generating a service response indicating the plurality of service options, and transmitting the service response.
1. A method of operating a service provider system, the method comprising: receiving a state message from an access system indicating state information for access wherein the access system provides a device with the access; receiving a service request from the device identifying the device and the service; determining a plurality of service options for the service based on the state message; generating a service response indicating the plurality of service options; and transmitting the service response. 2. The method of claim 1 further comprising receiving a selection message indicating a selected service option of the plurality of service options and configuring the service for the selected service option. 3. The method of claim 2 wherein the state information comprises available bandwidth. 4. The method of claim 3 wherein the plurality of service options comprise a plurality of coding/decoding protocols. 5. The method of claim 4 wherein the access system comprises a modem. 6. The method of claim 5 wherein the service comprises a video service. 7. The method of claim 5 wherein the service comprises a voice service. 8. A service provider system comprising: an interface configured to receive a state message from an access system indicating state information for access wherein the access system provides a device with the access, receive a service request from the device identifying the device and the service, and transmit a service response; and a processing system configured to determine a plurality of service options for the service based on the state message and generate the service response indicating the plurality of service options. 9. The service provider system of claim 8 wherein the interface is further configured to receive a selection message indicating a selected service option of the plurality of service options and wherein the processing system is further configured to configure the service for the selected service option. 10. 
A communication system comprising: a device; an access system configured to provide the device with access to a service; and a service provider system configured to provide the service to the device; wherein the access system transmits a state message indicating state information for the access; wherein the device transmits a service request to the service provider system identifying the device and the service; wherein the service provider system receives the service request, determines a plurality of service options for the service based on the state message, generates a service response indicating the plurality of service options, and transmits the service response. 11. The communication system of claim 10 wherein the device receives the service response, generates a selection message indicating one service option of the plurality of service options, and transmits the selection message. 12. The communication system of claim 11 wherein the service provider system receives the selection message and configures the service for the one service option. 13. The communication system of claim 12 wherein the state information comprises available bandwidth. 14. The communication system of claim 13 wherein the plurality of service options comprise a plurality of coding/decoding protocols. 15. The communication system of claim 14 wherein the access system comprises a modem. 16. The communication system of claim 15 wherein the service comprises a video service. 17. The communication system of claim 15 wherein the service comprises a voice service. 18. 
A method of monitoring the connection of an end user device to a network, wherein the connection includes at least two communication links, the method comprising: collecting performance data associated with the connection; comparing the performance data to a threshold, the threshold being associated with a particular characteristic of the performance data; and presenting at least one option to a user of the end user device in response to the comparison, the at least one option being associated with a change in an application being processed by the end user device, the application being associated with data communicated by the end user device over the connection. 19. The method of claim 18, wherein the at least two communication links include a wireless link and a wired link, and wherein collecting the performance data further includes collecting data associated with an amount of bandwidth available to the end user device over the connection. 20. The method of claim 18, wherein the method further comprises terminating the application in response to the selection of the at least one option.
abstract-claims: verbatim concatenation of the Abstract and Claims texts above (duplicate omitted).
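A minimal sketch of the bandwidth-driven option negotiation in claims 1-7. The codec names and bitrate figures are hypothetical; the claims only require that the options be coding/decoding protocols determined from the state reported by the access system:

```python
CODEC_BANDWIDTH_KBPS = {   # hypothetical codec bitrate requirements
    "h264_hd": 5000,
    "h264_sd": 1500,
    "h263_low": 400,
}

def determine_service_options(state_message):
    # Keep only the coding/decoding protocols that fit within the
    # available bandwidth reported by the access system.
    available = state_message["available_bandwidth_kbps"]
    return [codec for codec, kbps in CODEC_BANDWIDTH_KBPS.items()
            if kbps <= available]

def handle_service_request(state_message, service_request):
    # Build the service response listing the determined options.
    options = determine_service_options(state_message)
    return {"device": service_request["device"],
            "service": service_request["service"],
            "options": options}

response = handle_service_request(
    {"available_bandwidth_kbps": 2000},        # state message from the access system
    {"device": "stb-01", "service": "video"},  # service request from the device
)
# response["options"] == ["h264_sd", "h263_low"]
```

The device would then return a selection message naming one of the listed options, and the provider would configure the service for it (claims 2 and 9).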
TechCenter: 2,400
Row 8,102: ApplicationNumber 14,681,723, ArtUnit 2,485
A stereo vision system includes a first camera sensor and a second camera sensor. The first camera sensor is configured to sense first reflected energy and generate first sensor signals based on the sensed first reflected energy. The second camera sensor is configured to sense second reflected energy and generate second sensor signals based on the sensed second reflected energy. The stereo vision system further includes a processor configured to receive the first sensor signals from the first camera sensor and configured to receive the second sensor signals from the second camera sensor. The processor is configured to perform stereo matching based on the first sensor signals and the second sensor signals. The first camera sensor is configured to sense reflected energy that is infrared radiation. The second camera sensor is configured to sense reflected energy that is infrared radiation.
1. A stereo vision system for use in a vehicle, the stereo vision system comprising: a first camera sensor configured to sense first reflected energy and generate first sensor signals based on the sensed first reflected energy; a second camera sensor configured to sense second reflected energy and generate second sensor signals based on the sensed second reflected energy; and a processor configured to receive the first sensor signals from the first camera sensor and configured to receive the second sensor signals from the second camera sensor, wherein the processor is configured to perform stereo matching based on the first sensor signals and the second sensor signals, wherein the first camera sensor is configured to sense reflected energy that is infrared radiation, and wherein the second camera sensor is configured to sense reflected energy that is infrared radiation. 2. The stereo vision system of claim 1, wherein the processor is configured to perform the stereo matching by producing a stereo range map, wherein the processor is configured to perform object detection using the stereo range map, wherein the processor is configured to perform object tracking using a result of the object detection, and wherein the processor is configured to provide an output signal based on a result of the object tracking in order to provide assistance to a driver of the vehicle. 3. The stereo vision system of claim 1, wherein the first camera sensor is configured to sense reflected energy that is short-wavelength infrared radiation, and wherein the second camera sensor is configured to sense reflected energy that is short-wavelength infrared radiation. 4. The stereo vision system of claim 1, wherein an energy sensitive area of the first camera sensor is constructed using indium gallium arsenide, and wherein an energy sensitive area of the second camera sensor is constructed using indium gallium arsenide. 5. 
The stereo vision system of claim 1, wherein the stereo vision system does not include an active illumination component for emitting electromagnetic radiation that can be sensed by the stereo vision system upon reflection off of objects in an environment sensed by the stereo vision system. 6. The stereo vision system of claim 1, wherein the stereo vision system does not include a component for emitting infrared radiation. 7. The stereo vision system of claim 1, the stereo vision system further comprising: an active illumination component configured to emit infrared radiation. 8. The stereo vision system of claim 7, wherein the active illumination component is configured to alternate between emitting infrared radiation and not emitting infrared radiation, and wherein the active illumination component is configured to emit infrared radiation in synchronization with an exposure interval of the first camera sensor and an exposure interval of the second camera sensor. 9. The stereo vision system of claim 7, wherein the active illumination component comprises: one or more laser diodes configured to emit infrared radiation in one or more collimated beams; and one or more optical filters configured to produce one or more diffused conic beams from the one or more collimated beams. 10. 
The stereo vision system of claim 9, wherein the one or more laser diodes comprises: a first laser diode configured to emit infrared radiation in a first collimated beam; a second laser diode configured to emit infrared radiation in a second collimated beam; and a third laser diode configured to emit infrared radiation in a third collimated beam, wherein the one or more optical filters comprises: a first optical filter configured to produce a first diffused conic beam at a first dispersion angle from the first collimated beam; a second optical filter configured to produce a second diffused conic beam at a second dispersion angle from the second collimated beam; and a third optical filter configured to produce a third diffused conic beam at a third dispersion angle from the third collimated beam, wherein the first dispersion angle is different from the second dispersion angle and the third dispersion angle, and wherein the second dispersion angle is different from the third dispersion angle. 11. The stereo vision system of claim 1, the stereo vision system further comprising: a third camera sensor configured to sense third reflected energy and generate third sensor signals based on the sensed third reflected energy, wherein the processor is configured to receive the third sensor signals from the third camera sensor, and wherein the third camera sensor is configured to sense reflected energy that is infrared radiation. 12. The stereo vision system of claim 11, wherein the second camera sensor is positioned between the first camera sensor and the third camera sensor, wherein the processor is configured to perform first stereo matching based on the first sensor signals and the second sensor signals but not the third sensor signals, and wherein the processor is configured to perform second stereo matching based on the second sensor signals and the third sensor signals but not the first sensor signals. 13. 
The stereo vision system of claim 12, wherein the processor performs the first stereo matching for a first downrange distance range having a first minimum downrange distance and a first maximum downrange distance, wherein the processor performs the second stereo matching for a second downrange distance range having a second minimum downrange distance and a second maximum downrange distance, and wherein the first minimum downrange distance is substantially the same as the second minimum downrange distance. 14. The stereo vision system of claim 12, wherein the processor is configured to perform first object tracking based on a result of the first stereo matching but not based on a result of the second stereo matching, and wherein the processor is configured to perform second object tracking based on a result of the second stereo matching but not based on a result of the first stereo matching. 15. The stereo vision system of claim 14, wherein the processor is configured to perform merging of a result of the first object tracking and a result of the second object tracking. 16. The stereo vision system of claim 11, wherein the second camera sensor is positioned between the first camera sensor and the third camera sensor, wherein the processor is configured to perform first stereo matching based on the first sensor signals and the second sensor signals but not the third sensor signals, and wherein the processor is configured to perform second stereo matching based on the first sensor signals and the third sensor signals but not the second sensor signals. 17. 
The stereo vision system of claim 16, wherein the processor performs the first stereo matching for a first downrange distance range having a first minimum downrange distance and a first maximum downrange distance, wherein the processor performs the second stereo matching for a second downrange distance range having a second minimum downrange distance and a second maximum downrange distance, and wherein the first maximum downrange distance is substantially the same as the second minimum downrange distance. 18. The stereo vision system of claim 16, wherein the processor is configured to perform merging of a result of the first stereo matching and a result of the second stereo matching. 19. The stereo vision system of claim 18, wherein the processor performs the merging by performing a union of a first stereo range map resulting from the first stereo matching and a second stereo range map resulting from the second stereo matching. 20. A stereo vision system for use in a vehicle, the stereo vision system comprising: a first camera sensor configured to sense first reflected energy and generate first sensor signals based on the sensed first reflected energy; a second camera sensor configured to sense second reflected energy and generate second sensor signals based on the sensed second reflected energy; a third camera sensor configured to sense third energy and generate third sensor signals based on the sensed third energy; and a processor configured to receive the first sensor signals from the first camera sensor, configured to receive the second sensor signals from the second camera sensor, and configured to receive the third sensor signals from the third camera sensor, wherein the processor is further configured to perform stereo matching based on at least one of the first sensor signals, the second sensor signals, and the third sensor signals, wherein the first camera sensor is configured to sense reflected energy that is visible radiation, wherein the second 
camera sensor is configured to sense reflected energy that is visible radiation, wherein the third camera sensor is configured to sense energy that is infrared radiation. 21. The stereo vision system of claim 20, wherein the third camera sensor is configured to sense energy that is thermal emitted energy. 22. The stereo vision system of claim 20, wherein the processor is configured to perform merging of the first sensor signals, the second sensor signals, and the third sensor signals after performing image rectification but prior to performing stereo matching. 23. The stereo vision system of claim 20, wherein the processor is configured to perform combining of the first sensor signals, the second sensor signals, and the third sensor signals after performing image rectification but prior to performing stereo matching. 24. The stereo vision system of claim 20, wherein the processor is configured to perform stereo matching based on the first sensor signals and the second sensor signals in order to produce a stereo range map, and wherein the processor is configured to perform combining of the third sensor signals with the stereo range map. 25. The stereo vision system of claim 20, wherein the processor is configured to perform first stereo matching based on the first sensor signals and the second sensor signals, wherein the processor is configured to perform first object tracking based on a result of the first stereo matching, wherein the processor is configured to perform second object tracking based on the third sensor signals, and wherein the processor is configured to perform combining of a result of the first object tracking and a result of the second object tracking. 26. 
A method for stereo vision in a vehicle, the method comprising: sensing first reflected energy using a first camera sensor; generating first sensor signals based on the sensed first reflected energy; sensing second reflected energy using a second camera sensor; generating second sensor signals based on the sensed second reflected energy; and performing stereo matching based on the first sensor signals and the second sensor signals, wherein the first reflected energy is infrared radiation, and wherein the second reflected energy is infrared radiation.
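The stereo matching step in the method claim above is left abstract. Purely as an illustration, not the patented method, the sketch below matches one scanline pair with a sum-of-absolute-differences (SAD) block matcher; the window size, disparity range, and cost function are arbitrary choices:

```python
import numpy as np

def disparity_sad(left_row, right_row, window=3, max_disp=8):
    """Toy 1-D stereo matching: for each pixel of the left scanline, find
    the horizontal shift (disparity) into the right scanline that minimizes
    the sum of absolute differences over a small window."""
    n = len(left_row)
    half = window // 2
    disp = np.zeros(n, dtype=int)
    for x in range(half, n - half):
        patch = left_row[x - half:x + half + 1]
        best, best_d = np.inf, 0
        # Only disparities that keep the candidate window in bounds.
        for d in range(min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            cost = np.abs(patch - cand).sum()
            if cost < best:
                best, best_d = cost, d
        disp[x] = best_d
    return disp

# A bright feature shifted right by 2 pixels between the views.
right = np.array([0., 0., 0., 9., 0., 0., 0., 0., 0., 0.])
left = np.array([0., 0., 0., 0., 0., 9., 0., 0., 0., 0.])
print(disparity_sad(left, right))
```

Around the feature the matcher recovers the 2-pixel shift; textureless regions are ambiguous, which is why practical systems add regularization.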
A stereo vision system includes a first camera sensor and a second camera sensor. The first camera sensor is configured to sense first reflected energy and generate first sensor signals based on the sensed first reflected energy. The second camera sensor is configured to sense second reflected energy and generate second sensor signals based on the sensed second reflected energy. The stereo vision system further includes a processor configured to receive the first sensor signals from the first camera sensor and configured to receive the second sensor signals from the second camera sensor. The processor is configured to perform stereo matching based on the first sensor signals and the second sensor signals. The first camera sensor is configured to sense reflected energy that is infrared radiation. The second camera sensor is configured to sense reflected energy that is infrared radiation.1. A stereo vision system for use in a vehicle, the stereo vision system comprising: a first camera sensor configured to sense first reflected energy and generate first sensor signals based on the sensed first reflected energy; a second camera sensor configured to sense second reflected energy and generate second sensor signals based on the sensed second reflected energy; and a processor configured to receive the first sensor signals from the first camera sensor and configured to receive the second sensor signals from the second camera sensor, wherein the processor is configured to perform stereo matching based on the first sensor signals and the second sensor signals, wherein the first camera sensor is configured to sense reflected energy that is infrared radiation, and wherein the second camera sensor is configured to sense reflected energy that is infrared radiation. 2. 
The stereo vision system of claim 1, wherein the processor is configured to perform the stereo matching by producing a stereo range map, wherein the processor is configured to perform object detection using the stereo range map, wherein the processor is configured to perform object tracking using a result of the object detection, and wherein the processor is configured to provide an output signal based on a result of the object tracking in order to provide assistance to a driver of the vehicle. 3. The stereo vision system of claim 1, wherein the first camera sensor is configured to sense reflected energy that is short-wavelength infrared radiation, and wherein the second camera sensor is configured to sense reflected energy that is short-wavelength infrared radiation. 4. The stereo vision system of claim 1, wherein an energy sensitive area of the first camera sensor is constructed using indium gallium arsenide, and wherein an energy sensitive area of the second camera sensor is constructed using indium gallium arsenide. 5. The stereo vision system of claim 1, wherein the stereo vision system does not include an active illumination component for emitting electromagnetic radiation that can be sensed by the stereo vision system upon reflection off of objects in an environment sensed by the stereo vision system. 6. The stereo vision system of claim 1, wherein the stereo vision system does not include a component for emitting infrared radiation. 7. The stereo vision system of claim 1, the stereo vision system further comprising: an active illumination component configured to emit infrared radiation. 8. 
The stereo vision system of claim 7, wherein the active illumination component is configured to alternate between emitting infrared radiation and not emitting infrared radiation, and wherein the active illumination component is configured to emit infrared radiation in synchronization with an exposure interval of the first camera sensor and an exposure interval of the second camera sensor. 9. The stereo vision system of claim 7, wherein the active illumination component comprises: one or more laser diodes configured to emit infrared radiation in one or more collimated beams; and one or more optical filters configured to produce one or more diffused conic beams from the one or more collimated beams. 10. The stereo vision system of claim 9, wherein the one or more laser diodes comprises: a first laser diode configured to emit infrared radiation in a first collimated beam; a second laser diode configured to emit infrared radiation in a second collimated beam; and a third laser diode configured to emit infrared radiation in a third collimated beam, wherein the one or more optical filters comprises: a first optical filter configured to produce a first diffused conic beam at a first dispersion angle from the first collimated beam; a second optical filter configured to produce a second diffused conic beam at a second dispersion angle from the second collimated beam; and a third optical filter configured to produce a third diffused conic beam at a third dispersion angle from the third collimated beam, wherein the first dispersion angle is different from the second dispersion angle and the third dispersion angle, and wherein the second dispersion angle is different from the third dispersion angle. 11. 
The stereo vision system of claim 1, the stereo vision system further comprising: a third camera sensor configured to sense third reflected energy and generate third sensor signals based on the sensed third reflected energy, wherein the processor is configured to receive the third sensor signals from the third camera sensor, and wherein the third camera sensor is configured to sense reflected energy that is infrared radiation. 12. The stereo vision system of claim 11, wherein the second camera sensor is positioned between the first camera sensor and the third camera sensor, wherein the processor is configured to perform first stereo matching based on the first sensor signals and the second sensor signals but not the third sensor signals, and wherein the processor is configured to perform second stereo matching based on the second sensor signals and the third sensor signals but not the first sensor signals. 13. The stereo vision system of claim 12, wherein the processor performs the first stereo matching for a first downrange distance range having a first minimum downrange distance and a first maximum downrange distance, wherein the processor performs the second stereo matching for a second downrange distance range having a second minimum downrange distance and a second maximum downrange distance, and wherein the first minimum downrange distance is substantially the same as the second minimum downrange distance. 14. The stereo vision system of claim 12, wherein the processor is configured to perform first object tracking based on a result of the first stereo matching but not based on a result of the second stereo matching, and wherein the processor is configured to perform second object tracking based on a result of the second stereo matching but not based on a result of the first stereo matching. 15. 
The stereo vision system of claim 14, wherein the processor is configured to perform merging of a result of the first object tracking and a result of the second object tracking. 16. The stereo vision system of claim 11, wherein the second camera sensor is positioned between the first camera sensor and the third camera sensor, wherein the processor is configured to perform first stereo matching based on the first sensor signals and the second sensor signals but not the third sensor signals, and wherein the processor is configured to perform second stereo matching based on the first sensor signals and the third sensor signals but not the second sensor signals. 17. The stereo vision system of claim 16, wherein the processor performs the first stereo matching for a first downrange distance range having a first minimum downrange distance and a first maximum downrange distance, wherein the processor performs the second stereo matching for a second downrange distance range having a second minimum downrange distance and a second maximum downrange distance, and wherein the first maximum downrange distance is substantially the same as the second minimum downrange distance. 18. The stereo vision system of claim 16, wherein the processor is configured to perform merging of a result of the first stereo matching and a result of the second stereo matching. 19. The stereo vision system of claim 18, wherein the processor performs the merging by performing a union of a first stereo range map resulting from the first stereo matching and a second stereo range map resulting from the second stereo matching. 20. 
A stereo vision system for use in a vehicle, the stereo vision system comprising: a first camera sensor configured to sense first reflected energy and generate first sensor signals based on the sensed first reflected energy; a second camera sensor configured to sense second reflected energy and generate second sensor signals based on the sensed second reflected energy; a third camera sensor configured to sense third energy and generate third sensor signals based on the sensed third energy; and a processor configured to receive the first sensor signals from the first camera sensor, configured to receive the second sensor signals from the second camera sensor, and configured to receive the third sensor signals from the third camera sensor, wherein the processor is further configured to perform stereo matching based on at least one of the first sensor signals, the second sensor signals, and the third sensor signals, wherein the first camera sensor is configured to sense reflected energy that is visible radiation, wherein the second camera sensor is configured to sense reflected energy that is visible radiation, wherein the third camera sensor is configured to sense energy that is infrared radiation. 21. The stereo vision system of claim 20, wherein the third camera sensor is configured to sense energy that is thermal emitted energy. 22. The stereo vision system of claim 20, wherein the processor is configured to perform merging of the first sensor signals, the second sensor signals, and the third sensor signals after performing image rectification but prior to performing stereo matching. 23. The stereo vision system of claim 20, wherein the processor is configured to perform combining of the first sensor signals, the second sensor signals, and the third sensor signals after performing image rectification but prior to performing stereo matching. 24. 
The stereo vision system of claim 20, wherein the processor is configured to perform stereo matching based on the first sensor signals and the second sensor signals in order to produce a stereo range map, and wherein the processor is configured to perform combining of the third sensor signals with the stereo range map. 25. The stereo vision system of claim 20, wherein the processor is configured to perform first stereo matching based on the first sensor signals and the second sensor signals, wherein the processor is configured to perform first object tracking based on a result of the first stereo matching, wherein the processor is configured to perform second object tracking based on the third sensor signals, and wherein the processor is configured to perform combining of a result of the first object tracking and a result of the second object tracking. 26. A method for stereo vision in a vehicle, the method comprising: sensing first reflected energy using a first camera sensor; generating first sensor signals based on the sensed first reflected energy; sensing second reflected energy using a second camera sensor; generating second sensor signals based on the sensed second reflected energy; and performing stereo matching based on the first sensor signals and the second sensor signals, wherein the first reflected energy is infrared radiation, and wherein the second reflected energy is infrared radiation.
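Claims 18 and 19 merge the near-range and far-range stereo results by taking a union of the two range maps. The patent does not give an implementation; a minimal sketch under assumed conventions (NaN marks pixels with no match, and the boundary distance `NEAR_MAX` is a hypothetical value, not one from the patent):

```python
import numpy as np

# Assumed boundary: the first pair's maximum downrange distance, which per
# claim 17 is substantially the same as the second pair's minimum distance.
NEAR_MAX = 30.0

def merge_range_maps(near_map, far_map, boundary=NEAR_MAX):
    """Union of two stereo range maps (meters): keep the near-pair estimate
    wherever it is valid and within the boundary, else fall back to the
    far-pair estimate."""
    use_near = ~np.isnan(near_map) & (near_map <= boundary)
    return np.where(use_near, near_map, far_map)

near = np.array([[5.0, np.nan], [12.0, 28.0]])
far = np.array([[np.nan, 45.0], [np.nan, 31.0]])
print(merge_range_maps(near, far))
```

The union here is pixelwise: each output cell comes from whichever map covers that distance range, so the merged map spans both downrange intervals.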
2,400
8,103
8,103
13,979,008
2,483
A sequence of digital images is encoded into a bitstream, at least one portion of an image being encoded by motion compensation with respect to a reference image portion. A target number of motion information predictors is obtained. Using the target number, a set of motion information predictors having controlled diversity is generated. A motion information predictor for the image portion to encode is selected from the generated set of motion information predictors. The target number is signaled in the bitstream, together with information relating to the selected motion information predictor.
1. A method of encoding a sequence of digital images into a bitstream, at least one portion of an image being encoded by motion compensation with respect to a reference image portion, the method comprising: obtaining for an image portion to encode a target number of motion information predictors; generating using said target number a set of motion information predictors; and selecting a motion information predictor for said image portion to encode from said generated set of motion information predictors, and signaling in said bitstream said target number and information relating to the selected motion information predictor. 2. A method as claimed in claim 1, wherein said target number is inserted in a header included in said bitstream. 3. A method as claimed in claim 1, wherein said target number is inserted in a slice header included in said bitstream. 4. A method as claimed in claim 1, comprising: obtaining a first target number of motion information predictors applicable in a first encoding mode; obtaining a second target number of motion information predictors applicable in a second encoding mode; and signaling the first target number in said bitstream when the first encoding mode is applied and signaling the second target number in said bitstream when the second encoding mode is applied. 5. A method as claimed in claim 1, wherein generating said set of motion information predictors comprises: obtaining an initial set of motion information predictors; testing whether the number of motion information predictors in said initial set is lower than the obtained target number and, if so, adding one or more motion information predictors to said initial set. 6. 
A method as claimed in claim 5, wherein the motion information predictors of said initial set are actual motion information predictors, having motion vectors obtained from image portions of said image being encoded or of a reference image, and potential motion information predictors for addition include one or more further such actual motion information predictors and also include one or more virtual motion information predictors not having motion vectors obtained from image portions of said image being encoded or of a reference image. 7. A method as claimed in claim 5, wherein at least one said virtual motion information predictor is computed from an existing motion information predictor. 8. A method as claimed in claim 7, wherein a supplementary vector is added to a motion vector of an existing motion information predictor, the supplementary vector having a predetermined direction relative to the direction of the motion vector of the existing motion information predictor. 9. A method as claimed in claim 8, wherein the magnitude of the supplementary vector is dependent on the magnitude of the motion vector of the existing motion information predictor. 10. A method as claimed in claim 8, wherein the supplementary vector has components proportional to respective corresponding components of the motion vector of the existing motion information predictor. 11. A method as claimed in claim 5, comprising eliminating duplicates from said initial set. 12. 
A method of decoding a bitstream comprising an encoded sequence of digital images, at least one portion of an image being encoded by motion compensation with respect to a reference image, the method comprising: obtaining from said bitstream a target number of motion information predictors for an image portion to decode; generating using said target number a set of motion information predictors having controlled diversity; and determining a motion information predictor for said image portion to decode from the generated set of motion information predictors. 13. A method as claimed in claim 12, further comprising decoding an item of information representative of a selected motion information predictor for said image portion to decode. 14. A method as claimed in claim 13, further comprising retrieving said selected motion information predictor from said generated set of motion information predictors using said decoded item of information. 15. A method as claimed in claim 12, wherein said target number is obtained from a header included in said bitstream. 16. A method as claimed in claim 12, wherein said target number is obtained from a slice header included in said bitstream. 17. A method as claimed in claim 12, comprising: obtaining from said bitstream a first target number of motion information predictors when a first encoding mode is applied; and obtaining from said bitstream a second target number of motion information predictors when a second encoding mode is applied. 18. A method as claimed in claim 12, wherein generating said set of motion information predictors comprises: obtaining an initial set of motion information predictors; testing whether the number of motion information predictors in said initial set is lower than the obtained target number and, if so, adding one or more motion information predictors to said initial set. 19. 
A method as claimed in claim 18, wherein the motion information predictors of said initial set are actual motion information predictors, having motion vectors obtained from image portions of said image being decoded or of a reference image, and potential motion information predictors for addition include one or more further such actual motion information predictors and also include one or more virtual motion information predictors not having motion vectors obtained from image portions of said image being decoded or of a reference image. 20. A method as claimed in claim 18, wherein at least one said virtual motion information predictor is computed from an existing motion information predictor. 21. A method as claimed in claim 19, wherein a supplementary vector is added to a motion vector of an existing motion information predictor, the supplementary vector having a predetermined direction relative to the direction of the motion vector of the existing motion information predictor. 22. A method as claimed in claim 21, wherein the magnitude of the supplementary vector is dependent on the magnitude of the motion vector of the existing motion information predictor. 23. A method as claimed in claim 21, wherein the supplementary vector has components proportional to respective corresponding components of the motion vector of the existing motion information predictor. 24. A method as claimed in claim 18, comprising eliminating duplicates from said initial set. 25. 
A device for encoding a sequence of digital images into a bitstream, at least one portion of an image being encoded by motion compensation with respect to a reference image portion, the device comprising: an obtaining unit configured to obtain a target number of motion information predictors; a generating unit configured to generate using said target number a set of motion information predictors; and a selecting unit configured to select a motion information predictor for said image portion to encode from said generated set of motion information predictors, and signal in said bitstream said target number and information relating to the selected motion information predictor. 26. A device for decoding a bitstream comprising an encoded sequence of digital images, at least one portion of an image being encoded by motion compensation with respect to a reference image, the device comprising: an obtaining unit configured to obtain from said bitstream a target number of motion information predictors; a generating unit configured to generate using said target number a set of motion information predictors; and a determining unit configured to determine a motion information predictor for an image portion to decode from said generated set of motion information predictors. 27-28. (canceled) 29. 
A non-transitory computer readable carrier medium comprising processor executable code for performing a method of encoding a sequence of digital images into a bitstream, in which method at least one portion of an image is encoded by motion compensation with respect to a reference image portion, wherein execution of the processor executable code by one or more processors causes the one or more processors to: obtain for an image portion to encode a target number of motion information predictors; generate using said target number a set of motion information predictors; and select a motion information predictor for said image portion to encode from said generated set of motion information predictors, and signal in said bitstream said target number and information relating to the selected motion information predictor. 30. A non-transitory computer readable carrier medium comprising processor executable code for performing a method of decoding a bitstream comprising an encoded sequence of digital images, in which method at least one portion of an image is encoded by motion compensation with respect to a reference image, wherein execution of the processor executable code by one or more processors causes the one or more processors to: obtain from said bitstream a target number of motion information predictors for an image portion to decode; generate using said target number a set of motion information predictors; and determine a motion information predictor for said image portion to decode from the generated set of motion information predictors. 31. A method as claimed in claim 1, wherein the number of motion information predictors in said set of motion information predictors is equal to the target number. 32. A method as claimed in claim 1, wherein the generated set of motion information predictors has controlled diversity. 33. 
A method as claimed in claim 12, wherein the number of motion information predictors in said set of motion information predictors is equal to the target number. 34. A method as claimed in claim 12, wherein the generated set of motion information predictors has controlled diversity.
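Claims 5, 10, and 11 together describe deduplicating an initial predictor set and padding it up to the target number with virtual predictors whose supplementary vector has components proportional to an existing motion vector. A hedged Python sketch; the `scale` factor, integer truncation, and tuple representation are illustrative assumptions, not values from the patent:

```python
def generate_predictor_set(initial, target_number, scale=0.25):
    """Build a predictor set of (at most) target_number motion vectors:
    eliminate duplicates (claim 11), then add 'virtual' predictors, each an
    existing predictor plus a supplementary vector proportional to it
    (claims 5 and 10), until the target number is reached."""
    # Deduplicate while preserving order.
    seen, preds = set(), []
    for mv in initial:
        if mv not in seen:
            seen.add(mv)
            preds.append(mv)
    # Pad with virtual predictors derived from existing ones.
    i = 0
    while len(preds) < target_number and i < len(preds):
        vx, vy = preds[i]
        virtual = (vx + int(vx * scale), vy + int(vy * scale))
        if virtual not in seen:
            seen.add(virtual)
            preds.append(virtual)
        i += 1
    return preds[:target_number]

print(generate_predictor_set([(4, -8), (4, -8), (0, 0)], 4))
```

If every derived virtual predictor collides with an existing one, the loop can stop short of the target; a real codec would fall back to other predictor sources, which this sketch omits.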
A sequence of digital images is encoded into a bitstream, at least one portion of an image being encoded by motion compensation with respect to a reference image portion. A target number of motion information predictors is obtained. Using the target number, a set of motion information predictors having controlled diversity is generated. A motion information predictor for the image portion to encode is selected from the generated set of motion information predictors. The target number is signaled in the bitstream, together with information relating to the selected motion information predictor.1. A method of encoding a sequence of digital images into a bitstream, at least one portion of an image being encoded by motion compensation with respect to a reference image portion, the method comprising: obtaining for an image portion to encode a target number of motion information predictors; generating using said target number a set of motion information predictors; and selecting a motion information predictor for said image portion to encode from said generated set of motion information predictors, and signaling in said bitstream said target number and information relating to the selected motion information predictor. 2. A method as claimed in claim 1, wherein said target number is inserted in a header included in said bitstream. 3. A method as claimed in claim 1, wherein said target number is inserted in a slice header included in said bitstream. 4. A method as claimed in claim 1, comprising: obtaining a first target number of motion information predictors applicable in a first encoding mode; obtaining a second target number of motion information predictors applicable in a second encoding mode; and signaling the first target number in said bitstream when the first encoding mode is applied and signaling the second target number in said bitstream when the second encoding mode is applied. 5. 
A method as claimed in claim 1, wherein generating said set of motion information predictors comprises: obtaining an initial set of motion information predictors; testing whether the number of motion information predictors in said initial set is lower than the obtained target number and, if so, adding one or more motion information predictors to said initial set. 6. A method as claimed in claim 5, wherein the motion information predictors of said initial set are actual motion information predictors, having motion vectors obtained from image portions of said image being encoded or of a reference image, and potential motion information predictors for addition include one or more further such actual motion information predictors and also include one or more virtual motion information predictors not having motion vectors obtained from image portions of said image being encoded or of a reference image. 7. A method as claimed in claim 5, wherein at least one said virtual motion information predictor is computed from an existing motion information predictor. 8. A method as claimed in claim 7, wherein a supplementary vector is added to a motion vector of an existing motion information predictor, the supplementary vector having a predetermined direction relative to the direction of the motion vector of the existing motion information predictor. 9. A method as claimed in claim 8, wherein the magnitude of the supplementary vector is dependent on the magnitude of the motion vector of the existing motion information predictor. 10. A method as claimed in claim 8, wherein the supplementary vector has components proportional to respective corresponding components of the motion vector of the existing motion information predictor. 11. A method as claimed in claim 5, comprising eliminating duplicates from said initial set. 12. 
A method of decoding a bitstream comprising an encoded sequence of digital images, at least one portion of an image being encoded by motion compensation with respect to a reference image, the method comprising: obtaining from said bitstream a target number of motion information predictors for an image portion to decode; generating using said target number a set of motion information predictors having controlled diversity; and determining a motion information predictor for said image portion to decode from the generated set of motion information predictors. 13. A method as claimed in claim 12, further comprising decoding an item of information representative of a selected motion information predictor for said image portion to decode. 14. A method as claimed in claim 13, further comprising retrieving said selected motion information predictor from said generated set of motion information predictors using said decoded item of information. 15. A method as claimed in claim 12, wherein said target number is obtained from a header included in said bitstream. 16. A method as claimed in claim 12, wherein said target number is obtained from a slice header included in said bitstream. 17. A method as claimed in claim 12, comprising: obtaining from said bitstream a first target number of motion information predictors when a first encoding mode is applied; and obtaining from said bitstream a second target number of motion information predictors when a second encoding mode is applied. 18. A method as claimed in claim 12, wherein generating said set of motion information predictors comprises: obtaining an initial set of motion information predictors; testing whether the number of motion information predictors in said initial set is lower than the obtained target number and, if so, adding one or more motion information predictors to said initial set. 19. 
A method as claimed in claim 18, wherein the motion information predictors of said initial set are actual motion information predictors, having motion vectors obtained from image portions of said image being decoded or of a reference image, and potential motion information predictors for addition include one or more further such actual motion information predictors and also include one or more virtual motion information predictors not having motion vectors obtained from image portions of said image being decoded or of a reference image. 20. A method as claimed in claim 18, wherein at least one said virtual motion information predictor is computed from an existing motion information predictor. 21. A method as claimed in claim 19, wherein a supplementary vector is added to a motion vector of an existing motion information predictor, the supplementary vector having a predetermined direction relative to the direction of the motion vector of the existing motion information predictor. 22. A method as claimed in claim 21, wherein the magnitude of the supplementary vector is dependent on the magnitude of the motion vector of the existing motion information predictor. 23. A method as claimed in claim 21, wherein the supplementary vector has components proportional to respective corresponding components of the motion vector of the existing motion information predictor. 24. A method as claimed in claim 18, comprising eliminating duplicates from said initial set. 25. 
A device for encoding a sequence of digital images into a bitstream, at least one portion of an image being encoded by motion compensation with respect to a reference image portion, the device comprising: an obtaining unit configured to obtain a target number of motion information predictors; a generating unit configured to generate using said target number a set of motion information predictors; and a selecting unit configured to select a motion information predictor for said image portion to encode from said generated set of motion information predictors, and signal in said bitstream said target number and information relating to the selected motion information predictor. 26. A device for decoding a bitstream comprising an encoded sequence of digital images, at least one portion of an image being encoded by motion compensation with respect to a reference image, the device comprising: an obtaining unit configured to obtain from said bitstream a target number of motion information predictors; a generating unit configured to generate using said target number a set of motion information predictors; and a determining unit configured to determine a motion information predictor for an image portion to decode from said generated set of motion information predictors. 27-28. (canceled) 29. 
A non-transitory computer readable carrier medium comprising processor executable code for performing a method of encoding a sequence of digital images into a bitstream, in which method at least one portion of an image is encoded by motion compensation with respect to a reference image portion, wherein execution of the processor executable code by one or more processors causes the one or more processors to: obtain for an image portion to encode a target number of motion information predictors; generate using said target number a set of motion information predictors; and select a motion information predictor for said image portion to encode from said generated set of motion information predictors, and signal in said bitstream said target number and information relating to the selected motion information predictor. 30. A non-transitory computer readable carrier medium comprising processor executable code for performing a method of decoding a bitstream comprising an encoded sequence of digital images, in which method at least one portion of an image is encoded by motion compensation with respect to a reference image, wherein execution of the processor executable code by one or more processors causes the one or more processors to: obtain from said bitstream a target number of motion information predictors for an image portion to decode; generate using said target number a set of motion information predictors; and determine a motion information predictor for said image portion to decode from the generated set of motion information predictors. 31. A method as claimed in claim 1, wherein the number of motion information predictors in said set of motion information predictors is equal to the target number. 32. A method as claimed in claim 1, wherein the generated set of motion information predictors has controlled diversity. 33. 
A method as claimed in claim 12, wherein the number of motion information predictors in said set of motion information predictors is equal to the target number. 34. A method as claimed in claim 12, wherein the generated set of motion information predictors has controlled diversity.
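The predictor-set generation recited above (claims 19-24 and 31: start from actual motion information predictors, eliminate duplicates, then pad with virtual predictors formed by adding a supplementary vector whose components are proportional to those of an existing motion vector, until the set reaches the target number) can be sketched as follows. This is an illustrative reading of the claims, not the patented implementation; the function name and the proportionality constant `scale` are assumptions.

```python
def generate_predictor_set(actual_predictors, target_number, scale=0.25):
    """Build a predictor set of size `target_number` with controlled diversity.

    `actual_predictors` is a list of (x, y) motion vectors taken from
    neighbouring image portions; `scale` is a hypothetical proportionality
    constant for the supplementary vector (claims 22-23), not from the patent.
    """
    # Eliminate duplicates from the initial set (claim 24).
    predictors = []
    for mv in actual_predictors:
        if mv not in predictors:
            predictors.append(mv)

    # Pad with virtual predictors until the target number is reached (claim 31).
    attempts = 0
    while len(predictors) < target_number and attempts < 4 * target_number:
        base = predictors[attempts % len(predictors)] if predictors else (0.0, 0.0)
        # Supplementary vector with components proportional to the existing
        # motion vector (same direction, growing magnitude), per claims 21-23.
        factor = scale * (1 + attempts)
        virtual = (base[0] + base[0] * factor, base[1] + base[1] * factor)
        if virtual not in predictors:
            predictors.append(virtual)
        attempts += 1
    return predictors[:target_number]
```

For example, `generate_predictor_set([(4, 2), (4, 2), (1, -3)], 4)` deduplicates the two `(4, 2)` entries and then appends two scaled virtual predictors to reach the target number of four.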
2,400
8,104
8,104
14,416,970
2,454
Embodiments disclosed herein provide systems and methods for distributing applications to virtual machines. In a particular embodiment, a method includes providing a list of one or more attachable applications and receiving a selection indicating at least one application of the one or more attachable applications to be attached to a virtual machine. The method further includes attaching the at least one application to the virtual machine.
1. A method of operating an application distribution system, comprising: providing a list of one or more attachable applications; receiving a selection indicating at least one application of the one or more attachable applications to be attached to a virtual machine; and attaching the at least one application to the virtual machine. 2. The method of claim 1, further comprising: starting the virtual machine. 3. The method of claim 1, wherein attaching the at least one application to the virtual machine comprises: identifying at least one storage volume based on the at least one application; and attaching the at least one storage volume to the virtual machine. 4. The method of claim 3, further comprising: in the virtual machine, executing an application from the at least one storage volume. 5. The method of claim 3, wherein attaching the at least one storage volume to the virtual machine comprises: directing a hypervisor to attach the at least one storage volume to the virtual machine. 6. The method of claim 3, further comprising: overlaying content into the virtual machine, wherein the content makes the at least one application on the at least one storage volume available to the virtual machine. 7. The method of claim 3, further comprising: detecting a detach triggering event; and in response to the detach triggering event, detaching the at least one storage volume from the virtual machine. 8. The method of claim 1, wherein a storage system, comprising the at least one storage volume, is located remotely from a host computer system, comprising the virtual machine, over a communication network. 9. The method of claim 1, wherein the selection is received from a user. 10. 
A computer readable medium having instructions stored thereon for operating an application distribution system, wherein the instructions, when executed by the application distribution system, direct the application distribution system to: provide a list of one or more attachable applications; receive a selection indicating at least one application of the one or more attachable applications to be attached to a virtual machine; and attach the at least one application to the virtual machine. 11. The computer readable medium of claim 10, wherein the instructions further direct the application distribution system to: start the virtual machine. 12. The computer readable medium of claim 10, wherein the instructions that direct the application distribution system to attach the at least one application to the virtual machine comprise instructions that direct the application distribution system to: identify at least one storage volume based on the at least one application; and attach the at least one storage volume to the virtual machine. 13. The computer readable medium of claim 12, wherein the virtual machine executes an application from the at least one storage volume. 14. The computer readable medium of claim 12, wherein the instructions that direct the application distribution system to attach the at least one storage volume to the virtual machine comprise instructions that direct the application distribution system to: direct a hypervisor to attach the at least one storage volume to the virtual machine. 15. The computer readable medium of claim 12, wherein content is overlaid into the virtual machine and wherein the content makes the at least one application on the at least one storage volume available to the virtual machine. 16. 
The computer readable medium of claim 12, wherein the instructions further direct the application distribution system to: detect a detach triggering event; and in response to the detach triggering event, detach the at least one storage volume from the virtual machine. 17. The computer readable medium of claim 10, wherein a storage system, comprising the at least one storage volume, is located remotely from a host computer system, comprising the virtual machine, over a communication network. 18. The computer readable medium of claim 10, wherein the selection is received from a user. 19. An application distribution system, comprising: a plurality of storage volumes comprising one or more attachable applications; a processing system configured to provide a list of the one or more attachable applications, receive a selection indicating at least one application of the one or more attachable applications to be attached to a virtual machine, and attach the at least one application to the virtual machine. 20. The application distribution system of claim 19, further comprising: a host computer configured to start the virtual machine.
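The attach flow in claims 1, 3 and 5 (map each selected application to a storage volume, then direct a hypervisor to attach those volumes to the virtual machine) can be sketched as below. The `Hypervisor` class, catalog shape, and all names are illustrative assumptions, not a real virtualization API.

```python
class Hypervisor:
    """Minimal stand-in for the hypervisor the claims direct (claim 5)."""
    def __init__(self):
        self.attached = {}  # vm name -> list of attached storage volumes

    def attach_volume(self, vm_name, volume):
        self.attached.setdefault(vm_name, []).append(volume)


def attach_applications(catalog, hypervisor, vm_name, selection):
    """Attach the storage volume behind each selected application to the VM.

    `catalog` maps attachable application names to storage volumes
    (claim 3: identify at least one storage volume based on the application).
    """
    volumes = [catalog[app] for app in selection if app in catalog]
    for volume in volumes:
        hypervisor.attach_volume(vm_name, volume)  # claim 5: direct hypervisor
    return volumes
```

With a catalog such as `{"editor": "vol-editor-01"}`, attaching the selection `["editor"]` to `"vm-1"` records `"vol-editor-01"` against that VM in the hypervisor.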
2,400
8,105
8,105
15,024,816
2,488
Innovations in hash-based block matching facilitate block copy (“BC”) prediction that is more effective in terms of rate-distortion performance and/or computational efficiency of encoding. For example, some of the innovations relate to encoding blocks with hash values determined using hash-based block matching. Other innovations relate to reconstructing blocks using hash values determined with hash-based block matching.
1. A computing device comprising one or more processing units and memory, wherein the computing device implements an encoder of video or images, the encoder being configured to perform operations comprising: encoding data for a current block of sample values of a picture, including: determining a hash value for the current block, the hash value for the current block being determined using the sample values of the current block; and identifying a matching block among multiple candidate blocks based at least in part on the hash value for the current block; and outputting the encoded data, wherein the encoded data includes the hash value for the matching block to represent the current block. 2. The computing device of claim 1 wherein the encoding data for the current block further comprises encoding the hash value for the current block. 3. The computing device of claim 1 wherein the determining the hash value for the current block uses one of a cyclic redundancy check function, a hash function that includes averaging and XOR operations, and a locality-sensitive hash function. 4.-5. (canceled) 6. The computing device of claim 1 wherein the determining the hash value for the current block uses a hash function that includes block width and block height as inputs. 7. The computing device of claim 1 wherein the picture that includes the current block also includes the multiple candidate blocks, and wherein the encoding data for the current block uses intra block copy prediction. 8. The computing device of claim 1 wherein another picture includes at least some of the multiple candidate blocks. 9. The computing device of claim 1 wherein the identifying the matching block includes, for each of one or more of the multiple candidate blocks, comparing the hash value for the current block to a hash value for the candidate block. 10. 
In a computing device with a video decoder or image decoder, a method comprising: receiving encoded data for a picture, wherein the encoded data includes a hash value for a current block of sample values of the picture, the hash value for the current block having been determined using the sample values of the current block; and decoding the current block, including: identifying a reconstruction block among multiple candidate blocks based at least in part on the hash value for the current block; and using the reconstruction block for the current block. 11. The method of claim 10 wherein the decoding the current block further comprises decoding the hash value for the current block. 12. The method of claim 10 wherein a data structure organizes the multiple candidate blocks according to hash value. 13. The method of claim 10 further comprising, for each of the multiple candidate blocks, determining a hash value using one of a cyclic redundancy check function, a hash function that includes averaging and XOR operations, and a locality-sensitive hash function. 14.-15. (canceled) 16. The method of claim 10 further comprising, for each of the multiple candidate blocks, determining a hash value using a hash function that includes block width and block height as inputs. 17. The method of claim 10 wherein the picture that includes the current block also includes the multiple candidate blocks, and wherein the decoding the current block uses intra block copy prediction. 18. The method of claim 10 wherein another picture includes at least some of the multiple candidate blocks. 19.-20. (canceled) 21. 
One or more computer-readable media storing computer-executable instructions for causing a computing device, when programmed thereby, to perform operations comprising: receiving encoded data for a picture, wherein the encoded data includes a hash value for a current block of sample values of the picture, the hash value for the current block having been determined using the sample values of the current block; and decoding the current block, including: identifying a reconstruction block among multiple candidate blocks based at least in part on the hash value for the current block; and using the reconstruction block for the current block. 22. The one or more computer-readable media of claim 21 wherein the decoding the current block further comprises decoding the hash value for the current block. 23. The one or more computer-readable media of claim 21 wherein a data structure organizes the multiple candidate blocks according to hash value. 24. The one or more computer-readable media of claim 21 wherein the operations further comprise, for each of the multiple candidate blocks, determining a hash value using one of a cyclic redundancy check function, a hash function that includes averaging and XOR operations, and a locality-sensitive hash function. 25. The one or more computer-readable media of claim 21 wherein the picture that includes the current block also includes the multiple candidate blocks, and wherein the decoding the current block uses intra block copy prediction. 26. The one or more computer-readable media of claim 21 wherein another picture includes at least some of the multiple candidate blocks.
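Hash-based block matching as recited in claims 1, 6, 9 and 12 (hash the current block's sample values, organize candidate blocks in a data structure keyed by hash value, and match by lookup) can be sketched as follows. `zlib.crc32` stands in for the claimed "cyclic redundancy check function" (claim 3); the block layout and helper names are assumptions.

```python
import zlib

def block_hash(samples, width, height):
    # Claim 6: include block width and height as hash-function inputs.
    data = bytes([width, height]) + bytes(samples)
    return zlib.crc32(data)

def build_candidate_index(candidates, width, height):
    # Claim 12: a data structure organizes candidate blocks by hash value.
    # `candidates` maps a block position to its list of sample values.
    index = {}
    for pos, samples in candidates.items():
        index.setdefault(block_hash(samples, width, height), []).append(pos)
    return index

candidates = {(0, 0): [10, 20, 30, 40], (8, 0): [10, 20, 30, 41]}
index = build_candidate_index(candidates, 2, 2)
current = [10, 20, 30, 40]
matches = index.get(block_hash(current, 2, 2), [])
# matches -> [(0, 0)]: only the identical candidate block shares the hash
```

A decoder following claim 10 would perform the same lookup in reverse: given the signaled hash value, it selects the reconstruction block whose hash matches.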
2,400
8,106
8,106
15,374,344
2,473
Described herein are systems, methods, and software to capture packets of interest in a virtual switch. In one implementation, a method of capturing packets of interest in a virtual switch includes identifying a request to capture packets associated with first packet attributes. The method further includes, in response to the request, assigning a virtual port for forwarding the packets associated with the first packet attributes, and implementing a forwarding rule in the virtual switch to forward the packets associated with the first packet attributes to at least the virtual port. The method further provides for directing traffic over the virtual switch using the forwarding rule.
1. A method of capturing packets of interest in a virtual switch, the method comprising: identifying a request to capture packets associated with first packet attributes; assigning one or more virtual ports to capture the packets associated with the first packet attributes; implementing a forwarding rule in the virtual switch to forward the packets associated with the first packet attributes to at least the one or more virtual ports; receiving a packet at the virtual switch; determining whether the packet qualifies to be forwarded to the one or more virtual ports based on the forwarding rule; and if the packet qualifies to be forwarded to the one or more virtual ports, forwarding the packet to at least the one or more virtual ports. 2. The method of claim 1 wherein identifying the request to capture packets associated with the first packet attributes comprises identifying an administrator request to capture packets associated with the first packet attributes. 3. The method of claim 1 wherein receiving the packet at the virtual switch comprises receiving the packet from a virtual machine coupled to the virtual switch. 4. The method of claim 1 wherein receiving the packet at the virtual switch comprises receiving the packet from a physical network interface of a host computing system for the virtual switch. 5. The method of claim 1 wherein the first packet attributes comprise at least one of a source address for a communication, a destination address for a communication, a protocol associated with the communication, or a time of the communication. 6. The method of claim 1 wherein implementing the forwarding rule comprises replacing a first forwarding rule in the virtual switch with a new forwarding rule in the virtual switch to forward packets associated with the first packet attributes to at least the one or more virtual ports. 7. 
The method of claim 1 wherein forwarding the packet to at least the one or more virtual ports comprises forwarding the packet to the one or more virtual ports and at least one virtual machine coupled to the virtual switch. 8. The method of claim 7 wherein assigning the one or more virtual ports to capture the packets associated with the first packet attributes comprises assigning the one or more virtual ports to capture the packets associated with the first packet attributes, wherein at least one of the one or more virtual ports is coupled to a log file to store the packets associated with the first packet attributes. 9. A computer apparatus comprising: one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media to capture packets of interest in a virtual switch that, when read and executed by the processing system, direct the processing system to at least: identify a request to capture packets associated with first packet attributes; assign one or more virtual ports to capture the packets associated with the first packet attributes; implement a forwarding rule in the virtual switch to forward the packets associated with the first packet attributes to at least the one or more virtual ports; receive a packet at the virtual switch; determine whether the packet qualifies to be forwarded to the one or more virtual ports based on the forwarding rule; and if the packet qualifies to be forwarded to the one or more virtual ports, forward the packet to at least the one or more virtual ports. 10. The computer apparatus of claim 9 wherein the program instructions to identify the request to capture packets associated with the first packet attributes direct the processing system to identify an administrator request to capture packets associated with the first packet attributes. 11. 
The computer apparatus of claim 9 wherein the program instructions to receive the packet at the virtual switch direct the processing system to receive the packet from a virtual machine coupled to the virtual switch. 12. The computer apparatus of claim 9 wherein the program instructions to receive the packet at the virtual switch direct the processing system to receive the packet from a physical network interface of a host computing system for the virtual switch. 13. The computer apparatus of claim 9 wherein the first packet attributes comprise at least one of a source address for a communication, a destination address for a communication, a protocol associated with the communication, or a time of the communication. 14. The computer apparatus of claim 9 wherein the program instructions to implement the forwarding rule direct the processing system to replace a first forwarding rule in the virtual switch with a new forwarding rule in the virtual switch to forward packets associated with the first packet attributes to at least the one or more virtual ports. 15. The computer apparatus of claim 9 wherein the program instructions to forward the packet to at least the one or more virtual ports direct the processing system to forward the packet to the one or more virtual ports and at least one virtual machine coupled to the virtual switch. 16. The computer apparatus of claim 15 wherein the program instructions to assign the one or more virtual ports to capture the packets associated with the first packet attributes direct the processing system to assign the one or more virtual ports to capture the packets associated with the first packet attributes, wherein at least one of the one or more virtual ports is coupled to a log file to store the packets associated with the first packet attributes. 17. 
An apparatus comprising: one or more computer readable storage media; program instructions stored on the one or more computer readable storage media to capture packets of interest in a virtual switch that, when read and executed by a processing system, direct the processing system to at least: identify a request to capture packets associated with first packet attributes; assign one or more virtual ports to capture the packets associated with the first packet attributes; implement a forwarding rule in the virtual switch to forward the packets associated with the first packet attributes to at least the one or more virtual ports; receive a packet at the virtual switch; determine whether the packet qualifies to be forwarded to the one or more virtual ports based on the forwarding rule; and if the packet qualifies to be forwarded to the one or more virtual ports, forward the packet to at least the one or more virtual ports. 18. The apparatus of claim 17 wherein the program instructions to receive the packet at the virtual switch direct the processing system to receive the packet from a virtual machine coupled to the virtual switch. 19. The apparatus of claim 17 wherein the program instructions to receive the packet at the virtual switch direct the processing system to receive the packet from a physical network interface of a host computing system for the virtual switch. 20. The apparatus of claim 17 wherein the program instructions to assign the one or more virtual ports to capture the packets associated with the first packet attributes direct the processing system to assign the one or more virtual ports to capture the packets associated with the first packet attributes, wherein at least one of the one or more virtual ports is coupled to a log file to store the packets associated with the first packet attributes.
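The capture flow in claims 1, 5 and 7 (a forwarding rule matches packets against the requested attributes such as source address and protocol, and qualifying packets are forwarded to the capture port in addition to their normal destination) can be sketched as below. The `Packet` shape, port names and rule fields are illustrative assumptions, not taken from any real virtual-switch API.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    protocol: str

def matches(rule, packet):
    """True when every attribute named in the rule equals the packet's value
    (claim 5: attributes may include source, destination, protocol)."""
    return all(getattr(packet, attr) == value for attr, value in rule.items())

def forward(rule, capture_port, packet, normal_ports):
    """Claim 1: forward qualifying packets to at least the capture port(s)."""
    ports = list(normal_ports)
    if matches(rule, packet):
        # Claim 7: capture alongside normal delivery to the virtual machine.
        ports.append(capture_port)
    return ports

rule = {"src": "10.0.0.5", "protocol": "tcp"}
pkt = Packet(src="10.0.0.5", dst="10.0.0.9", protocol="tcp")
print(forward(rule, "vport-capture", pkt, ["vport-vm1"]))
# prints ['vport-vm1', 'vport-capture']
```

A packet from any other source, or using another protocol, fails `matches` and is forwarded only to its normal port, leaving the capture port untouched.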
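The claimed capture flow above (assign a capture port, install a forwarding rule keyed on packet attributes, mirror qualifying packets to that port) can be sketched in a few lines. This is an illustrative model only, not the patent's implementation; the class and attribute names (`VirtualSwitch`, `PacketAttributes`, `request_capture`) are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PacketAttributes:
    # None means "match any value" for that attribute
    src: str = None
    dst: str = None
    protocol: str = None

@dataclass
class Packet:
    src: str
    dst: str
    protocol: str
    payload: bytes = b""

class VirtualSwitch:
    def __init__(self):
        self.capture_rules = []   # list of (attributes, capture port ids)
        self.capture_ports = {}   # port id -> captured packets (the "log file")

    def request_capture(self, attrs: PacketAttributes) -> int:
        """Identify a capture request and assign a virtual port for it."""
        port_id = len(self.capture_ports)
        self.capture_ports[port_id] = []                 # port backed by a log
        self.capture_rules.append((attrs, [port_id]))    # implement the rule
        return port_id

    @staticmethod
    def _qualifies(pkt: Packet, attrs: PacketAttributes) -> bool:
        return all(
            getattr(attrs, f) is None or getattr(attrs, f) == getattr(pkt, f)
            for f in ("src", "dst", "protocol")
        )

    def receive(self, pkt: Packet) -> None:
        """Receive a packet; mirror it to capture ports if a rule matches."""
        for attrs, ports in self.capture_rules:
            if self._qualifies(pkt, attrs):
                for pid in ports:
                    self.capture_ports[pid].append(pkt)
        # normal forwarding toward the destination VM or physical NIC
        # would happen here regardless of capture
```

Note that capture is a mirror, not a diversion: matching packets go to the capture port in addition to their normal destination, matching claim 7's "the one or more virtual ports and at least one virtual machine".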
2,400
8,107
8,107
15,004,655
2,439
A cryptographic system includes an online computer, an offline computer and custom hardware and software by which the two computers can securely communicate to facilitate the creation, secure use, and maintenance of private cryptographic keys. The system securely stores private cryptographic keys while still enabling the keys to be quickly and easily accessed as needed in a variety of applications including, but not limited to, electronic financial transactions, cryptographic transaction processing, medical record access, email encryption, or any other cryptographic authentication process.
1. A computer-implemented method for securely storing and using private cryptographic keys utilizing a cryptographic system comprising an online computer, an offline computer, a first communication device, a second communication device, an inner Faraday cage housing the offline computer and the second communication device, and an outer Faraday cage housing the online computer, the first communication device, and the inner Faraday cage, the method comprising the steps of: (a) transmitting a cryptographic operation request requiring a private key in an electrical signal from the online computer to the first communication device; (b) converting, by the first communication device, the electrical signal received from the online computer into an optical signal, and transmitting the optical signal from the first communication device outside the inner Faraday cage to the second communication device inside the inner Faraday cage; (c) converting, by the second communication device, the optical signal received from the first communication device into an electrical signal, and transmitting the electrical signal to the offline computer; (d) performing, by the offline computer, the cryptographic operation request, and transmitting a result of the cryptographic operation request to the second communication device in an electrical signal; (e) converting, by the second communication device, the electrical signal received from the offline computer into an optical signal, and transmitting the optical signal to the first communication device; (f) converting, by the first communication device, the optical signal received from the second communication device into an electrical signal, and transmitting the electrical signal to the online computer; and (g) processing, by the online computer, the electrical signal received from the first communication device. 2. 
The method of claim 1, wherein the cryptographic operation comprises generating a public/private key pair, outputting the public key to the online computer, and storing the private key, or signing an electronic transaction request. 3. The method of claim 1, further comprising validating each incoming electrical and optical signal by the first communication device and the second communication device. 4. The method of claim 3, wherein validating each incoming electrical and optical signal is performed based on length, timing, and a unique identifier assigned to the request. 5. The method of claim 1, further comprising generating and appending a unique one-time machine authentication code to the request by the first communication device. 6. The method of claim 1, further comprising validating the electrical signal received from the second communication device prior to performing the cryptographic operation request by the offline computer in step (d). 7. The method of claim 1, further comprising determining, by the second communication device, whether the electrical signal received from the offline computer in step (e) is white listed, has arrived in order, is properly formatted, and is on time, before converting the electrical signal to an optical signal. 8. The method of claim 1, further comprising validating the electrical signal received from the first communication device prior to processing the electrical signal received from the second communication device by the online computer in step (e). 9. The method of claim 1, wherein the inner Faraday cage and the outer Faraday cage each include concrete walls reinforced with stainless steel rebar to reduce magnetic resonance and lined with isolated layers of copper sheet and Mu-metal to inhibit electromagnetic leakage. 10. 
A cryptographic system for securely storing and using private cryptographic keys, comprising: an online computer for receiving or originating a cryptographic operation request requiring a private key; a first communication device connected to the online computer for transmission of electrical signals therebetween; a second communication device connected to the first communication device for transmission of optical signals therebetween; an offline computer for performing the cryptographic operation to generate a result, said offline computer connected to the second communication device for transmission of electrical signals therebetween; an inner Faraday cage housing the offline computer and the second communication device; and an outer Faraday cage housing the online computer, the first communication device, and the inner Faraday cage in a nested arrangement; wherein the online computer transmits the cryptographic operation request to the offline computer and the offline computer transmits the result of the cryptographic operation to the online computer only across the first communication device and the second communication device. 11. The cryptographic system of claim 10, wherein the first communication device and the second communication device each include serial-optical converters for converting electrical signals to optical signals and optical signals to electrical signals. 12. The cryptographic system of claim 10, wherein the cryptographic operation comprises generating a public/private key pair, outputting the public key to the online computer, and storing the private key, or signing an electronic transaction request. 13. The cryptographic system of claim 10, wherein the first communication device and the second communication device are configured to validate each electrical signal and optical signal received by said device. 14. 
The cryptographic system of claim 13, wherein the online computer, the first communication device, the second communication device, and the offline computer are configured to validate each incoming electrical or optical signal. 15. The cryptographic system of claim 10, wherein the first communication device is configured to generate and append a unique one-time machine authentication code to the request received from the online computer. 16. The cryptographic system of claim 10, wherein the second communication device is configured to determine whether an electrical signal received from the offline computer is white listed, has arrived in order, is properly formatted, and is on time, before converting the electrical signal to an optical signal. 17. The cryptographic system of claim 10, wherein the inner Faraday cage and the outer Faraday cage each include concrete walls reinforced with stainless steel rebar to reduce magnetic resonance and lined with isolated layers of copper sheet and Mu-metal to inhibit electromagnetic leakage.
2,400
8,108
8,108
14,213,172
2,454
A pool or spa system includes networked pool or spa devices that can be dynamically configured with network addresses by a controller. The controller can transmit a device discovery request on a network and can receive a discovery response from pool or spa devices that require a network address. The system determines and assigns the network addresses for the pool or spa devices based on unique device identifiers associated with the responding pool or spa devices. The network addresses assigned to the pool or spa devices are transmitted to those devices to be used by the pool or spa devices to communicate with the controller over the network. The system can be used to discover and assign addresses to various types of pool or spa devices, such as pumps, underwater lights, chlorinators, water feature controllers, remote controllers, and/or other types of devices.
1. A pool or spa system including a plurality of components operatively coupled via a communications network supporting dynamic device discovery, the system comprising: a pool or spa; a plurality of slave devices, each of the plurality of slave devices being configured to perform one or more operations with respect to the pool or spa, each of the plurality of slave devices being un-configured and having a unique device identifier; and a master controller operatively coupled to the plurality of slave devices to form a network, the master controller being programmed to assign each of the slave devices a network address based on the unique identifier of each of the plurality of slave devices and in response to bidirectional communication between the master controller and the plurality of slave devices to configure the plurality of slave devices and enable addressed communication between the controller and the plurality of slave devices. 2. The system of claim 1, wherein the master controller broadcasts a device discovery request on the network requesting a response from the plurality of slave devices and receives, in response to the device discovery request, a response from a first device of the plurality of slave devices including the unique identifier associated with the first device. 3. The system of claim 2, wherein the master controller correlates the device identifier received from the first device with an available network address, assigns the network address to the first device, and transmits a message on the network that includes the device identifier and the network address. 4. The system of claim 3, wherein the first device receives the message, compares the device identifier in the message to the device identifier of the first device, and stores the network address as the network address of the first device based on a determination that the device identifier included in the message matches the device identifier of the first device. 5. 
The system of claim 1, wherein at least one of the plurality of slave devices does not retain the network address assigned by the master controller when the at least one of the plurality of slave devices is powered down. 6. The system of claim 1, wherein the master controller is programmed to periodically determine whether the network includes a slave device requiring configuration. 7. The system of claim 1, wherein the master controller maintains at least one table correlating the unique device identifier of each of the plurality of slave devices with the network address assigned to each of the plurality of slave devices. 8. The system of claim 1, further comprising a gateway device operatively coupled between at least one of the plurality of slave devices and the master controller, wherein the gateway communicates with the master controller on behalf of the at least one of the plurality of slave devices to facilitate assignment of the network address to the at least one of the plurality of slave devices. 9. The system of claim 1, wherein the plurality of slave devices include at least one of a pump, a filter, a sensor, or a heater. 10. 
A system for dynamic discovery of networked devices in a pool or spa system, the system comprising: a non-transitory computer-readable medium storing computer executable instructions for a process of dynamically discovering networked devices in a pool or spa system; a processing device programmed to execute the computer executable instructions to: transmit a broadcast message to networked devices in the pool or spa system, the message including a device discovery request; receive a response message from an un-configured pool or spa device in the pool or spa system, the response including a unique device identifier associated with the un-configured device; correlate the unique device identifier with a network address; and transmit the network address to the un-configured pool or spa device to transform the un-configured pool or spa device to a configured pool or spa device. 11. The system of claim 10, wherein the processing device transmits the broadcast message with a time period that defines a quantity of time after the broadcast message that the processing device waits for a response from the networked devices. 12. The system of claim 10, wherein the processing device receives the response message from an un-configured pool or spa device through a gateway device. 13. The system of claim 10, wherein the processing device correlates the unique device identifier with the network address in at least one table maintained by the processing device. 14. The system of claim 10, wherein the processing device transmits the network address to the un-configured pool or spa device via a message that includes the unique device identifier associated with the un-configured pool or spa device. 15. The system of claim 10, wherein the processing device is programmed to execute the computer executable instructions to periodically transmit the broadcast message to discover further un-configured pool or spa devices in the pool or spa system. 16. 
A method of dynamically discovering networked devices in a pool or spa system, the method comprising: transmitting a broadcast message to networked devices in the pool or spa system, the message including a device discovery request; receiving a response message from an un-configured pool or spa device in the pool or spa system, the response including a unique device identifier associated with the un-configured device; correlating the unique device identifier with a network address; and transmitting the network address to the un-configured pool or spa device to transform the un-configured pool or spa device to a configured pool or spa device. 17. The method of claim 16, wherein transmitting the broadcast message comprises transmitting the broadcast message with a time period that defines a quantity of time after the broadcast message that the processing device waits for a response from the networked devices. 18. The method of claim 16, wherein receiving the response message from an un-configured pool or spa device comprises receiving the response message through a gateway device. 19. The method of claim 16, wherein transmitting the network address to the un-configured pool or spa device comprises transmitting a message that includes the unique device identifier associated with the un-configured pool or spa device. 20. The method of claim 16, further comprising periodically transmitting the broadcast message to discover further un-configured pool or spa devices in the pool or spa system.
A pool or spa system includes networked pool or spa devices that can be dynamically configured with network address by a controller. The controller can transmit a device discovery request on a network and can receive a discovery response from pool or spa devices that require a network address. The system determines and assigns the network addresses for the pool or spa devices based on unique device identifiers associated with the responding pool or spa devices. The network addresses assigned to the pool or spa device are transmitted to the pool or spa device to be used by the pool or spa devices to communicate with the controller over the network. The system can be used to discover and assign addresses to various types of pool or spa devices, such as pumps, underwater lights, chlorinators, water feature controllers, remote controllers, and/or other types of devices.1. A pool or spa system including a plurality of components operatively coupled via a communications network supporting dynamic device discovery, the system comprising: a pool or spa; a plurality of slave devices, each of the plurality of slave devices being configured to perform one or more operations with respect to the pool or spa, each of the plurality of devices being un-configured and having a unique device identifier; and a master controller operative coupled to the plurality of slave devices to form a network, the master controller being programmed to assign each of the slave devices a network address based on the unique identifier of each of the plurality of slave devices and in response to bidirectional communication between the master controller and the plurality of slave devices to configure the plurality of slave devices and enable addressed communication between the controller and the plurality of slave devices. 2. 
The system of claim 1, wherein the master controller broadcasts a device discovery request on the network requesting a response from the plurality of slave devices and receives, in response to the device discovery request, a response from a first device of the plurality of slave devices including the unique identifier associated with the first device. 3. The system of claim 2, wherein the master controller correlates the device identifier received from the first device with an available network address, assigns the network address to the first device, and transmits a message on the network that includes the device identifier and the network address. 4. The system of claim 3, wherein the first device receives the message, compares the device identifier in the message to the device identifier of the first device, and stores the network address as the network address of the first device based on a determination that the device identifier included in the message matches the device identifier of the first device. 5. The system of claim 1, wherein at least one of the plurality of slave devices does not retain the network address assigned by the master controller when the at least one of the plurality of slave devices is powered down. 6. The system of claim 1, wherein the master controller is programmed to periodically determine whether the network includes a slave device requiring configuration. 7. The system of claim 1, wherein the master controller maintains at least one table correlating the unique device identifier of each of the plurality of slave devices with the network address assigned to each of the plurality of slave devices. 8. 
The system of claim 1, further comprising a gateway device operatively coupled between at least one of the plurality of slave devices and the master controller, wherein the gateway communicates with the master controller on behalf of the at least one of the plurality of slave devices to facilitate assignment of the network address to the at least one of the plurality of slave devices. 9. The system of claim 1, wherein the plurality of slave devices include at least one of a pump, a filter, a sensor, or a heater. 10. A system for dynamic discovery of networked devices in a pool or spa system, the system comprising: a non-transitory computer-readable medium storing computer executable instructions for a process of dynamically discovering networked devices in a pool or spa system; a processing device programmed to execute the computer executable instructions to: transmit a broadcast message to networked devices in the pool or spa system, the message including a device discovery request; receive a response message from an un-configured pool or spa device in the pool or spa system, the response including a unique device identifier associated with the un-configured device; correlate the unique device identifier with a network address; and transmit the network address to the un-configured pool or spa device to transform the un-configured pool or spa device to a configured pool or spa device. 11. The system of claim 10, wherein the processing device transmits the broadcast message with a time period that defines a quantity of time after the broadcast message that the processing device waits for a response from the networked devices. 12. The system of claim 10, wherein the processing device receives the response message from an un-configured pool or spa device through a gateway device. 13. The system of claim 10, wherein the processing device correlates the unique device identifier with the network address in at least one table maintained by processing device. 14. 
The system of claim 10, wherein the processing device transmits the network address to the un-configured pool or spa device via a message that includes the unique device identifier associated with the un-configured pool or spa device. 15. The system of claim 10, wherein the processing device is programmed to execute the computer executable instructions to periodically transmit the broadcast message to discover further un-configured pool or spa devices in the pool or spa system. 16. A method of dynamically discovering networked devices in a pool or spa system, the method comprising: transmitting a broadcast message to networked devices in the pool or spa system, the message including a device discovery request; receiving a response message from an un-configured pool or spa device in the pool or spa system, the response including a unique device identifier associated with the un-configured device; correlating the unique device identifier with a network address; and transmitting the network address to the un-configured pool or spa device to transform the un-configured pool or spa device to a configured pool or spa device. 17. The method of claim 16, wherein transmitting the broadcast message comprises transmitting the broadcast message with a time period that defines a quantity of time after the broadcast message that the processing device waits for a response from the networked devices. 18. The method of claim 16, wherein receiving the response message from an un-configured pool or spa device comprises receiving the response message through a gateway device. 19. The method of claim 16, wherein transmitting the network address to the un-configured pool or spa device comprises transmitting a message that includes the unique device identifier associated with the un-configured pool or spa device. 20. The method of claim 16, further comprising periodically transmitting the broadcast message to discover further un-configured pool or spa devices in the pool or spa system.
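The discovery handshake described in the claims above (broadcast a discovery request, receive a unique device identifier from an un-configured device, correlate the identifier with a free network address in a table, and broadcast the assignment back so only the matching device stores it) can be sketched in a few lines. This is an illustrative simulation, not the patented implementation; the class and method names (`SlaveDevice`, `MasterController`, `discover_and_configure`) are hypothetical.

```python
# Hypothetical sketch of the claimed master/slave address-assignment handshake.
class SlaveDevice:
    def __init__(self, device_id):
        self.device_id = device_id      # unique identifier (e.g. a serial number)
        self.network_address = None     # per claim 5: not retained across power-down

    def on_discovery_request(self):
        # An un-configured device answers the broadcast with its unique identifier.
        return self.device_id if self.network_address is None else None

    def on_assignment(self, device_id, address):
        # Store the address only when the broadcast assignment targets this device.
        if device_id == self.device_id:
            self.network_address = address


class MasterController:
    def __init__(self, address_pool):
        self.free_addresses = list(address_pool)
        self.table = {}                 # claim 7: device_id -> assigned address

    def discover_and_configure(self, devices):
        for dev in devices:
            device_id = dev.on_discovery_request()   # broadcast + collected response
            if device_id is None or device_id in self.table:
                continue
            address = self.free_addresses.pop(0)     # correlate with a free address
            self.table[device_id] = address
            for d in devices:                        # assignment message is broadcast
                d.on_assignment(device_id, address)


pump, heater = SlaveDevice("PUMP-01"), SlaveDevice("HEAT-07")
master = MasterController(address_pool=range(10, 20))
master.discover_and_configure([pump, heater])
print(master.table)                     # {'PUMP-01': 10, 'HEAT-07': 11}
print(pump.network_address)             # 10
```

Re-running `discover_and_configure` is idempotent for already-configured devices, which mirrors claim 6's periodic re-discovery of devices that still require configuration.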
2,400
8,109
8,109
15,273,278
2,426
Systems and methods of a media device are operable to perform a channel change operation. An exemplary embodiment receives a first data table from a data table server, wherein the first data table comprises first control information used to perform a channel change operation such that the media device reconfigures itself to change to a new channel from a currently presenting channel or to a new media content event from a currently presenting media content event. The exemplary embodiment later receives a second data table after initiation of the channel change operation, wherein the second data table is broadcast to the media device in one of a plurality of received broadcasting media content streams that has the new media content event, and wherein the second data table comprises second control information that corresponds to at least some of the first control information used to perform the channel change operation.
1. A method to perform a channel change operation at a media device, comprising: receiving a first data table at the media device from a data table server via a communication system that communicatively couples the media device with the data table server, wherein the first data table comprises first control information used to perform the channel change operation such that the media device reconfigures itself to change to one of a new channel from a currently presenting channel or to a new media content event from a currently presenting media content event; receiving a plurality of broadcasting media content streams at the media device, wherein the plurality of broadcasting media content streams are received at the media device after the first data table has been received from the data table server, and wherein one of the received broadcasting media content streams is providing the currently presenting media content event that is currently being presented to a user; initiating a channel change operation in response to a received user command, wherein the channel change operation is operable to cause the media device to change from the currently presenting channel to the new channel or to change from the currently presenting media content event to the new media content event, wherein the new channel is associated with the new media content event, and wherein the user command to initiate the channel change operation is received after the first data table has been received from the data table server; operating at least one of a tuner, a demultiplexer and a decoder of the media device using control instructions that are generated by the media device in response to the initiation of the channel change operation, wherein the generated control instructions are based on the first control information of the first data table that was received from the server; presenting the new media content event to the user after a completion of the channel change operation; receiving a 
second data table after initiation of the channel change operation, wherein the second data table is broadcast to the media device in one of the received broadcasting media content streams that has the new media content event, and wherein the second data table comprises second control information that corresponds to at least some of the first control information used to perform the channel change operation. 2. The method of claim 1, further comprising: storing the first data table received from the server in a memory medium of the media device. 3. The method of claim 1, wherein operating at least one of the tuner, the demultiplexer and the decoder of the media device using the control instructions that are generated by the media device in response to the initiation of the channel change operation comprises: generating first control instructions based on a first part of the first control information, wherein the first control instructions are used by the media device to perform a first portion of the channel change operation; performing the first portion of the channel change operation using the first control instructions; generating second control instructions based on a second part of the first control information, wherein the second control instructions are used by the media device to perform a second portion of the channel change operation after completion of the first portion of the channel change operation; attempting to perform the second portion of the channel change operation based on the second control instructions, wherein the second portion of the channel change operation cannot be completed based on the second control instructions; generating alternative second control instructions based on a second part of the second control information, wherein the second part of the second control information corresponds to the second part of the first control information; and performing the second portion of the channel change operation using the alternative second 
control instructions that are generated from the second part of the second control information. 4. The method of claim 1, wherein operating at least one of the tuner, the demultiplexer and the decoder of the media device using the control instructions that are generated by the media device in response to the initiation of the channel change operation comprises: generating tuner control instructions based on a first part of the first control information, wherein the tuner control instructions cause the tuner to change from a first one of the plurality of broadcasting media content streams having the currently presenting channel to a second one of the plurality of media content streams with the new channel such that the demultiplexer receives media content having the new media content therein. 5. The method of claim 1, wherein operating at least one of the tuner, the demultiplexer and the decoder of the media device using the control instructions that are generated by the media device in response to the initiation of the channel change operation comprises: generating demultiplexer control instructions based on the first control information, wherein the demultiplexer control instructions cause the demultiplexer to access data from received data packets having portions of the new media content therein. 6. The method of claim 5, further comprising: attempting to perform a demultiplexer portion of the channel change operation based on the demultiplexer control instructions; generating alternative demultiplexer control instructions based on the second control information, wherein the alternative demultiplexer control instructions are generated from the second control information and correspond to the demultiplexer control instructions that were generated from the first control information; and performing the demultiplexer portion of the channel change operation using the alternative demultiplexer control instructions. 7. 
The method of claim 1, wherein operating at least one of the tuner, the demultiplexer and the decoder of the media device using the control instructions that are generated by the media device in response to the initiation of the channel change operation comprises: generating decoder control instructions based on the first control information, wherein the decoder control instructions are used to process the media content event data. 8. The method of claim 7, further comprising: attempting to perform a decoder portion of the channel change operation based on the decoder control instructions; generating alternative decoder control instructions based on the second control information, wherein the alternative decoder control instructions generated from the second control information correspond to the decoder control instructions generated from the first control information; and performing the decoder portion of the channel change operation using the alternative decoder control instructions. 9. The method of claim 1, further comprising: generating an electronic program guide that is presented to the user, wherein the channel change operation that is initiated in response to the received user command is based on one of a selection of the new channel by the user via the presented EPG or a selection of the new media content event by the user via the presented EPG. 10. 
A media device, comprising: a communication system interface that is operable to receive a first data table at the media device from a data table server via a communication system that communicatively couples the media device with the data table server, wherein the first data table comprises first control information used to perform a channel change operation such that the media device reconfigures itself to change to one of a new channel from a currently presenting channel or to a new media content event from a currently presenting media content event; a memory communicatively coupled to the communication system interface that is operable to store the first data table received from the data table server; a tuner that is operable to receive a plurality of broadcasting media content streams, and that is further operable to select one of the plurality of broadcasting media content streams, wherein the plurality of broadcasting media content streams are received at the media device after the first data table has been received from the data table server, and wherein the selected broadcasting media content stream is providing the currently presenting media content event that is currently being presented to a user; a demultiplexer communicatively coupled to the tuner that is operable to receive the select broadcasting media content stream from the tuner, and that is further operable to access media content information from a plurality of data packets residing in the select broadcasting media content stream; a decoder communicatively coupled to the demultiplexer that is operable to receive the accessed media content information from the demultiplexer, and that is further operable to generate a stream of media content that is streamed to a media presentation system for presentation to the user; and a processor system communicatively coupled to at least the memory and controllably coupled to the tuner, the demultiplexer, and the decoder, wherein the processor system is 
operable to: initiate the channel change operation in response to a received user command, wherein the channel change operation is operable to cause the media device to change from the currently presenting channel to the new channel or to change from the currently presenting media content event to the new media content event, wherein the new channel is associated with the new media content event, and wherein the user command to initiate the channel change operation is received after the first data table has been received from the data table server; generate control instructions that operate the tuner, the demultiplexer and the decoder, wherein the control instructions are generated by the media device in response to an initiation of the channel change operation, wherein the generated control instructions are based on the first control information of the first data table that was received from the data table server; wherein the new media content event is presented to the user after a completion of the channel change operation, and wherein the tuner is further operable to receive a second data table after initiation of the channel change operation, wherein the second data table is broadcast to the media device in one of the received broadcasting media content streams that has the new media content event, and wherein the second data table comprises second control information that corresponds to at least some of the first control information used to perform the channel change operation. 11. The media device of claim 10, further comprising: a remote interface that is operable to receive a wireless signal from a remote control, wherein the wireless signal includes information that identifies one of the new channel or the new media content event for the channel change operation. 12. 
The media device of claim 11, wherein information in the received wireless signal received at the remote interface from the remote control is based on a user selection of one of the new channel or the new media content event that is made by the user via a presented electronic program guide. 13. The media device of claim 10, wherein the processor system that is operable to control operation of the tuner, the demultiplexer and the decoder using the control instructions that are generated in response to the initiation of the channel change operation is further operable to: generate first control instructions based on a first part of the first control information, wherein the first control instructions are used by the media device to perform a first portion of the channel change operation; perform the first portion of the channel change operation using the first control instructions; generate second control instructions based on a second part of the first control information, wherein the second control instructions are used by the media device to perform a second portion of the channel change operation after completion of the first portion of the channel change operation; attempt to perform the second portion of the channel change operation based on the second control instructions, wherein the second portion of the channel change operation cannot be completed based on the second control instructions; generate alternative second control instructions based on a second part of the second control information, wherein the second part of the second control information corresponds to the second part of the first control information; and perform the second portion of the channel change operation using the alternative second control instructions that are generated from the second part of the second control information. 14. 
The media device of claim 10, wherein the processor system that is operable to control operation of the tuner using the control instructions that are generated in response to the initiation of the channel change operation is further operable to: generate tuner control instructions based on a first part of the first control information, wherein the tuner control instructions cause the tuner to change from a first one of the plurality of broadcasting media content streams having the currently presenting channel to a second one of the plurality of media content streams with the new channel such that the demultiplexer receives media content having the new media content therein. 15. The media device of claim 10, wherein the processor system that is operable to control operation of the demultiplexer using the control instructions that are generated in response to the initiation of the channel change operation is further operable to: generate demultiplexer control instructions based on the first control information, wherein the demultiplexer control instructions cause the demultiplexer to access data from the received data packets having portions of the new media content therein. 16. The media device of claim 15, wherein the processor system is further operable to: attempt to perform a demultiplexer portion of the channel change operation based on the demultiplexer control instructions; generate alternative demultiplexer control instructions based on the second control information, wherein the alternative demultiplexer control instructions are generated from the second control information and correspond to the demultiplexer control instructions that were generated from the first control information; and perform the demultiplexer portion of the channel change operation using the alternative demultiplexer control instructions. 17. 
The media device of claim 10, wherein the processor system that is operable to control operation of the decoder using the control instructions that are generated in response to the initiation of the channel change operation is further operable to: generate decoder control instructions based on the first control information, wherein the decoder control instructions are used to process media content event data. 18. The media device of claim 17, wherein the processor system is further operable to: attempt to perform a decoder portion of the channel change operation based on the decoder control instructions; generate alternative decoder control instructions based on the second control information, wherein the alternative decoder control instructions generated from the second control information correspond to the decoder control instructions generated from the first control information; and perform the decoder portion of the channel change operation using the alternative decoder control instructions.
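The fallback pattern that runs through claims 3, 6, 8, 13, 16, and 18 (attempt each portion of the channel change using instructions generated from the pre-fetched first data table, and regenerate alternative instructions from the corresponding second control information in the in-stream second data table when a portion cannot complete) can be sketched as follows. This is an illustrative simulation, not the claimed implementation; the names (`PORTIONS`, `perform_portion`, `channel_change`) and the `{"valid": ...}` table layout are hypothetical.

```python
# Hypothetical sketch of the first-table/second-table fallback in the claims.
PORTIONS = ("tuner", "demultiplexer", "decoder")


def perform_portion(portion, control_info):
    # Stand-in for actually operating the hardware: a portion succeeds only
    # when the table carries control information for it that is marked valid.
    return control_info.get(portion, {}).get("valid", False)


def channel_change(first_table, second_table):
    log = []
    for portion in PORTIONS:
        if perform_portion(portion, first_table):
            log.append((portion, "first"))
        elif perform_portion(portion, second_table):
            # Alternative instructions generated from the corresponding
            # second control information of the in-stream second data table.
            log.append((portion, "second"))
        else:
            raise RuntimeError(f"{portion} portion failed with both tables")
    return log


# The demultiplexer entry in the first table is stale, so that portion
# falls back to the second (broadcast) data table.
first = {"tuner": {"valid": True},
         "demultiplexer": {"valid": False},
         "decoder": {"valid": True}}
second = {"demultiplexer": {"valid": True}}
print(channel_change(first, second))
# [('tuner', 'first'), ('demultiplexer', 'second'), ('decoder', 'first')]
```

The design point the claims make is latency: the first table is fetched ahead of time so the change can start immediately, while the second table, carried in the new stream itself, serves as the authoritative backstop for any portion the pre-fetched information can no longer complete.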
2,400
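The fallback behavior recited in the channel-change claims above (generate control instructions from the first data table; when a portion of the operation cannot be completed, regenerate that portion's instructions from the corresponding second control information) can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the function name, the dict-per-table representation, and the step callables are all hypothetical:

```python
def perform_channel_change(first_info, second_info, steps):
    """Run each channel-change step (tune, demux, decode) using control
    instructions built from the first data table; if a step fails, rebuild
    its instructions from the corresponding part of the second data table."""
    used = []  # records which table's control information each step used
    for name, run_step in steps:
        if run_step(first_info[name]):
            used.append((name, "first"))
        elif run_step(second_info[name]):
            # Fall back to the matching control information broadcast in
            # the stream that carries the new media content event.
            used.append((name, "second"))
        else:
            raise RuntimeError(f"channel change failed at step: {name}")
    return used

# Hypothetical example: the first table's demultiplexer PID is stale, so
# only that step falls back to the second table's control information.
first = {"tune": 101, "demux": 0x1FF, "decode": "h264"}
second = {"tune": 101, "demux": 0x200, "decode": "h264"}
steps = [
    ("tune",   lambda freq: freq == 101),
    ("demux",  lambda pid: pid == 0x200),   # only the updated PID works
    ("decode", lambda codec: codec == "h264"),
]
print(perform_channel_change(first, second, steps))
# -> [('tune', 'first'), ('demux', 'second'), ('decode', 'first')]
```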
8,110
8,110
14,217,672
2,481
A method for optically examining a route such as a track includes obtaining one or more images of a segment of a track from a camera mounted to a rail vehicle while the rail vehicle is moving along the track and selecting a benchmark visual profile of the segment of the track. The benchmark visual profile represents a designated layout of the track. The method also can include comparing the one or more images of the segment of the track with the benchmark visual profile of the track and identifying one or more differences between the one or more images and the benchmark visual profile as a misaligned segment of the track.
1. A method comprising: obtaining one or more images of a segment of a track from a camera mounted to a rail vehicle while the rail vehicle is moving along the track; selecting, with one or more computer processors, a benchmark visual profile of the segment of the track, the benchmark visual profile representing a layout of the track; comparing, with the one or more computer processors, the one or more images of the segment of the track with the benchmark visual profile of the segment of the track; and identifying, with the one or more computer processors, one or more differences between the one or more images and the benchmark visual profile as a misaligned segment of the track. 2. The method of claim 1, wherein the one or more images of the segment of the track are compared to the benchmark visual profile by mapping pixels of the one or more images to corresponding locations of the benchmark visual profile and determining if the pixels of the one or more images that represent the track are located in common locations as the track in the benchmark visual profile. 3. The method of claim 1, further comprising identifying portions of the one or more images that represent the track by measuring intensities of pixels in the one or more images and distinguishing the portions of the one or more images that represent the track from other portions of the one or more images based on the intensities of the pixels. 4. The method of claim 1, wherein the benchmark visual profile visually represents locations where the track is located prior to obtaining the one or more images. 5. The method of claim 1, further comprising measuring a distance between rails of the track by determining a number of pixels disposed between the rails in the one or more images. 6. The method of claim 5, further comprising identifying a switch in the segment of the track by identifying a change in the number of pixels disposed between the rails in the one or more images. 7. 
The method of claim 1, further comprising creating the benchmark visual profile from at least one image of the one or more images that are compared to the benchmark visual profile to identify the one or more differences. 8. The method of claim 1, further comprising comparing the one or more images of the segment of the track with one or more additional images of the segment of the track obtained by one or more other rail vehicles at one or more other times in order to identify degradation of the segment of the track. 9. The method of claim 1, wherein the one or more images of the segment of the track are obtained while the rail vehicle is traveling at an upper speed limit of the segment of the track. 10. A system comprising: a camera configured to be mounted to a rail vehicle and to obtain one or more images of a segment of a track while the rail vehicle is moving along the track; and one or more computer processors configured to select a benchmark visual profile of the segment of the track that represents a designated layout of the track, the one or more computer processors also configured to compare the one or more images of the segment of the track with the benchmark visual profile of the segment of the track to identify one or more differences between the one or more images and the benchmark visual profile as a misaligned segment of the track. 11. The system of claim 10, wherein the one or more computer processors are configured to compare the one or more images of the segment of the track to the benchmark visual profile by mapping pixels of the one or more images to corresponding locations of the benchmark visual profile and determining if the pixels of the one or more images that represent the track are located in common locations as the track in the benchmark visual profile. 12. 
The system of claim 10, wherein the one or more computer processors are configured to identify portions of the one or more images that represent the track by measuring intensities of pixels in the one or more images and to distinguish the portions of the one or more images that represent the track from other portions of the one or more images based on the intensities of the pixels. 13. The system of claim 10, wherein the benchmark visual profile visually represents locations where the track is located prior to obtaining the one or more images. 14. The system of claim 10, wherein the one or more computer processors also are configured to measure a distance between rails of the track by determining a number of pixels disposed between the rails in the one or more images. 15. The system of claim 14, wherein the one or more computer processors are configured to identify a switch in the segment of the track by identifying a change in the number of pixels disposed between the rails in the one or more images. 16. The system of claim 10, wherein the one or more computer processors are configured to create the benchmark visual profile from at least one image of the one or more images that are compared to the benchmark visual profile to identify the one or more differences. 17. The system of claim 10, wherein the camera is configured to obtain the one or more images of the segment of the track and the one or more computer processors are configured to identify the misaligned segment of the track while the rail vehicle is traveling at an upper speed limit of the segment of the track. 18. 
A method comprising: obtaining plural first images of an upcoming segment of a route with one or more cameras on a vehicle that is moving along the route; examining the first images with one or more computer processors to identify a foreign object on or near the upcoming segment of the route; identifying one or more differences between the first images with the one or more processors; determining if the foreign object is a transitory object or a persistent object based on the differences between the first images that are identified; and implementing one or more mitigating actions responsive to determining if the foreign object is the transitory object or the persistent object. 19. The method of claim 18, further comprising increasing a magnification level of the one or more cameras to zoom in on the foreign object and obtaining one or more second images of the foreign object, wherein the foreign object is determined to be the persistent object responsive to a comparison between the first images and the one or more second images. 20. The method of claim 18, wherein the first images are obtained at different times, and wherein implementing the one or more mitigating actions includes prioritizing the one or more mitigating actions based on the differences in the first images obtained at the different times.
A method for optically examining a route such as a track includes obtaining one or more images of a segment of a track from a camera mounted to a rail vehicle while the rail vehicle is moving along the track and selecting a benchmark visual profile of the segment of the track. The benchmark visual profile represents a designated layout of the track. The method also can include comparing the one or more images of the segment of the track with the benchmark visual profile of the track and identifying one or more differences between the one or more images and the benchmark visual profile as a misaligned segment of the track.1. A method comprising: obtaining one or more images of a segment of a track from a camera mounted to a rail vehicle while the rail vehicle is moving along the track; selecting, with one or more computer processors, a benchmark visual profile of the segment of the track, the benchmark visual profile representing a layout of the track; comparing, with the one or more computer processors, the one or more images of the segment of the track with the benchmark visual profile of the segment of the track; and identifying, with the one or more computer processors, one or more differences between the one or more images and the benchmark visual profile as a misaligned segment of the track. 2. The method of claim 1, wherein the one or more images of the segment of the track are compared to the benchmark visual profile by mapping pixels of the one or more images to corresponding locations of the benchmark visual profile and determining if the pixels of the one or more images that represent the track are located in common locations as the track in the benchmark visual profile. 3. 
The method of claim 1, further comprising identifying portions of the one or more images that represent the track by measuring intensities of pixels in the one or more images and distinguishing the portions of the one or more images that represent the track from other portions of the one or more images based on the intensities of the pixels. 4. The method of claim 1, wherein the benchmark visual profile visually represents locations where the track is located prior to obtaining the one or more images. 5. The method of claim 1, further comprising measuring a distance between rails of the track by determining a number of pixels disposed between the rails in the one or more images. 6. The method of claim 5, further comprising identifying a switch in the segment of the track by identifying a change in the number of pixels disposed between the rails in the one or more images. 7. The method of claim 1, further comprising creating the benchmark visual profile from at least one image of the one or more images that are compared to the benchmark visual profile to identify the one or more differences. 8. The method of claim 1, further comprising comparing the one or more images of the segment of the track with one or more additional images of the segment of the track obtained by one or more other rail vehicles at one or more other times in order to identify degradation of the segment of the track. 9. The method of claim 1, wherein the one or more images of the segment of the track are obtained while the rail vehicle is traveling at an upper speed limit of the segment of the track. 10. 
A system comprising: a camera configured to be mounted to a rail vehicle and to obtain one or more images of a segment of a track while the rail vehicle is moving along the track; and one or more computer processors configured to select a benchmark visual profile of the segment of the track that represents a designated layout of the track, the one or more computer processors also configured to compare the one or more images of the segment of the track with the benchmark visual profile of the segment of the track to identify one or more differences between the one or more images and the benchmark visual profile as a misaligned segment of the track. 11. The system of claim 10, wherein the one or more computer processors are configured to compare the one or more images of the segment of the track to the benchmark visual profile by mapping pixels of the one or more images to corresponding locations of the benchmark visual profile and determining if the pixels of the one or more images that represent the track are located in common locations as the track in the benchmark visual profile. 12. The system of claim 10, wherein the one or more computer processors are configured to identify portions of the one or more images that represent the track by measuring intensities of pixels in the one or more images and to distinguish the portions of the one or more images that represent the track from other portions of the one or more images based on the intensities of the pixels. 13. The system of claim 10, wherein the benchmark visual profile visually represents locations where the track is located prior to obtaining the one or more images. 14. The system of claim 10, wherein the one or more computer processors also are configured to measure a distance between rails of the track by determining a number of pixels disposed between the rails in the one or more images. 15. 
The system of claim 14, wherein the one or more computer processors are configured to identify a switch in the segment of the track by identifying a change in the number of pixels disposed between the rails in the one or more images. 16. The system of claim 10, wherein the one or more computer processors are configured to create the benchmark visual profile from at least one image of the one or more images that are compared to the benchmark visual profile to identify the one or more differences. 17. The system of claim 10, wherein the camera is configured to obtain the one or more images of the segment of the track and the one or more computer processors are configured to identify the misaligned segment of the track while the rail vehicle is traveling at an upper speed limit of the segment of the track. 18. A method comprising: obtaining plural first images of an upcoming segment of a route with one or more cameras on a vehicle that is moving along the route; examining the first images with one or more computer processors to identify a foreign object on or near the upcoming segment of the route; identifying one or more differences between the first images with the one or more processors; determining if the foreign object is a transitory object or a persistent object based on the differences between the first images that are identified; and implementing one or more mitigating actions responsive to determining if the foreign object is the transitory object or the persistent object. 19. The method of claim 18, further comprising increasing a magnification level of the one or more cameras to zoom in on the foreign object and obtaining one or more second images of the foreign object, wherein the foreign object is determined to be the persistent object responsive to a comparison between the first images and the one or more second images. 20. 
The method of claim 18, wherein the first images are obtained at different times, and wherein implementing the one or more mitigating actions includes prioritizing the one or more mitigating actions based on the differences in the first images obtained at the different times.
2,400
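The pixel-based measurements in claims 3, 5, and 6 of the track-examination record above (distinguish rail pixels by intensity, measure the rail gap as a pixel count, and flag a possible switch when that count changes) could be sketched as below. The intensity threshold, single-row image format, and change tolerance are illustrative assumptions, not values from the patent:

```python
def rail_gap_pixels(row, intensity_threshold=200):
    # Claim 3: rail pixels are distinguished from the rest of the image
    # by their intensity (bright rail heads assumed here).
    rail_cols = [i for i, v in enumerate(row) if v >= intensity_threshold]
    if len(rail_cols) < 2:
        return None  # rails not visible in this row
    # Claim 5: the distance between rails is the number of pixels
    # disposed between them.
    return rail_cols[-1] - rail_cols[0] - 1

def has_gauge_change(rows, tolerance=2):
    # Claim 6: a switch shows up as a change in the inter-rail pixel count
    # from one image row to the next.
    gaps = [rail_gap_pixels(r) for r in rows]
    return any(a is not None and b is not None and abs(a - b) > tolerance
               for a, b in zip(gaps, gaps[1:]))

plain = [0] * 16
plain[2] = plain[9] = 255        # rails at columns 2 and 9 -> gap of 6
switch = [0] * 16
switch[2] = switch[14] = 255     # rail diverges to column 14 -> gap of 11
print(rail_gap_pixels(plain))                    # -> 6
print(has_gauge_change([plain, plain, switch]))  # -> True
```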
8,111
8,111
15,225,040
2,483
A vehicle assembly includes a side view mirror housing mountable to a vehicle exterior. A first LIDAR sensor is disposed in the side view mirror housing, has a first field of view, and is pointed in a first direction. A second LIDAR sensor is disposed in the side view mirror housing, has a second field of view, and is pointed in a second direction opposite the first direction. A camera is also disposed in the side view mirror housing, and the camera is spaced from the second LIDAR sensor. The camera has a third field of view and is pointed in the second direction.
1. A vehicle assembly comprising: a side view mirror housing mountable to a vehicle exterior; a first LIDAR sensor disposed in the side view mirror housing, the first LIDAR sensor having a first field of view and pointed in a first direction; a second LIDAR sensor disposed in the side view mirror housing, the second LIDAR sensor having a second field of view and pointed in a second direction opposite the first direction; and a camera disposed in the side view mirror housing, spaced from the second LIDAR sensor, the camera having a third field of view and pointed in the second direction. 2. The vehicle assembly of claim 1, wherein the side view mirror housing includes a front-facing side and a rear-facing side and wherein the first LIDAR sensor is disposed on the front-facing side and wherein the second LIDAR sensor and the camera are disposed on the rear-facing side. 3. The vehicle assembly of claim 1, wherein the third field of view of the camera at least partially overlaps the second field of view of the second LIDAR sensor and does not overlap the first field of view of the first LIDAR sensor. 4. The vehicle assembly of claim 1, further comprising a display screen and a processor programmed to receive image data from the camera and output at least part of the received image data to the display screen. 5. The vehicle assembly of claim 1, wherein the third field of view of the camera is adjustable relative to the side view mirror housing. 6. The vehicle assembly of claim 5 further comprising a processor programmed to receive a field of view adjustment request and adjust the third field of view according to the received field of view adjustment request. 7. The vehicle assembly of claim 6, wherein adjusting the third field of view according to the received adjustment request includes adjusting a position of the camera relative to the side view mirror housing. 8. 
The vehicle assembly of claim 7, wherein adjusting the position of the camera relative to the side view mirror housing includes linearly moving the camera in one of the first direction and the second direction. 9. The vehicle assembly of claim 6, wherein the first LIDAR sensor and the second LIDAR sensor are fixed relative to the side view mirror housing. 10. The vehicle assembly of claim 1, further comprising a processor programmed to: receive data from the first and the second LIDAR sensors; and create a three dimensional model of an area surrounding the side view mirror housing in accordance with the first field of view and the second field of view. 11. The vehicle assembly of claim 1, wherein the side view mirror housing further includes at least one exterior surface, and wherein at least one of the first LIDAR sensor, the second LIDAR sensor, and the camera is flush with the at least one exterior surface of the side view mirror housing. 12. A method, comprising: receiving data from a first LIDAR sensor disposed in a side view mirror housing of an autonomous vehicle, the first LIDAR sensor having a first field of view and pointed in a first direction; receiving data from a second LIDAR sensor disposed in the side view mirror housing, the second LIDAR sensor having a second field of view and pointed in a second direction opposite the first direction; generating a three dimensional model of an area surrounding the side view mirror housing in accordance with the first field of view and the second field of view; and controlling the autonomous vehicle according to the three dimensional model generated. 13. The method of claim 12, wherein the three dimensional model has an angle of view greater than 180 degrees. 14. The method of claim 12, wherein the first field of view and the second field of view overlap. 15. 
The method of claim 12, further comprising: receiving a field of view adjustment request; and adjusting a third field of view of a camera disposed in the side view mirror housing in accordance with the received field of view adjustment request. 16. The method of claim 15, wherein adjusting the third field of view of the camera includes outputting a signal to a camera actuator. 17. The method of claim 15, further comprising: receiving image data from the camera; and outputting at least part of the received image data to a display screen located in the autonomous vehicle. 18. A vehicle assembly comprising: a side view mirror housing mountable to a vehicle exterior; a first LIDAR sensor disposed in the side view mirror housing, the first LIDAR sensor having a first field of view and pointed in a first direction; a second LIDAR sensor disposed in the side view mirror housing, the second LIDAR sensor having a second field of view and pointed in a second direction opposite the first direction; a camera disposed in the side view mirror housing, spaced from the second LIDAR sensor, the camera having a third field of view and pointed in the second direction; a display screen; and a processor programmed to receive image data from the camera and output at least part of the received image data to the display screen, wherein the processor is programmed to: receive data from the first and the second LIDAR sensors; and create a three dimensional model of an area surrounding the side view mirror housing in accordance with the first field of view and the second field of view. 19. The vehicle assembly of claim 18, wherein the side view mirror housing includes a front-facing side and a rear-facing side and wherein the first LIDAR sensor is disposed on the front-facing side and wherein the second LIDAR sensor and the camera are disposed on the rear-facing side. 20. 
The vehicle assembly of claim 18, wherein the processor is further programmed to receive a field of view adjustment request and adjust the third field of view according to the received field of view adjustment request.
A vehicle assembly includes a side view mirror housing mountable to a vehicle exterior. A first LIDAR sensor is disposed in the side view mirror housing, has a first field of view, and is pointed in a first direction. A second LIDAR sensor is disposed in the side view mirror housing, has a second field of view, and is pointed in a second direction opposite the first direction. A camera is also disposed in the side view mirror housing, and the camera is spaced from the second LIDAR sensor. The camera has a third field of view and is pointed in the second direction.1. A vehicle assembly comprising: a side view mirror housing mountable to a vehicle exterior; a first LIDAR sensor disposed in the side view mirror housing, the first LIDAR sensor having a first field of view and pointed in a first direction; a second LIDAR sensor disposed in the side view mirror housing, the second LIDAR sensor having a second field of view and pointed in a second direction opposite the first direction; and a camera disposed in the side view mirror housing, spaced from the second LIDAR sensor, the camera having a third field of view and pointed in the second direction. 2. The vehicle assembly of claim 1, wherein the side view mirror housing includes a front-facing side and a rear-facing side and wherein the first LIDAR sensor is disposed on the front-facing side and wherein the second LIDAR sensor and the camera are disposed on the rear-facing side. 3. The vehicle assembly of claim 1, wherein the third field of view of the camera at least partially overlaps the second field of view of the second LIDAR sensor and does not overlap the first field of view of the first LIDAR sensor. 4. The vehicle assembly of claim 1, further comprising a display screen and a processor programmed to receive image data from the camera and output at least part of the received image data to the display screen. 5. 
The vehicle assembly of claim 1, wherein the third field of view of the camera is adjustable relative to the side view mirror housing. 6. The vehicle assembly of claim 5 further comprising a processor programmed to receive a field of view adjustment request and adjust the third field of view according to the received field of view adjustment request. 7. The vehicle assembly of claim 6, wherein adjusting the third field of view according to the received adjustment request includes adjusting a position of the camera relative to the side view mirror housing. 8. The vehicle assembly of claim 7, wherein adjusting the position of the camera relative to the side view mirror housing includes linearly moving the camera in one of the first direction and the second direction. 9. The vehicle assembly of claim 6, wherein the first LIDAR sensor and the second LIDAR sensor are fixed relative to the side view mirror housing. 10. The vehicle assembly of claim 1, further comprising a processor programmed to: receive data from the first and the second LIDAR sensors; and create a three dimensional model of an area surrounding the side view mirror housing in accordance with the first field of view and the second field of view. 11. The vehicle assembly of claim 1, wherein the side view mirror housing further includes at least one exterior surface, and wherein at least one of the first LIDAR sensor, the second LIDAR sensor, and the camera is flush with the at least one exterior surface of the side view mirror housing. 12. 
A method, comprising: receiving data from a first LIDAR sensor disposed in a side view mirror housing of an autonomous vehicle, the first LIDAR sensor having a first field of view and pointed in a first direction; receiving data from a second LIDAR sensor disposed in the side view mirror housing, the second LIDAR sensor having a second field of view and pointed in a second direction opposite the first direction; generating a three dimensional model of an area surrounding the side view mirror housing in accordance with the first field of view and the second field of view; and controlling the autonomous vehicle according to the three dimensional model generated. 13. The method of claim 12, wherein the three dimensional model has an angle of view greater than 180 degrees. 14. The method of claim 12, wherein the first field of view and the second field of view overlap. 15. The method of claim 12, further comprising: receiving a field of view adjustment request; and adjusting a third field of view of a camera disposed in the side view mirror housing in accordance with the received field of view adjustment request. 16. The method of claim 15, wherein adjusting the third field of view of the camera includes outputting a signal to a camera actuator. 17. The method of claim 15, further comprising: receiving image data from the camera; and outputting at least part of the received image data to a display screen located in the autonomous vehicle. 18. 
A vehicle assembly comprising: a side view mirror housing mountable to a vehicle exterior; a first LIDAR sensor disposed in the side view mirror housing, the first LIDAR sensor having a first field of view and pointed in a first direction; a second LIDAR sensor disposed in the side view mirror housing, the second LIDAR sensor having a second field of view and pointed in a second direction opposite the first direction; a camera disposed in the side view mirror housing, spaced from the second LIDAR sensor, the camera having a third field of view and pointed in the second direction; a display screen; and a processor programmed to receive image data from the camera and output at least part of the received image data to the display screen, wherein the processor is programmed to: receive data from the first and the second LIDAR sensors; and create a three dimensional model of an area surrounding the side view mirror housing in accordance with the first field of view and the second field of view. 19. The vehicle assembly of claim 18, wherein the side view mirror housing includes a front-facing side and a rear-facing side and wherein the first LIDAR sensor is disposed on the front-facing side and wherein the second LIDAR sensor and the camera are disposed on the rear-facing side. 20. The vehicle assembly of claim 18, wherein the processor is further programmed to receive a field of view adjustment request and adjust the third field of view according to the received field of view adjustment request.
2,400
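Claim 13 of the LIDAR record above requires the merged three dimensional model to have an angle of view greater than 180 degrees, with overlapping fields of view allowed (claim 14). That coverage check can be sketched with a small interval-union routine; the heading convention (degrees from a fixed reference, intervals that do not wrap past 360) and the example field-of-view values are assumptions for illustration only:

```python
def angular_coverage(intervals):
    """Total degrees covered by a set of sensor fields of view, each a
    (start, end) heading interval; overlapping spans are merged so the
    overlap between two sensors is not counted twice."""
    total, cur_start, cur_end = 0, None, None
    for s, e in sorted(intervals):
        if cur_end is None or s > cur_end:
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = s, e      # start a new disjoint span
        else:
            cur_end = max(cur_end, e)      # merge overlapping span
    if cur_end is not None:
        total += cur_end - cur_start
    return total

# Hypothetical forward-facing and overlapping rear-facing fields of view:
fov = angular_coverage([(0, 120), (100, 300)])
print(fov)        # -> 300
print(fov > 180)  # model spans more than 180 degrees -> True
```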
8,112
8,112
14,923,112
2,459
A computer-implemented method is provided for predicting cloud enablement from storage and data metrics harnessed from across the stack. The computer-implemented method includes identifying a corpus of data to be classified, and configuring at least one access threshold and at least one sensitivity threshold. The computer-implemented method also includes classifying at least a portion of the data within the corpus based on the at least one access threshold and the at least one sensitivity threshold. Finally, the computer-implemented method includes outputting a model, based on the classification, that identifies at least a portion of the data for migration for enabling a hybrid cloud environment.
1. A computer-implemented method, comprising: identifying a corpus of data to be classified; configuring at least one access threshold and at least one sensitivity threshold; classifying at least a portion of the data within the corpus based on the at least one access threshold and the at least one sensitivity threshold; and outputting a model, based on the classification, that identifies at least a portion of the data for migration for enabling a hybrid cloud environment. 2. The computer-implemented method of claim 1, wherein the access threshold includes a response time threshold. 3. The computer-implemented method of claim 1, wherein the access threshold includes an access rate threshold. 4. The computer-implemented method of claim 3, wherein the access rate threshold is measured in units of I/O operations performed per stored gigabyte per second. 5. The computer-implemented method of claim 3, wherein data sensitivity information and data access information for the corpus of data is obtained from one or more of an application stack, a data stack, and an infrastructure stack. 6. The computer-implemented method of claim 3, wherein data sensitivity information and data access information for the corpus of data is obtained from an application stack, a data stack, and an infrastructure stack. 7. The computer-implemented method of claim 6, wherein only a portion of the data from the corpus of data is classified, and a result of the classification is utilized to predict a classification of a remainder of the data. 8. The computer-implemented method of claim 6, wherein the model includes at least four segments of data, and the segment of the data identified for the migration includes cold and not sensitive data of the corpus of data. 9. 
A computer program product for predicting cloud enablement from storage and data metrics, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to: identify, by the processor, a corpus of data to be classified; configure, by the processor, at least one access threshold and at least one sensitivity threshold; classify, by the processor, at least a portion of the data within the corpus based on the at least one access threshold and the at least one sensitivity threshold; and output, by the processor, a model, based on the classification, that identifies at least a portion of the data for migration for enabling a hybrid cloud environment. 10. The computer program product of claim 9, wherein the access threshold includes a response time threshold. 11. The computer program product of claim 9, wherein the access threshold includes an access rate threshold. 12. The computer program product of claim 11, wherein the access rate threshold is measured in units of I/O operations performed per stored gigabyte per second. 13. The computer program product of claim 11, wherein data sensitivity information and data access information for the corpus of data is obtained from one or more of an application stack, a data stack, and an infrastructure stack. 14. The computer program product of claim 11, wherein data sensitivity information and data access information for the corpus of data is obtained from an application stack, a data stack, and an infrastructure stack. 15. The computer program product of claim 14, wherein only a portion of the data from the corpus of data is classified, and a result of the classification is utilized to predict a classification of a remainder of the data. 16. 
The computer program product of claim 14, wherein the model includes at least four segments of data, and the segment of the data identified for the migration includes cold and not sensitive data of the corpus of data. 17. A system, comprising: a processor and logic integrated with and/or executable by the processor, the logic being configured to: identify a corpus of data to be classified; configure at least one access threshold and at least one sensitivity threshold; classify at least a portion of the data within the corpus based on the at least one access threshold and the at least one sensitivity threshold; and output a model, based on the classification, that identifies at least a portion of the data for migration for enabling a hybrid cloud environment. 18. The system of claim 17, wherein the access threshold includes a response time threshold. 19. The system of claim 17, wherein the access threshold includes an access rate threshold. 20. The system of claim 19, wherein the access rate threshold is measured in units of I/O operations performed per stored gigabyte per second.
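The classification in claims 1, 4, and 8 above can be sketched as a simple four-segment model: items are split into hot/cold by an access-rate threshold (I/O operations per stored gigabyte per second, per claim 4) and into sensitive/not-sensitive by a sensitivity threshold, with the cold-and-not-sensitive segment identified for migration (claim 8). All names, threshold values, and sample data below are illustrative assumptions, not from the application itself.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    access_rate: float   # I/O operations per stored gigabyte per second (claim 4)
    sensitivity: float   # illustrative 0..1 sensitivity score

def classify(items, access_threshold, sensitivity_threshold):
    """Place each item into one of four segments; cold + not-sensitive
    items are the ones identified for hybrid-cloud migration (claim 8)."""
    model = {"hot_sensitive": [], "hot_not_sensitive": [],
             "cold_sensitive": [], "cold_not_sensitive": []}
    for item in items:
        temp = "hot" if item.access_rate >= access_threshold else "cold"
        sens = "sensitive" if item.sensitivity >= sensitivity_threshold else "not_sensitive"
        model[f"{temp}_{sens}"].append(item.name)
    return model

corpus = [
    DataItem("logs-2019", access_rate=0.01, sensitivity=0.1),
    DataItem("payroll-db", access_rate=2.5, sensitivity=0.9),
    DataItem("web-cache", access_rate=5.0, sensitivity=0.2),
]
model = classify(corpus, access_threshold=1.0, sensitivity_threshold=0.5)
print(model["cold_not_sensitive"])  # segment identified for migration
```

Claim 7's variant, where only a sample is classified and the rest is predicted, would replace the exhaustive loop with a classifier trained on the sampled segment labels.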
2,400
8,113
8,113
14,865,914
2,439
Credentialing systems, methods, and mediums. A method includes receiving, by a credentialing system, an access code and a device location from a mobile device. The device location indicates the current geographic location of the mobile device. The method includes comparing the received access code to a stored site code. The method includes, when the received access code matches a stored site code, determining whether the device location corresponds to a site location of a target system associated with the site code, and determining whether the access code is received during a valid access period associated with the site code. The method includes, when the received access code matches the stored site code, the device location corresponds to the site location, and the access code is received during the valid access period, then granting access for the mobile device to the target system.
1. A method performed by a credentialing system, comprising: receiving, by the credentialing system, an access code and a device location from a mobile device, wherein the device location indicates the current geographic location of the mobile device; comparing the received access code to a stored site code; when the received access code matches a stored site code: determining whether the device location corresponds to a site location of a target system associated with the site code, and determining whether the access code is received during a valid access period associated with the site code; and when the received access code matches the stored site code, the device location corresponds to the site location, and the access code is received during the valid access period, then granting access for the mobile device to the target system. 2. The method of claim 1, further comprising: receiving, by the credentialing system, a request for a location-based credential for access to the target system at a target site, wherein the request includes an identification of the target system, an identification of the target site, a level of access required on the target system, and the valid access period; generating and storing the site code by the credentialing system; and transmitting the site code to a user. 3. The method of claim 1, further comprising: determining the site location corresponding to the target site, wherein the site location identifies the geographic location of a target site corresponding to the target system. 4. The method of claim 1, wherein the device location corresponds to the site location if the device location is within a predetermined distance threshold of the site location. 5. The method of claim 1, wherein the granted access corresponds to a level of access specified in a request for a location-based credential for access to the target system at a target site. 6. 
The method of claim 1, wherein the credentialing system revokes the granted access at an expiration of the valid access period. 7. The method of claim 1, wherein the credentialing system does not receive a username or password with the received access code. 8. A credentialing system, comprising: a storage device comprising a credentialing application; an accessible memory comprising instructions of the credentialing application; and a processor configured to execute the instructions of the credentialing application to: receive an access code and a device location from a mobile device, wherein the device location indicates the current geographic location of the mobile device; compare the received access code to a stored site code; when the received access code matches a stored site code: determine whether the device location corresponds to a site location of a target system associated with the site code, and determine whether the access code is received during a valid access period associated with the site code; and when the received access code matches the stored site code, the device location corresponds to the site location, and the access code is received during the valid access period, then grant access for the mobile device to the target system. 9. The credentialing system of claim 8, wherein the processor is further configured to execute the instructions of the credentialing application to: receive a request for a location-based credential for access to the target system at a target site, wherein the request includes an identification of the target system, an identification of the target site, a level of access required on the target system, and the valid access period; generate and store the site code by the credentialing system; and transmit the site code to a user. 10. 
The credentialing system of claim 8, wherein the processor is further configured to execute the instructions of the credentialing application to determine the site location corresponding to the target site, wherein the site location identifies the geographic location of a target site corresponding to the target system. 11. The credentialing system of claim 8, wherein the device location corresponds to the site location if the device location is within a predetermined distance threshold of the site location. 12. The credentialing system of claim 8, wherein the granted access corresponds to a level of access specified in a request for a location-based credential for access to the target system at a target site. 13. The credentialing system of claim 8, wherein the credentialing system revokes the granted access at an expiration of the valid access period. 14. The credentialing system of claim 8, wherein the credentialing system does not receive a username or password with the received access code. 15. A non-transitory computer-readable medium encoded with executable instructions that, when executed, cause one or more data processing systems to: receive an access code and a device location from a mobile device, wherein the device location indicates the current geographic location of the mobile device; compare the received access code to a stored site code; when the received access code matches a stored site code: determine whether the device location corresponds to a site location of a target system associated with the site code, and determine whether the access code is received during a valid access period associated with the site code; and when the received access code matches the stored site code, the device location corresponds to the site location, and the access code is received during the valid access period, then grant access for the mobile device to the target system. 16. 
The computer-readable medium of claim 15, wherein the computer-readable medium is further encoded with executable instructions that, when executed, cause one or more data processing systems to: receive a request for a location-based credential for access to the target system at a target site, wherein the request includes an identification of the target system, an identification of the target site, a level of access required on the target system, and the valid access period; generate and store the site code by the credentialing system; and transmit the site code to a user. 17. The computer-readable medium of claim 15, wherein the computer-readable medium is further encoded with executable instructions that, when executed, cause one or more data processing systems to determine the site location corresponding to the target site, wherein the site location identifies the geographic location of a target site corresponding to the target system. 18. The computer-readable medium of claim 15, wherein the device location corresponds to the site location if the device location is within a predetermined distance threshold of the site location. 19. The computer-readable medium of claim 15, wherein the granted access corresponds to a level of access specified in a request for a location-based credential for access to the target system at a target site. 20. The computer-readable medium of claim 15, wherein the credentialing system revokes the granted access at an expiration of the valid access period.
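The grant decision in claim 1 above combines three checks: the access code matches a stored site code, the device location is within a distance threshold of the site location (claim 4), and the request arrives during the valid access period. A minimal sketch of that decision follows; the field names, the one-kilometer default, and the flat-earth distance approximation are all illustrative assumptions, not the application's actual design.

```python
import math
from datetime import datetime

def within_distance(loc_a, loc_b, threshold_km):
    """Approximate distance between (lat, lon) pairs in km.
    An equirectangular approximation, adequate at site-scale ranges."""
    lat1, lon1 = loc_a
    lat2, lon2 = loc_b
    km_per_deg = 111.0
    dx = (lon2 - lon1) * km_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * km_per_deg
    return math.hypot(dx, dy) <= threshold_km

def grant_access(access_code, device_location, site, now, threshold_km=1.0):
    """Grant only when all three conditions of claim 1 hold."""
    if access_code != site["site_code"]:
        return False  # received access code does not match the stored site code
    if not within_distance(device_location, site["site_location"], threshold_km):
        return False  # device location does not correspond to the site location
    # access code must be received during the valid access period
    return site["valid_from"] <= now <= site["valid_until"]

site = {
    "site_code": "PLANT-7-ABC123",          # hypothetical stored site code
    "site_location": (52.52, 13.405),
    "valid_from": datetime(2024, 5, 1, 8, 0),
    "valid_until": datetime(2024, 5, 1, 17, 0),
}
ok = grant_access("PLANT-7-ABC123", (52.521, 13.406), site,
                  datetime(2024, 5, 1, 9, 30))
```

Note that, per claim 7, no username or password accompanies the access code; the location and time window stand in for those factors.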
2,400
8,114
8,114
14,389,527
2,491
A method for accessing content of encrypted data item(s) by a terminal device operating in a digital environment, according to which before the data item is being accessed by the terminal device, it is modified after being intercepted if found to be encrypted. The wrapper of the data item is modified or replaced by embedding a URL with a unique identifier and a message into the wrapper of the data item. If a supported terminal device attempts to access the modified data item, the client application natively consumes the data from the modified data item and ignores its wrapper. If not, the message and the URL are displayed on the terminal device and the user browses the URL. Then after authentication, a web server locates the modified data item using the unique identifier, retrieves and decrypts the modified item and converts the decrypted modified data item to a format that can be consumed by the browser. Then, if the user has permission, the user can view the data item by rendering it to the browser on the terminal device.
1. A method for accessing content of encrypted data item(s) by a terminal device operating in a digital environment, comprising the steps of: a) before said data item is being accessed by said terminal device, a.1) intercepting said data item; a.2) determining whether said data item is encrypted; a.3) if found to be encrypted, modifying or replacing the wrapper of said data item by embedding a URL having a unique identifier and a message into the wrapper of said data item; b) in response to an attempt to access the modified data item, b.1) if said digital environment is supported, allowing the client application that natively consumes the data from said modified data item while ignoring its wrapper; b.2) if said digital environment is unsupported, displaying said message and said URL on said terminal device; c) in response to browsing said URL by a browser or a web application client, on said terminal device: c.1) authenticating the credentials of the user of said terminal device; c.2) locating, by a web server, said modified data item using said credentials and said unique identifier; c.3) retrieving and decrypting the modified item; d) converting the decrypted modified data item to a format capable of being consumed by said browser or said web application client; and e) if said user has a permission, allowing said user to at least view said data item by rendering it to said browser or to said web application client in said terminal device. 2. The method according to claim 1, wherein the data item is intercepted by a server. 3. The method according to claim 1, wherein the data item is intercepted by a component that is integrated within an application. 4. The method according to claim 1, wherein a plurality of data items are intercepted at different locations and during different time spans. 5. 
The method according to claim 4, wherein the different time spans include: the moment of creation of an encrypted data item; the moment of transmitting said encrypted data item; when said encrypted data item is stored. 6. The method according to claim 1, wherein the intercepted data item is determined to be encrypted when its extension is one of predetermined extensions. 7. The method according to claim 1, wherein the intercepted data item is determined to be encrypted when it has a predetermined structure. 8. The method according to claim 1, wherein the intercepted data item is determined to be encrypted when its content is found to contain unique strings that are indicative of encrypted content. 9. The method according to claim 1, wherein a component for interception and modification of the data item is integrated with the application that is responsible for the creation, delivery, or storage of the data item. 10. The method according to claim 9, wherein the component is a proxy between the application that sends the item to the server and the server managing the item. 11. The method according to claim 1, wherein the modification process is triggered by a batch process that scans data items located at different locations during predetermined intervals, in order to identify encrypted data items. 12. The method according to claim 1, wherein the modification process is triggered by a predetermined event in a running application, in order to identify encrypted data items. 13. The method according to claim 1, wherein the unique ID is injected into an indexable property of the clear text section of the modified wrapper, to be used by a repository for retrieving the modified data item. 14. The method according to claim 13, wherein the indexable property is an email header. 15. The method according to claim 1, wherein content of the modified data item is accessed after a user clicks on the URL link. 16. 
The method according to claim 1, wherein rendering the data item to the terminal device is done by accessing a renderer component. 17. The method according to claim 16, wherein the renderer component performs one or more of the following actions selected from the group of analyzing a user agent, authenticating the user, obtaining an identifier associated with the modified data item, retrieving the modified data item, decrypting the modified data item, converting the modified data item to a standard rendering format, adjusting the rendering format to a screen requestor, hardening functionalities of the modified data item, and responding to the requestor. 18. The method according to claim 1, wherein the terminal device is a tablet or a smartphone. 19. The method according to claim 16, wherein the renderer component is a server. 20. The method according to claim 1, wherein a global unique identifier is transmitted to a central database and the modified data item is copied to a dedicated storage server. 21. The method according to claim 1, wherein the terminal device is a mobile communication device. 22. 
A system for accessing content of encrypted data items in an unsupported digital environment, comprising: a) a plurality of terminal devices being in a digital environment and operable to receive and transmit data items over a data network, each of said terminal devices having a browser or a web application client; b) a software module(s) for performing the following steps, before said data item is being accessed by said terminal device: b.1) intercepting said data item; b.2) determining whether said data item is encrypted; b.3) if found to be encrypted, modifying or replacing the wrapper of said data item by embedding a URL having a unique identifier and a message into the wrapper of said data item; c) a software module(s) for performing the following steps, in response to an attempt to access the modified data item: c.1) if said digital environment is supported, allowing the client application that natively consumes the data from said modified data item while ignoring its wrapper; c.2) if said digital environment is unsupported, displaying said message and said URL on said terminal device; d) a software module(s) for performing the following steps, in response to browsing said URL: d.1) authenticating the credentials of the user of said terminal device; d.2) locating, by a web server, said modified data item using said credentials and said unique identifier; d.3) retrieving and decrypting the modified item; e) converting the decrypted modified data item to a format capable of being consumed by said browser or said web application client; and f) a renderer server for allowing said user, if he has a permission, to at least view said data item by rendering it to said browser or to said web application client in said terminal device.
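The interception steps a.1 through a.3 of claim 1 above can be sketched as a small dispatcher: an item is judged encrypted by its extension or by marker strings in its content (claims 6 and 8), and if so, its wrapper is replaced with a message plus a URL carrying a unique identifier. The extension list, marker string, wrapper field names, and render-server URL below are hypothetical stand-ins, not the application's actual formats.

```python
import uuid

# Hypothetical detection rules in the spirit of claims 6 and 8.
ENCRYPTED_EXTENSIONS = {".pgp", ".gpg", ".p7m"}
ENCRYPTED_MARKERS = (b"-----BEGIN PGP MESSAGE-----",)

def is_encrypted(filename, content):
    """Detect encryption by extension (claim 6) or by unique strings
    indicative of encrypted content (claim 8)."""
    if any(filename.endswith(ext) for ext in ENCRYPTED_EXTENSIONS):
        return True
    return any(marker in content for marker in ENCRYPTED_MARKERS)

def modify_wrapper(item):
    """Embed a URL with a unique identifier and a message into the wrapper
    (step a.3). A supported client ignores the wrapper and consumes the data
    natively; an unsupported one displays the message and URL (steps b.1-b.2)."""
    if not is_encrypted(item["filename"], item["content"]):
        return item
    uid = uuid.uuid4().hex
    item["wrapper"] = {
        "unique_id": uid,  # also injected into an indexable property (claim 13)
        "url": f"https://render.example.com/view/{uid}",  # hypothetical renderer URL
        "message": "This content is protected. Open the link to view it.",
    }
    return item

item = {"filename": "contract.pdf.pgp",
        "content": b"-----BEGIN PGP MESSAGE-----\n...",
        "wrapper": None}
modified = modify_wrapper(item)
```

The web-server side (steps c.1 through e) would then parse the unique identifier out of the browsed URL, look the item up in the repository, decrypt it, and render it in a browser-consumable format.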
A method for accessing content of encrypted data item(s) by a terminal device operating in a digital environment, according to which, before the data item is accessed by the terminal device, it is intercepted and, if found to be encrypted, modified. The wrapper of the data item is modified or replaced by embedding a URL with a unique identifier and a message into the wrapper of the data item. If a supported terminal device attempts to access the modified data item, the client application natively consumes the data from the modified data item and ignores its wrapper. If not, the message and the URL are displayed on the terminal device and the user browses the URL. Then, after authentication, a web server locates the modified data item using the unique identifier, retrieves and decrypts the modified item, and converts the decrypted modified data item to a format that can be consumed by the browser. Then, if the user has permission, he can view the data item by rendering it to the browser in his terminal device. 1. 
A method for accessing content of encrypted data item(s) by a terminal device operating in a digital environment, comprising the steps of: a) before said data item is accessed by said terminal device, a.1) intercepting said data item; a.2) determining whether said data item is encrypted; a.3) if found to be encrypted, modifying or replacing the wrapper of said data item by embedding a URL having a unique identifier and a message into the wrapper of said data item; b) in response to an attempt to access the modified data item, b.1) if said digital environment is supported, allowing the client application that natively consumes the data from said modified data item while ignoring its wrapper; b.2) if said digital environment is unsupported, displaying said message and said URL on said terminal device; c) in response to browsing said URL by a browser or a web application client on said terminal device: c.1) authenticating the credentials of the user of said terminal device; c.2) locating, by a web server, said modified data item using said credentials and said unique identifier; c.3) retrieving and decrypting the modified item; d) converting the decrypted modified data item to a format capable of being consumed by said browser or said web application client; and e) if said user has permission, allowing said user to at least view said data item by rendering it to said browser or to said web application client in said terminal device. 2. The method according to claim 1, wherein the data item is intercepted by a server. 3. The method according to claim 1, wherein the data item is intercepted by a component that is integrated within an application. 4. The method according to claim 1, wherein a plurality of data items are intercepted at different locations and during different time spans. 5. 
The method according to claim 4, wherein the different time spans include: the moment of creation of an encrypted data item; the moment of transmitting said encrypted data item; and when said encrypted data item is stored. 6. The method according to claim 1, wherein the intercepted data item is determined to be encrypted when its extension is one of predetermined extensions. 7. The method according to claim 1, wherein the intercepted data item is determined to be encrypted when it has a predetermined structure. 8. The method according to claim 1, wherein the intercepted data item is determined to be encrypted when its content is found to contain unique strings that are indicative of encrypted content. 9. The method according to claim 1, wherein a component for interception and modification of the data item is integrated with the application that is responsible for the creation, delivery or storage of the data item. 10. The method according to claim 9, wherein the component is a proxy between the application that sends the item to the server and the server managing the item. 11. The method according to claim 1, wherein the modification process is triggered by a batch process that scans data items located at different locations during predetermined intervals, in order to identify encrypted data items. 12. The method according to claim 1, wherein the modification process is triggered by a predetermined event in a running application, in order to identify encrypted data items. 13. The method according to claim 1, wherein the unique ID is injected into an indexable property of the clear-text section of the modified wrapper, to be used by a repository for retrieving the modified data item. 14. The method according to claim 13, wherein the indexable property is an email header. 15. The method according to claim 1, wherein content of the modified data item is accessed after a user clicks on the URL link. 16. 
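The encryption-detection heuristics of claims 6-8 (known extension, predetermined structure, indicative content strings) can be sketched as a simple predicate. This is an assumed illustration; the extension list and magic strings shown are examples chosen by the editor, not values from the source.

```python
# Illustrative lists only; a real deployment would use its own catalogs.
ENCRYPTED_EXTENSIONS = {".pgp", ".gpg", ".p7m"}
MAGIC_STRINGS = (b"-----BEGIN PGP MESSAGE-----",)

def looks_encrypted(filename: str, content: bytes) -> bool:
    """Return True when the intercepted item matches any detection rule."""
    # Claim 6: the extension is one of the predetermined extensions.
    if any(filename.lower().endswith(ext) for ext in ENCRYPTED_EXTENSIONS):
        return True
    # Claim 8: the content contains unique strings indicative of encryption.
    if any(marker in content for marker in MAGIC_STRINGS):
        return True
    # Claim 7 (structural check) would slot in here as a format parser.
    return False
```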
2,400
8,115
8,115
14,364,670
2,463
Provided are methods, corresponding apparatuses, and computer program products for providing service continuity for local area networks. A method comprises receiving, during a handover procedure between local area networks, service information that relates to one or more services supported by one or more neighbor base stations; determining, based upon the service information, which one of the one or more neighbor base stations supports an ongoing service provided by a source base station to a user equipment; and handing over the user equipment from the source base station to the determined neighbor base station. With the claimed inventions, an inter-LAN handover procedure would not impact service continuity, resulting in a more robust user experience.
1-18. (canceled) 19. A method, comprising: receiving, during a handover procedure between local area networks, service information that relates to one or more services supported by one or more neighbor base stations; determining, based upon the service information, which one of the one or more neighbor base stations supports an ongoing service provided by a source base station to a user equipment; and handing over the user equipment from the source base station to the determined neighbor base station. 20. The method as recited in claim 19, wherein the service information is received from the user equipment, and the method further comprises: transmitting measurement configurations to the user equipment; and receiving, from the user equipment, the service information included in a measurement report. 21. The method as recited in claim 20, wherein prior to the transmitting the measurement configurations, the method further comprises: determining the measurement configurations based upon a previously received measurement report without the service information. 22. The method as recited in claim 19, wherein the measurement configurations include information that relates to operating frequencies of the one or more neighbor base stations, a list of identifiers of the one or more neighbor base stations, or a combination of the operating frequencies and the list of identifiers of the one or more neighbor base stations. 23. The method as recited in claim 19, wherein the service information is received from a support network element, and the method further comprises: receiving, from the user equipment, a measurement report that includes identifiers of the one or more neighbor base stations; and retrieving, based upon the identifiers, the service information from the support network element. 24. 
An apparatus, comprising: at least one processor; and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus to: receive, during a handover procedure between local area networks, service information that relates to one or more services supported by one or more neighbor base stations; determine, based upon the service information, which one of the one or more neighbor base stations supports an ongoing service provided by a source base station to a user equipment; and hand over the user equipment from the source base station to the determined neighbor base station. 25. The apparatus as recited in claim 24, wherein the service information is received from the user equipment, and the apparatus is further caused to: transmit measurement configurations to the user equipment; and receive, from the user equipment, the service information included in a measurement report. 26. The apparatus as recited in claim 25, wherein prior to transmitting the measurement configurations, the apparatus is further caused to: determine the measurement configurations based upon a previously received measurement report without the service information. 27. The apparatus as recited in claim 24, wherein the measurement configurations include information that relates to operating frequencies of the one or more neighbor base stations, a list of identifiers of the one or more neighbor base stations, or a combination of the operating frequencies and the list of identifiers of the one or more neighbor base stations. 28. The apparatus as recited in claim 24, wherein the service information is received from a support network element, and the apparatus is further caused to: receive, from the user equipment, a measurement report that includes identifiers of the one or more neighbor base stations; and retrieve, based upon the identifiers, the service information from the support network element. 29. 
An apparatus, comprising: at least one processor; and at least one memory including computer program code, the at least one memory and computer program code configured to, with the at least one processor, cause the apparatus to: include, during a handover procedure between local area networks, service information that relates to one or more services supported by one or more neighbor base stations into a measurement report; and transmit the measurement report to a source base station. 30. The apparatus as recited in claim 29, wherein prior to the including, the apparatus is further caused to obtain, based upon measurement configurations received from the source base station, the service information from the one or more neighbor base stations.
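The core determination step of the handover claims above, i.e. picking the neighbor base station that supports the ongoing service, can be sketched as follows. This is a minimal sketch under assumed data shapes; `choose_handover_target` and the service names are the editor's illustrations, not terminology from the source.

```python
def choose_handover_target(neighbors: dict, ongoing_service: str):
    """Given a mapping of neighbor base-station IDs to the set of
    services they advertise (obtained via the measurement report or a
    support network element), return the first neighbor that supports
    the service the source base station is currently providing."""
    for bs_id, services in neighbors.items():
        if ongoing_service in services:
            return bs_id
    # No neighbor supports it: handing over would break service continuity.
    return None
```

In practice the selection would also weigh radio measurements; the sketch isolates only the service-continuity filter that the claims add to an ordinary handover decision.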
2,400
8,116
8,116
16,378,306
2,431
A hardware security token in contact with a user's body can send a signal via interbody communication to one or more electronic devices associated with a system of electronic devices having unified access controls such that a user can access each of the electronic devices using the same credentials. The signal sent by the hardware security token can be deputized by a user in possession of credentials to the system as a temporary proxy for that user's identity. In other examples, the signal sent by the hardware security token can be deputized by a user in possession of credentials to the system as a temporary proxy for another user's identity. In some embodiments, the proxy can expire after a period of time or after a particular event occurs.
1. A method of authenticating a user of an electronic device, comprising: receiving a modulated signal through a body of the user of the electronic device; determining whether the modulated signal is deputized as an identity proxy of an authorized user of the electronic device; and denying the user access to a feature of the electronic device unless the modulated signal is deputized as an identity proxy of an authorized user of the electronic device. 2. The method of claim 1, further comprising: identifying a set of valid permissions associated with the identity proxy; and limiting access to the electronic device, by the user, to the set of valid permissions. 3. The method of claim 2, wherein identifying the set of valid permissions comprises: identifying a period of time for which a permission associated with the identity proxy is valid; and identifying the permission as a valid permission when a current time is within the period of time. 4. The method of claim 2, wherein identifying the set of valid permissions comprises: identifying a first geographic region in which a permission associated with the identity proxy is valid; determining a second geographic region in which the modulated signal is received by the electronic device; and identifying the permission as a valid permission when the second geographic region is within the first geographic region. 5. The method of claim 1, further comprising: requiring an identifier from the user of the electronic device; receiving the identifier from the user of the electronic device; determining whether the identifier identifies an authorized user of the electronic device; and denying the user access to the feature of the electronic device unless the identifier identifies an authorized user of the electronic device. 6. The method of claim 1, wherein the electronic device is a home appliance. 7. The method of claim 1, wherein the identity proxy comprises a proxy for a credential of the authorized user. 8. 
An electronic device, comprising: a capacitive interface configured to capacitively couple to a body of a user and receive a modulated signal through the body of the user; a processor configured to: determine whether the modulated signal is deputized as an identity proxy of an authorized user of the electronic device; and deny the user access to a feature of the electronic device unless the modulated signal is deputized as an identity proxy of an authorized user of the electronic device. 9. The electronic device of claim 8, wherein the processor is further configured to: identify a set of valid permissions associated with the identity proxy; and limit access to the electronic device, by the user, to the set of valid permissions. 10. The electronic device of claim 9, wherein the processor is configured to identify the set of valid permissions by: identifying a period of time for which a permission associated with the identity proxy is valid; and identifying the permission as a valid permission when a current time is within the period of time. 11. The electronic device of claim 9, wherein the processor is configured to identify the set of valid permissions by: identifying a first geographic region in which a permission associated with the identity proxy is valid; determining a second geographic region in which the modulated signal is received by the electronic device; and identifying the permission as a valid permission when the second geographic region is within the first geographic region. 12. The electronic device of claim 8, wherein the processor is further configured to: require an identifier from the user of the electronic device; receive the identifier from the user of the electronic device; determine whether the identifier identifies an authorized user of the electronic device; and deny the user access to the feature of the electronic device unless the identifier identifies an authorized user of the electronic device. 13. 
The electronic device of claim 8, wherein the electronic device is a home appliance. 14. The electronic device of claim 8, wherein the identity proxy comprises a proxy for a credential of the authorized user. 15. A method of authorizing a user to access an electronic device, the method comprising: receiving a modulated signal at the electronic device, the modulated signal received from an authentication token and via a capacitive interface defined between the authentication token in contact with a body of the user and through a portion of the body of the user that is in contact with the electronic device; requesting from the user, by the electronic device, a credential associated with authorized access to the electronic device; and deputizing the modulated signal as a proxy for the credential. 16. The method of claim 15, further comprising: permitting access to the electronic device upon receiving the modulated signal at the electronic device via the capacitive interface. 17. The method of claim 15, wherein the credential comprises biometric information associated with the user. 18. The method of claim 15, wherein the modulated signal is deputized for a timeout period. 19. The method of claim 18, wherein a selection of the timeout period is received from the user. 20. The method of claim 15, wherein the modulated signal comprises a rolling code.
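The permission-validity checks of claims 3-4 (and their apparatus counterparts, claims 10-11) combine a time window with a geographic region. A minimal sketch, assuming a flat dictionary representation of a permission chosen by the editor for illustration:

```python
def permission_valid(perm: dict, now: float, location: str) -> bool:
    """A permission attached to an identity proxy is valid only when the
    current time falls inside its validity window (claims 3/10) and the
    device's region falls inside the permitted region (claims 4/11)."""
    in_time = perm["valid_from"] <= now <= perm["valid_until"]
    in_region = location in perm["regions"]
    return in_time and in_region
```

Access to a feature would then be granted only if the deputized modulated signal is recognized and at least one associated permission passes this check.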
2,400
8,117
8,117
15,154,789
2,439
A wireless network system includes a user device, a client and an access point. In the wireless network system, a wireless network mode of the client is started in an AdHoc mode in response to a specific operation, and a wireless network mode of the user device is switched to an AdHoc mode when it is detected that the wireless network mode of the client is started in the AdHoc mode. Then, infrastructure network information including a network name and an encryption key for setting the wireless network communication in the infrastructure mode is transmitted from the user device to the client, and the wireless network mode of the client is switched to the infrastructure mode on the basis of the infrastructure network information.
1. A network apparatus comprising: a processor; a network interface; a storage which stores network information for setting wireless access and instructions that, when executed by the processor, cause the network apparatus to: detect wireless access with a particular network name by another network apparatus; when the wireless access with the particular network name is detected, switch a network mode of the network apparatus from an infrastructure mode to an AdHoc mode to access the other network apparatus; and after switching the network mode, send the network information via the network interface to the other network apparatus. 2. The network apparatus of claim 1, wherein the network information comprises a network name and an encryption key. 3. The network apparatus of claim 1, wherein the network apparatus is a controller that controls the other network apparatus. 4. A network apparatus comprising: a processor; a network interface; a storage which stores instructions that, when executed by the processor, cause the network apparatus to: receive an operation by a user; when the operation is received, start a wireless access by an AdHoc mode with another network apparatus; after starting the wireless access, receive network information via the network interface, and store the network information to the storage; and after receiving the network information, switch a network mode of the network apparatus from the AdHoc mode to an infrastructure mode by using the network information. 5. The network apparatus of claim 4, wherein the network apparatus is an audio player. 6. 
A method of switching a network mode of a network apparatus established with another network apparatus, the method comprising: detecting wireless access with a particular network name by the other network apparatus; when the wireless access with the particular network name is detected, switching the network mode of the network apparatus from an infrastructure mode to an AdHoc mode to access the other network apparatus; and after switching the network mode, sending network information for setting the wireless access via the network interface to the other network apparatus. 7. A method of switching a network mode of a network apparatus established with another network apparatus, the method comprising: receiving an operation by a user; when the operation is received, starting a wireless access by an AdHoc mode with the other network apparatus; after starting the wireless access, receiving network information; and after receiving the network information, switching the network mode of the network apparatus from the AdHoc mode to an infrastructure mode by using the network information.
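The provisioning flow in the claims above (a controller detects a client's AdHoc network with a particular name, temporarily joins it, hands over the infrastructure SSID and key, and both sides return to infrastructure mode) can be sketched as a small state simulation. This is a hedged illustration, not the patented implementation; all class and attribute names (`Controller`, `Client`, `provision`, the `ssid`/`key` dict) are hypothetical.

```python
# Minimal sketch of the AdHoc-to-infrastructure provisioning handshake.
# Names and data layout are illustrative assumptions, not from the source.

class Client:
    """Device to be provisioned; starts wireless access in AdHoc mode."""
    def __init__(self):
        self.mode = None
        self.network_name = None
        self.network_info = None

    def start_adhoc(self, network_name):
        # Claim 4/7: start wireless access in AdHoc mode in response to a user operation.
        self.mode = "adhoc"
        self.network_name = network_name

    def receive_network_info(self, info):
        # Store the received SSID/key, then switch to infrastructure mode.
        self.network_info = info
        self.mode = "infrastructure"


class Controller:
    """User device holding the infrastructure network settings (SSID + key)."""
    def __init__(self, network_info):
        self.mode = "infrastructure"
        self.network_info = network_info

    def provision(self, client, particular_name):
        # Claim 1/6: detect AdHoc access with the particular network name,
        # switch to AdHoc, send the network information, then switch back.
        if client.mode == "adhoc" and client.network_name == particular_name:
            self.mode = "adhoc"
            client.receive_network_info(self.network_info)
            self.mode = "infrastructure"
            return True
        return False


controller = Controller({"ssid": "HomeAP", "key": "secret"})
client = Client()
client.start_adhoc("SETUP-NET")
assert controller.provision(client, "SETUP-NET")
assert client.mode == "infrastructure"
assert client.network_info["ssid"] == "HomeAP"
```

The controller only provisions a client whose AdHoc network matches the expected name, mirroring the "particular network name" condition in claims 1 and 6.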
2,400
8,118
8,118
14,338,767
2,484
A video camera of a video monitoring system includes an image generation unit which provides a sequence of digital images (DI) having watchdog information data to supervise the sequence of digital images; an information data encoder adapted to encode the watchdog information data to generate information blocks; a video compression unit adapted to compress the digital images received from the image generation unit to generate compressed digital images consisting of image blocks; a data transmission unit adapted to transmit the generated image blocks received from the video compression unit along with the information blocks generated by the information data encoder.
1. A video camera of a video monitoring system comprising: an image generation unit which provides a sequence of digital images (DI) having watchdog information data (WID) to supervise said sequence of digital images (DI); an information data encoder adapted to encode the watchdog information data (WID) to generate information blocks; a video compression unit adapted to compress the digital images (DI) received from the image generation unit to generate compressed digital images consisting of image blocks; a data transmission unit adapted to transmit the generated image blocks received from the video compression unit along with the information blocks generated by said information data encoder. 2. The video camera according to claim 1 further comprising a separation unit adapted to separate watchdog information data (WID) embedded in the sequence of digital images (DI) received from said image generation unit and to store the separated watchdog information data in a memory connected to the information data encoder. 3. The video camera according to claim 1 wherein the data transmission unit of said video camera comprises encapsulation means adapted to encapsulate the image blocks and the information blocks as payload data in data packets transmitted by the data transmission unit via an interface of said video camera to a video receiver of said video monitoring system. 4. The video camera according to claim 1 wherein the image generation unit provides a sequence of digital images (DI) each being formed by a matrix of image pixels, wherein each digital image (DI) generated by said image generation unit comprises pixel rows and pixel columns, wherein the watchdog information data (WID) of the digital image (DI) generated by said image generation unit is embedded in a predetermined pixel row of said digital image (DI). 5. 
The video camera according to claim 4 wherein the watchdog information data is carried in a predetermined group of pixels within the predetermined row of said digital image (DI) and comprises image parameters of said digital image (DI) including an image counter value or timestamp data provided by said image generation unit when generating the respective digital image (DI). 6. The video camera according to claim 1 wherein said video compression unit generates an interrupt signal at the end of each received digital image compressed by said video compression unit, wherein the generated interrupt signal is applied to said information data encoder which encodes the watchdog information data (WID) of the respective digital image DI to generate information blocks which are added to the image blocks output by said video compression unit. 7. The video camera according to claim 3 wherein the encapsulation means of the data transmission unit is adapted to encapsulate the information blocks of a digital image generated by said information data encoder and the image blocks of the compressed digital image output by said video compression unit as payload data in at least one Ethernet package transmitted via the interface of said video camera to said video receiver of said video monitoring system. 8. 
A video receiver of a video monitoring system comprising: a data reception unit adapted to receive image blocks and information blocks of digital images (DI) from at least one video camera of said video monitoring system; a video decoder adapted to decode the received image blocks and the received image information blocks of the digital images (DI); an information data decoder adapted to decode the decoded information blocks received from said video decoder to provide watchdog information data (WID) of the digital images (DI); an evaluation unit adapted to evaluate the watchdog information data (WID) of the digital images (DI) to supervise a sequence of the digital images received from at least one video camera of said video monitoring system. 9. The video receiver according to claim 8 wherein the data reception unit of said video receiver comprises decapsulation means adapted to decapsulate image blocks and information blocks carried as payload data in data packets received from at least one video camera of said video monitoring system and to supply the decapsulated image blocks and information blocks to the video decoder of said video receiver. 10. The video receiver according to claim 8 wherein the evaluation unit of said video receiver evaluates the watchdog information data (WID) of the digital images (DI) to detect an inconsistency of image parameters of said digital images (DI) including image counter values and/or timestamp data. 11. The video receiver according to claim 8 wherein the evaluation unit of said video receiver further checks header data of the received data packets, frame rates of the received data packets and an average bandwidth of a communication channel between the video camera and the video receiver to detect a communication failure or communication impairment of the communication channel. 12. 
The video receiver according to claim 8 wherein the evaluation unit of said video receiver checks decoder error messages of the video decoder and/or of the information data decoder to detect a communication failure or communication impairment of a communication channel between a video camera of said video monitoring system and the video receiver. 13. The video receiver according to claim 8 wherein the evaluation unit of said video receiver deactivates a display connected to said video receiver provided for displaying the received images if an inconsistency of image parameters of the digital images (DI) and/or a communication failure or a communication impairment of a communication channel between a video camera and the video receiver is detected by said evaluation unit. 14. A video monitoring system comprising at least one video camera comprising: an image generation unit which provides a sequence of digital images (DI) having watchdog information data (WID) to supervise said sequence of digital images (DI); an information data encoder adapted to encode the watchdog information data (WID) to generate information blocks; a video compression unit adapted to compress the digital images (DI) received from the image generation unit to generate compressed digital images consisting of image blocks; a data transmission unit adapted to transmit the generated image blocks received from the video compression unit along with the information blocks generated by said information data encoder; and a video receiver comprising: a data reception unit adapted to receive image blocks and information blocks of digital images (DI) from at least one video camera of said video monitoring system; a video decoder adapted to decode the received image blocks and the received image information blocks of the digital images (DI); an information data decoder adapted to decode the decoded information blocks received from said video decoder to provide watchdog information data (WID) of the digital 
images (DI); an evaluation unit adapted to evaluate the watchdog information data (WID) of the digital images (DI) to supervise a sequence of the digital images received from at least one video camera of said video monitoring system. 15. A method for increasing security in a video monitoring system comprising the steps of: providing a sequence of digital images having watchdog information data to supervise said sequence of digital images, encoding the watchdog information data to generate information blocks, compressing the digital images to generate compressed digital images consisting of image blocks, and transmitting the generated image blocks along with the information blocks to the video receiver.
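The watchdog mechanism in these claims (per-image counter or timestamp data embedded in a predetermined pixel row, evaluated at the receiver for inconsistencies such as skipped counter values) can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the claimed encoder: images are plain lists of pixel rows, and the helper names `embed_watchdog` and `check_sequence` are hypothetical.

```python
# Hedged sketch: embed a frame counter as watchdog data in the first pixel
# of the predetermined (first) row of each image, then verify counter
# continuity at the receiver side, as the evaluation unit in claim 10 does.

def embed_watchdog(image, counter):
    """Return a copy of `image` with the counter embedded in row 0."""
    image = [row[:] for row in image]
    image[0][0] = counter % 256  # predetermined pixel carries the counter
    return image

def check_sequence(images):
    """True if the embedded counters increase by exactly 1 per image."""
    counters = [img[0][0] for img in images]
    return all(b == (a + 1) % 256 for a, b in zip(counters, counters[1:]))

frames = [embed_watchdog([[0] * 4 for _ in range(4)], i) for i in range(5)]
assert check_sequence(frames)   # intact sequence passes
del frames[2]                   # simulate a dropped/frozen frame
assert not check_sequence(frames)  # inconsistency is detected
```

A real receiver would combine this counter check with the packet-header, frame-rate, and decoder-error checks named in claims 11 and 12 before deactivating the display.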
2,400
8,119
8,119
15,269,157
2,483
In a method and apparatus for processing video data, one or more processors are configured to encode a portion of stored video data in a pixel domain to generate pixel domain video data, a first graphics processing unit is configured to process the video data in a graphics domain to generate graphics domain video data, and an interface transmits the graphics domain video data and the pixel domain video data. One or more processors are configured to parse the video data into a graphics stream and an audio-video stream and decode the video data, a sensor senses movement adaptations of a user, and a second graphics processing unit is configured to generate a canvas on a spherical surface with texture information received from the graphics stream, and render a field of view based on the sensed movement adaptations of the user.
1. An apparatus configured to process video data, the apparatus comprising: a memory configured to store the video data; one or more processors configured to encode a portion of the stored video data in a pixel domain to generate pixel domain video data; a first graphics processing unit configured to process the video data in a graphics domain to generate graphics domain video data; and an interface to transmit the graphics domain video data and the pixel domain video data. 2. The apparatus of claim 1, wherein the one or more processors are further configured to stitch the video data together to form an equirectangular canvas, and wherein the first graphics processing unit is further configured to convert the canvas to a texture and render the texture inside a sphere. 3. The apparatus of claim 2, wherein the first graphics processor is configured to transmit the texture via the interface at a first frame rate, and wherein the one or more processors are further configured to transmit the pixel domain video data via the interface at a second frame rate greater than the first frame rate. 4. The apparatus of claim 2, wherein the first graphics processor is configured to transmit the texture via the interface at a first resolution and wherein the one or more processors are further configured to transmit the pixel domain video data via the interface at a second resolution greater than the first resolution. 5. The apparatus of claim 2, wherein the one or more processors are configured to map the canvas to one of a cube map or a pyramid projection, encode a plurality of tiles at a plurality of resolutions, and transmit one or more tiles of the plurality of tiles that are within a field of view of a user. 6. The apparatus of claim 5, wherein the one or more processors are configured to determine movement adaptations of the user and to determine the field of view based on the determined movement adaptations of the user. 7. 
The apparatus of claim 5, wherein the one or more processors are configured to determine movement adaptations of the user, determine the field of view based on the determined movement adaptations of the user, and adapt surround sound video based on the determined field of view. 8. The apparatus of claim 6, wherein the one or more processors are configured to transmit tiles in a center of the field of view at a first resolution based on the movement adaptations and transmit tiles within the field of view, but not in the center of the field of view, at a second resolution less than the first resolution. 9. The apparatus of claim 1, further comprising: one or more processors configured to parse the video data into a graphics stream and an audio-video stream and decode the video data; a sensor to sense movement adaptations of a user; and a second graphics processing unit configured to generate a canvas on a spherical surface with texture information received from the graphics stream, and render a field of view based on the sensed movement adaptations of the user. 10. An apparatus configured to process video data, the apparatus comprising: a memory configured to store a video stream comprising pixel domain video data and graphics domain video data; one or more processors configured to parse the stored video stream into a graphics stream and an audio-video stream and decode the parsed video stream; a sensor to sense movement adaptations of a user; and a graphics processing unit configured to generate a canvas on a spherical surface with texture information from the graphics stream, and render a field of view based on the sensed movement adaptations of the user. 11. 
The apparatus of claim 10, wherein the one or more processors are configured to decode a plurality of tiles that are within the field of view of the user, upsample both tiles of the plurality of tiles that are low resolution tiles and tiles of the plurality of tiles that are high resolution tiles, and combine the low resolution tiles and the high resolution tiles to form a single image overlay. 12. The apparatus of claim 11, wherein the one or more processors are configured to form the single image overlay based on the sensed movement adaptations of the user. 13. The apparatus of claim 12, wherein the one or more processors are configured to not form the single image overlay based on determining that the sensed movement adaptations of the user cause the upsampled tiles to be outside the field of view. 14. The apparatus of claim 13, wherein the sensed movement adaptations of the user comprise head movement trajectories. 15. The apparatus of claim 13, wherein the one or more processors are configured to compare an extent of head movement of the user to a window of visibility. 16. A method of processing video data, comprising: storing the video data; encoding a portion of the video data in a pixel domain to generate pixel domain video data; processing the video data in a graphics domain to generate graphics domain video data; and transmitting the graphics domain video data and the pixel domain video data. 17. The method of claim 16, further comprising: stitching the video data together to form an equirectangular canvas; converting the canvas to a texture; and rendering the texture inside a sphere. 18. The method of claim 17, further comprising: transmitting the texture at a first frame rate; and transmitting the pixel domain video data at a second frame rate greater than the first frame rate. 19. 
The method of claim 17, further comprising: transmitting the texture at a first resolution; and transmitting the pixel domain video data at a second resolution greater than the first resolution. 20. The method of claim 17, further comprising: mapping the canvas to one of a cube map or a pyramid projection; encoding a plurality of tiles at a plurality of resolutions; and transmitting one or more tiles of the plurality of tiles that are within a field of view of a user. 21. The method of claim 20, further comprising: determining movement adaptations of the user; and determining the field of view based on the determined movement adaptations of the user. 22. The method of claim 20, further comprising: determining movement adaptations of the user; determining the field of view based on the determined movement adaptations of the user; and adapting surround sound video based on the determined field of view. 23. The method of claim 21, further comprising: transmitting tiles in a center of the field of view at a first resolution based on the determined movement adaptations; and transmitting tiles within the field of view, but not in the center of the field of view, at a second resolution less than the first resolution. 24. The method of claim 16, further comprising: decoding the video data and parsing the video data into a graphics stream and an audio-video stream; sensing movement adaptations of a user; generating a canvas on a spherical surface with texture information received from the graphics stream; and rendering a field of view based on the sensed movement adaptations of the user. 25. 
A method of processing a video data, comprising: storing a video stream comprising pixel domain video data and graphics domain video data; parsing the stored video stream into a graphics stream and an audio-video stream and decoding the video stream; sensing movement adaptations of a user; generating a canvas on a spherical surface with texture information from the graphics stream; and rendering a field of view based on the sensed movement adaptations of the user. 26. The method of claim 25, further comprising: decoding a plurality of tiles that are within the field of view of the user; upsampling both tiles of the plurality of tiles that are low resolution tiles and tiles of the plurality of tiles that are high resolution tiles; and combining the low resolution tiles and the high resolution tiles to form a single image overlay. 27. The method of claim 26, further comprising forming the single image overlay based on the sensed movement adaptations of the user. 28. The method of claim 27, further comprising not forming the single image overlay based on determining that the sensed movement adaptations of the user cause the upsampled tiles to be outside the field of view. 29. The method of claim 28, wherein the sensed movement adaptations of the user comprise head movement trajectories. 30. The method of claim 28, further comprising comparing an extent of head movement of the user to a window of visibility.
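The viewport-dependent tiling described in claims 8 and 23 (transmit only tiles inside the user's field of view, with center tiles at a higher resolution than peripheral ones) can be sketched for a one-dimensional ring of tiles. This is a simplified, hypothetical model: the function name `select_tiles`, the ring layout, and the high/low labels are illustrative assumptions, not the claimed mapping.

```python
# Hedged sketch of FOV-based tile selection on a wrap-around ring of tiles:
# tiles within the field of view are selected; the center tile is marked for
# high resolution, the rest of the visible tiles for low resolution.

def select_tiles(num_tiles, center, fov_halfwidth):
    """Return {tile_index: resolution_label} for tiles inside the FOV."""
    selected = {}
    for t in range(num_tiles):
        # angular distance on the ring (accounts for wrap-around)
        d = min(abs(t - center), num_tiles - abs(t - center))
        if d <= fov_halfwidth:
            selected[t] = "high" if d == 0 else "low"
    return selected

tiles = select_tiles(num_tiles=12, center=3, fov_halfwidth=2)
assert tiles[3] == "high"                 # center of the FOV: full resolution
assert tiles[1] == "low" and tiles[5] == "low"  # visible periphery: reduced
assert 0 not in tiles and 6 not in tiles  # outside the FOV: not transmitted
```

On the receiver side, claims 11 and 26 then upsample both tile sets and combine them into a single image overlay, skipping the overlay when sensed head movement would push the upsampled tiles outside the field of view.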
In a method and apparatus for processing video data, one or more processors are configured to encode a portion of stored video data in a pixel domain to generate pixel domain video data, a first graphics processing unit is configured to process the video data in a graphics domain to generate graphics domain video data, and an interface transmits the graphics domain video data and the pixel domain video data. One or more processors are configured to parse the video data into a graphics stream and an audio-video stream and decode the video data, a sensor senses movement adaptations of a user, and a second graphics processing unit is configured to generate a canvas on a spherical surface with texture information received from the graphics stream, and render a field of view based on the sensed movement adaptations of the user.1. An apparatus configured to process video data, the apparatus comprising: a memory configured to store the video data; one or more processors configured to encode a portion of the stored video data in a pixel domain to generate pixel domain video data; a first graphics processing unit configured to process the video data in a graphics domain to generate graphics domain video data; and an interface to transmit the graphics domain video data and the pixel domain video data. 2. The apparatus of claim 1, wherein the one or more processors are further configured to stitch the video data together to form an equirectangular canvas, and wherein the first graphics processing unit is further configured to convert the canvas to a texture and render the texture inside a sphere. 3. The apparatus of claim 2, wherein the first graphics processor is configured to transmit the texture via the interface at a first frame rate, and wherein the one or more processors are further configured to transmit the pixel domain video data via the interface at a second frame rate greater than the first frame rate. 4. 
The apparatus of claim 2, wherein the first graphics processing unit is configured to transmit the texture via the interface at a first resolution and wherein the one or more processors are further configured to transmit the pixel domain video data via the interface at a second resolution greater than the first resolution. 5. The apparatus of claim 2, wherein the one or more processors are configured to map the canvas to one of a cube map or a pyramid projection, encode a plurality of tiles at a plurality of resolutions, and transmit one or more tiles of the plurality of tiles that are within a field of view of a user. 6. The apparatus of claim 5, wherein the one or more processors are configured to determine movement adaptations of the user and to determine the field of view based on the determined movement adaptations of the user. 7. The apparatus of claim 5, wherein the one or more processors are configured to determine movement adaptations of the user, determine the field of view based on the determined movement adaptations of the user, and adapt surround sound audio based on the determined field of view. 8. The apparatus of claim 6, wherein the one or more processors are configured to transmit tiles in a center of the field of view at a first resolution based on the movement adaptations and transmit tiles within the field of view, but not in the center of the field of view, at a second resolution less than the first resolution. 9. The apparatus of claim 1, further comprising: one or more processors configured to parse the video data into a graphics stream and an audio-video stream and decode the video data; a sensor to sense movement adaptations of a user; and a second graphics processing unit configured to generate a canvas on a spherical surface with texture information received from the graphics stream, and render a field of view based on the sensed movement adaptations of the user. 10. 
An apparatus configured to process video data, the apparatus comprising: a memory configured to store a video stream comprising pixel domain video data and graphics domain video data; one or more processors configured to parse the stored video stream into a graphics stream and an audio-video stream and decode the parsed video stream; a sensor to sense movement adaptations of a user; and a graphics processing unit configured to generate a canvas on a spherical surface with texture information from the graphics stream, and render a field of view based on the sensed movement adaptations of the user. 11. The apparatus of claim 10, wherein the one or more processors are configured to decode a plurality of tiles that are within the field of view of the user, upsample both tiles of the plurality of tiles that are low resolution tiles and tiles of the plurality of tiles that are high resolution tiles, and combine the low resolution tiles and the high resolution tiles to form a single image overlay. 12. The apparatus of claim 11, wherein the one or more processors are configured to form the single image overlay based on the sensed movement adaptations of the user. 13. The apparatus of claim 12, wherein the one or more processors are configured to not form the single image overlay based on determining that the sensed movement adaptations of the user cause the upsampled tiles to be outside the field of view. 14. The apparatus of claim 13, wherein the sensed movement adaptations of the user comprise head movement trajectories. 15. The apparatus of claim 13, wherein the one or more processors are configured to compare an extent of head movement of the user to a window of visibility. 16. 
A method of processing video data, comprising: storing the video data; encoding a portion of the video data in a pixel domain to generate pixel domain video data; processing the video data in a graphics domain to generate graphics domain video data; and transmitting the graphics domain video data and the pixel domain video data. 17. The method of claim 16, further comprising: stitching the video data together to form an equirectangular canvas; converting the canvas to a texture; and rendering the texture inside a sphere. 18. The method of claim 17, further comprising: transmitting the texture at a first frame rate; and transmitting the pixel domain video data at a second frame rate greater than the first frame rate. 19. The method of claim 17, further comprising: transmitting the texture at a first resolution; and transmitting the pixel domain video data at a second resolution greater than the first resolution. 20. The method of claim 17, further comprising: mapping the canvas to one of a cube map or a pyramid projection; encoding a plurality of tiles at a plurality of resolutions; and transmitting one or more tiles of the plurality of tiles that are within a field of view of a user. 21. The method of claim 20, further comprising: determining movement adaptations of the user; and determining the field of view based on the determined movement adaptations of the user. 22. The method of claim 20, further comprising: determining movement adaptations of the user; determining the field of view based on the determined movement adaptations of the user; and adapting surround sound audio based on the determined field of view. 23. The method of claim 21, further comprising: transmitting tiles in a center of the field of view at a first resolution based on the determined movement adaptations; and transmitting tiles within the field of view, but not in the center of the field of view, at a second resolution less than the first resolution. 24. 
The method of claim 16, further comprising: decoding the video data and parsing the video data into a graphics stream and an audio-video stream; sensing movement adaptations of a user; generating a canvas on a spherical surface with texture information received from the graphics stream; and rendering a field of view based on the sensed movement adaptations of the user. 25. A method of processing video data, comprising: storing a video stream comprising pixel domain video data and graphics domain video data; parsing the stored video stream into a graphics stream and an audio-video stream and decoding the video stream; sensing movement adaptations of a user; generating a canvas on a spherical surface with texture information from the graphics stream; and rendering a field of view based on the sensed movement adaptations of the user. 26. The method of claim 25, further comprising: decoding a plurality of tiles that are within the field of view of the user; upsampling both tiles of the plurality of tiles that are low resolution tiles and tiles of the plurality of tiles that are high resolution tiles; and combining the low resolution tiles and the high resolution tiles to form a single image overlay. 27. The method of claim 26, further comprising forming the single image overlay based on the sensed movement adaptations of the user. 28. The method of claim 27, further comprising not forming the single image overlay based on determining that the sensed movement adaptations of the user cause the upsampled tiles to be outside the field of view. 29. The method of claim 28, wherein the sensed movement adaptations of the user comprise head movement trajectories. 30. The method of claim 28, further comprising comparing an extent of head movement of the user to a window of visibility.
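As an illustrative aside (not part of the patent text), the viewport-dependent tiling described in claims 5-8 and 20-23 above — transmitting only tiles inside the user's field of view, at full resolution near the view center and reduced resolution toward the edges — could be sketched in Python. The grid dimensions, FOV extents, and foveation threshold below are all assumed values:

```python
# Hypothetical sketch of viewport-dependent tile selection: only tiles
# inside the field of view are sent, with full resolution reserved for
# tiles near the view center (claim 8).
def wrap_yaw(deg):
    """Wrap a yaw angle in degrees into [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

def select_tiles(view_yaw, view_pitch, fov_deg=90.0, center_deg=30.0,
                 yaw_tiles=12, pitch_tiles=6):
    """Return {(col, row): 'high' | 'low'} for equirectangular tiles
    whose centers fall inside the user's field of view."""
    tile_w = 360.0 / yaw_tiles     # degrees of yaw per tile column
    tile_h = 180.0 / pitch_tiles   # degrees of pitch per tile row
    selected = {}
    for col in range(yaw_tiles):
        for row in range(pitch_tiles):
            yaw = -180.0 + (col + 0.5) * tile_w    # tile-center yaw
            pitch = -90.0 + (row + 0.5) * tile_h   # tile-center pitch
            d_yaw = abs(wrap_yaw(yaw - view_yaw))
            d_pitch = abs(pitch - view_pitch)
            if d_yaw <= fov_deg / 2 and d_pitch <= fov_deg / 2:
                # Foveation: tiles near the view center go out at the
                # higher resolution tier.
                is_center = d_yaw <= center_deg and d_pitch <= center_deg
                selected[(col, row)] = "high" if is_center else "low"
    return selected

tiles = select_tiles(view_yaw=0.0, view_pitch=0.0)
```

With a 12x6 grid and a 90° field of view centered at (0, 0), this selects a 4x4 block of tiles, of which the innermost 2x2 block is marked high resolution.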
2,400
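Claims 11 and 26 of the preceding record describe decoding the tiles inside the field of view, upsampling both the low- and high-resolution tiles, and combining them into a single image overlay. A minimal sketch of that combining step, with pixel data modeled as plain nested lists and all names illustrative:

```python
def upsample(tile, target_w, target_h):
    """Nearest-neighbour upsampling of a 2D list of pixel values."""
    src_h, src_w = len(tile), len(tile[0])
    return [[tile[r * src_h // target_h][c * src_w // target_w]
             for c in range(target_w)]
            for r in range(target_h)]

def combine_tiles(tiles, grid_cols, grid_rows, tile_w, tile_h):
    """Upsample each decoded tile to a common size and paste it into
    its slot of one combined overlay image."""
    overlay = [[0] * (grid_cols * tile_w) for _ in range(grid_rows * tile_h)]
    for (col, row), tile in tiles.items():
        patch = upsample(tile, tile_w, tile_h)
        for r in range(tile_h):
            for c in range(tile_w):
                overlay[row * tile_h + r][col * tile_w + c] = patch[r][c]
    return overlay

high = [[9] * 4 for _ in range(4)]   # a 4x4 high-resolution tile
low = [[1] * 2 for _ in range(2)]    # a 2x2 low-resolution tile
overlay = combine_tiles({(0, 0): high, (1, 0): low}, 2, 1, 4, 4)
```

Both tiles end up at the same 4x4 size in the 4x8 overlay, which is why the claims upsample the high-resolution tiles as well as the low-resolution ones.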
8,120
8,120
15,164,955
2,488
Method and apparatus for scanning surfaces of a three-dimensional object employ a first sensor to acquire first data points and a second sensor to acquire second data points. The first sensor has a relatively lower accuracy and faster data point acquisition rate than the second sensor, and the second data points are assigned a higher weighting than the first data points. A three-dimensional coordinate point cloud is generated based on both the first and second data points and their respective weightings.
1. A method comprising: scanning surfaces of a three-dimensional object with a first sensor to acquire first data points representative of the scanned surfaces, and adding the first data points to a three-dimensional coordinate point cloud; scanning surfaces of the three-dimensional object with a second sensor to acquire second data points representative of the scanned surfaces, and adding the second data points to the three-dimensional coordinate point cloud; wherein the first sensor has a lower accuracy and a faster data point acquisition rate than the second sensor, and the second data points are assigned a higher weighting than the first data points; and basing the three-dimensional coordinate point cloud on weighted first and second data points. 2. The method of claim 1, wherein the first sensor comprises an optical area sensor or a 3D laser scanner. 3. The method of claim 2, further comprising adjusting a level of light projected on the object based on reflectivity of the surface being scanned. 4. The method of claim 3, wherein the optical area sensor comprises at least one of a camera and a fringe projector. 5. The method of claim 1, wherein the second sensor comprises at least one of an optical point or line sensor, a point scanning laser probe and a touch probe sensor. 6. The method of claim 1, further comprising comparing the three-dimensional coordinate point cloud to a reference model and determining any deviations therebetween. 7. The method of claim 1, wherein the weightings of the first and second data points are proportional to the respective accuracies of the first and second sensors. 8. The method of claim 1, wherein standard deviation values of the first and second data points are determined, and the weighting of the first and second data points is based on the reciprocal of the respective standard deviation values. 9. 
The method of claim 1, wherein the three-dimensional object is scanned with the first sensor multiple times, with the acquired first data points from each scan being added to the three-dimensional coordinate point cloud. 10. The method of claim 9, wherein the object is supported on a rotatable support, and the object is scanned with the first sensor at multiple rotatable support positions. 11. The method of claim 1, wherein the three-dimensional object is scanned with the first sensor, followed by scanning of the three-dimensional object with the second sensor, or the three-dimensional object is scanned with the second sensor, followed by scanning of the three-dimensional object with the first sensor. 12. The method of claim 1, wherein only selected surfaces of the three-dimensional objects are scanned with the second sensor. 13. The method of claim 12, wherein said selected surfaces include local portions of the object for which first data points potentially have low density or are lacking. 14. The method of claim 12, wherein said selected surfaces include local portions of the object including complex surface geometry. 15. The method of claim 1, further comprising scanning surfaces of the three-dimensional object with a third sensor to acquire third data points representative of the scanned surfaces, and adding the third data points to the three-dimensional coordinate point cloud, wherein the third sensor has a different accuracy and speed of data point acquisition than the first and second sensors, and the third data points are assigned a weighting based on the third sensor accuracy relative to the first and second sensors. 16. 
A method comprising: scanning surfaces of a three-dimensional object with a second sensor to acquire pre-coating data points; subsequently, coating the object and scanning surfaces of the coated object with the second sensor to acquire coated data points, and determining the thickness of the coating by comparing the pre-coating and coated data points; subsequently, scanning surfaces of the coated three-dimensional object with a first sensor to acquire subsequent data points representative of the scanned surfaces, and adding the subsequent data points to a three-dimensional coordinate point cloud after subtracting the determined thickness of the coating, wherein the first sensor has a lower accuracy and faster data point acquisition than the second sensor. 17. An apparatus comprising: a first sensor that scans surfaces of a three-dimensional object to acquire first data points representative of the scanned surfaces; a second sensor that scans surfaces of the three-dimensional object to acquire second data points representative of the scanned surfaces, wherein the first sensor has a lower accuracy and a faster data point acquisition than the second sensor; and a module that collects the first and second data points, adds the first and second data points to a three-dimensional coordinate point cloud, and bases the three-dimensional coordinate point cloud on a weighting of the first and second data points, with the second data points having a higher weighting than the first data points. 18. The apparatus of claim 17, wherein the first sensor comprises an optical area sensor. 19. The apparatus of claim 18, wherein the optical area sensor comprises at least one of a camera and a fringe projector. 20. The apparatus of claim 18, further comprising a light adjustor that adjusts a level of light projected on the object based on reflectivity of the surface being scanned. 21. 
The apparatus of claim 17, wherein the second sensor comprises at least one of an optical point or line sensor, a point scanning laser probe and a touch probe sensor. 22. The apparatus of claim 17, wherein the weightings of the first and second data points are proportional to the respective accuracies of the first and second sensors. 23. The apparatus of claim 18, further comprising a rotatable support for the object, whereby the object is scanned with the first sensor at multiple rotatable support positions. 24. The apparatus of claim 17, further comprising a third sensor to acquire third data points representative of the scanned surfaces, wherein the module collects the third data points and adds them to the three-dimensional coordinate point cloud, wherein the third sensor has a different accuracy and speed of data point acquisition than the first and second sensors, and the third data points are assigned a weighting based on the third sensor accuracy relative to the first and second sensors.
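As an illustrative aside (not from the patent text), the weighting scheme of claims 7-8 above — weights proportional to sensor accuracy, taken as the reciprocal of each sensor's standard deviation — amounts to an inverse-sigma weighted average when coincident samples are fused into one cloud point. A minimal Python sketch with illustrative names and assumed sigma values:

```python
def fuse_points(samples):
    """Fuse coincident measurements of one surface point.
    samples: list of ((x, y, z), sigma); weight = 1 / sigma, so the
    slower, more accurate sensor dominates the fused coordinate."""
    total_w = sum(1.0 / sigma for _, sigma in samples)
    fused = [0.0, 0.0, 0.0]
    for (x, y, z), sigma in samples:
        w = (1.0 / sigma) / total_w   # normalized inverse-sigma weight
        fused[0] += w * x
        fused[1] += w * y
        fused[2] += w * z
    return tuple(fused)

# A fast first-sensor sample (sigma = 0.5 mm) and an accurate
# second-sensor sample (sigma = 0.05 mm) of the same point:
point = fuse_points([((10.0, 0.0, 0.0), 0.5), ((10.6, 0.0, 0.0), 0.05)])
```

The fused x coordinate lands much closer to the second sensor's 10.6 than to the first sensor's 10.0, matching the higher weighting the claims assign to the second data points.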
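The coating workflow of claim 16 above — determine thickness by differencing pre-coating and coated second-sensor scans, then subtract that thickness from later first-sensor scans of the coated part — can be sketched as follows. This is an illustrative simplification: measurements are reduced to scalar heights along the surface normal and point matching is assumed done:

```python
def coating_thickness(pre_coating, coated):
    """Per-point coating thickness from matched height samples taken
    before and after coating with the accurate second sensor."""
    return [c - p for p, c in zip(pre_coating, coated)]

def correct_first_sensor(heights, thickness):
    """Subtract the locally determined coating thickness from fast
    first-sensor measurements of the coated part before adding them
    to the point cloud."""
    return [h - t for h, t in zip(heights, thickness)]

thickness = coating_thickness([5.00, 5.10], [5.20, 5.30])
corrected = correct_first_sensor([5.25, 5.35], thickness)
```

The corrected values approximate the bare-surface geometry, so the fast sensor's points can be merged into the same cloud as the pre-coating reference data.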
2,400
8,121
8,121
13,844,038
2,456
Systems and methods disclosed herein deliver secondary content to a plurality of user devices, the secondary content comprising events synchronized to primary content. The process can include: delivering an application to the plurality of user devices, the application configured to play or execute secondary content on the user devices; and causing the secondary content executed on the user devices to be synchronized with primary content.
1. A method comprising: receiving primary content information, the primary content information at least partially identifying a primary content that is played in a motion picture theater; identifying secondary content that is related to the primary content; downloading the secondary content onto a user device; and executing the secondary content on the user device. 2. The method of claim 1, wherein executing the secondary content on the user device comprises: synchronizing the secondary content with the primary content; and executing the synchronized secondary content on the user device. 3. The method of claim 2, wherein synchronizing the secondary content with the primary content comprises receiving a synchronization signal on the user device. 4. The method of claim 3, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 5. The method of claim 2, wherein the secondary content comprises an alternate audio track. 6. The method of claim 5, further comprising performing audio signal cancellation on the primary content. 7. The method of claim 2, wherein the secondary content comprises alternate language subtitles. 8. The method of claim 1, further comprising displaying the identified secondary content for selection by a user. 9. The method of claim 1, further comprising identifying the user using user-specific credentials. 10. The method of claim 9, further comprising displaying the identified secondary content for selection by a user, wherein the secondary content displayed to the user is based in part on user-specific information associated with the identified user. 11. 
A non-transitory computer readable medium comprising an instruction set configured to cause a computing device to perform: receiving primary content information, the primary content information identifying a primary content that is played in a motion picture theater; identifying secondary content that is related to the primary content; downloading the secondary content onto a user device; and executing the secondary content on the user device. 12. The non-transitory computer readable medium of claim 11, wherein executing the secondary content on the user device comprises: synchronizing the secondary content with the primary content; and executing the synchronized secondary content on the user device. 13. The non-transitory computer readable medium of claim 12, wherein synchronizing the secondary content with the primary content comprises receiving a synchronization signal on the user device. 14. The non-transitory computer readable medium of claim 13, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 15. The non-transitory computer readable medium of claim 12, wherein the secondary content comprises an alternate audio track. 16. The non-transitory computer readable medium of claim 15, wherein the instruction set is further configured to cause a computing device to perform audio signal cancellation on the primary content. 17. The non-transitory computer readable medium of claim 12, wherein the secondary content comprises alternate language subtitles. 18. The non-transitory computer readable medium of claim 11, further comprising displaying the identified secondary content for selection by a user. 19. The non-transitory computer readable medium of claim 18, further comprising identifying the user using user-specific credentials. 20. 
The non-transitory computer readable medium of claim 19, wherein the secondary content displayed to the user is based in part on user-specific information associated with the identified user. 21. A system for delivering secondary content in a motion picture theater, the system comprising: an application server configured to communicate with a user device, the user device configured to execute secondary content, wherein, the secondary content is related to a primary content that is played in the motion picture theater. 22. The system of claim 21, wherein the secondary content is configured to be synchronized with the primary content, and played simultaneously with the primary content. 23. The system of claim 22, wherein the application server communicates with the user device by sending a synchronization signal to the user device to synchronize the secondary content with the primary content. 24. The system of claim 23, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 25. The system of claim 22, wherein the secondary content comprises an alternate audio track. 26. The system of claim 25, wherein the alternate audio track comprises an audio signal cancellation component for cancelling audio signals in the primary content. 27. The system of claim 22, wherein the secondary content comprises alternate language subtitles. 28. The system of claim 21, wherein the application server is a remote server that is outside the motion picture theater. 29. The system of claim 21, wherein the application server is a local server that is inside the motion picture theater. 30. A method comprising: receiving primary content information, the primary content information at least partially identifying a primary content that is played in a motion picture theater; identifying secondary content that is related to the primary content; and transmitting the secondary content to a user device. 31. 
The method of claim 30, further comprising transmitting a synchronization signal to the user device for synchronizing the secondary content with the primary content. 32. The method of claim 31, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 33. The method of claim 31, wherein the secondary content comprises an alternate audio track. 34. The method of claim 33, further comprising transmitting primary content audio information for performing audio signal cancellation on the primary content. 35. The method of claim 31, wherein the secondary content comprises alternate language subtitles. 36. The method of claim 30, further comprising identifying the user using user-specific credentials. 37. The method of claim 36, wherein identifying secondary content is based in part on user-specific information associated with the identified user. 38. A method comprising: preparing a secondary content configured to be played on a user device simultaneously with a primary content, the primary content being configured to be displayed in a motion picture theater; and transmitting the secondary content to a user device for playing on the user device simultaneously with the primary content. 39. The method of claim 38, wherein the secondary content is configured to be played in synchronicity with the primary content such that an event in the primary content triggers a corresponding event in the secondary content. 40. The method of claim 38, wherein the secondary content comprises an alternate audio track. 41. The method of claim 40, further comprising primary content audio information that can be used for performing audio signal cancellation on the primary content. 42. The method of claim 38, wherein the secondary content comprises alternate language subtitles. 43. The method of claim 38, further comprising displaying the secondary content for selection by a user.
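As an illustrative aside (not from the patent text), the time-code synchronization of claims 2-4 above can be sketched as anchoring the primary content's position to a local clock when a sync signal arrives, then deriving the matching secondary-track position at any later instant. Class and attribute names are assumptions:

```python
import time

class SecondarySync:
    """Minimal time-code synchronization sketch for a user device."""

    def on_timecode(self, primary_position, received_at=None):
        """Record the primary position (seconds) carried by a received
        synchronization signal, stamped with the local clock."""
        self.anchor_pos = primary_position
        self.anchor_time = time.monotonic() if received_at is None else received_at

    def secondary_position(self, now=None):
        """Position (seconds) to play in the secondary track at local
        time `now`, assuming both tracks share a timeline."""
        now = time.monotonic() if now is None else now
        return self.anchor_pos + (now - self.anchor_time)

sync = SecondarySync()
sync.on_timecode(primary_position=120.0, received_at=1000.0)
position = sync.secondary_position(now=1002.5)   # 122.5 s into the track
```

Each new synchronization signal simply re-anchors the mapping, which compensates for drift between the theater projector and the device clock.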
Systems and methods disclosed herein include systems and methods for delivering secondary content to a plurality of user devices, the secondary content comprising events synchronized to primary content. The process can include: delivering an application to a plurality of user devices, the application configured to play or executed secondary content on the user devices; and causing the secondary content executed on the user devices to be synchronized with primary content.1. A method comprising: receiving primary content information, the primary content information at least partially identifying a primary content that is played in a motion picture theater; identifying secondary content that is related to the primary content; downloading the secondary content onto a user device; and executing the secondary content on the user device. 2. The method of claim 1, wherein executing the secondary content on the user device comprises: synchronizing the secondary content with the primary content; and executing the synchronized secondary content on the user device. 3. The method of claim 2, wherein synchronizing the secondary content with the primary content comprises receiving a synchronization signal on the user device. 4. The method of claim 3, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 5. The method of claim 2, wherein the secondary content comprises an alternate audio track. 6. The method of claim 5, further comprising performing audio signal cancellation on the primary content. 7. The method of claim 2, wherein the secondary content comprises alternate language subtitles. 8. The method of claim 1, further comprising displaying the identified secondary content for selection by a user. 9. The method of claim 1, further comprising identifying the user using user-specific credentials. 10. 
The method of claim 9, further comprising displaying the identified secondary content for selection by a user, wherein the secondary content displayed to the user is based in part on user-specific information associated with the identified user. 11. A non-transitory computer readable medium comprising an instruction set configured to cause a computing device to perform: receiving primary content information, the primary content information identifying a primary content that is played in a motion picture theater; identifying secondary content that is related to the primary content; downloading the secondary content onto a user device; and executing the secondary content on the user device. 12. The non-transitory computer readable medium of claim 11, wherein executing the secondary content on the user device comprises: synchronizing the secondary content with the primary content; and executing the synchronized secondary content on the user device. 13. The non-transitory computer readable medium of claim 12, wherein synchronizing the secondary content with the primary content comprises receiving a synchronization signal on the user device. 14. The non-transitory computer readable medium of claim 13, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 15. The non-transitory computer readable medium of claim 12, wherein the secondary content comprises an alternate audio track. 16. The non-transitory computer readable medium of claim 15, wherein the instruction set is further configured to cause a computing device to perform audio signal cancellation on the primary content. 17. The non-transitory computer readable medium of claim 12, wherein the secondary content comprises alternate language subtitles. 18. The non-transitory computer readable medium of claim 11, further comprising displaying the identified secondary content for selection by a user. 19. 
The non-transitory computer readable medium of claim 18, further comprising identifying the user using user-specific credentials. 20. The non-transitory computer readable medium of claim 19, wherein the secondary content displayed to the user is based in part on user-specific information associated with the identified user. 21. A system for delivering secondary content in a motion picture theater, the system comprising: an application server configured to communicate with a user device, the user device configured to execute secondary content, wherein the secondary content is related to a primary content that is played in the motion picture theater. 22. The system of claim 21, wherein the secondary content is configured to be synchronized with the primary content, and played simultaneously with the primary content. 23. The system of claim 22, wherein the application server communicates with the user device by sending a synchronization signal to the user device to synchronize the secondary content with the primary content. 24. The system of claim 23, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 25. The system of claim 22, wherein the secondary content comprises an alternate audio track. 26. The system of claim 25, wherein the alternate audio track comprises an audio signal cancellation component for cancelling audio signals in the primary content. 27. The system of claim 22, wherein the secondary content comprises alternate language subtitles. 28. The system of claim 21, wherein the application server is a remote server that is outside the motion picture theater. 29. The system of claim 21, wherein the application server is a local server that is inside the motion picture theater. 30. 
A method comprising: receiving primary content information, the primary content information at least partially identifying a primary content that is played in a motion picture theater; identifying secondary content that is related to the primary content; and transmitting the secondary content to a user device. 31. The method of claim 30, further comprising transmitting a synchronization signal to the user device for synchronizing the secondary content with the primary content. 32. The method of claim 31, wherein the synchronization signal comprises at least one of an audio signal, time code data, location markers, or a manual synchronization signal. 33. The method of claim 31, wherein the secondary content comprises an alternate audio track. 34. The method of claim 33, further comprising transmitting primary content audio information for performing audio signal cancellation on the primary content. 35. The method of claim 31, wherein the secondary content comprises alternate language subtitles. 36. The method of claim 30, further comprising identifying the user using user-specific credentials. 37. The method of claim 36, wherein identifying secondary content is based in part on user-specific information associated with the identified user. 38. A method comprising: preparing a secondary content configured to be played on a user device simultaneously with a primary content, the primary content being configured to be displayed in a motion picture theater; and transmitting the secondary content to a user device for playing on the user device simultaneously with the primary content. 39. The method of claim 38, wherein the secondary content is configured to be played in synchronicity with the primary content such that an event in the primary content triggers a corresponding event in the secondary content. 40. The method of claim 38, wherein the secondary content comprises an alternate audio track. 41. 
The method of claim 40, further comprising providing primary content audio information that can be used for performing audio signal cancellation on the primary content. 42. The method of claim 38, wherein the secondary content comprises alternate language subtitles. 43. The method of claim 38, further comprising displaying the secondary content for selection by a user.
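The synchronization behavior recited above (a user-device application re-aligning secondary content to the primary content's timecode, per claims 2-4) can be sketched roughly as follows; the class, field names, and drift-tolerance approach are illustrative assumptions, not details from the patent:

```python
class SecondaryContentPlayer:
    """Plays a secondary track (e.g., an alternate audio track) on a user
    device, keeping it aligned with primary content via timecode signals."""

    def __init__(self, drift_tolerance=0.05):
        self.drift_tolerance = drift_tolerance  # seconds of drift tolerated
        self.position = 0.0                     # secondary playback position (s)

    def on_sync_signal(self, primary_timecode):
        """Handle a synchronization signal carrying the primary content's
        current timecode; reseek only when drift exceeds the tolerance."""
        drift = abs(self.position - primary_timecode)
        if drift > self.drift_tolerance:
            self.position = primary_timecode  # snap back to the primary timeline
        return drift
```

In this sketch the signal source is abstracted away; per claim 4 it could equally be derived from an audio watermark, time code data, location markers, or a manual sync action.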
2,400
8,122
8,122
15,159,447
2,456
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving from a plurality of origination processes a plurality of messages, identifying a respective destination node and a destination process on the destination node associated with each of the messages, storing each of the messages in a respective buffer for the destination process and destination node associated with the message, identifying one or more of the buffers wherein an aggregate size of all messages stored in each of the identified buffers exceeds a threshold, and for each identified buffer, sending all messages stored in the buffer in bulk to the destination process on the destination node associated with the messages stored in the buffer.
1. A computer-implemented method comprising: receiving from a plurality of origination processes a plurality of messages; identifying a respective destination node and a destination process on the destination node associated with each of the messages; storing each of the messages in a respective buffer for the destination process and destination node associated with the message; identifying one or more of the buffers wherein an aggregate size of all messages stored in each of the identified buffers exceeds a threshold; and for each identified buffer, (i) sending all messages stored in the buffer in bulk to the destination process on the destination node associated with the messages stored in the buffer, and (ii) de-allocating the buffer. 2. The method of claim 1, wherein a first buffer that stores messages associated with a particular destination process and a particular destination node resides on a first node that is different from the particular destination node. 3. The method of claim 1, wherein a first buffer that stores messages associated with a particular destination process and a particular destination node resides on the particular destination node. 4. The method of claim 1, wherein a particular destination node is a virtual machine. 5. The method of claim 1, wherein sending all messages stored in the buffer in bulk to the destination process and destination node associated with the messages stored in the buffer comprises: assembling all messages stored in the buffer in a first message; and sending the first message to the destination process on the destination node. 6. The method of claim 1, further comprising: identifying a particular buffer wherein an amount of time has passed since any messages were sent from the particular buffer; and sending all messages stored in the buffer in bulk to the destination process and destination node associated with the messages stored in the buffer. 7. 
The method of claim 1, wherein each buffer stores messages of one of a plurality of distinct channels wherein each channel comprises an ordered plurality of messages. 8. The method of claim 7, wherein an origination process is associated with a respective second buffer that stores messages of a particular channel according to the order and comprises a respective time-to-live. 9. The method of claim 7, wherein a destination process is associated with a respective second buffer that stores messages of a particular channel according to the order and comprises a respective time-to-live. 10. A system comprising: one or more computer processors programmed to perform operations to: receive from a plurality of origination processes a plurality of messages; identify a respective destination node and a destination process on the destination node associated with each of the messages; store each of the messages in a respective buffer for the destination process and destination node associated with the message; identify one or more of the buffers wherein an aggregate size of all messages stored in each of the identified buffers exceeds a threshold; and for each identified buffer, (i) send all messages stored in the buffer in bulk to the destination process on the destination node associated with the messages stored in the buffer, and (ii) de-allocate the buffer. 11. The system of claim 10, wherein a first buffer that stores messages associated with a particular destination process and a particular destination node resides on a first node that is different from the particular destination node. 12. The system of claim 10, wherein a first buffer that stores messages associated with a particular destination process and a particular destination node resides on the particular destination node. 13. The system of claim 10, wherein a particular destination node is a virtual machine. 14. 
The system of claim 10, wherein to send all messages stored in the buffer in bulk to the destination process and destination node associated with the messages stored in the buffer, the one or more computer processors are programmed to: assemble all messages stored in the buffer in a first message; and send the first message to the destination process on the destination node. 15. The system of claim 10, wherein the one or more computer processors are further programmed to: identify a particular buffer wherein an amount of time has passed since any messages were sent from the particular buffer; and send all messages stored in the buffer in bulk to the destination process and destination node associated with the messages stored in the buffer. 16. The system of claim 10, wherein each buffer is to store messages of one of a plurality of distinct channels wherein each channel comprises an ordered plurality of messages. 17. The system of claim 16, wherein an origination process is associated with a respective second buffer that stores messages of a particular channel according to the order and comprises a respective time-to-live. 18. The system of claim 16, wherein a destination process is associated with a respective second buffer that stores messages of a particular channel according to the order and comprises a respective time-to-live. 19. 
An article comprising a non-transitory computer-readable storage medium having instructions stored thereon that, when executed by one or more computer processors, perform operations to: receive from a plurality of origination processes a plurality of messages; identify a respective destination node and a destination process on the destination node associated with each of the messages; store each of the messages in a respective buffer for the destination process and destination node associated with the message; identify one or more of the buffers wherein an aggregate size of all messages stored in each of the identified buffers exceeds a threshold; and for each identified buffer, (i) send all messages stored in the buffer in bulk to the destination process on the destination node associated with the messages stored in the buffer, and (ii) de-allocate the buffer. 20. The article of claim 19, wherein a first buffer that stores messages associated with a particular destination process and a particular destination node resides on a first node that is different from the particular destination node. 21. The article of claim 19, wherein a first buffer that stores messages associated with a particular destination process and a particular destination node resides on the particular destination node. 22. The article of claim 19, wherein a particular destination node is a virtual machine. 23. The article of claim 19, wherein to send all messages stored in the buffer in bulk to the destination process and destination node associated with the messages stored in the buffer, the one or more computer processors to: assemble all messages stored in the buffer in a first message; and send the first message to the destination process on the destination node. 24. 
The article of claim 19, having further instructions stored thereon that, when executed by the one or more computer processors, perform further operations to: identify a particular buffer wherein an amount of time has passed since any messages were sent from the particular buffer; and send all messages stored in the buffer in bulk to the destination process and destination node associated with the messages stored in the buffer. 25. The article of claim 19, wherein each buffer is to store messages of one of a plurality of distinct channels wherein each channel comprises an ordered plurality of messages. 26. The article of claim 25, wherein an origination process is associated with a respective second buffer that stores messages of a particular channel according to the order and comprises a respective time-to-live. 27. The article of claim 25, wherein a destination process is associated with a respective second buffer that stores messages of a particular channel according to the order and comprises a respective time-to-live.
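The buffering scheme in the claims above — per-(destination node, destination process) buffers that are flushed in bulk once their aggregate size exceeds a threshold and then de-allocated — can be sketched as follows. The class name, callable interface, and byte-length size metric are illustrative assumptions:

```python
from collections import defaultdict

class BulkMessageRouter:
    """Buffers messages per (destination node, destination process) and sends
    a buffer's contents in bulk once its aggregate size exceeds a threshold,
    de-allocating the buffer afterward."""

    def __init__(self, threshold_bytes, send_bulk):
        self.threshold = threshold_bytes
        self.send_bulk = send_bulk        # callable(node, process, messages)
        self.buffers = defaultdict(list)  # (node, process) -> buffered messages
        self.sizes = defaultdict(int)     # (node, process) -> aggregate bytes

    def receive(self, node, process, message):
        key = (node, process)
        self.buffers[key].append(message)
        self.sizes[key] += len(message)
        if self.sizes[key] > self.threshold:
            self.flush(key)

    def flush(self, key):
        # Send all buffered messages in one bulk transfer, then de-allocate
        # the buffer, as in steps (i) and (ii) of claim 1.
        node, process = key
        self.send_bulk(node, process, self.buffers.pop(key))
        del self.sizes[key]
```

A fuller version would also flush a buffer after a period of inactivity (claim 6), so that messages on a quiet channel are not delayed indefinitely.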
2,400
8,123
8,123
14,388,514
2,419
The present invention relates to an encoding method and decoding method, and a device using the same. The encoding method according to the present invention comprises the steps of: specifying an intra prediction mode for a current block; and scanning a residual signal by intra prediction of the current block, wherein the step of scanning the residual signal can determine a scanning type for a luminance signal and a chroma signal of the current block according to an intra prediction mode for a luminance sample of the current block.
1-26. (canceled) 27. A video decoding method, comprising: specifying an intra-prediction mode for a current block; and scanning a residual signal generated through intra-prediction of the current block, wherein the scanning of the residual signal comprises determining a scanning type of residual signals of a luma signal and a chroma signal of the current block based on an intra-prediction mode for a luma sample of the current block. 28. The video decoding method of claim 27, wherein the scanning of the residual signal comprises determining vertical scanning as the scanning type of the residual signals of the luma signal and the chroma signal of the current block if the intra-prediction mode for the luma sample of the current block is a mode close to a horizontal direction or a horizontal direction. 29. The video decoding method of claim 28, wherein the intra-prediction mode for the luma sample of the current block is one of intra-prediction modes numbered 6 or more and 14 or less. 30. The video decoding method of claim 27, wherein the scanning of the residual signal comprises determining horizontal scanning as the scanning type of the residual signals of the luma signal and the chroma signal of the current block if the intra-prediction mode for the luma sample of the current block is a mode close to a vertical direction or a vertical direction. 31. The video decoding method of claim 30, wherein the intra-prediction mode for the luma sample of the current block is one of intra-prediction modes numbered 22 or more and 30 or less. 32. The video decoding method of claim 27, wherein the scanning of the residual signal comprises determining up-right diagonal scanning as the scanning type of the residual signals of the luma signal and the chroma signal of the current block if the intra-prediction mode for the luma sample of the current block is not a directional mode. 33. 
The video decoding method of claim 27, wherein the scanning of the residual signal comprises determining up-right diagonal scanning as the scanning type of the residual signals of the luma signal and the chroma signal of the current block if the intra-prediction mode for the luma sample of the current block is not any one of a vertical direction, a horizontal direction, a direction close to verticality, and a direction close to horizontality. 34. The video decoding method of claim 33, wherein the intra-prediction mode for the luma sample of the current block is one of intra-prediction modes numbered 5 or less, 15 to 21, or 31 or more. 35. The video decoding method of claim 27, wherein the scanning of the residual signal comprises: determining the scanning type of the residual signals of the luma signal and the chroma signal of the current block based on the intra-prediction mode for the luma sample of the current block; and applying the scanning type to the residual signals of the luma signal and the chroma signal of the current block. 36-52. (canceled) 53. The video decoding method of claim 27, wherein the scanning of the residual signal comprises determining the scanning type of the residual signals of the luma signal and the chroma signal of the current block based on the intra-prediction mode for the luma sample of the current block when the residual signals of the luma signal of the current block have any one of a 4×4 transform size and an 8×8 transform size or the residual signals of the chroma signal of the current block have a 4×4 transform size.
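As a rough sketch, the mode-to-scan mapping recited in claims 28-34 reduces to a pair of range tests on the luma intra-prediction mode number, and claim 53's transform-size gate to a second predicate. HEVC-style mode numbering is assumed, and the function names are hypothetical:

```python
def scan_type_for_luma_mode(luma_mode):
    """Pick the residual scanning type, shared by the luma and chroma
    signals, from the luma sample's intra-prediction mode number."""
    if 6 <= luma_mode <= 14:      # near-horizontal prediction -> vertical scan
        return "vertical"
    if 22 <= luma_mode <= 30:     # near-vertical prediction -> horizontal scan
        return "horizontal"
    return "up_right_diagonal"    # non-directional and remaining modes

def mode_dependent_scan_applies(luma_tx_size, chroma_tx_size):
    """Claim 53's gate: mode-dependent scanning is used when the luma
    residual block is 4x4 or 8x8, or the chroma residual block is 4x4."""
    return luma_tx_size in (4, 8) or chroma_tx_size == 4
```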
2,400
8,124
8,124
15,011,183
2,433
Systems and methods for providing remote access to a service in a client-server remote access system. The method includes selecting, by a scheduler, an application server hosting the service, the selecting being performed in accordance with a utilization of resources in the client-server remote access system. A session Uniform Resource Locator (URL) is created that includes a URL payload that uniquely identifies the service and is used to establish the remote access to the service by a client. The system may include a proxy server accessible at a resource URL. The proxy server receives a request from a client to connect to the service. An authentication component authenticates the request in accordance with a payload of the resource URL. A service manager establishes the session between the client and the service connected at the session URL.
1. A method for providing remote access to a service in a client-server remote access system, comprising: selecting, by a scheduler, an application server hosting the service, the selecting being performed in accordance with a utilization of resources in the client-server remote access system; and creating a session uniform resource locator (URL) that includes a URL payload that uniquely identifies the service and is used to establish the remote access to the service by a client. 2. The method of claim 1, further comprising creating a resource URL that is provided to the client for requesting connection to the service prior to the selection of the application server. 3. The method of claim 2, further comprising receiving a request at a proxy server associated with the resource URL to access the service. 4. The method of claim 1, wherein the URL payload that uniquely identifies the service is an App ID. 5. The method of claim 1, further comprising providing an authentication component to establish a trust between the client and the service and between services in the client-server remote access system. 6. The method of claim 5, further comprising: providing a collaboration URL by the authentication component to the client; receiving a second request at the proxy server from a second client using the collaboration URL; and joining the second client to the session. 7. The method of claim 6, wherein the collaboration URL identifies the selected application server and authenticates the second client. 8. The method of claim 5, wherein the session URL is mapped to a user associated with the client. 9. The method of claim 1, further comprising managing resource utilization within the client-server remote access system using the scheduler. 10. The method of claim 9, wherein the scheduler chooses the application server based on the application server running a fewest number of services. 11. 
The method of claim 9, wherein the scheduler chooses the application server based on the application server being a longest-running application server having available capacity. 12. The method of claim 9, wherein the scheduler chooses the application server based on a state of a preexisting running service on the application server. 13. The method of claim 9, wherein the scheduler creates the session URL in real-time to direct the client to the application server hosting the service. 14. The method of claim 9, further comprising providing an orchestrator that starts and stops application servers within the client-server remote access system in accordance with one of load, hardware capacity (e.g., CPU, GPU, memory), networking requirements, cost, or geographic location. 15. The method of claim 14, further comprising providing, from the scheduler to the orchestrator, information about current resource allocation so that the orchestrator can automatically start or stop the application server based on demand. 16. A client-server remote access system for providing access to a service, comprising: a proxy server accessible at a resource Uniform Resource Locator (URL), the proxy server receiving a request from a client to connect to the service; an authentication component that authenticates the request in accordance with a payload of the resource URL; a scheduler that selects an application server hosting the service in accordance with a utilization of resources at the client-server remote access system, the application server creating a session URL that includes the payload that is used to establish a session between the client and the service; and a service manager that establishes the session between the client and the service connected at the session URL in accordance with the authenticated request to communicate application data and state information between the client and the service. 17. 
The client-server remote access system of claim 16, wherein a collaboration URL is provided by the authentication component to the client, wherein a second request is received at the proxy server from a second client using the collaboration URL, and wherein the second client is joined to the session. 18. The client-server remote access system of claim 17, wherein the collaboration URL identifies the application server and authenticates the second client. 19. The client-server remote access system of claim 16, wherein resource utilization within the client-server remote access system is managed using the scheduler, and wherein the scheduler chooses the application server based on the application server running a fewest number of services, or chooses the application server based on the application server being a longest-running application server having available capacity, or chooses the application server based on a state of a preexisting running service on the application server. 20. The client-server remote access system of claim 16, further comprising: providing a collaboration URL by the authentication component to the client, the collaboration URL containing a payload parameter to identify the application server; receiving a second request at the proxy server from a second client using the collaboration URL; and joining the second client to the session. 21. The client-server remote access system of claim 20, wherein the collaboration URL authenticates the second client. 22. The client-server remote access system of claim 16, wherein the scheduler chooses the application server based on one of the following criteria: selecting the application server running a fewest number of services, selecting the application server based on the application server being a longest-running application server having available capacity, or selecting the application server based on a state of a preexisting running service on the application server. 23. 
The client-server remote access system of claim 16, wherein the application server creates the session URL in real time when making the service available to the client.
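Claims 10 and 11 name two concrete scheduler policies: pick the application server running the fewest services, or pick the longest-running server that still has capacity. A minimal sketch of both policies, assuming a hypothetical `AppServer` record (the field names and capacity model are illustrative, not from the patent):

```python
from dataclasses import dataclass, field


@dataclass
class AppServer:
    name: str
    capacity: int            # max concurrent services (hypothetical model)
    started_at: float        # epoch seconds; smaller means longer-running
    services: list = field(default_factory=list)


def pick_fewest_services(servers):
    # Claim 10 style: the server currently running the fewest services wins.
    return min(servers, key=lambda s: len(s.services))


def pick_longest_running(servers):
    # Claim 11 style: among servers with spare capacity, prefer the one
    # that has been running the longest.
    eligible = [s for s in servers if len(s.services) < s.capacity]
    return min(eligible, key=lambda s: s.started_at)


a = AppServer("a", capacity=2, started_at=100.0, services=["s1", "s2"])
b = AppServer("b", capacity=2, started_at=200.0, services=["s3"])
assert pick_fewest_services([a, b]).name == "b"
assert pick_longest_running([a, b]).name == "b"  # a is at capacity, so b wins
```

Either policy could feed the session-URL creation step of claim 13, since both return the single server the client should be directed to.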
2,400
8,125
8,125
14,233,821
2,413
If an uplink resource is not granted, a wireless communication terminal, when uplink data occurs in the wireless communication terminal, transmits a scheduling request using a dedicated scheduling request resource of a second cell that is smaller in cell coverage than a first cell, and if grant of the uplink resource in response to the scheduling request is not performed, the wireless communication terminal transmits a random access preamble and thus performs a random access procedure that requests the grant of the uplink resource from a wireless communication device.
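The abstract describes a two-stage fallback: first the scheduling request (SR) on the second cell's dedicated resource, then the random access (RACH) procedure if no grant arrives. As a minimal control-flow sketch (the retry limit and callable interfaces are hypothetical assumptions, not from the patent):

```python
def uplink_request(send_sr_second_cell, do_rach, max_sr_attempts=4):
    """Transmit the SR on the second cell's dedicated resource up to a limit;
    fall back to the random access procedure (preamble transmission) if no
    grant is received.  Both callables return True when a grant arrives."""
    for _ in range(max_sr_attempts):
        if send_sr_second_cell():
            return "granted_via_sr"
    # No grant despite repeated SRs: escalate to the RACH procedure.
    return "granted_via_rach" if do_rach() else "failed"


# Hypothetical stubs: SR never succeeds, RACH does.
assert uplink_request(lambda: False, lambda: True) == "granted_via_rach"
```

The "predetermined number of times" trigger in claim 6 maps directly onto the `max_sr_attempts` loop bound here.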
1. A wireless communication terminal in which, among two types of cells that make up a component carrier set that is used in carrier aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a DSR management section configured to, when uplink data occurs in the wireless communication terminal, perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if the uplink resource is not granted to the wireless communication terminal; and a RACH management section configured to perform a random access procedure that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if the grant of the uplink resource in response to the scheduling request is not performed. 2. The wireless communication terminal according to claim 1, wherein the RACH management section is configured to perform the random access procedure in the second cell if the uplink resource is not granted although the scheduling request is transmitted, and is configured to perform the random access procedure in the first cell if the random access procedure in the second cell fails. 3. The wireless communication terminal according to claim 1, wherein the RACH management section is configured to perform the random access procedure in the first cell if the uplink resource is not granted although the scheduling request is transmitted. 4. 
The wireless communication terminal according to claim 1, further comprising: a priority determination section configured to determine whether the uplink data is of high priority or of low priority, using a predetermined determination method, wherein the RACH management section is configured to perform the random access procedure in the first cell if it is determined that the uplink data is of high priority and perform the random access procedure in the second cell if it is determined that the uplink data is of low priority. 5. The wireless communication terminal according to claim 4, wherein the RACH management section is configured to perform the random access procedure in the first cell if the random access procedure in the second cell fails. 6. The wireless communication terminal according to claim 1, wherein the case where the uplink resource is not granted although the scheduling request is transmitted is a case where the uplink resource is not granted to the wireless communication terminal although the DSR management section transmits the scheduling request, a predetermined number of times, to the wireless communication device. 7. 
A wireless communication terminal in which, among two types of cells that make up a component carrier set that is used in carrier aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a comparison section configured to, when uplink data occurs in the wireless communication terminal, compare a path loss value that is present when a scheduling request is transmitted to the wireless communication device using the dedicated scheduling request resource of the second cell, with a threshold, if the uplink resource is not granted to the wireless communication terminal; a DSR management section configured to perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if the path loss value is at the threshold or above; and a RACH management section configured to perform a random access procedure in the first cell that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if the path loss value is below the threshold. 8. 
A wireless communication terminal in which, among two types of cells that make up a component carrier set that is used in carrier aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a priority determination section configured to, when uplink data occurs in the wireless communication terminal, determine whether the uplink data is of high priority or of low priority using a predetermined determination method, if the uplink resource is not granted to the wireless communication terminal; a DSR management section configured to perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if it is determined that the uplink data is of low priority; and a RACH management section configured to perform a random access procedure in the first cell that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if it is determined that the uplink data is of high priority. 9. The wireless communication terminal according to claim 7, wherein the RACH management section is configured to perform the random access procedure in the first cell if the uplink resource is not granted to the wireless communication terminal although the DSR management section transmits the scheduling request, a predetermined number of times, to the wireless communication device. 10. 
A wireless communication terminal in which, among two types of cells that make up a component carrier set that is used in carrier aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a priority determination section configured to, when uplink data occurs in the wireless communication terminal, determine whether the uplink data is of high priority or of low priority using a predetermined determination method, if the uplink resource is not granted to the wireless communication terminal; and a DSR management section configured to perform processing that requests the grant of the uplink resource from the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the first cell or the second cell, wherein the DSR management section is configured to transmit the scheduling request using the dedicated scheduling request resource of the second cell if it is determined that the uplink data is of low priority, and the DSR management section is configured to transmit the scheduling request using the dedicated scheduling request resource of the first cell if it is determined that the uplink data is of high priority or if the uplink resource is not granted although the scheduling request is transmitted in the second cell. 11. 
The wireless communication terminal according to claim 10, further comprising: a RACH management section configured to perform a random access procedure in the first cell that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device if the uplink resource is not granted although the scheduling request is transmitted in the first cell. 12. The wireless communication terminal according to claim 10, wherein the case where the uplink resource is not granted although the scheduling request is transmitted is a case where the uplink resource is not granted to the wireless communication terminal although the DSR management section transmits the scheduling request, a predetermined number of times, to the wireless communication device. 13. A wireless communication device, capable of communicating with a wireless communication terminal which, among two types of cells that make up a component carrier set that is used in carrier aggregation, provides a second cell that is smaller in cell coverage than a first cell, wherein the wireless communication device is configured to transmit dedicated control information that includes configuration information relating to a dedicated scheduling request resource of the second cell that is available to the wireless communication terminal, to the wireless communication terminal. 14. The wireless communication device according to claim 13, wherein the dedicated control information includes permission information that gives an instruction to start a random access procedure in which the wireless communication terminal plays a leading role in the second cell in order to obtain grant of an uplink resource. 15. 
A wireless communication system comprising: a wireless communication device, capable of communicating with a wireless communication terminal which, among two types of cells that make up a component carrier set that is used in carrier aggregation, provides a second cell that is smaller in cell coverage than a first cell; and a wireless communication terminal in which a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to the wireless communication device is set for the second cell, wherein the wireless communication device is configured to transmit dedicated control information that includes configuration information relating to a dedicated scheduling request resource of the second cell that is available to the wireless communication terminal, to the wireless communication terminal, and wherein the wireless communication terminal includes a DSR management section configured to, when uplink data occurs in the wireless communication terminal, perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if the uplink resource is not granted to the wireless communication terminal, and a RACH management section configured to perform a random access procedure that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if the grant of the uplink resource in response to the scheduling request is not performed. 16. 
A method of processing an uplink resource request, for use in a wireless communication terminal in which a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device that is capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, among two types of cells that make up a component carrier set that is used in carrier aggregation, the method comprising: performing processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell when uplink data occurs in the wireless communication terminal if the uplink resource is not granted to the wireless communication terminal; and performing a random access procedure that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device if the grant of the uplink resource in response to the scheduling request is not performed.
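Claim 10 layers a priority rule on top of the scheduling-request procedure: low-priority uplink data uses the small second cell's SR resource, while high-priority data (or a failed second-cell SR) escalates to the wide-coverage first cell, with RACH as the last resort (claim 11). A minimal decision-logic sketch, assuming hypothetical callables and a fixed retry count not specified in the patent:

```python
def choose_sr_cell(priority: str) -> str:
    # Claim 10 style: low-priority data starts on the second cell's SR
    # resource; high-priority data goes straight to the first cell.
    return "second_cell" if priority == "low" else "first_cell"


def request_grant(priority, send_sr, do_rach, max_attempts=4):
    """send_sr(cell) / do_rach(cell) return True when a grant is received.
    The attempt limit is an illustrative stand-in for the claimed
    'predetermined number of times'."""
    cell = choose_sr_cell(priority)
    for _ in range(max_attempts):
        if send_sr(cell):
            return f"granted_via_sr_{cell}"
    if cell == "second_cell":
        # Second-cell SR failed: escalate to the first cell's SR resource.
        for _ in range(max_attempts):
            if send_sr("first_cell"):
                return "granted_via_sr_first_cell"
    # Claim 11 style last resort: RACH in the first cell.
    return "granted_via_rach" if do_rach("first_cell") else "failed"
```

For example, low-priority data whose second-cell SRs all fail still obtains a grant via the first cell's SR resource before any preamble is transmitted, which mirrors the escalation order the claim describes.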
If an uplink resource is not granted, wireless communication terminal, when the uplink resource occurs in the corresponding wireless communication terminal, transmits a scheduling request using a dedicated scheduling request resource of a second cell that is smaller in cell coverage than a first cell, and if grant of the uplink resource in response to the scheduling request is not performed, the wireless communication terminal transmits a random access preamble and thus performs a random access procedure that requests the grant of the uplink resource from a wireless communication device.1. A wireless communication terminal in which, among two types of cells that make up a component career set that is used in career aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a DSR management section configured to, when uplink data occurs in the wireless communication terminal, perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if the uplink resource is not granted to the wireless communication terminal; and a RACH management section configured to perform a random access procedure that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if the grant of the uplink resource in response to the scheduling request is not performed. 2. 
The wireless communication terminal according to claim 1, wherein the RACH management section is configured to perform the random access procedure in the second cell if the uplink resource is not granted although the scheduling request is transmitted, and is configured to perform the random access procedure in the first cell if the random access procedure in the second cell fails. 3. The wireless communication terminal according to claim 1, wherein the RACH management section is configured to perform the random access procedure in the first cell if the uplink resource is not granted although the scheduling request is transmitted. 4. The wireless communication terminal according to claim 1, further comprising: a priority determination section configured to determine whether the uplink data is of high priority or of low priority, using a predetermined determination method, wherein the RACH management section is configured to perform the random access procedure in the first cell if it is determined that the uplink data is of high priority and perform the random access procedure in the second cell if it is determined that the uplink data is of low priority. 5. The wireless communication terminal according to claim 4, wherein the RACH management section is configured to perform the random access procedure in the first cell if the random access procedure in the second cell fails. 6. The wireless communication terminal according to claim 1, wherein the case where the uplink resource is not granted although the scheduling request is transmitted is a case where the uplink resource is not granted to the wireless communication terminal although the DSR management transmits the scheduling request, a predetermined number of times, to the wireless communication device. 7. 
A wireless communication terminal in which among two types of cells that make up a component career set that is used in career aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a comparison section configured to, when uplink data occurs in the wireless communication terminal, compare a path loss value that is present when a scheduling request is transmitted to the wireless communication device using the dedicated scheduling request resource of the second cell, with a threshold, if the uplink resource is not granted to the wireless communication terminal; a DSR management section configured to perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if the path loss value is at the threshold or above; and a RACH management section configured to perform a random access procedure in the first cell that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if the path loss value is below the threshold. 8. 
A wireless communication terminal in which among two types of cells that make up a component career set that is used in career aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a priority determination section configured to, when uplink data occurs in the wireless communication terminal, determine whether the uplink data is of high priority or of low priority using a predetermined determination method, if the uplink resource is not granted to the wireless communication terminal; a DSR management section configured to perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if it is determined that the uplink data is of low priority; and a RACH management section configured to perform a random access procedure in the first cell that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if it is determined that the uplink data is of high priority. 9. The wireless communication terminal according to claim 7, wherein the RACH management section is configured to perform the random access procedure in the first cell if the uplink resource is not granted to the wireless communication terminal although the DSR management transmits the scheduling request, a predetermined number of times, to the wireless communication device. 10. 
A wireless communication terminal in which if among two types of cells that make up a component career set that is used in career aggregation, a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, the wireless communication terminal comprising: a priority determination section configured to, when uplink data occurs in the wireless communication terminal, determine whether the uplink data is of high priority or of low priority using a predetermined determination method, if the uplink resource is not granted to the wireless communication terminal; and a DSR management section configured to perform processing that requests the grant of the uplink resource from the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the first cell or the second cell, wherein the DSR management section is configured to transmit the scheduling request using the dedicated scheduling request resource of the second cell if it is determined that the uplink data is of low priority, and the DSR management section is configured to transmit the scheduling request using the dedicated scheduling request resource of the first cell if it is determined that the uplink data is of high priority or if the uplink source is not granted although the scheduling request is transmitted in the second cell. 11. 
The wireless communication terminal according to claim 10, further comprising: a RACH management section configured to perform a random access procedure in the first cell that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device if the uplink resource is not granted although the scheduling request is transmitted in the first cell. 12. The wireless communication terminal according to claim 10, wherein the case where the uplink resource is not granted although the scheduling request is transmitted is a case where the uplink resource is not granted to the wireless communication terminal although the DSR management section transmits the scheduling request, a predetermined number of times, to the wireless communication device. 13. A wireless communication device, capable of communicating with a wireless communication terminal which, among two types of cells that make up a component carrier set that is used in carrier aggregation, provides a second cell that is smaller in cell coverage than a first cell, wherein the wireless communication device is configured to transmit dedicated control information that includes configuration information relating to a dedicated scheduling request resource of the second cell that is available to the wireless communication terminal, to the wireless communication terminal. 14. The wireless communication device according to claim 13, wherein the dedicated control information includes permission information that gives an instruction to start a random access procedure in which the wireless communication terminal plays a leading role in the second cell in order to obtain grant of an uplink resource. 15. 
A wireless communication system comprising: a wireless communication device, capable of communicating with a wireless communication terminal which, among two types of cells that make up a component carrier set that is used in carrier aggregation, provides a second cell that is smaller in cell coverage than a first cell; and a wireless communication terminal in which a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to the wireless communication device is set for the second cell, wherein the wireless communication device is configured to transmit dedicated control information that includes configuration information relating to a dedicated scheduling request resource of the second cell that is available to the wireless communication terminal, to the wireless communication terminal, and wherein the wireless communication terminal includes a DSR management section configured to, when uplink data occurs in the wireless communication terminal, perform processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell, if the uplink resource is not granted to the wireless communication terminal, and a RACH management section configured to perform a random access procedure that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device, if the grant of the uplink resource in response to the scheduling request is not performed. 16. 
A method of processing an uplink resource request, for use in a wireless communication terminal in which a dedicated scheduling request resource that is used when transmitting a scheduling request for requesting grant of an uplink resource to a wireless communication device that is capable of communicating with the wireless communication terminal is set for a second cell that is smaller in cell coverage than a first cell, among two types of cells that make up a component carrier set that is used in carrier aggregation, the method comprising: performing processing that requests the grant of the uplink resource to the wireless communication device by transmitting the scheduling request to the wireless communication device using the dedicated scheduling request resource of the second cell when uplink data occurs in the wireless communication terminal if the uplink resource is not granted to the wireless communication terminal; and performing a random access procedure that requests the grant of the uplink resource to the wireless communication device by transmitting a random access preamble to the wireless communication device if the grant of the uplink resource in response to the scheduling request is not performed.
2,400
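The claims above route uplink resource requests by priority: low-priority data triggers a scheduling request (SR) on the dedicated resource of the small second cell, while high-priority data, or an SR that goes unanswered a predetermined number of times, falls back to a random access procedure (RACH) in the first cell. A minimal sketch of that decision flow (the function names, cell labels, and retry limit are illustrative assumptions, not taken from the claims):

```python
MAX_SR_ATTEMPTS = 4  # the claims' "predetermined number of times" (assumed value)

def request_uplink_grant(priority, send_sr, start_rach):
    """Route an uplink resource request per the claimed priority rules.

    send_sr(cell) -> bool: transmit an SR on the dedicated resource of
    `cell` and report whether a grant arrived.
    start_rach(cell): begin a random access procedure by transmitting a
    random access preamble in `cell`.
    """
    if priority == "high":
        start_rach("first_cell")          # high priority: RACH in the first cell
        return "rach"
    for _ in range(MAX_SR_ATTEMPTS):      # low priority: dedicated SR, second cell
        if send_sr("second_cell"):
            return "sr"
    start_rach("first_cell")              # SR retries exhausted: fall back to RACH
    return "rach"

# Toy demo: grants never arrive, so a low-priority request ends in RACH.
result = request_uplink_grant("low",
                              send_sr=lambda cell: False,
                              start_rach=lambda cell: None)
print(result)  # rach
```

The same skeleton covers the claim 10 variant by swapping `start_rach` for an SR on the first cell's dedicated resource.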
8,126
8,126
14,939,089
2,488
Disclosed embodiments relate to systems and methods for locating, measuring, counting or aiding in the handling of drill pipes 106. The system 100 comprises at least one camera 102 capable of gathering visual data 150 regarding detecting, localizing or both, pipes 106, roughnecks 116, elevators 118 and combinations thereof. The system 100 further comprises a processor 110 and a logging system 114 for recording the gathered visual data 150. The method 200 comprises acquiring visual data 150 using a camera 102, analyzing the acquired data 150, and recording the acquired data 150.
1. A system for locating, measuring, counting, aiding or adjusting the handling of drill pipes, the system comprising: at least one camera, said camera operably connected to at least one processor wherein said camera is capable of gathering visual data regarding detecting, measuring, localizing, or any combination thereof, components selected from pipes, roughnecks, elevators, and combinations thereof and transmitting the data to the processor; the processor configured to analyze the visual data, the processor operably connected to the drill pipe elevator and configured to adjust, alter, or halt elevator or roughneck operations when the visual data indicates a scenario within or outside of a pre-determined set of conditions; and, at least one logging system connected to said processor for recording said data. 2. The system of claim 1, further comprising at least one display system for displaying the collected or analyzed data. 3. The system of claim 1, wherein said camera comprises said processor. 4. The system of claim 1, further comprising an alarm for alerting staff to the occurrence of a pre-determined condition. 5. A method for locating, measuring, counting or aiding in the handling of drill pipes, the method comprising: acquiring visual data from at least one camera, analyzing said visual data, recording said analyzed data; and disrupting elevator operations in response to the occurrence of a pre-determined condition. 6. The method of claim 5, further comprising displaying the acquired, analyzed, or recorded data. 7. The method of claim 5, further comprising alerting staff to the occurrence of a pre-determined condition. 8. 
A system for assisting in the handling of drill pipe segments comprising: a well-bore, wherein the well-bore is being worked by a drill-string, said drill-string comprising a plurality of drill pipe segments; at least one camera configured to observe the addition or subtraction of drill pipe segments to the drill-string and gathering visual data, said camera operably connected to at least one processor wherein the processor is capable of analyzing the visual data, the processor operably connected to the elevator and configured to adjust, alter, or halt elevator or roughneck operations in response to a pre-determined condition. 9. The system of claim 8, further comprising a logging system connected to said processor. 10. The system of claim 8, further comprising at least one display system for displaying the collected or analyzed data. 11. The system of claim 8, wherein the at least one camera comprises the processor. 12. The system of claim 8, further comprising an alarm for alerting staff to the occurrence of a pre-determined condition.
Disclosed embodiments relate to systems and methods for locating, measuring, counting or aiding in the handling of drill pipes 106. The system 100 comprises at least one camera 102 capable of gathering visual data 150 regarding detecting, localizing or both, pipes 106, roughnecks 116, elevators 118 and combinations thereof. The system 100 further comprises a processor 110 and a logging system 114 for recording the gathered visual data 150. The method 200 comprises acquiring visual data 150 using a camera 102, analyzing the acquired data 150, and recording the acquired data 150.1. A system for locating, measuring, counting, aiding or adjusting the handling of drill pipes, the system comprising: at least one camera, said camera operably connected to at least one processor wherein said camera is capable of gathering visual data regarding detecting, measuring, localizing, or any combination thereof, components selected from pipes, roughnecks, elevators, and combinations thereof and transmitting the data to the processor; the processor configured to analyze the visual data, the processor operably connected to the drill pipe elevator and configured to adjust, alter, or halt elevator or roughneck operations when the visual data indicates a scenario within or outside of a pre-determined set of conditions; and, at least one logging system connected to said processor for recording said data. 2. The system of claim 1, further comprising at least one display system for displaying the collected or analyzed data. 3. The system of claim 1, wherein said camera comprises said processor. 4. The system of claim 1, further comprising an alarm for alerting staff to the occurrence of a pre-determined condition. 5. 
A method for locating, measuring, counting or aiding in the handling of drill pipes, the method comprising: acquiring visual data from at least one camera, analyzing said visual data, recording said analyzed data; and disrupting elevator operations in response to the occurrence of a pre-determined condition. 6. The method of claim 5, further comprising displaying the acquired, analyzed, or recorded data. 7. The method of claim 5, further comprising alerting staff to the occurrence of a pre-determined condition. 8. A system for assisting in the handling of drill pipe segments comprising: a well-bore, wherein the well-bore is being worked by a drill-string, said drill-string comprising a plurality of drill pipe segments; at least one camera configured to observe the addition or subtraction of drill pipe segments to the drill-string and gathering visual data, said camera operably connected to at least one processor wherein the processor is capable of analyzing the visual data, the processor operably connected to the elevator and configured to adjust, alter, or halt elevator or roughneck operations in response to a pre-determined condition. 9. The system of claim 8, further comprising a logging system connected to said processor. 10. The system of claim 8, further comprising at least one display system for displaying the collected or analyzed data. 11. The system of claim 8, wherein the at least one camera comprises the processor. 12. The system of claim 8, further comprising an alarm for alerting staff to the occurrence of a pre-determined condition.
2,400
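The drill-pipe claims above describe a sense-analyze-act loop: a camera gathers visual data, a processor analyzes it against a pre-determined set of conditions, a logging system records the result, and elevator operations are disrupted (with staff alerted) when a condition occurs. A minimal sketch of that loop, with every name and data shape assumed for illustration only:

```python
def monitor_pipe_handling(frames, analyze, halt_elevator, alert_staff):
    """Analyze each camera frame, log the reading, and disrupt elevator
    operations when a pre-determined condition is met."""
    log = []                              # stands in for the logging system
    for frame in frames:
        reading = analyze(frame)          # e.g. pipes located/measured/counted
        log.append(reading)               # record the analyzed data
        if reading["condition_met"]:      # pre-determined condition
            halt_elevator()               # adjust/alter/halt elevator operations
            alert_staff()                 # alarm for staff
            return "halted", log
    return "completed", log

# Toy demo: the third frame shows a missing pipe and trips the condition.
frames = [{"pipes": 3}, {"pipes": 3}, {"pipes": 2}]
status, log = monitor_pipe_handling(
    frames,
    analyze=lambda f: {"pipes": f["pipes"], "condition_met": f["pipes"] < 3},
    halt_elevator=lambda: None,
    alert_staff=lambda: None)
print(status, len(log))  # halted 3
```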
8,127
8,127
15,078,912
2,491
A Domain Name System (DNS) provider that is not a registrar of a domain name may nonetheless request a registry (possibly via an API request from the registrar to the registry, or via a call directly to the registry) to alter a Delegation Signer (DS) record in a DNS parent zone or other data controlled by the registry. The registry preferably confirms that the DNS provider has control over a nameserver for the domain name. Using Public Key Infrastructure (PKI), the DNS provider may sign the request with a private key and store the public key in a location that confirms the DNS provider has control over the domain name or over the nameservers for the domain name. After successfully confirming the DNS provider, the registrar or registry may change the DS record so that the domain name supports Domain Name System Security Extensions (DNSSEC) or update other data with the registry.
1. A method, comprising the steps of: receiving by a Domain Name System (DNS) provider comprising a nameserver a request for a domain name registered to a registrant to support Domain Name System Security Extensions (DNSSEC), wherein the DNS provider is not a registrar of the domain name; signing with a private key of the DNS provider an instruction to update a Delegation Signer (DS) record in a DNS parent zone controlled by a registry to enable the domain name to support DNSSEC; and transmitting by the DNS provider over an Application Programming Interface (API) the signed instruction to the registry, wherein the registry is configured to verify with a public key of the DNS provider the signed instruction and the registry is configured upon verifying the signature to update the DS record in the DNS parent zone to enable the domain name to support DNSSEC. 2. The method of claim 1, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain name. 3. The method of claim 2, further comprising the step of: reading by the registry the public key in the DNS zone file for the domain name. 4. The method of claim 1, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain of the nameserver. 5. The method of claim 4, further comprising the step of: reading by the registry the public key in the DNS zone file for the parent domain of the nameserver of the DNS provider. 6. The method of claim 1, further comprising the step of: receiving by the DNS provider a digital certificate from the registrar of the domain name, wherein the digital certificate comprises the private key and the public key. 7. The method of claim 1, wherein the public key and the private key are associated and used only with the domain name and the public key and the private key are not associated or used with any other domain names. 8. 
A method, comprising the steps of: receiving by a Domain Name System (DNS) provider comprising a nameserver a request for a domain name registered to a registrant to support Domain Name System Security Extensions (DNSSEC), wherein the DNS provider is not a registrar of the domain name; signing with a private key of the DNS provider a first instruction to update a Delegation Signer (DS) record in a DNS parent zone controlled by a registry to enable the domain name to support DNSSEC; and transmitting by the DNS provider over an Application Programming Interface (API) the first instruction to the registrar, wherein the registrar is configured to verify with a public key of the DNS provider the first instruction and the registrar is configured upon verifying the first instruction was encrypted by the DNS provider to transmit a second instruction to the registry to update the DS record in the DNS parent zone to enable the domain name to support DNSSEC. 9. The method of claim 8, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain name. 10. The method of claim 9, further comprising the step of: reading by the registrar the public key in the DNS zone file for the domain name. 11. The method of claim 8, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the nameserver of the domain. 12. The method of claim 11, further comprising the step of: reading by the registrar the public key in the DNS zone file for the nameserver of the domain. 13. The method of claim 8, further comprising the step of: receiving by the DNS provider a digital certificate from the registrar of the domain name, wherein the digital certificate comprises the private key and the public key. 14. The method of claim 8, wherein the public key and the private key are associated and used only with the domain name and the public key and the private key are not associated or used with any other domain names. 15. 
A method, comprising the steps of: registering by a registrar a domain name to a registrant; storing by a Domain Name System (DNS) provider, that is not the registrar of the domain name, the domain name in a nameserver controlled by the DNS provider; signing with a private key of the DNS provider a request to update a Delegation Signer (DS) record in a DNS root zone controlled by a registry to enable the domain name to support Domain Name System Security Extensions (DNSSEC); transmitting by the DNS provider over an Application Programming Interface (API) the signed request to either the registrar or the registry; verifying by the registry or the registrar with a public key of the DNS provider the signed request; verifying by the registry or the registrar that the signed request was signed by the DNS provider; and upon the registry or the registrar verifying the request was signed by the DNS provider, updating by the registry the DS record in the DNS root zone to enable the domain name to support DNSSEC. 16. The method of claim 15, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain name. 17. The method of claim 16, further comprising the step of: reading by the registry or the registrar the public key in the DNS zone file for the domain name. 18. The method of claim 15, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the nameserver of the domain. 19. The method of claim 18, further comprising the step of: reading by the registry or the registrar the public key in the DNS zone file of the nameserver of the domain. 20. The method of claim 15, further comprising the step of: receiving by the DNS provider a digital certificate from the registrar of the domain name, wherein the digital certificate comprises the private key and the public key.
A Domain Name System (DNS) provider that is not a registrar of a domain name may nonetheless request a registry (possibly via an API request from the registrar to the registry, or via a call directly to the registry) to alter a Delegation Signer (DS) record in a DNS parent zone or other data controlled by the registry. The registry preferably confirms that the DNS provider has control over a nameserver for the domain name. Using Public Key Infrastructure (PKI), the DNS provider may sign the request with a private key and store the public key in a location that confirms the DNS provider has control over the domain name or over the nameservers for the domain name. After successfully confirming the DNS provider, the registrar or registry may change the DS record so that the domain name supports Domain Name System Security Extensions (DNSSEC) or update other data with the registry.1. A method, comprising the steps of: receiving by a Domain Name System (DNS) provider comprising a nameserver a request for a domain name registered to a registrant to support Domain Name System Security Extensions (DNSSEC), wherein the DNS provider is not a registrar of the domain name; signing with a private key of the DNS provider an instruction to update a Delegation Signer (DS) record in a DNS parent zone controlled by a registry to enable the domain name to support DNSSEC; and transmitting by the DNS provider over an Application Programming Interface (API) the signed instruction to the registry, wherein the registry is configured to verify with a public key of the DNS provider the signed instruction and the registry is configured upon verifying the signature to update the DS record in the DNS parent zone to enable the domain name to support DNSSEC. 2. The method of claim 1, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain name. 3. 
The method of claim 2, further comprising the step of: reading by the registry the public key in the DNS zone file for the domain name. 4. The method of claim 1, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain of the nameserver. 5. The method of claim 4, further comprising the step of: reading by the registry the public key in the DNS zone file for the parent domain of the nameserver of the DNS provider. 6. The method of claim 1, further comprising the step of: receiving by the DNS provider a digital certificate from the registrar of the domain name, wherein the digital certificate comprises the private key and the public key. 7. The method of claim 1, wherein the public key and the private key are associated and used only with the domain name and the public key and the private key are not associated or used with any other domain names. 8. A method, comprising the steps of: receiving by a Domain Name System (DNS) provider comprising a nameserver a request for a domain name registered to a registrant to support Domain Name System Security Extensions (DNSSEC), wherein the DNS provider is not a registrar of the domain name; signing with a private key of the DNS provider a first instruction to update a Delegation Signer (DS) record in a DNS parent zone controlled by a registry to enable the domain name to support DNSSEC; and transmitting by the DNS provider over an Application Programming Interface (API) the first instruction to the registrar, wherein the registrar is configured to verify with a public key of the DNS provider the first instruction and the registrar is configured upon verifying the first instruction was encrypted by the DNS provider to transmit a second instruction to the registry to update the DS record in the DNS parent zone to enable the domain name to support DNSSEC. 9. 
The method of claim 8, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain name. 10. The method of claim 9, further comprising the step of: reading by the registrar the public key in the DNS zone file for the domain name. 11. The method of claim 8, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the nameserver of the domain. 12. The method of claim 11, further comprising the step of: reading by the registrar the public key in the DNS zone file for the nameserver of the domain. 13. The method of claim 8, further comprising the step of: receiving by the DNS provider a digital certificate from the registrar of the domain name, wherein the digital certificate comprises the private key and the public key. 14. The method of claim 8, wherein the public key and the private key are associated and used only with the domain name and the public key and the private key are not associated or used with any other domain names. 15. 
A method, comprising the steps of: registering by a registrar a domain name to a registrant; storing by a Domain Name System (DNS) provider, that is not the registrar of the domain name, the domain name in a nameserver controlled by the DNS provider; signing with a private key of the DNS provider a request to update a Delegation Signer (DS) record in a DNS root zone controlled by a registry to enable the domain name to support Domain Name System Security Extensions (DNSSEC); transmitting by the DNS provider over an Application Programming Interface (API) the signed request to either the registrar or the registry; verifying by the registry or the registrar with a public key of the DNS provider the signed request; verifying by the registry or the registrar that the signed request was signed by the DNS provider; and upon the registry or the registrar verifying the request was signed by the DNS provider, updating by the registry the DS record in the DNS root zone to enable the domain name to support DNSSEC. 16. The method of claim 15, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the domain name. 17. The method of claim 16, further comprising the step of: reading by the registry or the registrar the public key in the DNS zone file for the domain name. 18. The method of claim 15, further comprising the step of: storing by the DNS provider the public key in a DNS zone file for the nameserver of the domain. 19. The method of claim 18, further comprising the step of: reading by the registry or the registrar the public key in the DNS zone file of the nameserver of the domain. 20. The method of claim 15, further comprising the step of: receiving by the DNS provider a digital certificate from the registrar of the domain name, wherein the digital certificate comprises the private key and the public key.
2,400
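The DNSSEC claims above follow a standard PKI pattern: the DNS provider signs the DS-update request with its private key, publishes the public key in a location proving control of the zone, and the registry (or registrar) verifies the signature before changing the DS record. The sketch below shows only that sign-then-verify flow, using textbook RSA with deliberately tiny toy parameters; a real system would use a vetted cryptographic library with proper key sizes and padding, and the request format shown is invented for illustration.

```python
import hashlib

# Toy textbook-RSA keypair (illustrative only; n is far too small for real use).
p, q = 61, 53
n = p * q        # modulus, 3233
e = 17           # public exponent (published, e.g., in the provider's zone file)
d = 2753         # private exponent: e*d = 1 (mod lcm(p-1, q-1))

def sign(request: bytes) -> int:
    """DNS provider: hash the DS-update request and apply the private key."""
    digest = int.from_bytes(hashlib.sha256(request).digest(), "big") % n
    return pow(digest, d, n)

def verify(request: bytes, signature: int) -> bool:
    """Registry/registrar: recompute the hash and check it with the public key."""
    digest = int.from_bytes(hashlib.sha256(request).digest(), "big") % n
    return pow(signature, e, n) == digest

ds_records = {}                                    # stands in for the parent zone
request = b"update-ds example.com 12345 8 2 ABCDEF"
sig = sign(request)
if verify(request, sig):                           # only then touch the DS record
    ds_records["example.com"] = "12345 8 2 ABCDEF"
print("example.com" in ds_records)  # True
```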
8,128
8,128
14,546,091
2,432
A method, system and computer-usable medium are disclosed for producing a digital identifier. A set of design elements are selected for inclusion in the digital identifier, followed by the selection of an associated digital identifier template and a set of user credentials. The selected design elements, template and user credentials are then used to produce the digital identifier.
1. A computer-implemented method for producing a digital identifier, comprising: responsive to user input, selecting a set of design elements for inclusion in the digital identifier; responsive to user input, selecting a template for placement of the design elements within the digital identifier, the template comprising a template name; responsive to user input, selecting a set of user credentials for inclusion in the digital identifier; and using the selected design elements, template and user credentials to produce the digital identifier. 2. The method of claim 1, wherein the set of design elements comprises at least one member of the set of: a set of graphical design elements; and a set of textual design elements. 3. The method of claim 2, wherein the set of graphical design elements comprises at least one member of the set of: a digitized photograph of an individual associated with the digital identifier; a digitized signature of an individual associated with the digital identifier; a digitized glyph; a digitized barcode; a digitized security guilloche; and a digitized background image. 4. The method of claim 3, wherein one of the set of graphical design elements is captured by a user device. 5. The method of claim 2, wherein the set of textual design elements comprises at least one member of the set of: the name of the individual; the address of the individual; the member status of the individual; the employee identifier (ID) of the individual; the issuance date of the digital identifier; the issuance time of the digital identifier; the expiry date of the digital identifier; and a list of privileges associated with the individual. 6. The method of claim 1, wherein at least one of the digital identifier design elements is secured to prevent unauthorized access to the digital identifier. 7. 
A system comprising: a processor; a data bus coupled to the processor; and a computer-usable medium embodying computer program code, the computer-usable medium being coupled to the data bus, the computer program code used for producing a digital identifier and comprising instructions executable by the processor and configured for: responsive to user input, selecting a set of design elements for inclusion in the digital identifier; responsive to user input, selecting a template for placement of the design elements within the digital identifier, the template comprising a template name; responsive to user input, selecting a set of user credentials for inclusion in the digital identifier; and using the selected design elements, template and user credentials to produce the digital identifier. 8. The system of claim 7, wherein the set of design elements comprises at least one member of the set of: a set of graphical design elements; and a set of textual design elements. 9. The system of claim 8, wherein the set of graphical design elements comprises at least one member of the set of: a digitized photograph of an individual associated with the digital identifier; a digitized signature of an individual associated with the digital identifier; a digitized glyph; a digitized barcode; a digitized security guilloche; and a digitized background image. 10. The system of claim 9, wherein one of the set of graphical design elements is captured by a user device. 11. The system of claim 8, wherein the set of textual design elements comprises at least one member of the set of: the name of the individual; the address of the individual; the member status of the individual; the employee identifier (ID) of the individual; the issuance date of the digital identifier; the issuance time of the digital identifier; the expiry date of the digital identifier; and a list of privileges associated with the individual. 12. 
The system of claim 7, wherein: at least one of the digital identifier design elements is secured to prevent unauthorized access to the digital identifier. 13. A non-transitory, computer-readable storage medium embodying computer program code, the computer program code comprising computer executable instructions configured for: responsive to user input, selecting a set of design elements for inclusion in the digital identifier; responsive to user input, selecting a template for placement of the design elements within the digital identifier, the template comprising a template name; responsive to user input, selecting a set of user credentials for inclusion in the digital identifier; and using the selected design elements, template and user credentials to produce the digital identifier. 14. The non-transitory, computer-readable storage medium of claim 13, wherein the set of design elements comprises at least one member of the set of: a set of graphical design elements; and a set of textual design elements. 15. The non-transitory, computer-readable storage medium of claim 14, wherein the set of graphical design elements comprises at least one member of the set of: a digitized photograph of an individual associated with the digital identifier; a digitized signature of an individual associated with the digital identifier; a digitized glyph; a digitized barcode; a digitized security guilloche; and a digitized background image. 16. The non-transitory, computer-readable storage medium of claim 15, wherein one of the set of graphical design elements is captured by a user device. 17. 
The non-transitory, computer-readable storage medium of claim 14, wherein the set of textual design elements comprises at least one member of the set of: the name of the individual; the address of the individual; the member status of the individual; the employee identifier (ID) of the individual; the issuance date of the digital identifier; the issuance time of the digital identifier; the expiry date of the digital identifier; and a list of privileges associated with the individual. 18. The non-transitory, computer-readable storage medium of claim 13, wherein: at least one of the digital identifier design elements is secured to prevent unauthorized access to the digital identifier. 19. The non-transitory, computer-readable storage medium of claim 13, wherein the computer executable instructions are deployable to a client system from a server system at a remote location. 20. The non-transitory, computer-readable storage medium of claim 13, wherein the computer executable instructions are provided by a service provider to a user on an on-demand basis.
A method, system and computer-usable medium are disclosed for producing a digital identifier. A set of design elements are selected for inclusion in the digital identifier, followed by the selection of an associated digital identifier template and a set of user credentials. The selected design elements, template and user credentials are then used to produce the digital identifier.1. A computer-implemented method for producing a digital identifier, comprising: responsive to user input, selecting a set of design elements for inclusion in the digital identifier; responsive to user input, selecting a template for placement of the design elements within the digital identifier, the template comprising a template name; responsive to user input, selecting a set of user credentials for inclusion in the digital identifier; and using the selected design elements, template and user credentials to produce the digital identifier. 2. The method of claim 1, wherein the set of design elements comprises at least one member of the set of: a set of graphical design elements; and a set of textual design elements. 3. The method of claim 2, wherein the set of graphical design elements comprises at least one member of the set of: a digitized photograph of an individual associated with the digital identifier; a digitized signature of an individual associated with the digital identifier; a digitized glyph; a digitized barcode; a digitized security guilloche; and a digitized background image. 4. The method of claim 3, wherein one of the set of graphical design elements is captured by a user device. 5. 
The method of claim 2, wherein the set of textual design elements comprises at least one member of the set of: the name of the individual; the address of the individual; the member status of the individual; the employee identifier (ID) of the individual; the issuance date of the digital identifier; the issuance time of the digital identifier; the expiry date of the digital identifier; and a list of privileges associated with the individual. 6. The method of claim 1, wherein at least one of the digital identifier design elements is secured to prevent unauthorized access to the digital identifier. 7. A system comprising: a processor; a data bus coupled to the processor; and a computer-usable medium embodying computer program code, the computer-usable medium being coupled to the data bus, the computer program code used for producing a digital identifier and comprising instructions executable by the processor and configured for: responsive to user input, selecting a set of design elements for inclusion in the digital identifier; responsive to user input, selecting a template for placement of the design elements within the digital identifier, the template comprising a template name; responsive to user input, selecting a set of user credentials for inclusion in the digital identifier; and using the selected design elements, template and user credentials to produce the digital identifier. 8. The system of claim 7, wherein the set of design elements comprises at least one member of the set of: a set of graphical design elements; and a set of textual design elements. 9. The system of claim 8, wherein the set of graphical design elements comprises at least one member of the set of: a digitized photograph of an individual associated with the digital identifier; a digitized signature of an individual associated with the digital identifier; a digitized glyph; a digitized barcode; a digitized security guilloche; and a digitized background image. 10. 
The system of claim 9, wherein one of the set of graphical design elements is captured by a user device. 11. The system of claim 8, wherein the set of textual design elements comprises at least one member of the set of: the name of the individual; the address of the individual; the member status of the individual; the employee identifier (ID) of the individual; the issuance date of the digital identifier; the issuance time of the digital identifier; the expiry date of the digital identifier; and a list of privileges associated with the individual. 12. The system of claim 7, wherein: at least one of the digital identifier design elements is secured to prevent unauthorized access to the digital identifier. 13. A non-transitory, computer-readable storage medium embodying computer program code, the computer program code comprising computer executable instructions configured for: responsive to user input, selecting a set of design elements for inclusion in a digital identifier; responsive to user input, selecting a template for placement of the design elements within the digital identifier, the template comprising a template name; responsive to user input, selecting a set of user credentials for inclusion in the digital identifier; and using the selected design elements, template and user credentials to produce the digital identifier. 14. The non-transitory, computer-readable storage medium of claim 13, wherein the set of design elements comprises at least one member of the set of: a set of graphical design elements; and a set of textual design elements. 15. 
The non-transitory, computer-readable storage medium of claim 14, wherein the set of graphical design elements comprises at least one member of the set of: a digitized photograph of an individual associated with the digital identifier; a digitized signature of an individual associated with the digital identifier; a digitized glyph; a digitized barcode; a digitized security guilloche; and a digitized background image. 16. The non-transitory, computer-readable storage medium of claim 15, wherein one of the set of graphical design elements is captured by a user device. 17. The non-transitory, computer-readable storage medium of claim 14, wherein the set of textual design elements comprises at least one member of the set of: the name of the individual; the address of the individual; the member status of the individual; the employee identifier (ID) of the individual; the issuance date of the digital identifier; the issuance time of the digital identifier; the expiry date of the digital identifier; and a list of privileges associated with the individual. 18. The non-transitory, computer-readable storage medium of claim 13, wherein: at least one of the digital identifier design element is secured to prevent unauthorized access to the digital identifier. 19. The non-transitory, computer-readable storage medium of claim 13, wherein the computer executable instructions are deployable to a client system from a server system at a remote location. 20. The non-transitory, computer-readable storage medium of claim 13, wherein the computer executable instructions are provided by a service provider to a user on an on-demand basis.
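The independent claims (1, 7, and 13) all recite the same three selections — a set of design elements, a named template governing placement, and a set of user credentials — combined to produce the identifier. A minimal sketch, using hypothetical names and a dictionary-based representation that is not taken from the claims:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Template:
    """A named template; `slots` gives the ordered placement of elements."""
    name: str
    slots: List[str]

def produce_digital_identifier(design_elements: Dict[str, str],
                               template: Template,
                               credentials: Dict[str, str]) -> Dict:
    """Combine the three user selections into one digital identifier record."""
    # Place only the elements the template defines, in the template's order.
    placed = {slot: design_elements[slot]
              for slot in template.slots if slot in design_elements}
    return {"template": template.name,
            "elements": placed,
            "credentials": credentials}
```

Elements the template does not define (e.g. a barcode supplied to a photo-and-name badge template) are simply not placed, which is one way to read "selecting a template for placement of the design elements".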
2,400
8,129
8,129
15,202,320
2,447
A method of operation of a computing system includes: receiving an application message for communicating with a device of a sender with a messaging application installed; and generating a non-application message based on converting the application message dynamically and in real time with a control unit for communicating with the device of a recipient without the messaging application installed.
1. A method of operation of a computing system comprising: receiving an application message for communicating with a device of a sender with a messaging application installed; and generating a non-application message based on converting the application message dynamically and in real time with a control unit for communicating with the device of a recipient without the messaging application installed. 2. The method as claimed in claim 1 further comprising determining a channel availability based on a channel status of a channel for communicating the non-application message. 3. The method as claimed in claim 1 further comprising communicating the non-application message via a channel selected for communicating a message content of the application message to the recipient. 4. The method as claimed in claim 1 further comprising determining a recipient information of the application message based on a channel assigned for receiving the non-application message by the recipient. 5. The method as claimed in claim 1 further comprising updating a mapping table for indicating a channel has been used after selecting the channel. 6. A method of operation of a computing system comprising: receiving a non-application message for communicating with a device of a sender without a messaging application installed; and generating an application message based on converting the non-application message dynamically and in real time with a control unit for communicating with the device of a recipient with the messaging application installed. 7. The method as claimed in claim 6 further comprising determining a channel availability based on a channel status of a channel for communicating the application message. 8. The method as claimed in claim 6 further comprising communicating the application message via a channel selected for communicating a message content of the non-application message to the recipient. 9. 
The method as claimed in claim 6 further comprising determining a recipient information of the non-application message based on a channel assigned for receiving the application message by the recipient. 10. The method as claimed in claim 6 further comprising updating a mapping table for indicating a channel has been used after selecting the channel. 11. A computing system comprising: a communication unit for receiving an application message for communicating with a device of a sender with a messaging application installed; and a control unit, coupled to the communication unit, for generating a non-application message based on converting the application message dynamically and in real time with a control unit for communicating with the device of a recipient without the messaging application installed. 12. The system as claimed in claim 11 wherein the control unit is for determining a channel availability based on a channel status of a channel for communicating the non-application message. 13. The system as claimed in claim 11 wherein the control unit is for communicating the non-application message via a channel selected for communicating a message content of the application message to the recipient. 14. The system as claimed in claim 11 wherein the control unit is for determining a recipient information of the application message based on a channel assigned for receiving the non-application message by the recipient. 15. The system as claimed in claim 11 wherein the control unit is for updating a mapping table for indicating a channel has been used after selecting the channel. 16. 
A computing system comprising: a communication unit for receiving a non-application message for communicating with a device of a sender without a messaging application installed; and a control unit, coupled to the communication unit, for generating an application message based on converting the non-application message dynamically and in real time with a control unit for communicating with the device of a recipient with the messaging application installed. 17. The system as claimed in claim 16 wherein the control unit is for determining a channel availability based on a channel status of a channel for communicating the application message. 18. The system as claimed in claim 16 wherein the control unit is for communicating the application message via a channel selected for communicating a message content of the non-application message to the recipient. 19. The system as claimed in claim 16 wherein the control unit is for determining a recipient information of the non-application message based on a channel assigned for receiving the application message by the recipient. 20. The system as claimed in claim 16 wherein the control unit is for updating a mapping table for indicating a channel has been used after selecting the channel.
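The conversion flow in claims 1-5 (mirrored by the system claims 11-15) — select an available channel, send the message content over it as a non-application message, and record the channel-to-recipient assignment in a mapping table — can be sketched as below. The class name, the dictionary-based mapping table, and the channel status strings are all my own illustrative assumptions, not the claimed implementation:

```python
class MessageGateway:
    """Convert an application message into a plain (non-application) message,
    picking a free channel and recording its use in a mapping table."""

    def __init__(self, channels):
        # Channel status, per claims 2/7: availability is tracked per channel.
        self.channel_status = {ch: "free" for ch in channels}
        # Mapping table, per claims 4/5: channel -> recipient assignment.
        self.mapping_table = {}

    def to_non_application(self, app_message):
        content, recipient = app_message["content"], app_message["recipient"]
        # Determine channel availability based on channel status.
        channel = next((ch for ch, st in self.channel_status.items()
                        if st == "free"), None)
        if channel is None:
            raise RuntimeError("no channel available")
        # Update the mapping table to indicate the channel has been used.
        self.channel_status[channel] = "used"
        self.mapping_table[channel] = recipient
        return {"channel": channel, "text": content}

    def recipient_for(self, channel):
        # Recover recipient information from the channel assigned to them.
        return self.mapping_table[channel]
```

The reverse direction (non-application to application, claims 6-10) would use the same mapping table, looking up the recipient from the channel the inbound message arrived on.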
2,400
8,130
8,130
15,064,529
2,462
Methods and apparatus are provided to support transmissions of control channels with coverage enhancements (CE) to low cost (LC) user equipments (LC/CE UEs) in a narrowband of a system bandwidth. A narrowband for a control channel transmission can depend on a type of information being scheduled for transmission by the control channel.
1. A base station comprising: a transmitter configured to transmit, in a first subframe and within a first narrowband that includes six consecutive resource blocks (RBs) in a downlink (DL) system bandwidth, a physical DL control channel (PDCCH), wherein: each RB includes a number of resource elements (REs) indexed in ascending order in a frequency domain and the six consecutive RBs in the first narrowband are indexed in ascending order according to the indexes of respective REs, when the PDCCH is transmitted over a set of two RBs, the PDCCH is mapped to a first number of control channel elements (CCEs), each CCE from the first number of CCEs includes resource element groups (REGs) from a first RB and from a second RB, and each REG includes a number of REs either from the first RB or from the second RB, when the PDCCH is transmitted over a set of four RBs, the PDCCH is mapped to a second number of CCEs, each CCE from the second number of CCEs includes REGs from a third RB, a fourth RB, a fifth RB, and a sixth RB, and each REG includes a number of REs either from the third RB, or from the fourth RB, or from the fifth RB, or from the sixth RB, and when the PDCCH is transmitted over the six consecutive RBs, the PDCCH is mapped over REs first across the six consecutive RBs, starting from the RE with the lowest index, and then across symbols of the first subframe. 2. The base station of claim 1, wherein: the first RB and the second RB are a first two RBs of the six consecutive RBs, and the third RB, the fourth RB, the fifth RB, and the sixth RB are a last four RBs of the six consecutive RBs. 3. The base station of claim 1, wherein the transmitter is further configured to transmit, in a second subframe and over a set of RBs within a second narrowband that includes six consecutive RBs, a physical downlink shared channel (PDSCH) that is mapped over REs first across the set of RBs, starting from the RE with the lowest index, and then across symbols of the second subframe. 
4. The base station of claim 1, wherein REs used for transmission of a primary broadcast channel, or for transmission of synchronization signals, or for transmission of channel state information reference signals (CSI-RS), or for transmission of common reference signals (CRS), or for transmission of demodulation reference signals (DMRS) are not used for transmission of the PDCCH. 5. The base station of claim 1, wherein the transmitter is further configured to transmit signaling configuring the PDCCH transmission (i) over CCEs only in the set of two RBs or only in the set of four RBs or (ii) over REs in both the set of two RBs and the set of four RBs. 6. A user equipment (UE) comprising: a receiver configured to receive, in a first subframe and within a first narrowband that includes six consecutive resource blocks (RBs) in a downlink (DL) system bandwidth, a physical DL control channel (PDCCH), wherein: each RB includes a number of resource elements (REs) indexed in ascending order in a frequency domain and the six consecutive RBs in the first narrowband are indexed in ascending order according to the indexes of respective REs, when the PDCCH is received over a set of two RBs, the PDCCH is mapped to a first number of control channel elements (CCEs), each CCE from the first number of CCEs includes resource element groups (REGs) from a first RB and from a second RB, and each REG includes a number of REs either from the first RB or from the second RB, when the PDCCH is received over a set of four RBs, the PDCCH is mapped to a second number of CCEs, each CCE from the second number of CCEs includes REGs from a third RB, a fourth RB, a fifth RB, and a sixth RB, and each REG includes a number of REs either from the third RB, or from the fourth RB, or from the fifth RB, or from the sixth RB, and when the PDCCH is received over the six consecutive RBs, the PDCCH is mapped over REs first across the six consecutive RBs, starting from the RE with the lowest index, and then 
across symbols of the first subframe. 7. The UE of claim 6, wherein: the first RB and the second RB are a first two RBs of the six consecutive RBs, and the third RB, the fourth RB, the fifth RB, and the sixth RB are a last four RBs of the six consecutive RBs. 8. The UE of claim 6, wherein the receiver is further configured to receive, in a second subframe and over a set of RBs within a second narrowband that includes six consecutive RBs, a physical downlink shared channel (PDSCH) that is mapped over REs first across the set of RBs, starting from the RE with the lowest index, and then across symbols of the second subframe. 9. The UE of claim 6, wherein REs used for reception of a primary broadcast channel, or for reception of synchronization signals, or for reception of channel state information reference signals (CSI-RS), or for reception of common reference signals (CRS), or for reception of demodulation reference signals (DMRS) are not used for reception of the PDCCH. 10. The UE of claim 6, wherein the receiver is further configured to receive signaling configuring the PDCCH reception (i) over CCEs only in the set of two RBs or only in the set of four RBs or (ii) over REs in both the set of two RBs and the set of four RBs. 11. 
A base station comprising: a transmitter configured to transmit within one or more first narrowbands of six resource blocks (RBs) a physical downlink control channel (PDCCH) scheduling a transmission of a physical downlink shared channel (PDSCH) conveying one or more random access response (RAR) messages, wherein: each RAR message is associated with a random access preamble transmission and includes scheduling information for a transmission of a physical uplink shared channel (PUSCH) in a number of transmission time intervals (TTIs), and the scheduling information includes 20 binary elements when the PDSCH transmission and the PUSCH transmission are according to a first coverage enhancement (CE) mode and includes 12 binary elements when the PDSCH transmission and the PUSCH transmission are according to a second CE mode; and a receiver configured to receive the PUSCH according to the scheduling information. 12. The base station of claim 11, wherein an information element (IE) indicating an adjustment for a PUSCH transmission power, a binary IE indicating request or no request for a channel state information report, and a binary IE indicating zero or non-zero delay for the PUSCH transmission are included in the scheduling information corresponding to the first CE level and are not included in the scheduling information corresponding to the second CE level. 13. The base station of claim 12, wherein: when non-zero delay is indicated for the PUSCH transmission, the PUSCH transmission is delayed by the number of TTIs, and the number of TTIs is larger than one. 14. The base station of claim 11, wherein: the scheduling information includes an information element (IE) indicating one or more second narrowbands for a subsequent PDCCH transmission, and one state of the IE indicates the one or more first narrowbands. 15. 
The base station of claim 11, wherein the transmitter is further configured to transmit in a system information block a set of available CE levels for a random access preamble transmission and an indication for a location of the one or more first narrowbands in a downlink system bandwidth for each CE level from the set of CE levels. 16. A user equipment comprising: a receiver configured to receive within one or more first narrowbands of six resource blocks (RBs) a physical downlink control channel (PDCCH) scheduling a transmission of a physical downlink shared channel (PDSCH) conveying one or more random access response (RAR) messages, wherein: each RAR message is associated with a random access preamble transmission and includes scheduling information for a transmission of a physical uplink shared channel (PUSCH) in a number of transmission time intervals (TTIs), and the scheduling information includes 20 binary elements when the PDSCH transmission and the PUSCH transmission are according to a first coverage enhancement (CE) mode and includes 12 binary elements when the PDSCH transmission and the PUSCH transmission are according to a second CE mode; and a transmitter configured to transmit the PUSCH according to the scheduling information. 17. The UE of claim 16, wherein an information element (IE) indicating an adjustment for a PUSCH transmission power, a binary IE indicating request or no request for a channel state information report, and a binary IE indicating zero or non-zero delay for the PUSCH transmission are included in the scheduling information corresponding to the first CE level and are not included in the scheduling information corresponding to the second CE level. 18. The UE of claim 17, wherein: when non-zero delay is indicated for the PUSCH transmission, the PUSCH transmission is delayed by the number of TTIs, and the number of TTIs is larger than one. 19. 
The UE of claim 16, wherein: the scheduling information includes an information element (IE) indicating one or more second narrowbands for a subsequent PDCCH reception, and one state of the IE indicates the one or more first narrowbands. 20. The UE of claim 16, wherein the receiver is further configured to receive in a system information block a set of available CE levels for a random access preamble transmission and an indication for a location of the one or more first narrowbands in a downlink system bandwidth for each CE level from the set of CE levels.
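The frequency-first mapping recited in claims 1 and 6 — REs filled across the six consecutive RBs starting from the RE with the lowest index, then across symbols of the subframe — can be sketched as below. The function name and the grid representation are my own assumptions; 12 subcarriers per RB matches LTE numerology:

```python
def map_pdcch_symbols(mod_symbols, num_rbs=6, res_per_rb=12):
    """Place modulation symbols onto REs frequency-first: fill all
    subcarriers of the narrowband in ascending index, then advance to
    the next OFDM symbol of the subframe."""
    width = num_rbs * res_per_rb  # 72 subcarriers in a 6-RB narrowband
    grid = {}
    for i, s in enumerate(mod_symbols):
        # Frequency index advances fastest; symbol index advances slowest.
        symbol, subcarrier = divmod(i, width)
        grid[(symbol, subcarrier)] = s
    return grid
```

With 80 input symbols, the first 72 land on OFDM symbol 0 (subcarriers 0-71) and the 73rd wraps to symbol 1, subcarrier 0 — the "first across the six consecutive RBs, then across symbols" order the claims describe.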
Methods and apparatus are provided to support transmissions of control channels with coverage enhancements (CE) to low cost (LC) user equipments (LC/CE UEs) in a narrowband of a system bandwidth. A narrowband for a control channel transmission can depend on a type of information being scheduled for transmission by the control channel.1. A base station comprising: a transmitter configured to transmit, in a first subframe and within a first narrowband that includes six consecutive resource blocks (RBs) in a downlink (DL) system bandwidth, a physical DL control channel (PDCCH), wherein: each RB includes a number of resource elements (REs) indexed in ascending order in a frequency domain and the six consecutive RBs in the first narrowband are indexed in ascending order according to the indexes of respective REs, when the PDCCH is transmitted over a set of two RBs, the PDCCH is mapped to a first number of control channel elements (CCEs), each CCE from the first number of CCEs includes resource elements groups (REGs) from a first RB and from a second RB, and each REG includes a number of REs either from the first RB or from the second RB, when the PDCCH is transmitted over a set of four RBs, the PDCCH is mapped to a second number of CCEs, each CCE from the second number of CCEs includes REGs from a third RB, a fourth RB, a fifth RB, and a sixth RB, and each REG includes a number of REs either from the third RB, or from the fourth RB, or from the fifth RB, or from the sixth RB, and when the PDCCH is transmitted over the six consecutive RBs, the PDCCH is mapped over REs first across the six consecutive RBs, starting from the RE with the lowest index, and then across symbols of the first subframe. 2. The base station of claim 1, wherein: the first RB and the second RB are a first two RBs of the six consecutive RBs, and the third RB, the fourth RB, the fifth RB, and the sixth RB are a last four RBs of the six consecutive RBs. 3. 
The base station of claim 1, wherein the transmitter is further configured to transmit, in a second subframe and over a set of RBs within a second narrowband that includes six consecutive RBs, a physical downlink shared channel (PDSCH) that is mapped over REs first across the set of RBs, starting from the RE with the lowest index, and then across symbols of the second subframe. 4. The base station of claim 1, wherein REs used for transmission of a primary broadcast channel, or for transmission of synchronization signals, or for transmission of channel state information reference signals (CSI-RS), or for transmission of common reference signals (CRS), or for transmission of demodulation reference signals (DMRS) are not used for transmission of the PDCCH. 5. The base station of claim 1, wherein the transmitter is further configured to transmit signaling configuring the PDCCH transmission (i) over CCEs only in the set of two RBs or only in the set of four RBs or (ii) over REs in both the set of two RBs and the set of four RBs. 6. 
A user equipment (UE) comprising: a receiver configured to receive, in a first subframe and within a first narrowband that includes six consecutive resource blocks (RBs) in a downlink (DL) system bandwidth, a physical DL control channel (PDCCH), wherein: each RB includes a number of resource elements (REs) indexed in ascending order in a frequency domain and the six consecutive RBs in the first narrowband are indexed in ascending order according to the indexes of respective REs, when the PDCCH is received over a set of two RBs, the PDCCH is mapped to a first number of control channel elements (CCEs), each CCE from the first number of CCEs includes resource elements groups (REGs) from a first RB and from a second RB, and each REG includes a number of REs either from the first RB or from the second RB, when the PDCCH is received over a set of four RBs, the PDCCH is mapped to a second number of CCEs, each CCE from the second number of CCEs includes REGs from a third RB, a fourth RB, a fifth RB, and a sixth RB, and each REG includes a number of REs either from the third RB, or from the fourth RB, or from the fifth RB, or from the sixth RB, and when the PDCCH is received over the six consecutive RBs, the PDCCH is mapped over REs first across the six consecutive RBs, starting from the RE with the lowest index, and then across symbols of the first subframe. 7. The UE of claim 6, wherein: the first RB and the second RB are a first two RBs of the six consecutive RBs, and the third RB, the fourth RB, the fifth RB, and the sixth RB are a last four RBs of the six consecutive RBs. 8. The UE of claim 6, wherein the receiver is further configured to receive, in a second subframe and over a set of RBs within a second narrowband that includes six consecutive RBs, a physical downlink shared channel (PDSCH) that is mapped over REs first across the set of RBs, starting from the RE with the lowest index, and then across symbols of the second subframe. 9. 
The UE of claim 6, wherein REs used for reception of a primary broadcast channel, or for reception of synchronization signals, or for reception of channel state information reference signals (CSI-RS), or for reception of common reference signals (CRS), or for reception of demodulation reference signals (DMRS) are not used for reception of the PDCCH. 10. The UE of claim 6, wherein the receiver is further configured to receive signaling configuring the PDCCH reception (i) over CCEs only in the set of two RBs or only in the set of four RBs or (ii) over REs in both the set of two RBs and the set of four RBs. 11. A base station comprising: a transmitter configured to transmit within one or more first narrowbands of six resource blocks (RBs) a physical downlink control channel (PDCCH) scheduling a transmission of a physical downlink shared channel (PDSCH) conveying one or more random access response (RAR) messages, wherein: each RAR message is associated with a random access preamble transmission and includes scheduling information for a transmission of a physical uplink shared channel (PUSCH) in a number of transmission time intervals (TTIs), and the scheduling information includes 20 binary elements when the PDSCH transmission and the PUSCH transmission are according to a first coverage enhancement (CE) mode and includes 12 binary elements when the PDSCH transmission and the PUSCH transmission are according to a second CE mode; and a receiver configured to receive the PUSCH according to the scheduling information. 12. The base station of claim 11, wherein an information element (IE) indicating an adjustment for a PUSCH transmission power, a binary IE indicating request or no request for a channel state information report, and a binary IE indicating zero or non-zero delay for the PUSCH transmission are included in the scheduling information corresponding to the first CE level and are not included in the scheduling information corresponding to the second CE level. 13. 
The base station of claim 12, wherein: when non-zero delay is indicated for the PUSCH transmission, the PUSCH transmission is delayed by the number of TTIs, and the number of TTIs is larger than one. 14. The base station of claim 11, wherein: the scheduling information includes an information element (IE) indicating one or more second narrowbands for a subsequent PDCCH transmission, and one state of the IE indicates the one or more first narrowbands. 15. The base station of claim 11, wherein the transmitter is further configured to transmit in a system information block a set of available CE levels for a random access preamble transmission and an indication for a location of the one or more first narrowbands in a downlink system bandwidth for each CE level from the set of CE levels. 16. A user equipment comprising: a receiver configured to receive within one or more first narrowbands of six resource blocks (RBs) a physical downlink control channel (PDCCH) scheduling a transmission of a physical downlink shared channel (PDSCH) conveying one or more random access response (RAR) messages, wherein: each RAR message is associated with a random access preamble transmission and includes scheduling information for a transmission of a physical uplink shared channel (PUSCH) in a number of transmission time intervals (TTIs), and the scheduling information includes 20 binary elements when the PDSCH transmission and the PUSCH transmission are according to a first coverage enhancement (CE) mode and includes 12 binary elements when the PDSCH transmission and the PUSCH transmission are according to a second CE mode; and a transmitter configured to transmit the PUSCH according to the scheduling information. 17. 
The UE of claim 16, wherein an information element (IE) indicating an adjustment for a PUSCH transmission power, a binary IE indicating request or no request for a channel state information report, and a binary IE indicating zero or non-zero delay for the PUSCH transmission are included in the scheduling information corresponding to the first CE level and are not included in the scheduling information corresponding to the second CE level. 18. The UE of claim 17, wherein: when non-zero delay is indicated for the PUSCH transmission, the PUSCH transmission is delayed by the number of TTIs, and the number of TTIs is larger than one. 19. The UE of claim 16, wherein: the scheduling information includes an information element (IE) indicating one or more second narrowbands for a subsequent PDCCH reception, and one state of the IE indicates the one or more first narrowbands. 20. The UE of claim 16, wherein the receiver is further configured to receive in a system information block a set of available CE levels for a random access preamble transmission and an indication for a location of the one or more first narrowbands in a downlink system bandwidth for each CE level from the set of CE levels.
2,400
8,131
8,131
14,858,960
2,416
A data structure for managing user equipment communications in a wireless communication system is presented. In some examples, the data structure may include one or more resource element blocks into which a frequency bandwidth of a downlink channel is divided within a symbol that defines a transmission time interval in a downlink subframe. Furthermore, the data structure may include a control region and a data region within at least one resource element block of the one or more resource element blocks. Additionally, the data structure may include a downlink resource grant, located within the control region, for a user equipment served by the downlink channel. In an additional aspect, a network entity and method for generating the example data structure are provided.
1. A method of managing user equipment (UE) communications in a wireless communication system, comprising: obtaining, at a network entity, user data for transmission to one or more UEs on a downlink channel; determining one or more delivery constraints associated with at least one of the user data and the one or more UEs; and generating, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 2. The method of claim 1, further comprising transmitting the data structure to the one or more UEs. 3. The method of claim 1, further comprising transmitting the user data to the UE according to the downlink resource grant of the data structure. 4. The method of claim 3, further comprising maintaining a HARQ process for retransmission of the user data, the HARQ process having an associated retransmission time that is less than one subframe. 5. The method of claim 3, further comprising determining whether to retransmit the user data within eight symbols. 6. The method of claim 1, wherein obtaining the user data for transmission comprises obtaining the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 7. The method of claim 1, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks. 8. 
The method of claim 1, wherein the data structure further comprises an uplink resource grant, located in the control region, for the UE. 9. The method of claim 1, wherein the control region comprises a number of resource elements, wherein the number of resource elements is based on an aggregation level associated with the UE. 10. The method of claim 1, wherein the downlink resource grant allocates any resource elements outside of the control region of the at least one resource element block to the UE. 11. The method of claim 1, wherein the downlink resource grant allocates resource elements of at least one further resource element block to the UE. 12. The method of claim 1, wherein the control region is positioned within a fixed subset of resource elements within the one or more resource element blocks. 13. The method of claim 1, wherein the data structure further comprises a legacy control region within at least one further symbol of the downlink subframe, wherein the legacy control region includes at least one resource element allocation according to legacy LTE control and data channels. 14. 
An apparatus for managing user equipment (UE) communications in a wireless communication system, comprising: a processor; memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: obtain, at a network entity, user data for transmission to one or more UEs on a downlink channel; determine one or more delivery constraints associated with at least one of the user data and the one or more UEs; and generate, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 15. The apparatus of claim 14, the instructions being executable by the processor to transmit the data structure to the one or more UEs. 16. The apparatus of claim 14, the instructions further being executable by the processor to transmit the user data to the UE according to the downlink resource grant of the data structure. 17. The apparatus of claim 14, wherein the instructions being executable by the processor to obtain the user data for transmission comprise the instructions being executable by the processor to obtain the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 18. The apparatus of claim 14, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks. 19. 
The apparatus of claim 14, wherein the data structure further comprises an uplink resource grant, located in the control region, for the UE. 20. The apparatus of claim 14, wherein the control region comprises a number of resource elements, wherein the number of resource elements is based on an aggregation level associated with the UE. 21. An apparatus for managing user equipment (UE) communications in a wireless communication system, comprising: means for obtaining, at a network entity, user data for transmission to one or more UEs on a downlink channel; means for determining one or more delivery constraints associated with at least one of the user data and the one or more UEs; and means for generating, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 22. The apparatus of claim 21, further comprising means for transmitting the data structure to the one or more UEs. 23. The apparatus of claim 21, further comprising means for transmitting the user data to the UE according to the downlink resource grant of the data structure. 24. The apparatus of claim 21, wherein the means for obtaining comprises means for obtaining the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 25. 
The apparatus of claim 21, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks. 26. A non-transitory computer-readable medium storing computer-executable code for wireless communication, the code comprising instructions executable to: obtain, at a network entity, user data for transmission to one or more UEs on a downlink channel; determine one or more delivery constraints associated with at least one of the user data and the one or more UEs; and generate, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 27. The computer-readable medium of claim 26, wherein the code further comprises instructions executable to transmit the data structure to the one or more UEs. 28. The computer-readable medium of claim 26, wherein the code further comprises instructions executable to transmit the user data to the UE according to the downlink resource grant of the data structure. 29. The computer-readable medium of claim 26, wherein the instructions executable to obtain the user data for transmission comprise instructions executable to obtain the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 30. 
The computer-readable medium of claim 26, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks.
A data structure for managing user equipment communications in a wireless communication system is presented. In some examples, the data structure may include one or more resource element blocks into which a frequency bandwidth of a downlink channel is divided within a symbol that defines a transmission time interval in a downlink subframe. Furthermore, the data structure may include a control region and a data region within at least one resource element block of the one or more resource element blocks. Additionally, the data structure may include a downlink resource grant, located within the control region, for a user equipment served by the downlink channel. In an additional aspect, a network entity and method for generating the example data structure are provided. 1. A method of managing user equipment (UE) communications in a wireless communication system, comprising: obtaining, at a network entity, user data for transmission to one or more UEs on a downlink channel; determining one or more delivery constraints associated with at least one of the user data and the one or more UEs; and generating, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 2. The method of claim 1, further comprising transmitting the data structure to the one or more UEs. 3. The method of claim 1, further comprising transmitting the user data to the UE according to the downlink resource grant of the data structure. 4. 
The method of claim 3, further comprising maintaining a HARQ process for retransmission of the user data, the HARQ process having an associated retransmission time that is less than one subframe. 5. The method of claim 3, further comprising determining whether to retransmit the user data within eight symbols. 6. The method of claim 1, wherein obtaining the user data for transmission comprises obtaining the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 7. The method of claim 1, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks. 8. The method of claim 1, wherein the data structure further comprises an uplink resource grant, located in the control region, for the UE. 9. The method of claim 1, wherein the control region comprises a number of resource elements, wherein the number of resource elements is based on an aggregation level associated with the UE. 10. The method of claim 1, wherein the downlink resource grant allocates any resource elements outside of the control region of the at least one resource element block to the UE. 11. The method of claim 1, wherein the downlink resource grant allocates resource elements of at least one further resource element block to the UE. 12. The method of claim 1, wherein the control region is positioned within a fixed subset of resource elements within the one or more resource element blocks. 13. The method of claim 1, wherein the data structure further comprises a legacy control region within at least one further symbol of the downlink subframe, wherein the legacy control region includes at least one resource element allocation according to legacy LTE control and data channels. 14. 
An apparatus for managing user equipment (UE) communications in a wireless communication system, comprising: a processor; memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: obtain, at a network entity, user data for transmission to one or more UEs on a downlink channel; determine one or more delivery constraints associated with at least one of the user data and the one or more UEs; and generate, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 15. The apparatus of claim 14, the instructions being executable by the processor to transmit the data structure to the one or more UEs. 16. The apparatus of claim 14, the instructions further being executable by the processor to transmit the user data to the UE according to the downlink resource grant of the data structure. 17. The apparatus of claim 14, wherein the instructions being executable by the processor to obtain the user data for transmission comprise the instructions being executable by the processor to obtain the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 18. The apparatus of claim 14, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks. 19. 
The apparatus of claim 14, wherein the data structure further comprises an uplink resource grant, located in the control region, for the UE. 20. The apparatus of claim 14, wherein the control region comprises a number of resource elements, wherein the number of resource elements is based on an aggregation level associated with the UE. 21. An apparatus for managing user equipment (UE) communications in a wireless communication system, comprising: means for obtaining, at a network entity, user data for transmission to one or more UEs on a downlink channel; means for determining one or more delivery constraints associated with at least one of the user data and the one or more UEs; and means for generating, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 22. The apparatus of claim 21, further comprising means for transmitting the data structure to the one or more UEs. 23. The apparatus of claim 21, further comprising means for transmitting the user data to the UE according to the downlink resource grant of the data structure. 24. The apparatus of claim 21, wherein the means for obtaining comprises means for obtaining the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 25. 
The apparatus of claim 21, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks. 26. A non-transitory computer-readable medium storing computer-executable code for wireless communication, the code comprising instructions executable to: obtain, at a network entity, user data for transmission to one or more UEs on a downlink channel; determine one or more delivery constraints associated with at least one of the user data and the one or more UEs; and generate, based on the user data for transmission and the one or more delivery constraints, a data structure for allocating downlink channel resources for transmission of the user data, wherein the data structure comprises: one or more resource element blocks into which a frequency bandwidth is divided within a symbol that defines a transmission time interval (TTI) in a downlink subframe; a control region and a data region within at least one resource element block of the one or more resource element blocks; and a downlink resource grant, located within the control region, for a UE of the one or more UEs served by the downlink channel. 27. The computer-readable medium of claim 26, wherein the code further comprises instructions executable to transmit the data structure to the one or more UEs. 28. The computer-readable medium of claim 26, wherein the code further comprises instructions executable to transmit the user data to the UE according to the downlink resource grant of the data structure. 29. The computer-readable medium of claim 26, wherein the instructions executable to obtain the user data for transmission comprise instructions executable to obtain the user data from a second network entity via a data flow or from a transmit data queue associated with the network entity. 30. 
The computer-readable medium of claim 26, wherein the downlink resource grant includes an indication of a position at which the data region is located within a resource block of the one or more resource element blocks.
2,400
8,132
8,132
14,049,801
2,441
A system and method for dynamically loading a webpage are provided. The system for dynamically loading a webpage includes: a webpage qualifier module configured to receive a user requested webpage from a host server and further configured to identify a plurality of components of the webpage; a component selection module configured to determine a loading method for each component, wherein the loading technique may be either in-line loading or adaptive loading; an in-line loading module configured to load components in-line; and an adaptive loading module configured to determine a loading hierarchy and load components based on the loading hierarchy. The method for dynamically loading a webpage includes: identifying a plurality of components of the webpage; determining a load method for each component wherein the loading technique may be in-line loading or adaptive loading; loading components determined to be in-line components using in-line loading; determining a loading hierarchy for components determined to be adaptive components; and loading adaptive components using adaptive loading based on the loading hierarchy.
1. A system for dynamically loading a webpage comprising: a webpage qualifier module configured to receive a user requested webpage from a host server and further configured to identify a plurality of components of the webpage; a component selection module configured to determine a loading method for each component, wherein the loading technique may be either in-line loading or adaptive loading; an in-line loading module configured to load components in-line; and an adaptive loading module configured to determine a loading hierarchy and load components based on the loading hierarchy. 2. The system of claim 1 wherein the adaptive loading module further comprises an indicator module configured to indicate to a user that a component is not fully loaded. 3. The system of claim 1 wherein the component selection module determines the loading technique based at least in part on user preferences. 4. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on metrics related to resource load on the host server. 5. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on metrics related to previous loading of the webpage for the user. 6. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on metrics related to previous loading of the webpage wherein the metrics are stored on the host server. 7. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on historical webpage loading assessments. 8. The system of claim 7 wherein the historical assessments are based on the history of the user. 9. The system of claim 7 wherein the historical assessments are based on the history of a plurality of users. 10. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on user preferences. 11. 
A method for dynamically loading a webpage comprising: identifying a plurality of components of the webpage; determining a load method for each component wherein the loading technique may be in-line loading or adaptive loading; loading components determined to be in-line components using in-line loading; determining a loading hierarchy for components determined to be adaptive components; and loading adaptive components using adaptive loading based on the loading hierarchy. 12. The method of claim 11 wherein the loading using in-line loading comprises loading the components in a predetermined order. 13. The method of claim 11 wherein determining a loading hierarchy and loading adaptive components occurs in parallel with loading in-line components. 14. The method of claim 11 further comprising indicating to a user that a component is in the process of being loaded. 15. The method of claim 11 wherein the determining of the loading technique is based on user selection. 16. The method of claim 11 wherein the determining of the loading technique is based on predetermined metrics. 17. The method of claim 11 wherein the loading hierarchy is based on at least one metric selected from the group of: current resource load on a host server; system performance of a user computing device; system performance of previous webpage loading; source code of the webpage; historical assessment of the webpage loading; and user preferences.
A system and method for dynamically loading a webpage are provided. The system for dynamically loading a webpage includes: a webpage qualifier module configured to receive a user requested webpage from a host server and further configured to identify a plurality of components of the webpage; a component selection module configured to determine a loading method for each component, wherein the loading technique may be either in-line loading or adaptive loading; an in-line loading module configured to load components in-line; and an adaptive loading module configured to determine a loading hierarchy and load components based on the loading hierarchy. The method for dynamically loading a webpage includes: identifying a plurality of components of the webpage; determining a load method for each component wherein the loading technique may be in-line loading or adaptive loading; loading components determined to be in-line components using in-line loading; determining a loading hierarchy for components determined to be adaptive components; and loading adaptive components using adaptive loading based on the loading hierarchy. 1. A system for dynamically loading a webpage comprising: a webpage qualifier module configured to receive a user requested webpage from a host server and further configured to identify a plurality of components of the webpage; a component selection module configured to determine a loading method for each component, wherein the loading technique may be either in-line loading or adaptive loading; an in-line loading module configured to load components in-line; and an adaptive loading module configured to determine a loading hierarchy and load components based on the loading hierarchy. 2. The system of claim 1 wherein the adaptive loading module further comprises an indicator module configured to indicate to a user that a component is not fully loaded. 3. 
The system of claim 1 wherein the component selection module determines the loading technique based at least in part on user preferences. 4. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on metrics related to resource load on the host server. 5. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on metrics related to previous loading of the webpage for the user. 6. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on metrics related to previous loading of the webpage wherein the metrics are stored on the host server. 7. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on historical webpage loading assessments. 8. The system of claim 7 wherein the historical assessments are based on the history of the user. 9. The system of claim 7 wherein the historical assessments are based on the history of a plurality of users. 10. The system of claim 1 wherein the adaptive loading module determines the loading hierarchy based on user preferences. 11. A method for dynamically loading a webpage comprising: identifying a plurality of components of the webpage; determining a load method for each component wherein the loading technique may be in-line loading or adaptive loading; loading components determined to be in-line components using in-line loading; determining a loading hierarchy for components determined to be adaptive components; and loading adaptive components using adaptive loading based on the loading hierarchy. 12. The method of claim 11 wherein the loading using in-line loading comprises loading the components in a predetermined order. 13. The method of claim 11 wherein determining a loading hierarchy and loading adaptive components occurs in parallel with loading in-line components. 14. 
The method of claim 11 further comprising indicating to a user that a component is in the process of being loaded. 15. The method of claim 11 wherein the determining of the loading technique is based on user selection. 16. The method of claim 11 wherein the determining of the loading technique is based on predetermined metrics. 17. The method of claim 11 wherein the loading hierarchy is based on at least one metric selected from the group of: current resource load on a host server; system performance of a user computing device; system performance of previous webpage loading; source code of the webpage; historical assessment of the webpage loading; and user preferences.
2,400
8,133
8,133
15,281,236
2,439
A method of providing a transaction forwarding service in a blockchain includes executing a smart contract in the blockchain so as to determine whether a respective full node is eligible to execute the smart contract. The smart contract specifies eligible full nodes, a filter of a respective light client and a reward for executing the smart contract. The respective full node forwards data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain. The respective full node receives a signed acknowledgement from the respective light client verifying the transaction. Then, the respective full node claims the reward using the acknowledgement.
1. A method of providing a transaction forwarding service in a blockchain, the method comprising: a) executing a smart contract in the blockchain so as to determine whether a respective full node is eligible to execute the smart contract, the smart contract specifying eligible full nodes, a filter of a respective light client and a reward for executing the smart contract; b) forwarding, by the respective full node, data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain; c) receiving, by the respective full node, a signed acknowledgement from the respective light client verifying the transaction; and d) claiming, by the respective full node, the reward using the acknowledgement. 2. The method according to claim 1, wherein steps b) and c) are sequentially repeated with the acknowledgements indicating a number of times steps b) and c) have been repeated, and wherein the claiming of the reward is performed using the number of the acknowledgements in accordance with a fair exchange protocol. 3. The method according to claim 2, wherein, prior to each sequential repetition of step b), the respective full node verifies that the signature is valid and that a hash value in the respective signed acknowledgement from a previous iteration of step c) corresponds to a hash of a previous hash value concatenated with the data relating to the respective transaction sent in a previous iteration of step b). 4. The method according to claim 2, wherein the respective full node sends its address and a counter along with the data in each iteration of step b) and verifies that the counter has been incremented in each of the acknowledgements received in step c). 5. The method according to claim 1, wherein the method is performed by a plurality of full nodes, the method further comprising detecting a misbehaving one of the full nodes. 6. 
The method according to claim 5, wherein detecting the misbehaving one of the full nodes comprises: building, by each of the full nodes, a Merkle Patricia Tree using Unspent Transaction Outputs (UTXOs) of one or more accounts of the respective light client; and determining whether root hashes of the respective Merkle Patricia Trees differ from one another. 7. The method according to claim 1, further comprising, prior to step a), invoking, by the respective light client, the smart contract in the block chain, wherein the eligible full nodes, the filter and the reward are specified by the light client. 8. The method according to claim 1, wherein the smart contract specifies the eligible full nodes based on pseudorandom ids and step a) is performed using the pseudorandom ids. 9. A computer system for providing a transaction forwarding service in a blockchain, the system comprising: a plurality of full nodes each including one or more servers which store or have access to the blockchain, each of the full nodes being configured to: a) execute a smart contract in the blockchain so as to determine eligibility to execute the smart contract, the smart contract specifying eligible full nodes, a filter of a respective light client and a reward for executing the smart contract; b) forward data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain; c) receive a signed acknowledgement from the respective light client verifying the transaction; and d) claim the reward using the acknowledgement. 10. The system according to claim 9, wherein each of the full nodes is further configured to sequentially repeat steps b) and c) with the acknowledgements indicating a number of times steps b) and c) have been repeated, and to claim the reward using the acknowledgements in accordance with a fair exchange protocol. 11. 
The system according to claim 10, wherein, prior to each sequential repetition of step b), each of the full nodes is further configured to verify that the signature is valid and that a hash value in the respective signed acknowledgement from a previous iteration of step c) corresponds to a hash of a previous hash value concatenated with the data relating to the respective transaction sent in a previous iteration of step b). 12. The system according to claim 10, wherein each of the full nodes is further configured to send its address and a counter along with the data in each iteration of step b) and verify that the counter has been incremented in each of the acknowledgements received in step c). 13. The system according to claim 9, wherein each of the full nodes is further configured to build a Merkle Patricia Tree using Unspent Transaction Outputs (UTXOs) of one or more accounts of the respective light client, and wherein the system is configured to detect a misbehaving one of the full nodes based on root hashes of the respective Merkle Patricia Trees differing from one another. 14. The system according to claim 9, further comprising the respective light client, the light client being configured to invoke the smart contract in the block chain with flexibility as to the specification of the eligible full nodes, the filter and the reward. 15. 
A tangible, non-transitory computer-readable medium having instructions thereon which, when executed by at least one server that stores or has access to a blockchain, provides for execution of the following steps: a) executing a smart contract in the blockchain so as to determine eligibility to execute the smart contract, the smart contract specifying eligible full nodes, a filter of a respective light client and a reward for executing the smart contract; b) forwarding data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain; c) receiving a signed acknowledgement from the respective light client verifying the transaction; and d) claiming the reward using the acknowledgement.
A method of providing a transaction forwarding service in a blockchain includes executing a smart contract in the blockchain so as to determine whether a respective full node is eligible to execute the smart contract. The smart contract specifies eligible full nodes, a filter of a respective light client and a reward for executing the smart contract. The respective full node forwards data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain. The respective full node receives a signed acknowledgement from the respective light client verifying the transaction. Then, the respective full node claims the reward using the acknowledgement.1. A method of providing a transaction forwarding service in a blockchain, the method comprising: a) executing a smart contract in the blockchain so as to determine whether a respective full node is eligible to execute the smart contract, the smart contract specifying eligible full nodes, a filter of a respective light client and a reward for executing the smart contract; b) forwarding, by the respective full node, data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain; c) receiving, by the respective full node, a signed acknowledgement from the respective light client verifying the transaction; and d) claiming, by the respective full node, the reward using the acknowledgement. 2. The method according to claim 1, wherein steps b) and c) are sequentially repeated with the acknowledgements indicating a number of times steps b) and c) have been repeated, and wherein the claiming of the reward is performed using the number of the acknowledgements in accordance with a fair exchange protocol. 3. 
The method according to claim 2, wherein, prior to each sequential repetition of step b), the respective full node verifies that the signature is valid and that a hash value in the respective signed acknowledgement from a previous iteration of step c) corresponds to a hash of a previous hash value concatenated with the data relating to the respective transaction sent in a previous iteration of step b). 4. The method according to claim 2, wherein the respective full node sends its address and a counter along with the data in each iteration of step b) and verifies that the counter has been incremented in each of the acknowledgements received in step c). 5. The method according to claim 1, wherein the method is performed by a plurality of full nodes, the method further comprising detecting a misbehaving one of the full nodes. 6. The method according to claim 5, wherein detecting the misbehaving one of the full nodes comprises: building, by each of the full nodes, a Merkle Patricia Tree using Unspent Transaction Outputs (UTXOs) of one or more accounts of the respective light client; and determining whether root hashes of the respective Merkle Patricia Trees differ from one another. 7. The method according to claim 1, further comprising, prior to step a), invoking, by the respective light client, the smart contract in the block chain, wherein the eligible full nodes, the filter and the reward are specified by the light client. 8. The method according to claim 1, wherein the smart contract specifies the eligible full nodes based on pseudorandom ids and step a) is performed using the pseudorandom ids. 9. 
A computer system for providing a transaction forwarding service in a blockchain, the system comprising: a plurality of full nodes each including one or more servers which store or have access to the blockchain, each of the full nodes being configured to: a) execute a smart contract in the blockchain so as to determine eligibility to execute the smart contract, the smart contract specifying eligible full nodes, a filter of a respective light client and a reward for executing the smart contract; b) forward data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain; c) receive a signed acknowledgement from the respective light client verifying the transaction; and d) claim the reward using the acknowledgement. 10. The system according to claim 9, wherein each of the full nodes is further configured to sequentially repeat steps b) and c) with the acknowledgements indicating a number of times steps b) and c) have been repeated, and to claim the reward using the acknowledgements in accordance with a fair exchange protocol. 11. The system according to claim 10, wherein, prior to each sequential repetition of step b), each of the full nodes is further configured to verify that the signature is valid and that a hash value in the respective signed acknowledgement from a previous iteration of step c) corresponds to a hash of a previous hash value concatenated with the data relating to the respective transaction sent in a previous iteration of step b). 12. The system according to claim 10, wherein each of the full nodes is further configured to send its address and a counter along with the data in each iteration of step b) and verify that the counter has been incremented in each of the acknowledgements received in step c). 13. 
The system according to claim 9, wherein each of the full nodes is further configured to build a Merkle Patricia Tree using Unspent Transaction Outputs (UTXOs) of one or more accounts of the respective light client, and wherein the system is configured to detect a misbehaving one of the full nodes based on root hashes of the respective Merkle Patricia Trees differing from one another. 14. The system according to claim 9, further comprising the respective light client, the light client being configured to invoke the smart contract in the block chain with flexibility as to the specification of the eligible full nodes, the filter and the reward. 15. A tangible, non-transitory computer-readable medium having instructions thereon which, when executed by at least one server that stores or has access to a blockchain, provides for execution of the following steps: a) executing a smart contract in the blockchain so as to determine eligibility to execute the smart contract, the smart contract specifying eligible full nodes, a filter of a respective light client and a reward for executing the smart contract; b) forwarding data relating to a transaction that matches the filter of the respective light client to the respective light client with a proof that the transaction is included in the blockchain; c) receiving a signed acknowledgement from the respective light client verifying the transaction; and d) claiming the reward using the acknowledgement.
2,400
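Claim 3 of the transaction-forwarding record above describes a running hash chain: each signed acknowledgement from the light client contains a hash value equal to the hash of the previous hash value concatenated with the data most recently forwarded. A minimal sketch of that verification, with signature checking deliberately omitted and all names illustrative:

```python
import hashlib

# Sketch of the acknowledgement hash chain in claim 3: after each forwarded
# transaction, the light client returns an acknowledgement whose hash value
# is H(previous hash value || forwarded data). The full node replays the
# chain to confirm every acknowledgement covers everything sent so far.
# Signature verification is stubbed out; all names are illustrative.

def next_hash(prev_hash: bytes, data: bytes) -> bytes:
    # Hash of the previous hash value concatenated with the forwarded data.
    return hashlib.sha256(prev_hash + data).digest()

def verify_chain(genesis: bytes, forwarded: list, acks: list) -> bool:
    h = genesis
    for data, ack in zip(forwarded, acks):
        h = next_hash(h, data)
        if ack != h:  # claim 3's consistency check, minus the signature
            return False
    return True

genesis = b"\x00" * 32
forwarded = [b"tx1", b"tx2", b"tx3"]
# A well-behaved light client would return this chain of hash values:
acks, h = [], genesis
for data in forwarded:
    h = next_hash(h, data)
    acks.append(h)
```

Because each link depends on all earlier data, a light client cannot acknowledge transaction *n* without implicitly committing to transactions 1 through *n-1*, which is what makes the count of acknowledgements usable for the fair-exchange reward claim in claim 2.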
8,134
8,134
15,782,607
2,474
Examples relate to extracting data from network communications. In one example, a programmable hardware processor may: receive a first set of network packets; store each network packet included in the first set in a first storage device; identify, from each network packet included in a subset of the first set of network packets, data included in the network packet, the data meeting at least one condition defined by first programmable logic of the programmable hardware processor; and for each network packet included in the subset: extract, from the network packet, data of interest; and store, in a second storage device, i) the extracted data of interest, and ii) an identifier associated with the network packet.
1. A computing device for extracting data from network communications, the computing device comprising a programmable hardware processor configured to: receive a first set of network packets; store each network packet included in the first set in a first storage device; identify, from each network packet included in a subset of the first set of network packets, data included in the network packet, the data meeting at least one condition defined by first programmable logic of the programmable hardware processor; and for each network packet included in the subset: extract, from the network packet, data of interest; and store, in a second storage device, i) the extracted data of interest, and ii) an identifier associated with the network packet. 2. The computing device of claim 1, wherein the programmable hardware processor is further configured to: identify, for each network packet included in the first set, a network flow, each network flow including at least one of the network packets included in the first set; and for each network packet included in the subset, organize the network packet according to the network flow identified for the network packet. 3. The computing device of claim 2, wherein the programmable hardware processor is further configured to: determine that particular data of interest identified in a particular network packet is partial data; identify a particular network flow that includes the particular network packet; and identify other network packets included in the particular network flow, the other network packets including other partial data that, when combined with the partial data of the particular network packet, comprise the particular data of interest. 4. 
The computing device of claim 3, wherein the programmable hardware processor is further configured to: combine the extracted data of interest from the particular network packet and each other network packet included in the particular network flow, and wherein storing the extracted data of interest comprises storing the combined extracted data of interest. 5. The computing device of claim 1, wherein the data of interest comprises at least one of: network packet header data; network packet payload data; network packet footer data; or network packet metadata. 6. The computing device of claim 1, wherein the data meeting the at least one condition defined by first programmable logic is included in at least one of: network packet header data; network packet payload data; network packet footer data; or network packet metadata. 7. The computing device of claim 2, wherein, for each network packet included in the subset, the identifier associated with the network packet is based on the network flow in which the network packet is included. 8. 
The computing device of claim 1, wherein the programmable hardware processor is further configured to: obtain second programmable logic for the programmable hardware processor, the second programmable logic defining a condition that is different from the at least one condition defined by the first programmable logic; receive, subsequent to receiving the first set of network packets, a second set of network packets; store each network packet included in the second set in the first storage device; identify, from each network packet included in a second subset of the second set of network packets, second data included in the network packet, the second data meeting at least one condition defined by the second programmable logic of the programmable hardware processor; and for each network packet included in the second subset: extract, from the network packet, second data of interest; and store, in the second storage device, i) the extracted second data of interest, and ii) a second identifier associated with the network packet. 9. A method for extracting data from network communications, implemented by a programmable hardware processor, the method comprising: obtaining at least one network packet from a first storage device, each of the at least one network packets being included in a network flow; determining that a particular network packet included in the network flow includes data meeting at least one condition defined by first programmable logic of the programmable hardware processor; in response to determining that the particular network packet includes data meeting the at least one condition: extracting, from the particular network packet, data of interest; and storing, in a second storage device, i) the extracted data of interest, and ii) an identifier associated with the particular network packet. 10. 
The method of claim 9, further comprising: determining that the data of interest is a portion of whole data; extracting each portion of the whole data from each network packet that i) includes a portion of the whole data, and ii) is included in the network flow; and storing, in the second storage device, each extracted portion of the whole data. 11. The method of claim 10, further comprising: generating whole data of interest by combining each extracted portion of the whole data, and wherein storing each extracted portion of the whole data comprises storing the whole data of interest. 12. The method of claim 9, wherein the extracted data of interest: is defined by the first programmable logic; and includes data that is different from the data meeting the at least one condition. 13. The method of claim 9, further comprising: obtaining second programmable logic for the programmable hardware processor, the second programmable logic defining a condition that is different from the at least one condition defined by the first programmable logic; receiving, subsequent to the obtaining the at least one network packet, at least one second network packet from the first storage device, each of the at least one second network packets being included in a second network flow; determining that a particular second network packet included in the second network flow includes second data meeting at least one condition defined by the second programmable logic; in response to determining that the particular second network packet includes second data meeting the at least one condition: extracting, from the particular second network packet, second data of interest; and storing, in the second storage device, i) the extracted second data of interest, and ii) an identifier associated with the particular second network packet. 14. 
The method of claim 9, further comprising: receiving a plurality of network packets; identifying, for each of the plurality of network packets, a network flow, each network flow including at least one of the plurality of network packets; and storing each of the plurality of network packets in the first storage device. 15. The method of claim 14, further comprising: organizing each of the plurality of network packets stored in the first storage device according to the network flow identified for the network packet; and organizing the extracted data of interest stored in the second storage device according to the identifier associated with the particular network packet.
Examples relate to extracting data from network communications. In one example, a programmable hardware processor may: receive a first set of network packets; store each network packet included in the first set in a first storage device; identify, from each network packet included in a subset of the first set of network packets, data included in the network packet, the data meeting at least one condition defined by first programmable logic of the programmable hardware processor; and for each network packet included in the subset: extract, from the network packet, data of interest; and store, in a second storage device, i) the extracted data of interest, and ii) an identifier associated with the network packet.1. A computing device for extracting data from network communications, the computing device comprising a programmable hardware processor configured to: receive a first set of network packets; store each network packet included in the first set in a first storage device; identify, from each network packet included in a subset of the first set of network packets, data included in the network packet, the data meeting at least one condition defined by first programmable logic of the programmable hardware processor; and for each network packet included in the subset: extract, from the network packet, data of interest; and store, in a second storage device, i) the extracted data of interest, and ii) an identifier associated with the network packet. 2. The computing device of claim 1, wherein the programmable hardware processor is further configured to: identify, for each network packet included in the first set, a network flow, each network flow including at least one of the network packets included in the first set; and for each network packet included in the subset, organize the network packet according to the network flow identified for the network packet. 3. 
The computing device of claim 2, wherein the programmable hardware processor is further configured to: determine that particular data of interest identified in a particular network packet is partial data; identify a particular network flow that includes the particular network packet; and identify other network packets included in the particular network flow, the other network packets including other partial data that, when combined with the partial data of the particular network packet, comprise the particular data of interest. 4. The computing device of claim 3, wherein the programmable hardware processor is further configured to: combine the extracted data of interest from the particular network packet and each other network packet included in the particular network flow, and wherein storing the extracted data of interest comprises storing the combined extracted data of interest. 5. The computing device of claim 1, wherein the data of interest comprises at least one of: network packet header data; network packet payload data; network packet footer data; or network packet metadata. 6. The computing device of claim 1, wherein the data meeting the at least one condition defined by first programmable logic is included in at least one of: network packet header data; network packet payload data; network packet footer data; or network packet metadata. 7. The computing device of claim 2, wherein, for each network packet included in the subset, the identifier associated with the network packet is based on the network flow in which the network packet is included. 8. 
The computing device of claim 1, wherein the programmable hardware processor is further configured to: obtain second programmable logic for the programmable hardware processor, the second programmable logic defining a condition that is different from the at least one condition defined by the first programmable logic; receive, subsequent to receiving the first set of network packets, a second set of network packets; store each network packet included in the second set in the first storage device; identify, from each network packet included in a second subset of the second set of network packets, second data included in the network packet, the second data meeting at least one condition defined by the second programmable logic of the programmable hardware processor; and for each network packet included in the second subset: extract, from the network packet, second data of interest; and store, in the second storage device, i) the extracted second data of interest, and ii) a second identifier associated with the network packet. 9. A method for extracting data from network communications, implemented by a programmable hardware processor, the method comprising: obtaining at least one network packet from a first storage device, each of the at least one network packets being included in a network flow; determining that a particular network packet included in the network flow includes data meeting at least one condition defined by first programmable logic of the programmable hardware processor; in response to determining that the particular network packet includes data meeting the at least one condition: extracting, from the particular network packet, data of interest; and storing, in a second storage device, i) the extracted data of interest, and ii) an identifier associated with the particular network packet. 10. 
The method of claim 9, further comprising: determining that the data of interest is a portion of whole data; extracting each portion of the whole data from each network packet that i) includes a portion of the whole data, and ii) is included in the network flow; and storing, in the second storage device, each extracted portion of the whole data. 11. The method of claim 10, further comprising: generating whole data of interest by combining each extracted portion of the whole data, and wherein storing each extracted portion of the whole data comprises storing the whole data of interest. 12. The method of claim 9, wherein the extracted data of interest: is defined by the first programmable logic; and includes data that is different from the data meeting the at least one condition. 13. The method of claim 9, further comprising: obtaining second programmable logic for the programmable hardware processor, the second programmable logic defining a condition that is different from the at least one condition defined by the first programmable logic; receiving, subsequent to the obtaining the at least one network packet, at least one second network packet from the first storage device, each of the at least one second network packets being included in a second network flow; determining that a particular second network packet included in the second network flow includes second data meeting at least one condition defined by the second programmable logic; in response to determining that the particular second network packet includes second data meeting the at least one condition: extracting, from the particular second network packet, second data of interest; and storing, in the second storage device, i) the extracted second data of interest, and ii) an identifier associated with the particular second network packet. 14. 
The method of claim 9, further comprising: receiving a plurality of network packets; identifying, for each of the plurality of network packets, a network flow, each network flow including at least one of the plurality of network packets; and storing each of the plurality of network packets in the first storage device. 15. The method of claim 14, further comprising: organizing each of the plurality of network packets stored in the first storage device according to the network flow identified for the network packet; and organizing the extracted data of interest stored in the second storage device according to the identifier associated with the particular network packet.
2,400
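The packet-extraction claims above store every received packet in a first storage device, test each packet against a condition defined by programmable logic, and store the extracted data of interest in a second storage device under an identifier derived from the packet's network flow (claims 1, 2, and 7). A minimal sketch, assuming hypothetical packet dictionaries and a toy condition (the field names and the HTTP-style condition are illustrative, not from the patent):

```python
# Sketch of claims 1-2 and 7: retain raw packets, keep only those whose
# payload meets a programmable condition, and file the extracted data of
# interest under a flow-derived identifier.

def flow_id(pkt):
    # Simplified flow key (a real implementation might use the full
    # 5-tuple with ports).
    return (pkt["src"], pkt["dst"], pkt["proto"])

def process(packets, condition, extract):
    raw_store = list(packets)        # "first storage device"
    extracted_store = {}             # "second storage device"
    for pkt in raw_store:
        if condition(pkt):           # condition from programmable logic
            extracted_store.setdefault(flow_id(pkt), []).append(extract(pkt))
    return raw_store, extracted_store

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "payload": "GET /a"},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "payload": "POST /b"},
    {"src": "10.0.0.3", "dst": "10.0.0.2", "proto": "udp", "payload": "ping"},
]
raw, hits = process(
    packets,
    condition=lambda p: p["payload"].startswith(("GET", "POST")),
    extract=lambda p: p["payload"].split()[1],  # the request path
)
```

All three packets survive in the raw store, while only the two matching packets contribute extracted data, grouped under their shared flow identifier.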
8,135
8,135
14,604,141
2,463
Systems and methods for extracting identifiers from traffic of an unknown protocol are provided herein. An example method can include receiving communication traffic transferred over a communication network in accordance with a communication protocol. A data item that matches a predefined pattern can be identified in the communication traffic, irrespective of the communication protocol. The identified data item can then be extracted from the communication traffic.
1. A method, comprising: receiving communication traffic, which is transferred over a communication network in accordance with a communication protocol; identifying in the communication traffic, irrespective of the communication protocol, a data item that matches a predefined pattern; and extracting the identified data item from the communication traffic. 2. The method according to claim 1, wherein identifying the data item comprises applying to the communication traffic a regular expression that represents the predefined pattern. 3. The method according to claim 1, wherein the communication traffic comprises at least a textual part, and wherein identifying the data item comprises detecting the data item in the textual part of the communication traffic. 4. The method according to claim 1, wherein the data item comprises an identifier of a user or a communication terminal associated with the communication traffic. 5. The method according to claim 4, and comprising extracting an additional identifier of the user or the communication terminal from metadata of the communication traffic, and correlating the identifier and the additional identifier. 6. The method according to claim 1, wherein the data item comprises location information of a communication terminal associated with the communication traffic. 7. The method according to claim 1, and comprising training a decoding algorithm based on the extracted data item, to decode the communication protocol. 8. Apparatus, comprising: an interface, which is configured to connect to a communication network and to receive communication traffic that is transferred over the communication network in accordance with a communication protocol; and a processor, which is configured to identify in the communication traffic, irrespective of the communication protocol, a data item that matches a predefined pattern, and to extract the identified data item from the communication traffic. 9. 
The apparatus according to claim 8, wherein the processor is configured to identify the data item by applying to the communication traffic a regular expression that represents the predefined pattern. 10. The apparatus according to claim 8, wherein the communication traffic comprises at least a textual part, and wherein the processor is configured to identify the data item in the textual part of the communication traffic. 11. The apparatus according to claim 8, wherein the data item comprises an identifier of a user or a communication terminal associated with the communication traffic. 12. The apparatus according to claim 11, wherein the processor is configured to extract an additional identifier of the user or the communication terminal from metadata of the communication traffic, and to correlate the identifier and the additional identifier. 13. The apparatus according to claim 8, wherein the data item comprises location information of a communication terminal associated with the communication traffic. 14. The apparatus according to claim 8, wherein the processor is configured to train a decoding algorithm based on the extracted data item, to decode the communication protocol. 15. A non-transitory computer readable medium having instructions stored thereon that, when executed by a computer, direct the computer to: receive communication traffic, which is transferred over a communication network in accordance with a communication protocol; identify in the communication traffic, irrespective of the communication protocol, a data item that matches a predefined pattern; and extract the identified data item from the communication traffic. 16. The non-transitory computer readable medium of claim 15, wherein identifying the data item comprises applying to the communication traffic a regular expression that represents the predefined pattern. 17. 
The non-transitory computer readable medium of claim 15, wherein the communication traffic comprises at least a textual part, and wherein identifying the data item comprises detecting the data item in the textual part of the communication traffic. 18. The non-transitory computer readable medium of claim 15, wherein the data item comprises an identifier of a user or a communication terminal associated with the communication traffic, and wherein the instructions, when executed by the computer, further direct the computer to: extract an additional identifier of the user or the communication terminal from metadata of the communication traffic; and correlate the identifier and the additional identifier. 19. The non-transitory computer readable medium of claim 15, wherein the data item comprises location information of a communication terminal associated with the communication traffic. 20. The non-transitory computer readable medium of claim 15, wherein the instructions, when executed by the computer, further direct the computer to train a decoding algorithm based on the extracted data item to decode the communication protocol.
Systems and methods for extracting identifiers from traffic of an unknown protocol are provided herein. An example method can include receiving communication traffic transferred over a communication network in accordance with a communication protocol. A data item that matches a predefined pattern can be identified in the communication traffic, irrespective of the communication protocol. The identified data item can then be extracted from the communication traffic.
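As a rough illustration of the claimed approach (claims 1-2), the sketch below scans raw payload bytes for data items matching predefined patterns, expressed as regular expressions, without decoding the carrying protocol. The pattern names and the regexes themselves are illustrative assumptions, not taken from the claims.

```python
import re

# Hypothetical predefined patterns; the labels and regexes are assumptions
# chosen only to demonstrate protocol-agnostic matching on raw bytes.
PATTERNS = {
    "email": re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "msisdn": re.compile(rb"\+\d{10,15}"),  # international phone number
}

def extract_data_items(payload: bytes) -> list:
    """Scan raw traffic bytes for items matching predefined patterns,
    irrespective of the (possibly unknown) communication protocol."""
    items = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(payload):
            items.append((label, match.group().decode("ascii", "replace")))
    return items

# Example: a payload of an unknown binary protocol with embedded text
payload = b"\x00\x17\x02user=alice@example.com\x00tel=+14155550123\xff"
print(extract_data_items(payload))
```

Because the matching runs over the byte stream rather than decoded protocol fields, it works even when the protocol is proprietary; extracted items could then serve as labeled examples when training a decoder for that protocol, as in claim 7.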
2,400
8,136
8,136
14,722,987
2,442
A social networking system provides tips to users about non-user entities within the social networking system. Tips include short questions, comments, and reviews; non-user entities include businesses, products, bands, songs, etc. Tips are provided by users of the social networking system and are displayed to other users if those users meet privacy criteria associated with the tips. Additionally, tips are ranked based on the likelihood that a user will view or read the tip. Tips with the greatest likelihood are ranked higher than those with a lower likelihood. Selected tips with a high likelihood of being viewed are displayed to a viewing user on the topic page or within the user's news feed story.
1. A method comprising: receiving, by a social networking system, a post from a first client device associated with a first user of a plurality of users of the social networking system, wherein the post is part of a profile page of the first user and includes a topic; responsive to the post including the topic, providing a user interface element to the first client device prompting the user to provide a tip associated with the topic; receiving, via the user interface element, a tip from the first user that is associated with the topic; identifying a plurality of tips, including the received tip, that are associated with the topic, the plurality of tips shared by one or more of the plurality of users of the social networking system; receiving, via a second client device, a request from a second user of the plurality of users to display one or more of the plurality of tips associated with the topic, wherein each tip is associated with respective privacy criteria that identifies users having permission to view that tip; determining that the second user meets privacy criteria associated with a set of candidate tips of the plurality of tips; selecting one or more tips, including the received tip, from the set of candidate tips to display to the second user based in part on, for a given candidate tip, a likelihood that the second user is interested in reading the given candidate tip; and responsive to the selecting, providing a page to the second client device, the page displaying the selected one or more tips. 2. The method of claim 1, wherein determining that the second user meets privacy criteria associated with the set of candidate tips of the plurality of tips, further comprises: selecting one or more candidate tips, including the received tip, from the plurality of tips based in part on the second user meeting the respective privacy criteria associated with the selected one or more candidate tips. 3. 
The method of claim 1, wherein selecting one or more tips, including the received tip, from the set of candidate tips to display to the second user based in part on, for the given candidate tip, the likelihood that the second user is interested in reading the given candidate tip, comprises: determining a relevancy score associated with each candidate tip in the set of candidate tips; and ranking tips within the set of candidate tips based on their respective relevancy score. 4. The method of claim 3, wherein the relevancy score includes at least one of: a number of interactions between users; elapsed time since the tip is shared; amount of feedback provided to the tip; and an endorsement from a local business. 5. The method of claim 3, wherein the relevancy score is a weighted score based on at least two of: a number of interactions between users; elapsed time since the tip is shared; amount of feedback provided to the tip; and an endorsement from a local business. 6. The method of claim 1, wherein a tip is selected from a group comprising: a comment; a question; a short recommendation; and a short review. 7. The method of claim 1, further comprising: receiving from the second user, via the second client device, a check-in at a location associated with a tip, of the one or more tips; and responsive to the check-in, providing a page to the second client device, the page displaying the tip. 8. The method of claim 1, wherein the profile page includes a wall that includes the post. 9. 
A non-transitory computer-readable storage medium storing executable computer program instructions, the instructions executable to perform steps comprising: receiving, by a social networking system, a post from a first client device associated with a first user of a plurality of users of the social networking system, wherein the post is part of a profile page of the first user and includes a topic; responsive to the post including the topic, providing a user interface element to the first client device prompting the user to provide a tip associated with the topic; receiving, via the user interface element, a tip from the first user that is associated with the topic; identifying a plurality of tips, including the received tip, that are associated with the topic, the plurality of tips shared by one or more of the plurality of users of the social networking system; receiving, via a second client device, a request from a second user of the plurality of users to display one or more of the plurality of tips associated with the topic, wherein each tip is associated with respective privacy criteria that identifies users having permission to view that tip; determining that the second user meets privacy criteria associated with a set of candidate tips of the plurality of tips; selecting one or more tips, including the received tip, from the set of candidate tips to display to the second user based in part on, for a given candidate tip, a likelihood that the second user is interested in reading the given candidate tip; and responsive to the selecting, providing a page to the second client device, the page displaying the selected one or more tips. 10. 
The computer-readable medium of claim 9, wherein determining that the second user meets privacy criteria associated with the set of candidate tips of the plurality of tips, further comprises: selecting one or more candidate tips, including the received tip, from the plurality of tips based in part on the second user meeting the respective privacy criteria associated with the selected one or more candidate tips. 11. The computer-readable medium of claim 9, wherein selecting one or more tips, including the received tip, from the set of candidate tips to display to the second user based in part on, for the given candidate tip, the likelihood that the second user is interested in reading the given candidate tip, comprises: determining a relevancy score associated with each candidate tip in the set of candidate tips; and ranking tips within the set of candidate tips based on their respective relevancy score. 12. The computer-readable medium of claim 11, wherein the relevancy score includes at least one of: a number of interactions between users; elapsed time since the tip is shared; amount of feedback provided to the tip; and an endorsement from a local business. 13. The computer-readable medium of claim 11, wherein the relevancy score is a weighted score based on at least two of: a number of interactions between users; elapsed time since the tip is shared; amount of feedback provided to the tip; and an endorsement from a local business. 14. The computer-readable medium of claim 9, wherein a tip is selected from a group comprising: a comment; a question; a short recommendation; and a short review. 15. The computer-readable medium of claim 9, further comprising: receiving from the second user, via the second client device, a check-in at a location associated with a tip, of the one or more tips; and responsive to the check-in, providing a page to the second client device, the page displaying the tip. 16. 
The computer-readable medium of claim 9, wherein the profile page includes a wall that includes the post. 17. A system comprising: a processor; a computer-readable storage medium coupled to the processor, the computer-readable storage medium including instructions that, when executed by a processor, cause the processor to: receive, by a social networking system, a post from a first client device associated with a first user of a plurality of users of the social networking system, wherein the post is part of a profile page of the first user and includes a topic; responsive to the post including the topic, provide a user interface element to the first client device prompting the user to provide a tip associated with the topic; receive, via the user interface element, a tip from the first user that is associated with the topic; identify a plurality of tips, including the received tip, that are associated with the topic, the plurality of tips shared by one or more of the plurality of users of the social networking system; receive, via a second client device, a request from a second user of the plurality of users to display one or more of the plurality of tips associated with the topic, wherein each tip is associated with respective privacy criteria that identifies users having permission to view that tip; determine that the second user meets privacy criteria associated with a set of candidate tips of the plurality of tips; select one or more tips, including the received tip, from the set of candidate tips to display to the second user based in part on, for a given candidate tip, a likelihood that the second user is interested in reading the given candidate tip; and responsive to the selecting, provide a page to the second client device, the page displaying the selected one or more tips.
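The privacy filtering and weighted relevancy ranking described in claims 1-5 might be sketched as follows. The field names, weights, and time-decay formula are hypothetical assumptions chosen only to show the shape of the computation, not values from the claims.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Tip:
    text: str
    allowed_viewers: Set[str]          # privacy criteria: users permitted to view
    interactions: int = 0              # number of interactions between users
    hours_since_shared: float = 0.0    # elapsed time since the tip was shared
    feedback: int = 0                  # amount of feedback provided to the tip
    endorsed_by_business: bool = False

def relevancy_score(tip: Tip) -> float:
    """Weighted score combining the signals named in claims 4-5;
    the weights are arbitrary illustrative choices."""
    score = 2.0 * tip.interactions + 1.0 * tip.feedback
    score += 5.0 if tip.endorsed_by_business else 0.0
    return score / (1.0 + tip.hours_since_shared)  # decay with elapsed time

def select_tips(tips: List[Tip], viewer: str, limit: int = 3) -> List[Tip]:
    """Filter candidate tips by privacy criteria, then rank by relevancy."""
    candidates = [t for t in tips if viewer in t.allowed_viewers]
    return sorted(candidates, key=relevancy_score, reverse=True)[:limit]

tips = [
    Tip("Try the espresso", {"bob"}, interactions=4, feedback=2),
    Tip("Ask for the patio", {"bob", "carol"}, interactions=1,
        endorsed_by_business=True),
    Tip("Friends-only note", {"carol"}, interactions=9),
]
for tip in select_tips(tips, viewer="bob"):
    print(tip.text)
```

Note the order of operations mirrors the claims: privacy criteria prune the candidate set first, and only the surviving candidates are scored and ranked for the viewing user.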
2,400
8,137
8,137
14,919,500
2,431
Users of organizations use many different third-party applications. The organizations use the services of a server to manage and interact with the third-party applications. In particular, the server provides a user lifecycle API that defines a set of user lifecycle events corresponding to changes of the users with respect to their organizations and/or the third-party applications that they use within the organizations. The server further has access to lifecycle code modules corresponding to the different third-party applications and defining how those third-party applications will respond to the user lifecycle events. When a user lifecycle event occurs for a particular user of a particular organization, the server determines the third-party applications to which the organization has given the user access and uses the appropriate functionality of the lifecycle code modules of the corresponding third-party applications to implement the appropriate user changes for those applications.
1. A computer-implemented method of a server, the method comprising: storing a plurality of lifecycle code modules corresponding to a plurality of third-party applications, the lifecycle code modules implementing user lifecycle functions defined by a user lifecycle application programming interface (API) for the plurality of third-party applications; identifying an occurrence of a user lifecycle event; identifying a first user of an organization associated with the user lifecycle event; mapping the user lifecycle event to a function of the user lifecycle API associated with the user lifecycle event; identifying, as a subset of the third-party applications, third-party applications to which the organization has granted the first user access; and for each third-party application of a plurality of the third-party applications of the identified subset: identifying a lifecycle code module corresponding to the third-party application, and calling a function of the identified lifecycle code module, thereby making a change to data of the user in the third-party application. 2. The computer-implemented method of claim 1, wherein the lifecycle code modules are received from a plurality of application creators that created the plurality of third-party applications. 3. The computer-implemented method of claim 1, wherein the user lifecycle event is triggered by a request of the first user in a user interface. 4. The computer-implemented method of claim 1, wherein the user lifecycle event is triggered by a request of an administrator of the organization. 5. The computer-implemented method of claim 1, wherein the user lifecycle event is triggered by a user password rotation policy evaluated by the server. 6. The computer-implemented method of claim 1, wherein the user lifecycle event is initiated by the server. 7. 
The computer-implemented method of claim 1, wherein the server identifies the user and the organization based on a prior login of the user into a user interface corresponding to the organization. 8. The computer-implemented method of claim 1, further comprising: identifying an occurrence of a second user lifecycle event associated with a second user different from the user; for a first application of the identified subset, calling a function of a second lifecycle code module corresponding to the first application, the second lifecycle code module being different from a first lifecycle code module for which the function was called for the first application and for the first user. 9. The computer-implemented method of claim 1, wherein calling the function comprises instantiating a virtualization container and executing the identified lifecycle code module within the virtualization container. 10. The computer-implemented method of claim 1, wherein calling a function of a user lifecycle code module comprises making an API call to a remote third-party application corresponding to the user lifecycle code module. 11. The computer-implemented method of claim 1, wherein the user lifecycle event is an addition of the first user, and the function is determining whether the first user already exists in the third-party applications. 12. 
A computer-implemented method of a server, the method comprising: accessing a plurality of lifecycle code modules corresponding to a plurality of third-party applications, the lifecycle code modules implementing user lifecycle functions defined by a user lifecycle application programming interface (API) for the plurality of third-party applications; identifying an occurrence of a user lifecycle event associated with a first user and an organization; mapping the user lifecycle event to a function of the user lifecycle API associated with the user lifecycle event; and for each third-party application of a plurality of the third-party applications: identifying a lifecycle code module corresponding to the third-party application, and calling a function of the identified lifecycle code module, thereby making a change to data of the user in the third-party application. 13. The computer-implemented method of claim 12, wherein the lifecycle code modules are received from a plurality of application creators that created the plurality of third-party applications. 14. The computer-implemented method of claim 12, wherein the user lifecycle event is triggered by a request of the first user in a user interface. 15. The computer-implemented method of claim 12, wherein the user lifecycle event is triggered by a request of an administrator of the organization. 16. The computer-implemented method of claim 12, wherein the user lifecycle event is triggered by a user password rotation policy evaluated by the server. 17. The computer-implemented method of claim 12, wherein the server identifies the user and the organization based on a prior login of the user into a user interface corresponding to the organization. 18. 
The computer-implemented method of claim 12, further comprising: identifying an occurrence of a second user lifecycle event associated with a second user different from the user; for a first application of the plurality of third-party applications, calling a function of a second lifecycle code module corresponding to the first application, the second lifecycle code module being different from a first lifecycle code module for which the function was called for the first application and for the first user. 19. The computer-implemented method of claim 12, wherein calling the function comprises instantiating a virtualization container and executing the identified lifecycle code module within the virtualization container. 20. The computer-implemented method of claim 12, wherein calling a function of a user lifecycle code module comprises making an API call to a remote third-party application corresponding to the user lifecycle code module.
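The dispatch flow of claim 1, mapping a lifecycle event to a function of the lifecycle API and calling that function on the lifecycle code module of each application the organization granted the user, could look roughly like the sketch below. The event names, module interface, and data shapes are assumptions made for illustration.

```python
class LifecycleModule:
    """Per-application lifecycle code module implementing the
    (hypothetical) user lifecycle API functions."""
    def __init__(self, app_name: str):
        self.app_name = app_name

    def on_user_added(self, user: str) -> str:
        # Per claim 11, an "add user" event might first check whether
        # the user already exists in the third-party application.
        return f"{self.app_name}: provisioned {user}"

    def on_user_removed(self, user: str) -> str:
        return f"{self.app_name}: deprovisioned {user}"

# Map each lifecycle event to the lifecycle-API function it corresponds to.
EVENT_TO_FUNCTION = {
    "user_added": "on_user_added",
    "user_removed": "on_user_removed",
}

def handle_lifecycle_event(event, user, org_grants, modules):
    """Dispatch an event for one user to every application the
    organization granted that user, via the app's lifecycle module."""
    function_name = EVENT_TO_FUNCTION[event]
    results = []
    for app in org_grants.get(user, []):   # subset of third-party apps
        module = modules[app]              # lifecycle code module for the app
        results.append(getattr(module, function_name)(user))
    return results

modules = {"crm": LifecycleModule("crm"), "chat": LifecycleModule("chat")}
org_grants = {"alice": ["crm", "chat"]}
print(handle_lifecycle_event("user_added", "alice", org_grants, modules))
```

In a production setting the claims contemplate running each module inside a virtualization container (claims 9 and 19) or making remote API calls to the third-party application (claims 10 and 20); the in-process method calls here merely stand in for that boundary.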
Users of organizations use many different third-party applications. The organizations use the services of a server to manage and interact with the third-party applications. In particular, the server provides a user lifecycle API that defines a set of user lifecycle events corresponding to changes of the users with respect to their organizations and/or the third-party applications that they use within the organizations. The server further has access to lifecycle code modules corresponding to the different third-party applications and defining how those third-party applications will respond to the user lifecycle events. When a user lifecycle event occurs for a particular user of a particular organization, the server determines the third-party applications to which the organization has given the user access and uses the appropriate functionality of the lifecycle code modules of the corresponding third-party applications to implement the appropriate user changes for those applications. 1. A computer-implemented method of a server, the method comprising: storing a plurality of lifecycle code modules corresponding to a plurality of third-party applications, the lifecycle code modules implementing user lifecycle functions defined by a user lifecycle application programming interface (API) for the plurality of third-party applications; identifying an occurrence of a user lifecycle event; identifying a first user of an organization associated with the user lifecycle event; mapping the user lifecycle event to a function of the user lifecycle API associated with the user lifecycle event; identifying, as a subset of the third-party applications, third-party applications to which the organization has granted the first user access; and for each third-party application of a plurality of the third-party applications of the identified subset: identifying a lifecycle code module corresponding to the third-party application, and calling a function of the identified lifecycle code module, 
thereby making a change to data of the user in the third-party application. 2. The computer-implemented method of claim 1, wherein the lifecycle code modules are received from a plurality of application creators that created the plurality of third-party applications. 3. The computer-implemented method of claim 1, wherein the user lifecycle event is triggered by a request of the first user in a user interface. 4. The computer-implemented method of claim 1, wherein the user lifecycle event is triggered by a request of an administrator of the organization. 5. The computer-implemented method of claim 1, wherein the user lifecycle event is triggered by a user password rotation policy evaluated by the server. 6. The computer-implemented method of claim 1, wherein the user lifecycle event is initiated by the server. 7. The computer-implemented method of claim 1, wherein the server identifies the user and the organization based on a prior login of the user into a user interface corresponding to the organization. 8. The computer-implemented method of claim 1, further comprising: identifying an occurrence of a second user lifecycle event associated with a second user different from the user; for a first application of the identified subset, calling a function of a second lifecycle code module corresponding to the first application, the second lifecycle code module being different from a first lifecycle code module for which the function was called for the first application and for the first user. 9. The computer-implemented method of claim 1, wherein calling the function comprises instantiating a virtualization container and executing the identified lifecycle code module within the virtualization container. 10. The computer-implemented method of claim 1, wherein calling a function of a user lifecycle code module comprises making an API call to a remote third-party application corresponding to the user lifecycle code module. 11. 
The computer-implemented method of claim 1, wherein the user lifecycle event is an addition of the first user, and the function is determining whether the first user already exists in the third-party applications. 12. A computer-implemented method of a server, the method comprising: accessing a plurality of lifecycle code modules corresponding to a plurality of third-party applications, the lifecycle code modules implementing user lifecycle functions defined by a user lifecycle application programming interface (API) for the plurality of third-party applications; identifying an occurrence of a user lifecycle event associated with a first user and an organization; mapping the user lifecycle event to a function of the user lifecycle API associated with the user lifecycle event; and for each third-party application of a plurality of the third-party applications: identifying a lifecycle code module corresponding to the third-party application, and calling a function of the identified lifecycle code module, thereby making a change to data of the user in the third-party application. 13. The computer-implemented method of claim 12, wherein the lifecycle code modules are received from a plurality of application creators that created the plurality of third-party applications. 14. The computer-implemented method of claim 12, wherein the user lifecycle event is triggered by a request of the first user in a user interface. 15. The computer-implemented method of claim 12, wherein the user lifecycle event is triggered by a request of an administrator of the organization. 16. The computer-implemented method of claim 12, wherein the user lifecycle event is triggered by a user password rotation policy evaluated by the server. 17. The computer-implemented method of claim 12, wherein the server identifies the user and the organization based on a prior login of the user into a user interface corresponding to the organization. 18. 
The computer-implemented method of claim 12, further comprising: identifying an occurrence of a second user lifecycle event associated with a second user different from the user; for a first application of the plurality of third-party applications, calling a function of a second lifecycle code module corresponding to the first application, the second lifecycle code module being different from a first lifecycle code module for which the function was called for the first application and for the first user. 19. The computer-implemented method of claim 12, wherein calling the function comprises instantiating a virtualization container and executing the identified lifecycle code module within the virtualization container. 20. The computer-implemented method of claim 12, wherein calling a function of a user lifecycle code module comprises making an API call to a remote third-party application corresponding to the user lifecycle code module.
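The dispatch flow recited in claims 1 and 12 — map a user lifecycle event to a function of the lifecycle API, then call that function on the lifecycle code module of each third-party application the user can access — can be sketched as follows. This is a minimal illustration, not the patented implementation: the event names, the `LifecycleModule` class, and the `dispatch_event` helper are all invented for the example.

```python
# Hypothetical sketch of the claimed dispatch flow: map a user lifecycle
# event to a function of the lifecycle API, then call that function on the
# lifecycle code module of each third-party application the user can access.
# All names (events, functions, apps) are invented for illustration.

EVENT_TO_FUNCTION = {
    "user_added": "create_user",
    "user_removed": "deactivate_user",
    "password_rotated": "reset_password",
}

class LifecycleModule:
    """Stands in for an application creator's lifecycle code module."""
    def __init__(self, app_name):
        self.app_name = app_name
        self.calls = []  # record of (function, user) calls for inspection

    def create_user(self, user):
        self.calls.append(("create_user", user))

    def deactivate_user(self, user):
        self.calls.append(("deactivate_user", user))

    def reset_password(self, user):
        self.calls.append(("reset_password", user))

def dispatch_event(event, user, granted_apps, modules):
    """Call the mapped lifecycle function on each granted app's module."""
    func_name = EVENT_TO_FUNCTION[event]
    for app in granted_apps:  # only apps the organization granted the user
        module = modules[app]
        getattr(module, func_name)(user)  # e.g. module.create_user(user)

modules = {name: LifecycleModule(name) for name in ("crm", "wiki", "mail")}
dispatch_event("user_added", "alice", ["crm", "mail"], modules)
```

Per claims 9 and 19, a server could instead execute each identified module inside a virtualization container rather than calling it in-process as this sketch does.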
2,400
8,138
8,138
14,675,558
2,421
An exemplary method includes a media channel navigation user interface system detecting, while media content distributed on a media channel is being presented for display on a display screen, a user request to launch a media channel navigation user interface, and providing, in response to the detected user request and for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane comprising a set of media channel navigation tools that include at least media content channel navigation tools selectable by a user to launch different types of menus of media channels in the media channel navigation user interface pane.
1. A method comprising: detecting, by a media channel navigation user interface system while media content distributed on a media channel is being presented for display on a display screen, a user request to launch a media channel navigation user interface; and providing, by the media channel navigation user interface system in response to the detected user request and for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane comprising a set of media channel navigation tools that include a first media channel navigation tool selectable by a user to launch a first type of menu of media channels in the media channel navigation user interface pane, and a second media channel navigation tool selectable by the user to launch a second type of menu of media channels in the media channel navigation user interface pane, the second type of menu of media channels different from the first type of menu of media channels. 2. The method of claim 1, wherein the first type of menu of media channels comprises one of a recency-based menu of media channels, a similarity-based menu of media channels, and an alphabet-based menu of media channels. 3. The method of claim 2, wherein the second type of menu of media channels comprises a different one of the recency-based menu of media channels, the similarity-based menu of media channels, and the alphabet-based menu of media channels. 4. The method of claim 1, wherein the set of media channel navigation tools further include a third media channel navigation tool selectable by the user to launch a third type of menu of media channels in the media channel navigation user interface pane, the third type of menu of media channels different from the second type of menu of media channels and the first type of menu of media channels. 5. 
The method of claim 4, wherein: the first type of menu of media channels comprises a recency-based menu of media channels; the second type of menu of media channels comprises a similarity-based menu of media channels; and the third type of menu of media channels comprises an alphabet-based menu of media channels. 6. The method of claim 1, further comprising: detecting, by the media channel navigation user interface system, a selection of the first media channel navigation tool in the media channel navigation user interface pane; and providing, by the media channel navigation user interface system in response to the detected selection of the first media channel navigation tool, the first type of menu of media channels for concurrent display with the set of media channel navigation tools in the media channel navigation user interface pane. 7. The method of claim 6, wherein: the first type of menu of media channels comprises a recency-based menu of media channels; and the providing of the first type of menu of media channels for display comprises selecting one or more media channels, from a history of accessed media channels, for inclusion in the recency-based menu of media channels based on a set of predefined recency-based channel selection factors. 8. The method of claim 6, further comprising: detecting, by the media channel navigation user interface system, a selection of a media channel included in the first type of menu of media channels; and providing, by the media channel navigation user interface system in response to the detected selection of the media channel, content related to the media channel for display within the first type of menu of media channels in the media channel navigation user interface pane. 9. The method of claim 1, embodied as computer-executable instructions on at least one non-transitory computer-readable medium. 10. 
A method comprising: accessing, by a media content access system, a media channel; providing, by the media content access system for display on a display screen, a presentation of media content distributed on the accessed media channel; and providing, by the media content access system for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane that includes a menu of media channel navigation tools comprising a recent-channel navigation tool selectable by a user to launch, in the media channel navigation user interface pane, a recency-based menu of one or more media channels selected for inclusion in the recency-based menu based on recency of access of the one or more media channels by the media content access system prior to the launch of the recency-based menu, and a similar-channel navigation tool selectable by the user to launch, in the media channel navigation user interface pane, a similarity-based menu of one or more media channels selected for inclusion in the similarity-based menu based on at least one attribute shared with the accessed media channel. 11. The method of claim 10, wherein the set of media channel navigation tools further include an alphabet-based channel navigation tool selectable by the user to launch, in the media channel navigation user interface pane, an alphabet-based menu of one or more media channels selected for inclusion in the alphabet-based menu based on alphabetical similarity to the accessed media channel. 12. 
The method of claim 10, further comprising: detecting, by the media channel navigation user interface system, a selection of the recent-channel navigation tool in the media channel navigation user interface pane; and providing, by the media channel navigation user interface system in response to the detected selection of the recent-channel navigation tool, the recency-based menu of one or more media channels for concurrent display with the set of media channel navigation tools in the media channel navigation user interface pane. 13. The method of claim 12, further comprising: detecting, by the media channel navigation user interface system, a selection of a media channel included in the recency-based menu of one or more media channels; and providing, by the media channel navigation user interface system in response to the detected selection of the media channel and for display in the recency-based menu, informational content about a media program currently being distributed on the selected media channel. 14. The method of claim 10, embodied as computer-executable instructions on at least one non-transitory computer-readable medium. 15. 
A system comprising: at least one physical computing device that: detects, while media content distributed on a media channel is being presented for display on a display screen, a user request to launch a media channel navigation user interface; and provides, in response to the detected user request and for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane comprising a set of media channel navigation tools that include a first media channel navigation tool selectable by a user to launch a first type of menu of media channels in the media channel navigation user interface pane, and a second media channel navigation tool selectable by the user to launch a second type of menu of media channels in the media channel navigation user interface pane, the second type of menu of media channels different from the first type of menu of media channels. 16. The system of claim 15, wherein the first type of menu of media channels comprises one of a recency-based menu of media channels, a similarity-based menu of media channels, and an alphabet-based menu of media channels. 17. The system of claim 16, wherein the second type of menu of media channels comprises a different one of the recency-based menu of media channels, the similarity-based menu of media channels, and the alphabet-based menu of media channels. 18. The system of claim 15, wherein the set of media channel navigation tools further include a third media channel navigation tool selectable by the user to launch a third type of menu of media channels in the media channel navigation user interface pane, the third type of menu of media channels different from the second type of menu of media channels and the first type of menu of media channels. 19. 
The system of claim 18, wherein: the first type of menu of media channels comprises a recency-based menu of media channels; the second type of menu of media channels comprises a similarity-based menu of media channels; and the third type of menu of media channels comprises an alphabet-based menu of media channels. 20. The system of claim 15, further comprising: detecting, by the media channel navigation user interface system, a selection of the first media channel navigation tool in the media channel navigation user interface pane; and providing, by the media channel navigation user interface system in response to the detected selection of the first media channel navigation tool, the first type of menu of media channels for concurrent display with the set of media channel navigation tools in the media channel navigation user interface pane.
An exemplary method includes a media channel navigation user interface system detecting, while media content distributed on a media channel is being presented for display on a display screen, a user request to launch a media channel navigation user interface, and providing, in response to the detected user request and for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane comprising a set of media channel navigation tools that include at least media content channel navigation tools selectable by a user to launch different types of menus of media channels in the media channel navigation user interface pane. 1. A method comprising: detecting, by a media channel navigation user interface system while media content distributed on a media channel is being presented for display on a display screen, a user request to launch a media channel navigation user interface; and providing, by the media channel navigation user interface system in response to the detected user request and for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane comprising a set of media channel navigation tools that include a first media channel navigation tool selectable by a user to launch a first type of menu of media channels in the media channel navigation user interface pane, and a second media channel navigation tool selectable by the user to launch a second type of menu of media channels in the media channel navigation user interface pane, the second type of menu of media channels different from the first type of menu of media channels. 2. The method of claim 1, wherein the first type of menu of media channels comprises one of a recency-based menu of media channels, a similarity-based menu of media channels, and an alphabet-based menu of media channels. 3. 
The method of claim 2, wherein the second type of menu of media channels comprises a different one of the recency-based menu of media channels, the similarity-based menu of media channels, and the alphabet-based menu of media channels. 4. The method of claim 1, wherein the set of media channel navigation tools further include a third media channel navigation tool selectable by the user to launch a third type of menu of media channels in the media channel navigation user interface pane, the third type of menu of media channels different from the second type of menu of media channels and the first type of menu of media channels. 5. The method of claim 4, wherein: the first type of menu of media channels comprises a recency-based menu of media channels; the second type of menu of media channels comprises a similarity-based menu of media channels; and the third type of menu of media channels comprises an alphabet-based menu of media channels. 6. The method of claim 1, further comprising: detecting, by the media channel navigation user interface system, a selection of the first media channel navigation tool in the media channel navigation user interface pane; and providing, by the media channel navigation user interface system in response to the detected selection of the first media channel navigation tool, the first type of menu of media channels for concurrent display with the set of media channel navigation tools in the media channel navigation user interface pane. 7. The method of claim 6, wherein: the first type of menu of media channels comprises a recency-based menu of media channels; and the providing of the first type of menu of media channels for display comprises selecting one or more media channels, from a history of accessed media channels, for inclusion in the recency-based menu of media channels based on a set of predefined recency-based channel selection factors. 8. 
The method of claim 6, further comprising: detecting, by the media channel navigation user interface system, a selection of a media channel included in the first type of menu of media channels; and providing, by the media channel navigation user interface system in response to the detected selection of the media channel, content related to the media channel for display within the first type of menu of media channels in the media channel navigation user interface pane. 9. The method of claim 1, embodied as computer-executable instructions on at least one non-transitory computer-readable medium. 10. A method comprising: accessing, by a media content access system, a media channel; providing, by the media content access system for display on a display screen, a presentation of media content distributed on the accessed media channel; and providing, by the media content access system for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane that includes a menu of media channel navigation tools comprising a recent-channel navigation tool selectable by a user to launch, in the media channel navigation user interface pane, a recency-based menu of one or more media channels selected for inclusion in the recency-based menu based on recency of access of the one or more media channels by the media content access system prior to the launch of the recency-based menu, and a similar-channel navigation tool selectable by the user to launch, in the media channel navigation user interface pane, a similarity-based menu of one or more media channels selected for inclusion in the similarity-based menu based on at least one attribute shared with the accessed media channel. 11. 
The method of claim 10, wherein the set of media channel navigation tools further include an alphabet-based channel navigation tool selectable by the user to launch, in the media channel navigation user interface pane, an alphabet-based menu of one or more media channels selected for inclusion in the alphabet-based menu based on alphabetical similarity to the accessed media channel. 12. The method of claim 10, further comprising: detecting, by the media channel navigation user interface system, a selection of the recent-channel navigation tool in the media channel navigation user interface pane; and providing, by the media channel navigation user interface system in response to the detected selection of the recent-channel navigation tool, the recency-based menu of one or more media channels for concurrent display with the set of media channel navigation tools in the media channel navigation user interface pane. 13. The method of claim 12, further comprising: detecting, by the media channel navigation user interface system, a selection of a media channel included in the recency-based menu of one or more media channels; and providing, by the media channel navigation user interface system in response to the detected selection of the media channel and for display in the recency-based menu, informational content about a media program currently being distributed on the selected media channel. 14. The method of claim 10, embodied as computer-executable instructions on at least one non-transitory computer-readable medium. 15. 
A system comprising: at least one physical computing device that: detects, while media content distributed on a media channel is being presented for display on a display screen, a user request to launch a media channel navigation user interface; and provides, in response to the detected user request and for concurrent display with the presentation of the media content on the display screen, a media channel navigation user interface pane comprising a set of media channel navigation tools that include a first media channel navigation tool selectable by a user to launch a first type of menu of media channels in the media channel navigation user interface pane, and a second media channel navigation tool selectable by the user to launch a second type of menu of media channels in the media channel navigation user interface pane, the second type of menu of media channels different from the first type of menu of media channels. 16. The system of claim 15, wherein the first type of menu of media channels comprises one of a recency-based menu of media channels, a similarity-based menu of media channels, and an alphabet-based menu of media channels. 17. The system of claim 16, wherein the second type of menu of media channels comprises a different one of the recency-based menu of media channels, the similarity-based menu of media channels, and the alphabet-based menu of media channels. 18. The system of claim 15, wherein the set of media channel navigation tools further include a third media channel navigation tool selectable by the user to launch a third type of menu of media channels in the media channel navigation user interface pane, the third type of menu of media channels different from the second type of menu of media channels and the first type of menu of media channels. 19. 
The system of claim 18, wherein: the first type of menu of media channels comprises a recency-based menu of media channels; the second type of menu of media channels comprises a similarity-based menu of media channels; and the third type of menu of media channels comprises an alphabet-based menu of media channels. 20. The system of claim 15, further comprising: detecting, by the media channel navigation user interface system, a selection of the first media channel navigation tool in the media channel navigation user interface pane; and providing, by the media channel navigation user interface system in response to the detected selection of the first media channel navigation tool, the first type of menu of media channels for concurrent display with the set of media channel navigation tools in the media channel navigation user interface pane.
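The three menu types recited in these claims — recency-based, similarity-based, and alphabet-based menus of media channels — can be sketched as simple selection functions. All channel data and ranking rules below are assumptions for illustration; the claims name only the selection factors (recency of access, a shared attribute, alphabetical similarity), not any particular logic.

```python
# Illustrative selectors for the three claimed menu types. The specific
# ranking rules (dedup-by-recency, shared attribute sets, shared-prefix
# length) are assumptions, not taken from the claims.

def recency_menu(history, limit=3):
    """Most recently accessed channels first, deduplicated."""
    seen, menu = set(), []
    for ch in reversed(history):  # newest access is at the end of history
        if ch not in seen:
            seen.add(ch)
            menu.append(ch)
        if len(menu) == limit:
            break
    return menu

def similarity_menu(current, attributes):
    """Channels sharing at least one attribute with the current channel."""
    target = attributes[current]
    return sorted(ch for ch, attrs in attributes.items()
                  if ch != current and attrs & target)

def alphabet_menu(current, channels, limit=3):
    """Channels ranked by length of shared name prefix with the current one."""
    def shared_prefix(ch):
        n = 0
        for a, b in zip(ch.lower(), current.lower()):
            if a != b:
                break
            n += 1
        return n
    others = [c for c in channels if c != current]
    return sorted(others, key=shared_prefix, reverse=True)[:limit]

menu = recency_menu(["ESPN", "HBO", "CNN", "HBO"])  # ["HBO", "CNN", "ESPN"]
```

Dependent claim 7 leaves the "predefined recency-based channel selection factors" open; the deduplicated most-recent-first ordering above is just one plausible choice.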
2,400
8,139
8,139
15,049,663
2,467
Detecting uplink/downlink time-division duplexed (TDD) frame configurations in TDD communications signals to synchronize uplink communications from TDD communications units. In one example, embodiments disclosed herein involve detecting uplink/downlink time-division duplexed (TDD) frame configurations employed in downlink TDD communications signals transmitted from a TDD base station. The TDD base station may be configured to provide TDD communications according to a TDD frame to a distributed antenna system. The detected uplink/downlink TDD frame configuration of the downlink TDD communications signals can be used to determine time periods in the TDD frame when downlink communications transmissions are intended and uplink communications transmissions are intended. In this manner, a TDD distributed communications unit can synchronize transmission circuitry transmitting uplink TDD communications signals to the TDD base station in a different time slot(s) from reception of downlink TDD communication signals from the TDD base station to avoid or reduce data loss.
1. A time-division duplexed (TDD) communications unit, comprising: a TDD communications signal interface configured to receive a downlink TDD communications signal and an uplink TDD communications signal over a communications medium; an uplink transmitter circuit coupled to the TDD communications signal interface, the uplink transmitter circuit configured to transmit the uplink TDD communications signal over the communications medium during at least one uplink frame period of a TDD frame based on a received uplink transmission control signal; a downlink receiver circuit coupled to the TDD communications signal interface, the downlink receiver circuit configured to be deactivated to not sample the downlink TDD communications signal during at least one uplink frame period of the TDD frame based on a received downlink reception control signal; and a controller configured to: detect an uplink/downlink TDD frame configuration of the TDD frame; determine at least one uplink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration; generate the uplink transmission control signal based on the determined at least one uplink frame period in the TDD frame; and generate the downlink reception control signal based on the determined at least one uplink frame period in the TDD frame. 2. 
Detecting uplink/downlink time-division duplexed (TDD) frame configurations in TDD communications signals to synchronize uplink communications from TDD communications units. In one example, embodiments disclosed herein involve detecting uplink/downlink time-division duplexed (TDD) frame configurations employed in downlink TDD communications signals transmitted from a TDD base station. The TDD base station may be configured to provide TDD communications according to a TDD frame to a distributed antenna system. The detected uplink/downlink TDD frame configuration of the downlink TDD communications signals can be used to determine time periods in the TDD frame when downlink communications transmissions are intended and uplink communications transmissions are intended. In this manner, a TDD distributed communications unit can synchronize transmission circuitry transmitting uplink TDD communications signals to the TDD base station in a different time slot(s) from reception of downlink TDD communication signals from the TDD base station to avoid or reduce data loss. 1. 
A time-division duplexed (TDD) communications unit, comprising: a TDD communications signal interface configured to receive a downlink TDD communications signal and an uplink TDD communications signal over a communications medium; an uplink transmitter circuit coupled to the TDD communications signal interface, the uplink transmitter circuit configured to transmit the uplink TDD communications signal over the communications medium during at least one uplink frame period of a TDD frame based on a received uplink transmission control signal; a downlink receiver circuit coupled to the TDD communications signal interface, the downlink receiver circuit configured to be deactivated to not sample the downlink TDD communications signal during at least one uplink frame period of the TDD frame based on a received downlink reception control signal; and a controller configured to: detect an uplink/downlink TDD frame configuration of the TDD frame; determine at least one uplink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration; generate the uplink transmission control signal based on the determined at least one uplink frame period in the TDD frame; and generate the downlink reception control signal based on the determined at least one uplink frame period in the TDD frame. 2. 
The TDD communications unit of claim 1, wherein: the uplink transmitter circuit is further configured to not transmit the uplink TDD communications signal over the communications medium during at least one downlink frame period of a TDD frame based on the received uplink transmission control signal; the downlink receiver circuit further configured to be activated to receive the downlink TDD communications signal during the at least one downlink frame period of the TDD frame based on the received downlink reception control signal; wherein the controller is further configured to: determine at least one downlink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration; generate the downlink reception control signal based on the determined at least one downlink frame period in the TDD frame; and generate the uplink transmission control signal based on the determined at least one downlink frame period in the TDD frame. 3. The TDD communications unit of claim 1, further comprising a power detector comprising a power detector input coupled to the communications medium, the power detector configured to generate a power detector output representing detected power on the communications medium; and wherein the controller is configured to detect the uplink/downlink TDD frame configuration of the TDD frame by being configured to detect the uplink/downlink TDD frame configuration of the TDD frame based on the power detector output received on the controller input from the power detector. 4. The TDD communications unit of claim 3, wherein the power detector is further configured to detect downlink power in a first subframe of the TDD frame on the communications medium. 5. The TDD communications unit of claim 1, wherein the downlink reception control signal is comprised of the uplink transmission control signal. 6. 
The TDD communications unit of claim 1, wherein the controller is further configured to continuously: detect the uplink/downlink TDD frame configuration of the TDD frame; and determine the at least one uplink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration. 7. The TDD communications unit of claim 1, wherein the controller is further configured to determine the at least one uplink frame period in the TDD frame, by being configured to detect at least one transition in the TDD frame. 8. The TDD communications unit of claim 7, wherein the controller is configured to determine the at least one uplink frame period in the TDD frame, by being configured to detect at least one transition from the at least one uplink frame period to at least one downlink frame period in the TDD frame. 9. The TDD communications unit of claim 7, wherein the controller is further configured to determine the at least one uplink frame period in the TDD frame, by being configured to detect at least one transition from at least one downlink frame period to the at least one uplink frame period in the TDD frame. 10. The TDD communications unit of claim 7, wherein the controller is configured to create a TDD frame timing pattern from the detected uplink/downlink TDD frame configuration and the detected at least one transition in the TDD frame. 11. The TDD communications unit of claim 10, wherein the controller is further configured to synchronize the TDD frame timing pattern with the TDD frame, to determine the at least one uplink frame period in the TDD frame. 12. The TDD communications unit of claim 1, wherein: the TDD communications signal interface is configured to receive a downlink Long Term Evolution (LTE) TDD communications signal over the communications medium and an uplink Long Term Evolution (LTE) TDD communications signal over the communications medium; wherein the TDD frame is comprised of a LTE TDD frame. 13. 
The TDD communications unit of claim 12, wherein the controller is further configured to detect the uplink/downlink TDD frame configuration of the LTE TDD frame based on a non-transmission duration on the communications medium being greater than one (1) LTE sub-frame in the LTE TDD frame. 14. The TDD communications unit of claim 13, wherein the controller is further configured to detect the uplink/downlink TDD frame configuration of the LTE TDD frame based on having one (1) non-transmission duration, if a number of the non-transmission duration in the LTE TDD frame is one (1). 15. The TDD communications unit of claim 13, wherein the controller is further configured to detect the uplink/downlink TDD frame configuration of the LTE TDD frame based on having two (2) non-transmission durations, if a number of the non-transmission duration in the LTE TDD frame is two (2). 16. The TDD communications unit of claim 1, wherein the TDD communications signal interface is configured to receive the downlink TDD communications signal from a TDD base station over a coaxial cable communications medium. 17. The TDD communications unit of claim 1, wherein the TDD communications signal interface is configured to receive a downlink TDD communications signal over the communications medium from a TDD base station. 18. 
A method for synchronizing time-division duplexed (TDD) downlink and uplink communications with a TDD communications unit, comprising: receiving a downlink TDD communications signal having a TDD frame; detecting an uplink/downlink TDD frame configuration of the TDD frame; determining at least one uplink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration; generating an uplink transmission control signal based on the determined at least one uplink frame period in the TDD frame; generating a downlink reception control signal based on the determined at least one uplink frame period in the TDD frame; transmitting an uplink TDD communications signal from an uplink transmitter circuit over a communications medium during the at least one uplink frame period in the TDD frame based on receiving the uplink transmission control signal; and deactivating a downlink receiver circuit to not sample the downlink TDD communications signal during at least one uplink frame period of the TDD frame based on receiving the downlink reception control signal. 19. The method of claim 18, further comprising: determining at least one downlink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration; generating the downlink reception control signal based on the determined at least one downlink frame period in the TDD frame; generating the uplink transmission control signal based on the determined at least one downlink frame period in the TDD frame; not transmitting the uplink TDD communications signal from the uplink transmitter circuit over the communications medium during the at least one downlink frame period of a TDD frame based on the received uplink transmission control signal; and receiving the downlink TDD communications signal in a downlink receiver circuit during the at least one downlink frame period of the TDD frame based on the received downlink reception control signal. 20. 
The method of claim 18, further comprising: detecting power on the communications medium in a power detector at a power detector input coupled to the communications medium; and generating a power detector output from the power detector detecting power on the communications medium; wherein detecting the uplink/downlink TDD frame configuration of the TDD frame comprises detecting the uplink/downlink TDD frame configuration of the TDD frame based on the power detector output received on the controller input from the power detector. 21. The method of claim 18, further comprising continuously: receiving the downlink TDD communications signal having the TDD frame; detecting the uplink/downlink TDD frame configuration of the TDD frame; and determining the at least one uplink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration. 22. The method of claim 18, wherein determining the at least one uplink frame period in the TDD frame further comprises detecting at least one transition in the TDD frame. 23. The method of claim 18, further comprising creating a TDD frame timing pattern from the detected uplink/downlink TDD frame configuration and the detected at least one transition in the TDD frame, wherein determining the at least one uplink frame period in the TDD frame further comprises synchronizing the TDD frame timing pattern with the TDD frame. 24. 
A time-division duplexed (TDD) distributed antenna system, comprising: a head-end unit, comprising: a first TDD communications signal interface configured to receive a downlink TDD communications signal over a communications medium from a base station and distribute the downlink TDD communications signal to a plurality of remote units; a second TDD communications interface configured to receive an uplink TDD communications signal from the plurality of remote units and distribute the received uplink TDD communications signal to the base station; an uplink transmitter circuit coupled to the first TDD communications signal interface, the uplink transmitter circuit configured to transmit the received uplink TDD communications signal from at least one distributed antenna system communications medium communicatively coupling a plurality of remote units to the head-end unit, over the communications medium to the base station during at least one uplink frame period of a TDD frame based on a received uplink transmission control signal; a downlink receiver circuit coupled to the first TDD communications signal interface, the downlink receiver circuit configured to be deactivated to not sample the downlink TDD communications signal during at least one uplink frame period of the TDD frame based on a received downlink reception control signal; and a controller configured to: detect an uplink/downlink TDD frame configuration of the TDD frame; determine at least one uplink frame period in the TDD frame based on the detected uplink/downlink TDD frame configuration; generate the uplink transmission control signal based on the determined at least one uplink frame period in the TDD frame; and generate the downlink reception control signal based on the determined at least one uplink frame period in the TDD frame; each of the plurality of remote units comprising: at least one antenna configured to receive the uplink TDD communications signal from at least one TDD client device; an uplink 
transmitter circuit configured to transmit the uplink TDD communications signal over the at least one distributed antenna system communications medium to the head-end unit during at least one uplink frame period of a TDD frame, based on a received uplink transmission control signal from the head-end unit; a downlink receiver circuit configured to be deactivated to not sample the downlink TDD communications signal received from the head-end unit over the at least one distributed antenna system communications medium during the at least one uplink frame period of the TDD frame, based on a received downlink reception control signal from the head-end unit.
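Claims 1 and 13–15 recite a controller that infers the LTE TDD uplink/downlink frame configuration from activity observed on the communications medium and then enables the uplink transmitter only during uplink frame periods. The Python sketch below illustrates one hedged reading of that logic, not the patented implementation: it assumes a power detector yields one downlink-power flag per 1 ms subframe and matches the resulting pattern against the standard LTE TDD uplink-downlink configurations from 3GPP TS 36.211 (D = downlink, U = uplink, S = special subframe, treated here as carrying downlink power). The function names and the boolean-vector interface are illustrative.

```python
# Illustrative sketch (assumptions noted above): detect which LTE TDD
# uplink-downlink configuration matches the per-subframe downlink-power
# detections, then report the uplink subframes in which an uplink
# transmitter may be enabled.
# Patterns follow 3GPP TS 36.211 (D = downlink, U = uplink, S = special).

LTE_TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def detect_config(downlink_power):
    """downlink_power: list of 10 booleans, one per 1 ms subframe,
    True where the power detector saw downlink energy on the medium."""
    if len(downlink_power) != 10:
        raise ValueError("an LTE TDD frame has 10 subframes")
    for cfg, pattern in LTE_TDD_CONFIGS.items():
        # D and S subframes both carry downlink power in this model.
        expected = [ch in "DS" for ch in pattern]
        if expected == list(downlink_power):
            return cfg
    return None  # no known configuration matches the observed pattern

def uplink_subframes(cfg):
    """Subframe indices during which uplink transmission may be enabled."""
    return [i for i, ch in enumerate(LTE_TDD_CONFIGS[cfg]) if ch == "U"]
```

For example, a frame with downlink power in subframes 0–1 and 5–6 matches configuration 0, whose uplink frame periods are subframes 2–4 and 7–9; a synchronized unit would gate its uplink transmitter (and deactivate downlink sampling) to exactly those periods.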
2,400
8,140
8,140
15,948,053
2,422
An interlaced video signal can include content of different types, such as interlaced content and progressive content. The progressive content may have different cadences according to the ratio between the frame rate of the progressive content and the field rate of the interlaced video signal. Cadence analysis is performed to identify the cadence of the video signal and/or to determine field pairings when progressive content is included. As described herein, motion information (e.g. motion vectors) for blocks of fields of a video signal can be used for the cadence analysis. The use of motion information provides a robust method of performing cadence analysis.
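The motion-based cadence analysis summarized above (and recited in claims 1–5 below) can be sketched as follows, under the assumption that each block already carries one binary motion flag per field. The signature encoding, the window length, and the `min_share` threshold are illustrative choices, not values from the patent.

```python
# Illustrative sketch: concatenate per-field binary motion flags into a
# per-block cadence signature, build a histogram of signatures over all
# blocks in a field, and keep the significant (sufficiently common) ones.
from collections import Counter

def cadence_signature(motion_flags):
    """Concatenate one binary motion flag per field (1 = motion detected)
    into a signature string for a block."""
    return "".join("1" if f else "0" for f in motion_flags)

def significant_signatures(block_flags, min_share=0.25):
    """block_flags: per-block motion-flag sequences over a window of
    consecutive fields. Returns the signatures that cover at least
    min_share of the blocks, most common first."""
    hist = Counter(cadence_signature(flags) for flags in block_flags)
    total = sum(hist.values())
    return [sig for sig, n in hist.most_common() if n / total >= min_share]
```

For progressive content in a pulldown cadence, most blocks repeat the same periodic signature, so it dominates the histogram; isolated blocks with noisy motion flags fall below the threshold and are ignored, which is what makes the histogram step robust.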
1. A method of processing a video signal in a video processing unit, the video signal having an interlaced format and comprising a sequence of fields, the method comprising: determining cadence signatures for blocks within fields of the video signal, wherein the cadence signatures are indicative of one or more cadences of the video signal; creating a histogram of determined cadence signatures of said blocks for a field; and using the histogram to determine one or more significant cadence signatures in said field; whereby the fields of the video signal are processed in accordance with one or more determined significant cadence signatures to thereby determine frames of the video signal. 2. The method of claim 1, further comprising obtaining motion indicators for blocks of a plurality of fields of the video signal, wherein the obtained motion indicators are used to determine the cadence signatures of the blocks within fields of the video signal. 3. The method of claim 2, wherein the motion indicators are motion vectors. 4. The method of claim 2, wherein the motion indicators are binary flags, wherein the binary flag for a block in a particular field has a first binary value if there is substantially no motion in the block of the particular field, and wherein the binary flag for the block in the particular field has a second binary value if there is substantially some motion in the block of the particular field. 5. The method of claim 4, further comprising concatenating the binary flags for the block over a sequence of consecutive fields of the video signal to thereby determine a cadence signature for the block. 6. The method of claim 3, wherein the obtained motion vectors are used to determine a cadence signature for each of the blocks within the plurality of fields by assigning a binary flag to the block based on the obtained motion vector for the block. 7. 
The method of claim 2, wherein said obtaining the motion indicators comprises either: (i) determining the motion indicators in a motion analysis module of the video processing unit, or (ii) receiving the motion indicators which have been determined by a motion analysis module which is separate from the video processing unit. 8. The method of claim 1, wherein using the histogram to determine one or more significant cadence signatures in said field comprises determining the position of one or more peaks in the histogram for said field. 9. A video processing unit configured to process a video signal, the video signal having an interlaced format and comprising a sequence of fields, the video processing unit comprising: a cadence analysis module configured to: determine cadence signatures for blocks within fields of the video signal, wherein the cadence signatures are indicative of one or more cadences of the video signal; create a histogram of determined cadence signatures of said blocks for a field; and use the histogram to determine one or more significant cadence signatures in said field; wherein the video processing unit is configured to process the fields of the video signal in accordance with one or more determined significant cadence signatures to thereby determine frames of the video signal. 10. The video processing unit of claim 9, wherein the video signal includes video content of one or more type from a set of available video content types which have different cadences, said set of available video content types including an interlaced content type and a progressive content type, and wherein the cadence analysis module is further configured to: use the determined cadence signatures to determine the one or more types of the video content in the video signal. 11. 
The video processing unit of claim 10, wherein the cadence analysis module is configured to: determine a cadence signature for each of the blocks in said field; and assign each of the blocks in said field to one of the video content types based on the determined cadence signature for that block. 12. The video processing unit of claim 11, wherein the cadence analysis module is further configured to: determine the number of blocks in said field which are assigned to the progressive content type; wherein said histogram is a histogram of the determined cadence signatures of the blocks in said field which are assigned to the progressive content type, and wherein the cadence analysis module is configured to create said histogram responsive to determining that the number of blocks in said field assigned to the progressive content type exceeds a threshold. 13. The video processing unit of claim 9, wherein the cadence analysis module is further configured to map the determined one or more significant cadence signatures to one or more cadences of the video signal. 14. The video processing unit of claim 13, wherein the one or more cadences are one or more known cadences from a set of known cadences which have known cadence signatures, and wherein the cadence analysis module is configured to map each of the one or more significant cadence signatures to a known cadence by determining which of the known cadence signatures of the known cadences has the greatest similarity with the significant cadence signature. 15. The video processing unit of claim 14, wherein the cadence analysis module is configured to determine which of the known cadence signatures of the known cadences has the greatest similarity with the significant cadence signature by determining which of the known cadence signatures of the known cadences has a smallest distance to the significant cadence signature. 16. The video processing unit of claim 15, wherein the smallest distance is a smallest Hamming distance. 
17. The video processing unit of claim 11, wherein the cadence analysis module is further configured to, for each block of a particular field: determine a respective measure of the similarity between the cadence signature of the block and each of: (i) the known cadence signatures of the known cadences, (ii) a cadence signature indicative of interlaced content, and (iii) a cadence signature indicative of static content; and assign, to the block, the cadence or content type which has the greatest similarity according to the determined measures of similarity. 18. The video processing unit of claim 9, wherein the cadence analysis module is further configured to use the determined cadence signatures to derive field pairings of consecutive fields which relate to the same time instance of the video content. 19. The video processing unit of claim 18, wherein the motion indicators are binary flags, wherein the binary flag for a block in a particular field has a first binary value if there is substantially no motion in the block of the particular field, and wherein the binary flag for the block in the particular field has a second binary value if there is substantially some motion in the block of the particular field, and wherein the cadence analysis module is configured to derive the field pairings by finding the first binary value in the most recent elements of the cadence signatures for the blocks of a field. 20. 
A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to process a video signal in a video processing unit, the video signal having an interlaced format and comprising a sequence of fields, the processing of the video signal comprising: determining cadence signatures for blocks within fields of the video signal, wherein the cadence signatures are indicative of one or more cadences of the video signal; creating a histogram of determined cadence signatures of said blocks for a field; and using the histogram to determine one or more significant cadence signatures in said field; wherein the fields of the video signal are to be processed in accordance with one or more determined significant cadence signatures to thereby determine frames of the video signal.
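Claims 14–16 map each significant cadence signature to the known cadence whose signature lies at the smallest Hamming distance. A minimal sketch of that matching step follows; the table of known signatures is hypothetical and only illustrates the shape of the lookup, since the patent does not fix particular signature values here.

```python
# Illustrative sketch of claims 14-16: classify a measured cadence
# signature by smallest Hamming distance to known cadence signatures.
# The table entries are hypothetical examples, not taken from the patent.

KNOWN_CADENCES = {
    "3:2 pulldown": "0110101101",
    "2:2 pulldown": "0101010101",
    "interlaced":   "1111111111",
}

def hamming(a, b):
    """Number of positions at which two equal-length signatures differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def classify(signature):
    """Return the known cadence whose signature has the smallest
    Hamming distance to the measured signature."""
    return min(KNOWN_CADENCES,
               key=lambda name: hamming(signature, KNOWN_CADENCES[name]))
```

Because a single flipped motion flag only moves the measured signature one Hamming step away from its true cadence, this nearest-signature lookup tolerates isolated detection errors, which is the robustness the claims rely on.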
The video processing unit of claim 11, wherein the cadence analysis module is further configured to, for each block of a particular field: determine a respective measure of the similarity between the cadence signature of the block and each of: (i) the known cadence signatures of the known cadences, (ii) a cadence signature indicative of interlaced content, and (iii) a cadence signature indicative of static content; and assign, to the block, the cadence or content type which has the greatest similarity according to the determined measures of similarity. 18. The video processing unit of claim 9, wherein the cadence analysis module is further configured to use the determined cadence signatures to derive field pairings of consecutive fields which relate to the same time instance of the video content. 19. The video processing unit of claim 18, wherein the motion indicators are binary flags, wherein the binary flag for a block in a particular field has a first binary value if there is substantially no motion in the block of the particular field, and wherein the binary flag for the block in the particular field has a second binary value if there is substantially some motion in the block of the particular field, and wherein the cadence analysis module is configured to derive the field pairings by finding the first binary value in the most recent elements of the cadence signatures for the blocks of a field. 20. 
A non-transitory computer readable storage medium having stored thereon processor executable instructions that when executed cause at least one processor to process a video signal in a video processing unit, the video signal having an interlaced format and comprising a sequence of fields, the processing of the video signal comprising: determining cadence signatures for blocks within fields of the video signal, wherein the cadence signatures are indicative of one or more cadences of the video signal; creating a histogram of determined cadence signatures of said blocks for a field; and using the histogram to determine one or more significant cadence signatures in said field; wherein the fields of the video signal are to be processed in accordance with one or more determined significant cadence signatures to thereby determine frames of the video signal.
2,400
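The cadence-analysis claims above (notably claims 1, 5, 8, and 14-16) describe concatenating per-field binary motion flags into a per-block signature, building a histogram of the signatures in a field, taking the histogram peaks as significant signatures, and mapping each peak to the known cadence with the smallest Hamming distance. The following is a minimal Python sketch of that pipeline, not the patented implementation; the 10-field window and the signature patterns in `KNOWN_CADENCES` are hypothetical illustrations.

```python
from collections import Counter

# Hypothetical known cadence signatures over a 10-field window:
# 3:2 pulldown repeats a 5-field motion pattern; 2:2 alternates every field.
KNOWN_CADENCES = {
    "3:2": "1001110011",
    "2:2": "1010101010",
}

def block_signature(motion_flags):
    """Concatenate a block's binary motion flags over consecutive fields (claim 5)."""
    return "".join("1" if f else "0" for f in motion_flags)

def hamming(a, b):
    """Hamming distance between two equal-length signatures (claim 16)."""
    return sum(x != y for x, y in zip(a, b))

def significant_signatures(signatures, top=1):
    """Histogram the per-block signatures for a field and return its peaks (claims 1, 8)."""
    hist = Counter(signatures)
    return [sig for sig, _ in hist.most_common(top)]

def classify(signature):
    """Map a signature to the known cadence with the greatest similarity,
    i.e. the smallest Hamming distance (claims 14-15)."""
    return min(KNOWN_CADENCES, key=lambda c: hamming(signature, KNOWN_CADENCES[c]))
```

For a field where most blocks carry the 3:2 pattern, `significant_signatures` returns that pattern as the peak and `classify` maps it to `"3:2"`; blocks whose flags are all zero would instead indicate static content, which claim 17 treats as a separate content type.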
8,141
8,141
15,061,159
2,444
A first information processing apparatus includes a first message acceptance unit which accepts input of a first message and a first message transmission unit which transmits the accepted first message and first character information to a server. A second information processing apparatus includes a second message acceptance unit which accepts input of a second message and a second message transmission unit which transmits the accepted second message and second character information to the server. The first information processing apparatus further includes a representation output unit which has a display unit display in chronological order, the first message brought in correspondence with a first character based on the first character information and the second message brought in correspondence with a second character based on the second character information obtained through the server and a reproduction output unit which provides audio output of the first message and the second message.
1. An information processing system in which a plurality of information processing apparatuses communicate data through a server, a first information processing apparatus including a first message acceptance unit which accepts input of a first message input by a user who operates the first information processing apparatus, and a first message transmission unit which transmits the accepted first message and first character information to the server, a second information processing apparatus including a second message acceptance unit which accepts input of a second message input by a user who operates the second information processing apparatus, and a second message transmission unit which transmits the accepted second message and second character information to the server, the first information processing apparatus further including a representation output unit which has a display unit display in chronological order, the first message brought in correspondence with a first character based on the first character information and the second message brought in correspondence with a second character based on the second character information obtained through the server, and a reproduction output unit which provides audio output of the first message and the second message. 2. The information processing system according to claim 1, wherein the reproduction output unit performs processing for displaying a character based on a result of analysis of contents of at least any of the first message and the second message and corresponding character information. 3. The information processing system according to claim 2, wherein the reproduction output unit performs processing for displaying the character based on the result of analysis and the corresponding character information while audio output of the first message and the second message is provided. 4. 
The information processing system according to claim 1, wherein the reproduction output unit performs processing for displaying at least any of the first message and the second message based on a result of analysis of contents of at least any of the first message and the second message. 5. The information processing system according to claim 1, wherein the reproduction output unit provides audio output of at least any of the first message and the second message based on a result of analysis of contents of at least any of the first message and the second message. 6. The information processing system according to claim 2, wherein the result of analysis is a result of analysis for each divided unit resulting from division of contents of at least any of the first message and the second message into prescribed units. 7. The information processing system according to claim 1, wherein messages displayed in chronological order on the display unit are provided such that scroll display of the messages can be provided in response to an operation by the user, and the reproduction output unit determines whether or not the first or second message is displayed in a screen on the display unit and successively provides audio output of the messages displayed in the screen. 8. The information processing system according to claim 1, wherein the character is an avatar representing the user. 9. The information processing system according to claim 8, wherein the reproduction output unit provides animated representation of the avatar. 10. The information processing system according to claim 9, wherein the reproduction output unit provides animated representation of the avatar by selecting one of a plurality of operation patterns. 11. The information processing system according to claim 1, wherein the reproduction output unit sequentially provides audio output of the messages displayed in chronological order on the display unit. 12. 
The information processing system according to claim 11, wherein the reproduction output unit sequentially and repeatedly provides audio output of the messages displayed on the display unit. 13. The information processing system according to claim 11, wherein the reproduction output unit provides scroll display of messages not displayed on the display unit and sequentially and repeatedly provides audio output of the messages of which scroll display is provided. 14. The information processing system according to claim 1, wherein the reproduction output unit obtains a result of analysis of message contents obtained through the server, which corresponds to a reproduction position in audio output of the message. 15. The information processing system according to claim 14, wherein the reproduction output unit performs processing for displaying the message based on the obtained result of analysis. 16. The information processing system according to claim 14, wherein the reproduction output unit performs processing for displaying a character based on the obtained result of analysis and corresponding character information. 17. 
An information processing system in which a plurality of information processing apparatuses can communicate data through a server, a first information processing apparatus including a first message acceptance unit which accepts input of a first message input by a user who operates the first information processing apparatus, and a first message transmission unit which transmits the accepted first message and first character information to the server, a second information processing apparatus including a second message acceptance unit which accepts input of a second message input by a user who operates the second information processing apparatus, and a second message transmission unit which transmits the accepted second message and second character information to the server, the first information processing apparatus further including a representation output unit which has a display unit display in chronological order, the first message brought in correspondence with a first character based on the first character information and the second message brought in correspondence with a second character based on the second character information obtained through the server, and a reproduction output unit which performs reproduction processing based on a result of analysis of contents of the first and second messages. 18. 
An information processing apparatus capable of communicating data with another information processing apparatus through a server, comprising: a message acceptance unit which accepts input of a first message input by a user who operates the information processing apparatus; a message transmission unit which transmits the accepted first message and first character information to the server; a representation output unit which has a display unit display in chronological order, a first character based on the first character information brought in correspondence with the first message, and a second message and a second character based on second character information, which have been input from another information processing apparatus and obtained through the server; and an audio output unit which provides audio output of the first message and the second message. 19. A non-transitory storage medium encoded with a computer readable program executed by a computer of an information processing apparatus capable of communicating data with another information processing apparatus through a server, the program causing the computer of the information processing apparatus to function as: a message acceptance unit which accepts input of a first message input by a user who operates the information processing apparatus; a message transmission unit which transmits the accepted first message and first character information to the server; a representation output unit which has a display unit display in chronological order, a first character based on the first character information brought in correspondence with the first message, and a second message and a second character based on second character information, which have been input from another information processing apparatus and obtained through the server; and an audio output unit which provides audio output of the first message and the second message. 20. 
A method of controlling an information processing apparatus capable of communicating data with another information processing apparatus through a server, comprising the steps of: accepting input of a first message input by a user who operates the information processing apparatus; transmitting the accepted first message and first character information to the server; displaying in chronological order on a display unit, a first character based on the first character information brought in correspondence with the first message, and a second message and a second character based on second character information, which have been input from another information processing apparatus and obtained through the server; and providing audio output of the first message and the second message.
2,400
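The messaging claims above center on merging messages from multiple apparatuses, displaying each in chronological order alongside its character, and sequentially providing audio output of the displayed messages (claims 1 and 11). A minimal Python sketch of that ordering-and-replay behavior follows; the class and field names are hypothetical, and `speak` stands in for whatever text-to-speech backend an implementation would use.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Message:
    timestamp: float
    text: str = field(compare=False)
    character: str = field(compare=False)  # character/avatar info sent with the message

class ChatLog:
    """Merge messages from several apparatuses and replay them in chronological order."""

    def __init__(self):
        self._heap = []  # min-heap keyed on timestamp via Message ordering

    def receive(self, msg):
        heapq.heappush(self._heap, msg)

    def replay(self, speak):
        """Display each message next to its character, then hand the text to the
        audio backend, mirroring sequential audio output of displayed messages."""
        lines = []
        while self._heap:
            msg = heapq.heappop(self._heap)
            lines.append(f"[{msg.character}] {msg.text}")
            speak(msg.text)
        return lines
```

Messages arriving out of order are still displayed and spoken oldest-first, which is the chronological behavior the independent claims require.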
8,142
8,142
13,470,712
2,477
An administrative interface is provided between a first network and a second network, where the administrative interface is separate from one or more communications session signaling interfaces between the first network and second network. At least one of authorization, authentication, and accounting messages is communicated over the administrative interface. A module associated with the administrative interface is provided to perform topology hiding of the first network such that topology information of the first network is hidden from the second network.
1. A method comprising: providing an administrative interface between a first network and a second network, wherein the administrative interface is different from one or more communications session signaling interfaces between the first network and second network; communicating control messages over the one or more communications session signaling interfaces for establishing a communications session; performing first topology hiding at the one or more communications session signaling interfaces between the first and second networks; communicating authorization, authentication, and accounting messages over the administrative interface between a first module in the first network and a second module in the second network; and providing a topology hiding module implemented in a computer and associated with the administrative interface to perform second topology hiding at the administrative interface, wherein the second topology hiding is in addition to the first topology hiding, and the first topology hiding and second topology hiding are to hide topology information of the first network from the second network. 2. The method of claim 1, wherein providing the topology hiding module comprises providing an application level gateway (ALG). 3. The method of claim 2, wherein providing the ALG comprises providing a Diameter ALG. 4. The method of claim 1, wherein providing the administrative interface comprises providing the administrative interface over which Diameter messaging is exchanged for performing authorization, authentication, and accounting tasks. 5. The method of claim 1, wherein performing the first topology hiding at the one or more communications session signaling interfaces comprises performing the topology hiding at the one or more communications session signaling interfaces over which Session Initiation Protocol messaging is exchanged. 6. 
The method of claim 1, wherein performing the first topology hiding at the one or more communications session signaling interfaces is coordinated with the second topology hiding at the administrative interface. 7. The method of claim 1, wherein performing the second topology hiding comprises substituting a local address of the first network with an external address in an administrative message communicated over the administrative interface. 8. The method of claim 1, wherein performing the second topology hiding comprises encrypting a local address of the first network in an administrative message communicated over the administrative interface. 9. The method of claim 1, wherein performing the second topology hiding comprises hashing a local address of the first network in an administrative message communicated over the administrative interface. 10. The method of claim 1, wherein performing the second topology hiding comprises removing a local address of the first network from an administrative message communicated over the administrative interface. 11. The method of claim 1, wherein the first network is a visited network of a mobile station, and the second network is a home network of the mobile station, and wherein providing the topology hiding module to perform the second topology hiding comprises providing the topology hiding module in the visited network to perform the second topology hiding by modifying the authorization, authentication, and accounting messages sent on behalf of the mobile station from the visited network to the home network. 12. The method of claim 1, wherein the first module is a first policy control and charging rules function and the second module is a second policy control and charging rules function. 13. 
A system comprising: a first node in a visited network of a mobile station, comprising: at least one processor; a network interface to send administrative messages on behalf of the mobile station to a home network of the mobile station, the administrative messages comprising authorization, authentication, and accounting messages to perform respective authorization, authentication, and accounting tasks; and a module executable on the at least one processor to perform first topology hiding of the visited network such that topology information of the visited network is hidden from the home network, wherein the first topology hiding is performed by modifying the administrative messages; a second node in the visited network, comprising: at least one processor; a network interface to send control messages on behalf of the mobile station to the home network to establish a communications session; and a module executable on the at least one processor of the second node to perform second topology hiding of the visited network with respect to the control messages such that the topology information of the visited network is hidden from the home network. 14. The system of claim 13, wherein the module in the first node and the module in the second node are to coordinate the first topology hiding with the second topology hiding. 15. The system of claim 13, wherein the first topology hiding comprises one of: substituting a local address in at least one of the administrative messages with an external address; encrypting the local address in at least one of the administrative messages; hashing the local address in at least one of the administrative messages; and removing the local address from at least one of the administrative messages. 16. The system of claim 13, wherein the control messages comprise Session Initiation Protocol (SIP) messages. 17. 
The system of claim 13, wherein the administrative messages are configured to be sent over an administrative interface between the visited network and the home network, and wherein the control messages are configured to be sent over a communications session signaling interface between the visited network and the home network. 18. An article comprising at least one non-transitory computer-readable storage medium containing instructions that when executed cause a processor to: communicate administrative messages over an administrative interface between a first network and a second network, wherein the administrative interface is different from one or more communications session signaling interfaces between the first network and second network, and wherein the administrative messages are communicated to perform respective authorization task, authentication task, and accounting tasks; perform first topology hiding at the administrative interface to protect topology information of the first network such that the topology information of the first network is hidden from the second network, wherein the first topology hiding is performed by modifying the administrative messages; communicate control messages over the one or more communications session signaling interfaces for establishing a communications session; perform second topology hiding at the one or more communications session signaling interfaces between the first and second networks, where the second topology hiding is in addition to the first topology hiding. 19. The article of claim 18, wherein performing the second topology hiding at the one or more communications session signaling interfaces is coordinated with the first topology hiding at the administrative interface. 20. 
The article of claim 18, wherein the first network is a visited network of a mobile station, and the second network is a home network of the mobile station, and wherein performing the first topology hiding comprises performing the first topology hiding by modifying the authorization, authentication, and accounting messages sent on behalf of the mobile station from the visited network to the home network over the administrative interface.
An administrative interface is provided between a first network and a second network, where the administrative interface is separate from one or more communications session signaling interfaces between the first network and second network. At least one of authorization, authentication, and accounting messages is communicated over the administrative interface. A module associated with the administrative interface is provided to perform topology hiding of the first network such that topology information of the first network is hidden from the second network.1. A method comprising: providing an administrative interface between a first network and a second network, wherein the administrative interface is different from one or more communications session signaling interfaces between the first network and second network; communicating control messages over the one or more communications session signaling interfaces for establishing a communications session; performing first topology hiding at the one or more communications session signaling interfaces between the first and second networks; communicating authorization, authentication, and accounting messages over the administrative interface between a first module in the first network and a second module in the second network; and providing a topology hiding module implemented in a computer and associated with the administrative interface to perform second topology hiding at the administrative interface, wherein the second topology hiding is in addition to the first topology hiding, and the first topology hiding and second topology hiding are to hide topology information of the first network from the second network. 2. The method of claim 1, wherein providing the topology hiding module comprises providing an application level gateway (ALG). 3. The method of claim 2, wherein providing the ALG comprises providing a Diameter ALG. 4. 
The method of claim 1, wherein providing the administrative interface comprises providing the administrative interface over which Diameter messaging is exchanged for performing authorization, authentication, and accounting tasks. 5. The method of claim 1, wherein performing the first topology hiding at the one or more communications session signaling interfaces comprises performing the topology hiding at the one or more communications session signaling interfaces over which Session Initiation Protocol messaging is exchanged. 6. The method of claim 1, wherein performing the first topology hiding at the one or more communications session signaling interfaces is coordinated with the second topology hiding at the administrative interface. 7. The method of claim 1, wherein performing the second topology hiding comprises substituting a local address of the first network with an external address in an administrative message communicated over the administrative interface. 8. The method of claim 1, wherein performing the second topology hiding comprises encrypting a local address of the first network in an administrative message communicated over the administrative interface. 9. The method of claim 1, wherein performing the second topology hiding comprises hashing a local address of the first network in an administrative message communicated over the administrative interface. 10. The method of claim 1, wherein performing the second topology hiding comprises removing a local address of the first network from an administrative message communicated over the administrative interface. 11. 
The method of claim 1, wherein the first network is a visited network of a mobile station, and the second network is a home network of the mobile station, and wherein providing the topology hiding module to perform the second topology hiding comprises providing the topology hiding module in the visited network to perform the second topology hiding by modifying the authorization, authentication, and accounting messages sent on behalf of the mobile station from the visited network to the home network. 12. The method of claim 1, wherein the first module is a first policy control and charging rules function and the second module is a second policy control and charging rules function. 13. A system comprising: a first node in a visited network of a mobile station, comprising: at least one processor; a network interface to send administrative messages on behalf of the mobile station to a home network of the mobile station, the administrative messages comprising authorization, authentication, and accounting messages to perform respective authorization, authentication, and accounting tasks; and a module executable on the at least one processor to perform first topology hiding of the visited network such that topology information of the visited network is hidden from the home network, wherein the first topology hiding is performed by modifying the administrative messages; a second node in the visited network, comprising: at least one processor; a network interface to send control messages on behalf of the mobile station to the home network to establish a communications session; and a module executable on the at least one processor of the second node to perform second topology hiding of the visited network with respect to the control messages such that the topology information of the visited network is hidden from the home network. 14. 
The system of claim 13, wherein the module in the first node and the module in the second node are to coordinate the first topology hiding with the second topology hiding. 15. The system of claim 13, wherein the first topology hiding comprises one of: substituting a local address in at least one of the administrative messages with an external address; encrypting the local address in at least one of the administrative messages; hashing the local address in at least one of the administrative messages; and removing the local address from at least one of the administrative messages. 16. The system of claim 13, wherein the control messages comprise Session Initiation Protocol (SIP) messages. 17. The system of claim 13, wherein the administrative messages are configured to be sent over an administrative interface between the visited network and the home network, and wherein the control messages are configured to be sent over a communications session signaling interface between the visited network and the home network. 18. 
An article comprising at least one non-transitory computer-readable storage medium containing instructions that when executed cause a processor to: communicate administrative messages over an administrative interface between a first network and a second network, wherein the administrative interface is different from one or more communications session signaling interfaces between the first network and second network, and wherein the administrative messages are communicated to perform respective authorization, authentication, and accounting tasks; perform first topology hiding at the administrative interface to protect topology information of the first network such that the topology information of the first network is hidden from the second network, wherein the first topology hiding is performed by modifying the administrative messages; communicate control messages over the one or more communications session signaling interfaces for establishing a communications session; perform second topology hiding at the one or more communications session signaling interfaces between the first and second networks, where the second topology hiding is in addition to the first topology hiding. 19. The article of claim 18, wherein performing the second topology hiding at the one or more communications session signaling interfaces is coordinated with the first topology hiding at the administrative interface. 20. The article of claim 18, wherein the first network is a visited network of a mobile station, and the second network is a home network of the mobile station, and wherein performing the first topology hiding comprises performing the first topology hiding by modifying the authorization, authentication, and accounting messages sent on behalf of the mobile station from the visited network to the home network over the administrative interface.
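The topology-hiding transformations recited in this record's claims 7 through 10 and 15 (substituting a local address with an external address, encrypting it, hashing it, or removing it from an administrative message) can be sketched in code. This is an illustrative sketch only: the dictionary message format, the field names, and the `EXTERNAL_ADDRESS` value are assumptions for demonstration, not part of any real Diameter stack, and the encryption variant is omitted for brevity.

```python
import hashlib

# Assumed public-facing address of the visited network's gateway (hypothetical).
EXTERNAL_ADDRESS = "gw.visited.example"

def hide_topology(message: dict, local_fields: list, mode: str = "substitute") -> dict:
    """Return a copy of `message` with visited-network local addresses hidden.

    mode is one of "substitute", "hash", or "remove", mirroring three of the
    four hiding techniques named in the claims.
    """
    hidden = dict(message)  # leave the original administrative message untouched
    for field in local_fields:
        if field not in hidden:
            continue
        if mode == "substitute":
            hidden[field] = EXTERNAL_ADDRESS
        elif mode == "hash":
            # One-way hash so the home network cannot recover the local address.
            hidden[field] = hashlib.sha256(hidden[field].encode()).hexdigest()[:16]
        elif mode == "remove":
            del hidden[field]
    return hidden

msg = {"origin-host": "pcrf1.internal.visited", "session-id": "abc123"}
print(hide_topology(msg, ["origin-host"]))
# The local origin-host is replaced by the external address; other fields pass through.
```

A module at the administrative interface would apply such a transformation to each outbound message before it crosses to the home network, and a peer module at the session-signaling interface would apply a coordinated mapping so both interfaces hide the same internal addresses consistently.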
2,400
8,143
8,143
15,066,453
2,494
In an apparatus performing multi-threaded data processing, event handling circuitry receives event information from the data processing circuitry indicative of an event which has occurred during the data processing operations. Visibility configuration storage holds a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads, and the event handling circuitry adapts its use of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. This allows multi-threaded event monitoring to be supported, whilst protecting event information from a particular thread for which it is desired to limit its visibility to software of other threads.
1. Apparatus for multi-threaded data processing comprising: data processing circuitry to perform data processing operations for each thread of multiple threads; event handling circuitry to receive event information from the data processing circuitry indicative of an event which has occurred during the data processing operations; and visibility configuration storage to hold a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads, wherein the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. 2. The apparatus as claimed in claim 1, wherein each visibility configuration value is not accessible to software of threads other than the corresponding thread. 3. The apparatus as claimed in claim 1, wherein the event handling circuitry comprises at least one event counter. 4. The apparatus as claimed in claim 3, wherein the at least one event counter comprises an event counter for each thread of the multiple threads. 5. The apparatus as claimed in claim 3, wherein the at least one event counter is configurable for multi-thread event counting and the at least one event counter counts events for more than one thread. 6. The apparatus as claimed in claim 5, wherein responsive to the at least one event counter performing multi-thread event counting the at least one event counter does not count events for the thread which generated the event information when a visibility configuration value for the thread has the predetermined value. 7. The apparatus as claimed in claim 1, wherein the visibility configuration storage is arranged to hold single bit visibility configuration values. 8. 
The apparatus as claimed in claim 1, wherein the visibility configuration storage is arranged to hold multi-bit visibility configuration values, wherein the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information such that visibility of the event information for software of each of the multiple threads other than the thread which generated the event information is defined by a corresponding bit of a multi-bit visibility configuration value stored for the thread which generated the event information. 9. The apparatus as claimed in claim 1, wherein the thread which generated the event information is comprised in a group of threads, and the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information such that the event information is visible to software of threads in the group of threads and such that the event information is not visible to software of threads not comprised in the group of threads. 10. The apparatus as claimed in claim 9, wherein the group of threads is defined by a group identifier, and the event handling circuitry is responsive on receipt of the event information to use the group identifier as the visibility configuration value. 11. The apparatus as claimed in claim 1, wherein the thread which generated the event information is comprised in a group of threads, and the visibility configuration value is given by a group visibility configuration value for the group of threads. 12. The apparatus as claimed in claim 1, wherein the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information such that the event information is not visible to software of threads other than the thread which generated the event information when the visibility configuration value for the thread which generated the event information has the predetermined value. 13. 
The apparatus as claimed in claim 1, wherein the data processing circuitry is arranged to perform data processing operations for each thread of multiple threads at a selected execution level of multiple execution levels, wherein each visibility configuration value is not accessible to threads being executed at a lower execution level than the selected execution level. 14. The apparatus as claimed in claim 13, wherein the multiple execution levels comprise multiple exception levels. 15. The apparatus as claimed in claim 13, wherein the multiple execution levels comprise multiple security levels. 16. The apparatus as claimed in claim 13, wherein the apparatus is arranged to update the visibility configuration value when context switching between execution levels. 17. The apparatus as claimed in claim 1, wherein the data processing circuitry is arranged to perform the data processing operations in response to instructions and to select a subset of the instructions which it executes for profiling, the profiling being on the basis of event information generated for the subset of instructions, wherein the profiling comprises storing to a storage unit profiling data comprising the event information or further information derived from the event information, and wherein the data processing circuitry is arranged to prevent storage of the profiling data when the visibility configuration value for the thread which generated the event information has the predetermined value. 18. A data processing system comprising the apparatus as claimed in claim 1 and a storage unit. 19. 
A method of multi-threaded data processing comprising the steps of: performing data processing operations for each thread of multiple threads; storing a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads; receiving event information indicative of an event which has occurred during the data processing operations; and adapting usage of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. 20. Apparatus for multi-threaded data processing comprising: means for performing data processing operations for each thread of multiple threads; means for storing a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads; means for receiving event information from the means for performing data processing operations indicative of an event which has occurred during the data processing operations; and means for adapting usage of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. 21. A computer readable storage medium storing in a non-transient form software which when executed on a computing device causes the computing device to carry out the method of claim 19. 22. Software which when executed on a computing device causes the computing device to carry out the method of claim 19.
In an apparatus performing multi-threaded data processing, event handling circuitry receives event information from the data processing circuitry indicative of an event which has occurred during the data processing operations. Visibility configuration storage holds a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads, and the event handling circuitry adapts its use of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. This allows multi-threaded event monitoring to be supported, whilst protecting event information from a particular thread for which it is desired to limit its visibility to software of other threads.1. Apparatus for multi-threaded data processing comprising: data processing circuitry to perform data processing operations for each thread of multiple threads; event handling circuitry to receive event information from the data processing circuitry indicative of an event which has occurred during the data processing operations; and visibility configuration storage to hold a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads, wherein the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. 2. The apparatus as claimed in claim 1, wherein each visibility configuration value is not accessible to software of threads other than the corresponding thread. 3. 
The apparatus as claimed in claim 1, wherein the event handling circuitry comprises at least one event counter. 4. The apparatus as claimed in claim 3, wherein the at least one event counter comprises an event counter for each thread of the multiple threads. 5. The apparatus as claimed in claim 3, wherein the at least one event counter is configurable for multi-thread event counting and the at least one event counter counts events for more than one thread. 6. The apparatus as claimed in claim 5, wherein responsive to the at least one event counter performing multi-thread event counting the at least one event counter does not count events for the thread which generated the event information when a visibility configuration value for the thread has the predetermined value. 7. The apparatus as claimed in claim 1, wherein the visibility configuration storage is arranged to hold single bit visibility configuration values. 8. The apparatus as claimed in claim 1, wherein the visibility configuration storage is arranged to hold multi-bit visibility configuration values, wherein the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information such that visibility of the event information for software of each of the multiple threads other than the thread which generated the event information is defined by a corresponding bit of a multi-bit visibility configuration value stored for the thread which generated the event information. 9. The apparatus as claimed in claim 1, wherein the thread which generated the event information is comprised in a group of threads, and the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information such that the event information is visible to software of threads in the group of threads and such that the event information is not visible to software of threads not comprised in the group of threads. 10. 
The apparatus as claimed in claim 9, wherein the group of threads is defined by a group identifier, and the event handling circuitry is responsive on receipt of the event information to use the group identifier as the visibility configuration value. 11. The apparatus as claimed in claim 1, wherein the thread which generated the event information is comprised in a group of threads, and the visibility configuration value is given by a group visibility configuration value for the group of threads. 12. The apparatus as claimed in claim 1, wherein the event handling circuitry is responsive on receipt of the event information to adapt its use of the event information such that the event information is not visible to software of threads other than the thread which generated the event information when the visibility configuration value for the thread which generated the event information has the predetermined value. 13. The apparatus as claimed in claim 1, wherein the data processing circuitry is arranged to perform data processing operations for each thread of multiple threads at a selected execution level of multiple execution levels, wherein each visibility configuration value is not accessible to threads being executed at a lower execution level than the selected execution level. 14. The apparatus as claimed in claim 13, wherein the multiple execution levels comprise multiple exception levels. 15. The apparatus as claimed in claim 13, wherein the multiple execution levels comprise multiple security levels. 16. The apparatus as claimed in claim 13, wherein the apparatus is arranged to update the visibility configuration value when context switching between execution levels. 17. 
The apparatus as claimed in claim 1, wherein the data processing circuitry is arranged to perform the data processing operations in response to instructions and to select a subset of the instructions which it executes for profiling, the profiling being on the basis of event information generated for the subset of instructions, wherein the profiling comprises storing to a storage unit profiling data comprising the event information or further information derived from the event information, and wherein the data processing circuitry is arranged to prevent storage of the profiling data when the visibility configuration value for the thread which generated the event information has the predetermined value. 18. A data processing system comprising the apparatus as claimed in claim 1 and a storage unit. 19. A method of multi-threaded data processing comprising the steps of: performing data processing operations for each thread of multiple threads; storing a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads; receiving event information indicative of an event which has occurred during the data processing operations; and adapting usage of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. 20. 
Apparatus for multi-threaded data processing comprising: means for performing data processing operations for each thread of multiple threads; means for storing a set of visibility configuration values, each visibility configuration value associated with a thread of the multiple threads; means for receiving event information from the means for performing data processing operations indicative of an event which has occurred during the data processing operations; and means for adapting usage of the event information to restrict visibility of the event information for software of threads other than the thread which generated the event information when a visibility configuration value for the thread which generated the event information has a predetermined value. 21. A computer readable storage medium storing in a non-transient form software which when executed on a computing device causes the computing device to carry out the method of claim 19. 22. Software which when executed on a computing device causes the computing device to carry out the method of claim 19.
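The behavior recited in this record's claims 5 through 7 (a multi-thread event counter that, for a thread whose single-bit visibility configuration value has the predetermined restricted value, does not count that thread's events) can be sketched as follows. This is a minimal software model of hardware circuitry: the class, the dictionary-based visibility store, and the choice of 1 as the predetermined value are assumptions for illustration only.

```python
class EventCounter:
    """Model of a multi-thread event counter gated by per-thread visibility bits."""

    RESTRICTED = 1  # assumed "predetermined value" meaning: hide this thread's events

    def __init__(self, visibility: dict):
        # visibility: thread id -> 0 (events visible) or 1 (events restricted)
        self.visibility = visibility
        self.count = 0

    def on_event(self, thread_id: int) -> None:
        # When counting across threads, skip events from a thread whose
        # visibility configuration value has the restricted value, so the
        # shared count reveals nothing about that thread's activity.
        if self.visibility.get(thread_id, 0) == self.RESTRICTED:
            return
        self.count += 1

counter = EventCounter({0: 0, 1: 1})  # thread 1 opts out of shared counting
for tid in [0, 1, 0, 1, 0]:
    counter.on_event(tid)
print(counter.count)  # only thread 0's three events contribute
```

The same gating idea extends to the multi-bit variant of claim 8, where each bit of a thread's visibility value would enable or disable visibility for one specific observer thread rather than for all other threads at once.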
2,400
8,144
8,144
14,129,927
2,451
A mechanism is described for facilitating dynamic storage management for mobile computing devices according to one embodiment. A method of embodiments, as described herein, includes detecting context-aware data relating to a computing device and a user associated with the computing device, monitoring available space at a local storage of the computing device, and dynamically allocating portions of the space at the local storage based on the context-aware data and results of the monitoring of the space. The dynamic allocation may include providing a first portion of the space to a first content by moving a second content from the local storage to one or more remote storage devices.
1.-25. (canceled) 26. An apparatus comprising: context-aware detection and management logic to detect context-aware data relating to a computing device and a user associated with the computing device; and storage allocation logic to monitor available space at a local storage of the computing device, wherein the storage allocation logic is further to dynamically allocate portions of the space at the local storage based on the context-aware data and results of the monitoring of the space, wherein the dynamic allocation includes providing a first portion of the space to a first content by moving a second content from the local storage to one or more remote storage devices. 27. The apparatus of claim 26, further comprising predictability logic to evaluate one or more of the detected context-aware data, user-provided contexts, and changing activities of the computing device or the user, wherein the predictability logic is further to predict future usage behavior of the first computing device based on the evaluation. 28. The apparatus of claim 26, wherein the dynamic allocation is further based on the predicted usage behavior. 29. The apparatus of claim 26, wherein the one or more remote storage devices comprise a cloud-based remote storage device coupled to a server computing device, wherein the second content is moved to the cloud-based remote storage device over a network including a cloud network. 30. The apparatus of claim 26, wherein the one or more remote storage devices comprise a device-based remote storage device coupled to a client computing device, wherein the second content is moved to the device-based remote storage device over a proximity network including a Bluetooth connection. 31. 
The apparatus of claim 26, further comprising conflict resolution logic to resolve a conflict between the first content and the second content based on a default conflict resolution policy or a user-provided conflict resolution policy, wherein the user-provided conflict resolution policy maintains priority over the default conflict resolution policy. 32. The apparatus of claim 31, wherein the first and second contents comprise one or more of software applications, photographs, videos, text files, music files, and one or more data files needing storage. 33. An apparatus comprising: reception/authentication logic to receive, at a server computing device, a request for moving content from a local storage of a client computing device to a remote cloud storage device coupled to the server computing device; context-aware data management logic to review context-aware data relating to the client computing device; and remote storage logic to store the content at the remote cloud storage device based on the context-aware data. 34. The apparatus of claim 33, further comprising device/context management logic to extract the context-aware data relating to the client computing device prior to providing the context-aware data to the context-aware data management logic. 35. The apparatus of claim 33, further comprising communication/compatibility logic to communicate, over a network, a notification relating to the content being stored at the remote cloud storage or the local storage to the client computing device, wherein the network includes a cloud network. 36. The apparatus of claim 33, wherein the remote storage logic is further to retrieve, in response to a retrieval request, based on usage and predictability, the content from the remote cloud storage device, wherein the retrieved content is communicated back to the local storage at the client computing device. 36. 
A method comprising: detecting context-aware data relating to a computing device and a user associated with the computing device; monitoring available space at a local storage of the computing device; and dynamically allocating portions of the space at the local storage based on the context-aware data and results of the monitoring of the space, wherein the dynamic allocation includes providing a first portion of the space to a first content by moving a second content from the local storage to one or more remote storage devices. 37. The method of claim 36, further comprising evaluating one or more of the detected context-aware data, user-provided contexts, and changing activities of the computing device or the user; and predicting usage behavior of the first computing device based on the evaluation. 38. The method of claim 36, wherein the dynamic allocation is further based on the predicted usage behavior. 39. The method of claim 36, wherein the one or more remote storage devices comprise a cloud-based remote storage device coupled to a server computing device, wherein the second content is moved to the cloud-based remote storage device over a network including a cloud network. 40. The method of claim 36, wherein the one or more remote storage devices comprise a device-based remote storage device coupled to a client computing device, wherein the second content is moved to the device-based remote storage device over a proximity network including a Bluetooth connection. 41. The method of claim 36, further comprising resolving a conflict between the first content and the second content based on a default conflict resolution policy or a user-provided conflict resolution policy, wherein the user-provided conflict resolution policy maintains priority over the default conflict resolution policy. 42. 
The method of claim 41, wherein the first and second contents comprise one or more of software applications, photographs, videos, text files, music files, and one or more data files needing storage. 43. At least one machine-readable medium comprising a plurality of instructions that in response to being executed on a computing device, causes the computing device to carry out one or more operations comprising: detecting context-aware data relating to a computing device and a user associated with the computing device; monitoring available space at a local storage of the computing device; and dynamically allocating portions of the space at the local storage based on the context-aware data and results of the monitoring of the space, wherein the dynamic allocation includes providing a first portion of the space to a first content by moving a second content from the local storage to one or more remote storage devices. 44. The machine-readable medium of claim 43, wherein the one or more operations further comprise evaluating one or more of the detected context-aware data, user-provided contexts, and changing activities of the computing device or the user; and predicting usage behavior of the first computing device based on the evaluation. 45. The machine-readable medium of claim 43, wherein the dynamic allocation is further based on the predicted usage behavior. 46. The machine-readable medium of claim 43, wherein the one or more remote storage devices comprise a cloud-based remote storage device coupled to a server computing device, wherein the second content is moved to the cloud-based remote storage device over a network including a cloud network. 47. The machine-readable medium of claim 43, wherein the one or more remote storage devices comprise a device-based remote storage device coupled to a client computing device, wherein the second content is moved to the device-based remote storage device over a proximity network including a Bluetooth connection. 48. 
The machine-readable medium of claim 43, wherein the one or more operations further comprise resolving a conflict between the first content and the second content based on a default conflict resolution policy or a user-provided conflict resolution policy, wherein the user-provided conflict resolution policy maintains priority over the default conflict resolution policy. 49. The machine-readable medium of claim 48, wherein the first and second contents comprise one or more of software applications, photographs, videos, text files, music files, and one or more data files needing storage.
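The method and machine-readable-medium claims above all describe one flow: monitor free space at local storage, and when a first content needs a portion of that space, free it by moving a second content to a remote store, guided by context-aware data. The following is a minimal Python sketch of that flow, under stated assumptions; every identifier here (`ContextAwareAllocator`, `free_bytes`, the `on_wifi` signal) is a hypothetical illustration, not from the patent:

```python
class ContextAwareAllocator:
    """Hypothetical sketch of the claimed dynamic allocation: when incoming
    content does not fit in local storage, move existing content to a
    remote store until a large enough first portion of the space is free."""

    def __init__(self, capacity, remote):
        self.capacity = capacity   # total local storage (bytes)
        self.local = {}            # name -> size of locally held content
        self.remote = remote       # remote store; a plain dict stands in here
        self.context = {}          # context-aware data, e.g. {"on_wifi": True}

    def free_bytes(self):
        return self.capacity - sum(self.local.values())

    def detect_context(self, **signals):
        # "Context-aware detection": record signals about the device/user.
        self.context.update(signals)

    def allocate(self, name, size):
        # Move content out until the new content fits. The patent leaves the
        # choice of "second content" open; insertion order is a stand-in.
        for victim in list(self.local):
            if self.free_bytes() >= size:
                break
            # Use the cloud path only when connected (the claims' proximity/
            # Bluetooth path is not modelled in this sketch).
            if self.context.get("on_wifi", True):
                self.remote[victim] = self.local.pop(victim)
        if self.free_bytes() < size:
            raise MemoryError("cannot free a large enough portion")
        self.local[name] = size
```

The eviction order here is simply insertion order; the claims leave the selection of the second content (and the predictability logic that could drive it) unspecified.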
A mechanism is described for facilitating dynamic storage management for mobile computing devices according to one embodiment. A method of embodiments, as described herein, includes detecting context-aware data relating to a computing device and a user associated with the computing device, monitoring available space at a local storage of the computing device, and dynamically allocating portions of the space at the local storage based on the context-aware data and results of the monitoring of the space. The dynamic allocation may include providing a first portion of the space to a first content by moving a second content from the local storage to one or more remote storage devices. 1.-25. (canceled) 26. An apparatus comprising: context-aware detection and management logic to detect context-aware data relating to a computing device and a user associated with the computing device; and storage allocation logic to monitor available space at a local storage of the computing device, wherein the storage allocation logic is further to dynamically allocate portions of the space at the local storage based on the context-aware data and results of the monitoring of the space, wherein the dynamic allocation includes providing a first portion of the space to a first content by moving a second content from the local storage to one or more remote storage devices. 27. The apparatus of claim 26, further comprising predictability logic to evaluate one or more of the detected context-aware data, user-provided contexts, and changing activities of the computing device or the user, wherein the predictability logic is further to predict future usage behavior of the computing device based on the evaluation. 28. The apparatus of claim 26, wherein the dynamic allocation is further based on the predicted usage behavior. 29. 
The apparatus of claim 26, wherein the one or more remote storage devices comprise a cloud-based remote storage device coupled to a server computing device, wherein the second content is moved to the cloud-based remote storage device over a network including a cloud network. 30. The apparatus of claim 26, wherein the one or more remote storage devices comprise a device-based remote storage device coupled to a client computing device, wherein the second content is moved to the device-based remote storage device over a proximity network including a Bluetooth connection. 31. The apparatus of claim 26, further comprising conflict resolution logic to resolve a conflict between the first content and the second content based on a default conflict resolution policy or a user-provided conflict resolution policy, wherein the user-provided conflict resolution policy maintains priority over the default conflict resolution policy. 32. The apparatus of claim 31, wherein the first and second contents comprise one or more of software applications, photographs, videos, text files, music files, and one or more data files needing storage. 33. An apparatus comprising: reception/authentication logic to receive, at a server computing device, a request for moving content from a local storage of a client computing device to a remote cloud storage device coupled to the server computing device; context-aware data management logic to review context-aware data relating to the client computing device; and remote storage logic to store the content at the remote cloud storage device based on the context-aware data. 34. The apparatus of claim 33, further comprising device/context management logic to extract the context-aware data relating to the client computing device prior to providing the context-aware data to the context-aware data management logic. 35. 
The apparatus of claim 33, further comprising communication/compatibility logic to communicate, over a network, a notification relating to the content being stored at the remote cloud storage or the local storage to the client computing device, wherein the network includes a cloud network. 36. The apparatus of claim 33, wherein the remote storage logic is further to retrieve, in response to a retrieval request, based on usage and predictability, the content from the remote cloud storage device, wherein the retrieved content is communicated back to the local storage at the client computing device.
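The conflict-resolution claims of this family all state the same precedence rule: a user-provided conflict resolution policy, when one exists, maintains priority over the default policy. A hedged Python sketch of that rule follows; the policy callables (`keep_newer`, `keep_larger`) are invented examples, since the patent does not name concrete policies:

```python
def resolve_conflict(first, second, default_policy, user_policy=None):
    """Sketch of the claimed rule: apply the user-provided policy when
    present, otherwise fall back to the default policy. A policy is a
    callable returning the content that keeps the local storage slot."""
    policy = user_policy if user_policy is not None else default_policy
    return policy(first, second)

# Hypothetical policies (not from the patent):
keep_newer = lambda a, b: a if a["mtime"] >= b["mtime"] else b
keep_larger = lambda a, b: a if a["size"] >= b["size"] else b
```

Modelling policies as interchangeable callables keeps the precedence decision (one `if`) separate from whatever each policy actually compares.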
2,400
8,145
8,145
14,407,145
2,416
The invention relates to a method and a device in an Evolved Packet System (EPS) communications network of facilitating re-establishing of a failed communications session undertaken via a plurality of EPS nodes. In a first aspect of the present invention, a method is provided in an EPS communications network of facilitating re-establishing of a failed communications session undertaken via a plurality of EPS nodes. The method comprises acquiring stateful session data of each of the plurality of EPS nodes involved in the communication session, and combining the acquired stateful session data of each of the plurality of EPS nodes into stateful session data common to the plurality of EPS nodes. Thereafter, the common stateful session data is associated with a communication session identifier and an identifier of each of the plurality of EPS nodes involved in the communication session, and stored for re-establishment of the communications session from the common stateful session data in case of failure.
1-15. (canceled) 16. A method, in an Evolved Packet System (EPS) communications network, of facilitating re-establishing of a failed communications session undertaken via a plurality of EPS nodes, comprising: acquiring stateful session data of each of the plurality of EPS nodes involved in the communication session; combining the acquired stateful session data of each of the plurality of EPS nodes into stateful session data common to the plurality of EPS nodes; associating the common stateful session data with a communication session identifier and an identifier of each of the plurality of EPS nodes involved in the communication session; and storing the common stateful session data and the identifiers for re-establishment of the communications session from the common stateful session data in case of failure. 17. The method of claim 16, wherein the stateful session data is EPS bearer context data. 18. The method of claim 16, further comprising re-instantiating at least one of the plurality of EPS nodes with the stored common stateful session data, such that the failed communication session identified by the communication session identifier is re-established. 19. The method of claim 18, further comprising: acquiring an indication of re-instantiation of at least one of the plurality of EPS nodes; comparing current stateful session data of said at least one of the plurality of EPS nodes with the stored common stateful session data; wherein said re-instantiating of the at least one of the plurality of EPS nodes is carried out in response to determining that the current stateful session data does not comply with the stored common stateful session data. 20. The method of claim 19, wherein said re-instantiating of the at least one of the plurality of EPS nodes comprises re-instantiating each of the plurality of EPS nodes involved in the communication session based on the stored common stateful session data. 21. 
The method of claim 16, wherein the common stateful session data is stored in a database common to the involved EPS nodes. 22. A network node in an Evolved Packet System (EPS) communications network configured to facilitate re-establishing of a failed communications session undertaken via a plurality of EPS nodes, the network node comprising a processing unit and a memory, said memory containing instructions executable by said processing unit, whereby said network node is operative to: acquire stateful session data of each of the plurality of EPS nodes involved in the communication session; combine the acquired stateful session data of each of the plurality of EPS nodes into stateful session data common to the plurality of EPS nodes; associate the common stateful session data with a communication session identifier and an identifier of each of the plurality of EPS nodes involved in the communication session; and store the common stateful session data and the identifiers for re-establishment of the communications session from the common stateful session data in case of failure. 23. The network node of claim 22, wherein the stateful session data is EPS bearer context data. 24. The network node of claim 22, further being operative to re-instantiate at least one of the plurality of EPS nodes with the stored common stateful session data, wherein the failed communication session identified by the communication session identifier is re-established. 25. The network node of claim 24, further being operative to: acquire an indication of re-instantiation of at least one of the plurality of EPS nodes; and compare current stateful session data of said at least one of the plurality of EPS nodes with the stored common stateful session data; wherein the network node is operative to re-instantiate the at least one of the plurality of EPS nodes in response to determining that the current stateful session data does not comply with the stored common stateful session data. 26. 
The network node of claim 25, wherein the network node is operative to re-instantiate the at least one of the plurality of EPS nodes by re-instantiating each of the plurality of EPS nodes involved in the communication session based on the stored common stateful session data. 27. The network node of claim 22, further being operative to store the common stateful session data in a database common to the involved EPS nodes. 28. The network node of claim 22, wherein said identifier of each of the plurality of EPS nodes involved in the communication session comprises an Internet Protocol (IP) address. 29. A non-transitory computer-readable medium comprising, stored thereupon, a computer program comprising computer-executable instructions that, when executed by a processor in a device, cause the device to facilitate re-establishing of a failed communications session undertaken via a plurality of EPS nodes, the computer-executable instructions comprising instructions for: acquiring stateful session data of each of the plurality of EPS nodes involved in the communication session; combining the acquired stateful session data of each of the plurality of EPS nodes into stateful session data common to the plurality of EPS nodes; associating the common stateful session data with a communication session identifier and an identifier of each of the plurality of EPS nodes involved in the communication session; and storing the common stateful session data and the identifiers for re-establishment of the communications session from the common stateful session data in case of failure.
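The EPS claims above reduce to a small store-and-restore machine: combine each node's stateful session data (e.g. EPS bearer context) into a common record, key it by a session identifier plus the identifiers of the involved nodes (per claim 28, IP addresses), and re-instantiate a node from the common record only when its current data no longer complies with what was stored (claim 19). A minimal Python sketch, with `SessionStore` and its field names as illustrative inventions rather than 3GPP terms:

```python
class SessionStore:
    """Hypothetical sketch of the claimed flow: store combined stateful
    session data per session, and restore a node from it after failure."""

    def __init__(self):
        self.records = {}   # session_id -> {"nodes": [...], "common": {...}}

    def store(self, session_id, per_node_data):
        # per_node_data: node identifier -> that node's stateful session data.
        common = {}
        for node_id, data in per_node_data.items():
            common.update(data)            # the "combining" step of claim 16
        self.records[session_id] = {
            "nodes": list(per_node_data),  # node identifiers (claim 28: IPs)
            "common": common,
        }

    def reinstantiate(self, session_id, current_state):
        # Claim 19: re-instantiate only when the node's current stateful
        # session data does not comply with the stored common data.
        stored = self.records[session_id]["common"]
        if current_state != stored:
            return dict(stored)            # restore from the common record
        return current_state
```

The dict-merge "combining" step is a simplification; the claims do not say how overlapping per-node fields are reconciled.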
2,400
8,146
8,146
14,704,127
2,441
Embodiments include systems and methods comprising a gateway located at a premises forming at least one network on the premises that includes a plurality of premises devices. A sensor user interface (SUI) is coupled to the gateway and presented to a user via a remote device. The SUI includes at least one display element. The at least one display element includes a floor plan display that represents at least one floor of the premises. The floor plan display visually and separately indicates a location and a current state of each premises device of the plurality of premises devices.
1-31. (canceled) 32. A method for a premises based system, the method comprising: generating a layout of at least a portion of a premises to be monitored by the premises based system; populating the layout with at least one premises device based at least in part on at least one characteristic of the at least one premises device; and causing layout data associated with the populated layout to be stored. 33. The method of claim 32, wherein the populating of the layout comprises populating with a plurality of premises devices based at least in part on a plurality of premises rules, wherein the plurality of premises rules dictate at least one of a type and a location of the plurality of premises devices based at least in part on the at least one characteristic of the premises. 34. The method of claim 33, wherein the populating of the layout comprises populating with at least one of the plurality of premises devices based on a manual user selection of the at least one premises device. 35. The method of claim 32, wherein the populating of the layout comprises populating with a plurality of premises devices corresponding to a plurality of service packages, wherein each service package corresponds to at least one of a different number and different type of premises device than the other service packages. 36. The method of claim 35, comprising displaying at least one monitoring area of at least one premises device in the layout. 37. The method of claim 32, comprising storing the layout data for retrieval during installation of the premises based system. 38. The method of claim 32, comprising indicating with the layout data whether the layout was at least one of populated by applying the plurality of premises rules and by manual user selection of the at least one premises device. 39. 
The method of claim 38, comprising, when the layout was populated by at least manual user selection: applying the plurality of premises rules to the layout; determining the differences between the layout populated by at least manual user selection and the layout populated by applying the plurality of premises rules; and storing the determined differences in the layout data. 40. The method of claim 32, wherein the generating of the layout comprises generating the layout based at least on one of a predefined floor layout template and manual user interaction with a drawing tool. 41. The method of claim 32, wherein the at least one premises device includes at least one of a sensor, door sensor, window sensor, glass break sensor, detector, smoke detector, carbon monoxide detector, motion detector, camera, video camera, and imaging camera. 42. The method of claim 32, wherein the layout comprises at least one display element that includes a floor plan display that represents at least one floor of the premises. 43. The method of claim 42, wherein the floor plan display visually and separately indicates a location and a current state of the at least one premises device. 44. The method of claim 42, wherein the floor plan display includes a color that visually indicates a state of the at least one premises device. 45. The method of claim 42, wherein the floor plan display includes text presented with the floor plan display. 46. The method of claim 45, wherein the text comprises at least one of a text description of a state of the at least one premises device and a status of the at least one premises device. 47. The method of claim 42, wherein the at least one floor comprises a plurality of floors. 48. The method of claim 47, wherein the floor plan display comprises a plurality of floor icons, wherein each floor icon corresponds to one of the plurality of floors. 49. The method of claim 47, wherein the plurality of floors corresponds to a number of floors of the premises. 50. 
The method of claim 47, wherein the plurality of floors corresponds to at least one floor of the premises and at least one floor of an outbuilding corresponding to the premises. 51. The method of claim 42, wherein the at least one display element includes at least one device icon, wherein the at least one device icon represents a location and a state of the at least one premises device corresponding to the device icon. 52. The method of claim 51, wherein the at least one premises device is at least one of a door sensor, a window sensor, a motion sensor, a fire sensor, a smoke sensor, a glass-break sensor, a flood sensor, a light, a thermostat, a camera, a lock, and an energy device. 53. The method of claim 51, wherein the state comprises at least one of an alarmed state, a tripped state, a tampered state, a low-battery state, an offline state, an unknown state, an installing state, an open state, a closed state, a motion state, a quiet state, an inactive state, an untriggered state, and an untripped state. 54. The method of claim 51, comprising configuring the at least one display element to include a popup display that is displayed in response to a touch of the at least one device icon. 55. The method of claim 54, comprising configuring the popup display to include at least one of a name of the at least one premises device corresponding to the at least one device icon that was touched, detailed information of the at least one premises device, and a link to information of the at least one premises device. 56. The method of claim 55, comprising configuring the link to activate presentation of at least one of live video from the at least one premises device when the at least one premises device is a camera, and a control screen comprising controls for the premises device. 57. The method of claim 43, comprising controlling an edit mode that is configured to generate the floor plan display and place the plurality of system icons on the floor plan display. 58. 
The method of claim 57, comprising configuring the edit mode to include a plurality of floor plans, and configuring each floor plan of the plurality of floor plans to define a perimeter shape of a floor and to correspond to a floor plan icon that is selectable for the floor plan display. 59. The method of claim 57, comprising configuring the edit mode to present a grid comprising a plurality of tiles on the floor plan display. 60. The method of claim 59, comprising configuring the edit mode to include at least one of adding walls and deleting walls. 61. The method of claim 60, comprising configuring the edit mode to include adding a wall on the floor plan display, wherein the adding of the wall comprises forming the wall to have a length and placing the wall at a location on the floor plan display. 62. The method of claim 60, comprising configuring the edit mode to include deleting at least a portion of a wall from the floor plan display. 63. The method of claim 60, comprising configuring the edit mode to include placing the plurality of system icons on the floor plan display. 64. The method of claim 60, comprising configuring the edit mode to differentiate premises exteriors from premises interiors based on a location of a tile. 65. The method of claim 43, comprising configuring the at least one display element to include at least one warning that is an informational warning of the at least one premises device. 66. 
A method comprising: forming at least one network at a premises, wherein the at least one network includes a plurality of premises devices; and exchanging data between the at least one network and a sensor user interface (SUI) application, wherein the SUI application includes at least one display element that includes a floor plan layout representing at least one floor of the premises, wherein the floor plan layout is populated with at least one premises device that is at least one of a security device and a network device, wherein the floor plan layout visually and separately indicates a location and a current state of the at least one premises device. 67. A method comprising: forming an integrated network at a premises, wherein the integrated network includes a security network and a subnetwork, wherein the security network comprises a security system that includes security system components located at the premises, wherein the subnetwork comprises a plurality of network devices located at the premises; and exchanging data between the integrated network and a sensor user interface (SUI) application, wherein the SUI application includes at least one display element that includes a floor plan layout representing at least one floor of the premises, wherein the floor plan layout is populated with at least one premises device that is at least one of a security device and a network device, wherein the floor plan layout visually and separately indicates a location and a current state of the at least one premises device.
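Claims 32-39 above describe populating a floor-plan layout with premises devices either by applying premises rules or by manual user selection, and recording in the layout data both how the layout was populated and the differences between the two. A small Python sketch of that populate-and-diff step; the room and rule vocabulary is invented for illustration and is not from the patent:

```python
def populate_layout(rooms, rules, manual_devices=None):
    """Hypothetical sketch of claims 32-39: rules map a room characteristic
    to a device type; manual selections override the rule-based choice, and
    the difference from the pure rule-based layout is kept in the data."""
    rule_based = {room: rules[kind] for room, kind in rooms.items() if kind in rules}
    devices = dict(rule_based)
    if manual_devices:
        devices.update(manual_devices)      # manual user selection (claim 34)
    return {
        "devices": devices,
        # Claim 38: record whether manual selection contributed.
        "manual": bool(manual_devices),
        # Claim 39: store the differences from the rule-based layout.
        "diff": {r: d for r, d in devices.items() if rule_based.get(r) != d},
    }
```

The layout data returned here would then be stored for retrieval during installation (claim 37); persistence is omitted from the sketch.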
The method of claim 54, comprising configuring the popup display to include at least one of a name of the at least one premises device corresponding to the at least one device icon that was touched, detailed information of the at least one premises device, and a link to information of the at least one premises device. 56. The method of claim 55, comprising configuring the link to activate presentation of at least one of live video from the at least one premises device when the at least one premises device is a camera, and a control screen comprising controls for the premises device. 57. The method of claim 43, controlling an edit mode that is configured to generate the floor plan display and place the plurality of system icons on the floor plan display. 58. The method of claim 57, configuring the edit mode to include a plurality of floor plans, and configuring each floor plan of the plurality of floor plans to define a perimeter shape of a floor and corresponds to a floor plan icon that is selectable for the floor plan display. 59. The method of claim 57, configuring the edit mode to present a grid comprising a plurality of tiles on the floor plan display. 60. The method of claim 59, comprising configuring the edit mode to include at least one of adding walls and deleting walls. 61. The method of claim 60, comprising configuring the edit mode to include adding a wall on the floor plan display, wherein the adding of the wall comprises forming the wall to have a length and placing the wall at a location on the floor plan display. 62. The method of claim 60, comprising configuring the edit mode to include deleting at least a portion of a wall from the floor plan display. 63. The method of claim 60, comprising configuring the edit mode to include placing the plurality of system icons on the floor plan display. 64. The method of claim 60, comprising configuring the edit mode to differentiate premise exteriors from premise interiors based on a location of a tile. 65. 
The method of claim 43, comprising configuring the at least one display element to include at least one warning that is an informational warning of the at least one premises device. 66. A method comprising: forming at least one network at a premises, wherein the at least one network includes a plurality of premise devices; and exchanging data between the at least one network and a sensor user interface (SUI) application, wherein the SUI application includes at least one display element that includes a floor plan layout representing at least one floor of the premises, wherein the floor plan layout is populated with at least one premises device that is at least one of a security device and a network device, wherein the floor plan layout visually and separately indicates a location and a current state of the at least one premises device. 67. A method comprising: forming an integrated network at a premises, wherein the integrated network includes a security network and a subnetwork, wherein the security network comprises a security system that includes security system components located at the premises, wherein the subnetwork comprises a plurality of network devices located at the premise; and exchanging data between the integrated network and a sensor user interface (SUI) application, wherein the SUI application includes at least one display element that includes a floor plan layout representing at least one floor of the premises, wherein the floor plan layout is populated with at least one premises device that is at least one of a security device and a network device, wherein the floor plan layout visually and separately indicates a location and a current state of the at least one premises device.
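The rule-driven layout population of claims 32-34 (premises rules dictating device type and location from characteristics of the premises, with manual user selection layered on top) can be sketched roughly as follows. This is a minimal illustration only; the class and function names, the rule representation, and the room-characteristic model are all hypothetical, not drawn from any real product API.

```python
# Hypothetical sketch of rule-driven floor plan population (claims 32-34).
from dataclasses import dataclass, field

@dataclass
class PremisesRule:
    # A rule fires when a room has the given characteristic and then
    # dictates a device type and a placement location (claim 33).
    characteristic: str   # e.g. "exterior_door", "kitchen"
    device_type: str      # e.g. "door sensor", "smoke detector"
    location: str         # e.g. "on frame", "ceiling"

@dataclass
class Layout:
    rooms: dict                          # room name -> list of characteristics
    devices: list = field(default_factory=list)

def populate_layout(layout, rules, manual_picks=()):
    """Populate by applying premises rules, then by manual user
    selection (claim 34); each placement records its provenance so the
    layout data can indicate how it was populated (claim 38)."""
    for room, traits in layout.rooms.items():
        for rule in rules:
            if rule.characteristic in traits:
                layout.devices.append((room, rule.device_type, rule.location, "rule"))
    for room, device_type, location in manual_picks:
        layout.devices.append((room, device_type, location, "manual"))
    return layout
```

Tagging each placement with `"rule"` or `"manual"` mirrors claim 38's requirement that the stored layout data indicate whether population came from rules or from user selection.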
2,400
8,147
8,147
14,917,685
2,482
The present invention relates to a contact lens inspection system, comprising: a light source ( 20 ) being adapted to illuminate the contact lens with collimated light from a front side or the rear side of the contact lens; a camera ( 40 ) having an objective lens ( 41 ) and an electronic sensor ( 42 ), said camera being arranged to produce an electronic orthographic image ( 10 ) of said contact lens on said electronic sensor ( 42 ), wherein said objective lens ( 41 ) has a diameter which is at least as large as a maximum diameter of said contact lens, said camera ( 40 ) being arranged on that side of the contact lens opposite to said side of said light source ( 20 ); an electronic scanning and evaluation unit ( 50 ) adapted for electronically scanning said electronic orthographic image ( 10 ) of said contact lens to determine whether or not said contact lens is inverted.
1. A contact lens inspection system (100) for soft contact lenses (1), comprising: a light source (20) for illuminating a contact lens to be inspected, said light source (20) being adapted to illuminate said contact lens with collimated light from a front side or the rear side of said contact lens; a camera (40) having an objective lens (41) and an electronic sensor (42), said camera being arranged to produce an electronic orthographic image (10) of said contact lens on said electronic sensor (42), wherein said objective lens (41) has a diameter which is at least as large as a maximum diameter of said contact lens, and wherein said camera (40) is arranged on that side of said contact lens opposite to said side where said light source (20) is arranged; an electronic scanning and evaluation unit (50) adapted for electronically scanning at least one portion of said electronic orthographic image of said contact lens on said electronic sensor (42) in sections (S) of a predetermined size to detect within each of said sections (S) of said electronic orthographic image of said contact lens a line structure (5), and further adapted for counting a total number of detected line structures (5) in said scanned at least one portion of said electronic orthographic image of said contact lens and for comparing said total number of detected line structures (5) with a predetermined threshold value (T) to determine whether or not said contact lens is inverted. 2. The contact lens inspection system according to claim 1, wherein said objective lens (41) comprises a telecentric lens. 3. The contact lens inspection system according to claim 2, wherein said contact lens is arranged in a container (30) with its front side or its rear side facing towards one end of said container, and wherein said light source (20) is arranged on said one end of said container (30) while said camera (40) is arranged on another end of said container (30) opposite to said one end of said container (30). 4. 
The contact lens inspection system according to claim 1, wherein said electronic scanning and evaluation unit (50) is adapted for electronically scanning said at least one portion of said electronic orthographic image of said contact lens in one of horizontal or vertical scans, or in both horizontal and vertical scans, and wherein said sections (S) are one of horizontal or vertical sections, or both horizontal and vertical sections. 5. The contact lens inspection system according to claim 4, wherein each of said horizontal or vertical sections (S) is of rectangular shape, and wherein said horizontal section (S) has a size of five horizontal pixels and three vertical pixels of said electronic sensor (42), and wherein said vertical section (S) has a size of three horizontal pixels and five vertical pixels of said electronic sensor (42). 6. The contact lens inspection system according to claim 1, wherein said electronic scanning and evaluation unit is adapted for electronically scanning said at least one portion of said electronic orthographic image of said contact lens in radially directed scans, and wherein said sections (S) are radially oriented sections. 7. The contact lens inspection system according to claim 6, wherein each of said radially oriented sections (S) is of rectangular shape and has a size of three circumferential pixels and five radial pixels of said electronic sensor (42). 8. 
A method for inspecting a soft contact lens (1), said method comprising: illuminating a contact lens to be inspected with collimated light from one of a front side or a rear side of said contact lens; producing an electronic orthographic image (10) of said illuminated contact lens on a side opposite to said side from which said contact lens is illuminated; electronically scanning said electronic orthographic image (10) of said contact lens in at least one portion in sections (S) of a predetermined size and detecting within each of said sections a line structure (5); counting a total number of detected line structures (5) in said at least one portion of said scanned electronic orthographic image of said contact lens and comparing said counted total number of line structures with a predetermined threshold value (T) to determine whether or not said contact lens is inverted. 9. The method according to claim 8, wherein illuminating said contact lens with collimated light is performed using a light source (20) which is arranged on one side of said front and rear sides of said contact lens, and wherein producing said electronic orthographic image of said illuminated contact lens is performed using a camera (40) arranged on said other side of said front side and said rear side of said contact lens, said camera (40) having an objective lens (41) and an electronic sensor (42), wherein said objective lens (41) has a diameter which is at least as large as the maximum diameter of said contact lens. 10. The method according to claim 9, wherein an objective lens (41) is used comprising a telecentric lens. 11. 
The method according to claim 8, wherein electronically scanning said at least one portion of said electronic orthographic image (10) of said contact lens in said sections (S) is performed in one of horizontal or vertical scans, or is performed both in horizontal and vertical scans, and wherein said sections (S) are one of horizontal sections or vertical sections or both horizontal and vertical sections. 12. The method according to claim 11, wherein each of said horizontal or vertical sections (S) is of rectangular shape, and wherein said horizontal section (S) has a size of five horizontal pixels and three vertical pixels of said electronic sensor (42), and wherein said vertical section (S) has a size of three horizontal pixels and five vertical pixels of said electronic sensor (42). 13. The method according to claim 8, wherein electronically scanning said electronic orthographic image (10) of said contact lens in said sections (S) is performed in a radial direction, and wherein said sections (S) are radially oriented sections. 14. The method of claim 13, wherein said radially oriented section (S) is of rectangular shape and has a size of three circumferential pixels and five radial pixels of said electronic sensor (42).
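The inversion check of claims 1 and 8 (scan the orthographic image in fixed-size sections, detect a line structure per section, count detections, compare the count with threshold T) can be sketched as below. The line-structure heuristic used here (a dark middle row inside a 5x3 horizontal section) is an assumption for illustration only; the claims leave the per-section detector unspecified.

```python
# Minimal sketch of section-wise scanning and threshold comparison
# (claims 1, 5, 8). The detector heuristic is hypothetical.

def has_line_structure(section, contrast=30):
    # section: 3 rows x 5 columns of grey values; a middle row markedly
    # darker than both neighbours is taken as an edge-like line structure.
    top, mid, bot = (sum(row) / len(row) for row in section)
    return mid + contrast < top and mid + contrast < bot

def is_inverted(image, threshold, w=5, h=3):
    """Scan the image in non-overlapping w x h sections, count the
    sections flagged as containing a line structure, and compare the
    total with the predetermined threshold value T."""
    count = 0
    for r in range(0, len(image) - h + 1, h):
        for c in range(0, len(image[0]) - w + 1, w):
            section = [image[r + i][c:c + w] for i in range(h)]
            if has_line_structure(section):
                count += 1
    return count >= threshold
```

The 5x3 section size follows claim 5's horizontal sections; claims 6-7 would instead walk radially oriented 3x5 sections around the lens edge.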
The present invention relates to a contact lens inspection system, comprising: a light source ( 20 ) being adapted to illuminate the contact lens with collimated light from a front side or the rear side of the contact lens; a camera ( 40 ) having an objective lens ( 41 ) and an electronic sensor ( 42 ), said camera being arranged to produce an electronic orthographic image ( 10 ) of said contact lens on said electronic sensor ( 42 ), wherein said objective lens ( 41 ) has a diameter which is at least as large as a maximum diameter of said contact lens, said camera ( 40 ) being arranged on that side of the contact lens opposite to said side of said light source ( 20 ); an electronic scanning and evaluation unit ( 50 ) adapted for electronically scanning said electronic orthographic image ( 10 ) of said contact lens to determine whether or not said contact lens is inverted.1. A contact lens inspection system (100) for soft contact lenses (1), comprising: a light source (20) for illuminating a contact lens to be inspected, said light source (20) being adapted to illuminate said contact lens with collimated light from a front side or the rear side of said contact lens; a camera (40) having an objective lens (41) and an electronic sensor (42), said camera being arranged to produce an electronic orthographic image (10) of said contact lens on said electronic sensor (42), wherein said objective lens (41) has a diameter which is at least as large as a maximum diameter of said contact lens, and wherein said camera (40) is arranged on that side of said contact lens opposite to said side where said light source (20) is arranged; an electronic scanning and evaluation unit (50) adapted for electronically scanning at least one portion of said electronic orthographic image of said contact lens on said electronic sensor (42) in sections (S) of a predetermined size to detect within each of said sections (S) of said electronic orthographic image of said contact lens a line structure 
(5), and further adapted for counting a total number of detected line structures (5) in said scanned at least one portion of said electronic orthographic image of said contact lens and for comparing said total number of detected line structures (5) with a predetermined threshold value (T) to determine whether or not said contact lens is inverted. 2. The contact lens inspection system according to claim 1, wherein said objective lens (41) comprises a telecentric lens. 3. The contact lens inspection system according to claim 2, wherein said contact lens is arranged in a container (30) with its front side or its rear side facing towards one end of said container, and wherein said light source (20) is arranged on said one end of said container (30) while said camera (40) is arranged on another end of said container (30) opposite to said one end of said container (30). 4. The contact lens inspection system according to claim 1, wherein said electronic scanning and evaluation unit (50) is adapted for electronically scanning said at least one portion of said electronic orthographic image of said contact lens in one of horizontal or vertical scans, or in both horizontal and vertical scans, and wherein said sections (S) are one of horizontal or vertical sections, or both horizontal and vertical sections. 5. The contact lens inspection system according to claim 4, wherein each of said horizontal or vertical sections (S) is of rectangular shape, and wherein said horizontal section (S) has a size of five horizontal pixels and three vertical pixels of said electronic sensor (42), and wherein said vertical section (S) has a size of three horizontal pixels and five vertical pixels of said electronic sensor (42). 6. 
The contact lens inspection system according to claim 1, wherein said electronic scanning and evaluation unit is adapted for electronically scanning said at least one portion of said electronic orthographic image of said contact lens in radially directed scans, and wherein said sections (S) are radially oriented sections. 7. The contact lens inspection system according to claim 6, wherein each of said radially oriented sections (S) is of rectangular shape and has a size of three circumferential pixels and five radial pixels of said electronic sensor (42). 8. A method for inspecting a soft contact lens (1), said method comprising: illuminating a contact lens to be inspected with collimated light from one of a front side or a rear side of said contact lens; producing an electronic orthographic image (10) of said illuminated contact lens on a side opposite to said side from which said contact lens is illuminated; electronically scanning said electronic orthographic image (10) of said contact lens in at least one portion in sections (S) of a predetermined size and detecting within each of said sections a line structure (5); counting a total number of detected line structures (5) in said at least one portion of said scanned electronic orthographic image of said contact lens and comparing said counted total number of line structures with a predetermined threshold value (T) to determine whether or not said contact lens is inverted. 9. 
The method according to claim 8, wherein illuminating said contact lens with collimated light is performed using a light source (20) which is arranged on one side of said front and rear sides of said contact lens, and wherein producing said electronic orthographic image of said illuminated contact lens is performed using a camera (40) arranged on said other side of said front side and said rear side of said contact lens, said camera (40) having an objective lens (41) and an electronic sensor (42), wherein said objective lens (41) has a diameter which is at least as large as the maximum diameter of said contact lens. 10. The method according to claim 9, wherein an objective lens (41) is used comprising a telecentric lens. 11. The method according to claim 8, wherein electronically scanning said at least one portion of said electronic orthographic image (10) of said contact lens in said sections (S) is performed in one of horizontal or vertical scans, or is performed both in horizontal and vertical scans, and wherein said sections (S) are one of horizontal sections or vertical sections or both horizontal and vertical sections. 12. The method according to claim 11, wherein each of said horizontal or vertical sections (S) is of rectangular shape, and wherein said horizontal section (S) has a size of five horizontal pixels and three vertical pixels of said electronic sensor (42), and wherein said vertical section (S) has a size of three horizontal pixels and five vertical pixels of said electronic sensor (42). 13. The method according to claim 8, wherein electronically scanning said electronic orthographic image (10) of said contact lens in said sections (S) is performed in a radial direction, and wherein said sections (S) are radially oriented sections. 14. The method of claim 13, wherein said radially oriented section (S) is of rectangular shape and has a size of three circumferential pixels and five radial pixels of said electronic sensor (42).
2,400
8,148
8,148
15,027,575
2,466
The proposed technology relates to methods, devices and network nodes for enabling mitigation of interference between an External Wireless System, EWS, and a mobile communication system. For example, a UE may detect (S 1 ) an EWS event involving EWS operation interfering with the operation of the mobile communication system provided at least one interference condition is fulfilled. The interference condition(s) includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency. The UE may then enable (S 2 ) the mitigation of interference based on event information representing the EWS event.
1-44. (canceled) 45. A method performed by a User Equipment (UE) for enabling mitigation of interference between an External Wireless System (EWS) and a mobile communication system, said method comprising the steps of: said UE detecting an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and said UE enabling said mitigation of interference based on event information representing said EWS event. 46. The method of claim 45, wherein said step of said UE enabling said mitigation of interference comprises the step of reporting said event information to a radio network node, to enable said mitigation of interference. 47. The method of claim 45, wherein said first condition is expressed by a pre-defined frequency relation relating the representation of said operating frequency of the EWS and the representation of said reference frequency. 48. The method of claim 45, wherein said UE receives a configuration request from said radio network node, including configuration information to enable said UE to detect said EWS event, provided said at least one interference condition is fulfilled. 49. The method of claim 48, wherein said configuration information includes at least information about said frequency relation. 50. The method of claim 45, wherein said event information includes at least one of: time, location, and operating frequency of said EWS event. 51. The method of claim 45, wherein said reference frequency is representative of an operating frequency of the mobile communication system. 52. 
The method of claim 45, wherein said first condition involves a general function g relating a representation of the operating frequency, F_EWS, of the EWS and a representation of the reference frequency, F_R, and said first condition is fulfilled provided at least one of the following is valid: |F_R − g(F_EWS, α)| ≤ Δf_R; |F_EWS − g⁻¹(F_R, σ)| ≤ Δf_EWS; where α and σ are respective optional parameters used to relate F_R and F_EWS, and Δf_R and Δf_EWS are respective margins. 53. A method performed by a radio network node for enabling mitigation of interference between an External Wireless System (EWS) and a mobile communication system, said method comprising the steps of: said radio network node configuring a User Equipment (UE) or other network node for detecting and reporting an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and said radio network node receiving event information representing said EWS event from said UE or said other network node, to enable said mitigation of interference. 54. The method of claim 53, wherein said first condition is expressed by a pre-defined frequency relation relating the representation of said operating frequency of the EWS and the representation of said reference frequency. 55. The method of claim 53, wherein said step of said radio network node configuring said UE comprises the step of sending a configuration request to said UE or said other network node including configuration information to enable said UE or said other network node to detect and report said EWS event, provided said at least one interference condition is fulfilled. 56. 
The method of claim 55, wherein said configuration information includes at least information about said frequency relation. 57. The method of claim 53, wherein said event information includes at least one of: time, location and operating frequency of said EWS event. 58. The method of claim 53, wherein said reference frequency is representative of an operating frequency of the mobile communication system. 59. The method of claim 53, wherein said first condition involves a general function g relating a representation of the operating frequency, F_EWS, of the EWS and a representation of the reference frequency, F_R, and said first condition is fulfilled provided at least one of the following is valid: |F_R − g(F_EWS, α)| ≤ Δf_R; |F_EWS − g⁻¹(F_R, σ)| ≤ Δf_EWS; where α and σ are respective optional parameters used to relate F_R and F_EWS, and Δf_R and Δf_EWS are respective margins. 60. A User Equipment (UE) configured to enable mitigation of interference between an External Wireless System (EWS) and a mobile communication system, said UE comprising: communication circuitry configured for communicating with a radio network node of the mobile communication system; and processing circuitry operatively associated with the communication circuitry and configured to: detect an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and report event information representing said EWS event to the radio network node, to enable mitigation of interference. 61. 
The user equipment of claim 60, wherein said UE is configured to receive a configuration request from said radio network node including configuration information to enable said UE to detect said EWS event, provided said at least one interference condition is fulfilled. 62. The user equipment of claim 60, wherein said processing circuitry is configured to detect the EWS event involving EWS operation, based on a frequency relation between the representation of an operating frequency of the EWS and the representation of an operating frequency of the mobile communication system. 63. The user equipment of claim 60, wherein said processing circuitry is configured to report event information including at least one of: time, location and operating frequency of said EWS event. 64. A radio network node configured to enable mitigation of interference between an External Wireless System, EWS, and a mobile communication system, said radio network node comprising: communication circuitry configured to communicate with a User Equipment (UE) or other network node; and processing circuitry operatively associated with the communication circuitry and configured to: configure the User Equipment (UE) or other network node for detecting and reporting an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and receive event information representing said EWS event from the UE or other network node, to enable said mitigation of interference. 65. 
A method performed by a User Equipment (UE) configured for operation in a mobile communication system, the method comprising: receiving configuration information from a radio network node in the mobile communication system, indicating one or more operating parameters of an External Wireless System (EWS) that potentially interferes with operation of the mobile communication system; receiving event condition information from the radio network node, said event condition information being received in, or in conjunction with the configuration information, and indicating one or more event conditions, said one or more event conditions conditioning reporting of EWS events by the UE to EWS events that involve signals satisfying a defined frequency relationship with an operating frequency of the mobile communication network; detecting EWS events, according to the one or more operating parameters indicated by the configuration information; and reporting a detected EWS event to the radio network node in response to determining that the detected EWS event satisfies the one or more event conditions indicated by the event condition information.
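The first interference condition of claims 52 and 59 is defined through a general function g relating the EWS operating frequency F_EWS to a reference frequency F_R, with the condition fulfilled when either |F_R − g(F_EWS, α)| ≤ Δf_R or |F_EWS − g⁻¹(F_R, σ)| ≤ Δf_EWS holds. A minimal sketch follows; the harmonic choice g(f, n) = n·f is only one plausible instantiation (an EWS harmonic landing near the carrier), since the claims deliberately leave g, α and σ open.

```python
# Hypothetical instantiation of the claimed first condition, with g chosen
# as the alpha-th harmonic of the EWS operating frequency.

def g(f_ews, alpha):
    return alpha * f_ews          # assumed relation: alpha-th harmonic

def g_inv(f_r, sigma):
    return f_r / sigma            # corresponding inverse relation

def first_condition(f_r, f_ews, alpha, sigma, df_r, df_ews):
    """Fulfilled if at least one of |F_R - g(F_EWS, a)| <= df_R or
    |F_EWS - g^-1(F_R, s)| <= df_EWS is valid (claims 52 and 59)."""
    return (abs(f_r - g(f_ews, alpha)) <= df_r
            or abs(f_ews - g_inv(f_r, sigma)) <= df_ews)
```

For example, a 450 MHz EWS whose 4th harmonic (1800 MHz) falls within a 1 MHz margin of an 1800.5 MHz carrier would satisfy the condition and so trigger EWS event detection and reporting.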
The proposed technology relates to methods, devices and network nodes for enabling mitigation of interference between an External Wireless System, EWS, and a mobile communication system. For example, a UE may detect (S 1 ) an EWS event involving EWS operation interfering with the operation of the mobile communication system provided at least one interference condition is fulfilled. The interference condition(s) includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency. The UE may then enable (S 2 ) the mitigation of interference based on event information representing the EWS event.1-44. (canceled) 45. A method performed by a User Equipment (UE) for enabling mitigation of interference between an External Wireless System (EWS) and a mobile communication system, said method comprising the steps of: said UE detecting an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and said UE enabling said mitigation of interference based on event information representing said EWS event. 46. The method of claim 45, wherein said step of said UE enabling said mitigation of interference comprises the step of reporting said event information to a radio network node, to enable said mitigation of interference. 47. The method of claim 45, wherein said first condition is expressed by a pre-defined frequency relation relating the representation of said operating frequency of the EWS and the representation of said reference frequency. 48. 
The method of claim 45, wherein said UE receives a configuration request from said radio network node, including configuration information to enable said UE to detect said EWS event, provided said at least one interference condition is fulfilled. 49. The method of claim 48, wherein said configuration information includes at least information about said frequency relation. 50. The method of claim 45, wherein said event information includes at least one of: time, location, and operating frequency of said EWS event. 51. The method of claim 45, wherein said reference frequency is representative of an operating frequency of the mobile communication system. 52. The method of claim 45, wherein said first condition involves a general function g relating a representation of the operating frequency, F_EWS, of the EWS and a representation of the reference frequency, F_R, and said first condition is fulfilled provided at least one of the following is valid: |F_R − g(F_EWS, α)| ≤ Δf_R, or |F_EWS − g⁻¹(F_R, σ)| ≤ Δf_EWS, where α and σ are respective optional parameters used to relate F_R and F_EWS, and Δf_R and Δf_EWS are respective margins. 53. A method performed by a radio network node for enabling mitigation of interference between an External Wireless System (EWS) and a mobile communication system, said method comprising the steps of: said radio network node configuring a User Equipment (UE) or other network node for detecting and reporting an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and said radio network node receiving event information representing said EWS event from said UE or said other network node, to enable said mitigation of interference. 54. 
The method of claim 53, wherein said first condition is expressed by a pre-defined frequency relation relating the representation of said operating frequency of the EWS and the representation of said reference frequency. 55. The method of claim 53, wherein said step of said radio network node configuring said UE comprises the step of sending a configuration request to said UE or said other network node including configuration information to enable said UE or said other network node to detect and report said EWS event, provided said at least one interference condition is fulfilled. 56. The method of claim 55, wherein said configuration information includes at least information about said frequency relation. 57. The method of claim 53, wherein said event information includes at least one of: time, location and operating frequency of said EWS event. 58. The method of claim 53, wherein said reference frequency is representative of an operating frequency of the mobile communication system. 59. The method of claim 53, wherein said first condition involves a general function g relating a representation of the operating frequency, F_EWS, of the EWS and a representation of the reference frequency, F_R, and said first condition is fulfilled provided at least one of the following is valid: |F_R − g(F_EWS, α)| ≤ Δf_R, or |F_EWS − g⁻¹(F_R, σ)| ≤ Δf_EWS, where α and σ are respective optional parameters used to relate F_R and F_EWS, and Δf_R and Δf_EWS are respective margins. 60. 
A User Equipment (UE) configured to enable mitigation of interference between an External Wireless System (EWS) and a mobile communication system, said UE comprising: communication circuitry configured for communicating with a radio network node of the mobile communication system; and processing circuitry operatively associated with the communication circuitry and configured to: detect an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and report event information representing said EWS event to the radio network node, to enable mitigation of interference. 61. The user equipment of claim 60, wherein said UE is configured to receive a configuration request from said radio network node including configuration information to enable said UE to detect said EWS event, provided said at least one interference condition is fulfilled. 62. The user equipment of claim 60, wherein said processing circuitry is configured to detect the EWS event involving EWS operation, based on a frequency relation between the representation of an operating frequency of the EWS and the representation of an operating frequency of the mobile communication system. 63. The user equipment of claim 60, wherein said processing circuitry is configured to report event information including at least one of: time, location and operating frequency of said EWS event. 64. 
A radio network node configured to enable mitigation of interference between an External Wireless System, EWS, and a mobile communication system, said radio network node comprising: communication circuitry configured to communicate with a User Equipment (UE) or other network node; and processing circuitry operatively associated with the communication circuitry and configured to: configure the User Equipment (UE) or other network node for detecting and reporting an EWS event involving EWS operation interfering with the operation of the mobile communication system, provided at least one interference condition is fulfilled, wherein said at least one interference condition includes a first condition based on a frequency relation between a representation of an operating frequency of the EWS and a representation of a reference frequency; and receive event information representing said EWS event from the UE or other network node, to enable said mitigation of interference. 65. A method performed by a User Equipment (UE) configured for operation in a mobile communication system, the method comprising: receiving configuration information from a radio network node in the mobile communication system, indicating one or more operating parameters of an External Wireless System (EWS) that potentially interferes with operation of the mobile communication system; receiving event condition information from the radio network node, said event condition information being received in, or in conjunction with the configuration information, and indicating one or more event conditions, said one or more event conditions conditioning reporting of EWS events by the UE to EWS events that involve signals satisfying a defined frequency relationship with an operating frequency of the mobile communication network; detecting EWS events, according to the one or more operating parameters indicated by the configuration information; and reporting a detected EWS event to the radio network node in response 
to determining that the detected EWS event satisfies the one or more event conditions indicated by the event condition information.
2,400
8,149
8,149
14,724,748
2,439
There is disclosed a method for facilitating transactions carried out by a mobile device, wherein: the mobile device executes a smart card application; the smart card application receives a cryptographic algorithm from a transaction server external to the mobile device; the smart card application further receives transaction data from said transaction server; the cryptographic algorithm encrypts said transaction data and stores the encrypted transaction data in a storage unit of the mobile device. Furthermore, a corresponding computer program product and a corresponding mobile device for carrying out transactions are disclosed.
1. A method for facilitating transactions carried out by a mobile device, wherein: the mobile device executes a smart card application; the smart card application receives a cryptographic algorithm from a transaction server external to the mobile device; the smart card application further receives transaction data from said transaction server; the cryptographic algorithm encrypts said transaction data and stores the encrypted transaction data in a storage unit of the mobile device. 2. A method as claimed in claim 1, wherein the cryptographic algorithm retrieves the encrypted transaction data from the storage unit and decrypts the encrypted transaction data, and wherein the smart card application provides the decrypted transaction data to a reader device external to the mobile device. 3. A method as claimed in claim 2, wherein the smart card application further receives a password from the reader device, and wherein the cryptographic algorithm takes said password as an input, such that the cryptographic algorithm correctly decrypts the encrypted transaction data only if said password is correct. 4. A method as claimed in claim 2, wherein the cryptographic algorithm further encrypts the decrypted transaction data again after decrypting the encrypted transaction data and before the smart card application provides the transaction data to the reader device. 5. A method as claimed in claim 2, wherein the smart card application provides the decrypted transaction data to the reader device via near field communication. 6. A method as claimed in claim 1, wherein the cryptographic algorithm is implemented as a white-box implementation comprising a series of look-up tables. 7. A method as claimed in claim 6, wherein at least one of the look-up tables has been compiled using a coding function which takes an identifier of the mobile device as an input. 8. A method as claimed in claim 1, wherein the transaction data are ticket data or access control data. 9. 
A computer program product comprising instructions which, when being executed by a processing unit of a mobile device, carry out or control respective steps of a method as claimed in claim 1. 10. A mobile device for carrying out transactions, the mobile device being arranged to execute a smart card application, wherein the smart card application, when being executed by the mobile device, receives a cryptographic algorithm from a transaction server external to the mobile device; wherein the smart card application, when being executed by the mobile device, further receives transaction data from said transaction server; wherein the mobile device is further arranged to execute the cryptographic algorithm and wherein the cryptographic algorithm, when being executed by the mobile device, encrypts said transaction data and stores the encrypted transaction data in a storage unit of the mobile device. 11. A mobile device as claimed in claim 10, being a mobile phone or a tablet device. 12. A mobile device as claimed in claim 10, wherein the cryptographic algorithm, when being executed by the mobile device, retrieves the encrypted transaction data from the storage unit and decrypts the encrypted transaction data, and wherein the smart card application, when being executed by the mobile device, provides the decrypted transaction data to a reader device external to the mobile device. 13. A mobile device as claimed in claim 12, comprising a near field communication unit for providing the decrypted transaction data to the reader device.
There is disclosed a method for facilitating transactions carried out by a mobile device, wherein: the mobile device executes a smart card application; the smart card application receives a cryptographic algorithm from a transaction server external to the mobile device; the smart card application further receives transaction data from said transaction server; the cryptographic algorithm encrypts said transaction data and stores the encrypted transaction data in a storage unit of the mobile device. Furthermore, a corresponding computer program product and a corresponding mobile device for carrying out transactions are disclosed.1. A method for facilitating transactions carried out by a mobile device, wherein: the mobile device executes a smart card application; the smart card application receives a cryptographic algorithm from a transaction server external to the mobile device; the smart card application further receives transaction data from said transaction server; the cryptographic algorithm encrypts said transaction data and stores the encrypted transaction data in a storage unit of the mobile device. 2. A method as claimed in claim 1, wherein the cryptographic algorithm retrieves the encrypted transaction data from the storage unit and decrypts the encrypted transaction data, and wherein the smart card application provides the decrypted transaction data to a reader device external to the mobile device. 3. A method as claimed in claim 2, wherein the smart card application further receives a password from the reader device, and wherein the cryptographic algorithm takes said password as an input, such that the cryptographic algorithm correctly decrypts the encrypted transaction data only if said password is correct. 4. 
A method as claimed in claim 2, wherein the cryptographic algorithm further encrypts the decrypted transaction data again after decrypting the encrypted transaction data and before the smart card application provides the transaction data to the reader device. 5. A method as claimed in claim 2, wherein the smart card application provides the decrypted transaction data to the reader device via near field communication. 6. A method as claimed in claim 1, wherein the cryptographic algorithm is implemented as a white-box implementation comprising a series of look-up tables. 7. A method as claimed in claim 6, wherein at least one of the look-up tables has been compiled using a coding function which takes an identifier of the mobile device as an input. 8. A method as claimed in claim 1, wherein the transaction data are ticket data or access control data. 9. A computer program product comprising instructions which, when being executed by a processing unit of a mobile device, carry out or control respective steps of a method as claimed in claim 1. 10. A mobile device for carrying out transactions, the mobile device being arranged to execute a smart card application, wherein the smart card application, when being executed by the mobile device, receives a cryptographic algorithm from a transaction server external to the mobile device; wherein the smart card application, when being executed by the mobile device, further receives transaction data from said transaction server; wherein the mobile device is further arranged to execute the cryptographic algorithm and wherein the cryptographic algorithm, when being executed by the mobile device, encrypts said transaction data and stores the encrypted transaction data in a storage unit of the mobile device. 11. A mobile device as claimed in claim 10, being a mobile phone or a tablet device. 12. 
A mobile device as claimed in claim 10, wherein the cryptographic algorithm, when being executed by the mobile device, retrieves the encrypted transaction data from the storage unit and decrypts the encrypted transaction data, and wherein the smart card application, when being executed by the mobile device, provides the decrypted transaction data to a reader device external to the mobile device. 13. A mobile device as claimed in claim 12, comprising a near field communication unit for providing the decrypted transaction data to the reader device.
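Claim 6 above describes the cryptographic algorithm as a white-box implementation built from a series of look-up tables, and claim 7 keys at least one table to a device identifier. The toy Python sketch below shows only the table-composition idea: several byte-permutation tables derived from a (hypothetical) device ID are applied in sequence to encrypt, and their inverses in reverse order to decrypt. It is nothing like a secure white-box cipher; a real implementation would embed a standard cipher such as AES into encoded tables.

```python
import hashlib

def make_table(seed: bytes) -> list:
    """Derive a byte-permutation look-up table from a seed (toy construction)."""
    perm = list(range(256))
    digest = hashlib.sha256(seed).digest()
    for i in range(255, 0, -1):            # Fisher-Yates shuffle driven by the digest
        j = digest[i % len(digest)] % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def invert(table):
    inv = [0] * 256
    for i, v in enumerate(table):
        inv[v] = i
    return inv

device_id = b"IMEI-359881234567890"         # hypothetical device identifier
tables = [make_table(device_id + bytes([r])) for r in range(3)]

def encrypt(data: bytes) -> bytes:
    out = bytearray(data)
    for t in tables:                         # apply the series of look-up tables
        out = bytearray(t[b] for b in out)
    return bytes(out)

def decrypt(data: bytes) -> bytes:
    out = bytearray(data)
    for t in reversed([invert(t) for t in tables]):
        out = bytearray(t[b] for b in out)
    return bytes(out)

ticket = b"SEAT 12A / EVENT 7731"            # e.g. ticket data per claim 8
assert decrypt(encrypt(ticket)) == ticket
```

Binding the tables to the device identifier mirrors the intent of claim 7: tables compiled for one device would not correctly decrypt data stored on another.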
2,400
8,150
8,150
15,814,630
2,425
A method and system for managing digital media campaigns accesses a set of programming data that contains various attributes of media assets that a media service provider will present to users. A media advertising campaign manager receives various criteria for the inclusion of advertisements in a particular entity's advertising campaign. The system uses the attributes in the data set to develop an advertising campaign that satisfies the entity's criteria. In various embodiments, the method system may consider the entity's preferences, seller criteria, and campaign requests for other entities.
1. A system for managing a digital media campaign, comprising: a first data store comprising a plurality of digital media files, each of which corresponds to a digital advertisement that an electronic media service provider may present to consumers; a second data store containing an inventory of digital programming files, each of which corresponds to one or more digital programming assets; a set of programming data comprising temporal attributes and non-temporal attributes for a plurality of the digital programming files; a digital media server configured to access the first data store and transmit the digital programming files to a plurality of media presentation devices; a processor; and a computer-readable medium containing programming instructions that, when executed, cause the processor to implement a digital media campaign manager by: causing an electronic device to implement a buyer-side user interface that displays a plurality of sections that provide input fields for user-selectable purchasing criteria for placement of digital advertisements in one or more of the digital programming assets, receiving, via one or more of the input fields of the buyer-side user interface, a selection of one or more of the purchasing criteria for a purchase of digital advertisements by a first buyer, causing a display device to present a seller-side user interface that comprises sections that provide input fields by which a seller may enter seller-side criteria for placement of digital advertisements in one or more of the digital programming assets, receiving, via the input fields of the seller-side user interface, a selection of one or more of the seller-side criteria for placement of digital advertisements in one or more of the digital programming assets, comparing the purchasing criteria and the seller-side criteria to the temporal attributes and non-temporal attributes in the data set to automatically develop an advertising campaign for the first buyer by selecting a 
group of the digital programming assets and automatically allocating the selected group of digital programming assets to the advertising campaign, along with scheduling parameters indicating when the digital advertisements will run within the digital programming assets in the group, so that the advertising campaign satisfies the selected purchasing criteria and the selected seller-side criteria, and causing either the buyer-side user interface or the seller-side user interface to present indicia of the advertising campaign to either the first buyer or the seller to review; and after acceptance of the advertising campaign by either the first buyer or the seller, causing the digital media server to transmit the selected group of digital programming assets to a plurality of media presentation devices with the digital media files that will run according to the scheduling parameters. 2. The system of claim 1, wherein: the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to: enable the first buyer to identify a start time, end time or duration for the advertising campaign, and enable the first buyer to identify a budget constraint for the advertising campaign; and the instructions to implement the digital media campaign manager comprise instructions to develop the advertising campaign so that it satisfies the budget constraint and the identified start time, end time or duration. 3. 
The system of claim 1, wherein the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the first buyer to: identify a monetary value that the first buyer will pay if at least a portion of the selected purchasing criteria are met; and identify one or more alternative criteria for the campaign; and identify one or more different monetary values that the first buyer will pay if any of the alternative criteria are met. 4. The system of claim 3, wherein the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the first buyer to: identify the different monetary values by expressing preferences over targeting, campaign control criteria or both. 5. The system of claim 1, wherein the received selection of one or more of the purchasing criteria comprises a target audience criterion that comprises at least one of the following: a requirement that a viewer have purchased a specified good or service within a time period; a requirement that a viewer have exhibited a viewing pattern over a time period; a requirement that a viewer has publicly expressed positive feedback on a social network for a digital media asset in the advertising campaign; or a requirement that a viewer has not publicly expressed negative feedback on a social network for a digital media asset in the advertising campaign. 6. 
The system of claim 3, wherein: at least one of the alternative criteria comprises an exclusivity preference that comprises: an exclusive time period, and a competitive restriction for the placement of advertisements by a second buyer in one or more of the digital programming assets during the exclusive time period; and the instructions to automatically develop the advertising campaign comprise instructions to: use the different monetary value for the exclusivity preference to determine whether satisfying the exclusivity preference will maximize a revenue opportunity for the media service provider, and if satisfying the exclusivity preference will maximize a revenue opportunity for media service provider, then develop an advertising campaign for the second buyer so that digital media assets of the second buyer are positioned in a manner that does not violate the exclusivity preference of the first buyer. 7. The system of claim 1, wherein: the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the first buyer to: identify an overall time period for the advertising campaign, define a time unit that is a subunit of the overall time period, and identify a smoothness criterion that represents a measurement of a maximum amount of, or a maximum change in a volume of, advertisements allocated to each of the time units for the advertising campaign; and the instructions to automatically develop the advertising campaign comprise instructions to select a group of advertisements for the campaign and automatically allocate the advertisements to spots in the campaign so that any advertisements placed in digital programming assets that are scheduled television programs are allocated to the time units in a manner that does not violate the smoothness criterion. 8. 
The system of claim 1, wherein: the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the user to: identify a separation criterion that comprises: a user-specified type of advertisement, and a minimum distance that the advertising campaign should maintain between placement of the first buyer's advertisements and placement of advertisements of the user-specified type; and the instructions to automatically develop the advertising campaign comprise instructions to select a group of advertisements for the first buyer for the campaign and automatically allocate the first buyer's advertisements to spots in the campaign so that the first buyer's advertisements and placement of advertisements of the user-specified type are positioned in a manner that does not violate the separation criterion. 9. The system of claim 1, wherein the instructions to cause the display device to present the seller-side user interface comprise instructions to provide an input by which the seller-side user interface may receive from a user: a categorization of at least one of the purchasing criteria as a preference; and for at least one of the criteria that is categorized as a preference, a plurality of levels for the preference criterion and, for each level, a level-specific bonus amount that the first buyer will pay if the preference is satisfied in the advertising campaign. 10. 
The system of claim 1, wherein: the instructions to present the seller-side user interface comprise instructions to provide an input by which the seller-side user interface may receive from a user: a categorization of two or more of the purchasing criteria as preferences, and for each of the criteria that are categorized as a preference, a bonus amount that the first buyer will pay if the preference is satisfied in the advertising campaign; and the instructions to automatically develop the advertising campaign comprise instructions to develop the advertising campaign to satisfy at least one of the preferences that maximize a revenue opportunity for the media service provider. 11. The system of claim 10, wherein: the instructions to present the seller-side user interface also comprise instructions to provide an input by which the seller-side user interface may receive from a user a constraint on how the bonus amounts may be aggregated; and the instructions to automatically develop the advertising campaign comprise instructions to determine a cost for the advertising campaign that includes the bonus amounts as limited by the constraint. 12. The system of claim 1, wherein: the instructions to implement the digital media campaign manager also comprise instructions to receive a group of zones that constitute an interconnect for at least a portion of the advertising campaign; the instructions to automatically develop the advertising campaign comprise instructions to develop the portion of the advertising campaign so that whenever a slot is allocated to an interconnect, then present an ad running in that slot to all zones that constitute the interconnect. 13. 
The system of claim 1, wherein the instructions to cause the electronic device to implement the buyer-side user interface that displays the plurality of user-selectable purchasing criteria comprise instructions to: access a set of profile data for the first buyer; for at least one of the purchasing criteria, determine a recommended value for that purchasing criteria for the first buyer based on the profile data; include the recommended value in the campaign as a default value. 14. The system of claim 1, wherein the instructions to develop the advertising campaign comprise instructions to: identify a plurality of advertisements to include in the advertising campaign; for each of the advertisements, preliminarily assign the advertisement to a position in the advertising campaign, wherein the position comprises at least one of the following: a media asset or a temporal position; and present the advertising campaign to the first buyer with at least one advertising assignment presented as an unscheduled commitment for which the position will be assigned or confirmed after the first buyer accepts the advertising campaign. 15. The system of claim 14, wherein the instructions to assign each advertisement to a position comprise instructions to: determine that a plurality of candidate positions have attributes that meet each constraint in the first buyer's purchasing criteria; select, from the candidate positions, a group of position assignments that maximize a revenue opportunity for the media service provider while remaining within the budget criteria; and include the group of media assets and position assignments in the advertising campaign. 16. 
The system of claim 1, wherein: the instructions to cause the electronic device to implement the buyer-side user interface comprise instructions to provide an input by which the buyer-side user interface may receive from the first buyer a selection of at least one preferred slot within a pod of programming, along with a monetary value that the first buyer will pay if the campaign includes an advertisement in the preferred slot; and the instructions to implement the advertising campaign comprise instructions to: identify a plurality of advertisements to include in the advertising campaign, determine whether assigning an advertisement to the preferred slot will maximize a revenue opportunity for the media service provider, and when assigning an advertisement to the preferred slot will maximize a revenue opportunity for the media service provider, assigning an advertisement to a position that corresponds to the preferred slot. 17. The system of claim 1, wherein the instructions to develop the advertising campaign further comprise additional instructions to: receive a second set of purchasing criteria for a purchase of advertisements by a second buyer; use the second set of purchasing criteria and at least some of the attributes in the set of programming data to develop a second advertising campaign for the second buyer; determine that the first advertising campaign and the second advertising campaign would, if implemented, each place an advertisement in a common position; determine whether placing the advertisement from the first advertising campaign in the common position or placing the advertisement from the second advertising campaign in the common position will maximize a revenue opportunity for the media service provider; and place the advertisement from the advertising campaign that will maximize the revenue opportunity in the common position, and modify the other advertising campaign to identify a new position for the other advertising campaign's advertisement 
such that the new position will satisfy the purchasing criteria for the buyer of the other advertising campaign. 18. The system of claim 17, wherein the instructions to develop the first and second advertising campaigns further comprise additional instructions to reoptimize an allocation of inventory to each campaign in light of a new campaign request, a change in supply of inventory, or a change in demand projections. 19. The system of claim 1, wherein the instructions further comprise instructions that, when executed, cause the processor to, after the advertisement campaign has begun: determine that the media service provider was, or likely will be, unable to satisfy a purchasing criterion that was classified as a constraint; identify a make-good action that has a value that is appropriate to compensate the first buyer for the media service provider's inability to satisfy the purchasing criterion that was classified as a constraint; and automatically cause the make-good action to be offered or given to the first buyer or to a representative of the media service provider. 20. The system of claim 1, wherein the instructions to develop the advertising campaign comprise instructions to: identify a plurality of advertisements to include in the advertising campaign; and assign the advertisements to positions in a plurality of digital programming assets that comprise at least two of the following: a television program, an on-demand program that is distributed via an online audio/video distribution service, an electronic game, an electronic publication, or a web page. 21. 
A method and system for managing digital media campaigns accesses a set of programming data that contains various attributes of media assets that a media service provider will present to users. A media advertising campaign manager receives various criteria for the inclusion of advertisements in a particular entity's advertising campaign. The system uses the attributes in the data set to develop an advertising campaign that satisfies the entity's criteria. In various embodiments, the system may consider the entity's preferences, seller criteria, and campaign requests for other entities. 1. A system for managing a digital media campaign, comprising: a first data store comprising a plurality of digital media files, each of which corresponds to a digital advertisement that an electronic media service provider may present to consumers; a second data store containing an inventory of digital programming files, each of which corresponds to one or more digital programming assets; a set of programming data comprising temporal attributes and non-temporal attributes for a plurality of the digital programming files; a digital media server configured to access the first data store and transmit the digital programming files to a plurality of media presentation devices; a processor; and a computer-readable medium containing programming instructions that, when executed, cause the processor to implement a digital media campaign manager by: causing an electronic device to implement a buyer-side user interface that displays a plurality of sections that provide input fields for user-selectable purchasing criteria for placement of digital advertisements in one or more of the digital programming assets, receiving, via one or more of the input fields of the buyer-side user interface, a selection of one or more of the purchasing criteria for a purchase of digital advertisements by a first buyer, causing a display device to present a seller-side user interface that comprises sections 
that provide input fields by which a seller may enter seller-side criteria for placement of digital advertisements in one or more of the digital programming assets, receiving, via the input fields of the seller-side user interface, a selection of one or more of the seller-side criteria for placement of digital advertisements in one or more of the digital programming assets, comparing the purchasing criteria and the seller-side criteria to the temporal attributes and non-temporal attributes in the data set to automatically develop an advertising campaign for the first buyer by selecting a group of the digital programming assets and automatically allocating the selected group of digital programming assets to the advertising campaign, along with scheduling parameters indicating when the digital advertisements will run within the digital programming assets in the group, so that the advertising campaign satisfies the selected purchasing criteria and the selected seller-side criteria, and causing either the buyer-side user interface or the seller-side user interface to present indicia of the advertising campaign to either the first buyer or the seller to review; and after acceptance of the advertising campaign by either the first buyer or the seller, causing the digital media server to transmit the selected group of digital programming assets to a plurality of media presentation devices with the digital media files that will run according to the scheduling parameters. 2. 
The system of claim 1, wherein: the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to: enable the first buyer to identify a start time, end time or duration for the advertising campaign, and enable the first buyer to identify a budget constraint for the advertising campaign; and the instructions to implement the digital media campaign manager comprise instructions to develop the advertising campaign so that it satisfies the budget constraint and the identified start time, end time or duration. 3. The system of claim 1, wherein the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the first buyer to: identify a monetary value that the first buyer will pay if at least a portion of the selected purchasing criteria are met; and identify one or more alternative criteria for the campaign; and identify one or more different monetary values that the first buyer will pay if any of the alternative criteria are met. 4. The system of claim 3, wherein the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the first buyer to: identify the different monetary values by expressing preferences over targeting, campaign control criteria or both. 5. 
The system of claim 1, wherein the received selection of one or more of the purchasing criteria comprises a target audience criterion that comprises at least one of the following: a requirement that a viewer have purchased a specified good or service within a time period; a requirement that a viewer have exhibited a viewing pattern over a time period; a requirement that a viewer has publicly expressed positive feedback on a social network for a digital media asset in the advertising campaign; or a requirement that a viewer has not publicly expressed negative feedback on a social network for a digital media asset in the advertising campaign. 6. The system of claim 3, wherein: at least one of the alternative criteria comprises an exclusivity preference that comprises: an exclusive time period, and a competitive restriction for the placement of advertisements by a second buyer in one or more of the digital programming assets during the exclusive time period; and the instructions to automatically develop the advertising campaign comprise instructions to: use the different monetary value for the exclusivity preference to determine whether satisfying the exclusivity preference will maximize a revenue opportunity for the media service provider, and if satisfying the exclusivity preference will maximize a revenue opportunity for the media service provider, then develop an advertising campaign for the second buyer so that digital media assets of the second buyer are positioned in a manner that does not violate the exclusivity preference of the first buyer. 7. 
The system of claim 1, wherein: the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the first buyer to: identify an overall time period for the advertising campaign, define a time unit that is a subunit of the overall time period, and identify a smoothness criterion that represents a measurement of a maximum amount of, or a maximum change in a volume of, advertisements allocated to each of the time units for the advertising campaign; and the instructions to automatically develop the advertising campaign comprise instructions to select a group of advertisements for the campaign and automatically allocate the advertisements to spots in the campaign so that any advertisements placed in digital programming assets that are scheduled television programs are allocated to the time units in a manner that does not violate the smoothness criterion. 8. The system of claim 1, wherein: the instructions to present the buyer-side user interface that displays a plurality of user-selectable purchasing criteria comprise instructions to enable the user to: identify a separation criterion that comprises: a user-specified type of advertisement, and a minimum distance that the advertising campaign should maintain between placement of the first buyer's advertisements and placement of advertisements of the user-specified type; and the instructions to automatically develop the advertising campaign comprise instructions to select a group of advertisements for the first buyer for the campaign and automatically allocate the first buyer's advertisements to spots in the campaign so that the first buyer's advertisements and placement of advertisements of the user-specified type are positioned in a manner that does not violate the separation criterion. 9. 
The system of claim 1, wherein the instructions to cause the display device to present the seller-side user interface comprise instructions to provide an input by which the seller-side user interface may receive from a user: a categorization of at least one of the purchasing criteria as a preference; and for at least one of the criteria that is categorized as a preference, a plurality of levels for the preference criterion and, for each level, a level-specific bonus amount that the first buyer will pay if the preference is satisfied in the advertising campaign. 10. The system of claim 1, wherein: the instructions to present the seller-side user interface comprise instructions to provide an input by which the seller-side user interface may receive from a user: a categorization of two or more of the purchasing criteria as preferences, and for each of the criteria that are categorized as a preference, a bonus amount that the first buyer will pay if the preference is satisfied in the advertising campaign; and the instructions to automatically develop the advertising campaign comprise instructions to develop the advertising campaign to satisfy at least one of the preferences that maximize a revenue opportunity for the media service provider. 11. The system of claim 10, wherein: the instructions to present the seller-side user interface also comprise instructions to provide an input by which the seller-side user interface may receive from a user a constraint on how the bonus amounts may be aggregated; and the instructions to automatically develop the advertising campaign comprise instructions to determine a cost for the advertising campaign that includes the bonus amounts as limited by the constraint. 12. 
The system of claim 1, wherein: the instructions to implement the digital media campaign manager also comprise instructions to receive a group of zones that constitute an interconnect for at least a portion of the advertising campaign; the instructions to automatically develop the advertising campaign comprise instructions to develop the portion of the advertising campaign so that whenever a slot is allocated to an interconnect, then present an ad running in that slot to all zones that constitute the interconnect. 13. The system of claim 1, wherein the instructions to cause the electronic device to implement the buyer-side user interface that displays the plurality of user-selectable purchasing criteria comprise instructions to: access a set of profile data for the first buyer; for at least one of the purchasing criteria, determine a recommended value for that purchasing criteria for the first buyer based on the profile data; include the recommended value in the campaign as a default value. 14. The system of claim 1, wherein the instructions to develop the advertising campaign comprise instructions to: identify a plurality of advertisements to include in the advertising campaign; for each of the advertisements, preliminarily assign the advertisement to a position in the advertising campaign, wherein the position comprises at least one of the following: a media asset or a temporal position; and present the advertising campaign to the first buyer with at least one advertising assignment presented as an unscheduled commitment for which the position will be assigned or confirmed after the first buyer accepts the advertising campaign. 15. 
The system of claim 14, wherein the instructions to assign each advertisement to a position comprise instructions to: determine that a plurality of candidate positions have attributes that meet each constraint in the first buyer's purchasing criteria; select, from the candidate positions, a group of position assignments that maximize a revenue opportunity for the media service provider while remaining within the budget criteria; and include the group of media assets and position assignments in the advertising campaign. 16. The system of claim 1, wherein: the instructions to cause the electronic device to implement the buyer-side user interface comprise instructions to provide an input by which the buyer-side user interface may receive from the first buyer a selection of at least one preferred slot within a pod of programming, along with a monetary value that the first buyer will pay if the campaign includes an advertisement in the preferred slot; and the instructions to implement the advertising campaign comprise instructions to: identify a plurality of advertisements to include in the advertising campaign, determine whether assigning an advertisement to the preferred slot will maximize a revenue opportunity for the media service provider, and when assigning an advertisement to the preferred slot will maximize a revenue opportunity for the media service provider, assigning an advertisement to a position that corresponds to the preferred slot. 17. 
The system of claim 1, wherein the instructions to develop the advertising campaign further comprise additional instructions to: receive a second set of purchasing criteria for a purchase of advertisements by a second buyer; use the second set of purchasing criteria and at least some of the attributes in the set of programming data to develop a second advertising campaign for the second buyer; determine that the first advertising campaign and the second advertising campaign would, if implemented, each place an advertisement in a common position; determine whether placing the advertisement from the first advertising campaign in the common position or placing the advertisement from the second advertising campaign in the common position will maximize a revenue opportunity for the media service provider; and place the advertisement from the advertising campaign that will maximize the revenue opportunity in the common position, and modify the other advertising campaign to identify a new position for the other advertising campaign's advertisement such that the new position will satisfy the purchasing criteria for the buyer of the other advertising campaign. 18. The system of claim 17, wherein the instructions to develop the first and second advertising campaigns further comprise additional instructions to reoptimize an allocation of inventory to each campaign in light of a new campaign request, a change in supply of inventory, or a change in demand projections. 19. 
The system of claim 1, wherein the instructions further comprise instructions that, when executed, cause the processor to, after the advertisement campaign has begun: determine that the media service provider was, or likely will be, unable to satisfy a purchasing criterion that was classified as a constraint; identify a make-good action that has a value that is appropriate to compensate the first buyer for the media service provider's inability to satisfy the purchasing criterion that was classified as a constraint; and automatically cause the make-good action to be offered or given to the first buyer or to a representative of the media service provider. 20. The system of claim 1, wherein the instructions to develop the advertising campaign comprise instructions to: identify a plurality of advertisements to include in the advertising campaign; and assign the advertisements to positions in a plurality of digital programming assets that comprise at least two of the following: a television program, an on-demand program that is distributed via an online audio/video distribution service, an electronic game, an electronic publication, or a web page. 21. The system of claim 1, wherein the instructions further comprise instructions that, when executed, cause the processor to, after presenting the indicia of the advertising campaign: receive a response comprising an acceptance of a first portion of the advertising campaign and a rejection of a second portion of the advertising campaign; determine an updated price for the first portion of the advertising campaign; modify the advertising campaign to exclude the second portion of the advertising campaign; and output the modified advertising campaign and the updated price via the buyer-side user interface and/or the seller-side user interface for review. 22. 
The system of claim 1, wherein the instructions to develop the advertising campaign also comprise instructions to: identify an alternate criterion, wherein the alternate criterion comprises an alternative or supplement to at least one of the purchasing criteria; use the temporal attributes and the non-temporal attributes in the data set to automatically develop an alternative advertising campaign for the first buyer that satisfies the purchasing criteria as modified by the alternate criterion; and present the alternative advertising campaign to the first buyer via the buyer-side user interface and/or the seller via the seller-side user interface. 23. The system of claim 1, wherein the instructions to develop the advertising campaign comprise instructions to: identify an undersell constraint, wherein the undersell constraint comprises a restriction on sale of advertisements for a particular media asset or other unit of inventory; and when automatically developing the advertising campaign, doing so such that the advertising campaign satisfies the undersell constraint. 24. The system of claim 1, wherein the instructions to develop the advertising campaign comprise instructions to: relax a supply constraint throughout a campaign on a per-inventory-segment basis; and reoptimize one or more campaigns based on the relaxing. 25. The system of claim 1, further comprising additional programming instructions that cause the processor to provide a property manager configured to: receive temporal attributes, non-temporal attributes or both for a new media asset; and add the received attributes for the new media asset to the set of programming data for use in future advertising campaigns. 26. 
The system of claim 1, wherein the seller-side criteria comprise one or more of the following: a premium value to be added to a bid or budget received from the first buyer; a requirement to provide the first buyer with an audience having one or more specified attributes; or a rule to factor a make-good cost in a revenue analysis when developing the advertising campaign. 27. The system of claim 1, wherein the instructions to implement the digital media campaign manager further comprise instructions that, when executed: cause a processor to receive a modification of the advertising campaign; and cause a processor to modify the advertising campaign to implement the received modification. 28. The system of claim 1, further comprising: a set of additional programming data comprising temporal attributes and non-temporal attributes for a plurality of digital media assets that a second media service provider will present to consumers; and wherein the instructions to develop the advertising campaign further comprise instructions to also use the parameters in the data set for the second media service provider so that the advertising campaign allocates advertisements to media assets for each of the media service providers. 29. The system of claim 1, further comprising additional programming instructions to: determine a first cost to reach a target audience using a first type of targeting for the placement of the digital advertisements in one or more of the digital programming assets; determine a second cost to reach the target audience using a second type of targeting, wherein the second type of targeting is broader or narrower than the first type of targeting; and cause the seller-side user interface or the buyer-side user interface to output the first and second costs for comparison. 30. 
A system for managing a digital media campaign, comprising: a data store containing an inventory of digital programming files, each of which corresponds to one or more digital programming assets; a set of programming data comprising temporal attributes and non-temporal attributes for a plurality of the digital programming files; a digital media server configured to access the data store and transmit the digital programming files to a plurality of media presentation devices; a processor; and a computer-readable medium containing programming instructions that, when executed, cause the processor to implement a digital media campaign manager by: causing an electronic device to implement a buyer-side user interface that displays a plurality of sections that provide input fields for user-selectable purchasing criteria for placement of digital advertisements in one or more of the digital programming assets, receiving, via one or more of the input fields of the buyer-side user interface, a selection of one or more of the purchasing criteria for a purchase of digital advertisements by a first buyer, causing a display device to present a seller-side user interface that comprises sections that provide input fields by which a seller may enter seller-side criteria for placement of digital advertisements in one or more of the digital programming assets, receiving, via the input fields of the seller-side user interface, a selection of one or more of the seller-side criteria for placement of digital advertisements in one or more of the digital programming assets, comparing the purchasing criteria and the seller-side criteria to the temporal attributes and non-temporal attributes in the data set to develop an advertising campaign for the first buyer by selecting a group of the digital programming assets and automatically allocating the selected group of digital programming assets to the advertising campaign, along with scheduling parameters indicating when the digital 
advertisements will run within the digital programming assets in the group, so that the advertising campaign satisfies the selected purchasing criteria and the selected seller-side criteria, causing either the buyer-side user interface or the seller-side user interface to present indicia of the advertising campaign to either the first buyer or the seller to review, and after acceptance of the advertising campaign by either the first buyer or the seller, generating a campaign file or a scheduling file containing programming instructions and parameters for the advertising campaign that are callable by an application programming interface or usable by a digital advertising insertion system to place digital advertisements in the digital programming assets.
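Claim 1 above describes comparing a buyer's purchasing criteria and the seller-side criteria against the temporal and non-temporal attributes of programming assets to automatically allocate ad spots under a budget. The sketch below is a minimal, hypothetical illustration of that matching step only: the names (`Asset`, `PurchasingCriteria`, `allocate_campaign`) are invented, and a simple audience-per-dollar greedy heuristic stands in for whatever optimizer the patent's system actually uses.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    genre: str       # non-temporal attribute
    hour: int        # temporal attribute: scheduled air hour (0-23)
    price: float     # seller's asking price for one ad spot
    audience: int    # projected audience size

@dataclass
class PurchasingCriteria:
    budget: float    # budget constraint (claim 2)
    genres: set      # targeted asset genres
    hours: range     # acceptable air-time window

def allocate_campaign(assets, buyer, premium=0.0):
    """Greedy sketch: keep assets that satisfy the buyer's criteria,
    prefer high audience-per-dollar spots, and stop at the budget."""
    eligible = [a for a in assets
                if a.genre in buyer.genres and a.hour in buyer.hours]
    eligible.sort(key=lambda a: a.audience / a.price, reverse=True)
    plan, spent = [], 0.0
    for a in eligible:
        # Seller-side premium criterion (claim 26) added to the cost.
        cost = a.price * (1 + premium)
        if spent + cost <= buyer.budget:
            plan.append((a.name, a.hour, cost))
            spent += cost
    return plan, spent
```

For example, a buyer with a 300-unit budget restricted to prime-time drama receives only the eligible spots that fit the budget; the exclusivity, smoothness, and separation criteria of claims 6 through 8 would appear as additional checks inside the allocation loop.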
2,400
8,151
8,151
12,857,727
2,424
Described herein are systems, methods and apparatus which allow users to remotely access content from other devices. One embodiment provides a method for presenting available content to a user via a television receiver. A first content selection menu is output for display that includes at least a first content selection item associated with locally accessible content and a second content selection item associated with an external device. Responsive to a request associated with the second content selection item, the method includes outputting for display a second content selection menu. The second content selection menu specifies at least one content item remotely available through the external device. A user provides a selection of the at least one content item and the television receiver receives the selected content item from the external device and outputs the selected content item for presentation by a presentation device.
1. A method for presenting available content to a user, the method comprising: outputting for display from a television receiver a first content selection menu, the first content selection menu including at least a first content selection item associated with locally accessible content and a second content selection item associated with an external device; receiving a request, at the television receiver, associated with the second content selection item; outputting for display from the television receiver a second content selection menu, the second content selection menu specifying at least one content item remotely available through the external device; receiving, at the television receiver, a selection of the at least one content item; receiving the selected content item at the television receiver from the external device; and outputting the selected content item from the television receiver for presentation by a presentation device. 2. The method of claim 1, wherein the first content selection menu comprises an electronic programming guide. 3. The method of claim 2, wherein the electronic programming guide comprises a grid, the grid including a plurality of cells, with at least a portion of the cells of the grid corresponding with particular television programs. 4. The method of claim 3, wherein the locally accessible content is associated with at least one linear television channel receivable by the television receiver through a communicatively coupled television distribution network. 5. The method of claim 4, wherein the content item comprises an audio/video program stored on a storage medium of the external device. 6. The method of claim 1, further comprising: responsive to the request, transmitting an authentication command from the television receiver to the external device. 7. 
The method of claim 6, wherein transmitting the authentication command further comprises: storing at least one authentication credential at the television receiver prior to outputting for display the first content selection menu; and transmitting the stored authentication credential from the television receiver to the external device responsive to the request. 8. A television receiver comprising: a first communication interface operable to receive at least one audio/video program from a television distribution network, the audio/video program associated with a first content identifier; a second communication interface operable to communicate with an external device, the external device associated with a second content identifier; and control logic operable to: receive a first request associated with the second content identifier; output a first content selection menu for presentation by the presentation device responsive to the first request, the first content selection menu specifying at least one content item remotely available through the external device; and receive a selection associated with the at least one remote content item; the second communication interface operable to receive the selected content item at the television receiver from the external device; the control logic operable to output the selected content item for presentation by the presentation device. 9. The television receiver of claim 8, wherein the control logic is further operable to receive external data specifying the at least one content item remotely available through the external device, the control logic operable to generate the first content selection menu based on the external data. 10. The television receiver of claim 9, wherein the external data is received from the external device, the control logic operable to generate the first content selection menu based on the external data. 11. 
The television receiver of claim 9, wherein the external data is received from an external server separate from the external device. 12. The television receiver of claim 8, wherein the control logic is further operable to: output a second content selection menu for presentation by the presentation device; the second selection menu specifying the first content identifier and the second content identifier; and receive a second request associated with the second content identifier responsive to output of the second content selection menu; wherein the first content selection menu is output responsive to the second request. 13. The television receiver of claim 8, wherein the first and second content identifiers comprise channel identifiers and the first request comprises user input specifying the second channel identifier. 14. A television receiver comprising: a first communication interface operable to receive at least one audio/video program from a television distribution network, the audio/video program associated with a first content identifier, and a second content identifier associated with a device selection menu; a second communication interface operable to communicate with at least one external device; control logic operable to: receive a first request associated with the second content identifier; output the device selection menu for presentation by the presentation device responsive to the first request, the device selection menu specifying the at least one external device; and receive a selection associated with the at least one external device responsive to the output of the device selection menu; the second communication interface operable to communicate with the selected external device to receive a remote content item; the control logic operable to output the remote content item for presentation by the presentation device. 15. 
The television receiver of claim 14, wherein the control logic is further operable to: output a first content selection menu for presentation by the presentation device, the first content selection menu specifying the first content identifier and the second content identifier; and receive a second request associated with the second content identifier responsive to output of the first content selection menu; wherein the first content selection menu is output responsive to the second request. 16. The television receiver of claim 15, wherein the content selection menu specifies a plurality of external devices accessible for selection. 17. The television receiver of claim 16, wherein the control logic is further operable to: output a second content selection menu for presentation by the presentation device, the second content selection menu specifying at least one content item remotely available through the selected external device; receive selection of the at least one content item; and initiate a content request to the selected external device for the selected content item; the second communication interface operable to receive the selected content item from the selected external device; and the control logic operable to output the selected content item for presentation by the presentation device. 18. The television receiver of claim 15, wherein the content selection menu comprises a grid, the grid including a plurality of cells, with at least a first portion of the cells of the grid corresponding with particular television programs and a second portion of the cells of the grid corresponding with external devices. 19. The television receiver of claim 15, wherein the first and second content identifiers comprise channel identifiers and the first request comprises user input specifying the second channel identifier. 20. 
The television receiver of claim 15, further comprising: a storage device operable to store at least one authentication credential prior to the control logic outputting for display the device selection menu; the control logic operable to initiate transmission of the stored authentication credential to the external device responsive to the first request.
Described herein are systems, methods and apparatus which allow users to remotely access content from other devices. One embodiment provides a method for presenting available content to a user via a television receiver. A first content selection menu is output for display that includes at least a first content selection item associated with locally accessible content and a second content selection item associated with an external device. Responsive to a request associated with the second content selection item, the method includes outputting for display a second content selection menu. The second content selection menu specifies at least one content item remotely available through the external device. A user provides a selection of the at least one content item and the television receiver receives the selected content item from the external device and outputs the selected content item for presentation by a presentation device.1. A method for presenting available content to a user, the method comprising: outputting for display from a television receiver a first content selection menu, the first content selection menu including at least a first content selection item associated with locally accessible content and a second content selection item associated with an external device; receiving a request, at the television receiver, associated with the second content selection item; outputting for display from the television receiver a second content selection menu, the second content selection menu specifying at least one content item remotely available through the external device; receiving, at the television receiver, a selection of the at least one content item; receiving the selected content item at the television receiver from the external device; and outputting the selected content item from the television receiver for presentation by a presentation device. 2. The method of claim 1, wherein the first content selection menu comprises an electronic programming guide. 
3. The method of claim 2, wherein the electronic programming guide comprises a grid, the grid including a plurality of cells, with at least a portion of the cells of the grid corresponding with particular television programs. 4. The method of claim 3, wherein the locally accessible content is associated with at least one linear television channel receivable by the television receiver through a communicatively coupled television distribution network. 5. The method of claim 4, wherein the content item comprises an audio/video program stored on a storage medium of the external device. 6. The method of claim 1, further comprising: responsive to the request, transmitting an authentication command from the television receiver to the external device. 7. The method of claim 6, wherein transmitting the authentication command further comprises: storing at least one authentication credential at the television receiver prior to outputting for display the first content selection menu; and transmitting the stored authentication credential from the television receiver to the external device responsive to the request. 8. 
A television receiver comprising: a first communication interface operable to receive at least one audio/video program from a television distribution network, the audio/video program associated with a first content identifier; a second communication interface operable to communicate with an external device, the external device associated with a second content identifier; and control logic operable to: receive a first request associated with the second content identifier; output a first content selection menu for presentation by the presentation device responsive to the first request, the first content selection menu specifying at least one content item remotely available through the external device; and receive a selection associated with the at least one remote content item; the second communication interface operable to receive the selected content item at the television receiver from the external device; the control logic operable to output the selected content item for presentation by the presentation device. 9. The television receiver of claim 8, wherein the control logic is further operable to receive external data specifying the at least one content item remotely available through the external device, the control logic operable to generate the first content selection menu based on the external data. 10. The television receiver of claim 9, wherein the external data is received from the external device, the control logic operable to generate the first content selection menu based on the external data. 11. The television receiver of claim 9, wherein the external data is received from an external server separate from the external device. 12. 
The television receiver of claim 8, wherein the control logic is further operable to: output a second content selection menu for presentation by the presentation device; the second selection menu specifying the first content identifier and the second content identifier; and receive a second request associated with the second content identifier responsive to output of the second content selection menu; wherein the first content selection menu is output responsive to the second request. 13. The television receiver of claim 8, wherein the first and second content identifiers comprise channel identifiers and the first request comprises user input specifying the second channel identifier. 14. A television receiver comprising: a first communication interface operable to receive at least one audio/video program from a television distribution network, the audio/video program associated with a first content identifier, and a second content identifier associated with a device selection menu; a second communication interface operable to communicate with at least one external device; control logic operable to: receive a first request associated with the second content identifier; output the device selection menu for presentation by the presentation device responsive to the first request, the device selection menu specifying the at least one external device; and receive a selection associated with the at least one external device responsive to the output of the device selection menu; the second communication interface operable to communicate with the selected external device to receive a remote content item; the control logic operable to output the remote content item for presentation by the presentation device. 15. 
The television receiver of claim 14, wherein the control logic is further operable to: output a first content selection menu for presentation by the presentation device, the first content selection menu specifying the first content identifier and the second content identifier; and receive a second request associated with the second content identifier responsive to output of the first content selection menu; wherein the first content selection menu is output responsive to the second request. 16. The television receiver of claim 15, wherein the content selection menu specifies a plurality of external devices accessible for selection. 17. The television receiver of claim 16, wherein the control logic is further operable to: output a second content selection menu for presentation by the presentation device, the second content selection menu specifying at least one content item remotely available through the selected external device; receive selection of the at least one content item; and initiate a content request to the selected external device for the selected content item; the second communication interface operable to receive the selected content item from the selected external device; and the control logic operable to output the selected content item for presentation by the presentation device. 18. The television receiver of claim 15, wherein the content selection menu comprises a grid, the grid including a plurality of cells, with at least a first portion of the cells of the grid corresponding with particular television programs and a second portion of the cells of the grid corresponding with external devices. 19. The television receiver of claim 15, wherein the first and second content identifiers comprise channel identifiers and the first request comprises user input specifying the second channel identifier. 20. 
The television receiver of claim 15, further comprising: a storage device operable to store at least one authentication credential prior to the control logic outputting for display the device selection menu; the control logic operable to initiate transmission of the stored authentication credential to the external device responsive to the first request.
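The two-level menu flow in claims 1-5 (a first menu mixing locally receivable channels with entries for external devices, then a second menu of the items remotely available through a selected device) can be sketched as follows. This is a minimal illustration only; the class and item names (`ContentSelectionMenu`, `ContentItem`, `BedroomDVR`) are hypothetical and not drawn from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ContentItem:
    title: str
    source: str  # "local" for receiver channels, "external" for a paired device

@dataclass
class ContentSelectionMenu:
    """Sketch of the claimed two-level selection: the first menu lists
    local content items alongside one entry per external device; picking
    a device entry yields a second menu of its remotely available items."""
    local_items: List[ContentItem] = field(default_factory=list)
    external_catalogs: Dict[str, List[ContentItem]] = field(default_factory=dict)

    def first_menu(self) -> List[str]:
        # Local programs plus one selectable entry per external device.
        return [it.title for it in self.local_items] + list(self.external_catalogs)

    def second_menu(self, device: str) -> List[str]:
        # Items remotely available through the selected external device.
        return [it.title for it in self.external_catalogs[device]]

menu = ContentSelectionMenu(
    local_items=[ContentItem("Evening News", "local")],
    external_catalogs={"BedroomDVR": [ContentItem("Recorded Movie", "external")]},
)
print(menu.first_menu())              # ['Evening News', 'BedroomDVR']
print(menu.second_menu("BedroomDVR"))  # ['Recorded Movie']
```

In the claimed receiver the second menu would be generated from external data (claims 9-11) and the selected item streamed over the second communication interface; the dictionary here merely stands in for that catalog.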
2,400
8,152
8,152
15,421,285
2,433
A novel method for distributing firewall configuration of a software defined data center is provided. The network manager of the data center receives update requests from tenants of the data center and correspondingly generates update fragments and delivers the generated update fragments to local control planes controlling the enforcing devices. Each local control plane in turn integrates the update fragments it receives into its firewall rules table. For each rule and/or section thusly integrated, the local control plane uses the rule's or the section's assigned priority number to establish ordering in the firewall rules table of the local control plane.
1. A software-defined data center comprising: a firewall configuration data store storing firewall rules defined by a plurality of tenants; a plurality of host machines, each host machine operating (i) a set of datapath elements for a plurality of tenants of the software defined data center and (ii) a local control plane for implementing a firewall rules table comprising a set of firewall rules that is to be enforced on the set of datapath elements of the host machines; and a network manager for (i) managing the firewall configuration data store and receiving updates to the firewall configuration data store from the plurality of tenants, (ii) generating an update fragment that is relevant to a particular local control plane and transmitting the identified relevant update fragment to the particular local control plane, wherein the particular local control plane receives the update fragment and assembles a firewall rules table based on the received update fragment. 2. The software-defined data center of claim 1, wherein the set of datapath elements enforcing a rules table comprises a set of logical forwarding elements and a set of virtual machines defined by a tenant. 3. The software-defined data center of claim 1, wherein the set of datapath elements for enforcing a particular firewall rule table comprises a VNIC that is enforcing a set of firewall rules of the particular firewall rule table. 4. The software-defined data center of claim 1, wherein the firewall configuration data store is divided into a plurality of sections, each section comprising a set of firewall rules, wherein each tenant has a set of corresponding sections in the plurality of sections. 5. The software-defined data center of claim 1, wherein the firewall rules table assembled by the particular local control plane is for implementing firewall policies defined by a particular tenant. 6. 
The software-defined data center of claim 4, wherein the firewall rules table of the particular local control plane comprises a first set of sections comprising rules defined by a subset of the tenants in the plurality of tenants and a second set of sections comprising rules applicable to all tenants of the software-defined data center. 7. The software-defined data center of claim 6, wherein the firewall rules table of the particular local control plane does not store a section comprising rules defined by another tenant. 8. The software-defined data center of claim 1, wherein the update fragment comprises a set of rules, each rule associated with a priority number, wherein the particular local control plane assembles the rules in the fragment into its firewall rules table based on the priority numbers of the rules. 9. The software-defined data center of claim 8, wherein the update fragment comprises a set of rules belonging to a particular section of the firewall configuration, the particular section associated with a section priority number, wherein the particular local control plane assembles the rules in the fragment into its firewall rules table based on the section priority number of the particular section. 10. The software-defined data center of claim 9, wherein the priority numbers assigned to different sections are sparsely allocated. 11. 
A method of providing firewall services at a software defined data center, the method comprising: receiving an update to a firewall configuration database from one of a plurality of tenants, wherein the firewall configuration is for implementing firewall protection for the plurality of tenants at a set of enforcing devices; based on the received update, identifying a particular enforcing device to which the update is relevant and generating an update fragment for the identified particular enforcing device, wherein the update fragment comprises an update to a logical entity in the firewall configuration and a priority number for positioning the logical entity relative to other logical entities in the firewall configuration; and transmitting the identified relevant update fragment to the particular enforcing device. 12. The method of claim 11, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a section of the firewall configuration. 13. The method of claim 11, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a rule in a section of the firewall configuration. 14. The method of claim 11, wherein the particular enforcing device implements the firewall protection by using a firewall rules table, wherein the update fragment is used by the particular enforcing device to incorporate the logical entity into the firewall rules table, and the priority number of the logical entity is used to position the logical entity relative to other logical entities in the firewall rules table. 15. 
The method of claim 14, wherein the particular enforcing device is a first enforcing device and the update fragment is a first update fragment, the method further comprising identifying a second enforcing device to which the received update is relevant and generating a second update fragment for the second enforcing device based on the received update to the firewall configuration database. 16. A method of providing firewall services at a software defined data center comprising host machines, the method comprising: controlling a set of datapath elements in a host machine in order to enforce firewall protection according to a firewall rules table; receiving an update fragment from a network manager, wherein the update fragment comprises an update to a logical entity in a firewall configuration of the data center and a priority number for positioning the logical entity relative to other logical entities in the firewall configuration; and incorporating the update fragment to the firewall rules table, wherein the priority number of the logical entity is used for positioning the logical entity relative to other logical entities in the firewall rules table. 17. The method of claim 16, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a section of the firewall configuration. 18. The method of claim 16, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a rule in a section of the firewall configuration. 19. The method of claim 16, wherein the priority number of the logical entity determines an order by which the logical entity is examined for the purpose of identifying a matching firewall rule for filtering a packet. 20. 
The method of claim 16, wherein the set of datapath elements being controlled for enforcing firewall protection comprises a virtual network interface controller (VNIC) associated with a virtual machine, wherein the firewall rules table is used to filter packets destined for the virtual machine through the VNIC.
A novel method for distributing firewall configuration of a software defined data center is provided. The network manager of the data center receives update requests from tenants of the data center and correspondingly generates update fragments and delivers the generated update fragments to local control planes controlling the enforcing devices. Each local control plane in turn integrates the update fragments it receives into its firewall rules table. For each rule and/or section thusly integrated, the local control plane uses the rule's or the section's assigned priority number to establish ordering in the firewall rules table of the local control plane.1. A software-defined data center comprising: a firewall configuration data store storing firewall rules defined by a plurality of tenants; a plurality of host machines, each host machine operating (i) a set of datapath elements for a plurality of tenants of the software defined data center and (ii) a local control plane for implementing a firewall rules table comprising a set of firewall rules that is to be enforced on the set of datapath elements of the host machines; and a network manager for (i) managing the firewall configuration data store and receiving updates to the firewall configuration data store from the plurality of tenants, (ii) generating an update fragment that is relevant to a particular local control plane and transmitting the identified relevant update fragment to the particular local control plane, wherein the particular local control plane receives the update fragment and assembles a firewall rules table based on the received update fragment. 2. The software-defined data center of claim 1, wherein the set of datapath elements enforcing a rules table comprises a set of logical forwarding elements and a set of virtual machines defined by a tenant. 3. 
The software-defined data center of claim 1, wherein the set of datapath elements for enforcing a particular firewall rule table comprises a VNIC that is enforcing a set of firewall rules of the particular firewall rule table. 4. The software-defined data center of claim 1, wherein the firewall configuration data store is divided into a plurality of sections, each section comprising a set of firewall rules, wherein each tenant has a set of corresponding sections in the plurality of sections. 5. The software-defined data center of claim 1, wherein the firewall rules table assembled by the particular local control plane is for implementing firewall policies defined by a particular tenant. 6. The software-defined data center of claim 4, wherein the firewall rules table of the particular local control plane comprises a first set of sections comprising rules defined by a subset of the tenants in the plurality of tenants and a second set of sections comprising rules applicable to all tenants of the software-defined data center. 7. The software-defined data center of claim 6, wherein the firewall rules table of the particular local control plane does not store a section comprising rules defined by another tenant. 8. The software-defined data center of claim 1, wherein the update fragment comprises a set of rules, each rule associated with a priority number, wherein the particular local control plane assembles the rules in the fragment into its firewall rules table based on the priority numbers of the rules. 9. The software-defined data center of claim 8, wherein the update fragment comprises a set of rules belonging to a particular section of the firewall configuration, the particular section associated with a section priority number, wherein the particular local control plane assembles the rules in the fragment into its firewall rules table based on the section priority number of the particular section. 10. 
The software-defined data center of claim 9, wherein the priority numbers assigned to different sections are sparsely allocated. 11. A method of providing firewall services at a software defined data center, the method comprising: receiving an update to a firewall configuration database from one of a plurality of tenants, wherein the firewall configuration is for implementing firewall protection for the plurality of tenants at a set of enforcing devices; based on the received update, identifying a particular enforcing device to which the update is relevant and generating an update fragment for the identified particular enforcing device, wherein the update fragment comprises an update to a logical entity in the firewall configuration and a priority number for positioning the logical entity relative to other logical entities in the firewall configuration; and transmitting the identified relevant update fragment to the particular enforcing device. 12. The method of claim 11, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a section of the firewall configuration. 13. The method of claim 11, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a rule in a section of the firewall configuration. 14. The method of claim 11, wherein the particular enforcing device implements the firewall protection by using a firewall rules table, wherein the update fragment is used by the particular enforcing device to incorporate the logical entity into the firewall rules table, and the priority number of the logical entity is used to position the logical entity relative to other logical entities in the firewall rules table. 15. 
The method of claim 14, wherein the particular enforcing device is a first enforcing device and the update fragment is a first update fragment, the method further comprising identifying a second enforcing device to which the received update is relevant and generating a second update fragment for the second enforcing device based on the received update to the firewall configuration database. 16. A method of providing firewall services at a software defined data center comprising host machines, the method comprising: controlling a set of datapath elements in a host machine in order to enforce firewall protection according to a firewall rules table; receiving an update fragment from a network manager, wherein the update fragment comprises an update to a logical entity in a firewall configuration of the data center and a priority number for positioning the logical entity relative to other logical entities in the firewall configuration; and incorporating the update fragment to the firewall rules table, wherein the priority number of the logical entity is used for positioning the logical entity relative to other logical entities in the firewall rules table. 17. The method of claim 16, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a section of the firewall configuration. 18. The method of claim 16, wherein the firewall configuration is divided into sections that each include a set of rules, wherein the logical entity is a rule in a section of the firewall configuration. 19. The method of claim 16, wherein the priority number of the logical entity determines an order by which the logical entity is examined for the purpose of identifying a matching firewall rule for filtering a packet. 20. 
The method of claim 16, wherein the set of datapath elements being controlled for enforcing firewall protection comprises a virtual network interface controller (VNIC) associated with a virtual machine, wherein the firewall rules table is used to filter packets destined for the virtual machine through the VNIC.
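The priority-number mechanism running through claims 8-10, 16, and 19 (a local control plane positions each rule received in an update fragment by its assigned priority, with sparse numbering leaving room for later inserts between existing entries) can be sketched as a sorted insertion. This is an illustrative sketch, not the patented implementation; `FirewallRulesTable` and the rule strings are invented for the example.

```python
import bisect

class FirewallRulesTable:
    """Sketch of a local control plane integrating update fragments:
    each rule carries a priority number that fixes its position in the
    table; sparsely allocated priorities let later fragments land
    between existing rules without renumbering."""
    def __init__(self):
        self._priorities = []   # sorted priority numbers
        self._rules = []        # rules, kept parallel to _priorities

    def integrate(self, fragment):
        # fragment: iterable of (priority, rule) pairs from the network manager
        for priority, rule in fragment:
            i = bisect.bisect_left(self._priorities, priority)
            if i < len(self._priorities) and self._priorities[i] == priority:
                self._rules[i] = rule          # update an existing entry
            else:
                self._priorities.insert(i, priority)
                self._rules.insert(i, rule)    # new entry, ordered by priority

    def match_order(self):
        # Order in which rules are examined when filtering a packet (claim 19).
        return list(self._rules)

table = FirewallRulesTable()
table.integrate([(100, "allow tenant-A web"), (300, "drop all")])
table.integrate([(200, "allow tenant-A ssh")])  # lands between 100 and 300
print(table.match_order())
```

Section-level priorities (claim 9) would work the same way one level up: sections are ordered by section priority, and rules are ordered within their section.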
2,400
8,153
8,153
14,792,950
2,454
A mechanism is provided for monitoring performance of a computing machine. A current indicator is collected representing a current value of a performance indicator of the computing machine. The current indicator is compared with at least one previous indicator representing a previous value of the performance indicator of the computing machine. Responsive to the current indicator being outside a threshold of the at least one previous indicator, the current indicator is transmitted remotely to a resource via a communication network. Responsive to the current indicator being within the threshold of the at least one previous indicator, the transmission of the current indicator is disabled.
1. A method, in a data processing system, for monitoring performance of a computing machine, the method comprising the repetition of: collecting, by a processor in the data processing system, a current indicator representing a current value of a performance indicator of the computing machine; comparing, by the processor, the current indicator with at least one previous indicator representing a previous value of the performance indicator of the computing machine; responsive to the current indicator being outside a threshold of the at least one previous indicator, transmitting, by the processor, the current indicator remotely to a resource via a communication network; and responsive to the current indicator being within the threshold of the at least one previous indicator, disabling, by the processor, the transmission of the current indicator. 2. The method according to claim 1, wherein the disabling of the transmission of the current indicator comprises: disabling, by the processor, the transmission of the current indicator according to a comparison of the current indicator with a last indicator representing a last value of the performance indicator transmitted remotely to the resource in the cloud environment via the communication network. 3. The method according to claim 2, wherein the threshold is a variation range comprising the last indicator. 4. The method according to claim 3, wherein the variation range is centered on the last indicator. 5. 
The method according to claim 1, further comprising: receiving, by the processor, an indication of each transmitted current indicator in a monitoring interval comprising a plurality of monitoring instants each one associated with the collection of a corresponding current indicator, and displaying, by the processor, a monitoring report, for each monitoring instant the monitoring report comprising a representation of the corresponding transmitted current indicator when available or a representation of a variation range for a last available transmitted current indicator otherwise. 6. The method according to claim 5, wherein the displaying of the monitoring report comprises: displaying, by the processor, a representation of a bar for each monitoring instant at which the corresponding transmitted current indicator is available, the bar having a width corresponding to the variation range of the transmitted current indicator. 7. The method according to claim 1, further comprising: enabling, by the processor, the transmission of the current indicator irrespective of the comparison between the current indicator and the at least one previous indicator according to a comparison of the current indicator with an alarm threshold. 8. The method according to claim 1, wherein the collecting of the current indicator comprises: collecting, by the processor, the current indicator periodically. 9. The method according to claim 1, wherein the disabling of the transmission of the current indicator comprises: intercepting, by the processor, the transmission of the current indicator within the computing machine, and discarding or relaying, by the processor, the current indicator responsive to the current indicator being within the threshold of the at least one previous indicator. 10. The method according to claim 1, wherein the resource is a monitoring application as a service in a cloud provider. 11. 
A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing system, causes the computing system to: collect a current indicator representing a current value of a performance indicator of a computing machine; compare the current indicator with at least one previous indicator representing a previous value of the performance indicator of the computing machine; responsive to the current indicator being outside a threshold of the at least one previous indicator, transmit the current indicator remotely to a resource via a communication network; and responsive to the current indicator being within the threshold of the at least one previous indicator, disable the transmission of the current indicator. 12. A system comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: collect a current indicator representing a current value of a performance indicator of a computing machine; compare the current indicator with at least one previous indicator representing a previous value of the performance indicator of the computing machine; responsive to the current indicator being outside a threshold of the at least one previous indicator, transmit the current indicator remotely to a resource via a communication network; and responsive to the current indicator being within the threshold of the at least one previous indicator, disable the transmission of the current indicator. 13. 
The system according to claim 12, wherein the instructions to disable the transmission of the current indicator comprise instructions that cause the processor to: disable the transmission of the current indicator according to a comparison of the current indicator with a last indicator representing a last value of the performance indicator transmitted remotely to the resource via the communication network. 14. The system according to claim 12, wherein the instructions further cause the processor to: receive an indication of each transmitted current indicator in a monitoring interval comprising a plurality of monitoring instants each one associated with the collection of a corresponding current indicator, and display a monitoring report, for each monitoring instant the monitoring report comprising a representation of the corresponding transmitted current indicator when available or a representation of a variation range for a last available transmitted current indicator otherwise. 15. The system according to claim 14, wherein the instructions to display the monitoring report comprise instructions that cause the processor to: display a representation of a bar for each monitoring instant at which the corresponding transmitted current indicator is available, the bar having a width corresponding to the variation range of the transmitted current indicator. 16. The system according to claim 12, wherein the instructions further cause the processor to: enable the transmission of the current indicator irrespective of the comparison between the current indicator and the at least one previous indicator according to a comparison of the current indicator with an alarm threshold. 17. 
The computer program product according to claim 11, wherein the computer readable program to disable the transmission of the current indicator comprises a computer readable program that causes the computing system to: disable the transmission of the current indicator according to a comparison of the current indicator with a last indicator representing a last value of the performance indicator transmitted remotely to the resource via the communication network. 18. The computer program product according to claim 11, wherein the computer readable program further causes the computing system to: receive an indication of each transmitted current indicator in a monitoring interval comprising a plurality of monitoring instants each one associated with the collection of a corresponding current indicator, and display a monitoring report, for each monitoring instant the monitoring report comprising a representation of the corresponding transmitted current indicator when available or a representation of a variation range for a last available transmitted current indicator otherwise. 19. The computer program product according to claim 18, wherein the computer readable program to display the monitoring report comprises a computer readable program that causes the computing system to: display a representation of a bar for each monitoring instant at which the corresponding transmitted current indicator is available, the bar having a width corresponding to the variation range of the transmitted current indicator. 20. The computer program product according to claim 11, wherein the computer readable program further causes the computing system to: enable the transmission of the current indicator irrespective of the comparison between the current indicator and the at least one previous indicator according to a comparison of the current indicator with an alarm threshold.
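The claimed send-on-change loop can be sketched in Python. All names here (`Monitor`, `sent_log`, the sample values) are illustrative assumptions, not taken from the patent; the sketch only shows the core rule of claims 1-4: transmit a collected indicator when it leaves a variation range centered on the last transmitted value, otherwise disable the transmission.

```python
class Monitor:
    """Suppresses remote transmission of indicators that stay within a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold      # half-width of the variation range
        self.last_sent = None           # last indicator transmitted remotely
        self.sent_log = []              # stands in for the remote resource

    def transmit(self, value):
        # In the claimed system this would go to a network resource; here we log it.
        self.sent_log.append(value)
        self.last_sent = value

    def collect(self, current):
        # Transmit only if the current indicator is outside the variation
        # range centered on the last transmitted indicator (claims 2-4).
        if self.last_sent is None or abs(current - self.last_sent) > self.threshold:
            self.transmit(current)
            return True
        return False    # transmission disabled: value within threshold

m = Monitor(threshold=5.0)
for sample in [50.0, 52.0, 57.0, 56.0, 40.0]:
    m.collect(sample)
print(m.sent_log)   # only the samples that left the variation range
```

A real monitoring agent would also honor the alarm-threshold override of claim 7 (always transmit past an alarm level); that branch is omitted here for brevity.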
A mechanism is provided for monitoring performance of a computing machine. A current indicator is collected representing a current value of a performance indicator of the computing machine. The current indicator is compared with at least one previous indicator representing a previous value of the performance indicator of the computing machine. Responsive to the current indicator being outside a threshold of the at least one previous indicator, the current indicator is transmitted remotely to a resource via a communication network. Responsive to the current indicator being within the threshold of the at least one previous indicator, the transmission of the current indicator is disabled.
2,400
8,154
8,154
14,934,461
2,421
An apparatus for configuring one or more communication-related parameters in a cable modem system is provided. The apparatus comprises a processor, a memory coupled to the processor, a cable television signal input interface for receiving one or more cable television signals, a spectrum analyser for analysing a spectrum of signals, and a cable modem termination system unit for providing access to a communication network.
1. An apparatus for configuring one or more communication-related parameters in a cable modem system, the apparatus comprising: a processor; a memory coupled to the processor, said memory storing a first computer-readable program code that, when executed on the processor, is configured to configure the communication-related parameters; and a second computer-readable program code that, when executed on the processor, is configured to send a request to at least one cable modem to measure a signal strength of a signal received by the at least one cable modem, to receive one or more signal strength measurements from the at least one cable modem, and to adjust the one or more communication-related parameters to be used for communication with the at least one cable modem, based at least partially on the one or more signal strength measurements; a cable television signal input interface for receiving one or more cable television signals; a spectrum analyser for analysing a spectrum of signals; and a cable modem termination system unit for providing access to a communication network. 2. An apparatus according to claim 1, wherein the spectrum analyser is configured to analyse the spectrum of signals to determine one or more free frequency bands that are unallocated in the spectrum of signals. 3. An apparatus according to claim 2, wherein a third computer-readable program code, when executed on the processor, is configured to allocate at least one of the one or more free frequency bands for a cable modem termination system signal between the apparatus and the at least one cable modem. 4. An apparatus according to claim 1, wherein the one or more communication-related parameters comprise at least one of: gain, slope, amplification, signal-to-noise ratio. 5. An apparatus according to claim 1, wherein the one or more signal strength measurements pertain to at least two different frequencies. 6. 
An apparatus according to claim 1, further comprising a programmable controllable switch for switching ON/OFF the one or more cable television signals. 7. An apparatus according to claim 1, wherein the apparatus conforms to the Data Over Cable Service Interface Specification. 8. An apparatus according to claim 1, wherein the apparatus further comprises an attenuator. 9. A method for configuring one or more communication-related parameters in a cable modem system, via an apparatus, the method comprising: analysing a spectrum of signals from a cable television signal input interface to determine one or more free frequency bands that are unallocated in the spectrum; sending a request to at least one cable modem to measure a signal strength of a signal received by the at least one cable modem; receiving one or more signal strength measurements from the at least one cable modem; adjusting the one or more communication-related parameters to be used for communication with the at least one cable modem, based at least partially on the one or more signal strength measurements; and allocating at least one of the one or more free frequency bands for transmission of at least one cable modem termination system signal between the apparatus and at least one cable modem in the cable modem system. 10. A method according to claim 9, wherein the one or more communication-related parameters comprise at least one of gain, slope, amplification, signal-to-noise ratio. 11. A method according to claim 9, wherein the method further comprises making one or more signal strength measurements, wherein the one or more signal strength measurements pertain to at least two different frequencies. 12. A method according to claim 9, wherein the one or more communication-related parameters are adjusted automatically. 13. A method according to claim 12, further comprising switching OFF one or more cable television signals when making one or more signal strength measurements and/or analysing the spectrum.
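As a rough illustration of the adjustment step in the method claim, the sketch below nudges a single hypothetical gain parameter toward an assumed target receive level reported by the modem. The target level, the function name, and the simple proportional rule are all invented for illustration; the patent does not specify a particular adjustment algorithm.

```python
TARGET_DBMV = 0.0   # assumed target receive level at the modem, in dBmV

def adjust_gain(current_gain, measurements):
    """Return a new gain based on the mean of the reported signal strengths.

    `measurements` are signal-strength readings from the modem, possibly at
    two different frequencies as in the dependent claims.
    """
    mean_level = sum(measurements) / len(measurements)
    # Raise gain when the modem reports a weak signal, lower it when too hot.
    return current_gain + (TARGET_DBMV - mean_level)

# Two measurements at different frequencies, both 2 dB low on average:
new_gain = adjust_gain(current_gain=10.0, measurements=[-3.0, -1.0])
print(new_gain)   # 12.0: gain raised by the 2 dB average shortfall
```

A production CMTS would apply such a correction per channel and also tune slope against the frequency-dependent tilt visible in the two-frequency measurements; this sketch collapses all of that into one scalar update.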
2,400
8,155
8,155
13,745,580
2,481
Described herein are methods and systems associated with viewing condition adaptation of multimedia content. A method for receiving multimedia content with a device from a network may include determining a viewing parameter, transmitting a request for the multimedia content to the network, whereby the request may be based on the viewing parameter, and receiving the multimedia content from the network, whereby the multimedia content may be processed at a rate according to the viewing parameter. The viewing parameter may include at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. The method may further include receiving a multimedia presentation description (MPD) file from the network. The MPD file may include information relating to the rate of the multimedia content and information relating to the rate may include a descriptor relating to the viewing parameter, whereby the descriptor may be required or optional.
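The request-side logic in the abstract can be sketched as a small client that picks a segment rate from available representations using one viewing parameter. The rate table, distance thresholds, and function names below are hypothetical, and viewing distance stands in for the broader class of user, device, and content viewing parameters the claims enumerate.

```python
RATES_KBPS = [500, 1500, 4000]   # hypothetical representations listed in an MPD

def pick_rate(distance_m):
    # Farther viewers cannot perceive the added detail of a higher rate,
    # so the client requests a lower one (viewing-condition adaptation).
    if distance_m > 3.0:
        return RATES_KBPS[0]
    if distance_m > 1.0:
        return RATES_KBPS[1]
    return RATES_KBPS[2]

def request_segment(segment_index, distance_m):
    # The request is "based on the viewing parameter" as in the method claim;
    # the network then serves a segment processed at the matching rate.
    return {"segment": segment_index, "rate_kbps": pick_rate(distance_m)}

print(request_segment(0, distance_m=0.5))   # close viewer, highest rate
print(request_segment(1, distance_m=4.0))   # far viewer, lowest rate
```

In a real DASH client the viewing parameter would feed into the same representation-selection loop that already weighs measured bandwidth, and the distance itself would come from a sensor such as a front-facing camera or proximity sensor, as the dependent claims suggest.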
1. A method for receiving multimedia content with a device from a network, the method comprising: receiving a first segment of the multimedia content from the network, the first segment processed at a first rate; determining a viewing parameter; transmitting a request for a second segment of the multimedia content to the network, the request based on the viewing parameter; and receiving the second segment of the multimedia content from the network, the second segment processed at a second rate according to the viewing parameter. 2. The method of claim 1, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 3. The method of claim 2, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 4. The method of claim 2, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, contrast of a screen of the device, brightness of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 5. The method of claim 2, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension of the multimedia content, or range of depth of three-dimensional content of the multimedia content. 6. 
The method of claim 1, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 7. A device configured to receive multimedia content from a network, the device comprising: a processor configured to: receive a first segment of the multimedia content from the network, the first segment processed at a first rate; determine a viewing parameter; transmit a request for a second segment of the multimedia content to the network, the request based on the viewing parameter; and receive the second segment of the multimedia content from the network, the second segment processed at a second rate according to the viewing parameter. 8. The device of claim 7, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 9. The device of claim 8, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 10. The device of claim 8, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 11. 
The device of claim 8, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension of the multimedia content, or range of depth of three-dimensional content of the multimedia content. 12. The device of claim 7, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 13. A method for receiving multimedia content with a device from a network, the method comprising: determining a viewing parameter; transmitting a request for the multimedia content to the network, the request based on the viewing parameter; and receiving the multimedia content from the network, the multimedia content processed at a rate according to the viewing parameter. 14. The method of claim 13, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 15. The method of claim 14, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 16. 
The method of claim 14, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 17. The method of claim 14, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension of the multimedia content, or range of depth of three-dimensional content of the multimedia content. 18. The method of claim 13, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 19. The method of claim 13, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 20. The method of claim 13, wherein the request transmitted by the device determines the rate of multimedia content received by the device. 21. The method of claim 13, wherein the network determines the rate of the multimedia content received by the device according to the request. 22. The method of claim 21, wherein the request is a multimedia presentation description (MPD) file that comprises the viewing parameter. 23. 
The method of claim 13, further comprising: receiving a multimedia presentation description (MPD) file from the network, the MPD file comprising information relating to the rate of the multimedia content. 24. The method of claim 23, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 25. The method of claim 24, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate. 26. The method of claim 13, wherein the multimedia content comprises a video file. 27. The method of claim 13, wherein the method is performed via a DASH client of the device. 28. A device configured to receive multimedia content from a network, the device comprising: a processor configured to: determine a viewing parameter; transmit a request for the multimedia content to the network, the request based on the viewing parameter; and receive the multimedia content from the network, the multimedia content processed at a rate according to the viewing parameter. 29. The device of claim 28, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 30. The device of claim 29, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 31. 
The device of claim 29, wherein the device viewing parameter comprise at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 32. The device of claim 29, wherein the content viewing parameter comprise at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of third-dimension of multimedia content, or range of depth of three-dimensional content of the multimedia content. 33. The device of claim 28, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 34. The device of claim 28, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 35. The device of claim 28, wherein the request transmitted by the device determines the rate of multimedia content received by the device. 36. The device of claim 28, wherein the network determines the rate of the multimedia content received by the device according to the request. 37. The device of claim 36, wherein the request is a multimedia presentation description (MPD) file that comprises the viewing parameter. 38. 
The device of claim 28, wherein the processor is further configured to: receive a multimedia presentation description (MPD) file from the network, the MPD file comprising information relating to the rate of the multimedia content. 39. The device of claim 38, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 40. The device of claim 39, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate. 41. The device of claim 28, wherein the multimedia content comprises a video file. 42. The device of claim 28, wherein the processor is part of a DASH client residing on the device. 43. A method for transmitting multimedia content, the method comprising: receiving a request from the user, the request based on a viewing parameter of the user; determining a rate for multimedia content based on the viewing parameter; and transmitting the multimedia content to the user, the multimedia content processed at the rate. 44. The method of claim 43, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 45. The method of claim 44, wherein the user viewing parameter comprise at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 46. 
The method of claim 44, wherein the device viewing parameter comprise at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 47. The method of claim 44, wherein the content viewing parameter comprise at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of third-dimension of multimedia content, or range of depth of three-dimensional content of the multimedia content. 48. The method of claim 43, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 49. The method of claim 43, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 50. The method of claim 43, further comprising: transmitting a multimedia presentation description (MPD) file to the user, the MPD file comprising information relating to the rate of the multimedia content. 51. The method of claim 50, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 52. 
The method of claim 51, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate. 53. A system configured to transmit multimedia content to a user, the system comprising: a processor configured to: receive a request from the user, the request based on a viewing parameter of the user; determine a rate for multimedia content based on the viewing parameter; and transmit the multimedia content to the user, the multimedia content processed at the rate. 54. The system of claim 53, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 55. The system of claim 54, wherein the user viewing parameter comprise at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 56. The system of claim 54, wherein the device viewing parameter comprise at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 57. 
The system of claim 54, wherein the content viewing parameter comprise at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of third-dimension of multimedia content, or range of depth of three-dimensional content of the multimedia content. 58. The system of claim 53, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 59. The system of claim 53, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 60. The system of claim 53, further comprising: transmitting a multimedia presentation description (MPD) file to the user, the MPD file comprising information relating to the rate of the multimedia content. 61. The system of claim 60, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 62. The system of claim 61, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate.
Described herein are methods and systems associated with viewing condition adaptation of multimedia content. A method for receiving multimedia content with a device from a network may include determining a viewing parameter, transmitting a request for the multimedia content to the network, whereby the request may be based on the viewing parameter, and receiving the multimedia content from the network, whereby the multimedia content may be processed at a rate according to the viewing parameter. The viewing parameter may include at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. The method may further include receiving a multimedia presentation description (MPD) file from the network. The MPD file may include information relating to the rate of the multimedia content, and the information relating to the rate may include a descriptor relating to the viewing parameter, whereby the descriptor may be required or optional. 1. A method for receiving multimedia content with a device from a network, the method comprising: receiving a first segment of the multimedia content from the network, the first segment processed at a first rate; determining a viewing parameter; transmitting a request for a second segment of the multimedia content to the network, the request based on the viewing parameter; and receiving the second segment of the multimedia content from the network, the second segment processed at a second rate according to the viewing parameter. 2. The method of claim 1, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 3. 
The method of claim 2, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 4. The method of claim 2, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, contrast of a screen of the device, brightness of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 5. The method of claim 2, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension in the multimedia content, or range of depth of three-dimensional content of the multimedia content. 6. The method of claim 1, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 7. 
A device configured to receive multimedia content from a network, the device comprising: a processor configured to: receive a first segment of the multimedia content from the network, the first segment processed at a first rate; determine a viewing parameter; transmit a request for a second segment of the multimedia content to the network, the request based on the viewing parameter; and receive the second segment of the multimedia content from the network, the second segment processed at a second rate according to the viewing parameter. 8. The device of claim 7, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 9. The device of claim 8, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 10. The device of claim 8, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 11. The device of claim 8, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension in the multimedia content, or range of depth of three-dimensional content of the multimedia content. 12. 
The device of claim 7, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 13. A method for receiving multimedia content with a device from a network, the method comprising: determining a viewing parameter; transmitting a request for the multimedia content to the network, the request based on the viewing parameter; and receiving the multimedia content from the network, the multimedia content processed at a rate according to the viewing parameter. 14. The method of claim 13, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 15. The method of claim 14, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 16. The method of claim 14, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 17. 
The method of claim 14, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension in the multimedia content, or range of depth of three-dimensional content of the multimedia content. 18. The method of claim 13, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 19. The method of claim 13, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 20. The method of claim 13, wherein the request transmitted by the device determines the rate of multimedia content received by the device. 21. The method of claim 13, wherein the network determines the rate of the multimedia content received by the device according to the request. 22. The method of claim 21, wherein the request is a multimedia presentation description (MPD) file that comprises the viewing parameter. 23. The method of claim 13, further comprising: receiving a multimedia presentation description (MPD) file from the network, the MPD file comprising information relating to the rate of the multimedia content. 24. 
The method of claim 23, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 25. The method of claim 24, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate. 26. The method of claim 13, wherein the multimedia content comprises a video file. 27. The method of claim 13, wherein the method is performed via a DASH client of the device. 28. A device configured to receive multimedia content from a network, the device comprising: a processor configured to: determine a viewing parameter; transmit a request for the multimedia content to the network, the request based on the viewing parameter; and receive the multimedia content from the network, the multimedia content processed at a rate according to the viewing parameter. 29. The device of claim 28, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 30. The device of claim 29, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 31. 
The device of claim 29, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 32. The device of claim 29, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension in the multimedia content, or range of depth of three-dimensional content of the multimedia content. 33. The device of claim 28, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 34. The device of claim 28, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 35. The device of claim 28, wherein the request transmitted by the device determines the rate of multimedia content received by the device. 36. The device of claim 28, wherein the network determines the rate of the multimedia content received by the device according to the request. 37. The device of claim 36, wherein the request is a multimedia presentation description (MPD) file that comprises the viewing parameter. 38. 
The device of claim 28, wherein the processor is further configured to: receive a multimedia presentation description (MPD) file from the network, the MPD file comprising information relating to the rate of the multimedia content. 39. The device of claim 38, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 40. The device of claim 39, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate. 41. The device of claim 28, wherein the multimedia content comprises a video file. 42. The device of claim 28, wherein the processor is part of a DASH client residing on the device. 43. A method for transmitting multimedia content, the method comprising: receiving a request from a user, the request based on a viewing parameter of the user; determining a rate for multimedia content based on the viewing parameter; and transmitting the multimedia content to the user, the multimedia content processed at the rate. 44. The method of claim 43, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 45. The method of claim 44, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 46. 
The method of claim 44, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 47. The method of claim 44, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension in the multimedia content, or range of depth of three-dimensional content of the multimedia content. 48. The method of claim 43, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 49. The method of claim 43, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 50. The method of claim 43, further comprising: transmitting a multimedia presentation description (MPD) file to the user, the MPD file comprising information relating to the rate of the multimedia content. 51. The method of claim 50, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 52. 
The method of claim 51, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate. 53. A system configured to transmit multimedia content to a user, the system comprising: a processor configured to: receive a request from the user, the request based on a viewing parameter of the user; determine a rate for multimedia content based on the viewing parameter; and transmit the multimedia content to the user, the multimedia content processed at the rate. 54. The system of claim 53, wherein the viewing parameter comprises at least one of: a user viewing parameter, a device viewing parameter, or a content viewing parameter. 55. The system of claim 54, wherein the user viewing parameter comprises at least one of: a user's presence, a user's location with respect to a screen of the device, a user's orientation with respect to a screen of the device, a user's viewing angle with respect to a screen of the device, a user's distance from a screen of the device, a user's visual acuity, an ambient lighting condition, a number of users viewing a screen of the device, or a user's point of attention. 56. The system of claim 54, wherein the device viewing parameter comprises at least one of: mobility of the device, size of a screen of the device, resolution of a screen of the device, pixel density of a screen of the device, size of a window displaying the multimedia content on the device, or a location of a window displaying the multimedia content on the device. 57. 
The system of claim 54, wherein the content viewing parameter comprises at least one of: contrast of the multimedia content, color gamut of the multimedia content, presence of a third dimension in the multimedia content, or range of depth of three-dimensional content of the multimedia content. 58. The system of claim 53, wherein the rate is a function of at least one of: an encoding rate of the multimedia content, a spatial resolution of the multimedia content, a temporal resolution of the multimedia content, quantization parameters, rate control parameters, target bit rate of the multimedia content, spatial filtering of the multimedia content, or temporal filtering of the multimedia content. 59. The system of claim 53, wherein the viewing parameter is determined using at least one of: a size of a screen of the device, a resolution of a screen of the device, an angle of a screen of the device, a pixel density of a screen of the device, a contrast ratio of a screen of the device, a user proximity sensor, a front facing camera, a back facing camera, a light sensor, an infra-red imaging device, an ultra-sonic sensor, a microphone, an accelerometer, a compass, or a gyroscope sensor. 60. The system of claim 53, further comprising: transmitting a multimedia presentation description (MPD) file to the user, the MPD file comprising information relating to the rate of the multimedia content. 61. The system of claim 60, wherein the information relating to the rate comprises a descriptor relating to the viewing parameter, and wherein the MPD file indicates whether the descriptor is required or optional. 62. The system of claim 61, wherein a required descriptor indicates that the device must meet the requirements of the descriptor to receive the multimedia content processed at the rate, and wherein an optional descriptor indicates that the device may meet the requirements of the descriptor to receive the multimedia content processed at the rate.
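Claims 1 and 7 of this record describe a client that receives a first segment at one rate, determines a viewing parameter, and then requests a second segment processed at a rate matched to that parameter. The sketch below illustrates that adaptation step in Python; the rate ladder, the distance thresholds, and the request dictionary shape are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of viewing-condition rate adaptation: choose the next
# segment's bit rate from the available representations according to a
# viewing parameter (here, the user's viewing distance).

def select_rate(available_kbps, viewing_distance_m, screen_height_m=0.07):
    """Pick a bit rate: the farther the viewer sits, the less spatial
    detail is perceptible, so a lower rate suffices."""
    # Ratio of viewing distance to screen height; larger => less detail needed.
    ratio = viewing_distance_m / screen_height_m
    if ratio < 5:        # close viewing: highest quality
        return max(available_kbps)
    elif ratio < 12:     # typical handheld distance: middle of the ladder
        ladder = sorted(available_kbps)
        return ladder[len(ladder) // 2]
    else:                # far away: lowest rate is visually sufficient
        return min(available_kbps)

def build_segment_request(content_id, segment_index, viewing_distance_m):
    """Form a request for the next segment, carrying the chosen rate."""
    rates = [250, 500, 1000, 2000, 4000]  # hypothetical MPD representations
    rate = select_rate(rates, viewing_distance_m)
    return {"content": content_id, "segment": segment_index, "kbps": rate}

# First segment fetched while the user is close, second after moving away:
first = build_segment_request("movie", 0, viewing_distance_m=0.3)
second = build_segment_request("movie", 1, viewing_distance_m=1.2)
```

Under this rule the first request asks for the top rate and the second drops to the bottom one, mirroring the claimed first-rate/second-rate sequence; a real DASH client would instead pick among the representations advertised in the MPD.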
2,400
8,156
8,156
15,396,546
2,485
A system and method for iris authentication in an electronic device employ a presence detection sensor to detect when an object such as a user is close to the device. Thereafter, an array of gesture recognition IR (infrared) LEDs (light emitting diodes) are activated, and their reflections are used to determine the distance and location of the user with respect to the electronic device. Each gesture recognition IR LED is then driven so that the combined IR illumination emitted by the gesture recognition IR LEDs is sufficient to gather a user iris image suitable for iris authentication. The IR LEDs of the gesture recognition system may be driven unevenly based on the user's position and location. In an embodiment, the gesture recognition IR LEDs are employed to supplement illumination from a dedicated iris authentication IR LED.
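The abstract above describes driving the gesture-recognition IR LEDs unevenly, based on the user's distance and location, so that their combined illumination reaches an iris-imaging level. The following Python sketch illustrates one way such per-LED drive levels could be computed; the geometry, the inverse-square weighting, and the power figures are illustrative assumptions, not the patent's method.

```python
import math

# Illustrative per-LED drive computation: LEDs closer to the detected user
# are driven at higher average power, and every LED is clamped to its
# maximum rated power.

def drive_levels(led_positions, user_xy, user_distance,
                 total_target=1.0, max_power=1.0):
    """Return an average drive power in [0, max_power] for each LED.
    led_positions: (x, y) of each LED on the device face, in meters.
    user_xy: the user's position projected onto the device face.
    user_distance: range from the device face to the user, in meters."""
    # Distance of each LED to the user: in-plane offset combined with range.
    dists = [math.hypot(math.hypot(x - user_xy[0], y - user_xy[1]),
                        user_distance)
             for (x, y) in led_positions]
    # Weight each LED inversely with squared distance (irradiance ~ 1/d^2),
    # so the closest LED receives the largest share of the drive budget.
    weights = [1.0 / (d * d) for d in dists]
    total = sum(weights)
    # Scale so the weights jointly deliver the combined illumination target,
    # clamping each LED at its maximum rated power.
    return [min(max_power, total_target * w / total * len(weights))
            for w in weights]

# Four LEDs at the corners of a 6 cm x 12 cm phone face, user centered
# vertically 25 cm away but offset toward the left edge:
leds = [(0.0, 0.0), (0.06, 0.0), (0.0, 0.12), (0.06, 0.12)]
levels = drive_levels(leds, user_xy=(0.0, 0.06), user_distance=0.25)
```

With this input the two left-edge LEDs (nearest the user) saturate at the power clamp while the right-edge LEDs run slightly lower, matching the closest-LED-driven-hardest behavior the abstract describes.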
1. A method of iris authentication in an electronic device having an array of gesture recognition IR LEDs, the method comprising: detecting a presence of a user; detecting a distance and location of the user from the electronic device using the array of gesture recognition IR LEDs; and driving each gesture recognition IR LED such that a combined IR illumination emitted by the array of gesture recognition IR LEDs is sufficient to gather an iris image suitable for iris authentication. 2. The method in accordance with claim 1, wherein driving each gesture recognition IR LED further comprises driving a gesture recognition IR LED closest to the user at a higher average power than a gesture recognition IR LED furthest from the user. 3. The method in accordance with claim 1, wherein driving each gesture recognition IR LED comprises driving an IR LED closest to the user with a first maximum power during a cycle and driving an IR LED furthest from the user with a second maximum power during a cycle, wherein the first maximum power is greater than the second maximum power. 4. The method in accordance with claim 1, wherein driving each gesture recognition IR LED comprises driving two or more of the array of gesture recognition IR LEDs unevenly with respect to duty cycle. 5. The method in accordance with claim 4, wherein driving two or more of the array of gesture recognition IR LEDs unevenly with respect to duty cycle comprises driving an IR LED closest to the user with a first duty cycle and driving an IR LED furthest from the user with a second duty cycle, wherein the first duty cycle is higher than the second duty cycle. 6. The method in accordance with claim 1, wherein detecting a distance and location of the user from the electronic device using the array of gesture recognition IR LEDs comprises employing a closed loop 3D IR gesture recognition process. 7. 
The method in accordance with claim 1, wherein driving each gesture recognition IR LED such that a combined IR illumination emitted by the array of gesture recognition IR LEDs is sufficient to gather an iris image suitable for iris authentication comprises driving each gesture recognition IR LED with an average power based on the detected distance and position of the user. 8. The method in accordance with claim 7, wherein driving each gesture recognition IR LED with an average power based on the detected distance and position of the user further comprises segregating the array of gesture recognition IR LEDs into two or more IR LED groups and driving each IR LED of each group in the same manner as any other IR LED in the same group. 9. The method in accordance with claim 1, wherein the electronic device also includes a dedicated iris IR LED, and wherein driving each gesture recognition IR LED further comprises driving the dedicated iris IR LED. 10. The method in accordance with claim 9, wherein driving the dedicated iris IR LED comprises driving the dedicated iris IR LED at an average power level corresponding to an average driving power at which the gesture recognition IR LEDs are driven. 11. A method of iris authentication in an electronic device comprising: detecting a presence of an object; detecting a distance and location of the object relative to the electronic device using an array of gesture recognition IR LEDs associated with the electronic device; and driving each gesture recognition IR LED such that a combined IR illumination emitted by the array of gesture recognition IR LEDs is sufficient to gather an iris image suitable for iris authentication. 12. The method in accordance with claim 11, wherein driving each gesture recognition IR LED further comprises driving a gesture recognition IR LED closer to the object at a higher average power than a gesture recognition IR LED further from the object. 13. 
The method in accordance with claim 11, wherein driving each gesture recognition IR LED comprises driving an IR LED closer to the object with a first maximum power during a cycle and driving an IR LED further from the object with a second maximum power during a cycle, wherein the first maximum power is greater than the second maximum power. 14. The method in accordance with claim 11, wherein driving each gesture recognition IR LED comprises driving two or more of the array of gesture recognition IR LEDs unevenly with respect to duty cycle. 15. The method in accordance with claim 14, wherein driving two or more of the array of gesture recognition IR LEDs unevenly with respect to duty cycle comprises driving an IR LED closer to the object with a first duty cycle and driving an IR LED further from the object with a second duty cycle, wherein the first duty cycle is higher than the second duty cycle. 16. The method in accordance with claim 11, wherein detecting a distance and location of the object relative to the electronic device using the array of gesture recognition IR LEDs comprises employing a closed loop 3D IR gesture recognition process. 17. The method in accordance with claim 11, wherein driving each gesture recognition IR LED such that a combined IR illumination emitted by the array of gesture recognition IR LEDs is sufficient to gather an iris image suitable for iris authentication comprises driving each gesture recognition IR LED with an average power based on the detected distance and position of the object. 18. The method in accordance with claim 17, wherein driving each gesture recognition IR LED with an average power based on the detected distance and position of the object further comprises segregating the array of gesture recognition IR LEDs into two or more IR LED groups and driving each IR LED of each group in the same manner as any other IR LED in the same group. 19. 
The method in accordance with claim 11, wherein the electronic device also includes a dedicated iris IR LED, and wherein driving each gesture recognition IR LED further comprises driving the dedicated iris IR LED. 20. The method in accordance with claim 19, wherein driving the dedicated iris IR LED comprises driving the dedicated iris IR LED at an average power level corresponding to an average driving power at which the gesture recognition IR LEDs are driven.
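The uneven per-LED drive described in the claims (closest LED gets the highest duty cycle and peak power, furthest gets the lowest) can be sketched as follows. This is a minimal illustration, not the patented implementation: the names `Led`, `drive_settings`, `MAX_POWER_MW`, and `MIN_DUTY`, and the linear falloff with distance, are all assumptions.

```python
# Hypothetical sketch of driving gesture-recognition IR LEDs unevenly by
# distance to the user (cf. claims 2-5): nearer LEDs get higher duty cycle
# and peak power so the combined illumination suffices for iris capture.
from dataclasses import dataclass
import math

MAX_POWER_MW = 100.0   # assumed per-LED rated peak power
MIN_DUTY = 0.10        # assumed floor so every LED contributes some light

@dataclass
class Led:
    x: float  # LED position on the device face (arbitrary units)
    y: float

def distance(led: Led, user_xy: tuple) -> float:
    """Planar distance from an LED to the detected user position."""
    return math.hypot(led.x - user_xy[0], led.y - user_xy[1])

def drive_settings(leds: list, user_xy: tuple) -> list:
    """Return an uneven drive plan: the LED closest to the user is driven
    with the highest duty cycle and peak power, the furthest with the lowest."""
    dists = [distance(led, user_xy) for led in leds]
    d_min, d_max = min(dists), max(dists)
    span = (d_max - d_min) or 1.0
    plan = []
    for d in dists:
        # Linear falloff with distance; closest -> 1.0, furthest -> MIN_DUTY.
        scale = 1.0 - (1.0 - MIN_DUTY) * (d - d_min) / span
        plan.append({"duty_cycle": scale, "peak_power_mw": MAX_POWER_MW * scale})
    return plan
```

A real controller would translate `duty_cycle` and `peak_power_mw` into PWM register writes; the scaling law itself is a placeholder for whatever calibration the device uses.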
2,400
8,157
8,157
14,597,239
2,485
A system and method for iris authentication in an electronic device employ a presence detection sensor to detect when a user is close to the device. Thereafter, an array of gesture recognition IR (infrared) LEDs (light emitting diodes) on the device is activated, and the reflections thereof are used to determine the distance and location of the user from the electronic device. Given the known distance and location of the user, each gesture recognition IR LED is then driven such that the combined IR illumination emitted by the array of gesture recognition IR LEDs is sufficient to gather a user iris image suitable for iris authentication. The IR LEDs of the gesture recognition system may be driven unevenly based on the user's distance and location. In an embodiment, the gesture recognition IR LEDs are employed to supplement illumination from a dedicated iris authentication IR LED.
1. A method of IR iris authentication in an electronic device having a gesture recognition system comprising a plurality of IR LEDs and an IR receiver, the method including: detecting a presence of a user; activating the plurality of IR LEDs in a repeating sequence to provide IR illumination, each IR LED being activated singly with a duty cycle less than 100 percent and at a peak power that is less than the rated peak power of the IR LED; gathering reflected illumination at the IR receiver while activating the plurality of IR LEDs; determining a distance and position of the user relative to the IR receiver based on the gathered reflected illumination; and activating the plurality of IR LEDs again while gathering an iris image, with the average power of each IR LED being based on the determined distance and position of the user. 2. The method in accordance with claim 1, wherein activating the plurality of IR LEDs again while gathering the iris image comprises driving two or more of the plurality of IR LEDs unevenly with respect to maximum power during a cycle. 3. The method in accordance with claim 2, wherein driving two or more of the plurality of IR LEDs unevenly with respect to maximum power during a cycle comprises driving an IR LED closest to the user with a first maximum power during a cycle and driving an IR LED furthest from the user with a second maximum power during a cycle, wherein the first maximum power is greater than the second maximum power. 4. The method in accordance with claim 1, wherein activating the plurality of IR LEDs again while gathering the iris image comprises driving two or more of the plurality of IR LEDs unevenly with respect to duty cycle. 5. 
The method in accordance with claim 4, wherein driving two or more of the plurality of IR LEDs unevenly with respect to duty cycle comprises driving an IR LED closest to the user with a first duty cycle and driving an IR LED furthest from the user with a second duty cycle, wherein the first duty cycle is higher than the second duty cycle. 6. The method in accordance with claim 1, wherein the gesture recognition system of the electronic device is configured to execute a closed loop 3D IR gesture recognition process, and wherein determining a distance and position of the user relative to the IR receiver comprises employing the closed loop 3D IR gesture recognition process of the gesture recognition system. 7. The method in accordance with claim 1, wherein activating the plurality of IR LEDs with the average power of each IR LED being based on the determined distance and position of the user comprises segregating the plurality of IR LEDs into two or more IR LED groups and driving each IR LED of each group in the same manner as any other IR LED in the same group. 8. The method in accordance with claim 1, wherein the electronic device also includes a dedicated iris IR LED, and wherein activating the plurality of IR LEDs again while gathering an iris image further comprises driving the iris IR LED while gathering the iris image. 9. The method in accordance with claim 8, wherein driving the iris IR LED while gathering the iris image includes driving the iris IR LED at an average power level that is based on the average driving power levels at which the plurality of IR LEDs are driven. 10. 
An electronic device comprising: a gesture recognition system including a plurality of IR LEDs and an IR receiver; and a controller configured to employ the gesture recognition system for iris authentication by detecting a presence of a user, driving the plurality of IR LEDs in a repeating sequence with each IR LED being driven at a respective time, with a respective maximum power, and with a respective duty cycle, gathering reflected IR illumination at the IR receiver, determining a distance and position of the user relative to the IR receiver based on the gathered reflected illumination, and driving the plurality of IR LEDs again while gathering an iris image, with the average driving power of each IR LED being based on the determined distance and position of the user. 11. The electronic device in accordance with claim 10, wherein the controller is further configured to drive two or more of the plurality of IR LEDs unevenly with respect to maximum power while gathering the iris image. 12. The electronic device in accordance with claim 11, wherein the controller is further configured to drive the two or more of the plurality of IR LEDs unevenly by driving an IR LED closest to the user with a first maximum power and driving an IR LED furthest from the user with a second maximum power, wherein the first maximum power is greater than the second maximum power. 13. The electronic device in accordance with claim 10, wherein the controller is configured to drive the two or more IR LEDs unevenly with respect to duty cycle. 14. The electronic device in accordance with claim 13, wherein the controller is further configured to drive the two or more IR LEDs unevenly with respect to duty cycle by driving an IR LED closest to the user with a first duty cycle and driving an IR LED furthest from the user with a second duty cycle, wherein the first duty cycle is higher than the second duty cycle. 15. 
The electronic device in accordance with claim 10, wherein the controller is further configured to determine the distance and position of the user relative to the IR receiver by using a closed loop 3D IR gesture recognition process. 16. The electronic device in accordance with claim 10, wherein the controller is further configured to drive the plurality of IR LEDs with the average power of each IR LED being based on the determined distance and position of the user by segregating the plurality of IR LEDs into two or more IR LED groups and driving each IR LED of each group in the same manner as any other IR LED in the same group. 17. The electronic device in accordance with claim 10, further comprising a dedicated iris IR LED, and wherein the controller is further configured to drive the iris IR LED while gathering the iris image. 18. The electronic device in accordance with claim 17, wherein the controller is further configured to drive the iris IR LED at an average power level that is based on the average driving power levels at which the plurality of IR LEDs are driven. 19. A method of iris authentication in an electronic device having an array of gesture recognition IR LEDs, the method comprising: detecting a presence of a user; detecting a distance and location of the user from the electronic device using the array of gesture recognition IR LEDs; and driving each gesture recognition IR LED such that a combined IR illumination emitted by the array of gesture recognition IR LEDs is sufficient to gather an iris image suitable for iris authentication. 20. The method in accordance with claim 19, wherein driving each gesture recognition IR LED further comprises driving a gesture recognition IR LED closest to the user at a higher average power than a gesture recognition IR LED furthest from the user.
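The grouping variant above (segregating the LED array into two or more groups and driving every LED in a group identically, with nearer groups at higher average power) can be sketched like this. The function names, the rank-based grouping, and the power formula are illustrative assumptions, not the patent's method.

```python
# Illustrative sketch of the LED-grouping variant (cf. claims 7 and 16):
# the array is split into distance-ranked groups, and each LED in a group
# is driven in the same manner as every other LED in that group.

def group_leds(distances: list, n_groups: int = 2) -> list:
    """Assign each LED a group index 0..n_groups-1 by distance rank;
    group 0 holds the LEDs nearest the user."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    groups = [0] * len(distances)
    size = -(-len(distances) // n_groups)  # ceiling division
    for rank, idx in enumerate(order):
        groups[idx] = min(rank // size, n_groups - 1)
    return groups

def group_drive_power(group: int, base_mw: float = 80.0) -> float:
    """Nearer groups are driven at higher average power (assumed falloff)."""
    return base_mw / (group + 1)
```

For example, four LEDs at distances 1.0, 5.0, 2.0, 9.0 split into a near group (the two closest) and a far group, with the near group driven at twice the far group's average power.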
2,400
8,158
8,158
15,163,280
2,454
A method for triggering execution of a workflow over a network comprises receiving, at a host from a user device, an instruction to execute a workflow comprising a first task to be executed on a first remote device; receiving network settings from the user device to enable communication and execution of the first task on the first remote device; applying the network settings to at least one of the host or the first remote device; and executing the first task on the first remote device using the network settings upon receiving the instruction from the user device. The workflow comprises multiple tasks for execution on multiple remote devices: the multiple tasks include the first task, and the multiple remote devices include the first remote device. The network settings include settings for establishing communication between any two or more of the host and the remote devices.
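The abstract's flow (host receives an instruction and network settings from a user device, applies the settings, then runs the task on a remote device) can be sketched minimally as below. The `Host` and `RemoteDevice` classes and their method names are hypothetical; the patent does not specify an API.

```python
# Minimal sketch of the claimed workflow trigger, under assumed class and
# method names. The settings check ("ssid") is a stand-in for whatever
# connection parameters the real devices would require.

class RemoteDevice:
    def __init__(self, name: str):
        self.name = name
        self.settings = {}

    def apply_settings(self, settings: dict) -> None:
        """Apply network settings pushed down from the host."""
        self.settings.update(settings)

    def run(self, task: str) -> str:
        """Execute a task, requiring that connection settings were applied."""
        if "ssid" not in self.settings:
            raise RuntimeError("no connection settings applied")
        return f"{task} executed on {self.name}"

class Host:
    def __init__(self, devices: dict):
        self.devices = devices

    def execute_workflow(self, instruction: dict, network_settings: dict) -> list:
        """Apply settings to each target device, then execute its task."""
        results = []
        for task, device_name in instruction["tasks"]:
            device = self.devices[device_name]
            device.apply_settings(network_settings)  # applying the settings
            results.append(device.run(task))         # executing the task
        return results
```

In the claimed method the workflow may span several tasks and devices; the loop over `instruction["tasks"]` stands in for that ordered sequence.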
1. A method for triggering execution of a workflow over a network, the method comprising: receiving, from a user device at a host, an instruction to execute a workflow comprising a first task for being executed on a first remote device; receiving, at the host, network settings from the user device to enable communication and execution of the first task on the first remote device; applying the network settings to at least one of the host or the first remote device; and executing the first task on the first remote device using the network settings upon receiving the instruction from the user device, wherein the workflow comprises a plurality of tasks for being executed on a plurality of remote devices, the plurality of tasks including the first task, and the plurality of remote devices including the first remote device, and wherein the network settings include settings for establishing communication between two or more of the host and the plurality of remote devices. 2. The method of claim 1, wherein the network settings include information relating to at least one of a network type, a connection mode, connection settings, or connection parameters. 3. The method of claim 2, wherein the network type is at least one of a Public Switched Telephone Network (“PSTN”), the Internet, a proprietary public network, a wireless voice and packet-data network, 1G, 2G, 2.5G, 3G, 4G or LTE telecommunication network, a wireless office telephone system (“WOTS”), a wired or wireless local area network (“LAN”), Bluetooth network, IEEE 802.11 WLAN, a wired or wireless personal area network (“PAN”), or a wired or wireless metropolitan area network (“MAN”). 4. The method of claim 2, wherein connection parameters include at least one of a login and a password, access credentials, a combination of a private and public key, Hardware type, Processor type, Network Hardware, API Key, API Secret, Connection Profile name, Type, Security Type, SSID, Password, Transport Protocol, or Device Role. 5. 
The method of claim 1, wherein the host generates network settings according to a status of the network configuration of at least one of the user device, the host or the plurality of remote devices. 6. The method of claim 1, wherein the instruction to execute the workflow is received from an application software on the user device, or by manipulation of the GUI software on the user device, further comprising: receiving an instruction, from the application software on the user device or from the GUI software on the user device, to apply the network settings to at least one of the plurality of remote devices. 7. The method of claim 6, wherein the application software comprises at least one of word-processing application, spreadsheet application, database application, email application, messaging application, text messaging interface application, presentation application, Internet-browser application, calendar application, media application, multimedia application, file management programs, operating system shells, a compiled application without a graphical user interface, a compiled programming application, a time-based job scheduler, a CRON, a macro-language application, at least one of a sensor or actuator based application deployed on a device or a computer, micro-controller based application, SoC application, an MQTT application, a wireless communication based application, a mobile application, or an RTOS based application. 8. The method of claim 7, wherein the workflow comprises a plurality of tasks for being executed on a plurality of devices, wherein a second task of the plurality of tasks is executed on a second remote device using task parameters received from the application software on the user device. 9. The method of claim 8, wherein the task parameters received from the application software on the user device are input to the application software using the GUI on the user device, or include an output resulting from execution of the first task. 10. 
The method of claim 1, wherein execution of the first task on the first remote device includes placing a call by the first task to begin execution of a second task on a second remote device, or execution of the first task and the second task require exchange of data between the first task and the second task, wherein the placing a call and the data exchange are implemented according to the network settings. 11. The method of claim 1, wherein the workflow is an ordered sequence of the plurality of tasks designed graphically using a graphical user interface (GUI) application on the user device. 12. The method of claim 1, wherein the network settings are input at the user device. 13. The method of claim 1, wherein the network settings are obtained from a database remote to the host, the user device and the remote device. 14. An apparatus for triggering execution of a workflow over a network, the apparatus comprising: at least one processor; a memory coupled to the at least one processor comprising instructions, which when executed using the at least one processor executes a method comprising: receiving, from a user device, an instruction to execute a workflow comprising a first task for being executed on a first remote device; receiving network settings from the user device to enable communication and execution of the first task on the first remote device; applying the network settings to at least one of the host or the first remote device; and executing the first task on the first remote device using the network settings upon receiving the instructions from the user device, wherein the workflow comprises a plurality of tasks for being executed on a plurality of remote devices, the plurality of tasks including the first task, and the plurality of remote devices including the first remote device, wherein the network settings include settings for establishing communication between two or more of the host and the plurality of remote devices. 15. 
The apparatus of claim 14, wherein the network settings include information relating to at least one of a network type, a connection mode, connection settings, or connection parameters, wherein the network type is at least one of a Public Switched Telephone Network (“PSTN”), the Internet, a proprietary public network, a wireless voice and packet-data network, 1G, 2G, 2.5G, 3G, 4G or LTE telecommunication network, a wireless office telephone system (“WOTS”), a wired or wireless local area network (“LAN”), Bluetooth network, IEEE 802.11 WLAN, a wired or wireless personal area network (“PAN”), or a wired or wireless metropolitan area network (“MAN”), wherein connection settings include at least one of standardized, proprietary, or open-source communication protocols, and wherein connection parameters include at least one of a login and a password, access credentials, a combination of a private and public key, Hardware type, Processor type, Network Hardware, API Key, API Secret, Connection Profile name, Type, Security Type, SSID, Password, Transport Protocol, or Device Role. 16. The apparatus of claim 14, wherein the host receives an instruction from the user device to execute the workflow via an application software on the user device or the GUI software on the user device, and wherein the host receives an instruction to apply the network settings to at least one of the plurality of remote devices, from the application software on the user device or from the GUI software on the user device. 17. 
The apparatus of claim 16, wherein the application software comprises at least one of word-processing application, spreadsheet application, database application, email application, messaging application, text messaging interface application, presentation application, Internet-browser application, calendar application, media application, multimedia application, file management programs, operating system shells, a compiled application without a graphical user interface, a compiled programming application, a time-based job scheduler, a CRON, a macro-language application, at least one of a sensor or actuator based application deployed on a device or a computer, micro-controller based application, SoC application, an MQTT application, a wireless communication based application, a mobile application, or an RTOS based application. 18. The apparatus of claim 14, wherein execution of the first task on the first remote device includes placing a call by the first task to begin execution of a second task on a second remote device, or execution of the first task and the second task require exchange of data between the first task and the second task, wherein the placing a call and the data exchange are implemented according to the network settings. 19. The apparatus of claim 14, wherein the network settings are input at the user device. 20. The apparatus of claim 14, wherein the network settings are obtained from a database remote to the host, the user device and the remote device.
A method for triggering execution of a workflow over a network comprises receiving an instruction to execute a workflow comprising a first task for being executed on a first remote device, receiving network settings from the user device to enable communication and execution of the first task on the first remote device, applying the network settings to at least one of the host or the first remote device, and executing the first task on the first remote device using the network settings upon receiving the instructions from the user device. The workflow comprises multiple tasks for execution on multiple remote devices. Multiple tasks include the first task, and multiple remote devices include the first remote device. Network settings include settings for establishing communication between any two or more of the host and the remote devices.1. A method for triggering execution of a workflow over a network, the method comprising: receiving, from a user device at a host, an instruction to execute a workflow comprising a first task for being executed on a first remote device; receiving, at the host, network settings from the user device to enable communication and execution of the first task on the first remote device; applying the network settings to at least one of the host or the first remote device; and executing the first task on the first remote device using the network settings upon receiving the instructions from the user device, wherein the workflow comprises a plurality of tasks for being executed on a plurality of remote devices, the plurality of tasks including the first task, and the plurality of remote devices including the first remote device, and wherein the network settings include settings for establishing communication between two or more of the host and the plurality of remote devices. 2. 
The method of claim 1, wherein the network settings include information relating to at least one of a network type, a connection mode, connection settings, or connection parameters. 3. The method of claim 2, wherein the network type is at least one of a Public Switched Telephone Network (“PSTN”), the Internet, a proprietary public network, a wireless voice and packet-data network, 1G, 2G, 2.5G, 3G, 4G or LTE telecommunication network, a wireless office telephone system (“WOTS”), a wired or wireless local area network (“LAN”), Bluetooth network, IEEE 802.11 WLAN, a wired or wireless personal area network (“PAN”), or a wired or wireless metropolitan area network (“MAN”). 4. The method of claim 2, wherein connection parameters include at least one of a login and a password, access credentials, a combination of a private and public key, Hardware type, Processor type, Network Hardware, API Key, API Secret, Connection Profile name, Type, Security Type, SSID, Password, Transport Protocol, or Device Role. 5. The method of claim 1, wherein the host generates network settings according to a status of the network configuration of at least one of the user device, the host or the plurality of remote devices. 6. The method of claim 1, wherein the instruction to execute the workflow is received from an application software on the user device, or by manipulation of the GUI software on the user device, further comprising: receiving an instruction, from the application software on the user device or from the GUI software on the user device, to apply the network settings to at least one of the plurality of remote devices. 7. 
The method of claim 6, wherein the application software comprises at least one of word-processing application, spreadsheet application, database application, email application, messaging application, text messaging interface application, presentation application, Internet-browser application, calendar application, media application, multimedia application, file management programs, operating system shells, a compiled application without a graphical user interface, a compiled programming application, a time-based job scheduler, a CRON, a macro-language application, at least one of a sensor or actuator based application deployed on a device or a computer, micro-controller based application, SoC application, an MQTT application, a wireless communication based application, a mobile application, or an RTOS based application. 8. The method of claim 7, wherein the workflow comprises a plurality of tasks for being executed on a plurality of devices, wherein a second task of the plurality of tasks is executed on a second remote device using task parameters received from the application software on the user device. 9. The method of claim 8, wherein the task parameters received from the application software on the user device are input to the application software using the GUI on the user device, or include an output resulting from execution of the first task. 10. The method of claim 1, wherein execution of the first task on the first remote device includes placing a call by the first task to begin execution of a second task on a second remote device, or execution of the first task and the second task require exchange of data between the first task and the second task, wherein the placing a call and the data exchange are implemented according to the network settings. 11. The method of claim 1, wherein the workflow is an ordered sequence of the plurality of tasks designed graphically using a graphical user interface (GUI) application on the user device. 12. 
The method of claim 1, wherein the network settings are input at the user device. 13. The method of claim 1, wherein the network settings are obtained from a database remote to the host, the user device and the remote device. 14. An apparatus for triggering execution of a workflow over a network, the apparatus comprising: at least one processor; a memory coupled to the at least one processor comprising instructions, which when executed using the at least one processor executes a method comprising: receiving, from a user device, an instruction to execute a workflow comprising a first task for being executed on a first remote device; receiving network settings from the user device to enable communication and execution of the first task on the first remote device; applying the network settings to at least one of the host or the first remote device; and executing the first task on the first remote device using the network settings upon receiving the instructions from the user device, wherein the workflow comprises a plurality of tasks for being executed on a plurality of remote devices, the plurality of tasks including the first task, and the plurality of remote devices including the first remote device, wherein the network settings include settings for establishing communication between two or more of the host and the plurality of remote devices. 15. 
The apparatus of claim 14, wherein the network settings include information relating to at least one of a network type, a connection mode, connection settings, or connection parameters, wherein the network type is at least one of a Public Switched Telephone Network (“PSTN”), the Internet, a proprietary public network, a wireless voice and packet-data network, 1G, 2G, 2.5G, 3G, 4G or LTE telecommunication network, a wireless office telephone system (“WOTS”), a wired or wireless local area network (“LAN”), Bluetooth network, IEEE 802.11 WLAN, a wired or wireless personal area network (“PAN”), or a wired or wireless metropolitan area network (“MAN”), wherein connection settings include at least one of standardized, proprietary, or open-source communication protocols, and wherein connection parameters include at least one of a login and a password, access credentials, a combination of a private and public key, Hardware type, Processor type, Network Hardware, API Key, API Secret, Connection Profile name, Type, Security Type, SSID, Password, Transport Protocol, or Device Role. 16. The apparatus of claim 14, wherein the host receives an instruction from the user device to execute the workflow via an application software on the user device or the GUI software on the user device, and wherein the host receives an instruction to apply the network settings to at least one of the plurality of remote devices, from the application software on the user device or from the GUI software on the user device. 17. 
The apparatus of claim 16, wherein the application software comprises at least one of word-processing application, spreadsheet application, database application, email application, messaging application, text messaging interface application, presentation application, Internet-browser application, calendar application, media application, multimedia application, file management programs, operating system shells, a compiled application without a graphical user interface, a compiled programming application, a time-based job scheduler, a CRON, a macro-language application, at least one of a sensor or actuator based application deployed on a device or a computer, micro-controller based application, SoC application, an MQTT application, a wireless communication based application, a mobile application, or an RTOS based application. 18. The apparatus of claim 14, wherein execution of the first task on the first remote device includes placing a call by the first task to begin execution of a second task on a second remote device, or execution of the first task and the second task require exchange of data between the first task and the second task, wherein the placing a call and the data exchange are implemented according to the network settings. 19. The apparatus of claim 14, wherein the network settings are input at the user device. 20. The apparatus of claim 14, wherein the network settings are obtained from a database remote to the host, the user device and the remote device.
2,400
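The record above describes a host that receives a workflow-execution instruction and network settings from a user device, applies the settings, and runs tasks on remote devices. A minimal sketch of that flow, assuming hypothetical class and method names (`Host`, `Workflow`, `NetworkSettings` are illustrative, not from the patent):

```python
# Illustrative sketch only: models the claimed flow of applying received
# network settings and then executing workflow tasks on remote devices.
from dataclasses import dataclass

@dataclass
class NetworkSettings:
    network_type: str          # e.g. "IEEE 802.11 WLAN" or "PSTN"
    connection_params: dict    # e.g. {"ssid": ..., "password": ...}

@dataclass
class Task:
    name: str
    target_device: str

@dataclass
class Workflow:
    tasks: list                # ordered sequence of tasks

class Host:
    def __init__(self):
        self.applied_settings = {}   # device id -> NetworkSettings
        self.log = []

    def apply_settings(self, device_id, settings):
        # Apply the received settings to a remote device (or the host itself).
        self.applied_settings[device_id] = settings

    def execute(self, workflow, settings):
        # Apply settings, then run each task on its remote device in order.
        for task in workflow.tasks:
            self.apply_settings(task.target_device, settings)
            self.log.append(f"ran {task.name} on {task.target_device}")
        return self.log

host = Host()
wf = Workflow(tasks=[Task("collect", "device-A"), Task("report", "device-B")])
settings = NetworkSettings("IEEE 802.11 WLAN", {"ssid": "lab", "password": "secret"})
print(host.execute(wf, settings))
```

This is a toy model of the control flow only; real remote execution would involve actual transport over the chosen network type.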
8,159
8,159
14,622,770
2,483
An apparatus for viewing a stereoscopic display comprises a frame chassis, a hinge mechanism, a left lens assembly, a right lens assembly, and a sensor array. The hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. The left lens assembly is coupled to the frame chassis via the hinge mechanism and is configured to be transparent to a first image output by the stereoscopic display and opaque to a second image output from the stereoscopic display, while the right lens assembly is coupled to the frame chassis via the hinge mechanism and is configured to be transparent to the second image output and opaque to the first image output. The sensor array is positioned to detect a current orientation of the left lens and the right lens.
1. An apparatus for viewing a stereoscopic display, the apparatus comprising: a frame chassis; a hinge mechanism; a left lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to a first image output by the stereoscopic display and opaque to a second image output from the stereoscopic display; a right lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to the second image output and opaque to the first image output; and a sensor array positioned to detect a current orientation of the left lens and the right lens, wherein the hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. 2. The apparatus of claim 1, wherein the first orientation comprises an orientation for viewing two-dimensional content presented by the stereoscopic display, and the second orientation comprises an orientation for viewing three-dimensional content presented by the stereoscopic display. 3. The apparatus of claim 1, wherein the first image output comprises a first portion of stereoscopic output generated by the stereoscopic display, and the second image output comprises a second portion of stereoscopic output generated by the stereoscopic display. 4. The apparatus of claim 1, wherein the sensor array is coupled to the hinge mechanism and is configured to generate a signal based on the current orientation of the left lens and the right lens. 5. The apparatus of claim 1, wherein the sensor array includes an orientation marker mounted on at least one of the left lens array and the right lens array. 6. The apparatus of claim 1, wherein the hinge mechanism comprises an automated actuator configured to switch the left lens assembly and the right lens assembly from the first orientation to the second orientation. 7. 
The apparatus of claim 6, further comprising: a wireless communication module configured to transmit information to and receive information from the stereoscopic display; and a controller configured to cause the left lens assembly and the right lens assembly to be switched from the first orientation to the second orientation in response to the wireless communication module receiving a signal from the stereoscopic display. 8. The apparatus of claim 1, further comprising a wireless communication module configured to transmit information to and receive information from the stereoscopic display. 9. The apparatus of claim 8, further comprising a controller configured to cause a signal to be transmitted to the stereoscopic display when the left lens assembly and the right lens assembly are switched from the first orientation to the second orientation. 10. A stereoscopic display system comprising: a stereoscopic display configured to present a first image output and a second image output; and an apparatus for viewing the stereoscopic display, the apparatus comprising: a frame chassis; a hinge mechanism; a left lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to the first image output and opaque to the second image; a right lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to the second image output and opaque to the first image output; and a sensor array positioned to detect a current orientation of the left lens and the right lens, wherein the hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. 11. 
The stereoscopic display system of claim 10, wherein the first orientation comprises an orientation for viewing two-dimensional content presented by the stereoscopic display and the second orientation comprises an orientation for viewing three-dimensional content presented by the stereoscopic display. 12. The stereoscopic display system of claim 10, wherein the sensor array includes an orientation marker mounted on at least one of the left lens array and the right lens array. 13. The stereoscopic display system of claim 12, further comprising a sensor configured to generate an image of the orientation marker when the apparatus is positioned for viewing the stereoscopic display. 14. The stereoscopic display system of claim 13, further comprising a controller configured to determine an orientation of the left lens assembly and the right lens assembly based on the image of the orientation marker. 15. The stereoscopic display system of claim 14, wherein the controller is further configured to change a display mode of the stereoscopic display based on the orientation of the left lens assembly and the right lens assembly. 16. The stereoscopic display system of claim 10, wherein the hinge mechanism comprises an automated actuator configured to switch the left lens assembly and the right lens assembly from the first orientation to the second orientation. 17. The stereoscopic display system of claim 16, wherein the apparatus further comprises: a wireless communication module configured to transmit information to and receive information from the stereoscopic display; and an actuator controller configured to cause the left lens assembly and the right lens assembly to be switched from the first orientation to the second orientation in response to the wireless communication module receiving a signal from the stereoscopic display. 18. 
The stereoscopic display system of claim 10, further comprising a wireless communication module configured to transmit information to and receive information from the stereoscopic display. 19. The stereoscopic display system of claim 18, further comprising a controller configured to cause a signal to be transmitted to the stereoscopic display when the left lens assembly and the right lens assembly are switched from the first orientation to the second orientation. 20. The stereoscopic display system of claim 10, wherein the first image output comprises a first portion of stereoscopic output generated by the stereoscopic display, and the second image output comprises a second portion of stereoscopic output generated by the stereoscopic display.
An apparatus for viewing a stereoscopic display comprises a frame chassis, a hinge mechanism, a left lens assembly, a right lens assembly, and a sensor array. The hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. The left lens assembly is coupled to the frame chassis via the hinge mechanism and is configured to be transparent to a first image output by the stereoscopic display and opaque to a second image output from the stereoscopic display, while the right lens assembly is coupled to the frame chassis via the hinge mechanism and is configured to be transparent to the second image output and opaque to the first image output. The sensor array is positioned to detect a current orientation of the left lens and the right lens.1. An apparatus for viewing a stereoscopic display, the apparatus comprising: a frame chassis; a hinge mechanism; a left lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to a first image output by the stereoscopic display and opaque to a second image output from the stereoscopic display; a right lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to the second image output and opaque to the first image output; and a sensor array positioned to detect a current orientation of the left lens and the right lens, wherein the hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. 2. The apparatus of claim 1, wherein the first orientation comprises an orientation for viewing two-dimensional content presented by the stereoscopic display, and the second orientation comprises an orientation for viewing three-dimensional content presented by the stereoscopic display. 3. 
The apparatus of claim 1, wherein the first image output comprises a first portion of stereoscopic output generated by the stereoscopic display, and the second image output comprises a second portion of stereoscopic output generated by the stereoscopic display. 4. The apparatus of claim 1, wherein the sensor array is coupled to the hinge mechanism and is configured to generate a signal based on the current orientation of the left lens and the right lens. 5. The apparatus of claim 1, wherein the sensor array includes an orientation marker mounted on at least one of the left lens array and the right lens array. 6. The apparatus of claim 1, wherein the hinge mechanism comprises an automated actuator configured to switch the left lens assembly and the right lens assembly from the first orientation to the second orientation. 7. The apparatus of claim 6, further comprising: a wireless communication module configured to transmit information to and receive information from the stereoscopic display; and a controller configured to cause the left lens assembly and the right lens assembly to be switched from the first orientation to the second orientation in response to the wireless communication module receiving a signal from the stereoscopic display. 8. The apparatus of claim 1, further comprising a wireless communication module configured to transmit information to and receive information from the stereoscopic display. 9. The apparatus of claim 8, further comprising a controller configured to cause a signal to be transmitted to the stereoscopic display when the left lens assembly and the right lens assembly are switched from the first orientation to the second orientation. 10. 
A stereoscopic display system comprising: a stereoscopic display configured to present a first image output and a second image output; and an apparatus for viewing the stereoscopic display, the apparatus comprising: a frame chassis; a hinge mechanism; a left lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to the first image output and opaque to the second image; a right lens assembly coupled to the frame chassis via the hinge mechanism and configured to be transparent to the second image output and opaque to the first image output; and a sensor array positioned to detect a current orientation of the left lens and the right lens, wherein the hinge mechanism allows the left lens assembly and the right lens assembly to switch from a first orientation to a second orientation. 11. The stereoscopic display system of claim 10, wherein the first orientation comprises an orientation for viewing two-dimensional content presented by the stereoscopic display and the second orientation comprises an orientation for viewing three-dimensional content presented by the stereoscopic display. 12. The stereoscopic display system of claim 10, wherein the sensor array includes an orientation marker mounted on at least one of the left lens array and the right lens array. 13. The stereoscopic display system of claim 12, further comprising a sensor configured to generate an image of the orientation marker when the apparatus is positioned for viewing the stereoscopic display. 14. The stereoscopic display system of claim 13, further comprising a controller configured to determine an orientation of the left lens assembly and the right lens assembly based on the image of the orientation marker. 15. The stereoscopic display system of claim 14, wherein the controller is further configured to change a display mode of the stereoscopic display based on the orientation of the left lens assembly and the right lens assembly. 16. 
The stereoscopic display system of claim 10, wherein the hinge mechanism comprises an automated actuator configured to switch the left lens assembly and the right lens assembly from the first orientation to the second orientation. 17. The stereoscopic display system of claim 16, wherein the apparatus further comprises: a wireless communication module configured to transmit information to and receive information from the stereoscopic display; and an actuator controller configured to cause the left lens assembly and the right lens assembly to be switched from the first orientation to the second orientation in response to the wireless communication module receiving a signal from the stereoscopic display. 18. The stereoscopic display system of claim 10, further comprising a wireless communication module configured to transmit information to and receive information from the stereoscopic display. 19. The stereoscopic display system of claim 18, further comprising a controller configured to cause a signal to be transmitted to the stereoscopic display when the left lens assembly and the right lens assembly are switched from the first orientation to the second orientation. 20. The stereoscopic display system of claim 10, wherein the first image output comprises a first portion of stereoscopic output generated by the stereoscopic display, and the second image output comprises a second portion of stereoscopic output generated by the stereoscopic display.
2,400
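The record above claims a controller that changes the display mode based on the lens orientation detected by the sensor array. A minimal sketch of that decision logic, with hypothetical names (the orientation labels and function name are illustrative assumptions, not from the patent):

```python
# Illustrative sketch only: maps a sensed lens orientation to a display mode,
# mirroring the claimed first orientation (2D viewing) and second orientation
# (3D viewing). Real hardware would read the sensor array instead of a string.
FIRST_ORIENTATION = "first"    # orientation for viewing two-dimensional content
SECOND_ORIENTATION = "second"  # orientation for viewing three-dimensional content

def display_mode(sensed_orientation):
    """Return the display mode for the detected lens orientation."""
    if sensed_orientation == FIRST_ORIENTATION:
        return "two-dimensional"
    if sensed_orientation == SECOND_ORIENTATION:
        return "stereoscopic"
    raise ValueError(f"unknown orientation: {sensed_orientation}")

print(display_mode(FIRST_ORIENTATION))   # two-dimensional
print(display_mode(SECOND_ORIENTATION))  # stereoscopic
```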
8,160
8,160
14,565,675
2,488
Various embodiments include a system and method that enhance the visualization of selected individual images from a real-time scan. The method can include acquiring ultrasound image data at an ultrasound system. The method may include processing the acquired ultrasound image data according to cine sequence processing parameters to generate a cine sequence. The cine sequence includes a plurality of frames, each of the frames having a first resolution. The method can include receiving a trigger. The method may include processing the acquired ultrasound image data according to still image processing parameters in response to the trigger to generate a still image. The still image has a second resolution that is higher than the first resolution.
1. A method, comprising: acquiring ultrasound image data at an ultrasound system; processing the acquired ultrasound image data, by a processor, according to cine sequence processing parameters to generate a cine sequence comprising a plurality of frames, wherein each of the frames has a first resolution; receiving a trigger at the processor; and processing the acquired ultrasound image data, by the processor, according to still image processing parameters in response to the trigger to generate a still image having a second resolution that is higher than the first resolution. 2. The method according to claim 1, comprising receiving user input at a user input module identifying one or more of the cine sequence processing parameters and cine sequence imaging parameters prior to acquiring the ultrasound image data. 3. The method according to claim 1, comprising receiving user input at a user input module identifying one or more of the still image processing parameters, still image imaging parameters, and the trigger prior to acquiring the ultrasound image data and receiving the trigger. 4. The method according to claim 1, comprising: storing the acquired ultrasound image data in memory; navigating, in response to instructions received at a user input module, the cine sequence to one of the plurality of frames, wherein the received trigger selects the one of the plurality of frames; retrieving, by the processor, a portion of the acquired ultrasound image data from the memory that corresponds with the one of the plurality of frames selected by the trigger; and processing the portion of the acquired ultrasound image data, by the processor, to generate the still image. 5. The method according to claim 4, comprising processing at least a portion of the still image, by the processor, according to magnifying glass processing parameters to generate a magnified still image. 6. 
The method according to claim 4, wherein the trigger comprises one or more of: an automatic trigger configured to activate when the cine sequence is navigated to the one of the plurality of frames by one or more of pausing or stopping the cine sequence at the one of the plurality of frames, and a user trigger configured to activate based on an input received at a user input module. 7. The method according to claim 1, wherein the acquired ultrasound image data processed to generate the cine sequence is acquired based on cine sequence imaging parameters, and wherein the acquired ultrasound image data processed to generate the still image is acquired based on still image imaging parameters. 8. The method according to claim 7, wherein the trigger comprises one or more of: a timer trigger configured to activate at one or more predetermined times, an automatic trigger configured to activate when at least one condition is detected in the acquired ultrasound image data, and a user trigger configured to activate based on an input received at a user input module. 9. The method according to claim 1, wherein the cine sequence is one of: a two-dimensional (2D) real-time motion imaging scan, and a four-dimensional (4D) imaging scan; and wherein the still image is one of: a static two-dimensional (2D) imaging acquisition, and a static three-dimensional (3D) imaging acquisition. 10. A system, comprising: an ultrasound device operable to acquire ultrasound image data; and a processor operable to: process the acquired ultrasound image data according to cine sequence processing parameters to generate a cine sequence comprising a plurality of frames, wherein each of the frames has a first resolution, receive a trigger, and process the acquired ultrasound image data according to still image processing parameters in response to the received trigger to generate a still image having a second resolution that is higher than the first resolution. 11. 
The system according to claim 10, comprising a user input module configured to receive one or more of: user input identifying one or more of the cine sequence processing parameters and cine sequence imaging parameters prior to acquiring the ultrasound image data, user input identifying one or more of the still image processing parameters, still image imaging parameters, and the trigger prior to acquiring the ultrasound image data and receiving the trigger, and user input provided as the trigger to one of: select one of the plurality of frames of the cine sequence that corresponds with a portion of the acquired ultrasound image data that is processed to generate the still image, or switch from acquiring ultrasound image data according to the cine sequence imaging parameters and processing the acquired ultrasound image data according to the cine sequence processing parameters to acquiring ultrasound image data according to the still image imaging parameters and processing the acquired ultrasound image data according to the still image processing parameters. 12. The system according to claim 10, comprising: a memory configured to store the acquired ultrasound image data; and a user input module configured to receive instructions for navigating the cine sequence to one of the plurality of frames, wherein the received trigger selects the one of the plurality of frames, wherein the processor is configured to: retrieve a portion of the acquired ultrasound image data from the memory that corresponds with the one of the plurality of frames selected by the trigger, and process the portion of the acquired ultrasound image data to generate the still image. 13. The system according to claim 12, wherein the processor is configured to process at least a portion of the still image according to magnifying glass processing parameters to generate a magnified still image. 14. 
The system according to claim 12, wherein the trigger comprises one or more of: an automatic trigger configured to activate when the cine sequence is navigated to the one of the plurality of frames by one or more of pausing or stopping the cine sequence at the one of the plurality of frames, and a user trigger configured to activate based on an input received at a user input module. 15. The system according to claim 10, wherein the acquired ultrasound image data processed to generate the cine sequence is acquired by the ultrasound device based on cine sequence imaging parameters, and wherein the acquired ultrasound image data processed to generate the still image is acquired by the ultrasound device based on still image imaging parameters. 16. A non-transitory computer readable medium having stored thereon, a computer program having at least one code section, the at least one code section being executable by a machine for causing the machine to perform steps comprising: acquiring ultrasound image data; processing the acquired ultrasound image data according to cine sequence processing parameters to generate a cine sequence comprising a plurality of frames, each of the frames having a first resolution; receiving a trigger; and processing the acquired ultrasound image data according to still image processing parameters in response to the trigger to generate a still image having a second resolution that is higher than the first resolution. 17. The non-transitory computer readable medium according to claim 16, comprising receiving one or more of: user input identifying one or more of the cine sequence processing parameters and cine sequence imaging parameters prior to acquiring the ultrasound image data, and user input identifying one or more of the still image processing parameters, still image imaging parameters, and the trigger prior to acquiring the ultrasound image data and receiving the trigger. 18. 
The non-transitory computer readable medium according to claim 16, comprising: storing the acquired ultrasound image data in memory; navigating the cine sequence to one of the plurality of frames, wherein the received trigger selects the one of the plurality of frames; retrieving a portion of the acquired ultrasound image data from the memory that corresponds with the one of the plurality of frames selected by the trigger; and processing the portion of the acquired ultrasound image data to generate the still image. 19. The non-transitory computer readable medium according to claim 18, comprising processing at least a portion of the still image according to magnifying glass processing parameters to generate a magnified still image. 20. The non-transitory computer readable medium according to claim 16, wherein the acquired ultrasound image data processed to generate the cine sequence is acquired based on cine sequence imaging parameters, and wherein the acquired ultrasound image data processed to generate the still image is acquired based on still image imaging parameters.
Various embodiments include a system and method that enhance the visualization of selected individual images from a real-time scan. The method can include acquiring ultrasound image data at an ultrasound system. The method may include processing the acquired ultrasound image data according to cine sequence processing parameters to generate a cine sequence. The cine sequence includes a plurality of frames, each of the frames having a first resolution. The method can include receiving a trigger. The method may include processing the acquired ultrasound image data according to still image processing parameters in response to the trigger to generate a still image. The still image has a second resolution that is higher than the first resolution.1. A method, comprising: acquiring ultrasound image data at an ultrasound system; processing the acquired ultrasound image data, by a processor, according to cine sequence processing parameters to generate a cine sequence comprising a plurality of frames, wherein each of the frames has a first resolution; receiving a trigger at the processor; and processing the acquired ultrasound image data, by the processor, according to still image processing parameters in response to the trigger to generate a still image having a second resolution that is higher than the first resolution. 2. The method according to claim 1, comprising receiving user input at a user input module identifying one or more of the cine sequence processing parameters and cine sequence imaging parameters prior to acquiring the ultrasound image data. 3. The method according to claim 1, comprising receiving user input at a user input module identifying one or more of the still image processing parameters, still image imaging parameters, and the trigger prior to acquiring the ultrasound image data and receiving the trigger. 4. 
The method according to claim 1, comprising: storing the acquired ultrasound image data in memory; navigating, in response to instructions received at a user input module, the cine sequence to one of the plurality of frames, wherein the received trigger selects the one of the plurality of frames; retrieving, by the processor, a portion of the acquired ultrasound image data from the memory that corresponds with the one of the plurality of frames selected by the trigger; and processing the portion of the acquired ultrasound image data, by the processor, to generate the still image. 5. The method according to claim 4, comprising processing at least a portion of the still image, by the processor, according to magnifying glass processing parameters to generate a magnified still image. 6. The method according to claim 4, wherein the trigger comprises one or more of: an automatic trigger configured to activate when the cine sequence is navigated to the one of the plurality of frames by one or more of pausing or stopping the cine sequence at the one of the plurality of frames, and a user trigger configured to activate based on an input received at a user input module. 7. The method according to claim 1, wherein the acquired ultrasound image data processed to generate the cine sequence is acquired based on cine sequence imaging parameters, and wherein the acquired ultrasound image data processed to generate the still image is acquired based on still image imaging parameters. 8. The method according to claim 7, wherein the trigger comprises one or more of: a timer trigger configured to activate at one or more predetermined times, an automatic trigger configured to activate when at least one condition is detected in the acquired ultrasound image data, and a user trigger configured to activate based on an input received at a user input module. 9. 
The method according to claim 1, wherein the cine sequence is one of: a two-dimensional (2D) real-time motion imaging scan, and a four-dimensional (4D) imaging scan; and wherein the still image is one of: a static two-dimensional (2D) imaging acquisition, and a static three-dimensional (3D) imaging acquisition. 10. A system, comprising: an ultrasound device operable to acquire ultrasound image data; and a processor operable to: process the acquired ultrasound image data according to cine sequence processing parameters to generate a cine sequence comprising a plurality of frames, wherein each of the frames has a first resolution, receive a trigger, and process the acquired ultrasound image data according to still image processing parameters in response to the received trigger to generate a still image having a second resolution that is higher than the first resolution. 11. The system according to claim 10, comprising a user input module configured to receive one or more of: user input identifying one or more of the cine sequence processing parameters and cine sequence imaging parameters prior to acquiring the ultrasound image data, user input identifying one or more of the still image processing parameters, still image imaging parameters, and the trigger prior to acquiring the ultrasound image data and receiving the trigger, and user input provided as the trigger to one of: select one of the plurality of frames of the cine sequence that corresponds with a portion of the acquired ultrasound image data that is processed to generate the still image, or switch from acquiring ultrasound image data according to the cine sequence imaging parameters and processing the acquired ultrasound image data according to the cine sequence processing parameters to acquiring ultrasound image data according to the still image imaging parameters and processing the acquired ultrasound image data according to the still image processing parameters. 12. 
The system according to claim 10, comprising: a memory configured to store the acquired ultrasound image data; and a user input module configured to receive instructions for navigating the cine sequence to one of the plurality of frames, wherein the received trigger selects the one of the plurality of frames, wherein the processor is configured to: retrieve a portion of the acquired ultrasound image data from the memory that corresponds with the one of the plurality of frames selected by the trigger, and process the portion of the acquired ultrasound image data to generate the still image. 13. The system according to claim 12, wherein the processor is configured to process at least a portion of the still image according to magnifying glass processing parameters to generate a magnified still image. 14. The system according to claim 12, wherein the trigger comprises one or more of: an automatic trigger configured to activate when the cine sequence is navigated to the one of the plurality of frames by one or more of pausing or stopping the cine sequence at the one of the plurality of frames, and a user trigger configured to activate based on an input received at a user input module. 15. The system according to claim 10, wherein the acquired ultrasound image data processed to generate the cine sequence is acquired by the ultrasound device based on cine sequence imaging parameters, and wherein the acquired ultrasound image data processed to generate the still image is acquired by the ultrasound device based on still image imaging parameters. 16. 
A non-transitory computer readable medium having stored thereon, a computer program having at least one code section, the at least one code section being executable by a machine for causing the machine to perform steps comprising: acquiring ultrasound image data; processing the acquired ultrasound image data according to cine sequence processing parameters to generate a cine sequence comprising a plurality of frames, each of the frames having a first resolution; receiving a trigger; and processing the acquired ultrasound image data according to still image processing parameters in response to the trigger to generate a still image having a second resolution that is higher than the first resolution. 17. The non-transitory computer readable medium according to claim 16, comprising receiving one or more of: user input identifying one or more of the cine sequence processing parameters and cine sequence imaging parameters prior to acquiring the ultrasound image data, and user input identifying one or more of the still image processing parameters, still image imaging parameters, and the trigger prior to acquiring the ultrasound image data and receiving the trigger. 18. The non-transitory computer readable medium according to claim 16, comprising: storing the acquired ultrasound image data in memory; navigating the cine sequence to one of the plurality of frames, wherein the received trigger selects the one of the plurality of frames; retrieving a portion of the acquired ultrasound image data from the memory that corresponds with the one of the plurality of frames selected by the trigger; and processing the portion of the acquired ultrasound image data to generate the still image. 19. The non-transitory computer readable medium according to claim 18, comprising processing at least a portion of the still image according to magnifying glass processing parameters to generate a magnified still image. 20. 
The non-transitory computer readable medium according to claim 16, wherein the acquired ultrasound image data processed to generate the cine sequence is acquired based on cine sequence imaging parameters, and wherein the acquired ultrasound image data processed to generate the still image is acquired based on still image imaging parameters.
2,400
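The ultrasound record above (claims 1-20) turns on one idea: the same raw acquisition is processed twice, once per frame at a lower resolution for a live cine loop, and again at a higher resolution for a still image when a trigger selects a frame. A minimal Python sketch of that flow, purely illustrative; the resolutions, `process`, and `run_pipeline` are invented stand-ins, not anything from the application:

```python
# Hypothetical sketch of the claimed dual-resolution pipeline: stream
# low-resolution cine frames, then reprocess the stored raw data of one
# triggered frame at a higher still-image resolution (claims 1 and 4).

RAW_SAMPLES_PER_FRAME = 1024  # assumed raw acquisition size per frame
CINE_RES = 64                 # first (lower) resolution, per claim 1
STILL_RES = 256               # second (higher) resolution, per claim 1

def process(raw_frame, resolution):
    """Stand-in for processing raw data to a target resolution."""
    step = len(raw_frame) // resolution
    return [sum(raw_frame[i:i + step]) / step
            for i in range(0, step * resolution, step)]

def run_pipeline(raw_frames, trigger_index):
    # Cine sequence: every frame processed at the first resolution.
    cine = [process(f, CINE_RES) for f in raw_frames]
    # Trigger selects one frame; its raw data is retrieved from memory
    # and reprocessed at the higher still-image resolution (claim 4).
    still = process(raw_frames[trigger_index], STILL_RES)
    return cine, still

raw = [[float(i % 7) for i in range(RAW_SAMPLES_PER_FRAME)]
       for _ in range(10)]
cine, still = run_pipeline(raw, trigger_index=3)
```

The key design point the claims rely on is that the raw data is retained, so the still image is a genuine reprocessing at higher resolution rather than an upscaling of a cine frame.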
8,161
8,161
15,043,623
2,497
A computer-implemented method for discovering network attack paths is provided. The method includes a computer generating scoring system results based on analysis of vulnerabilities of nodes in a network configuration. The method also includes the computer applying Bayesian probability to the scoring system results and selected qualitative risk attributes wherein output accounts for dependencies between vulnerabilities of the nodes. The method also includes the computer applying a weighted-average algorithm to the output yielding at least one ranking of nodes in order of likelihood of targeting by an external attacker.
1. A computer-implemented method for discovering network attack paths comprising: generating scoring system results, using a computer, based on analysis of vulnerabilities of nodes in a network configuration, wherein the scoring system results are a quantitative assessment of severities of computer system security vulnerabilities of the nodes in the network; applying, using the computer, a Bayesian probability model to the scoring system results to provide probabilities of attack paths into the network, wherein the Bayesian probability model includes conditional dependency probability tables reflecting dependencies between risks associated with different nodes in the network; and combining, using the computer, qualitative input with both the scoring system results and the probabilities of attack paths, wherein by combining an output is formed; applying, using the computer, a weighted-average algorithm to the output to yield at least one ranking of nodes in order of likelihood of targeting by an external attacker. 2. (canceled) 3. The method of claim 1, wherein the weighted-average algorithm prioritizes nodes by magnitude of risk and imminence of risk. 4. The method of claim 1, wherein the scoring system results are further generated based on a fixed formula for risk assessment. 5. The method of claim 1, wherein the method accounts for betweenness centrality of nodes as a risk factor, wherein betweenness centrality quantifies a number of times a given node in the network acts as a bridge along a shortest path between two other nodes in the network. 6. The method of claim 1, wherein the method accounts for assortativity of nodes, wherein assortativity represents a preference for a given node in the network to attach to other nodes in the network. 7. The method of claim 1, wherein the selected qualitative risk attributes comprise at least one of threat type, ease of execution, business risk, and certification risk. 8-20. (canceled) 21. 
The method of claim 1 further comprising: using the weighted average algorithm to determine at least one resistant attack path into the network. 22. The method of claim 1 further comprising: using the weighted average algorithm to determine at least one optimal attack path into the network. 23. The method of claim 1 further comprising: determining an aggregated attack path using the weighted-average algorithm. 24. The method of claim 1 further comprising: determining an architecture and configuration of the network. 25. A computer comprising: a processor; a bus connected to the processor; a memory connected to the bus, the memory storing computer usable program code which, when executed by the processor, performs a method for discovering network attack paths, wherein the computer usable program code comprises: computer usable program code for generating scoring system results, using the processor, based on analysis of vulnerabilities of nodes in a network configuration, wherein the scoring system results are a quantitative assessment of severities of computer system security vulnerabilities of the nodes in the network; computer usable program code for applying, using the processor, a Bayesian probability model to the scoring system results to provide probabilities of attack paths into the network, wherein the Bayesian probability model includes conditional dependency probability tables reflecting dependencies between risks associated with different nodes in the network; computer usable program code for combining, using the processor, qualitative input with both the scoring system results and the probabilities of attack paths, wherein by combining an output is formed; and computer usable program code for applying, using the processor, a weighted-average algorithm to the output to yield at least one ranking of nodes in order of likelihood of targeting by an external attacker. 26. 
The computer of claim 25, wherein the weighted-average algorithm is configured to prioritize nodes by magnitude of risk and imminence of risk. 27. The computer of claim 25, wherein the computer usable program code for generating the scoring system results includes a fixed formula for risk assessment. 28. The computer of claim 25, wherein the computer usable program code accounts for betweenness centrality of nodes as a risk factor, wherein betweenness centrality quantifies a number of times a given node in the network acts as a bridge along a shortest path between two other nodes in the network. 29. The computer of claim 25, wherein the computer usable program code accounts for assortativity of nodes, wherein assortativity represents a preference for a given node in the network to attach to other nodes in the network. 30. The computer of claim 25, wherein the selected qualitative risk attributes comprise at least one of threat type, ease of execution, business risk, and certification risk. 31. The computer of claim 25, wherein the computer usable program code further includes: computer usable program code for using the weighted average algorithm to determine at least one resistant attack path into the network. 32. The computer of claim 25, wherein the computer usable program code further includes: computer usable program code for using the weighted average algorithm to determine at least one optimal attack path into the network. 33. The computer of claim 25 further comprising: determining an aggregated attack path using the weighted-average algorithm. 34. The computer of claim 25 further comprising: determining an architecture and configuration of the network.
A computer-implemented method for discovering network attack paths is provided. The method includes a computer generating scoring system results based on analysis of vulnerabilities of nodes in a network configuration. The method also includes the computer applying Bayesian probability to the scoring system results and selected qualitative risk attributes wherein output accounts for dependencies between vulnerabilities of the nodes. The method also includes the computer applying a weighted-average algorithm to the output yielding at least one ranking of nodes in order of likelihood of targeting by an external attacker.1. A computer-implemented method for discovering network attack paths comprising: generating scoring system results, using a computer, based on analysis of vulnerabilities of nodes in a network configuration, wherein the scoring system results are a quantitative assessment of severities of computer system security vulnerabilities of the nodes in the network; applying, using the computer, a Bayesian probability model to the scoring system results to provide probabilities of attack paths into the network, wherein the Bayesian probability model includes conditional dependency probability tables reflecting dependencies between risks associated with different nodes in the network; and combining, using the computer, qualitative input with both the scoring system results and the probabilities of attack paths, wherein by combining an output is formed; applying, using the computer, a weighted-average algorithm to the output to yield at least one ranking of nodes in order of likelihood of targeting by an external attacker. 2. (canceled) 3. The method of claim 1, wherein the weighted-average algorithm prioritizes nodes by magnitude of risk and imminence of risk. 4. The method of claim 1, wherein the scoring system results are further generated based on a fixed formula for risk assessment. 5. 
The method of claim 1, wherein the method accounts for betweenness centrality of nodes as a risk factor, wherein betweenness centrality quantifies a number of times a given node in the network acts as a bridge along a shortest path between two other nodes in the network. 6. The method of claim 1, wherein the method accounts for assortativity of nodes, wherein assortativity represents a preference for a given node in the network to attach to other nodes in the network. 7. The method of claim 1, wherein the selected qualitative risk attributes comprise at least one of threat type, ease of execution, business risk, and certification risk. 8-20. (canceled) 21. The method of claim 1 further comprising: using the weighted average algorithm to determine at least one resistant attack path into the network. 22. The method of claim 1 further comprising: using the weighted average algorithm to determine at least one optimal attack path into the network. 23. The method of claim 1 further comprising: determining an aggregated attack path using the weighted-average algorithm. 24. The method of claim 1 further comprising: determining an architecture and configuration of the network. 25. 
A computer comprising: a processor; a bus connected to the processor; a memory connected to the bus, the memory storing computer usable program code which, when executed by the processor, performs a method for discovering network attack paths, wherein the computer usable program code comprises: computer usable program code for generating scoring system results, using the processor, based on analysis of vulnerabilities of nodes in a network configuration, wherein the scoring system results are a quantitative assessment of severities of computer system security vulnerabilities of the nodes in the network; computer usable program code for applying, using the processor, a Bayesian probability model to the scoring system results to provide probabilities of attack paths into the network, wherein the Bayesian probability model includes conditional dependency probability tables reflecting dependencies between risks associated with different nodes in the network; computer usable program code for combining, using the processor, qualitative input with both the scoring system results and the probabilities of attack paths, wherein by combining an output is formed; and computer usable program code for applying, using the processor, a weighted-average algorithm to the output to yield at least one ranking of nodes in order of likelihood of targeting by an external attacker. 26. The computer of claim 25, wherein the weighted-average algorithm is configured to prioritize nodes by magnitude of risk and imminence of risk. 27. The computer of claim 25, wherein the computer usable program code for generating the scoring system results includes a fixed formula for risk assessment. 28. The computer of claim 25, wherein the computer usable program code accounts for betweenness centrality of nodes as a risk factor, wherein betweenness centrality quantifies a number of times a given node in the network acts as a bridge along a shortest path between two other nodes in the network. 29. 
The computer of claim 25, wherein the computer usable program code accounts for assortativity of nodes, wherein assortativity represents a preference for a given node in the network to attach to other nodes in the network. 30. The computer of claim 25, wherein the selected qualitative risk attributes comprise at least one of threat type, ease of execution, business risk, and certification risk. 31. The computer of claim 25, wherein the computer usable program code further includes: computer usable program code for using the weighted average algorithm to determine at least one resistant attack path into the network. 32. The computer of claim 25, wherein the computer usable program code further includes: computer usable program code for using the weighted average algorithm to determine at least one optimal attack path into the network. 33. The computer of claim 25 further comprising: determining an aggregated attack path using the weighted-average algorithm. 34. The computer of claim 25 further comprising: determining an architecture and configuration of the network.
2,400
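The attack-path record above (application 15/043,623) claims a three-stage ranking: quantitative vulnerability scores, Bayesian probabilities of an attacker reaching each node, and qualitative risk input, reduced by a weighted average to a ranking of nodes. A minimal sketch of that final combination step; every number, weight, and name here is invented for illustration, and the real Bayesian step (conditional dependency probability tables) is collapsed into pre-computed reach probabilities:

```python
# Hypothetical sketch of the claimed ranking: quantitative scores,
# attack-path probabilities (standing in for the Bayesian model's
# output), and qualitative input are combined by a weighted average,
# and nodes are ranked by likelihood of targeting (claim 1).

cvss = {"web": 9.8, "db": 7.5, "file": 5.3}        # scoring system results
reach_prob = {"web": 0.9, "db": 0.4, "file": 0.2}  # P(attacker reaches node)
qualitative = {"web": 0.8, "db": 0.9, "file": 0.3} # e.g. business risk in [0,1]

WEIGHTS = {"score": 0.5, "prob": 0.3, "qual": 0.2}  # assumed weighting

def rank_nodes():
    combined = {}
    for node in cvss:
        combined[node] = (WEIGHTS["score"] * cvss[node] / 10.0  # normalize
                          + WEIGHTS["prob"] * reach_prob[node]
                          + WEIGHTS["qual"] * qualitative[node])
    # Ranking of nodes in order of likelihood of targeting.
    return sorted(combined, key=combined.get, reverse=True)

print(rank_nodes())
```

The weighted average is what lets the method trade off magnitude of risk (the score) against imminence of risk (the probability), the prioritization named in claim 3.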
8,162
8,162
14,754,578
2,433
Using a mobile solution as described, external logistics providers can be readily on-boarded into a logistics network and identified as trusted providers at a customer or other transport participant site. For example, an electronic authentication token can be provided to a first mobile device of an external logistics provider operator to authorize the external logistics provider operator for a specific transport assignment. When a request for verification is received from an other transport participant in the transport assignment, a server can verify that the external logistics provider operator is registered and authenticated for the transport assignment, and notify the other transport participant via a confirmation message to a second mobile device used by the transport participant.
1. A computer program product comprising a non-transitory machine-readable medium storing instructions that, when executed by at least one programmable processor, cause the at least one programmable processor to perform operations comprising: providing an electronic authentication token from a server implemented on one or more computing machines to a first mobile device of an external logistics provider operator, the electronic authentication token authorizing the external logistics provider operator for a transport assignment; verifying, when a request for verification is received from an other transport participant in the transport assignment, that the external logistics provider operator is registered and authenticated for the transport assignment, the request for verification comprising receipt at the server from a second mobile device of the other transport participant of the electronic authentication token that has been exchanged from the first mobile device to the second mobile device; and notifying, via a confirmation message to the second mobile device, that the external logistics provider operator is authenticated for the transport assignment. 2. A computer program product as in claim 1, wherein the operations further comprise: receiving a registration request for the external logistics provider operator at the server; and authenticating the external logistics provider operator at the server. 3. A computer program product as in claim 1, wherein the electronic authentication token comprises one or more of a quick response code, a bar code, and a near field communications code. 4. A computer program product as in claim 1, wherein the request for verification is received from the other transport participant in the transport assignment after the second mobile device captures the electronic authentication token displayed on the first mobile device. 5. 
A computer program product as in claim 1, wherein the electronic authentication token is logically linked to the transport assignment and only valid for a duration of the transport assignment. 6. A computer program product as in claim 5, wherein the operations further comprise: providing a second electronic authentication token from the server to the first mobile device of the external logistics provider operator, the second electronic authentication token authorizing the external logistics provider operator for a second transport assignment. 7. A system comprising: computer hardware configured to perform operations comprising: providing an electronic authentication token from a server implemented on one or more computing machines to a first mobile device of an external logistics provider operator, the electronic authentication token authorizing the external logistics provider operator for a transport assignment; verifying, when a request for verification is received from an other transport participant in the transport assignment, that the external logistics provider operator is registered and authenticated for the transport assignment, the request for verification comprising receipt at the server from a second mobile device of the other transport participant of the electronic authentication token that has been exchanged from the first mobile device to the second mobile device; and notifying, via a confirmation message to the second mobile device, that the external logistics provider operator is authenticated for the transport assignment. 8. A system as in claim 7, wherein the operations further comprise: receiving a registration request for the external logistics provider operator at the server; and authenticating the external logistics provider operator at the server. 9. A system as in claim 7, wherein the electronic authentication token comprises one or more of a quick response code, a bar code, and a near field communications code. 10. 
A system as in claim 7, wherein the request for verification is received from the other transport participant in the transport assignment after the second mobile device captures the electronic authentication token displayed on the first mobile device. 11. A system as in claim 7, wherein the electronic authentication token is logically linked to the transport assignment and only valid for a duration of the transport assignment. 12. A system as in claim 11, wherein the operations further comprise: providing a second electronic authentication token from the server to the first mobile device of the external logistics provider operator, the second electronic authentication token authorizing the external logistics provider operator for a second transport assignment. 13. A computer-implemented method comprising: providing an electronic authentication token from a server implemented on one or more computing machines to a first mobile device of an external logistics provider operator, the electronic authentication token authorizing the external logistics provider operator for a transport assignment; verifying, when a request for verification is received from an other transport participant in the transport assignment, that the external logistics provider operator is registered and authenticated for the transport assignment, the request for verification comprising receipt at the server from a second mobile device of the other transport participant of the electronic authentication token that has been exchanged from the first mobile device to the second mobile device; and notifying, via a confirmation message to the second mobile device, that the external logistics provider operator is authenticated for the transport assignment. 14. 
A computer-implemented method as in claim 13, wherein the operations further comprise: receiving a registration request for the external logistics provider operator at the server; and authenticating the external logistics provider operator at the server. 15. A computer-implemented method as in claim 13, wherein the electronic authentication token comprises one or more of a quick response code, a bar code, and a near field communications code. 16. A computer-implemented method as in claim 13, wherein the request for verification is received from the other transport participant in the transport assignment after the second mobile device captures the electronic authentication token displayed on the first mobile device. 17. A computer-implemented method as in claim 13, wherein the electronic authentication token is logically linked to the transport assignment and only valid for a duration of the transport assignment. 18. A computer-implemented method as in claim 17, wherein the operations further comprise: providing a second electronic authentication token from the server to the first mobile device of the external logistics provider operator, the second electronic authentication token authorizing the external logistics provider operator for a second transport assignment.
Using a mobile solution as described, external logistics providers can be readily on-boarded into a logistics network and identified as trusted providers at a customer or other transport participant site. For example, an electronic authentication token can be provided to a first mobile device of an external logistics provider operator to authorize the external logistics provider operator for a specific transport assignment. When a request for verification is received from another transport participant in the transport assignment, a server can verify that the external logistics provider operator is registered and authenticated for the transport assignment, and notify the other transport participant via a confirmation message to a second mobile device used by the transport participant. 1. A computer program product comprising a non-transitory machine-readable medium storing instructions that, when executed by at least one programmable processor, cause the at least one programmable processor to perform operations comprising: providing an electronic authentication token from a server implemented on one or more computing machines to a first mobile device of an external logistics provider operator, the electronic authentication token authorizing the external logistics provider operator for a transport assignment; verifying, when a request for verification is received from an other transport participant in the transport assignment, that the external logistics provider operator is registered and authenticated for the transport assignment, the request for verification comprising receipt at the server from a second mobile device of the other transport participant of the electronic authentication token that has been exchanged from the first mobile device to the second mobile device; and notifying, via a confirmation message to the second mobile device, that the external logistics provider operator is authenticated for the transport assignment. 2. 
A computer program product as in claim 1, wherein the operations further comprise: receiving a registration request for the external logistics provider operator at the server; and authenticating the external logistics provider operator at the server. 3. A computer program product as in claim 1, wherein the electronic authentication token comprises one or more of a quick response code, a bar code, and a near field communications code. 4. A computer program product as in claim 1, wherein the request for verification is received from the other transport participant in the transport assignment after the second mobile device captures the electronic authentication token displayed on the first mobile device. 5. A computer program product as in claim 1, wherein the electronic authentication token is logically linked to the transport assignment and only valid for a duration of the transport assignment. 6. A computer program product as in claim 5, wherein the operations further comprise: providing a second electronic authentication token from the server to the first mobile device of the external logistics provider operator, the second electronic authentication token authorizing the external logistics provider operator for a second transport assignment. 7. 
A system comprising: computer hardware configured to perform operations comprising: providing an electronic authentication token from a server implemented on one or more computing machines to a first mobile device of an external logistics provider operator, the electronic authentication token authorizing the external logistics provider operator for a transport assignment; verifying, when a request for verification is received from an other transport participant in the transport assignment, that the external logistics provider operator is registered and authenticated for the transport assignment, the request for verification comprising receipt at the server from a second mobile device of the other transport participant of the electronic authentication token that has been exchanged from the first mobile device to the second mobile device; and notifying, via a confirmation message to the second mobile device, that the external logistics provider operator is authenticated for the transport assignment. 8. A system as in claim 7, wherein the operations further comprise: receiving a registration request for the external logistics provider operator at the server; and authenticating the external logistics provider operator at the server. 9. A system as in claim 7, wherein the electronic authentication token comprises one or more of a quick response code, a bar code, and a near field communications code. 10. A system as in claim 7, wherein the request for verification is received from the other transport participant in the transport assignment after the second mobile device captures the electronic authentication token displayed on the first mobile device. 11. A system as in claim 7, wherein the electronic authentication token is logically linked to the transport assignment and only valid for a duration of the transport assignment. 12. 
A system as in claim 11, wherein the operations further comprise: providing a second electronic authentication token from the server to the first mobile device of the external logistics provider operator, the second electronic authentication token authorizing the external logistics provider operator for a second transport assignment. 13. A computer-implemented method comprising: providing an electronic authentication token from a server implemented on one or more computing machines to a first mobile device of an external logistics provider operator, the electronic authentication token authorizing the external logistics provider operator for a transport assignment; verifying, when a request for verification is received from an other transport participant in the transport assignment, that the external logistics provider operator is registered and authenticated for the transport assignment, the request for verification comprising receipt at the server from a second mobile device of the other transport participant of the electronic authentication token that has been exchanged from the first mobile device to the second mobile device; and notifying, via a confirmation message to the second mobile device, that the external logistics provider operator is authenticated for the transport assignment. 14. A computer-implemented method as in claim 13, wherein the operations further comprise: receiving a registration request for the external logistics provider operator at the server; and authenticating the external logistics provider operator at the server. 15. A computer-implemented method as in claim 13, wherein the electronic authentication token comprises one or more of a quick response code, a bar code, and a near field communications code. 16. 
A computer-implemented method as in claim 13, wherein the request for verification is received from the other transport participant in the transport assignment after the second mobile device captures the electronic authentication token displayed on the first mobile device. 17. A computer-implemented method as in claim 13, wherein the electronic authentication token is logically linked to the transport assignment and only valid for a duration of the transport assignment. 18. A computer-implemented method as in claim 17, wherein the operations further comprise: providing a second electronic authentication token from the server to the first mobile device of the external logistics provider operator, the second electronic authentication token authorizing the external logistics provider operator for a second transport assignment.
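The verification flow recited in these claims — a server issues a per-assignment token to the operator's device, a second participant's device captures and forwards it, and the server answers with a confirmation — can be sketched as below. This is only an illustrative sketch: the class, method, and identifier names are assumptions, not part of the claimed system.

```python
import secrets

class TokenServer:
    """Hypothetical sketch of the claimed server: it issues a per-assignment
    electronic authentication token to a registered operator's device and,
    when another transport participant forwards a captured token, answers
    with a confirmation or a rejection."""

    def __init__(self):
        self._tokens = {}        # token -> (operator_id, assignment_id)
        self._registered = set() # authenticated operator ids

    def register_operator(self, operator_id):
        # Registration request plus authentication at the server
        # (claims 2, 8, 14).
        self._registered.add(operator_id)

    def issue_token(self, operator_id, assignment_id):
        # The token is logically linked to exactly one transport
        # assignment (claims 5, 11, 17).
        if operator_id not in self._registered:
            raise PermissionError("operator not registered")
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (operator_id, assignment_id)
        return token

    def verify(self, token, assignment_id):
        # Invoked when the second mobile device forwards the token it
        # captured from the operator's device.
        entry = self._tokens.get(token)
        if entry is None or entry[1] != assignment_id:
            return "rejected"
        return "authenticated"   # the confirmation message

    def complete_assignment(self, assignment_id):
        # Tokens are only valid for the duration of the assignment.
        self._tokens = {t: v for t, v in self._tokens.items()
                        if v[1] != assignment_id}

server = TokenServer()
server.register_operator("driver-42")
tok = server.issue_token("driver-42", "assignment-A")
print(server.verify(tok, "assignment-A"))   # authenticated
print(server.verify(tok, "assignment-B"))   # rejected
```

A second assignment would simply be handled by issuing a fresh token linked to that assignment, as in the dependent claims.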
2,400
8,163
8,163
14,724,633
2,487
A head-mounted display device is disclosed that includes an at least partially see-through display, a processor, and a non-volatile storage device holding instructions executable by the processor to: select an image that corresponds to a physical object viewable by a user; display the image at a perceived offset to the physical object; in response to alignment user input, move a perceived position of the image relative to the physical object; output an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determine the inter-pupillary distance of the user; and calibrate the head-mounted display device based on the inter-pupillary distance.
1. A head-mounted display device for determining an inter-pupillary distance of a user, comprising: an at least partially see-through display; a processor; and a non-volatile storage device holding instructions executable by the processor to: select an image that corresponds to a physical object viewable by the user; display the image at a perceived offset to the physical object; in response to alignment user input, move a perceived position of the image relative to the physical object; output an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determine the inter-pupillary distance of the user; and calibrate the head mounted display device based on the inter-pupillary distance. 2. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to select the image by capturing the image with a camera of the head-mounted display device. 3. The head-mounted display device of claim 2, wherein the instructions are executable by the processor to capture the image in response to receiving capture user input. 4. The head-mounted display device of claim 2, wherein the instructions are executable by the processor to capture the image by: programmatically detecting the physical object; and in response, programmatically capturing the image of the physical object. 5. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to select the image by retrieving the image from a storage. 6. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to: display the image to a first eye of the user; and display blocking light to another eye of the user to obscure vision in the other eye. 7. 
The head-mounted display device of claim 1, wherein the instructions are executable by the processor to provide one or more of audio guidance and visual guidance that guides the user to the physical object. 8. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to display a plurality of alignment icons, and wherein the alignment user input comprises user selection of one or more of the alignment icons via one or more of gesture input, gaze input and voice input. 9. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to: receive initial alignment user input that corresponds to moving the perceived position of the image in a first direction; in response, move the perceived position of the image by an initial distance in the first direction; subsequently receive subsequent alignment user input that corresponds to moving the perceived position of the image in a second direction opposite to the first direction; and in response, move the perceived position of the image in the second direction by a subsequent distance that is less than the initial distance. 10. A method for determining an inter-pupillary distance of a user of a head-mounted display device, comprising: selecting an image that corresponds to a physical object viewable by the user; displaying the image at a perceived offset to the physical object; in response to receiving alignment user input, moving a perceived position of the image relative to the physical object; outputting an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determining the inter-pupillary distance of the user; and calibrating the head mounted display device based on the inter-pupillary distance. 11. The method of claim 10, wherein selecting the image comprises capturing the image with a camera of the head-mounted display device. 12. 
The method of claim 11, further comprising: receiving capture user input from the user; and in response, capturing the image with the camera. 13. The method of claim 11, further comprising: programmatically detecting the physical object; and in response, programmatically capturing the image of the physical object with the camera. 14. The method of claim 10, wherein selecting the image comprises retrieving the image from a storage. 15. The method of claim 10, further comprising: displaying the image to a first eye of the user; and displaying blocking light to another eye of the user to obscure vision in the other eye. 16. The method of claim 10, further comprising providing one or more of audio guidance and visual guidance that guides the user to the physical object. 17. The method of claim 10, further comprising displaying a plurality of alignment icons, and wherein the alignment user input comprises user selection of one or more of the alignment icons via one or more of gesture input, gaze input and voice input. 18. The method of claim 10, further comprising: receiving initial alignment user input that corresponds to moving the perceived position of the image in a first direction; in response, moving the perceived position of the image by an initial distance in the first direction; subsequently receiving subsequent alignment user input that corresponds to moving the perceived position of the image in a second direction opposite to the first direction; and in response, moving the perceived position of the image in the second direction by a subsequent distance that is less than the initial distance. 19. 
A head mounted display device for determining an inter-pupillary distance of a user, comprising: an at least partially see-through display; a processor; and a non-volatile storage device holding instructions executable by the processor to: programmatically detect a physical object; in response, programmatically capture an image of the physical object with a camera of the head-mounted display device; display the image at a perceived offset to the physical object; in response to alignment user input, move a perceived position of the image relative to the physical object; output an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determine the inter-pupillary distance of the user; and calibrate the head mounted display device based on the inter-pupillary distance. 20. The head-mounted display device of claim 19, wherein the instructions are executable by the processor to: display the image to a first eye of the user; and display blocking light to another eye of the user to obscure vision in the other eye.
A head-mounted display device is disclosed that includes an at least partially see-through display, a processor, and a non-volatile storage device holding instructions executable by the processor to: select an image that corresponds to a physical object viewable by a user; display the image at a perceived offset to the physical object; in response to alignment user input, move a perceived position of the image relative to the physical object; output an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determine the inter-pupillary distance of the user; and calibrate the head-mounted display device based on the inter-pupillary distance. 1. A head-mounted display device for determining an inter-pupillary distance of a user, comprising: an at least partially see-through display; a processor; and a non-volatile storage device holding instructions executable by the processor to: select an image that corresponds to a physical object viewable by the user; display the image at a perceived offset to the physical object; in response to alignment user input, move a perceived position of the image relative to the physical object; output an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determine the inter-pupillary distance of the user; and calibrate the head-mounted display device based on the inter-pupillary distance. 2. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to select the image by capturing the image with a camera of the head-mounted display device. 3. The head-mounted display device of claim 2, wherein the instructions are executable by the processor to capture the image in response to receiving capture user input. 4. 
The head-mounted display device of claim 2, wherein the instructions are executable by the processor to capture the image by: programmatically detecting the physical object; and in response, programmatically capturing the image of the physical object. 5. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to select the image by retrieving the image from a storage. 6. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to: display the image to a first eye of the user; and display blocking light to another eye of the user to obscure vision in the other eye. 7. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to provide one or more of audio guidance and visual guidance that guides the user to the physical object. 8. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to display a plurality of alignment icons, and wherein the alignment user input comprises user selection of one or more of the alignment icons via one or more of gesture input, gaze input and voice input. 9. The head-mounted display device of claim 1, wherein the instructions are executable by the processor to: receive initial alignment user input that corresponds to moving the perceived position of the image in a first direction; in response, move the perceived position of the image by an initial distance in the first direction; subsequently receive subsequent alignment user input that corresponds to moving the perceived position of the image in a second direction opposite to the first direction; and in response, move the perceived position of the image in the second direction by a subsequent distance that is less than the initial distance. 10. 
A method for determining an inter-pupillary distance of a user of a head-mounted display device, comprising: selecting an image that corresponds to a physical object viewable by the user; displaying the image at a perceived offset to the physical object; in response to receiving alignment user input, moving a perceived position of the image relative to the physical object; outputting an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determining the inter-pupillary distance of the user; and calibrating the head mounted display device based on the inter-pupillary distance. 11. The method of claim 10, wherein selecting the image comprises capturing the image with a camera of the head-mounted display device. 12. The method of claim 11, further comprising: receiving capture user input from the user; and in response, capturing the image with the camera. 13. The method of claim 11, further comprising: programmatically detecting the physical object; and in response, programmatically capturing the image of the physical object with the camera. 14. The method of claim 10, wherein selecting the image comprises retrieving the image from a storage. 15. The method of claim 10, further comprising: displaying the image to a first eye of the user; and displaying blocking light to another eye of the user to obscure vision in the other eye. 16. The method of claim 10, further comprising providing one or more of audio guidance and visual guidance that guides the user to the physical object. 17. The method of claim 10, further comprising displaying a plurality of alignment icons, and wherein the alignment user input comprises user selection of one or more of the alignment icons via one or more of gesture input, gaze input and voice input. 18. 
The method of claim 10, further comprising: receiving initial alignment user input that corresponds to moving the perceived position of the image in a first direction; in response, moving the perceived position of the image by an initial distance in the first direction; subsequently receiving subsequent alignment user input that corresponds to moving the perceived position of the image in a second direction opposite to the first direction; and in response, moving the perceived position of the image in the second direction by a subsequent distance that is less than the initial distance. 19. A head-mounted display device for determining an inter-pupillary distance of a user, comprising: an at least partially see-through display; a processor; and a non-volatile storage device holding instructions executable by the processor to: programmatically detect a physical object; in response, programmatically capture an image of the physical object with a camera of the head-mounted display device; display the image at a perceived offset to the physical object; in response to alignment user input, move a perceived position of the image relative to the physical object; output an instruction to provide completion user input when the image appears to align with the physical object; when the completion user input is received, determine the inter-pupillary distance of the user; and calibrate the head-mounted display device based on the inter-pupillary distance. 20. The head-mounted display device of claim 19, wherein the instructions are executable by the processor to: display the image to a first eye of the user; and display blocking light to another eye of the user to obscure vision in the other eye.
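Claims 9 and 18 describe an alignment loop in which each reversal of direction moves the image by a smaller distance than before, so the perceived position converges on the physical object. A minimal sketch of that damped search follows; the function name, step sizes, and tolerance are illustrative assumptions, and the computed `direction` merely stands in for the user's alignment input.

```python
def align_offset(true_offset, initial_step=8.0, tolerance=0.25):
    """Damped alignment search in the spirit of claims 9 and 18: move the
    perceived image position by an initial distance, and after each
    direction reversal move by a smaller distance, until the image
    appears to align with the physical object."""
    position = 0.0
    step = initial_step
    last_direction = 0
    reversals = 0
    while abs(true_offset - position) > tolerance:
        # The user indicates which way the image still needs to move.
        direction = 1 if true_offset > position else -1
        if direction == -last_direction:
            step /= 2.0       # subsequent distance < previous distance
            reversals += 1
        position += direction * step
        last_direction = direction
    return position, reversals

final_position, n_reversals = align_offset(5.3)
```

Because the step halves on every overshoot, the position converges geometrically; the completion user input in the claims corresponds to the loop's exit condition here.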
2,400
8,164
8,164
15,352,447
2,426
Managing modes of a content playback device for playing back content including: receiving, from a licensing authority, at least one of: a deprecation message used to transition the content playback device from a full mode to a deprecated mode; a patched message used to transition the content playback device from the deprecated mode to the full mode; and a revocation message used to transition the content playback device from the full mode or the deprecated mode to a revoked mode; outputting a first set of features of the content when the content playback device is in the full mode; outputting a second set of features of the content reduced from the first set of features when the content playback device is in the deprecated mode; and disabling all features of the content so that nothing is output when the content playback device is in the revoked mode.
1. A method for managing modes of a content playback device for playing back content, the method comprising: receiving, from a licensing authority, at least one of: a deprecation message used to transition the content playback device from a full mode to a deprecated mode; a patched message used to transition the content playback device from the deprecated mode to the full mode; and a revocation message used to transition the content playback device from the full mode or the deprecated mode to a revoked mode; outputting a first set of features of the content when the content playback device is in the full mode; outputting a second set of features of the content reduced from the first set of features when the content playback device is in the deprecated mode; and disabling all features of the content so that nothing is output when the content playback device is in the revoked mode. 2. The method of claim 1, wherein the deprecation message is received when the licensing authority detects that a protection measure in the content playback device to protect the content from unauthorized copying or use has been circumvented. 3. The method of claim 2, wherein the deprecation message is received when the licensing authority also determines that the protection measure cannot be renewed. 4. The method of claim 3, wherein the deprecation message is received when the licensing authority also determines that the protection measure cannot be renewed according to defined criteria. 5. The method of claim 2, wherein the patched message is received when the licensing authority detects that the circumvented protection measure has been renewed. 6. The method of claim 2, wherein the revocation message is received when the licensing authority determines that the circumvention of the protection measure is permanent. 7. 
The method of claim 2, wherein receiving from a licensing authority further comprises receiving the deprecation message including a first version number, wherein the content playback device transitions from the deprecated mode to the full mode when the first version number is equal to or less than a current version number of the protection measure. 8. The method of claim 2, wherein the deprecation message includes a list of impacted devices with the protection measure circumvented. 9. The method of claim 8, wherein each of the list of impacted devices with the protection measure circumvented includes a specific content playback device with the protection measure circumvented. 10. The method of claim 8, wherein each of the list of impacted devices with the protection measure circumvented includes a specific type of devices with at least one of the devices having the protection measure circumvented. 11. The method of claim 8, wherein each of the list of impacted devices with the protection measure circumvented includes one of a specific content playback device with the protection measure circumvented or a specific type of devices with at least one of the devices having the protection measure circumvented. 12. The method of claim 1, wherein the first set of features of the content output when the content playback device is in the full mode comprises a 4K resolution, a high dynamic range, and a wide gamut color space. 13. The method of claim 1, wherein the second set of features of the content when the content playback device is in the deprecated mode comprises a high definition resolution, a simple dynamic range, and a narrow gamut color space. 14. 
An apparatus for outputting content according to modes, the apparatus comprising: a protection measurement unit configured to receive the content, the content having a first set of features; a downconverter configured to reduce the first set of features of the content to a second set of features of the content; a renewability control unit configured to determine and manage transitions between operating modes of the apparatus, the operating modes including a full mode, a deprecated mode, and a revoked mode; and a video renderer configured to render the first set of features when the apparatus is in the full mode or the second set of features when the apparatus is in the deprecated mode, or to disable all features of the content so that nothing is rendered when the apparatus is in the revoked mode. 15. The apparatus of claim 14, wherein the renewability control unit is further configured to receive system messages from a licensing authority, the system messages including a system patched message, a system deprecation message, and a system revocation message. 16. The apparatus of claim 15, wherein the renewability control unit is configured to transition the apparatus from the full mode to the deprecated mode when the system deprecation message is received. 17. The apparatus of claim 15, wherein the renewability control unit is configured to transition the apparatus from the deprecated mode to the full mode when the system patched message is received. 18. The apparatus of claim 15, wherein the system deprecation message includes a first version number. 19. The apparatus of claim 18, wherein the renewability control unit is configured to transition the apparatus from the deprecated mode to the full mode when the first version number is equal to or less than a current version number of the protection measurement unit. 20. 
The apparatus of claim 15, wherein the renewability control unit is configured to transition the apparatus from the full mode or the deprecated mode to the revoked mode when the system revocation message is received.
Managing modes of a content playback device for playing back content including: receiving, from a licensing authority, at least one of: a deprecation message used to transition the content playback device from a full mode to a deprecated mode; a patched message used to transition the content playback device from the deprecated mode to the full mode; and a revocation message used to transition the content playback device from the full mode or the deprecated mode to a revoked mode; outputting a first set of features of the content when the content playback device is in the full mode; outputting a second set of features of the content reduced from the first set of features when the content playback device is in the deprecated mode; and disabling all features of the content so that nothing is output when the content playback device is in the revoked mode. 1. A method for managing modes of a content playback device for playing back content, the method comprising: receiving, from a licensing authority, at least one of: a deprecation message used to transition the content playback device from a full mode to a deprecated mode; a patched message used to transition the content playback device from the deprecated mode to the full mode; and a revocation message used to transition the content playback device from the full mode or the deprecated mode to a revoked mode; outputting a first set of features of the content when the content playback device is in the full mode; outputting a second set of features of the content reduced from the first set of features when the content playback device is in the deprecated mode; and disabling all features of the content so that nothing is output when the content playback device is in the revoked mode. 2. The method of claim 1, wherein the deprecation message is received when the licensing authority detects that a protection measure in the content playback device to protect the content from unauthorized copying or use has been circumvented. 3. 
The method of claim 2, wherein the deprecation message is received when the licensing authority also determines that the protection measure cannot be renewed. 4. The method of claim 3, wherein the deprecation message is received when the licensing authority also determines that the protection measure cannot be renewed according to defined criteria. 5. The method of claim 2, wherein the patched message is received when the licensing authority detects that the circumvented protection measure has been renewed. 6. The method of claim 2, wherein the revocation message is received when the licensing authority determines that the circumvention of the protection measure is permanent. 7. The method of claim 2, wherein receiving from a licensing authority further comprises receiving the deprecation message including a first version number, wherein the content playback device transitions from the deprecated mode to the full mode when the first version number is equal to or less than a current version number of the protection measure. 8. The method of claim 2, wherein the deprecation message includes a list of impacted devices with the protection measure circumvented. 9. The method of claim 8, wherein each of the list of impacted devices with the protection measure circumvented includes a specific content playback device with the protection measure circumvented. 10. The method of claim 8, wherein each of the list of impacted devices with the protection measure circumvented includes a specific type of devices with at least one of the devices having the protection measure circumvented. 11. The method of claim 8, wherein each of the list of impacted devices with the protection measure circumvented includes one of a specific content playback device with the protection measure circumvented or a specific type of devices with at least one of the devices having the protection measure circumvented. 12. 
The method of claim 1, wherein the first set of features of the content output when the content playback device is in the full mode comprises a 4K resolution, a high dynamic range, and a wide gamut color space. 13. The method of claim 1, wherein the second set of features of the content when the content playback device is in the deprecated mode comprises a high definition resolution, a simple dynamic range, and a narrow gamut color space. 14. An apparatus for outputting content according to modes, the apparatus comprising: a protection measurement unit configured to receive the content, the content having a first set of features; a downconverter configured to reduce the first set of features of the content to a second set of features of the content; a renewability control unit configured to determine and manage transitions between operating modes of the content playback device, the operating modes including a full mode, a deprecated mode, and a revoked mode; and a video renderer configured to render the first set of features when the apparatus is in the full mode or the second set of features when the apparatus is in the deprecated mode, or to disable all features of the content so that nothing is rendered when the apparatus is in the revoked mode. 15. The apparatus of claim 14, wherein the renewability control unit is further configured to receive system messages from a licensing authority, the system messages including a system patched message, a system deprecation message, and a system revocation message. 16. The apparatus of claim 15, wherein the renewability control unit is configured to transition the apparatus from the full mode to the deprecated mode when the system deprecation message is received. 17. The apparatus of claim 15, wherein the renewability control unit is configured to transition the apparatus from the deprecated mode to the full mode when the system patched message is received. 18. 
The apparatus of claim 15, wherein the system deprecation message includes a first version number. 19. The apparatus of claim 18, wherein the renewability control unit is configured to transition the apparatus from the deprecated mode to the full mode when the first version number is equal to or less than a current version number of the protection measurement unit. 20. The apparatus of claim 15, wherein the renewability control unit is configured to transition the apparatus from the full mode or the deprecated mode to the revoked mode when the system revocation message is received.
2,400
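The mode-management claims above describe a three-state device (full, deprecated, revoked) whose transitions are driven by deprecation, patched, and revocation messages from a licensing authority, with a version-number rule that keeps an already-patched device in full mode. A minimal sketch of that state machine follows; the class names, the example feature lists (4K/HDR versus HD/SDR), and the threshold logic are illustrative assumptions, not the patent's implementation:

```python
from enum import Enum

class Mode(Enum):
    FULL = "full"
    DEPRECATED = "deprecated"
    REVOKED = "revoked"

class PlaybackDevice:
    """Hypothetical sketch of the renewability control unit's state machine."""

    def __init__(self, version=1):
        self.mode = Mode.FULL
        self.version = version  # current version of the protection measure

    def on_deprecation(self, msg_version=None):
        # A deprecation message moves a full-mode device to deprecated mode,
        # unless the message's version is equal to or less than the device's
        # current version (the patch is already applied).
        if msg_version is not None and msg_version <= self.version:
            return
        if self.mode is Mode.FULL:
            self.mode = Mode.DEPRECATED

    def on_patched(self):
        # A patched message restores a deprecated device to full mode.
        if self.mode is Mode.DEPRECATED:
            self.mode = Mode.FULL

    def on_revocation(self):
        # A revocation message is terminal for full- or deprecated-mode devices.
        if self.mode in (Mode.FULL, Mode.DEPRECATED):
            self.mode = Mode.REVOKED

    def output_features(self):
        # Full mode: the first (complete) feature set.
        # Deprecated mode: a reduced second set.
        # Revoked mode: nothing is output.
        if self.mode is Mode.FULL:
            return ["4K", "HDR", "wide-gamut"]
        if self.mode is Mode.DEPRECATED:
            return ["HD", "SDR", "narrow-gamut"]
        return []
```

In this sketch, `on_deprecation(msg_version=...)` covers the claim-7 behavior: a message whose version number is at or below the device's current protection-measure version is ignored rather than degrading output.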
8,165
8,165
14,854,734
2,471
Apparatuses, methods, and program products are disclosed for data bandwidth limitation. By use of a processor, a data bandwidth limitation of a network connection of a first electronic device is determined. A request from a second electronic device to access the network connection of the first electronic device is received. Data corresponding to the data bandwidth limitation is provided to the second electronic device in response to receiving the request from the second electronic device to access the network connection.
1. An apparatus comprising: a processor; a memory that stores code executable by the processor, the code comprising: code that determines a data bandwidth limitation of a network connection of a first electronic device; code that receives a request from a second electronic device to access the network connection of the first electronic device; and code that provides data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection. 2. The apparatus of claim 1, wherein the code that determines the data bandwidth limitation of the network connection of the first electronic device comprises code that computes a data quota of the network connection of the first electronic device. 3. The apparatus of claim 2, wherein the code that determines the data bandwidth limitation of the network connection of the first electronic device comprises code that computes a percent of the data quota used during a time period. 4. The apparatus of claim 3, wherein the code that computes the percent of the data quota used during the time period comprises code that computes the percent of the data quota used during a billing cycle. 5. The apparatus of claim 1, further comprising code that determines a metering level for data provided to the second electronic device. 6. The apparatus of claim 5, wherein the metering level comprises a high data output level, a medium data output level, and a low data output level. 7. The apparatus of claim 1, further comprising code that monitors an amount of data used by the second electronic device. 8. 
A method comprising: determining, by use of a processor, a data bandwidth limitation of a network connection of a first electronic device; receiving a request from a second electronic device to access the network connection of the first electronic device; and providing data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection. 9. The method of claim 8, wherein determining the data bandwidth limitation of the network connection of the first electronic device comprises computing a data quota of the network connection of the first electronic device. 10. The method of claim 9, wherein determining the data bandwidth limitation of the network connection of the first electronic device comprises computing a percent of the data quota used during a time period. 11. The method of claim 10, wherein computing the percent of the data quota used during the time period comprises computing the percent of the data quota used during a billing cycle. 12. The method of claim 8, further comprising determining a metering level for data provided to the second electronic device. 13. The method of claim 12, wherein determining the metering level for data provided to the second electronic device comprises comparing a percent of a data quota of the network connection used during a billing cycle with a percent of the number of days that have passed in the billing cycle. 14. The method of claim 8, wherein providing data corresponding to the data bandwidth limitation to the second electronic device comprises providing data in a dynamic host configuration protocol (“DHCP”) packet. 15. The method of claim 8, further comprising monitoring an amount of data used by the second electronic device. 16. The method of claim 15, further comprising limiting the amount of data used by the second electronic device. 17. 
A program product comprising a computer readable storage medium that stores code executable by a processor, the executable code comprising code to perform: determining a data bandwidth limitation of a network connection of a first electronic device; receiving a request from a second electronic device to access the network connection of the first electronic device; and providing data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection. 18. The program product of claim 17, wherein the code further comprises code to perform determining a metering level for data provided to the second electronic device. 19. The program product of claim 17, wherein the code further comprises code to perform monitoring an amount of data used by the second electronic device. 20. The program product of claim 17, wherein the code further comprises code to perform limiting an amount of data used by the second electronic device.
Apparatuses, methods, and program products are disclosed for data bandwidth limitation. By use of a processor, a data bandwidth limitation of a network connection of a first electronic device is determined. A request from a second electronic device to access the network connection of the first electronic device is received. Data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection is provided.1. An apparatus comprising: a processor; a memory that stores code executable by the processor, the code comprising: code that determines a data bandwidth limitation of a network connection of a first electronic device; code that receives a request from a second electronic device to access the network connection of the first electronic device; and code that provides data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection. 2. The apparatus of claim 1, wherein the code that determines the data bandwidth limitation of the network connection of the first electronic device comprises code that computes a data quota of the network connection of the first electronic device. 3. The apparatus of claim 2, wherein the code that determines the data bandwidth limitation of the network connection of the first electronic device comprises code that computes a percent of the data quota used during a time period. 4. The apparatus of claim 3, wherein the code that computes the percent of the data quota used during the time period comprises code that computes the percent of the data quota used during a billing cycle. 5. The apparatus of claim 1, further comprising code that determines a metering level for data provided to the second electronic device. 6. 
The apparatus of claim 5, wherein the metering level comprises a high data output level, a medium data output level, and a low data output level. 7. The apparatus of claim 1, further comprising code that monitors an amount of data used by the second electronic device. 8. A method comprising: determining, by use of a processor, a data bandwidth limitation of a network connection of a first electronic device; receiving a request from a second electronic device to access the network connection of the first electronic device; and providing data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection. 9. The method of claim 8, wherein determining the data bandwidth limitation of the network connection of the first electronic device comprises computing a data quota of the network connection of the first electronic device. 10. The method of claim 9, wherein determining the data bandwidth limitation of the network connection of the first electronic device comprises computing a percent of the data quota used during a time period. 11. The method of claim 10, wherein computing the percent of the data quota used during the time period comprises computing the percent of the data quota used during a billing cycle. 12. The method of claim 8, further comprising determining a metering level for data provided to the second electronic device. 13. The method of claim 12, wherein determining the metering level for data provided to the second electronic device comprises comparing a percent of a data quota of the network connection used during a billing cycle with a percent of the number of days that have passed in the billing cycle. 14. The method of claim 8, wherein providing data corresponding to the data bandwidth limitation to the second electronic device comprises providing data in a dynamic host configuration protocol (“DHCP”) packet. 15. 
The method of claim 8, further comprising monitoring an amount of data used by the second electronic device. 16. The method of claim 15, further comprising limiting the amount of data used by the second electronic device. 17. A program product comprising a computer readable storage medium that stores code executable by a processor, the executable code comprising code to perform: determining a data bandwidth limitation of a network connection of a first electronic device; receiving a request from a second electronic device to access the network connection of the first electronic device; and providing data corresponding to the data bandwidth limitation to the second electronic device in response to receiving the request from the second electronic device to access the network connection. 18. The program product of claim 17, wherein the code further comprises code to perform determining a metering level for data provided to the second electronic device. 19. The program product of claim 17, wherein the code further comprises code to perform monitoring an amount of data used by the second electronic device. 20. The program product of claim 17, wherein the code further comprises code to perform limiting an amount of data used by the second electronic device.
2,400
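Claim 13 of the bandwidth-limitation record above determines a metering level by comparing the percent of the data quota used during a billing cycle with the percent of the billing cycle's days that have passed. A minimal sketch of that comparison follows; the function name and the specific thresholds separating the high, medium, and low output levels are illustrative assumptions, since the claims name the three levels but not the cutoffs:

```python
def metering_level(bytes_used, quota_bytes, days_passed, days_in_cycle):
    """Pick a metering level by comparing quota consumed vs. cycle elapsed."""
    pct_used = 100.0 * bytes_used / quota_bytes
    pct_elapsed = 100.0 * days_passed / days_in_cycle
    # Illustrative thresholds: usage behind schedule -> high output;
    # slightly ahead of schedule -> medium; well ahead -> low.
    if pct_used < pct_elapsed:
        return "high"
    if pct_used < pct_elapsed + 10:
        return "medium"
    return "low"
```

For example, a device that has used 10 GB of a 100 GB quota 15 days into a 30-day cycle (10% used vs. 50% elapsed) would be metered at the high output level under these assumed thresholds.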
8,166
8,166
12,669,685
2,482
A system. The system includes a computing device configured for communication with a plurality of multiple resolution cameras and with a display device. The computing device includes a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at a first resolution at a first period of time and at a second resolution at a second period of time. The first resolution is different than the second resolution.
1. A system, comprising: a computing device configured for communication with a plurality of multiple resolution cameras and with a display device, wherein the computing device comprises: a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at: a first resolution at a first period of time; and a second resolution at a second period of time, wherein the first resolution is different than the second resolution. 2. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first location on the display device. 3. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first image on the display device. 4. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first one of the multiple resolution cameras. 5. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first location on the display device with a first image on the display device. 6. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first location on the display device with a first one of the multiple resolution cameras. 7. 
The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first image on the display device with a first one of the multiple resolution cameras. 8. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the camera resolution module is further configured for instructing the at least one of the multiple resolution cameras based on information received from the eye tracking module. 9. The system of claim 1, wherein the computing device further comprises a display module configured for sending a plurality of images to the display device, wherein the plurality of images comprise: a first image at the first resolution; and a second image at the second resolution. 10. A method, implemented at least in part by a computing device, the method comprising: receiving a first image from a multiple resolution camera at a first resolution; generating a change of resolution instruction; sending the change of resolution instruction to the multiple resolution camera; and receiving a second image from the multiple resolution camera at a second resolution, wherein the second resolution is different than the first resolution. 11. The method of claim 10, wherein generating the change of resolution instruction comprises generating the change of resolution instruction based on a position of a person's eye. 12. The method of claim 10, wherein generating the change of resolution instruction comprises generating the change of resolution instruction after a position of a person's eye has remained fixed for a predetermined period of time. 13. The method of claim 10, further comprising associating a position of a person's eyes with a location on a display device. 14.
The method of claim 10, further comprising associating a position of a person's eyes with an image on a display device. 15. The method of claim 10, further comprising associating a position of a person's eyes with a multiple resolution camera. 16. The method of claim 10, further comprising associating a location on a display device with an image on the display device. 17. The method of claim 10, further comprising associating a location on a display device with a multiple resolution camera. 18. The method of claim 10, further comprising associating an image on a display device with a multiple resolution camera. 19. The method of claim 10, further comprising: generating a second change of resolution instruction; sending the second change of resolution instruction to the multiple resolution camera; and receiving a third image from the multiple resolution camera at the first resolution.
A system. The system includes a computing device configured for communication with a plurality of multiple resolution cameras and with a display device. The computing device includes a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at a first resolution at a first period of time and at a second resolution at a second period of time. The first resolution is different than the second resolution.1. A system, comprising: a computing device configured for communication with a plurality of multiple resolution cameras and with a display device, wherein the computing device comprises: a camera resolution module configured for instructing at least one of the multiple resolution cameras to operate at: a first resolution at a first period of time; and a second resolution at a second period of time, wherein the first resolution is different than the second resolution. 2. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first location on the display device. 3. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first image on the display device. 4. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first position of a person's eye with a first one of the multiple resolution cameras. 5. 
The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first location on the display device with a first image on the display device. 6. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first location on the display device with a first one of the multiple resolution cameras. 7. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the eye tracking module is configured for associating a first image on the display device with a first one of the multiple resolution cameras. 8. The system of claim 1, wherein the computing device further comprises an eye tracking module in communication with the camera resolution module, wherein the camera resolution module is further configured for instructing the at least one of the multiple resolution cameras based on information received from the eye tracking module. 9. The system of claim 1, wherein the computing device further comprises a display module configured for sending a plurality of images to the display device, wherein the plurality of images comprise: a first image at the first resolution; and a second image at the second resolution. 10. A method, implemented at least in part by a computing device, the method comprising: receiving a first image from a multiple resolution camera at a first resolution; generating a change of resolution instruction; sending the change of resolution instruction to the multiple resolution camera; and receiving a second image from the multiple resolution camera at a second resolution, wherein the second resolution is different than the first resolution. 11.
The method of claim 10, wherein generating the change of resolution instruction comprises generating the change of resolution instruction based on a position of a person's eye. 12. The method of claim 10, wherein generating the change of resolution instruction comprises generating the change of resolution instruction after a position of a person's eye has remained fixed for a predetermined period of time. 13. The method of claim 10, further comprising associating a position of a person's eyes with a location on a display device. 14. The method of claim 10, further comprising associating a position of a person's eyes with an image on a display device. 15. The method of claim 10, further comprising associating a position of a person's eyes with a multiple resolution camera. 16. The method of claim 10, further comprising associating a location on a display device with an image on the display device. 17. The method of claim 10, further comprising associating a location on a display device with a multiple resolution camera. 18. The method of claim 10, further comprising associating an image on a display device with a multiple resolution camera. 19. The method of claim 10, further comprising: generating a second change of resolution instruction; sending the second change of resolution instruction to the multiple resolution camera; and receiving a third image from the multiple resolution camera at the first resolution.
2,400
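The multi-resolution camera record above associates a gaze position with a display location and a camera, and issues a change-of-resolution instruction after the gaze has remained fixed for a predetermined period (claims 11-18 of that record). A minimal sketch follows; the display-tiling rule, the dwell threshold, the injected clock, and the "high"/"low" resolution labels are all illustrative assumptions:

```python
import time

class CameraResolutionController:
    """Hypothetical sketch: raise one camera's resolution once the viewer's
    gaze has dwelled on its region of the display long enough."""

    def __init__(self, cameras, dwell_seconds=1.0, clock=time.monotonic):
        self.cameras = cameras          # {camera_id: current resolution}
        self.dwell = dwell_seconds      # predetermined fixation period
        self.clock = clock              # injectable for testing
        self._gazed = None              # camera currently under the gaze
        self._since = None              # when the gaze settled on it

    def camera_for_location(self, x, y, columns=2, tile=0.5):
        # Assume the display is tiled into equal regions, one per camera,
        # with normalized coordinates in [0, 1).
        col = int(x // tile)
        row = int(y // tile)
        return row * columns + col

    def on_gaze(self, x, y):
        # Map the eye position to a display location, then to a camera.
        cam = self.camera_for_location(x, y)
        now = self.clock()
        if cam != self._gazed:
            self._gazed, self._since = cam, now
        elif now - self._since >= self.dwell:
            # Gaze fixed long enough: send the change-of-resolution
            # instruction, and drop the unwatched cameras to low resolution.
            for cid in self.cameras:
                self.cameras[cid] = "high" if cid == self._gazed else "low"
```

Injecting `clock` keeps the dwell logic testable without real sleeps; a real system would presumably feed `on_gaze` from the eye tracking module's event stream.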
8,167
8,167
15,826,650
2,459
A method, apparatus, and system relating to a notification system for merging a new message into a pending notification.
1. (canceled) 2. A system comprising at least one processor and memory including instructions that, when executed by the at least one processor, cause the system to: receive a first communication comprising a first message in one of a plurality of first fields, the one of the plurality of first fields stored in a first portion of a storage area and the first message for display on a first device; determine a first indicator from a portion of the first message; determine a notification associated with the first communication; display the notification and the first message; receive a second communication comprising a second message in one of a plurality of second fields; determine that a second indicator from a portion of the second message matches the first indicator; overwrite the first message with the second message in the first portion of the storage area; determine a new first indicator from a portion of the second message, the new first indicator for use with at least one subsequent communication comprising a subsequent message for display on the first device; and modify the display of the notification and the first message with the second message. 3. The system of claim 2, wherein the instructions that, when executed by the at least one processor, further cause the system to: store remaining fields of the plurality of first fields in second portions of the storage area; store the second communication in third portions of the storage area; maintain the second portions of the storage area as unchanged after the overwrite of the first portion of the storage area. 4. 
The system of claim 3, wherein the instructions that, when executed by the at least one processor, further cause the system to: display, together with the notification and the first message, a first time associated with receipt of the first communication; determine a second time associated with the overwrite of the first communication; and display the second time together with the notification and the second message. 5. The system of claim 2, wherein the instructions that, when executed by the at least one processor, further cause the system to: display, in a second device that is in communication with the first device, user definable settings for altering the first indicator; receive an alteration to the user definable settings; forward the alteration from the second device to the first device; determine at least the second indicator from the second message based at least in part on the alteration to the first indicator. 6. The system of claim 2, wherein the instructions that, when executed by the at least one processor, further cause the system to: determine an originator of the first communication; and determine the notification from an application of the originator on the first device. 7. 
A computer-implemented method comprising: receiving a first communication comprising a first message in one of a plurality of first fields, the one of the plurality of first fields stored in a first portion of a storage area and the first message for display on a first device; determining a first indicator from a portion of the first message; determining a notification associated with the first communication; displaying the notification with the first message on a display of the first device; receiving a second communication comprising a second message in one of a plurality of second fields; determining that a second indicator from a portion of the second message matches the first indicator; overwriting the first message with the second message in the first portion of the storage area; determining a new first indicator from a portion of the second message, the new first indicator for use with at least one subsequent communication comprising a subsequent message for display on the first device; and modifying the display of the first device to replace the notification and the first message with the second message. 8. The computer-implemented method of claim 7, further comprising: storing remaining fields of the plurality of first fields in second portions of the storage area; storing the second communication in third portions of the storage area; maintaining the second portions of the storage area as unchanged after the overwrite of the first portion of the storage area. 9. The computer-implemented method of claim 8, further comprising: displaying, together with the notification and the first message, a first time associated with receipt of the first communication; determining a second time associated with the overwriting of the first communication; and displaying the second time together with the notification and the second message. 10.
The computer-implemented method of claim 7, further comprising: displaying, in a second device that is in communication with the first device, user definable settings for altering the first indicator; receiving an alteration to the user definable settings; forwarding the alteration from the second device to the first device; determining at least the second indicator from the second message based at least in part on the alteration to the first indicator. 11. The computer-implemented method of claim 7, further comprising: determining an originator of the first communication; and determining the notification from an application of the originator on the first device. 12. The computer-implemented method of claim 7, further comprising: determining an originator of the second communication; and determining to perform an evaluation on the second message, the evaluation to assign a portion of the second message as the second indicator. 13. The computer-implemented method of claim 7, further comprising: determining an originator of the first communication; and receiving alterations to settings in the first device, the alterations to assign a portion of the first message as the first indicator. 14. The computer-implemented method of claim 7, further comprising: determining an originator of the first communication; and performing the overwriting of the first message with the second message such that the second message annotates the first message. 15. The computer-implemented method of claim 7, further comprising: displaying user definable settings in the first device; receiving alterations to existing settings in the user definable settings; and enabling the first device to respond to the first communication or the second communication based at least in part on the alterations. 16. The computer-implemented method of claim 15, wherein the user definable settings are specific to an application associated with the first communication and the second communication. 17. 
A computer readable product comprising non-transitory media with instructions that, when executed by at least one processor of a system, cause the system to: receive a first communication comprising a first message in one of a plurality of first fields, the one of the plurality of first fields stored in a first portion of a storage area and the first message for display on a first device; determine a first indicator from a portion of the first message; determine a notification associated with the first communication; display the notification and the first message; receive a second communication comprising a second message in one of a plurality of second fields; determine that a second indicator from a portion of the second message matches the first indicator; overwrite the first message with the second message in the first portion of the storage area; determine a new first indicator from a portion of the second message, the new first indicator for use with at least one subsequent communication comprising a subsequent message for display on the first device; and modify the display of the notification and the first message with the second message. 18. The computer readable product of claim 17, wherein the instructions that, when executed by the at least one processor of the system, further cause the system to: store remaining fields of the plurality of first fields in second portions of the storage area; store the second communication in third portions of the storage area; maintain the second portions of the storage area as unchanged after the overwrite of the first portion of the storage area. 19. 
The computer readable product of claim 18, wherein the instructions that, when executed by the at least one processor of the system, further cause the system to: display, together with the notification and the first message, a first time associated with receipt of the first communication; determine a second time associated with the overwrite of the first communication; and display the second time together with the notification and the second message. 20. The computer readable product of claim 17, wherein the instructions that, when executed by the at least one processor of the system, further cause the system to: display, in a second device that is in communication with the first device, user definable settings for altering the first indicator; receive an alteration to the user definable settings; forward the alteration from the second device to the first device; determine at least the second indicator from the second message based at least in part on the alteration to the first indicator. 21. The computer readable product of claim 17, wherein the instructions that, when executed by the at least one processor of the system, further cause the system to: determine an originator of the first communication; and determine the notification from an application of the originator on the first device.
A method, apparatus, and system relating to a notification system for merging a new message into a pending notification.1. (canceled) 2. A system comprising at least one processor and memory including instructions that, when executed by the at least one processor, cause the system to: receive a first communication comprising a first message in one of a plurality of first fields, the one of the plurality of first fields stored in a first portion of a storage area and the first message for display on a first device; determine a first indicator from a portion of the first message; determine a notification associated with the first communication; display the notification and the first message; receive a second communication comprising a second message in one of a plurality of second fields; determine that a second indicator from a portion of the second message matches the first indicator; overwrite the first message with the second message in the first portion of the storage area; determine a new first indicator from a portion of the second message, the new first indicator for use with at least one subsequent communication comprising a subsequent message for display on the first device; and modify the display of the notification and the first message with the second message. 3. The system of claim 2, wherein the instructions that, when executed by the at least one processor, further cause the system to: store remaining fields of the plurality of first fields in second portions of the storage area; store the second communication in third portions of the storage area; maintain the second portions of the storage area as unchanged after the overwrite of the first portion of the storage area. 4. 
The system of claim 3, wherein the instructions that, when executed by the at least one processor, further cause the system to: display, together with the notification and the first message, a first time associated with receipt of the first communication; determine a second time associated with the overwrite of the first communication; and display the second time together with the notification and the second message. 5. The system of claim 2, wherein the instructions that, when executed by the at least one processor, further cause the system to: display, in a second device that is in communication with the first device, user definable settings for altering the first indicator; receive an alteration to the user definable settings; forward the alteration from the second device to the first device; determine at least the second indicator from the second message based at least in part on the alteration to the first indicator. 6. The system of claim 2, wherein the instructions that, when executed by the at least one processor, further cause the system to: determine an originator of the first communication; and determine the notification from an application of the originator on the first device. 7. 
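The merge behavior these claims describe — a second message whose indicator matches a pending notification overwrites the first message in the same storage location, and the device display is updated in place — can be sketched in Python. This is a minimal illustration, not the patent's implementation; the `NotificationStore` class, the first-token indicator derivation, and the message strings are all assumptions introduced here.

```python
class NotificationStore:
    """Illustrative in-memory sketch: indicator -> stored message."""

    def __init__(self):
        self.storage = {}   # the "first portion of the storage area"
        self.display = []   # what the device currently shows

    @staticmethod
    def derive_indicator(message):
        # The claims take the indicator "from a portion of the message";
        # here, as an assumed convention, we use the first token
        # (e.g. an order id like "ORDER-17").
        return message.split()[0]

    def receive(self, message, notification="New message"):
        indicator = self.derive_indicator(message)
        if indicator in self.storage:
            # Second communication with a matching indicator: overwrite
            # the first message in the same storage slot, derive the new
            # indicator from the second message (same rule), and replace
            # the notification plus old message on the display.
            self.storage[self.derive_indicator(message)] = message
            self.display = [message]
        else:
            # First communication: store it and display the notification
            # together with the message.
            self.storage[indicator] = message
            self.display = [notification, message]
        return self.display
```

A second message about the same order would then replace, rather than stack onto, the pending notification — the "overwrite" step that distinguishes these claims from ordinary notification queuing.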
2,400
8,168
8,168
14,519,016
2,482
A system having: a processor and addressable memory, where the processor is configured to: receive a geographic data defining a selected geographical area; receive an operating mode associated with the selected geographical area, where the received operating mode restricts at least one of: a viewing of a UAV data and a recording of the UAV data by at least one user device; and broadcast the UAV data to the at least one user device based on the selected geographical area and the received operating mode.
1. A system comprising: a processor and addressable memory, wherein the processor is configured to: receive a geographic data defining a selected geographical area; receive an operating mode associated with the selected geographical area, wherein the received operating mode restricts at least one of: a viewing of a UAV data and a recording of the UAV data by at least one user device; and broadcast the UAV data to the at least one user device based on the selected geographical area and the received operating mode. 2. The system of claim 1 wherein the UAV data comprises at least one of: a video stream from a UAV imager and a metadata associated with the video stream from the UAV imager. 3. The system of claim 2 wherein the processor is further configured to: determine a field of view of the UAV imager; and determine if the field of view of the UAV imager is within the selected geographical area. 4. The system of claim 3 wherein the field of view of the UAV imager is based on a center field of view of the UAV. 5. The system of claim 3 wherein the field of view of the UAV imager is based on four corners of the video stream of the UAV imager. 6. The system of claim 3 wherein the received operating mode restricts viewing the UAV data, by the at least one user device, if any portion of the determined field of view of the UAV imager is within the selected geographical area. 7. The system of claim 3 wherein the received operating mode restricts viewing the UAV data, by the at least one user device, if no portion of the determined field of view of the UAV imager is within the selected geographical area. 8. 
The system of claim 3 wherein the received operating mode restricts viewing a portion of the UAV data, by the at least one user device, if the determined field of view of the UAV imager is partially within the selected geographical area and partially not within the selected geographical area, wherein the restricted viewing portion of the UAV data is one of: the field of view inside the selected geographical area and the field of view outside the selected geographical area. 9. The system of claim 3 wherein the received operating mode restricts recording the UAV data, by the at least one user device, if any portion of the determined field of view of the UAV imager is within the selected geographical area. 10. The system of claim 3 wherein the received operating mode restricts recording the UAV data, by the at least one user device, if any portion of the determined field of view of the UAV imager is not within the selected geographical area. 11. The system of claim 3 wherein the received operating mode restricts recording a portion of the UAV data, by the at least one user device, if the determined field of view of the UAV imager is partially within the selected geographical area and partially not within the selected geographical area, wherein the restricted recording portion of the UAV data is one of: the field of view inside the selected geographical area and the field of view outside the selected geographical area. 12. 
A method comprising: selecting, by an operator of a Ground Control System (GCS) having a processor and addressable memory, a geographical area; selecting, by the operator of the GCS, an operating mode to associate with the geographical area; sending, by the operator of the GCS, the selected geographical area and selected operating mode to a processor of a UAV having addressable memory; determining, by the processor of the UAV, a field of view of a UAV imager; and broadcasting, by the processor of the UAV, a UAV data defining the selected geographical area, the selected operating mode, and the field of view of the UAV imager to at least one user device; wherein the selected operating mode restricts at least one of: a viewing of the UAV data and a recording of the UAV data by the at least one user device. 13. The method of claim 12 wherein the UAV data comprises at least one of: a video stream from the UAV imager and a metadata associated with the video stream from the UAV imager. 14. The method of claim 13 wherein the selected operating mode restricts viewing the UAV data, by the at least one user device, if the determined field of view of the UAV imager includes any area inside of the selected geographical area. 15. The method of claim 13 wherein the selected operating mode restricts viewing the UAV data, by the at least one user device, if the determined field of view of the UAV imager includes any area outside of the selected geographical area. 16. The method of claim 13 wherein the selected operating mode restricts viewing a portion of the UAV data, by the at least one user device, if the determined field of view of the UAV imager includes an area inside the selected geographical area and an area outside the selected geographical area, wherein the restricted portion of the UAV data is one of: the area inside the selected geographical area and the area outside the selected geographical area. 17. 
The method of claim 13 wherein the selected operating mode restricts recording the UAV data, by the at least one user device, if the determined field of view of the UAV imager includes any area inside of the selected geographical area. 18. The method of claim 13 wherein the selected operating mode restricts recording the UAV data, by the at least one user device, if the determined field of view of the UAV imager includes any area outside of the selected geographical area. 19. The method of claim 13 wherein the selected operating mode restricts recording a portion of the UAV data, by the at least one user device, if the determined field of view of the UAV imager includes an area inside the selected geographical area and an area outside the geographical area, wherein the restricted portion of the UAV data is one of: the area inside the selected geographical area and the area outside the selected geographical area. 20. The method of claim 13 further comprising: selecting, by the operator of the GCS, a geographical point to direct a center field of view of the UAV imager. 21. The method of claim 13 further comprising: inverting, by the operator of the GCS, any areas where at least one of: viewing of the UAV data and recording of the UAV data by the at least one user device are restricted with any areas where at least one of: viewing of the UAV data and recording of the UAV data by the at least one user device are allowed. 22. 
A system comprising: a processor and addressable memory, wherein the processor is configured to: receive, from at least one database, metadata containing information on one or more geographical areas where at least one UAV imager has at least one of: viewed the one or more geographical areas and recorded the one or more geographical areas; receive, by a user, a geographic data defining a selected geographical area; determine if the selected geographical area was at least one of: viewed and recorded based on the received metadata on one or more geographical areas; receive, by the user, a request to prevent at least one of: viewing the selected geographical area and recording the selected geographical area by the at least one UAV imager; and update the at least one database with the received request.
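The core check in these claims — take the imager's field of view as the four corners of the video frame (dependent claim 5), test it against the selected geographical area, and let the operating mode decide whether viewing or recording is restricted — can be illustrated with a short Python sketch. The axis-aligned bounding-box geometry, the `Mode` names, and the corner-based test are simplifying assumptions; the patent does not prescribe a particular geometric algorithm.

```python
from enum import Enum


class Mode(Enum):
    RESTRICT_IF_INSIDE = 1    # restrict when any FOV corner lies inside the area
    RESTRICT_IF_OUTSIDE = 2   # restrict when any FOV corner lies outside the area


def corner_inside(corner, area):
    # area is an assumed axis-aligned box: (min_lat, min_lon, max_lat, max_lon)
    lat, lon = corner
    min_lat, min_lon, max_lat, max_lon = area
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon


def viewing_restricted(fov_corners, area, mode):
    # fov_corners: the four (lat, lon) corners of the video frame, per claim 5.
    inside = [corner_inside(c, area) for c in fov_corners]
    if mode is Mode.RESTRICT_IF_INSIDE:
        # Claim 6 style: restricted if any portion of the FOV is within the area.
        return any(inside)
    # Claim 7 style: restricted unless the FOV is entirely within the area.
    return not all(inside)
```

The partial-overlap cases (claims 8 and 11) would extend this by masking only the portion of the frame on one side of the area boundary rather than returning a single boolean.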
2,400
8,169
8,169
13,813,468
2,459
Collaborative help for user applications includes: generating a message, the message being reflective of a user's experience in using a user application; sending the message to a collaborative help server to share the message with other users; and receiving a response from the server providing information regarding one or more other users' experience similar to the user's experience reflected in the message.
1. A method for collaborative help for user applications, comprising: generating, using a processor, a message, the message being reflective of a user's experience in using a user application; sending the message to a collaborative help server to share the message with other users; and receiving a response from the server providing information regarding one or more other users' experience similar to the user's experience reflected in the message. 2. The method as claimed in claim 1, including: receiving a user's input of created content; validating the created content and generating any error messages; and wherein generating a message includes the created content and any error messages. 3. The method as claimed in claim 1, wherein a message is formed of one or more of the group of: sequence of events, sequence of inputs, sequence of results, error messages. 4. The method as claimed in claim 1, wherein the information in the response is contextual information providing a solution to the user's experience reflected in the message. 5. The method as claimed in claim 1, wherein the information in the response is other users' contact information to provide direct collaboration between the users. 6. The method as claimed in claim 5, wherein the contact information indicates whether a user is currently online. 7. The method as claimed in claim 1, wherein the response from the server provides a ranked list of other users' experiences. 8. The method as claimed in claim 1, including receiving a response from the server including results of an Internet or intranet search for a query based on the message. 9. The method as claimed in claim 1, wherein sending the message and receiving a response is by a micro-messaging infrastructure. 10. 
A method for collaborative help for user applications, comprising: receiving a message, the message being reflective of a user's experience in using a user application; storing the message with other messages from other users; searching, using a processor, the stored messages for similar messages from other users; and returning a message providing information regarding one or more other users' experience similar to the user's experience reflected in the received message. 11. The method as claimed in claim 10, wherein the information in the response is contextual information providing a solution to the user's experience reflected in the message. 12. The method as claimed in claim 10, wherein storing the message includes storing a reference to the origin of the message and the information in the response is other users' contact information to provide direct collaboration between the users. 13. The method as claimed in claim 12, wherein the contact information indicates whether a user is currently online. 14. The method as claimed in claim 10, including ranking the stored messages of other users' experiences by frequency. 15. The method as claimed in claim 10, including carrying out an Internet or intranet search for a query based on the message. 16. A computer software product for collaborative help for user applications, the product comprising a computer-readable storage medium having computer readable program code embodied therewith, the computer readable program code readable by a processor to perform a method comprising: generating, using the processor, a message, the message being reflective of a user's experience in using a user application; sending, using the processor, the message to a collaborative help server to share the message with other users; and receiving, using the processor, a response from the server providing information regarding one or more other users' experience similar to the user's experience reflected in the message. 17. 
A computer software product for collaborative help for user applications, the product comprising a computer-readable storage medium having computer readable program code embodied therewith, the computer readable program code readable by a processor to perform a method comprising: receiving, using the processor, a message, the message being reflective of a user's experience in using a user application; storing, using the processor, the message with other messages from other users; searching, using the processor, the stored messages for similar messages from other users; and returning, using the processor, a message providing information regarding one or more other users' experience similar to the user's experience reflected in the received message. 18-20. (canceled) 21. A system for collaborative help for user applications, comprising: a processor programmed to initiate executable operations comprising: generating a message, the message being reflective of a user's experience in using a user application; sending the message to a collaborative help server to share the message with other users; and receiving a response from the server providing information regarding one or more other users' experience similar to the user's experience reflected in the message. 22. A system for collaborative help for user applications, comprising: a processor programmed to initiate executable operations comprising: receiving a message, the message being reflective of a user's experience in using a user application; storing the message with other messages from other users; searching the stored messages for similar messages from other users; and returning a message providing information regarding one or more other users' experience similar to the user's experience reflected in the received message. 23. The system of claim 22, wherein the information in the response is contextual information providing a solution to the user's experience reflected in the message.
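The client/server exchange recited in claims 1 and 10 (submit an experience message, search stored messages from other users, return a ranked list of similar experiences) can be sketched as a minimal in-memory server. Every name here, and the similarity threshold, is an illustrative assumption; the claims do not prescribe any particular matching algorithm:

```python
from difflib import SequenceMatcher

class CollaborativeHelpServer:
    """Toy collaborative-help server: stores experience messages and
    returns a ranked list of similar messages from other users."""

    def __init__(self):
        self.messages = []  # (user, message) pairs

    def submit(self, user, message):
        # Claim 10: store the message with other messages from other users.
        self.messages.append((user, message))

    def similar(self, message, threshold=0.6):
        # Claim 7: return a ranked list of other users' experiences,
        # here ranked by a simple textual-similarity ratio.
        scored = []
        for user, stored in self.messages:
            ratio = SequenceMatcher(None, message, stored).ratio()
            if ratio >= threshold:
                scored.append((ratio, user, stored))
        return sorted(scored, reverse=True)

server = CollaborativeHelpServer()
server.submit("alice", "error E42 when saving document as PDF")
server.submit("bob", "crash on startup after update")
hits = server.similar("error E42 when saving a document as PDF")
```

A real implementation would replace the similarity ratio with whatever matching the server actually uses (claims leave this open) and would also carry the origin reference needed for the contact-information response of claim 12.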
Collaborative help for user applications includes: generating a message, the message being reflective of a user's experience in using a user application; sending the message to a collaborative help server to share the message with other users; and receiving a response from the server providing information regarding one or more other users' experience similar to the user's experience reflected in the message.
TechCenter: 2,400
Unnamed: 0: 8,170
level_0: 8,170
ApplicationNumber: 14,001,233
ArtUnit: 2,482
The invention relates to a system for carrying out a treatment of a human or animal body by a surgeon, comprising a hand-held instrument ( 2 ) having an instrument head ( 3 ) that acts on an operating field of the body and supports a treatment tool ( 12 ). The system also comprises a computer ( 4 ) on which a navigation program is provided to assist the guidance of the instrument head ( 3 ), wherein body data representing the part of the body containing the operating field, planning data representing the planned treatment, and instrument data representing the position and orientation of the instrument head ( 3 ) are available to the navigation program. A positioning means for recording the instrument data is present, wherein the navigation program compares the instrument data as actual data with the planning data as desired data, and a signalling means ( 10 ) is provided, which indicates to the surgeon a deviation of the actual data from the desired data. The positioning means comprises an image-recording means that is located at the instrument head ( 3 ) and that records, during the handling of the instrument head and in particular in rapid sequence, single images of a body part that is represented in the body data and is in a defined relationship with the treatment site, wherein the orientation of the image-recording means is in a defined relationship with the instrument head, and the navigation program generates the instrument data by matching the single images with the body data.
1. A system for carrying out a treatment of a human or animal body by a surgeon, comprising a hand-held instrument (2) having an instrument head (3) that acts on an operating field of the body and supports a treatment tool (12), and comprising a computer (4) on which a navigation program is provided to assist the guidance of the instrument head (3), wherein body data representing the part of the body containing the operating field, planning data representing the planned treatment, and instrument data representing the position and orientation of the instrument head (3) are available to the navigation program, wherein a positioning means for recording the instrument data is present, wherein the navigation program compares the instrument data as actual data with the planning data as desired data, wherein a signalling means (10) is provided, which indicates to the surgeon a deviation of the actual data from the desired data, characterised in that the positioning means comprises an image-recording means that is located at the instrument head (3) and that records, during the handling of the instrument head and in particular in rapid sequence, single images of a body part that is represented in the body data and is in a defined relationship with the treatment site, wherein the orientation of the image-recording means is in a defined relationship with the instrument head, wherein the navigation program generates the instrument data by matching the single images with the body data. 2. The system according to claim 1, characterised in that the signalling means (10) is provided on the instrument head (3), wherein the signalling means (10) indicates to the surgeon the deviation of the actual data from the desired data whilst the instrument head (3) is in his field of vision, wherein the instrument is a dental drill (2) in particular. 3. 
The system according to claim 1, characterised in that the positioning means has two image-recording means, which are oriented relative to one another in a manner suitable for generating a stereoscopic recording of the body part. 4. The system according to claim 1, characterised in that the navigation program, for comparison of the single images, filters out from the body data solid structures, such as exposed bone and/or teeth. 5. The system according to claim 1, characterised in that the field of vision (9) of the image-recording means is directed to the operating field and in particular also to the instrument head or the treatment tool (12). 6. The system according to claim 5, characterised in that the current image of the image-recording means is displayed on a screen visible to the surgeon. 7. The system according to claim 1, characterised in that the image-recording means has a device that carries out surface scans over the body part and generates the single images therefrom. 8. The system according to claim 1, characterised in that the image-recording means is a light-conducting fibre (7) ending in the instrument head, at the other end of which a camera (15) is located. 9. The system according to claim 1, characterised in that light can be guided to the operating field by means of a light-conducting fibre (7) ending in the instrument head. 10. The system according to claim 1, characterised in that the image-recording means has a lens optic (14) focussing automatically on the body part.
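Claim 1's central software step, generating the instrument data by matching the recorded single images against the body data, can be sketched as a toy nearest-reference lookup. Everything below (the 1-D "images", the pose tuples, the sum-of-squared-differences metric) is an illustrative assumption, not the patent's actual image-registration method:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length pixel rows."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def estimate_pose(single_image, body_data):
    """Match a captured single image against reference views whose poses
    are known from the body data; return the pose of the best match.
    A toy stand-in for real image-to-model registration."""
    best_pose, best_score = None, float("inf")
    for pose, reference_image in body_data:
        score = ssd(single_image, reference_image)
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose

# Hypothetical 1-D "images" tagged with known instrument poses.
body_data = [((0.0, 0.0), [10, 10, 200, 10]),
             ((5.0, 2.0), [10, 200, 10, 10])]
planned_pose = (0.0, 0.0)  # planning data (desired data)
actual_pose = estimate_pose([12, 11, 198, 9], body_data)  # instrument data (actual)
# The signalling means of claim 1 would act on this deviation.
deviation = max(abs(a - p) for a, p in zip(actual_pose, planned_pose))
```

The comparison of actual against desired data at the end mirrors the navigation program's role; the signalling hardware on the instrument head is outside what software alone can illustrate.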
TechCenter: 2,400
Unnamed: 0: 8,171
level_0: 8,171
ApplicationNumber: 15,386,214
ArtUnit: 2,433
Some embodiments provide a method for distributing firewall configuration in a datacenter comprising multiple host machines. The method retrieves a rule in the firewall configuration for distribution to the host machines. The firewall rule is associated with a minimum required version number. The method identifies a high-level construct in the firewall rule. The method queries a translation cache for the identified high-level construct. The translation cache stores previous translation results for different high-level constructs. Each stored translation result is associated with a version number. When the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, the method uses the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct.
1. A method for distributing firewall configuration in a datacenter comprising a plurality of host machines, the method comprising: retrieving a rule in the firewall configuration for distribution to the plurality of host machines, the firewall rule associated with a minimum required version number; identifying a high-level construct in the firewall rule; querying a translation cache for the identified high-level construct, the translation cache storing previous translation results for different high-level constructs, each stored translation result associated with a version number; and when the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, using the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct. 2. The method of claim 1, wherein the high-level construct is a container that represents a plurality of network addresses. 3. The method of claim 1, wherein the high-level construct is for identifying an enforcement point of the firewall rule, and the set of low-level constructs comprises identifiers for enforcement points for the firewall rule. 4. The method of claim 1, wherein the set of low-level constructs comprises identifiers for virtual network interface controllers (VNICs) for virtual machines (VMs) being hosted by the plurality of host machines. 5. The method of claim 1, wherein the high-level construct is for identifying a source or destination network address of the firewall rule. 6. The method of claim 1, wherein the minimum required version number is based on an instant in time that the firewall configuration was updated. 7. 
The method of claim 1 further comprising: performing translation of the identified high-level construct; and storing the result of the translation in the translation cache with a new version number when the translation cache does not have a previous translation result for the identified high-level construct or when the stored previous translation result is associated with a version number that is older than the minimum required version number. 8. The method of claim 7, wherein performing the translation of the identified high-level construct comprises looking up a dynamically modifiable definition for the identified high-level construct. 9. A method for distributing firewall configuration in a datacenter comprising a plurality of host machines, the method comprising: examining a firewall rule in the firewall configuration that uses a high-level construct to reference a set of low-level constructs; translating the high-level construct to a low-level construct and storing the result of said translation into a translation cache as an entry in the cache; associating the cache entry with a particular version number that is based on a time instant at which the firewall configuration was updated; and upon receiving a query for the high-level construct, providing the result of the translation of the high-level construct stored in the translation cache when the received query is associated with a minimum required version number that is not newer than the particular version number associated with the cache entry. 10. The method of claim 9, wherein the high-level construct is a container that represents a plurality of network addresses. 11. The method of claim 9, wherein the set of low-level constructs comprises identifiers for virtual network interface controllers (VNICs) for virtual machines (VMs) being hosted by the plurality of host machines. 12. 
The method of claim 9, wherein the high-level construct is for identifying a source or destination network address of the firewall rule. 13. A non-transitory machine readable medium storing a program which when executed by at least one processing unit distributes firewall configuration in a datacenter comprising a plurality of host machines, the method comprising: retrieving a rule in the firewall configuration for distribution to the plurality of host machines, the firewall rule associated with a minimum required version number; identifying a high-level construct in the firewall rule; querying a translation cache for the identified high-level construct, the translation cache storing previous translation results for different high-level constructs, each stored translation result associated with a version number; and when the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, using the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct. 14. The non-transitory machine readable medium of claim 13, wherein the high-level construct is a container that represents a plurality of network addresses. 15. The non-transitory machine readable medium of claim 13, wherein the high-level construct is for identifying an enforcement point of the firewall rule, and the set of low-level constructs comprises identifiers for enforcement points for the firewall rule. 16. The non-transitory machine readable medium of claim 13, wherein the set of low-level constructs comprises identifiers for virtual network interface controllers (VNICs) for virtual machines (VMs) being hosted by the plurality of host machines. 17. 
The non-transitory machine readable medium of claim 13, wherein the high-level construct is for identifying a source or destination network address of the firewall rule. 18. The non-transitory machine readable medium of claim 13, wherein the minimum required version number is based on an instant in time that the firewall configuration was updated. 19. The non-transitory machine readable medium of claim 13, wherein the program further comprises sets of instructions for: performing translation of the identified high-level construct; and storing the result of the translation in the translation cache with a new version number when the translation cache does not have a previous translation result for the identified high-level construct or when the stored previous translation result is associated with a version number that is older than the minimum required version number. 20. The non-transitory machine readable medium of claim 19, wherein the set of instructions for performing the translation of the identified high-level construct comprises a set of instructions for looking up a dynamically modifiable definition for the identified high-level construct.
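The version-gated cache behaviour recited in claims 1, 7, and 9 (reuse a cached translation only when its version is at least the rule's minimum required version, otherwise re-translate from the dynamically modifiable definition) can be sketched as follows. The `definitions` mapping and all identifiers are hypothetical stand-ins for the container-to-VNIC translation the claims describe:

```python
class TranslationCache:
    """Sketch of a version-gated translation cache for firewall rules."""

    def __init__(self, definitions):
        self.definitions = definitions  # high-level construct -> low-level constructs
        self.cache = {}                 # construct -> (version, translation)

    def translate(self, construct, min_version, current_version):
        entry = self.cache.get(construct)
        # Claim 1: reuse the stored result only if its version number is
        # equal to or newer than the minimum required version.
        if entry and entry[0] >= min_version:
            return entry[1]
        # Claims 7-8: otherwise re-translate from the dynamically
        # modifiable definition and store the result with a new version.
        result = list(self.definitions[construct])
        self.cache[construct] = (current_version, result)
        return result

defs = {"web-tier": ["vnic-1", "vnic-2"]}  # hypothetical container definition
cache = TranslationCache(defs)
v1 = cache.translate("web-tier", min_version=1, current_version=1)
defs["web-tier"].append("vnic-3")          # configuration update
stale = cache.translate("web-tier", min_version=1, current_version=2)  # cache hit
fresh = cache.translate("web-tier", min_version=2, current_version=2)  # re-translated
```

Note how the second lookup still returns the older translation because version 1 satisfies its minimum, while raising the minimum to 2 forces re-translation and picks up the new VNIC, which is the consistency property the version numbers exist to enforce.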
Some embodiments provide a method for distributing firewall configuration in a datacenter comprising multiple host machines. The method retrieves a rule in the firewall configuration for distribution to the host machines. The firewall rule is associated with a minimum required version number. The method identifies a high-level construct in the firewall rule. The method queries a translation cache for the identified high-level construct. The translation cache stores previous translation results for different high-level constructs. Each stored translation result is associated with a version number. When the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, the method uses the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct.1. A method for distributing firewall configuration in a datacenter comprising a plurality of host machines, the method comprising: retrieving a rule in the firewall configuration for distribution to the plurality of host machines, the firewall rule associated with a minimum required version number; identifying a high-level construct in the firewall rule; querying a translation cache for the identified high-level construct, the translation cache storing previous translation results for different high-level constructs, each stored translation result associated with a version number; and when the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, using the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct. 2. 
The method of claim 1, wherein the high-level construct is a container that represents a plurality of network addresses. 3. The method of claim 1, wherein the high-level construct is for identifying an enforcement point of the firewall rule, and the set of low-level constructs comprises identifiers for enforcement points for the firewall rule. 4. The method of claim 1, wherein the set of low-level constructs comprises identifiers for virtual network interface controllers (VNICs) for virtual machines (VMs) being hosted by the plurality of host machines. 5. The method of claim 1, wherein the high-level construct is for identifying a source or destination network address of the firewall rule. 6. The method of claim 1, wherein the minimum required version number is based on an instant in time that the firewall configuration was updated. 7. The method of claim 1 further comprising: performing translation of the identified high-level construct; and storing the result of the translation in the translation cache with a new version number when the translation cache does not have a previous translation result for the identified high-level construct or when the stored previous translation result is associated with a version number that is older than the minimum required version number. 8. The method of claim 7, wherein performing the translation of the identified high-level construct comprises looking up a dynamically modifiable definition for the identified high-level construct. 9. 
A method for distributing firewall configuration in a datacenter comprising a plurality of host machines, the method comprising: examining a firewall rule in the firewall configuration that uses a high-level construct to reference a set of low-level constructs; translating the high-level construct to a low-level construct and storing the result of said translation into a translation cache as an entry in the cache; associating the cache entry with a particular version number that is based on a time instant at which the firewall configuration was updated; and upon receiving a query for the high-level construct, providing the result of the translation of the high-level construct stored in the translation cache when the received query is associated with a minimum required version number that is not newer than the particular version number associated with the cache entry. 10. The method of claim 9, wherein the high-level construct is a container that represents a plurality of network addresses. 11. The method of claim 9, wherein the set of low-level constructs comprises identifiers for virtual network interface controllers (VNICs) for virtual machines (VMs) being hosted by the plurality of host machines. 12. The method of claim 9, wherein the high-level construct is for identifying a source or destination network address of the firewall rule. 13. 
A non-transitory machine readable medium storing a program which when executed by at least one processing unit distributes firewall configuration in a datacenter comprising a plurality of host machines, the method comprising: retrieving a rule in the firewall configuration for distribution to the plurality of host machines, the firewall rule associated with a minimum required version number; identifying a high-level construct in the firewall rule; querying a translation cache for the identified high-level construct, the translation cache storing previous translation results for different high-level constructs, each stored translation result associated with a version number; and when the translation cache has a stored previous translation result for the identified high-level construct that is associated with a version number that is equal to or newer than the minimum required version number, using the previous translation result stored in the cache to translate the identified high-level construct to a low-level construct. 14. The non-transitory machine readable medium of claim 13, wherein the high-level construct is a container that represents a plurality of network addresses. 15. The non-transitory machine readable medium of claim 13, wherein the high-level construct is for identifying an enforcement point of the firewall rule, and the set of low-level constructs comprises identifiers for enforcement points for the firewall rule. 16. The non-transitory machine readable medium of claim 13, wherein the set of low-level constructs comprises identifiers for virtual network interface controllers (VNICs) for virtual machines (VMs) being hosted by the plurality of host machines. 17. The non-transitory machine readable medium of claim 13, wherein the high-level construct is for identifying a source or destination network address of the firewall rule. 18. 
The non-transitory machine readable medium of claim 13, wherein the minimum required version number is based on an instant in time that the firewall configuration was updated. 19. The non-transitory machine readable medium of claim 13, wherein the program further comprises sets of instructions for: performing translation of the identified high-level construct; and storing the result of the translation in the translation cache with a new version number when the translation cache does not have a previous translation result for the identified high-level construct or when the stored previous translation result is associated with a version number that is older than the minimum required version number. 20. The non-transitory machine readable medium of claim 19, wherein the set of instructions for performing the translation of the identified high-level construct comprises a set of instructions for looking up a dynamically modifiable definition for the identified high-level construct.
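The claims above describe a translation cache keyed by high-level construct, where a cached result is reused only if its version number (derived from the time instant of the last configuration update) is not older than the query's minimum required version. A minimal sketch of that check, with all names and structures being illustrative assumptions rather than the patent's actual implementation:

```python
# Hypothetical sketch of a version-checked translation cache: a cached
# translation is reused only when the cache entry's version number is
# equal to or newer than the query's minimum required version.

class TranslationCache:
    def __init__(self):
        # high-level construct -> (version number, low-level result)
        self._entries = {}

    def store(self, construct, result, version):
        # Version is based on the time instant the firewall
        # configuration was updated.
        self._entries[construct] = (version, result)

    def lookup(self, construct, min_required_version):
        entry = self._entries.get(construct)
        if entry is None:
            return None  # cache miss: caller must translate afresh
        version, result = entry
        if version >= min_required_version:
            return result  # entry is new enough to reuse
        return None  # stale entry: caller re-translates and re-stores


def translate(construct, definitions):
    # Stand-in for looking up a dynamically modifiable definition,
    # e.g. a container name mapped to VNIC identifiers or addresses.
    return definitions[construct]
```

On a miss or a stale hit, the caller would perform the translation and store the result with a new version number, matching the behavior recited in claim 19.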
2,400
8,172
8,172
15,894,567
2,488
Systems and methods are disclosed for making three-dimensional models of the inside of an ear canal using a projected pattern. A system comprises a probe adapted to be inserted into the ear canal. The probe comprises a narrow portion adapted to fit inside the ear canal and a wide portion adapted to be wider than the ear canal, which may be formed by a tapered stop. An illumination subsystem projects a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the three-dimensional surface of the ear canal. An imaging subsystem captures a series of individual images of the pattern of light projected onto the surface of the ear canal. A computer subsystem calculates digital three-dimensional representations from the individual images and stitches them together to generate a digital three-dimensional model of the ear canal.
1. A system for making a three-dimensional model of the inside of an ear canal in order to manufacture an earmold to fit inside the ear canal, the system comprising: an instrument comprising a probe adapted to be inserted into the ear canal, the probe comprising a tapered stop that has a narrower end and a wider end, the narrower end sized to fit inside the ear canal, the wider end sized not to fit inside the ear canal, the wider end adapted to act as a stop that limits the distance of the probe into the ear canal; an illumination subsystem comprising a light source, a pattern screen, and a lens, with at least the lens being located in a distal end of the probe, the illumination subsystem adapted to project light from the light source, through the pattern screen, and through the lens in order to project a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the surface of the ear canal; an imaging subsystem comprising a video camera and a lens, with at least the lens being located in the distal end of the probe, the imaging subsystem adapted to capture in succession, at a video frame rate of the video camera, a plurality of individual images of the pattern of light projected onto the surface of the ear canal, each individual image corresponding to a video frame; and a computer subsystem adapted to calculate an individual digital three-dimensional representation from each individual image in the plurality of individual images, the calculations resulting in a plurality of individual digital three-dimensional representations, the computer subsystem adapted to stitch together the individual digital three-dimensional representations to generate a digital three-dimensional model of the ear canal. 2. A system as recited in claim 1, wherein the probe further comprises a tube connected to the narrower end of the tapered stop. 3. A system as recited in claim 2, wherein the tube is rigid. 4. 
A system as recited in claim 1, wherein the probe further comprises a handle connected to the wider end of the tapered stop. 5. A system as recited in claim 1, wherein the illumination subsystem projects light only in a range of 10 nm to 550 nm. 6. A system as recited in claim 1, wherein the illumination subsystem projects only green or blue light. 7. A system as recited in claim 1, wherein the illumination subsystem projects only ultraviolet light. 8. A system as recited in claim 1, wherein the pattern screen comprises a grating of alternating opaque and transparent stripes. 9. A system as recited in claim 1, wherein the lens of the imaging subsystem is a wide-angle lens that enables the video camera to capture in one image up to a full 180-degree view of the ear canal. 10. A system for making a three-dimensional model of the inside of an ear canal in order to manufacture an earmold to fit inside the ear canal, the system comprising: an instrument comprising a probe adapted to be inserted into the ear canal; an illumination subsystem comprising a light source, a pattern screen, and a lens, with at least the lens being located in a distal end of the probe, the illumination subsystem adapted to project light from the light source, through the pattern screen, and through the lens in order to project a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the surface of the ear canal; an imaging subsystem comprising a video camera and a lens, with at least the lens being located in the distal end of the probe, the imaging subsystem adapted to capture in succession, at a video frame rate of the video camera, a plurality of individual images of the pattern of light projected onto the surface of the ear canal, each individual image corresponding to a video frame; and a computer subsystem adapted to calculate an individual digital three-dimensional representation from each individual image in the plurality of 
individual images, the calculations resulting in a plurality of individual digital three-dimensional representations, the computer subsystem adapted to stitch together the individual digital three-dimensional representations to generate a digital three-dimensional model of the ear canal. 11. A method of making a three-dimensional model of the inside of an ear canal in order to manufacture an earmold to fit inside the ear canal, the method comprising: inserting a probe into the ear canal, the probe carrying at least a distal end of an illumination subsystem and at least a distal end of an imaging subsystem, the illumination subsystem comprising a light source, a pattern screen, and a lens, with at least the lens being located in a distal end of the probe, the imaging subsystem comprising a video camera and a lens, with at least the lens being located in the distal end of the probe; projecting light from the light source, through the pattern screen, and through the lens of the illumination subsystem, and thereby projecting a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the surface of the ear canal; capturing in succession, at a video frame rate of the video camera, a plurality of individual images of the pattern of light projected onto the surface of the ear canal, each individual image corresponding to a video frame; and calculating an individual digital three-dimensional representation from each individual image in the plurality of individual images, the calculations resulting in a plurality of individual digital three-dimensional representations, and stitching together the individual digital three-dimensional representations to generate a digital three-dimensional model of the ear canal. 12. 
A method as recited in claim 11, wherein the probe comprises a narrow portion adapted to fit inside the ear canal and a wide portion adapted to be wider than the ear canal, the wide portion acting as a stop to limit the distance of the probe into the ear canal. 13. A method as recited in claim 12, wherein the wide portion of the probe is part of a tapered stop that is narrower on a side facing the narrow portion of the probe. 14. A method as recited in claim 12, wherein the narrow portion of the probe is rigid. 15. A method as recited in claim 12, wherein the probe further comprises a handle connected to the wide portion of the probe. 16. A method as recited in claim 11, wherein the illumination subsystem projects light only in a range of 10 nm to 550 nm. 17. A method as recited in claim 11, wherein the illumination subsystem projects only green or blue light. 18. A method as recited in claim 11, wherein the illumination subsystem projects only ultraviolet light. 19. A method as recited in claim 11, wherein the pattern screen comprises a grating of alternating opaque and transparent stripes. 20. A method as recited in claim 11, wherein the lens of the imaging subsystem is a wide-angle lens that enables the video camera to capture in one image up to a full 180-degree view of the ear canal.
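The method claims above describe a per-frame pipeline: each video frame of the modulated pattern yields an individual digital three-dimensional representation, and the per-frame representations are stitched into one model. A structural sketch of that pipeline, where `decode` and `register` are placeholder callables (the patent's actual pattern-decoding and stitching algorithms are not specified here):

```python
# Hypothetical pipeline sketch: one 3D representation per video frame,
# then stitching the per-frame representations into a single model.
# decode() and register() are illustrative stand-ins, not the patent's
# actual algorithms.

def reconstruct_frame(image, decode):
    # decode() maps one image of the projected, surface-modulated
    # pattern to a set of 3D points.
    return decode(image)


def stitch(representations, register):
    # register() aligns each per-frame point set against the model
    # accumulated so far and returns the points to merge in.
    model = []
    for points in representations:
        model.extend(register(points, model))
    return model


def build_model(frames, decode, register):
    # Calculate an individual representation from each frame, then
    # stitch them together into one digital 3D model of the ear canal.
    representations = [reconstruct_frame(f, decode) for f in frames]
    return stitch(representations, register)
```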
Systems and methods are disclosed for making three-dimensional models of the inside of an ear canal using a projected pattern. A system comprises a probe adapted to be inserted into the ear canal. The probe comprises a narrow portion adapted to fit inside the ear canal and a wide portion adapted to be wider than the ear canal, which may be formed by a tapered stop. An illumination subsystem projects a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the three-dimensional surface of the ear canal. An imaging subsystem captures a series of individual images of the pattern of light projected onto the surface of the ear canal. A computer subsystem calculates digital three-dimensional representations from the individual images and stitches them together to generate a digital three-dimensional model of the ear canal.1. A system for making a three-dimensional model of the inside of an ear canal in order to manufacture an earmold to fit inside the ear canal, the system comprising: an instrument comprising a probe adapted to be inserted into the ear canal, the probe comprising a tapered stop that has a narrower end and a wider end, the narrower end sized to fit inside the ear canal, the wider end sized not to fit inside the ear canal, the wider end adapted to act as a stop that limits the distance of the probe into the ear canal; an illumination subsystem comprising a light source, a pattern screen, and a lens, with at least the lens being located in a distal end of the probe, the illumination subsystem adapted to project light from the light source, through the pattern screen, and through the lens in order to project a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the surface of the ear canal; an imaging subsystem comprising a video camera and a lens, with at least the lens being located in the distal end of the probe, the imaging subsystem 
adapted to capture in succession, at a video frame rate of the video camera, a plurality of individual images of the pattern of light projected onto the surface of the ear canal, each individual image corresponding to a video frame; and a computer subsystem adapted to calculate an individual digital three-dimensional representation from each individual image in the plurality of individual images, the calculations resulting in a plurality of individual digital three-dimensional representations, the computer subsystem adapted to stitch together the individual digital three-dimensional representations to generate a digital three-dimensional model of the ear canal. 2. A system as recited in claim 1, wherein the probe further comprises a tube connected to the narrower end of the tapered stop. 3. A system as recited in claim 2, wherein the tube is rigid. 4. A system as recited in claim 1, wherein the probe further comprises a handle connected to the wider end of the tapered stop. 5. A system as recited in claim 1, wherein the illumination subsystem projects light only in a range of 10 nm to 550 nm. 6. A system as recited in claim 1, wherein the illumination subsystem projects only green or blue light. 7. A system as recited in claim 1, wherein the illumination subsystem projects only ultraviolet light. 8. A system as recited in claim 1, wherein the pattern screen comprises a grating of alternating opaque and transparent stripes. 9. A system as recited in claim 1, wherein the lens of the imaging subsystem is a wide-angle lens that enables the video camera to capture in one image up to a full 180-degree view of the ear canal. 10. 
A system for making a three-dimensional model of the inside of an ear canal in order to manufacture an earmold to fit inside the ear canal, the system comprising: an instrument comprising a probe adapted to be inserted into the ear canal; an illumination subsystem comprising a light source, a pattern screen, and a lens, with at least the lens being located in a distal end of the probe, the illumination subsystem adapted to project light from the light source, through the pattern screen, and through the lens in order to project a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the surface of the ear canal; an imaging subsystem comprising a video camera and a lens, with at least the lens being located in the distal end of the probe, the imaging subsystem adapted to capture in succession, at a video frame rate of the video camera, a plurality of individual images of the pattern of light projected onto the surface of the ear canal, each individual image corresponding to a video frame; and a computer subsystem adapted to calculate an individual digital three-dimensional representation from each individual image in the plurality of individual images, the calculations resulting in a plurality of individual digital three-dimensional representations, the computer subsystem adapted to stitch together the individual digital three-dimensional representations to generate a digital three-dimensional model of the ear canal. 11. 
A method of making a three-dimensional model of the inside of an ear canal in order to manufacture an earmold to fit inside the ear canal, the method comprising: inserting a probe into the ear canal, the probe carrying at least a distal end of an illumination subsystem and at least a distal end of an imaging subsystem, the illumination subsystem comprising a light source, a pattern screen, and a lens, with at least the lens being located in a distal end of the probe, the imaging subsystem comprising a video camera and a lens, with at least the lens being located in the distal end of the probe; projecting light from the light source, through the pattern screen, and through the lens of the illumination subsystem, and thereby projecting a pattern of light from the distal end of the probe onto a surface of the ear canal, the pattern being modulated by the surface of the ear canal; capturing in succession, at a video frame rate of the video camera, a plurality of individual images of the pattern of light projected onto the surface of the ear canal, each individual image corresponding to a video frame; and calculating an individual digital three-dimensional representation from each individual image in the plurality of individual images, the calculations resulting in a plurality of individual digital three-dimensional representations, and stitching together the individual digital three-dimensional representations to generate a digital three-dimensional model of the ear canal. 12. A method as recited in claim 11, wherein the probe comprises a narrow portion adapted to fit inside the ear canal and a wide portion adapted to be wider than the ear canal, the wide portion acting as a stop to limit the distance of the probe into the ear canal. 13. A method as recited in claim 12, wherein the wide portion of the probe is part of a tapered stop that is narrower on a side facing the narrow portion of the probe. 14. 
A method as recited in claim 12, wherein the narrow portion of the probe is rigid. 15. A method as recited in claim 12, wherein the probe further comprises a handle connected to the wide portion of the probe. 16. A method as recited in claim 11, wherein the illumination subsystem projects light only in a range of 10 nm to 550 nm. 17. A method as recited in claim 11, wherein the illumination subsystem projects only green or blue light. 18. A method as recited in claim 11, wherein the illumination subsystem projects only ultraviolet light. 19. A method as recited in claim 11, wherein the pattern screen comprises a grating of alternating opaque and transparent stripes. 20. A method as recited in claim 11, wherein the lens of the imaging subsystem is a wide-angle lens that enables the video camera to capture in one image up to a full 180-degree view of the ear canal.
2,400
8,173
8,173
15,232,455
2,462
A broadcast manager identifies a time interval that encompasses paging windows for a plurality of user equipment. The plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows. The broadcast manager generates messages for wireless transmission to the plurality of user equipment in corresponding paging windows during the time interval. The messages include information indicating a transmission time at which the plurality of user equipment are to receive broadcast or multicast data. The transmission time is subsequent to the time interval.
1. A method comprising: identifying, at a broadcast manager, a first time interval that encompasses paging windows for a plurality of user equipment, wherein the plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows; and generating, at the broadcast manager, messages for wireless transmission to the plurality of user equipment in corresponding paging windows during the first time interval, wherein the messages include information indicating a transmission time at which the plurality of user equipment are to receive at least one of broadcast and multicast data, wherein the transmission time is subsequent to the first time interval. 2. The method of claim 1, further comprising, at the broadcast manager, sending the transmission time to at least one of a service capability server, a service capability exposure function, and an application server. 3. The method of claim 1, further comprising, at the broadcast manager, computing the transmission time. 4. The method of claim 1, further comprising: determining, at the broadcast manager, that the plurality of user equipment are members of a logical grouping indicated by a group identifier. 5. The method of claim 1, wherein identifying the first time interval that encompasses the paging windows comprises at least one of: identifying a first time interval that is equal to or greater than a longest discontinuous reception cycle of the plurality of discontinuous reception cycles; or identifying a first time interval based on an amount of time that will elapse before a latest paging window from among the paging windows of the plurality of user equipment. 6. 
The method of claim 1, wherein generating the messages comprises at least one of: generating messages including information indicating a plurality of second time intervals equal to differences between the paging windows and the transmission time; generating messages including information indicating the plurality of second time intervals plus a margin time interval; or generating messages including information indicating a clock time that is equal to the transmission time. 7. The method of claim 1, further comprising: subdividing the plurality of user equipment into subsets; identifying first time intervals that encompass paging windows for the subsets; and generating messages for wireless transmission to the subsets in corresponding paging windows during the first time intervals, wherein the messages include information indicating different transmission times at which the subsets are to wake up to receive information that is broadcast or multicast to the subsets. 8. The method of claim 1, further comprising: negotiating the transmission time with at least one of a service capability server, a service capability exposure function, and an application server. 9. A method comprising: receiving, at a server, a message including information indicating a transmission time to perform at least one of broadcasting or multicasting data to a plurality of user equipment, wherein the transmission time is subsequent to a first time interval that encompasses paging windows for the plurality of user equipment, wherein the plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows; and transmitting, from the server to a broadcast/multicast server, a request to perform at least one of broadcasting or multicasting the data to the plurality of user equipment at the transmission time. 10. 
The method of claim 9, further comprising: negotiating the transmission time with a broadcast manager that transmitted the message. 11. A method comprising: receiving, at a user equipment during a paging window in a discontinuous reception cycle that includes sleep intervals separated by paging windows, a message including information indicating a transmission time at which the user equipment is to receive at least one of broadcast or multicast data, wherein the user equipment is one of a plurality of user equipment that operate according to a plurality of discontinuous reception cycles, and wherein the transmission time is subsequent to a time interval that encompasses paging windows for the plurality of user equipment; waking up the user equipment from a sleep interval prior to the transmission time; and receiving, at the user equipment, at least one of broadcast or multicast data beginning at the transmission time. 12. The method of claim 11, wherein waking up the user equipment comprises waking up the user equipment an additional time in addition to waking up during the paging window of the discontinuous reception cycle of the user equipment. 13. The method of claim 11, wherein receiving the message comprises at least one of: receiving a message including information indicating a second time interval equal to a difference between the paging window and the transmission time; receiving a message including information indicating the second time interval plus a margin time interval; or receiving a message including information indicating a clock time that is equal to the transmission time. 14. 
The method of claim 11, wherein the plurality of user equipment are a logical grouping that receives messages including the information indicating the transmission time, wherein the plurality of user equipment in the logical grouping have different discontinuous reception cycles, and wherein receiving the message comprises at least one of: receiving the message during a first time interval that is equal to or greater than a longest discontinuous reception cycle of the different discontinuous reception cycles; or receiving the message during a first time interval that is equal to an amount of time that will elapse before a latest paging window from among the paging windows of the plurality of user equipment. 15. A non-transitory computer readable medium embodying a set of executable instructions, the set of executable instructions to manipulate at least one processor to: identify a first time interval that encompasses paging windows for a plurality of user equipment, wherein the plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows; and generate messages for wireless transmission to the plurality of user equipment in corresponding paging windows during the first time interval, wherein the messages include information indicating a transmission time at which the plurality of user equipment are to receive at least one of broadcast and multicast data, wherein the transmission time is subsequent to the first time interval. 16. The non-transitory computer readable medium of claim 15, wherein the set of executable instructions further is to manipulate the at least one processor to compute the transmission time. 17. 
The non-transitory computer readable medium of claim 15, wherein the set of executable instructions is to manipulate the at least one processor to identify a first time interval that is equal to or greater than a longest discontinuous reception cycle of the plurality of discontinuous reception cycles or to manipulate the at least one processor to identify a first time interval based on an amount of time that will elapse before a latest paging window from among the paging windows of the plurality of user equipment. 18. User equipment comprising: a transceiver to receive, during a paging window in a discontinuous reception cycle that includes sleep intervals separated by paging windows, a message including information indicating a transmission time at which the user equipment is to receive at least one of broadcast or multicast data, wherein the user equipment is one of a plurality of user equipment that operate according to a plurality of discontinuous reception cycles, and wherein the transmission time is subsequent to a time interval that encompasses paging windows for the plurality of user equipment; a processor; and a non-transitory computer readable medium embodying a set of executable instructions, the set of executable instructions to manipulate the processor to: wake up the user equipment from a sleep interval prior to the transmission time so that the transceiver is able to receive at least one of broadcast or multicast data beginning at the transmission time. 19. The user equipment of claim 18, wherein the set of executable instructions is to manipulate the processor to wake up the user equipment an additional time in addition to waking up during the paging window of the discontinuous reception cycle of the user equipment. 20. 
The user equipment of claim 18, wherein the set of executable instructions is to manipulate the processor to wake up the user equipment to receive a message including information indicating at least one of: a second time interval equal to a difference between the paging window and the transmission time; or the second time interval plus a margin time interval.
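The timing logic recited in these claims can be summarized in two computations: the first time interval must encompass every UE's paging window (either the longest DRX cycle, or the time until the latest paging window), and each UE's message carries a second time interval equal to the gap between its paging window and the transmission time, optionally plus a margin. A minimal sketch, with names and units (seconds) being illustrative assumptions:

```python
# Hypothetical sketch of the interval computations described in the
# claims above; function names and the use of seconds are assumptions.

def first_time_interval(drx_cycles, next_paging_offsets=None):
    # Option A from the claims: an interval equal to or greater than
    # the longest discontinuous reception cycle.
    interval = max(drx_cycles)
    if next_paging_offsets is not None:
        # Option B: the amount of time that will elapse before the
        # latest paging window among the plurality of user equipment.
        interval = max(interval, max(next_paging_offsets))
    return interval


def second_time_interval(paging_window_time, transmission_time, margin=0.0):
    # The difference between a UE's paging window and the transmission
    # time, plus an optional margin before the UE must wake up.
    return (transmission_time - paging_window_time) + margin
```

Each UE then wakes from its sleep interval at its paging window plus the signaled second interval, just in time to receive the broadcast or multicast data.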
A broadcast manager identifies a time interval that encompasses paging windows for a plurality of user equipment. The plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows. The broadcast manager generates messages for wireless transmission to the plurality of user equipment in corresponding paging windows during the time interval. The messages include information indicating a transmission time at which the plurality of user equipment are to receive broadcast or multicast data. The transmission time is subsequent to the time interval.1. A method comprising: identifying, at a broadcast manager, a first time interval that encompasses paging windows for a plurality of user equipment, wherein the plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows; and generating, at the broadcast manager, messages for wireless transmission to the plurality of user equipment in corresponding paging windows during the first time interval, wherein the messages include information indicating a transmission time at which the plurality of user equipment are to receive at least one of broadcast and multicast data, wherein the transmission time is subsequent to the first time interval. 2. The method of claim 1, further comprising, at the broadcast manager, sending the transmission time to at least one of a service capability server, a service capability exposure function, and an application server. 3. The method of claim 1, further comprising, at the broadcast manager, computing the transmission time. 4. The method of claim 1, further comprising: determining, at the broadcast manager, that the plurality of user equipment are members of a logical grouping indicated by a group identifier. 5. 
The method of claim 1, wherein identifying the first time interval that encompasses the paging windows comprises at least one of: identifying a first time interval that is equal to or greater than a longest discontinuous reception cycle of the plurality of discontinuous reception cycles; or identifying a first time interval based on an amount of time that will elapse before a latest paging window from among the paging windows of the plurality of user equipment. 6. The method of claim 1, wherein generating the messages comprises at least one of: generating messages including information indicating a plurality of second time intervals equal to differences between the paging windows and the transmission time; generating messages including information indicating the plurality of second time intervals plus a margin time interval; or generating messages including information indicating a clock time that is equal to the transmission time. 7. The method of claim 1, further comprising: subdividing the plurality of user equipment into subsets; identifying first time intervals that encompass paging windows for the subsets; and generating messages for wireless transmission to the subsets in corresponding paging windows during the first time intervals, wherein the messages include information indicating different transmission times at which the subsets are to wake up to receive information that is broadcast or multicast to the subsets. 8. The method of claim 1, further comprising: negotiating the transmission time with at least one of a service capability server, a service capability exposure function, and an application server. 9. 
A method comprising: receiving, at a server, a message including information indicating a transmission time to perform at least one of broadcasting or multicasting data to a plurality of user equipment, wherein the transmission time is subsequent to a first time interval that encompasses paging windows for the plurality of user equipment, wherein the plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows; and transmitting, from the server to a broadcast/multicast server, a request to perform at least one of broadcasting or multicasting the data to the plurality of user equipment at the transmission time. 10. The method of claim 9, further comprising: negotiating the transmission time with a broadcast manager that transmitted the message. 11. A method comprising: receiving, at a user equipment during a paging window in a discontinuous reception cycle that includes sleep intervals separated by paging windows, a message including information indicating a transmission time at which the user equipment is to receive at least one of broadcast or multicast data, wherein the user equipment is one of a plurality of user equipment that operate according to a plurality of discontinuous reception cycles, and wherein the transmission time is subsequent to a time interval that encompasses paging windows for the plurality of user equipment; waking up the user equipment from a sleep interval prior to the transmission time; and receiving, at the user equipment, at least one of broadcast or multicast data beginning at the transmission time. 12. The method of claim 11, wherein waking up the user equipment comprises waking up the user equipment an additional time in addition to waking up during the paging window of the discontinuous reception cycle of the user equipment. 13. 
The method of claim 11, wherein receiving the message comprises at least one of: receiving a message including information indicating a second time interval equal to a difference between the paging window and the transmission time; receiving a message including information indicating the second time interval plus a margin time interval; or receiving a message including information indicating a clock time that is equal to the transmission time. 14. The method of claim 11, wherein the plurality of user equipment are a logical grouping that receives messages including the information indicating the transmission time, wherein the plurality of user equipment in the logical grouping have different discontinuous reception cycles, and wherein receiving the message comprises at least one of: receiving the message during a first time interval that is equal to or greater than a longest discontinuous reception cycle of the different discontinuous reception cycles; or receiving the message during a first time interval that is equal to an amount of time that will elapse before a latest paging window from among the paging windows of the plurality of user equipment. 15. A non-transitory computer readable medium embodying a set of executable instructions, the set of executable instructions to manipulate at least one processor to: identify a first time interval that encompasses paging windows for a plurality of user equipment, wherein the plurality of user equipment operate according to a plurality of discontinuous reception cycles that include sleep intervals separated by the paging windows; and generate messages for wireless transmission to the plurality of user equipment in corresponding paging windows during the first time interval, wherein the messages include information indicating a transmission time at which the plurality of user equipment are to receive at least one of broadcast and multicast data, wherein the transmission time is subsequent to the first time interval. 16. 
The non-transitory computer readable medium of claim 15, wherein the set of executable instructions further is to manipulate the at least one processor to compute the transmission time. 17. The non-transitory computer readable medium of claim 15, wherein the set of executable instructions is to manipulate the at least one processor to identify a first time interval that is equal to or greater than a longest discontinuous reception cycle of the plurality of discontinuous reception cycles or to manipulate the at least one processor to identify a first time interval based on an amount of time that will elapse before a latest paging window from among the paging windows of the plurality of user equipment. 18. User equipment comprising: a transceiver to receive, during a paging window in a discontinuous reception cycle that includes sleep intervals separated by paging windows, a message including information indicating a transmission time at which the user equipment is to receive at least one of broadcast or multicast data, wherein the user equipment is one of a plurality of user equipment that operate according to a plurality of discontinuous reception cycles, and wherein the transmission time is subsequent to a time interval that encompasses paging windows for the plurality of user equipment; a processor; and a non-transitory computer readable medium embodying a set of executable instructions, the set of executable instructions to manipulate the processor to: wake up the user equipment from a sleep interval prior to the transmission time so that the transceiver is able to receive at least one of broadcast or multicast data beginning at the transmission time. 19. The user equipment of claim 18, wherein the set of executable instructions is to manipulate the processor to wake up the user equipment an additional time in addition to waking up during the paging window of the discontinuous reception cycle of the user equipment. 20. 
The user equipment of claim 18, wherein the set of executable instructions is to manipulate the processor to wake up the user equipment to receive a message including information indicating at least one of: a second time interval equal to a difference between the paging window and the transmission time; or the second time interval plus a margin time interval.
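The timing logic in claims 5 and 6 above can be sketched as follows. This is a hypothetical illustration, not an implementation of any 3GPP procedure: the function names, units, and example values are all assumptions.

```python
# Hypothetical sketch: choosing when to broadcast/multicast to UEs with
# different DRX (discontinuous reception) cycles, per claims 5 and 6.
# All names and numeric values are illustrative assumptions.

def first_time_interval(drx_cycles_ms):
    # Claim 5, first option: an interval equal to or greater than the
    # longest DRX cycle guarantees every UE has at least one paging
    # window inside it, so every UE can be reached with the message.
    return max(drx_cycles_ms)

def second_intervals(paging_windows_ms, transmission_time_ms, margin_ms=0):
    # Claim 6: each message carries the difference between that UE's
    # paging window and the common transmission time (optionally plus a
    # margin), telling the UE how long to sleep before waking up.
    return [transmission_time_ms - w + margin_ms for w in paging_windows_ms]

drx_cycles = [320, 640, 1280]          # per-UE DRX cycles in ms (assumed)
interval = first_time_interval(drx_cycles)
tx_time = interval + 100               # transmit after all paging windows
print(interval)                        # 1280
print(second_intervals([100, 500, 900], tx_time))  # [1280, 880, 480]
```

Each UE then needs to wake only once more, at the shared transmission time, rather than polling for the broadcast.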
2,400
8,174
8,174
14,962,503
2,477
Systems and methods are described herein relating to preemptive retransmission of a transport block in successive subframes on, e.g., a Listen-Before-Talk (LBT) cell. Embodiments of a method of operation of a radio node of a cellular communications network are disclosed. The radio node serves an LBT cell. In some embodiments, the method of operation of the radio node comprises transmitting a transport block in a first subframe on the LBT cell and retransmitting the transport block in a second subframe (e.g., on the LBT cell), where the second subframe is adjacent, in time, to the first subframe. In embodiments in which the retransmission of the transport block is on the LBT cell (or another LBT cell), the time span of a transmission burst can be extended to a maximum allowed burst duration.
1. A method of operation of a radio node of a cellular communications network, the radio node serving a Listen-Before-Talk, LBT, cell, comprising: transmitting a transport block in a first subframe on the LBT cell; and retransmitting the transport block in a second subframe, the second subframe being adjacent, in time, to the first subframe. 2. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises preemptively retransmitting the transport block in the second subframe without first receiving an indication that retransmission of the transport block transmitted in the first subframe is needed. 3. The method of claim 2 wherein preemptively retransmitting the transport block in the second subframe comprises preemptively retransmitting the transport block in the second subframe according to a Hybrid Automatic Repeat Request, HARQ, procedure. 4. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises transmitting a redundancy version of the transport block in the second subframe that is different than that transmitted in the first subframe. 5. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises retransmitting the transport block in the second subframe using time-frequency resources within the second subframe that are the same as time-frequency resources used for transmission of the transport block within the first subframe. 6. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises retransmitting the transport block in the second subframe using time-frequency resources within the second subframe that are different than time-frequency resources used for transmission of the transport block within the first subframe. 7. 
The method of claim 1 wherein the radio node is a radio access node, transmitting the transport block in the first subframe comprises transmitting a downlink transport block to a wireless device in the first subframe, and retransmitting the transport block in the second subframe comprises retransmitting the downlink transport block to the wireless device in the second subframe. 8. The method of claim 7 further comprising: transmitting a single resource allocation grant for transmission of the downlink transport block in the first subframe and retransmission of the downlink transport block in the second subframe. 9. The method of claim 8 wherein transmitting the single resource allocation grant comprises transmitting the single resource allocation grant on a cell other than the LBT cell. 10. The method of claim 9 wherein the cell is a primary cell with respect to downlink carrier aggregation for the wireless device, and the LBT cell is a secondary cell with respect to downlink carrier aggregation for the wireless device. 11. The method of claim 10 wherein the primary cell operates in a licensed frequency spectrum. 12. The method of claim 1 wherein the radio node is a wireless device, transmitting the transport block in the first subframe comprises transmitting an uplink transport block to a radio access node in the first subframe, and retransmitting the transport block in the second subframe comprises retransmitting the uplink transport block to the radio access node in the second subframe. 13. The method of claim 12 further comprising: receiving a single resource allocation grant for transmission of the uplink transport block in the first subframe and retransmission of the uplink transport block in the second subframe. 14. The method of claim 13 wherein receiving the single resource allocation grant comprises receiving the single resource allocation grant on a cell other than the LBT cell. 15. 
The method of claim 14 wherein the cell is a primary cell with respect to downlink carrier aggregation for the wireless device, and the LBT cell is a secondary cell with respect to downlink carrier aggregation for the wireless device. 16. The method of claim 15 wherein the primary cell operates in a licensed frequency spectrum. 17. The method of claim 7 wherein the single resource allocation grant for transmission of the transport block in the first subframe and retransmission of the transport block in the second subframe comprises an indication of a sequence of redundancy versions of the transport block that the wireless device is to expect in successive subframes comprising the first subframe and the second subframe. 18. The method of claim 7 wherein the single resource allocation grant for transmission of the transport block in the first subframe and retransmission of the transport block in the second subframe is comprised in a Downlink Control Information, DCI, message that is scrambled with a Radio Network Temporary Identifier, RNTI, that indicates that preemptive retransmissions will be used for the single resource allocation grant. 19. The method of claim 18 wherein a number of preemptive transmissions in successive subframes for the single resource allocation grant is predefined. 20. The method of claim 7 wherein the single resource allocation grant for transmission of the transport block in the first subframe and retransmission of the transport block in the second subframe comprises an indication that the single resource allocation grant is valid for multiple successive subframes comprising the first subframe and the second subframe. 21. 
The method of claim 1 wherein: both transmission of the transport block in the first subframe and retransmission of the transport block in the second subframe are scheduled by a single resource allocation grant that is provided on a cell other than the LBT cell; the cell on which the single resource allocation grant is provided and the LBT cell are Time Division Duplexing, TDD, cells in which transmissions in a particular subframe on the LBT cell are normally scheduled by resource allocation grants transmitted in a corresponding downlink subframe on the cell; and the second subframe is a subframe on the LBT cell on which transmissions could normally not be scheduled because a corresponding subframe on the cell is an uplink subframe. 22. The method of claim 1 wherein retransmitting the transport block comprises retransmitting the transport block in one or more additional subframes on the LBT cell, where the one or more additional subframes are adjacent, in time, to one another and the one or more additional subframes comprise the second subframe that is adjacent, in time, to the first subframe. 23. The method of claim 22 wherein the one or more additional subframes further comprise a third subframe that is adjacent, in time, to the second subframe. 24. The method of claim 22 wherein a number of the one or more additional subframes is variable. 25. The method of claim 24 wherein the number of the one or more additional subframes is defined by higher-layer signaling. 26. The method of claim 22 wherein the one or more additional subframes comprises two or more additional subframes, and retransmitting the transport block in the one or more additional subframes comprises transmitting a different redundancy version of the transport block in each of the two or more additional subframes. 27. 
The method of claim 22 wherein retransmitting the transport block in the one or more additional subframes comprises retransmitting the transport block in the one or more additional subframes on the LBT cell such that transmission on the LBT cell by the radio node reaches a maximum allowed occupancy time for the LBT cell. 28. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises preemptively retransmitting the transport block in the second subframe without first receiving an indication that retransmission of the transport block transmitted in the first subframe is needed when a channel on which the radio node is transmitting would have otherwise been released. 29. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises retransmitting the transport block in the second subframe on the LBT cell. 30. The method of claim 1 wherein retransmitting the transport block in the second subframe comprises retransmitting the transport block in the second subframe on a cell other than the LBT cell. 31. The method of claim 1 wherein the LBT cell is a License Assisted Access, LAA, secondary cell. 32. The method of claim 1 wherein the LBT cell is a standalone LBT cell. 33. A radio node of a cellular communications network, the radio node serving a Listen-Before-Talk, LBT, cell, comprising: one or more transmitters; a processor; and memory containing instructions executable by the processor whereby the radio node is operable to: transmit, via the one or more transmitters, a transport block in a first subframe on the LBT cell; and retransmit, via the one or more transmitters, the transport block in a second subframe, the second subframe being adjacent, in time, to the first subframe. 34. The radio node of claim 33 wherein the transport block is retransmitted in the second subframe on the LBT cell. 35. 
The radio node of claim 33 wherein the transport block is retransmitted in the second subframe on a cell other than the LBT cell. 36-40. (canceled)
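The burst-extension idea in the claims above (preemptive retransmission in adjacent subframes with differing redundancy versions, up to the maximum allowed occupancy time) can be sketched as follows. This is an illustrative assumption-laden sketch, not a 3GPP-conformant scheduler; the redundancy-version ordering shown is merely the conventional LTE cycling order, assumed here for concreteness.

```python
# Hypothetical sketch of preemptive HARQ retransmission on an LBT cell:
# after winning the channel, the node keeps transmitting the same
# transport block in strictly adjacent subframes, cycling redundancy
# versions, until the maximum allowed occupancy time is reached
# (cf. claims 2-4, 22, and 27). All values are assumptions.

def schedule_burst(first_subframe, max_occupancy_subframes, rv_sequence=(0, 2, 3, 1)):
    # Returns (subframe, redundancy_version) pairs for one burst: the
    # initial transmission plus its preemptive retransmissions, each in
    # the subframe adjacent to the previous one (claim 1) and each with
    # a different redundancy version (claim 4).
    burst = []
    for i in range(max_occupancy_subframes):
        rv = rv_sequence[i % len(rv_sequence)]
        burst.append((first_subframe + i, rv))
    return burst

print(schedule_burst(first_subframe=4, max_occupancy_subframes=4))
# [(4, 0), (5, 2), (6, 3), (7, 1)]
```

Because no per-subframe feedback is awaited, the node never has to release and re-contend for the channel between the initial transmission and its retransmissions.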
2,400
8,175
8,175
15,216,964
2,482
An image of at least a portion of a room may be received, the image of the room comprising an image of a sensor mounted in the room. At least one optical parameter related to the image of the room may also be received. A distance may be determined between the sensor and a camera that captured the image of the room, wherein the determination of the distance is based at least in part on the optical parameters and on known physical dimensions of the sensor. A sensitivity requirement of the sensor may be determined, based on the distance. The determined sensitivity may be sent to control logic of the sensor.
1. A computer-implemented method comprising: receiving an image of at least a portion of a room, the image of the room comprising an image of a sensor mounted in the room; receiving at least one optical parameter related to the image of the room; determining a distance between the sensor and a camera that captured the image of the room, wherein the determination of the distance is based at least in part on the image of the room, on the optical parameters, and on known physical dimensions of the sensor; determining a sensitivity requirement of the sensor, based on the distance; and sending the determined sensitivity requirement to control logic of the sensor. 2. The method of claim 1, further comprising: configuring the sensor to operate at the determined sensitivity requirement. 3. The method of claim 2, wherein the sensor comprises a motion detector. 4. The method of claim 1, further comprising: using the image, determining a height of the sensor above a floor of the room, wherein the determination of the height is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 5. The method of claim 4, further comprising: determining if the height of the sensor is within an appropriate range of heights; and if not, outputting an alert signal. 6. The method of claim 5, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 7. The method of claim 1, further comprising: using the image, determining a separation between the sensor and a source of infrared radiation (IR) in the room, wherein the determination of the separation is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 8. The method of claim 7, further comprising: determining if the separation is greater than a predefined threshold; and if not, outputting an alert signal. 9. 
The method of claim 8, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 10. The method of claim 7, wherein the source of IR comprises one of: a window; a door; a heating, ventilation, or air conditioning (HVAC) vent; or an appliance. 11. A system comprising: a processor; and a memory in communication with the processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to: receive an image of at least a portion of a room, the image of the room comprising an image of a sensor mounted in the room; receive at least one optical parameter related to the image of the room; determine a distance between the sensor and a camera that captured the image of the room, wherein the determination of the distance is based at least in part on the image of the room, on the optical parameters, and on known physical dimensions of the sensor; determine a sensitivity requirement of the sensor, based on the distance; and send the determined sensitivity requirement to control logic of the sensor. 12. The system of claim 11, wherein the instructions further cause the processor to: configure the sensor to operate at the determined sensitivity requirement. 13. The system of claim 12, wherein the sensor comprises a motion detector. 14. The system of claim 11, wherein the instructions further cause the processor to: using the image, determine a height of the sensor above a floor of the room, wherein the determination of the height is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 15. The system of claim 14, wherein the instructions further cause the processor to: determine if the height of the sensor is within an appropriate range of heights; and if not, outputting an alert signal. 16. 
The system of claim 15, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 17. The system of claim 11, wherein the instructions further cause the processor to: using the image, determine a separation between the sensor and a source of infrared radiation (IR) in the room, wherein the determination of the separation is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 18. The system of claim 17, wherein the instructions further cause the processor to: determine if the separation is greater than a predefined threshold; and if not, outputting an alert signal. 19. The system of claim 18, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 20. The system of claim 17, wherein the source of IR comprises one of: a window; a door; a heating, ventilation, or air conditioning (HVAC) vent; or an appliance.
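The distance computation the abstract and claim 1 describe is essentially a similar-triangles (pinhole camera) estimate: with the sensor's known physical width and the camera's focal length as the optical parameter, distance follows from the sensor's apparent size in the image. The sketch below assumes that model plus an invented distance-to-sensitivity mapping; thresholds and values are illustrative, not from the source.

```python
# Hypothetical pinhole-camera sketch of the claimed distance estimate.
# focal_length_px and the sensitivity thresholds are assumed values.

def distance_to_sensor(focal_length_px, known_width_m, width_in_image_px):
    # Similar triangles: distance = f * W / w, where W is the sensor's
    # known physical width and w its width in the image, in pixels.
    return focal_length_px * known_width_m / width_in_image_px

def sensitivity_for_distance(distance_m, thresholds=((3.0, "low"), (6.0, "medium"))):
    # Map camera-to-sensor distance to a sensitivity requirement;
    # farther sensors need higher sensitivity. Thresholds are assumed.
    for limit, level in thresholds:
        if distance_m <= limit:
            return level
    return "high"

d = distance_to_sensor(focal_length_px=1000, known_width_m=0.10, width_in_image_px=25)
print(d)                            # 4.0 metres
print(sensitivity_for_distance(d))  # medium
```

The same similar-triangles relation supports the height and IR-source-separation checks in claims 4 and 7, since any in-image distance can be scaled by the same metres-per-pixel factor at the sensor's depth.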
An image of at least a portion of a room may be received, the image of the room comprising an image of a sensor mounted in the room. At least one optical parameter related to the image of the room may also be received. A distance may be determined between the sensor and a camera that captured the image of the room, wherein the determination of the distance is based at least in part on the optical parameters and on known physical dimensions of the sensor. A sensitivity requirement of the sensor may be determined, based on the distance. The determined sensitivity may be sent to control logic of the sensor.1. A computer-implemented method comprising: receiving an image of at least a portion of a room, the image of the room comprising an image of a sensor mounted in the room; receiving at least one optical parameter related to the image of the room; determining a distance between the sensor and a camera that captured the image of the room, wherein the determination of the distance is based at least in part on the image of the room, on the optical parameters, and on known physical dimensions of the sensor; determining a sensitivity requirement of the sensor, based on the distance; and sending the determined sensitivity requirement to control logic of the sensor. 2. The method of claim 1, further comprising: configuring the sensor to operate at the determined sensitivity requirement. 3. The method of claim 2, wherein the sensor comprises a motion detector. 4. The method of claim 1, further comprising: using the image, determining a height of the sensor above a floor of the room, wherein the determination of the height is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 5. The method of claim 4, further comprising: determining if the height of the sensor is within an appropriate range of heights; and if not, outputting an alert signal. 6. 
The method of claim 5, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 7. The method of claim 1, further comprising: using the image, determining a separation between the sensor and a source of infrared radiation (IR) in the room, wherein the determination of the separation is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 8. The method of claim 7, further comprising: determining if the separation is greater than a predefined threshold; and if not, outputting an alert signal. 9. The method of claim 8, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 10. The method of claim 7, wherein the source of IR comprises one of: a window; a door; a heating, ventilation, or air conditioning (HVAC) vent; or an appliance. 11. A system comprising: a processor; and a memory in communication with the processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to: receive an image of at least a portion of a room, the image of the room comprising an image of a sensor mounted in the room; receive at least one optical parameter related to the image of the room; determine a distance between the sensor and a camera that captured the image of the room, wherein the determination of the distance is based at least in part on the image of the room, on the optical parameters, and on known physical dimensions of the sensor; determine a sensitivity requirement of the sensor, based on the distance; and send the determined sensitivity requirement to control logic of the sensor. 12. The system of claim 11, wherein the instructions further cause the processor to: configure the sensor to operate at the determined sensitivity requirement. 13. The system of claim 12, wherein the sensor comprises a motion detector. 14. 
The system of claim 11, wherein the instructions further cause the processor to: using the image, determine a height of the sensor above a floor of the room, wherein the determination of the height is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 15. The system of claim 14, wherein the instructions further cause the processor to: determine if the height of the sensor is within an appropriate range of heights; and if not, output an alert signal. 16. The system of claim 15, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 17. The system of claim 11, wherein the instructions further cause the processor to: using the image, determine a separation between the sensor and a source of infrared radiation (IR) in the room, wherein the determination of the separation is based at least in part on the optical parameters and on the known physical dimensions of the sensor. 18. The system of claim 17, wherein the instructions further cause the processor to: determine if the separation is greater than a predefined threshold; and if not, output an alert signal. 19. The system of claim 18, wherein the alert signal results in a communication through a user interface, the communication suggesting relocation of the sensor. 20. The system of claim 17, wherein the source of IR comprises one of: a window; a door; a heating, ventilation, or air conditioning (HVAC) vent; or an appliance.
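The distance determination described above is the standard pinhole-camera relation: an object of known physical size, imaged at a known focal length, is at a distance proportional to how small it appears. A minimal sketch follows; the focal length, sensor width, and sensitivity tiers are illustrative assumptions, not values from the patent.

```python
# Sketch of the distance-based sensitivity step: estimate the camera-to-sensor
# distance from the sensor's known width and its apparent width in the image,
# then map the distance to a sensitivity setting. All numbers are hypothetical.

def estimate_distance_m(focal_length_px: float,
                        known_width_m: float,
                        measured_width_px: float) -> float:
    """Pinhole-camera distance to an object of known size: d = f * W / w."""
    if measured_width_px <= 0:
        raise ValueError("sensor must be visible in the image")
    return focal_length_px * known_width_m / measured_width_px

def required_sensitivity(distance_m: float) -> str:
    """Toy mapping from camera distance to a sensitivity setting."""
    if distance_m < 3.0:
        return "low"
    if distance_m < 6.0:
        return "medium"
    return "high"

if __name__ == "__main__":
    # A 0.10 m wide sensor spanning 40 px at a 2000 px focal length
    # is about 5 m away.
    d = estimate_distance_m(2000.0, 0.10, 40.0)
    print(d, required_sensitivity(d))  # 5.0 medium
```

In a real deployment the focal length in pixels would come from the camera's calibration data (the "optical parameter" the claims refer to), and the result would be pushed to the sensor's control logic rather than printed.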
2,400
8,176
8,176
15,088,066
2,465
A network device detects a multicast video stream from an upstream resource being sent to downstream multicast members. If the number of multicast members is below a threshold (e.g., 5 stations), multicast network packets can be converted to unicast network packets. On the other hand, if the number of multicast members is above the threshold, the multicast members are divided into groupings based on capabilities of the multicast members, such as data rate capability. Data rates of transmissions are set according to the group data rate capabilities. As a result, the higher data rate members are able to operate at a higher speed rather than at the lowest common denominator. Further, because several multicast streams are being sent, packets missed from the higher data rate stream can be picked up on the lower data rate stream.
1. A computer-implemented method in a network device of a data communication network, for increasing delivery probability of multicast video transmissions, the method comprising the steps of: detecting, by a processor of the network device, a video stream within a multicast transmission of network packets to multicast members; responsive to a number of multicast members being below a threshold, converting, by the processor, the multicast transmission to a plurality of unicast transmissions, wherein the unicast transmission provides an acknowledgment for received network packets from the multicast members; responsive to the number of multicast members meeting or exceeding the threshold: determining, by the processor, data rate capabilities for each of the multicast members, grouping the multicast members into two or more groups based on corresponding data rate capabilities, and generating a multicast stream for each of the groups at distinct data rates; and sending, by a network interface of the network device, two or more multicast transmissions downstream to the multicast members groups at the distinct data rates. 2. The method of claim 1, further comprising: determining an amount of computing resources needed by the network device to facilitate different numbers of unicast transmissions; and setting the threshold for the number of multicast members based on the determined amount of computing resources needed for the plurality of unicast transmissions. 3. The method of claim 1, further comprising: prior to detecting the video stream, receiving the video stream from a remote server. 4. The method of claim 1, wherein the multicast members comprise end station devices. 5. The method of claim 1, further comprising: tracking the number of multicast members; detecting that the number of multicast members has crossed or will soon cross the threshold; and dynamically changing between a unicast mode and a data rate grouping mode, responsive to the detection. 6. 
The method of claim 1, wherein responsive to the number of multicast members meeting or exceeding the threshold: detecting high priority network packets, including I and P frames from an MPEG format of video multicast; and automatically retransmitting at least some of the I and P frames. 7. A non-transitory computer-readable medium, storing source code that, when executed by a processor, performs a method in a network device of a data communication network, for increasing delivery probability of multicast video transmissions, the method comprising the steps of: detecting, by a processor of the network device, a video stream within a multicast transmission of network packets to multicast members; responsive to a number of multicast members being below a threshold, converting, by the processor, the multicast transmission to a plurality of unicast transmissions, wherein the unicast transmission provides an acknowledgment for received network packets from the multicast members; responsive to the number of multicast members meeting or exceeding the threshold: determining, by the processor, data rate capabilities for each of the multicast members, grouping the multicast members into two or more groups based on corresponding data rate capabilities, and generating a multicast stream for each of the groups at distinct data rates; and sending, by a network interface of the network device, two or more multicast transmissions downstream to the multicast members groups at the distinct data rates. 8. The computer-readable medium of claim 7, the method further comprising: determining an amount of computing resources needed by the network device to facilitate different numbers of unicast transmissions; and setting the threshold for the number of multicast members based on the determined amount of computing resources needed for the plurality of unicast transmissions. 9. 
The computer-readable medium of claim 7, the method further comprising: prior to detecting the video stream, receiving the video stream from a remote server. 10. The computer-readable medium of claim 7, wherein the multicast members comprise end station devices. 11. The computer-readable medium of claim 7, the method further comprising: tracking the number of multicast members; detecting that the number of multicast members has crossed or will soon cross the threshold; and dynamically changing between a unicast mode and a data rate grouping mode, responsive to the detection. 12. The computer-readable medium of claim 7, the method further comprising, wherein responsive to the number of multicast members meeting or exceeding the threshold: detecting high priority network packets, including I and P frames from an MPEG format of video multicast; and automatically retransmitting at least some of the I and P frames. 13. A network device of a data communication network, for increasing delivery probability of multicast video transmissions, the network device comprising: a processor; a video multicast detection module, communicatively coupled to the processor, the video multicast detection module to detect a video stream within a multicast transmission of network packets to multicast members; a unicast conversion module, communicatively coupled to the processor and the video multicast detection module, the unicast conversion module to, responsive to a number of multicast members being below a threshold, convert the multicast transmission to a plurality of unicast transmissions, wherein the unicast transmission provides an acknowledgment for received network packets from the multicast members; a data rate grouping module, communicatively coupled to the processor and the unicast conversion module to, responsive to the number of multicast members meeting or exceeding the threshold: determine data rate capabilities for each of the multicast members, group the multicast 
members into two or more groups based on corresponding data rate capabilities, and generate a multicast stream for each of the groups at distinct data rates; and a network interface, communicatively coupled to the processor and the unicast conversion module and to the data rate grouping module, the network interface to send two or more multicast transmissions downstream to the multicast members groups at the distinct data rates.
A network device detects a multicast video stream from an upstream resource being sent to downstream multicast members. If the number of multicast members is below a threshold (e.g., 5 stations), multicast network packets can be converted to unicast network packets. On the other hand, if the number of multicast members is above the threshold, the multicast members are divided into groupings based on capabilities of the multicast members, such as data rate capability. Data rates of transmissions are set according to the group data rate capabilities. As a result, the higher data rate members are able to operate at a higher speed rather than at the lowest common denominator. Further, because several multicast streams are being sent, packets missed from the higher data rate stream can be picked up on the lower data rate stream. 1. A computer-implemented method in a network device of a data communication network, for increasing delivery probability of multicast video transmissions, the method comprising the steps of: detecting, by a processor of the network device, a video stream within a multicast transmission of network packets to multicast members; responsive to a number of multicast members being below a threshold, converting, by the processor, the multicast transmission to a plurality of unicast transmissions, wherein the unicast transmission provides an acknowledgment for received network packets from the multicast members; responsive to the number of multicast members meeting or exceeding the threshold: determining, by the processor, data rate capabilities for each of the multicast members, grouping the multicast members into two or more groups based on corresponding data rate capabilities, and generating a multicast stream for each of the groups at distinct data rates; and sending, by a network interface of the network device, two or more multicast transmissions downstream to the multicast members groups at the distinct data rates. 2. 
The method of claim 1, further comprising: determining an amount of computing resources needed by the network device to facilitate different numbers of unicast transmissions; and setting the threshold for the number of multicast members based on the determined amount of computing resources needed for the plurality of unicast transmissions. 3. The method of claim 1, further comprising: prior to detecting the video stream, receiving the video stream from a remote server. 4. The method of claim 1, wherein the multicast members comprise end station devices. 5. The method of claim 1, further comprising: tracking the number of multicast members; detecting that the number of multicast members has crossed or will soon cross the threshold; and dynamically changing between a unicast mode and a data rate grouping mode, responsive to the detection. 6. The method of claim 1, wherein responsive to the number of multicast members meeting or exceeding the threshold: detecting high priority network packets, including I and P frames from an MPEG format of video multicast; and automatically retransmitting at least some of the I and P frames. 7. 
A non-transitory computer-readable medium, storing source code that, when executed by a processor, performs a method in a network device of a data communication network, for increasing delivery probability of multicast video transmissions, the method comprising the steps of: detecting, by a processor of the network device, a video stream within a multicast transmission of network packets to multicast members; responsive to a number of multicast members being below a threshold, converting, by the processor, the multicast transmission to a plurality of unicast transmissions, wherein the unicast transmission provides an acknowledgment for received network packets from the multicast members; responsive to the number of multicast members meeting or exceeding the threshold: determining, by the processor, data rate capabilities for each of the multicast members, grouping the multicast members into two or more groups based on corresponding data rate capabilities, and generating a multicast stream for each of the groups at distinct data rates; and sending, by a network interface of the network device, two or more multicast transmissions downstream to the multicast members groups at the distinct data rates. 8. The computer-readable medium of claim 7, the method further comprising: determining an amount of computing resources needed by the network device to facilitate different numbers of unicast transmissions; and setting the threshold for the number of multicast members based on the determined amount of computing resources needed for the plurality of unicast transmissions. 9. The computer-readable medium of claim 7, the method further comprising: prior to detecting the video stream, receiving the video stream from a remote server. 10. The computer-readable medium of claim 7, wherein the multicast members comprise end station devices. 11. 
The computer-readable medium of claim 7, the method further comprising: tracking the number of multicast members; detecting that the number of multicast members has crossed or will soon cross the threshold; and dynamically changing between a unicast mode and a data rate grouping mode, responsive to the detection. 12. The computer-readable medium of claim 7, the method further comprising, wherein responsive to the number of multicast members meeting or exceeding the threshold: detecting high priority network packets, including I and P frames from an MPEG format of video multicast; and automatically retransmitting at least some of the I and P frames. 13. A network device of a data communication network, for increasing delivery probability of multicast video transmissions, the network device comprising: a processor; a video multicast detection module, communicatively coupled to the processor, the video multicast detection module to detect a video stream within a multicast transmission of network packets to multicast members; a unicast conversion module, communicatively coupled to the processor and the video multicast detection module, the unicast conversion module to, responsive to a number of multicast members being below a threshold, convert the multicast transmission to a plurality of unicast transmissions, wherein the unicast transmission provides an acknowledgment for received network packets from the multicast members; a data rate grouping module, communicatively coupled to the processor and the unicast conversion module to, responsive to the number of multicast members meeting or exceeding the threshold: determine data rate capabilities for each of the multicast members, group the multicast members into two or more groups based on corresponding data rate capabilities, and generate a multicast stream for each of the groups at distinct data rates; and a network interface, communicatively coupled to the processor and the unicast conversion module and 
to the data rate grouping module, the network interface to send two or more multicast transmissions downstream to the multicast members groups at the distinct data rates.
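The delivery decision described in this record's abstract and claims can be sketched as follows: below a member-count threshold, convert the multicast to per-member unicast (gaining per-packet acknowledgments); at or above it, group members by data rate capability and emit one multicast stream per group. The threshold value and the rate tiers below are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the unicast-vs-grouped-multicast decision. Members map
# to their advertised data rate in Mbps; tiers (54/24/6) are hypothetical.

THRESHOLD = 5  # members; below this, unicast is cheaper than multicast

def plan_delivery(member_rates_mbps: dict[str, int]) -> dict:
    if len(member_rates_mbps) < THRESHOLD:
        # Few members: per-member unicast, each with acknowledgments.
        return {"mode": "unicast", "targets": sorted(member_rates_mbps)}
    # Many members: group by capability tier. Each group gets its own
    # multicast stream at that tier's rate, so fast members are not held
    # to the slowest member's rate (the "lowest common denominator").
    groups: dict[int, list[str]] = {}
    for member, rate in member_rates_mbps.items():
        tier = 54 if rate >= 54 else 24 if rate >= 24 else 6
        groups.setdefault(tier, []).append(member)
    return {"mode": "grouped-multicast",
            "streams": {tier: sorted(ms) for tier, ms in groups.items()}}
```

A member listening to the high-rate stream can also monitor a lower-rate group's stream, which is what lets it recover packets it missed on its own stream.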
2,400
8,177
8,177
15,666,387
2,466
A multicast broadcast service controller (MBSC) is disclosed. The MBSC processes multicast broadcast data streams for transmission to access service network gateways or base stations. The MBSC includes an MBSC core processor for establishing time synchronization information used by the access service network gateways or base stations to synchronously transmit data streams. The MBSC core processor selects streams for transmission in a time diversity interval (TDI) and builds multicast broadcast (MBS) region content based on the selected streams and configuration information. The MBS region content includes timing synchronization information, resource information, and MBS region content location information. An MBS region distribution module (MRD) transmits the MBS region content to the access service network gateways or base stations.
1. A method for use by a base station, the method comprising: receiving, by the base station, information for a multicast broadcast region indicating a pattern of sub-intervals within a time divided interval and indicating resource information for use in broadcasting multicast broadcast service data, wherein the base station belongs to the multicast broadcast region which includes a plurality of base stations; receiving, by the base station, a multicast broadcast data stream; and broadcasting, by the base station, the multicast broadcast data stream in at least one of the pattern of sub-intervals using the indicated resource information as an orthogonal frequency division multiplex signal. 2. The method of claim 1, wherein the multicast broadcast data stream is for a service and the resource information is for the service. 3. The method of claim 1, wherein multicast broadcast service data for the multicast broadcast region is not broadcasted in sub-intervals not of the pattern of sub-intervals. 4. The method of claim 1, wherein the sub-intervals are frames. 5. The method of claim 1, wherein the pattern of sub-intervals repeats each time divided interval. 6. The method of claim 1, wherein the received information for the multicast broadcast region constrains scheduling by the base station. 7. 
A base station comprising: an interface configured to receive information for a multicast broadcast region indicating a pattern of sub-intervals within a time divided interval and indicating resource information for use to broadcast multicast broadcast service data, wherein the base station belongs to the multicast broadcast region which includes a plurality of base stations; the interface further configured to receive a multicast broadcast data stream; and a processor configured to control a radio interface to broadcast the multicast broadcast data stream in at least one of the pattern of sub-intervals using the indicated resource information as an orthogonal frequency division multiplex signal. 8. The base station of claim 7, wherein the multicast broadcast data stream is for a service and the resource information is for the service. 9. The base station of claim 7, wherein the processor does not have the interface broadcast multicast broadcast service data for the multicast broadcast region in sub-intervals not of the pattern of sub-intervals. 10. The base station of claim 7, wherein the sub-intervals are frames. 11. The base station of claim 7, wherein the pattern of sub-intervals repeats each time divided interval. 12. The base station of claim 7, wherein the received information for the multicast broadcast region constrains scheduling by the processor.
A multicast broadcast service controller (MBSC) is disclosed. The MBSC processes multicast broadcast data streams for transmission to access service network gateways or base stations. The MBSC includes an MBSC core processor for establishing time synchronization information used by the access service network gateways or base stations to synchronously transmit data streams. The MBSC core processor selects streams for transmission in a time diversity interval (TDI) and builds multicast broadcast (MBS) region content based on the selected streams and configuration information. The MBS region content includes timing synchronization information, resource information, and MBS region content location information. An MBS region distribution module (MRD) transmits the MBS region content to the access service network gateways or base stations. 1. A method for use by a base station, the method comprising: receiving, by the base station, information for a multicast broadcast region indicating a pattern of sub-intervals within a time divided interval and indicating resource information for use in broadcasting multicast broadcast service data, wherein the base station belongs to the multicast broadcast region which includes a plurality of base stations; receiving, by the base station, a multicast broadcast data stream; and broadcasting, by the base station, the multicast broadcast data stream in at least one of the pattern of sub-intervals using the indicated resource information as an orthogonal frequency division multiplex signal. 2. The method of claim 1, wherein the multicast broadcast data stream is for a service and the resource information is for the service. 3. The method of claim 1, wherein multicast broadcast service data for the multicast broadcast region is not broadcasted in sub-intervals not of the pattern of sub-intervals. 4. The method of claim 1, wherein the sub-intervals are frames. 5. 
The method of claim 1, wherein the pattern of sub-intervals repeats each time divided interval. 6. The method of claim 1, wherein the received information for the multicast broadcast region constrains scheduling by the base station. 7. A base station comprising: an interface configured to receive information for a multicast broadcast region indicating a pattern of sub-intervals within a time divided interval and indicating resource information for use to broadcast multicast broadcast service data, wherein the base station belongs to the multicast broadcast region which includes a plurality of base stations; the interface further configured to receive a multicast broadcast data stream; and a processor configured to control a radio interface to broadcast the multicast broadcast data stream in at least one of the pattern of sub-intervals using the indicated resource information as an orthogonal frequency division multiplex signal. 8. The base station of claim 7, wherein the multicast broadcast data stream is for a service and the resource information is for the service. 9. The base station of claim 7, wherein the processor does not have the interface broadcast multicast broadcast service data for the multicast broadcast region in sub-intervals not of the pattern of sub-intervals. 10. The base station of claim 7, wherein the sub-intervals are frames. 11. The base station of claim 7, wherein the pattern of sub-intervals repeats each time divided interval. 12. The base station of claim 7, wherein the received information for the multicast broadcast region constrains scheduling by the processor.
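The scheduling constraint in the claims above, where every base station in a multicast broadcast region transmits MBS data only in an announced, repeating pattern of sub-intervals (frames), can be sketched as a simple frame classifier. The 8-frame interval, the pattern, and the rule that MBS frames carry no unicast traffic are illustrative assumptions here.

```python
# Sketch of per-frame scheduling under a region-wide MBS pattern. The
# pattern of sub-intervals repeats every time divided interval, so a frame
# number is classified by its position within the interval. Values are
# hypothetical, not taken from the patent.

TDI_FRAMES = 8            # frames per time divided interval (assumed)
MBS_PATTERN = {0, 3, 6}   # sub-intervals reserved for MBS data (assumed)

def is_mbs_frame(frame_number: int) -> bool:
    """True if this absolute frame number falls in the MBS pattern;
    the pattern repeats each time divided interval."""
    return frame_number % TDI_FRAMES in MBS_PATTERN

def scheduler(frame_number: int, has_mbs_data: bool) -> str:
    # The region information constrains the base station's scheduler:
    # in this sketch, MBS frames carry only the synchronized broadcast,
    # and all other frames are free for ordinary unicast traffic.
    if is_mbs_frame(frame_number):
        return "broadcast-mbs" if has_mbs_data else "idle"
    return "unicast"
```

Because every base station in the region applies the same pattern and resource information, their MBS transmissions in those frames line up in time, which is what makes the synchronized (single-frequency-network style) OFDM broadcast possible.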
2,400
8,178
8,178
14,441,237
2,411
A method, apparatus, system, and computer readable medium may be used to perform beamforming. The method may include a first communication device sending a first plurality of beamforming training frames to a second communication device using a first beamforming weight vector; the first communication device receiving from the second communication device a second beamforming weight vector; and the first communication device sending a second plurality of beamforming training frames to the second communication device using the second beamforming weight vector. The apparatus, method, system, and computer readable media may use spatial diversity with beam switching, spatial diversity with a single beam, weighted multipath beamforming training, single user spatial multiplexing, and beamforming training for beam division multiple access (BDMA).
1.-20. (canceled) 21. A first communication device, the first communication device comprising: a plurality of antennas; a processor, wherein the processor is configured to partition the plurality of antennas into at least a first group of antennas and a second group of antennas, wherein the first group of antennas is associated with a first beam to a second communication device, and the second group of antennas is associated with a second beam to the second communication device; a transmitter configured to transmit, to the second communication device, a plurality of beamforming training frames using the first group of antennas and the second group of antennas; and a receiver configured to receive from the second communication device a first beamforming weight vector for sending signals on the first group of antennas and to receive a second beamforming weight vector for sending signals on the second group of antennas. 22. The first communication device of claim 21, wherein the first beamforming weight vector is a strongest beam between the first communication device and the second communication device, and wherein the second beamforming weight vector is for a second strongest beam between the first communication device and the second communication device. 23. The first communication device of claim 21, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a station, an access point, or a base station. 24. The first communication device of claim 21, wherein the beamforming training frames are modulated using orthogonal beamforming vectors. 25. 
The first communication device of claim 21, wherein the transmitter is further configured to transmit a second set of beamforming training frames using the received first beamforming weight vector and the second beamforming weight vector; and wherein the receiver is further configured to receive from the second communication device a modified first beamforming weight vector for sending signals on the first group of antennas and to receive a modified second beamforming weight vector for sending signals on the second group of antennas. 26. The first communication device of claim 21, wherein the first communication device comprises one or more radio frequency (RF) chains and wherein a number of the antennas is larger than a number of the one or more RF chains. 27. A first communication device, the first communication device comprising: a plurality of antennas; and a processor, wherein the plurality of antennas is configured to receive one training frame of a set of beamforming training frames, and wherein the processor is configured to determine a first transmit beamforming weight vector corresponding to a first antenna group of a second communication device, and to determine a second transmit beamforming weight vector corresponding to a second antenna group of the second communication device; and a transmitter configured to transmit data using the first transmit beamforming weight vector and to transmit data using the second transmit beamforming weight vector to the second communication device. 28. The first communication device of claim 27, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a station, an access point, or a base station. 29. The first communication device of claim 27, wherein the received beamforming training frames are orthogonal beamforming vectors. 30. The first communication device of claim 27, wherein the transmitted beamforming weight vectors are orthogonal beamforming vectors. 
31. A method for use in a first communication device, the method comprising: partitioning, at the first communication device, a plurality of antennas into at least a first group of antennas and a second group of antennas, wherein the first group of antennas is associated with a first beam to a second communication device, and the second group of antennas is associated with a second beam to the second communication device; transmitting, to the second communication device, a plurality of beamforming training frames using the first group of antennas and the second group of antennas; and receiving, from the second communication device, a first beamforming weight vector for transmitting signals on the first group of antennas and receiving a second beamforming weight vector for transmitting signals on the second group of antennas. 32. The method of claim 31, wherein the first beamforming weight vector is a strongest beam between the first communication device and the second communication device, and wherein the second beamforming weight vector is for a second strongest beam between the first communication device and the second communication device. 33. The method of claim 31, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a STA, an access point (AP), or a base station. 34. The method of claim 31, wherein the beamforming training frames are modulated using orthogonal beamforming vectors. 35. The method of claim 31 further comprising: transmitting a second set of beamforming training frames using the received first beamforming weight vector and the second beamforming weight vector; and receiving from the second communication device a modified first beamforming weight vector for transmitting signals on the first group of antennas and receiving a modified second beamforming weight vector for transmitting signals on the second group of antennas. 36. 
The method of claim 31, wherein the first communication device comprises one or more radio frequency (RF) chains and wherein a number of the antennas is larger than a number of the one or more RF chains. 37. A method for use in a first communication device, the method comprising: receiving one training frame of a set of beamforming training frames on each antenna of a plurality of antennas; determining a first transmit beamforming weight vector corresponding to a first antenna group of a second communication device, and determining a second transmit beamforming weight vector corresponding to a second antenna group of the second communication device; and transmitting data using the first transmit beamforming weight vector and transmitting data using the second transmit beamforming weight vector to the second communication device. 38. The method of claim 37, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a station (STA), an access point (AP), or a base station. 39. The method of claim 37, wherein the received beamforming training frames are orthogonal beamforming vectors. 40. The method of claim 37, wherein the transmitted beamforming weight vectors are orthogonal beamforming vectors.
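The training exchange in the claims above, where the transmitter sends training frames modulated with orthogonal beamforming vectors and the receiver feeds back a preferred weight vector per antenna group, can be illustrated with a toy real-valued model. The two-element codebook, channel values, and power criterion below are assumptions for illustration only.

```python
# Toy sketch of per-group beamforming training: for each antenna group,
# the receiver measures which orthogonal training vector yields the most
# received power and feeds that vector back. Real systems use complex
# channels and larger codebooks; this real-valued 2x2 codebook is assumed.

# Orthogonal training vectors (a 2x2 Hadamard-style codebook).
CODEBOOK = [(1, 1), (1, -1)]

def received_power(weights, channel):
    """|h . w|^2 for real-valued toy channel and weight vectors."""
    s = sum(w * h for w, h in zip(weights, channel))
    return s * s

def best_vector(channel):
    """Receiver side: index of the codebook vector with maximum power."""
    return max(range(len(CODEBOOK)),
               key=lambda i: received_power(CODEBOOK[i], channel))

def train(group_channels):
    """One fed-back weight vector per antenna group, as in the claims
    where each group's beam gets its own beamforming weight vector."""
    return [CODEBOOK[best_vector(h)] for h in group_channels]
```

Orthogonality is what makes this work with a single round of training frames: because the vectors are mutually orthogonal, the receiver can attribute the measured response to each training vector separately rather than disentangling overlapping beams.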
A method, apparatus, system, and computer readable medium may be used to perform beamforming. The method may include a first communication device sending a first plurality of beamforming training frames to a second communication device using a first beamforming weight vector; the first communication device receiving from the second communication device a second beamforming weight vector; and the first communication device sending a second plurality of beamforming training frames to the second communication device using the second beamforming weight vector. The apparatus, method, system, and computer readable media may use spatial diversity with beam switching, spatial diversity with a single beam, weighted multipath beamforming training, single user spatial multiplexing, and beamforming training for beam division multiple access (BDMA). 1.-20. (canceled) 21. A first communication device, the first communication device comprising: a plurality of antennas; a processor, wherein the processor is configured to partition the plurality of antennas into at least a first group of antennas and a second group of antennas, wherein the first group of antennas is associated with a first beam to a second communication device, and the second group of antennas is associated with a second beam to the second communication device; a transmitter configured to transmit, to the second communication device, a plurality of beamforming training frames using the first group of antennas and the second group of antennas; and a receiver configured to receive from the second communication device a first beamforming weight vector for sending signals on the first group of antennas and to receive a second beamforming weight vector for sending signals on the second group of antennas. 22. 
The first communication device of claim 21, wherein the first beamforming weight vector is a strongest beam between the first communication device and the second communication device, and wherein the second beamforming weight vector is for a second strongest beam between the first communication device and the second communication device. 23. The first communication device of claim 21, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a station, an access point, or a base station. 24. The first communication device of claim 21, wherein the beamforming training frames are modulated using orthogonal beamforming vectors. 25. The first communication device of claim 21, wherein the transmitter is further configured to transmit a second set of beamforming training frames using the received first beamforming weight vector and the second beamforming weight vector; and wherein the receiver is further configured to receive from the second communication device a modified first beamforming weight vector for sending signals on the first group of antenna and to receive a modified second beamforming weight vector for sending signals on the second group of antenna. 26. The first communication device of claim 21, wherein the first communication device comprises one or more radio frequency (RF) chains and wherein a number of the antenna is larger than a number of the one or more RF chains. 27. 
A first communication device, the first communication device comprising: a plurality of antennas; and a processor, wherein the plurality of antennas is configured to receive one training frame of a set of beamforming training frames, and wherein the processor is configured to determine a first transmit beamforming weight vector corresponding to a first antenna group of a second communication device, and to determine a second transmit beamforming weight vector corresponding to a second antenna group of the second communication device; and a transmitter configured to transmit data using the first transmit beamforming weight vector and to transmit data using the second transmit beamforming weight vector to the second communication device. 28. The first communication device of claim 27, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a station, an access point, or a base station. 29. The first communication device of claim 27, wherein the received beamforming training frames are orthogonal beamforming vectors. 30. The first communication device of claim 27, wherein the transmitted beamforming weight vectors are orthogonal beamforming vectors. 31. 
A method for use in a first communication device, the method comprising: partitioning, at the first communication device, a plurality of antennas into at least a first group of antennas and a second group of antennas, wherein the first group of antennas is associated with a first beam to a second communication device, and the second group of antennas is associated with a second beam to the second communication device; transmitting, to the second communication device, a plurality of beamforming training frames using the first group of antennas and the second group of antennas; and receiving, from the second communication device, a first beamforming weight vector for transmitting signals on the first group of antennas and receiving a second beamforming weight vector for transmitting signals on the second group of antennas. 32. The method of claim 31, wherein the first beamforming weight vector is a strongest beam between the first communication device and the second communication device, and wherein the second beamforming weight vector is for a second strongest beam between the first communication device and the second communication device. 33. The method of claim 31, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a STA, an access point (AP), or a base station. 34. The method of claim 31, wherein the beamforming training frames are modulated using orthogonal beamforming vectors. 35. The method of claim 31 further comprising: transmitting a second set of beamforming training frames using the received first beamforming weight vector and the second beamforming weight vector; and receiving from the second communication device a modified first beamforming weight vector for transmitting signals on the first group of antenna and receiving a modified second beamforming weight vector for transmitting signals on the second group of antenna. 36. 
The method of claim 31, wherein the first communication device comprises one or more radio frequency (RF) chains and wherein a number of the antenna is larger than a number of the one or more RF chains. 37. A method for use in a first communication device, the method comprising: receiving one training frame of a set of beamforming training frames on each antenna of a plurality of antennas; determining a first transmit beamforming weight vector corresponding to a first antenna group of a second communication device, and determining a second transmit beamforming weight vector corresponding to a second antenna group of the second communication device; and transmitting data using the first transmit beamforming weight vector and transmitting data using the second transmit beamforming weight vector to the second communication device. 38. The method of claim 37, wherein the first communication device and the second communication device are one of: a wireless transmit receive unit, a station (STA), an access point (AP), or a base station. 39. The method of claim 37, wherein the received beamforming training frames are orthogonal beamforming vectors. 40. The method of claim 37, wherein the transmitted beamforming weight vectors are orthogonal beamforming vectors.
2,400
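The beamforming claims above partition a device's antennas into groups and, from training-frame feedback, pick a per-group transmit weight vector (the "strongest" and "second strongest" beams of claims 22 and 32). A minimal toy sketch of that selection step, with invented function names and made-up power measurements standing in for the training feedback, not the claimed protocol itself:

```python
# Toy sketch: rank candidate beamforming weight vectors by measured
# received power and keep the two best, one per antenna group.

def select_group_weights(feedback):
    """feedback: dict mapping candidate weight-vector id -> measured power.
    Returns the strongest and second-strongest candidate ids, standing in
    for the first and second beamforming weight vectors of the claims."""
    ranked = sorted(feedback, key=feedback.get, reverse=True)
    return ranked[0], ranked[1]

# Hypothetical training feedback for one round of training frames.
feedback = {"w0": 0.9, "w1": 0.4, "w2": 0.7, "w3": 0.1}
first, second = select_group_weights(feedback)
print(first, second)  # → w0 w2
```

A real implementation would derive the candidate vectors from the orthogonal training frames the claims describe; here they are just labeled keys.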
8,179
8,179
15,373,320
2,481
A method of operating a rear video camera includes a non-volatile memory for storing information and a video imager configured to load information from the non-volatile memory and to obtain video signals with video images from a field of view rearward of a vehicle. The video imager is configured to output an enable signal at a pin of the video imager when the information is loaded onto the video imager from the non-volatile memory and to output video signals. A video buffer is configured to receive the video signals and the enable signal from the video imager and to output video signals to an interior vehicle display when the enable signal is received. When the information is not loaded onto the video imager, the video imager does not provide an enable signal and no image is displayed.
1. Method of preventing an interior vehicle display from displaying video images provided by a rear video camera having an erroneous image orientation, comprising: determining whether a non-volatile memory is providing information to a video imager of the rear video camera; providing an enable signal from the video imager to a video buffer when the information in the non-volatile memory is provided to the video imager, and not providing the enable signal from the video imager to the video buffer when the information in the non-volatile memory is not provided to the video imager; and providing video signals from the video imager through the video buffer to an interior vehicle display when the video buffer receives the enable signal. 2. The method according to claim 1, wherein the providing of the information from the non-volatile memory into the video imager includes horizontally reversing the orientation of the video images for the video signals that are output from the video imager. 3. The method according to claim 2, wherein the non-volatile memory includes a flash memory, and wherein the video imager, the flash memory and the video buffer are provided on a single circuit board of the rear video camera. 4. The method according to claim 1, wherein the video imager, the non-volatile memory and the video buffer are provided on two or more circuit boards of the rear video camera. 5. The method according to claim 1, including displaying the video images of the video signals from the video imager on the interior vehicle display when the video buffer receives the enable signal. 6. The method according to claim 1, wherein the non-volatile memory includes a flash memory and the enable signal is provided on a connection from an input/output pin of the video imager to an input pin of the video buffer. 7. 
A rear video camera comprising: a non-volatile memory for storing information; a video imager configured to load the information from the non-volatile memory, to obtain video signals with video images from a field of view outwardly from a vehicle and to output the video signals, the video imager configured to output an enable signal at a pin of the video imager when the information is loaded onto the video imager from the non-volatile memory; and a video buffer configured to receive the video signals and the enable signal from the video imager and to output the video images of the video signals when the enable signal is received. 8. The rear video camera according to claim 7, wherein the information loaded from the non-volatile memory into the video imager includes operating settings to alter an orientation of the video images of the video signals that are output from the video imager, and wherein the video buffer is configured to block output of the video signals when the enable signal from the video imager is not received. 9. The rear video camera according to claim 8, wherein the operating settings horizontally reverse the orientation of the video images of the video signals that are output from the video imager. 10. The rear video camera according to claim 7, wherein the non-volatile memory includes a flash memory. 11. The rear video camera according to claim 7, wherein the enable signal is provided to a trace connected at a first end to the pin of the video imager, the trace being connected at a second end to a pin of the video buffer. 12. The rear video camera according to claim 7, wherein the video imager is configured to reverse the video images of the video signals in response to the information loaded from the non-volatile memory, and wherein the video imager is configured to not reverse the video images of the video signals when the information is not loaded from the non-volatile memory. 13. 
The rear video camera according to claim 12, wherein the video imager, the non-volatile memory and the video buffer are provided on a single circuit board. 14. The rear video camera according to claim 13, wherein the pin of the video imager is an input/output pin, the non-volatile memory includes a flash memory, and the enable signal is provided on a connection from the input/output pin of the video imager to an input pin of the video buffer. 15. A vehicle video camera display system comprising: a rear video camera mounted on a vehicle, the rear video camera having a field of view outwardly from the vehicle, the rear video camera including: a non-volatile memory for storing information; a video imager configured to obtain video signals of video images from rearward of a vehicle and to load the information from the non-volatile memory, the video imager configured to provide an enable signal at a pin of the video imager when the information is loaded onto the video imager; and a video buffer configured to receive the video signals and the enable signal, and to output the video signals having the video images when the enable signal is received, and an interior vehicle display configured to receive and display a video image from the video signals received from the video buffer. 16. The vehicle video camera display system according to claim 15, wherein in response to the receiving of the information from the non-volatile memory into the video imager, the video imager is configured to horizontally reverse the orientation of the video images of the video signals that are output from the video imager, and wherein the video buffer is configured to block output of the video signals when the enable signal from the video imager is not received. 17. The vehicle video camera display system according to claim 15, wherein the non-volatile memory includes a flash memory. 18. 
The vehicle video camera display system according to claim 15, wherein the enable signal is provided by a connection from the pin of the video imager to a pin of the video buffer. 19. The vehicle video camera display system according to claim 18, wherein the video imager is configured to horizontally reverse the video images for the video signals in response to the information loaded from the non-volatile memory, and the video imager is configured to not reverse the video images for the video signals that are output when the information is not loaded from the non-volatile memory into the video imager.
A method of operating a rear video camera includes a non-volatile memory for storing information and a video imager configured to load information from the non-volatile memory and to obtain video signals with video images from a field of view rearward of a vehicle. The video imager is configured to output an enable signal at a pin of the video imager when the information is loaded onto the video imager from the non-volatile memory and to output video signals. A video buffer is configured to receive the video signals and the enable signal from the video imager and to output video signals to an interior vehicle display when the enable signal is received. When the information is not loaded onto the video imager, the video imager does not provide an enable signal and no image is displayed.1. Method of preventing an interior vehicle display from displaying video images provided by a rear video camera having an erroneous image orientation, comprising: determining whether a non-volatile memory is providing information to a video imager of the rear video camera; providing an enable signal from the video imager to a video buffer when the information in the non-volatile memory is provided to the video imager, and not providing the enable signal from the video imager to the video buffer when the information in the non-volatile memory is not provided to the video imager; and providing video signals from the video imager through the video buffer to an interior vehicle display when the video buffer receives the enable signal. 2. The method according to claim 1, wherein the providing of the information from the non-volatile memory into the video imager includes horizontally reversing the orientation of the video images for the video signals that are output from the video imager. 3. 
The method according to claim 2, wherein the non-volatile memory includes a flash memory, and wherein the video imager, the flash memory and the video buffer are provided on a single circuit board of the rear video camera. 4. The method according to claim 1, wherein the video imager, the non-volatile memory and the video buffer are provided on two or more circuit boards of the rear video camera. 5. The method according to claim 1, including displaying the video images of the video signals from the video imager on the interior vehicle display when the video buffer receives the enable signal. 6. The method according to claim 1, wherein the non-volatile memory includes a flash memory and the enable signal is provided on a connection from an input/output pin of the video imager to an input pin of the video buffer. 7. A rear video camera comprising: a non-volatile memory for storing information; a video imager configured to load the information from the non-volatile memory, to obtain video signals with video images from a field of view outwardly from a vehicle and to output the video signals, the video imager configured to output an enable signal at a pin of the video imager when the information is loaded onto the video imager from the non-volatile memory; and a video buffer configured to receive the video signals and the enable signal from the video imager and to output the video images of the video signals when the enable signal is received. 8. The rear video camera according to claim 7, wherein the information loaded from the non-volatile memory into the video imager includes operating settings to alter an orientation of the video images of the video signals that are output from the video imager, and wherein the video buffer is configured to block output of the video signals when the enable signal from the video imager is not received. 9. 
The rear video camera according to claim 8, wherein the operating settings horizontally reverse the orientation of the video images of the video signals that are output from the video imager. 10. The rear video camera according to claim 7, wherein the non-volatile memory includes a flash memory. 11. The rear video camera according to claim 7, wherein the enable signal is provided to a trace connected at a first end to the pin of the video imager, the trace being connected at a second end to a pin of the video buffer. 12. The rear video camera according to claim 7, wherein the video imager is configured to reverse the video images of the video signals in response to the information loaded from the non-volatile memory, and wherein the video imager is configured to not reverse the video images of the video signals when the information is not loaded from the non-volatile memory. 13. The rear video camera according to claim 12, wherein the video imager, the non-volatile memory and the video buffer are provided on a single circuit board. 14. The rear video camera according to claim 13, wherein the pin of the video imager is an input/output pin, the non-volatile memory includes a flash memory, and the enable signal is provided on a connection from the input/output pin of the video imager to an input pin of the video buffer. 15. 
A vehicle video camera display system comprising: a rear video camera mounted on a vehicle, the rear video camera having a field of view outwardly from the vehicle, the rear video camera including: a non-volatile memory for storing information; a video imager configured to obtain video signals of video images from rearward of a vehicle and to load the information from the non-volatile memory, the video imager configured to provide an enable signal at a pin of the video imager when the information is loaded onto the video imager; and a video buffer configured to receive the video signals and the enable signal, and to output the video signals having the video images when the enable signal is received, and an interior vehicle display configured to receive and display a video image from the video signals received from the video buffer. 16. The vehicle video camera display system according to claim 15, wherein in response to the receiving of the information from the non-volatile memory into the video imager, the video imager is configured to horizontally reverse the orientation of the video images of the video signals that are output from the video imager, and wherein the video buffer is configured to block output of the video signals when the enable signal from the video imager is not received. 17. The vehicle video camera display system according to claim 15, wherein the non-volatile memory includes a flash memory. 18. The vehicle video camera display system according to claim 15, wherein the enable signal is provided by a connection from the pin of the video imager to a pin of the video buffer. 19. 
The vehicle video camera display system according to claim 18, wherein the video imager is configured to horizontally reverse the video images for the video signals in response to the information loaded from the non-volatile memory, and the video imager is configured to not reverse the video images for the video signals that are output when the information is not loaded from the non-volatile memory into the video imager.
2,400
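The enable-signal gating in the rear-camera claims (the buffer passes video to the display only after the imager has loaded its orientation settings from non-volatile memory) reduces to a simple guard. A rough sketch with invented class and method names, assuming a dict stands in for the flash memory contents:

```python
class VideoImager:
    def __init__(self):
        self.enable = False          # enable pin stays low until settings load
        self.mirror = False

    def load_settings(self, nvm):
        if nvm is not None:          # settings present in non-volatile memory
            self.mirror = nvm.get("mirror", True)  # e.g. horizontal reversal
            self.enable = True       # assert enable only after a good load

class VideoBuffer:
    def output(self, imager, frame):
        # Pass video through only while the enable signal is asserted;
        # otherwise block it so no wrongly-oriented image is displayed.
        return frame if imager.enable else None

imager, buf = VideoImager(), VideoBuffer()
print(buf.output(imager, "frame0"))   # None: settings not loaded, display blank
imager.load_settings({"mirror": True})
print(buf.output(imager, "frame0"))   # frame passes once enable is asserted
```

The point of the design, per the claims, is fail-safe behavior: a failed settings load leaves the enable line deasserted, so an un-mirrored image can never reach the driver's display.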
8,180
8,180
15,177,067
2,431
Protecting a module intended to be executed by an executing device that has an operating system and that is either genuine or jailbroken is described. An application provider device obtains a first version of the module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by the operating system on the genuine device, obtains a second version of the application intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device, obtains a jailbreak detection function configured to determine whether a device executing the jailbreak function is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken and generates an application package including the jailbreak detection function, the first version of the module and the second version of the module, and that is output by an interface.
1. An application provider device for protecting a module intended to be executed by an executing device that has an operating system and that is either genuine or jailbroken, the application provider device comprising: a processing unit configured to: obtain a first version of the module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by the operating system on the genuine device; obtain a second version of the module intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device; obtain a jailbreak detection function configured to determine whether the executing device is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken; and generate an application package comprising the jailbreak detection function, the first version of the module and the second version of the module; and an interface configured to output the application package. 2. The application provider device of claim 1, wherein the processing unit is further configured to use the first software protection technique to protect the first version of the module. 3. The application provider device of claim 2, wherein the first software protection technique comprises at least one of control flow graph flattening and verification that the executing device is genuine. 4. The application provider device of claim 1, wherein the processing unit is further configured to use the second software protection technique to protect the second version of the module. 5. The application provider device of claim 4, wherein the second software protection technique is dynamic ciphering. 6. 
A method for protecting a module intended to be executed by an executing device that has an operating system and that is either genuine or jailbroken, the method comprising at an application provider device: obtaining, by a processing unit, a first version of the module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by the operating system on the genuine device; obtaining, by the processing unit, a second version of the module intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device; obtaining, by the processing unit, a jailbreak detection function configured to determine whether the executing device is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken; generating, by the processing unit, an application package comprising the jailbreak detection function, the first version of the module and the second version of the module; and outputting, by an interface, the application package. 7. The method of claim 6, further comprising using, by the processing unit, the first software protection technique to protect the first version of the module. 8. The method of claim 7, wherein the first software protection technique comprises at least one of control flow graph flattening and verification that the executing device is genuine. 9. The method of claim 6, further comprising using, by the processing unit, the second software protection technique to protect the second version of the module. 10. The method of claim 9, wherein the second software protection technique is dynamic ciphering. 11. 
Computer program product which is stored on a non-transitory computer readable medium and comprises: a first version of a module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by an operating system on the genuine device; a second version of the module intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device; and a jailbreak detection function that, when executed by a hardware processor, causes the hardware processor to determine whether a device executing the jailbreak function is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken. 12. An executing device having an operating system, the executing device comprising: memory storing a first version of a module intended to be executed on a genuine executing device and implementing a first software protection technique allowed by the operating system on the genuine device, a second version of the module intended to be executed on a jailbroken device and implementing a second software protection technique not allowed by the operating system on the genuine device, and a jailbreak detection function configured to determine whether the executing device executing the jailbreak function is genuine or jailbroken; and a processing unit configured to: execute the jailbreak detection function to determine whether the executing device is genuine or jailbroken; and call the first version of the module in case it is determined that the executing device is genuine and call the second version of the module in case it is determined that the executing device is jailbroken.
Protecting a module intended to be executed by an executing device that has an operating system and that is either genuine or jailbroken is described. An application provider device obtains a first version of the module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by the operating system on the genuine device, obtains a second version of the application intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device, obtains a jailbreak detection function configured to determine whether a device executing the jailbreak function is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken and generates an application package including the jailbreak detection function, the first version of the module and the second version of the module, and that is output by an interface.1. 
An application provider device for protecting a module intended to be executed by an executing device that has an operating system and that is either genuine or jailbroken, the application provider device comprising: a processing unit configured to: obtain a first version of the module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by the operating system on the genuine device; obtain a second version of the module intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device; obtain a jailbreak detection function configured to determine whether the executing device is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken; and generate an application package comprising the jailbreak detection function, the first version of the module and the second version of the module; and an interface configured to output the application package. 2. The application provider device of claim 1, wherein the processing unit is further configured to use the first software protection technique to protect the first version of the module. 3. The application provider device of claim 2, wherein the first software protection technique comprises at least one of control flow graph flattening and verification that the executing device is genuine. 4. The application provider device of claim 1, wherein the processing unit is further configured to use the second software protection technique to protect the second version of the module. 5. The application provider device of claim 4, wherein the second software protection technique is dynamic ciphering. 6. 
A method for protecting a module intended to be executed by an executing device that has an operating system and that is either genuine or jailbroken, the method comprising at an application provider device: obtaining, by a processing unit, a first version of the module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by the operating system on the genuine device; obtaining, by the processing unit, a second version of the module intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device; obtaining, by the processing unit, a jailbreak detection function configured to determine whether the executing device is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken; generating, by the processing unit, an application package comprising the jailbreak detection function, the first version of the module and the second version of the module; and outputting, by an interface, the application package. 7. The method of claim 6, further comprising using, by the processing unit, the first software protection technique to protect the first version of the module. 8. The method of claim 7, wherein the first software protection technique comprises at least one of control flow graph flattening and verification that the executing device is genuine. 9. The method of claim 6, further comprising using, by the processing unit, the second software protection technique to protect the second version of the module. 10. The method of claim 9, wherein the second software protection technique is dynamic ciphering. 11. 
Computer program product which is stored on a non-transitory computer readable medium and comprises: a first version of a module intended to be executed on a genuine executing device, the first version implementing a first software protection technique allowed by an operating system on the genuine device; a second version of the module intended to be executed on a jailbroken device, the second version implementing a second software protection technique not allowed by the operating system on the genuine device; and a jailbreak detection function that, when executed by a hardware processor, causes the hardware processor to determine whether a device executing the jailbreak function is genuine or jailbroken, and to call the first version of the module in case the executing device is genuine and call the second version of the module in case the executing device is jailbroken. 12. An executing device having an operating system, the executing device comprising: memory storing a first version of a module intended to be executed on a genuine executing device and implementing a first software protection technique allowed by the operating system on the genuine device, a second version of the module intended to be executed on a jailbroken device and implementing a second software protection technique not allowed by the operating system on the genuine device, and a jailbreak detection function configured to determine whether the executing device executing the jailbreak function is genuine or jailbroken; and a processing unit configured to: execute the jailbreak detection function to determine whether the executing device is genuine or jailbroken; and call the first version of the module in case it is determined that the executing device is genuine and call the second version of the module in case it is determined that the executing device is jailbroken.
2,400
8,181
8,181
15,386,207
2,433
Some embodiments provide a method for managing firewall protection in a datacenter that includes multiple host machines that each hosts a set of data compute nodes. The method maintains a firewall configuration for the host machines at a network manager of the data center. The firewall configuration includes multiple firewall rules to be enforced at the host machines. The method aggregates a first set of updates to the firewall configuration into a first aggregated update and associates the first aggregated update with a first version number. The method distributes a first host-level firewall configuration update to a first host machine based on the first aggregated update and associates the first host machine with the first version number. The method aggregates a second set of updates to the firewall configuration into a second aggregated update and associates the second aggregated update with a second version number.
1. A method for managing firewall protection in a datacenter comprising a plurality of host machines that each hosts a set of data compute nodes, the method comprising: maintaining a firewall configuration for the plurality of host machines at a network manager of the data center, the firewall configuration comprising a plurality of firewall rules that are to be enforced at the plurality of host machines; aggregating a first plurality of updates to the firewall configuration into a first aggregated update and associating the first aggregated update with a first version number; distributing a first host-level firewall configuration update to a first host machine based on the first aggregated update and associating the first host machine with the first version number; and aggregating a second plurality of updates to the firewall configuration into a second aggregated update and associating the second aggregated update with a second version number. 2. The method of claim 1, wherein the plurality of rules in the firewall configuration comprises rules that are specified for a plurality of different tenants, wherein the first plurality of updates comprises update specifications from at least two different tenants. 3. The method of claim 1 further comprising distributing a second host-level firewall configuration update to a second host machine based on the second aggregated update and associating the second host machine with the second version number. 4. The method of claim 1 further comprising providing a firewall configuration status that comprises version numbers associated with host machines. 5. The method of claim 4, wherein distributing the first host-level firewall configuration update to the first host machine comprises sending the first version number to the first host machine and recording the first version number in the firewall configuration status when an acknowledgement comprising the first version number is received from the first host machine. 6. 
The method of claim 1, wherein the first version number is a first timestamp captured at a first time and the second version number is a second timestamp captured at a second time. 7. The method of claim 1, wherein the firewall configuration comprises a plurality of sections, wherein the first version number associated with the first aggregated update is identified based on the version numbers assigned to the plurality of sections of the firewall configuration. 8. The method of claim 7, wherein the first plurality of updates are received via a set of application programming interface (API) routines, wherein a version number assigned to a section is a timestamp captured when an API for updating the section is invoked, wherein a same version number is assigned to all sections of the firewall configuration when an API for updating the entire firewall configuration is invoked. 9. A method for managing firewall protection in a datacenter comprising a plurality of computing devices capable of enforcing firewall protection based on a firewall configuration of the data center, the method comprising: receiving updates to entities of the firewall configuration by making changes to each updated firewall configuration entity; associating each updated firewall configuration entity with a version number that corresponds to a time instant that the firewall configuration entity is updated; generating a local-level firewall configuration to be enforced at a particular computing device and associating the local-level firewall configuration with a particular version number that is based on the version numbers associated with the firewall configuration entities at a time instant that the local-level firewall configuration is generated; and monitoring for obsolescence of the local-level firewall configuration by comparing the particular version number with the version numbers associated with the firewall configuration entities after the time instant that the local-level firewall 
configuration is generated. 10. The method of claim 9, wherein the local-level firewall configuration comprises a subset of the firewall configuration entities that each includes a set of rules enforceable at the particular computing device. 11. The method of claim 9, wherein the firewall configuration entities comprise sections of the firewall configuration, each section comprising a set of firewall rules. 12. The method of claim 9 further comprising providing a firewall configuration status for recording version numbers associated with the plurality of computing devices. 13. The method of claim 12 further comprising distributing the generated local-level firewall configuration update to the particular computing device by sending the particular version number to the particular computing device and recording the particular version number in the firewall configuration status when an acknowledgement comprising the particular version number is received from the particular computing device. 14. The method of claim 9, wherein the updates to the firewall configuration are received via a set of application programming interface (API) routines, wherein a version number assigned to a particular firewall configuration entity is a timestamp that is captured when an API for updating the particular firewall configuration entity is invoked. 15. 
A non-transitory machine readable medium storing a program which when executed by at least one processing unit manages firewall protection in a datacenter comprising a plurality of host machines that each hosts a set of data compute nodes, the program comprising sets of instructions for: maintaining a firewall configuration for the plurality of host machines at a network manager of the data center, the firewall configuration comprising a plurality of firewall rules that are to be enforced at the plurality of host machines; aggregating a first plurality of updates to the firewall configuration into a first aggregated update and associating the first aggregated update with a first version number; distributing a first host-level firewall configuration update to a first host machine based on the first aggregated update and associating the first host machine with the first version number; and aggregating a second plurality of updates to the firewall configuration into a second aggregated update and associating the second aggregated update with a second version number. 16. The non-transitory machine readable medium of claim 15, wherein the plurality of rules in the firewall configuration comprises rules that are specified for a plurality of different tenants, wherein the first plurality of updates comprises update specifications from at least two different tenants. 17. The non-transitory machine readable medium of claim 15, wherein the program further comprises a set of instructions for distributing a second host-level firewall configuration update to a second host machine based on the second aggregated update and associating the second host machine with the second version number. 18. The non-transitory machine readable medium of claim 15, wherein the program further comprises a set of instructions for providing a firewall configuration status that comprises version numbers associated with host machines. 19. 
The non-transitory machine readable medium of claim 15, wherein the first version number is a first timestamp captured at a first time and the second version number is a second timestamp captured at a second time. 20. The non-transitory machine readable medium of claim 15, wherein the firewall configuration comprises a plurality of sections, wherein the first version number associated with the first aggregated update is identified based on the version numbers assigned to the plurality of sections of the firewall configuration.
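The version-number bookkeeping described in the claims above (batch pending updates into an aggregated update, stamp it with a version number, and record the version each host has acknowledged) can be sketched as follows. This is a minimal illustration under assumed names; a monotonic counter stands in for the timestamps of claim 6, and `FirewallManager`, `add_update`, `aggregate`, and `distribute` are hypothetical.

```python
import itertools

# Monotonic version source; claim 6 uses capture-time timestamps instead.
_version = itertools.count(1)

class FirewallManager:
    def __init__(self):
        self.rules = []            # datacenter-wide firewall configuration
        self.pending = []          # updates not yet aggregated
        self.host_versions = {}    # firewall configuration status (claim 4)

    def add_update(self, rule):
        # Updates (possibly from multiple tenants, claim 2) accumulate
        # until the next aggregation.
        self.pending.append(rule)

    def aggregate(self):
        # Batch all pending updates into one aggregated update and
        # associate it with a fresh version number.
        version = next(_version)
        self.rules.extend(self.pending)
        self.pending = []
        return version, list(self.rules)

    def distribute(self, host, version, config):
        # On acknowledgement, record the host against this version
        # (claim 5); comparing stored versions reveals stale hosts.
        self.host_versions[host] = version

mgr = FirewallManager()
mgr.add_update("allow tenant-a 443")
mgr.add_update("deny tenant-b 22")
v1, cfg1 = mgr.aggregate()
mgr.distribute("host-1", v1, cfg1)
mgr.add_update("allow tenant-a 80")
v2, _ = mgr.aggregate()
print(mgr.host_versions["host-1"] < v2)  # → True: host-1's config is stale
```

Because each host is recorded against the version it last acknowledged, a single comparison against the latest aggregated version is enough to monitor for obsolescence.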
2,400
8,182
8,182
15,063,086
2,439
A security agent configured to initiate a security agent component as a hypervisor for a computing device is described herein. The security agent is further configured to determine a subset of memory locations in memory of the computing device to be intercepted. The security agent component may then set intercepts for the determined memory locations. Setting such intercepts may include setting privilege attributes for pages which include the determined memory locations so as to prevent specific operations in association with those memory locations. In response to one of those specific operations, the security agent component may return a false indication of success or allow the operation to enable monitoring of the actor associated with the operation. When an operation affects another memory location associated with one of the pages, the security agent component may temporarily reset the privilege attribute for that page to allow the operation.
1. A system comprising: a processor; a memory coupled to the processor; a security agent configured to be operated by the processor to initiate a security agent component as a hypervisor for the system and determine a subset of memory locations in the memory to be intercepted; and the security agent component configured to be operated by the processor to set intercepts for memory locations of the determined subset of memory locations. 2. The system of claim 1, wherein the security agent component is further configured to set the intercepts by setting privilege attributes for pages which include the memory locations of the determined subset of memory locations. 3. The system of claim 1, wherein the security agent component is further configured to set the intercepts by redirecting from the subset of memory locations to different memory locations. 4. The system of claim 1, wherein the security agent is further configured to initiate the security agent component as the hypervisor by storing processor state settings in a data structure and instructing the processor to initiate the security agent component as the hypervisor based on the data structure. 5. The system of claim 4, wherein the security agent includes different routines for different operating systems, each of the different routines fixing as invariant a part of the data structure associated with the respective different operating system. 6. The system of claim 1, wherein the security agent is further configured to determine the subset of the memory locations based on a security agent configuration received from a security service. 7. The system of claim 1, wherein the security agent is further configured to intercept page out requests and prevent paging out of memory pages which include the memory locations that are to be intercepted, or to intercept page in requests in order to update knowledge of memory locations. 8. 
The system of claim 1, wherein the security agent is further configured to request that an operating system kernel of the system lock page table mappings of the memory locations of the subset of memory locations. 9. The system of claim 1, wherein the security agent is further configured to determine instructions to be intercepted and the security agent component is further configured to set intercepts for the determined instructions. 10. The system of claim 1, wherein the security agent component is further configured to remove intercepts corresponding to a process upon termination of the process. 11. A non-transitory computer-readable medium having stored thereon executable instructions which, when executed by a computing device, cause the computing device to perform operations comprising: identifying memory locations of a subset of memory locations in memory of the computing device to be intercepted; determining pages of the memory which include the identified memory locations; setting privilege attributes of the pages to prevent specific types of operations from affecting the memory locations; noting an operation affecting another memory location associated with one of the pages which differs from the identified memory location associated with that page; and temporarily resetting the privilege attribute of the one of the pages to allow the operation. 12. The non-transitory computer-readable medium of claim 11, wherein the identified memory locations include a memory location associated with privileges for a process and the setting includes setting the privilege attribute for the page including the memory location to a read only value to prevent writes to the memory location. 13. 
The non-transitory computer-readable medium of claim 11, wherein the identified memory locations include a memory location associated with user credentials and the setting includes setting the privilege attribute for the page including the memory location to an inaccessible value to prevent reads of the memory location. 14. A computer-implemented method comprising: identifying memory locations of a subset of memory locations in memory of a computing device to be intercepted; determining pages of the memory which include the identified memory locations; setting privilege attributes of the pages to prevent specific types of operations from affecting the memory locations; noting an operation affecting one of the identified memory locations; in response to noting the operation, either: temporarily resetting the privilege attribute of the page including the one of the identified memory locations to allow the operation, or returning a false indication of success for the operation. 15. The method of claim 14, wherein the operation is a write operation, the one of the identified memory locations is a memory location associated with privileges for a process and the setting includes setting the privilege attribute for the page including the one of the identified memory locations to a read only value to prevent write operations to the one of the identified memory locations. 16. The method of claim 15, wherein the returning the false indication of success includes allowing the write operation to an alternate memory location and returning an indication that the write operation was successful. 17. The method of claim 14, wherein the operation is a read operation, the one of the identified memory locations is a memory location associated with user credentials and the setting includes setting the privilege attribute for the page including the one of the identified memory locations to an inaccessible value to prevent reads of the one of the identified memory locations. 18. 
The method of claim 17, further comprising causing the read operation to be performed on an alternate memory location storing false or deceptive user credentials. 19. The method of claim 18, further comprising monitoring use of the deceptive credentials. 20. The method of claim 18, further comprising copying contents of the page including the one of the identified memory locations to a page which includes the alternate memory location storing the false or deceptive user credentials. 21. The method of claim 14, further comprising identifying a process, thread, or module that requested the operation. 22. The method of claim 21, further comprising, after temporarily resetting the privilege attribute, monitoring activities of the process, thread, or module. 23. The method of claim 21, further comprising terminating the process, thread, or module.
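The page-level interception logic in claims 11-14 can be modeled in miniature: pages carrying protected locations are marked read-only; a write aimed at a protected location is silently dropped while reporting success, whereas a write to a different location on the same page temporarily lifts the protection. This is a toy simulation, not hypervisor code; `PageGuard` and its methods are hypothetical names.

```python
PAGE_SIZE = 4096  # assumed page granularity for the sketch

class PageGuard:
    def __init__(self, protected_addrs):
        self.protected = set(protected_addrs)
        # Privilege attribute per page: pages holding a protected
        # address are marked read-only (claim 12's "read only value").
        self.readonly_pages = {a // PAGE_SIZE for a in protected_addrs}
        self.memory = {}

    def write(self, addr, value):
        page = addr // PAGE_SIZE
        if page in self.readonly_pages:
            if addr in self.protected:
                # Return a false indication of success (claim 14):
                # the write never reaches memory.
                return True
            # A different location on the same page: temporarily
            # reset the privilege attribute, allow the operation,
            # then restore it (claim 11's final two steps).
            self.readonly_pages.discard(page)
            self.memory[addr] = value
            self.readonly_pages.add(page)
            return True
        self.memory[addr] = value
        return True

guard = PageGuard(protected_addrs=[0x1000])
guard.write(0x1000, "evil")      # intercepted: reported OK, not stored
guard.write(0x1004, "benign")    # same page, unprotected: allowed
print(0x1000 in guard.memory, guard.memory[0x1004])  # → False benign
```

The actor writing to `0x1000` sees success, which supports the monitoring-of-the-actor behavior the abstract describes, while the legitimate write to a neighboring location on the same page still lands.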
2,400
8,183
8,183
14,963,727
2,482
A method is provided to better detect a scene change to provide a prediction to an encoder to enable more efficient encoding. The method uses a Motion Compensated Temporal Filter (MCTF) that provides motion estimation and is located prior to an encoder. The MCTF provides a Motion Compensated Residual (MCR) used to detect the scene change transition. When a scene is relatively stable, the MCR score is also relatively stable. However, when a scene transition is in process, the MCR score behavior changes. Algorithmically, the MCR score is used by comparing the sliding mean of the MCR score to the sliding median. This comparison highlights the transition points. In the case of a scene cut, the MCR score exhibits a distinct spike. In the case of a fade or dissolve, the MCR score exhibits a transitional period of degradation followed by recovery. By implementing the above detection using the MCR, the location of the I-pictures in the downstream encoding process can be accurately determined for the encoder.
1. A method for encoding video using scene change detection comprising: obtaining video frames provided to an encoder; obtaining a motion compensated residual (MCR) for the video frames; determining a sliding MCR score for individual ones of the video frames; determining a sliding mean of the MCR score for the video frames; comparing the MCR score with the MCR mean score; and providing a prediction of when a scene change occurs to the encoder based on the comparison of the MCR score with the MCR mean. 2. The method of claim 1, wherein the MCR measurement is provided from a Motion Compensated Temporal Filter (MCTF) that provides pre-processing prior to encoding to perform motion estimation as well as motion compensation prediction on temporally sequential pictures of the video frames. 3. The method of claim 1, wherein the scene change comprises at least one of a fade, a dissolve or a scene cut. 4. The method of claim 1, wherein when the scene change comprises a scene cut, the MCR score exhibits a spike. 5. The method of claim 1, wherein when the scene change comprises a fade or a dissolve, the MCR score exhibits a transitional period of degradation followed by a recovery. 6. The method of claim 1, wherein the prediction of when a scene change occurs identifies the transition terminals at the beginning and the end of the scene change. 7. The method of claim 1 further comprising placing an I-frame during the encoding process at a point where the scene change occurs. 8. 
An apparatus to encode video frames, the apparatus comprising: an encoder having a first input for receiving video frames to be processed and a second input for receiving parameter data to enable the encoder to allocate bits for frames for encoding; a pre-filter with Motion Compensated Temporal Filtering (MCTF) frame buffer having an input receiving the video frames and an output providing the first input to the encoder; a MCTF statistical analysis module processor that provides a Motion Compensated Residual (MCR) for receiving the video frames from the pre-filter with MCTF and having an output providing the second input to the encoder; a preprocessor memory connected to the MCTF statistical analysis processor for storing code that is executable by the preprocessor to determine the parameter data to enable the encoder to allocate bits, the code causing the preprocessor to perform the following steps: obtaining a MCR for the video frames; determining a sliding MCR score for individual ones of the video frames; determining a sliding mean of the MCR score for the video frames; comparing the MCR score with the MCR mean score; and providing a prediction of when a scene change occurs to the encoder based on the comparison of the MCR score with the MCR mean. 9. The apparatus of claim 8, wherein the MCR measurement is provided from a Motion Compensated Temporal Filter (MCTF) that provides pre-processing prior to encoding to perform motion estimation as well as motion compensation prediction on temporally sequential pictures of the video frames. 10. The apparatus of claim 8, wherein the scene change comprises at least one of a fade, a dissolve or a scene cut. 11. The apparatus of claim 8, wherein when the scene change comprises a scene cut, the MCR score exhibits a spike. 12. The apparatus of claim 8, wherein when the scene change comprises a fade or a dissolve, the MCR score exhibits a transitional period of degradation followed by a recovery. 13. 
The apparatus of claim 8, wherein the prediction of when a scene change occurs identifies the transition terminals at the beginning and the end of the scene change. 14. The apparatus of claim 8 further comprising placing an I-frame during the encoding process at a point where the scene change occurs. 15. The apparatus of claim 8, wherein the encoder is a two path device with the MCTF statistical analysis module processor provided in a first path and the encoder provided in the second path.
A method is provided to better detect a scene change to provide a prediction to an encoder to enable more efficient encoding. The method uses a Motion Compensated Temporal Filter (MCTF) that provides motion estimation and is located prior to an encoder. The MCTF provides a Motion Compensated Residual (MCR) used to detect the scene change transition. When a scene is relatively stable, the MCR score is also relatively stable. However, when a scene transition is in process, the MCR score behavior changes. Algorithmically, the MCR score is used by comparing the sliding mean of the MCR score to the sliding median. This comparison highlights the transition points. In the case of a scene cut, the MCR score exhibits a distinct spike. In the case of a fade or dissolve, the MCR score exhibits a transitional period of degradation followed by recovery. By implementing the above detection using the MCR, the location of the I-pictures in the downstream encoding process can be accurately determined for the encoder.1. A method for encoding video using scene change detection comprising: obtaining video frames provided to an encoder; obtaining a motion compensated residual (MCR) for the video frames; determining a sliding MCR score for individual ones of the video frames; determining a sliding mean of the MCR score for the video frames; comparing the MCR score with the MCR mean score; and providing a prediction of when a scene change occurs to the encoder based on the comparison of the MCR score with the MCR mean. 2. The method of claim 1, wherein the MCR measurement is provided from a Motion Compensated Temporal Filter (MCTF) that provides pre-processing prior to encoding to perform motion estimation as well as motion compensation prediction on temporally sequential pictures of the video frames. 3. The method of claim 1, wherein the scene change comprises at least one of a fade, a dissolve or a scene cut. 4. 
The method of claim 1, wherein when the scene change comprises a scene cut, the MCR score exhibits a spike. 5. The method of claim 1, wherein when the scene change comprises a fade or a dissolve, the MCR score exhibits a transitional period of degradation followed by a recovery. 6. The method of claim 1, wherein the prediction of when a scene change occurs identifies the transition terminals at the beginning and the end of the scene change. 7. The method of claim 1 further comprising placing an I-frame during the encoding process at a point where the scene change occurs. 8. An apparatus to encode video frames, the apparatus comprising: an encoder having a first input for receiving video frames to be processed and a second input for receiving parameter data to enable the encoder to allocate bits for frames for encoding; a pre-filter with Motion Compensated Temporal Filtering (MCTF) frame buffer having an input receiving the video frames and an output providing the first input to the encoder; a MCTF statistical analysis module processor that provides a Motion Compensated Residual (MCR) for receiving the video frames from the pre-filter with MCTF and having an output providing the second input to the encoder; a preprocessor memory connected to the MCTF statistical analysis processor for storing code that is executable by the preprocessor to determine the parameter data to enable the encoder to allocate bits, the code causing the preprocessor to perform the following steps: obtaining a MCR for the video frames; determining a sliding MCR score for individual ones of the video frames; determining a sliding mean of the MCR score for the video frames; comparing the MCR score with the MCR mean score; and providing a prediction of when a scene change occurs to the encoder based on the comparison of the MCR score with the MCR mean. 9. 
The apparatus of claim 8, wherein the MCR measurement is provided from a Motion Compensated Temporal Filter (MCTF) that provides pre-processing prior to encoding to perform motion estimation as well as motion compensation prediction on temporally sequential pictures of the video frames. 10. The apparatus of claim 8, wherein the scene change comprises at least one of a fade, a dissolve or a scene cut. 11. The apparatus of claim 8, wherein when the scene change comprises a scene cut, the MCR score exhibits a spike. 12. The apparatus of claim 8, wherein when the scene change comprises a fade or a dissolve, the MCR score exhibits a transitional period of degradation followed by a recovery. 13. The apparatus of claim 8, wherein the prediction of when a scene change occurs identifies the transition terminals at the beginning and the end of the scene change. 14. The apparatus of claim 8 further comprising placing an I-frame during the encoding process at a point where the scene change occurs. 15. The apparatus of claim 8, wherein the encoder is a two path device with the MCTF statistical analysis module processor provided in a first path and the encoder provided in the second path.
2,400
8,184
8,184
15,803,207
2,483
A modular video imaging system, and more particularly, a modular video imaging system having a control module connectable to multiple input modules. The input modules are each capable of receiving differing types of image data from different types of cameras and of processing the image data into a format recognizable by the control module. The control module provides general functions, such as a user interface and general image processing, that are not camera specific.
1. A modular video imaging system comprising: a first input module configured to receive first image data from a first camera, transmit a first input module identifier, apply a first processing function to the first image data based on a first command to generate first processed image data, and transmit the first processed image data; and a control module external to the first input module and configured to determine the first command based on the first input module identifier and a user input, transmit the first command to the first input module, receive the first processed image data, and output the first processed image data to one or more video displays. 2. The modular video imaging system of claim 1, wherein the first input module identifier includes data identifying a plurality of processing functions available for application to the first image data. 3. The modular video imaging system of claim 1, wherein the first processing function is selected from a plurality of image processing functions comprising adjusting color balance, adjusting light, adjusting focal distance, adjusting resolution, adjusting zoom, adjusting focus, and adjusting shading. 4. The modular video imaging system of claim 1, wherein the first image data includes parameters for at least one of aspect ratio, timing, pixel rate, pixel resolution, and pixel encoding. 5. The modular video imaging system of claim 1, wherein the control module is configured to manipulate the first processed image data into a first manipulated processed image data. 6. 
The modular video imaging system of claim 1, further comprising: a second input module configured to receive second image data from a second camera, transmit a second input module identifier, apply a second processing function to the second image data based on a second command to generate second processed image data, and transmit the second processed image data; wherein the control module is external to the second input module and is configured to determine the second command based on the second input module identifier and the user input; transmit the second command to the second input module, and receive the second processed image data, and output the second processed image data to the one or more video displays. 7. The modular video imaging system of claim 6, wherein the second input module identifier includes data identifying a plurality of processing functions available for application to the second image data. 8. The modular video imaging system of claim 6, wherein the second processing function is selected from a plurality of image processing functions comprising adjusting color balance, adjusting light, adjusting focal distance, adjusting resolution, adjusting zoom, adjusting focus, and adjusting shading. 9. The modular video imaging system of claim 6, wherein the second image data includes parameters for at least one of aspect ratio, timing, pixel rate, pixel resolution, and pixel encoding. 10. The modular video imaging system of claim 6, wherein the control module is configured to manipulate the second processed image data into a second manipulated processed image data. 11. 
A modular video imaging system comprising: a first camera selected from a first family of cameras for providing a first type of image data; a first input module configured to receive first image data from the first camera and to process the first image data into first processed image data for output; and a control module configured to command the first input module to process the first image data into the first processed image data using a first processing function based on a first input module identifier and a first user input. 12. The modular video imaging system of claim 11, further comprising: a second camera selected from a second family of cameras for providing a second type of image data; and a second input module configured to receive second image data from the second camera and to process the second image data into second processed image data for output, the control module further configured to command the second input module to process the second image data into the second processed image data using a second processing function based on a second input module identifier and a second user input. 13. The modular video imaging system of claim 12, wherein the first type of image data and the second type of image data are different. 14. The modular video imaging system of claim 11, wherein the control module is disposed external to the first input module. 15. The modular video imaging system of claim 11, wherein the first input module transmits a first input module identifier to the control module. 16. The modular video imaging system of claim 15, wherein, based on the first input module identifier, the control module communicates compatible types of processed image data for which both are compatible. 17. 
The modular video imaging system of claim 16, wherein the control module commands the first input module to process the first image data into the first processed image data and the first processed image data is a type selected from the compatible types of processed image data. 18. The modular video imaging system of claim 15, wherein the first input module identifier includes data identifying a plurality of processing functions available for application to the first image data. 19. The modular video imaging system of claim 18, wherein the first processing function is selected from a plurality of image processing functions comprising adjusting color balance, adjusting light, adjusting focal distance, adjusting resolution, adjusting zoom, adjusting focus, and adjusting shading. 20. The modular video imaging system of claim 11, wherein the first image data includes parameters for at least one of aspect ratio, timing, pixel rate, pixel resolution, and pixel encoding.
A modular video imaging system, and more particularly, a modular video imaging system having a control module connectable to multiple input modules. The input modules are each capable of receiving differing types of image data from different types of cameras and of processing the image data into a format recognizable by the control module. The control module provides general functions, such as a user interface and general image processing, that are not camera specific.1. A modular video imaging system comprising: a first input module configured to receive first image data from a first camera, transmit a first input module identifier, apply a first processing function to the first image data based on a first command to generate first processed image data, and transmit the first processed image data; and a control module external to the first input module and configured to determine the first command based on the first input module identifier and a user input, transmit the first command to the first input module, receive the first processed image data, and output the first processed image data to one or more video displays. 2. The modular video imaging system of claim 1, wherein the first input module identifier includes data identifying a plurality of processing functions available for application to the first image data. 3. The modular video imaging system of claim 1, wherein the first processing function is selected from a plurality of image processing functions comprising adjusting color balance, adjusting light, adjusting focal distance, adjusting resolution, adjusting zoom, adjusting focus, and adjusting shading. 4. The modular video imaging system of claim 1, wherein the first image data includes parameters for at least one of aspect ratio, timing, pixel rate, pixel resolution, and pixel encoding. 5. 
The modular video imaging system of claim 1, wherein the control module is configured to manipulate the first processed image data into a first manipulated processed image data. 6. The modular video imaging system of claim 1, further comprising: a second input module configured to receive second image data from a second camera, transmit a second input module identifier, apply a second processing function to the second image data based on a second command to generate second processed image data, and transmit the second processed image data; wherein the control module is external to the second input module and is configured to determine the second command based on the second input module identifier and the user input; transmit the second command to the second input module, and receive the second processed image data, and output the second processed image data to the one or more video displays. 7. The modular video imaging system of claim 6, wherein the second input module identifier includes data identifying a plurality of processing functions available for application to the second image data. 8. The modular video imaging system of claim 6, wherein the second processing function is selected from a plurality of image processing functions comprising adjusting color balance, adjusting light, adjusting focal distance, adjusting resolution, adjusting zoom, adjusting focus, and adjusting shading. 9. The modular video imaging system of claim 6, wherein the second image data includes parameters for at least one of aspect ratio, timing, pixel rate, pixel resolution, and pixel encoding. 10. The modular video imaging system of claim 6, wherein the control module is configured to manipulate the second processed image data into a second manipulated processed image data. 11. 
A modular video imaging system comprising: a first camera selected from a first family of cameras for providing a first type of image data; a first input module configured to receive first image data from the first camera and to process the first image data into first processed image data for output; and a control module configured to command the first input module to process the first image data into the first processed image data using a first processing function based on a first input module identifier and a first user input. 12. The modular video imaging system of claim 11, further comprising: a second camera selected from a second family of cameras for providing a second type of image data; and a second input module configured to receive second image data from the second camera and to process the second image data into second processed image data for output, the control module further configured to command the second input module to process the second image data into the second processed image data using a second processing function based on a second input module identifier and a second user input. 13. The modular video imaging system of claim 12, wherein the first type of image data and the second type of image data are different. 14. The modular video imaging system of claim 11, wherein the control module is disposed external to the first input module. 15. The modular video imaging system of claim 11, wherein the first input module transmits a first input module identifier to the control module. 16. The modular video imaging system of claim 15, wherein, based on the first input module identifier, the control module communicates compatible types of processed image data for which both are compatible. 17. 
The modular video imaging system of claim 16, wherein the control module commands the first input module to process the first image data into the first processed image data and the first processed image data is a type selected from the compatible types of processed image data. 18. The modular video imaging system of claim 15, wherein the first input module identifier includes data identifying a plurality of processing functions available for application to the first image data. 19. The modular video imaging system of claim 18, wherein the first processing function is selected from a plurality of image processing functions comprising adjusting color balance, adjusting light, adjusting focal distance, adjusting resolution, adjusting zoom, adjusting focus, and adjusting shading. 20. The modular video imaging system of claim 11, wherein the first image data includes parameters for at least one of aspect ratio, timing, pixel rate, pixel resolution, and pixel encoding.
2,400
8,185
8,185
15,632,166
2,493
A computing resource service provider receives a request from a customer to establish a physical connection between a provider network device and a customer network device in a colocation center. Once the connection has been established, the customer may transmit cryptographic authentication information, through the physical connection, to the provider network device. The provider network device transmits this information to an authentication service operated by the computing resource service provider to verify the authenticity of the information. If the information is authentic, the authentication service may re-configure the provider network device to allow the customer to access one or more services provided by the computing resource service provider. The authentication service may transmit cryptographic authentication information to the customer to verify the identity of the computing resource service provider.
1. A computer-implemented method for authenticating a connection, comprising: receiving, at a network device of a computing resource service provider, through a dedicated physical network connection and from a customer network device connected with the network device via a dedicated physical network connection, cryptographic authentication information; obtaining, from an authentication service operable to verify authentication information, verification that the cryptographic authentication information received is authentic based at least in part on a secret key of a customer associated with the customer network device; and as a result of the authentication service successfully verifying the cryptographic authentication information, causing the network device to route network traffic received from the customer device over the dedicated physical network connection to one or more services of the computing resource service provider different from the authentication service.
A computing resource service provider receives a request from a customer to establish a physical connection between a provider network device and a customer network device in a colocation center. Once the connection has been established, the customer may transmit cryptographic authentication information, through the physical connection, to the provider network device. The provider network device transmits this information to an authentication service operated by the computing resource service provider to verify the authenticity of the information. If the information is authentic, the authentication service may re-configure the provider network device to allow the customer to access one or more services provided by the computing resource service provider. The authentication service may transmit cryptographic authentication information to the customer to verify the identity of the computing resource service provider.1. A computer-implemented method for authenticating a connection, comprising: receiving, at a network device of a computing resource service provider, through a dedicated physical network connection and from a customer network device connected with the network device via a dedicated physical network connection, cryptographic authentication information; obtaining, from an authentication service operable to verify authentication information, verification that the cryptographic authentication information received is authentic based at least in part on a secret key of a customer associated with the customer network device; and as a result of the authentication service successfully verifying the cryptographic authentication information, causing the network device to route network traffic received from the customer device over the dedicated physical network connection to one or more services of the computing resource service provider different from the authentication service.
2,400
8,186
8,186
14,801,340
2,436
The present disclosure is directed to systems and methods of obtaining authorization for an application or client to access certain privileged resources on behalf of a user in the OAuth2 protocol based on a voice input; validating an authentication token; and logging in to a service based on the validation.
1. A method for authorization, comprising: receiving a voice input from the user; requesting an authentication token based on the voice input; validating the authentication token; and logging in to a service based on the validation. 2. The method of claim 1, wherein the method for authorization is devoid of a browser interaction. 3. The method of claim 1, wherein the requesting comprises sending one of a voice_print message and a TUI message. 4. The method of claim 3, wherein the validating the authentication token is based on the one of the voice_print message and the TUI message. 5. The method of claim 3, wherein the validating further comprises: presenting the one of the voice_print message and the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 6. The method of claim 5, wherein the redirecting to the authentication server uses an HTTP protocol. 7. The method of claim 3, wherein the requesting comprises sending the TUI message, and wherein the validating the authentication token is based on the TUI message. 8. The method of claim 7, wherein the validating further comprises: presenting the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server, wherein the redirecting to the authentication server uses an HTTP protocol; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 9. 
The method of claim 1, wherein the method for authorization of a user is an OAuth2 protocol. 10. A computer readable medium encoded with processor executable instructions operable to, when executed, perform the method of claim 1. 11. A system for authorization, comprising: a user device; and an authentication server; the system configured to: receive a voice input from the user at the user device; request an authentication token based on the voice input; validate the authentication token; and log in to a service based on the validation. 12. The system of claim 11, wherein the method for authorization is devoid of a browser interaction. 13. The system of claim 11, wherein the requesting comprises sending one of a voice_print message and a TUI message. 14. The system of claim 13, wherein the validating the authentication token is based on the one of the voice_print message and the TUI message. 15. The system of claim 13, wherein the validating further comprises: presenting the one of the voice_print message and the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 16. The system of claim 15, wherein the redirecting to the authentication server uses an HTTP protocol. 17. The system of claim 13, wherein the requesting comprises sending the TUI message, and wherein the validating the authentication token is based on the TUI message. 18. 
The method of claim 17, wherein the validating further comprises: presenting the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server, wherein the redirecting to the authentication server uses an HTTP protocol; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 19. The system of claim 11, wherein the method for authorization of a user is an OAuth2 protocol. 20. A non-transitory computer readable medium having stored thereon instructions that, when executed, cause a processor to perform a method, the instructions comprising: instructions to receive a voice input from the user; instructions to request an authentication token based on the voice input; instructions to validate the authentication token; and instructions to log in to a service based on the validation.
The present disclosure is directed to systems and methods of obtaining authorization for an application or client to access certain privileged resources on behalf of a user in the OAuth2 protocol based on a voice input; validating an authentication token; and logging in to a service based on the validation.1. A method for authorization, comprising: receiving a voice input from the user; requesting an authentication token based on the voice input; validating the authentication token; and logging in to a service based on the validation. 2. The method of claim 1, wherein the method for authorization is devoid of a browser interaction. 3. The method of claim 1, wherein the requesting comprises sending one of a voice_print message and a TUI message. 4. The method of claim 3, wherein the validating the authentication token is based on the one of the voice_print message and the TUI message. 5. The method of claim 3, wherein the validating further comprises: presenting the one of the voice_print message and the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 6. The method of claim 5, wherein the redirecting to the authentication server uses an HTTP protocol. 7. The method of claim 3, wherein the requesting comprises sending the TUI message, and wherein the validating the authentication token is based on the TUI message. 8. 
The method of claim 7, wherein the validating further comprises: presenting the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server, wherein the redirecting to the authentication server uses an HTTP protocol; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 9. The method of claim 1, wherein the method for authorization of a user is an OAuth2 protocol. 10. A computer readable medium encoded with processor executable instructions operable to, when executed, perform the method of claim 1. 11. A system for authorization, comprising: a user device; and an authentication server; the system configured to: receive a voice input from the user at the user device; request an authentication token based on the voice input; validate the authentication token; and log in to a service based on the validation. 12. The system of claim 11, wherein the method for authorization is devoid of a browser interaction. 13. The system of claim 11, wherein the requesting comprises sending one of a voice_print message and a TUI message. 14. The system of claim 13, wherein the validating the authentication token is based on the one of the voice_print message and the TUI message. 15. The system of claim 13, wherein the validating further comprises: presenting the one of the voice_print message and the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 
16. The system of claim 15, wherein the redirecting to the authentication server uses an HTTP protocol. 17. The system of claim 13, wherein the requesting comprises sending the TUI message, and wherein the validating the authentication token is based on the TUI message. 18. The system of claim 17, wherein the validating further comprises: presenting the TUI message to an authorization server; redirecting to an authentication server to confirm the voice input for the authorization server, wherein the redirecting to the authentication server uses an HTTP protocol; registering at a resource server; and validating the authentication token at the resource server; wherein the logging in further comprises notifying, by the resource server, the service of the valid authentication token to allow the user to interact with the service. 19. The system of claim 11, wherein the method for authorization of a user is an OAuth2 protocol. 20. A non-transitory computer readable medium having stored thereon instructions that, when executed, cause a processor to perform a method, the instructions comprising: instructions to receive a voice input from the user; instructions to request an authentication token based on the voice input; instructions to validate the authentication token; and instructions to log in to a service based on the validation.
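The claimed flow above (receive a voice input, request an authentication token, validate it, then log in to a service) can be sketched as follows. This is a minimal illustration only: the in-memory token store, the exact-hash "voice print" match, and every function name here are assumptions for the sketch, not the patented implementation (a real system would use biometric matching and networked OAuth2 endpoints).

```python
import hashlib
import hmac
import secrets

# Hypothetical enrolled voice prints; a real system would use biometric
# matching, not an exact hash comparison.
ENROLLED = {"alice": hashlib.sha256(b"alice-voice-sample").hexdigest()}
TOKENS = {}  # token -> user, held by the (simulated) authorization server
KEY = secrets.token_bytes(32)

def request_token(user, voice_sample):
    """Authorization server: issue a token if the voice print matches."""
    voice_print = hashlib.sha256(voice_sample).hexdigest()
    if ENROLLED.get(user) != voice_print:
        return None
    token = hmac.new(KEY, user.encode(), hashlib.sha256).hexdigest()
    TOKENS[token] = user
    return token

def validate_token(token):
    """Resource server: confirm the token with the authorization server."""
    return TOKENS.get(token)

def login(user, voice_sample):
    """Client: request a token from the voice input, validate it, log in."""
    token = request_token(user, voice_sample)
    if token is None:
        return "denied"
    who = validate_token(token)
    return f"logged in: {who}" if who == user else "denied"
```

Note that the whole exchange happens without any browser interaction, matching the "devoid of a browser interaction" limitation in claims 2 and 12.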
2,400
8,187
8,187
13,888,629
2,442
Provided are techniques for grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment.
1-7. (canceled) 8. An apparatus, comprising: a processor; a non-transitory, computer-readable storage medium (CRSM) coupled to the processor; and logic, stored on the CRSM and executed on the processor, for: grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources. 9. The apparatus of claim 8, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment. 10. The apparatus of claim 8, the logic for grouping comprising logic for identifying the ownership metadata associated with the resources. 11. The apparatus of claim 8, the logic for grouping comprising logic for identifying the ownership by utilizing representational state transfer (REST) application programming interfaces (APIs). 12. The apparatus of claim 8, the logic further comprising logic for: associating the two or more virtual machines with the composite application; grouping a plurality of agents associated with the two or more virtual machines into corresponding lists based upon each agent's correspondence to the composite application; and managing the addition and removal of particular virtual machines of the two or more virtual machines from the lists based upon scaling requirements. 13. The apparatus of claim 8, the logic further comprising logic for: associating a user with the composite application; and providing, to the user, access to the monitored resources via the graphical user interface (GUI) for monitoring the monitored resources. 14. 
A computer programming product, comprising: a processor; a non-transitory, computer-readable storage medium (CRSM) coupled to the processor; and logic, stored on the CRSM and executed on the processor, for: grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources. 15. The computer programming product of claim 14, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment. 16. The computer programming product of claim 14, the logic for grouping comprising logic for identifying the ownership metadata associated with the resources. 17. The computer programming product of claim 14, the logic for grouping comprising logic for identifying the ownership by utilizing representational state transfer (REST) application programming interfaces (APIs). 18. The computer programming product of claim 14, the logic further comprising logic for: associating the two or more virtual machines with the composite application; grouping a plurality of agents associated with the two or more virtual machines into corresponding lists based upon each agent's correspondence to the composite application; and managing the addition and removal of particular virtual machines of the two or more virtual machines from the lists based upon scaling requirements. 19. The computer programming product of claim 14, further comprising: associating a user with the composite application; and providing, to the user, access to the monitored resources via the graphical user interface (GUI) for monitoring the monitored resources. 20. 
A cloud resource monitoring agent, comprising: a processor; a non-transitory, computer-readable storage medium (CRSM) coupled to the processor; and logic, stored on the CRSM and executed on the processor, for: grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources. 21. The cloud resource monitoring agent of claim 20, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment. 22. The cloud resource monitoring agent of claim 20, the logic for grouping comprising logic for identifying the ownership metadata associated with the resources. 23. The cloud resource monitoring agent of claim 20, the logic for grouping comprising logic for identifying the ownership by utilizing representational state transfer (REST) application programming interfaces (APIs). 24. The cloud resource monitoring agent of claim 20, the logic further comprising logic for: associating the two or more virtual machines with the composite application; grouping a plurality of agents associated with the two or more virtual machines into corresponding lists based upon each agent's correspondence to the composite application; and managing the addition and removal of particular virtual machines of the two or more virtual machines from the lists based upon scaling requirements. 25. 
The cloud resource monitoring agent of claim 20, the logic further comprising logic for: associating a user with the composite application; and providing, to the user, access to the monitored resources via the graphical user interface (GUI) for monitoring the monitored resources.
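The grouping step these claims describe (collecting resources into one composite-application collection per owner, with ownership read from resource metadata) can be sketched in a few lines. The resource-record shape, the field names, and the sample data below are assumptions for illustration; a real agent would fetch the records through the REST APIs that claims 11, 17, and 23 mention.

```python
from collections import defaultdict

def group_by_owner(resources):
    """Group resource records into per-owner collections, each collection
    representing one composite application (ownership comes from each
    record's metadata, per the claims)."""
    apps = defaultdict(list)
    for resource in resources:
        apps[resource["metadata"]["owner"]].append(resource["name"])
    return dict(apps)

# Hypothetical resource records spanning several machines.
resources = [
    {"name": "vm-1", "metadata": {"owner": "team-a"}},
    {"name": "vm-2", "metadata": {"owner": "team-b"}},
    {"name": "db-1", "metadata": {"owner": "team-a"}},
]
```

Each resulting collection is what the claimed GUI would then display and monitor in the context of its composite application.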
2,400
8,188
8,188
15,657,713
2,458
The present application is directed to a distributed-services component of a distributed system that facilitates multi-cloud aggregation using a cloud-connector server and cloud-connector nodes that cooperate to provide services that are distributed across multiple clouds. These services include the transfer of virtual-machine containers, or workloads, between two different clouds, as well as remote management interfaces.
1. A distributed-services component of a multiple-cloud-computing-facility aggregation, the distributed-service component comprising: a cloud-connector server that provides an electronic cloud-connector server interface through which a cloud-connector-server user interface is displayed on a remote computer and cloud-connector-server-provided distributed services are accessed from a remote computer, and that provides an electronic cloud-connector-node interface through which the cloud-connector server requests services provided by remote cloud-connector nodes; and two or more cloud-connector nodes, each installed in a different cloud-computing facility that each provides an electronic interface through which the cloud-connector server accesses services provided by the cloud-connector node and that each accesses a cloud-management interface within the cloud-computing facility in which the cloud-connector node is installed. 2. The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises: at least one cloud-computing facility managed by a virtual-data-center server; and additional cloud-computing facilities that are operationally and geographically distinct from the at least one cloud-computing facility managed by the virtual-data-center server. 3. The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises: at least one cloud-computing facility that includes two or more organization virtual data centers managed by a cloud director; and additional cloud-computing facilities that are operationally and geographically distinct from the at least one cloud-computing facility managed by the cloud director. 4. 
The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises: at least one cloud-computing facility managed by a management system that is neither a cloud director nor a virtual-data-center management server; and additional cloud-computing facilities that are operationally and geographically distinct from the at least one cloud-computing facility managed by a management system that is neither a cloud director nor a virtual-data-center management server. 5. The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises at least two cloud-computing facilities managed by two different types of management systems. 6. The distributed-services component of claim 1 wherein each cloud-connector node is a virtual appliance that executes within a management system of a cloud-computing system selected from among a virtual-data-center management server, cloud director, and a management system that is neither a cloud director nor a virtual-data-center management server. 7. 
The distributed-services component of claim 6 wherein a cloud-connector node comprises: an application program interface through which the cloud-connector server requests services provided by the cloud-connector node; an authorization service that authorizes access to the cloud-connector node and to services provided by the cloud-connector node; service routines that, when executed in response to a request received through the application program interface, carry out the request and provide a response to the request; a database that stores configuration data for the cloud-connector node; adapters that provide access by the cloud-connector node to a file system and a management interface of a cloud-computing-facility management system; and a messaging protocol and network transfer services that together provide for transfer of data files to remote cloud-connector nodes. 8. The distributed-services component of claim 7 wherein the messaging protocol and network transfer services of the cloud-connector node provide for checkpoint-restart of interrupted or failed data-transfer operations. 9. The distributed-services component of claim 7 wherein the cloud-connector node provides, through the application program interface, a login service, a parameterized service request that invokes a particular parameter-specified service, a data-upload service that receives and stores data transmitted to the cloud-connector node by the cloud-connector server, a file-transfer service that, when requested by the cloud-connector server, transfers a file from the cloud-connector node to a different cloud-connector node; and a file-transfer service that, when requested by the cloud-connector server, transfers a file from a different cloud-connector node to the cloud-connector node. 10. 
The distributed-services component of claim 7 wherein the application program interface is accessed by the cloud-connector server through the representational state transfer protocol via a hypertext transfer protocol proxy server. 11. A cloud-connector node that executes within a cloud-computing facility and that is managed by a remote cloud-connector server, the cloud-connector node comprising a virtual appliance within a management system of the cloud-computing facility and further comprising: an application program interface through which the remote cloud-connector server requests services provided by the cloud-connector node; an authorization service that authorizes access to the cloud-connector node and to services provided by the cloud-connector node; service routines that, when executed in response to a request received through the application program interface, carry out the request and provide a response to the request; a database that stores configuration data for the cloud-connector node; adapters that provide access, by the service routines within the cloud-connector node, to a file system and a management interface of a cloud-computing-facility management system; and a messaging protocol and network transfer services that together provide for transfer of data files to remote cloud-connector nodes. 12. The cloud-connector node of claim 11 wherein the cloud-connector node is installed in a virtual-data-center server that manages a virtual data center within the cloud-computing facility. 13. The cloud-connector node of claim 11 wherein the cloud-connector node is installed in a cloud director that manages organization virtual data centers within the cloud-computing facility. 14. The cloud-connector node of claim 11 wherein the cloud-connector node is installed in a management system that is neither a cloud director nor a virtual-data-center management server, the management system managing the cloud-computing facility. 15. 
The cloud-connector node of claim 11 wherein the cloud-connector server is located within a cloud-computing facility that is geographically and operationally remote from the cloud-computing facility within which the cloud-connector node executes. 16. The cloud-connector node of claim 11 wherein the messaging protocol and network transfer services provide for checkpoint-restart of interrupted or failed data-transfer operations. 17. The cloud-connector node of claim 11 wherein the cloud-connector node provides, through the application program interface, a login service, a parameterized service request that invokes a particular parameter-specified service, a data-upload service that receives and stores data transmitted to the cloud-connector node by the cloud-connector server, a file-transfer service that, when requested by the cloud-connector server, transfers a file from the cloud-connector node to a different cloud-connector node; and a file-transfer service that, when requested by the cloud-connector server, transfers a file from a different cloud-connector node to the cloud-connector node. 18. The cloud-connector node of claim 11 wherein the application program interface is accessed by the cloud-connector server through the representational state transfer protocol via a hypertext transfer protocol proxy server. 19. The cloud-connector node of claim 11 that, together with the cloud-connector server and additional remote cloud-connector nodes in operationally distinct cloud-computing facilities, comprises a distributed-services component of a multiple-cloud-computing-facility aggregation. 20. 
A method for providing distributed services within multiple, operationally distinct cloud-computing facilities, the method comprising: installing, within one of the multiple, operationally distinct cloud-computing facilities, a cloud-connector server that provides an electronic cloud-connector server interface through which a cloud-connector-server user interface is displayed on a remote computer and cloud-connector-server-provided distributed services are accessed from a remote computer and that provides an electronic cloud-connector-node interface through which the cloud-connector server requests services provided by remote cloud-connector nodes; and installing two or more cloud-connector nodes, each in a different cloud-computing facility, that each provides an electronic interface through which the cloud-connector server accesses services provided by the cloud-connector node and that each accesses a cloud-management interface within the cloud-computing facility in which the cloud-connector node is installed. 21. 
A computer-readable data-storage device that stores digitally encoded computer instructions that carry out a method that provides distributed services within multiple, operationally distinct cloud-computing facilities, the method comprising: installing, within one of the multiple, operationally distinct cloud-computing facilities, a cloud-connector server that provides an electronic cloud-connector server interface through which a cloud-connector-server user interface is displayed on a remote computer and cloud-connector-server-provided distributed services are accessed from a remote computer and that provides an electronic cloud-connector-node interface through which the cloud-connector server requests services provided by remote cloud-connector nodes; and installing two or more cloud-connector nodes, each in a different cloud-computing facility, that each provides an electronic interface through which the cloud-connector server accesses services provided by the cloud-connector node and that each accesses a cloud-management interface within the cloud-computing facility in which the cloud-connector node is installed.
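The checkpoint-restart behavior recited in claims 8 and 16 (resuming an interrupted file transfer from the last confirmed offset rather than retransmitting from zero) can be illustrated with a toy in-memory transfer. The chunked copy, the `fail_after` interruption knob, and all names here are assumptions for the sketch, not the actual cloud-connector protocol.

```python
def transfer(source, dest, checkpoint=0, chunk=4, fail_after=None):
    """Copy `source` bytes into the bytearray `dest` chunk by chunk,
    starting at byte offset `checkpoint`. Returns the offset reached,
    which the caller records as the next checkpoint. `fail_after`
    simulates an interruption after that many chunks."""
    offset = checkpoint
    sent = 0
    while offset < len(source):
        if fail_after is not None and sent >= fail_after:
            return offset  # interrupted: caller keeps this checkpoint
        piece = source[offset:offset + chunk]
        dest[offset:offset + len(piece)] = piece
        offset += len(piece)
        sent += 1
    return offset

# Usage: a transfer that fails partway is simply re-invoked with the
# returned checkpoint, so already-transferred bytes are never resent.
src = b"abcdefghijkl"
dst = bytearray(len(src))
cp = transfer(src, dst, fail_after=1)   # interrupted after one chunk
cp = transfer(src, dst, checkpoint=cp)  # restart from the checkpoint
```

The same resume-from-offset idea is what lets the claimed messaging protocol survive failed data-transfer operations between nodes in different clouds.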
The present application is directed to a distributed-services component of a distributed system that facilitates multi-cloud aggregation using a cloud-connector server and cloud-connector nodes that cooperate to provide services that are distributed across multiple clouds. These services include the transfer of virtual-machine containers, or workloads, between two different clouds and remote management interfaces.1. A distributed-services component of a multiple-cloud-computing-facility aggregation, the distributed-service component comprising: a cloud-connector server that provides an electronic cloud-connector server interface through which a cloud-connector-server user interface is displayed on a remote computer and cloud-connector-server-provided distributed services are accessed from a remote computer, and that provides an electronic cloud-connector-node interface through which the cloud-connector server requests services provided by remote cloud-connector nodes; and two or more cloud-connector nodes, each installed in a different cloud-computing facility that each provides an electronic interface through which the cloud-connector server accesses services provided by the cloud-connector node and that each accesses a cloud-management interface within the cloud-computing facility in which the cloud-connector node is installed. 2. The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises: at least one cloud-computing facility managed by a virtual-data-center server; and additional cloud-computing facilities that are operationally and geographically distinct from the at least one cloud-computing facility managed by the virtual-data-center server. 3. 
The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises: at least one cloud-computing facility that includes two or more organization virtual data centers managed by a cloud director; and additional cloud-computing facilities that are operationally and geographically distinct from the at least one cloud-computing facility managed by the cloud director. 4. The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises: at least one cloud-computing facility managed by a management system that is neither a cloud director nor a virtual-data-center management server; and additional cloud-computing facilities that are operationally and geographically distinct from the at least one cloud-computing facility managed by the at least one cloud-computing facility managed by a management system that is neither a cloud director nor a virtual-data-center management server. 5. The distributed-services component of claim 1 wherein the multiple-cloud-computing-facility aggregation comprises at least two cloud-computing facilities managed by two different types of management systems. 6. The distributed-services component of claim 1 wherein each cloud-connector node is a virtual appliance that executes within a management system of a cloud-computing system selected from among a virtual-data-center management server, cloud director, and a management system that is neither a cloud director nor a virtual-data-center management server. 7. 
The distributed-services component of claim 6 wherein a cloud-connector node comprises: an application program interface through which the cloud-connector server requests services provided by the cloud-connector node; an authorization service that authorizes access to the cloud-connector node and to services provided by the cloud-connector node; service routines that, when executed in response to a request received through the application program interface, carry out the request and provide a response to the request; a database that stores configuration data for the cloud-connector node; adapters that provide access by the cloud-connector node to a file system and a management interface of a cloud-computing-facility management system; and a messaging protocol and network transfer services that together provide for transfer of data files to remote cloud-connector nodes. 8. The distributed-services component of claim 7 wherein the cloud-connector node wherein the messaging protocol and network transfer services provide for checkpoint-restart of interrupted or failed data-transfer operations. 9. The distributed-services component of claim 7 wherein the cloud-connector node provides, through the application program interface, a login service, a parameterized service request that invokes a particular parameter-specified service, a data-upload service that receives and stores data transmitted to the cloud-connector node by the cloud-connector server, a file-transfer service that, when requested by the cloud-connector server, transfers a file from the cloud-connector node to a different cloud-connector node; and a file-transfer service that, when requested by the cloud-connector server, transfers a file from a different cloud-connector node to the cloud-connector node. 10. 
The distributed-services component of claim 7 wherein the application program interface is accessed by the cloud-connector server through the representational state transfer protocol via a hypertext transfer protocol proxy server. 11. A cloud-connector node that executes within a cloud-computing facility and that is managed by a remote cloud-connector server, the cloud-connector node comprising a virtual appliance within a management system of the cloud-computing facility and further comprising: an application program interface through which the remote cloud-connector server requests services provided by the cloud-connector node; an authorization service that authorizes access to the cloud-connector node and to services provided by the cloud-connector node; service routines that, when executed in response to a request received through the application program interface, carry out the request and provide a response to the request; a database that stores configuration data for the cloud-connector node; adapters that provide access, by the service routines within the cloud-connector node, to a file system and a management interface of a cloud-computing-facility management system; and a messaging protocol and network transfer services that together provide for transfer of data files to remote cloud-connector nodes. 12. The cloud-connector node of claim 11 wherein the cloud-connector node is installed in a virtual-data-center server that manages a virtual data center within the cloud-computing facility. 13. The cloud-connector node of claim 11 wherein the cloud-connector node is installed in a cloud director that manages organization virtual data centers within the cloud-computing facility. 14. The cloud-connector node of claim 11 wherein the cloud-connector node is installed in a management system that is neither a cloud director nor a virtual-data-center management server, the management system managing the cloud-computing facility. 15. 
The cloud-connector node of claim 11 wherein the cloud-connector server is located within a cloud-computing facility that is geographically and operationally remote from the cloud-computing facility within which the cloud-connector node executes. 16. The cloud-connector node of claim 11 wherein the messaging protocol and network transfer services provide for checkpoint-restart of interrupted or failed data-transfer operations. 17. The cloud-connector node of claim 11 wherein the cloud-connector node provides, through the application program interface, a login service, a parameterized service request that invokes a particular parameter-specified service, a data-upload service that receives and stores data transmitted to the cloud-connector node by the cloud-connector server, a file-transfer service that, when requested by the cloud-connector server, transfers a file from the cloud-connector node to a different cloud-connector node; and a file-transfer service that, when requested by the cloud-connector server, transfers a file from a different cloud-connector node to the cloud-connector node. 18. The cloud-connector node of claim 11 wherein the application program interface is accessed by the cloud-connector server through the representational state transfer protocol via a hypertext transfer protocol proxy server. 19. The cloud-connector node of claim 11 that, together with the cloud-connector server and additional remote cloud-connector nodes in operationally distinct cloud-computing facilities, comprises a distributed-services component of a multiple-cloud-computing-facility aggregation. 20. 
A method for providing distributed services within multiple, operationally distinct cloud-computing facilities, the method comprising: installing, within one of the multiple, operationally distinct cloud-computing facilities, a cloud-connector server that provides an electronic cloud-connector server interface through which a cloud-connector-server user interface is displayed on a remote computer and cloud-connector-server-provided distributed services are accessed from a remote computer and that provides an electronic cloud-connector-node interface through which the cloud-connector server requests services provided by remote cloud-connector nodes; and installing two or more cloud-connector nodes, each in a different cloud-computing facility, that each provides an electronic interface through which the cloud-connector server accesses services provided by the cloud-connector node and that each accesses a cloud-management interface within the cloud-computing facility in which the cloud-connector node is installed. 21. 
A computer-readable data-storage device that stores digitally encoded computer instructions that carry out a method that provides distributed services within multiple, operationally distinct cloud-computing facilities, the method comprising: installing, within one of the multiple, operationally distinct cloud-computing facilities, a cloud-connector server that provides an electronic cloud-connector server interface through which a cloud-connector-server user interface is displayed on a remote computer and cloud-connector-server-provided distributed services are accessed from a remote computer and that provides an electronic cloud-connector-node interface through which the cloud-connector server requests services provided by remote cloud-connector nodes; and installing two or more cloud-connector nodes, each in a different cloud-computing facility, that each provides an electronic interface through which the cloud-connector server accesses services provided by the cloud-connector node and that each accesses a cloud-management interface within the cloud-computing facility in which the cloud-connector node is installed.
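The claims above describe a messaging protocol and network transfer services that "provide for checkpoint-restart of interrupted or failed data-transfer operations" between cloud-connector nodes. As a minimal, hedged sketch of that idea (not the patented implementation; all names, chunk sizes, and the in-memory checkpoint dict are illustrative assumptions), a sender can persist the byte offset of the last completed chunk so a restarted transfer resumes where it left off:

```python
# Illustrative sketch of checkpoint-restart file transfer. The `checkpoint`
# dict stands in for durable progress state a real node would persist.
import io

CHUNK = 4  # tiny chunk size for demonstration


def transfer(src: bytes, dst: io.BytesIO, checkpoint: dict, fail_at=None) -> None:
    """Copy src into dst in chunks, recording progress in `checkpoint`."""
    offset = checkpoint.get("offset", 0)          # resume from last checkpoint
    dst.seek(offset)
    while offset < len(src):
        if fail_at is not None and offset >= fail_at:
            raise ConnectionError("simulated network interruption")
        dst.write(src[offset:offset + CHUNK])
        offset += CHUNK
        checkpoint["offset"] = min(offset, len(src))  # durable progress marker


data = b"0123456789ABCDEF"
out = io.BytesIO()
ckpt: dict = {}
try:
    transfer(data, out, ckpt, fail_at=8)          # first attempt fails mid-way
except ConnectionError:
    pass
transfer(data, out, ckpt)                          # restart resumes at offset 8
assert out.getvalue() == data
```

The key design point the claims gesture at is that only the unacknowledged remainder is retransmitted after a failure, rather than the whole file.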
2,400
8,189
8,189
14,989,372
2,487
A camera for a vision system of a vehicle includes a printed circuit board having a plurality of layers laminated together. The plurality of layers includes an outermost layer. A pixelated imaging array having a plurality of photosensing elements is disposed at the outermost layer of the plurality of layers of the printed circuit board. The outermost layer has a cutout connecting region to expose electrically conductive pads at a layer below the outermost layer. A discrete flex cable is connected to the electrically conductive pads at the cutout connecting region. The flex cable electrically connects the electrically conductive pads to at least one of (i) circuitry of another printed circuit board of the camera and (ii) circuitry of another printed circuit board of the vision system.
1. A camera for a vision system of a vehicle, said camera comprising: a printed circuit board comprising a plurality of layers laminated together; wherein said plurality of layers includes an outermost layer; a pixelated imaging array having a plurality of photosensing elements disposed at said outermost layer of said plurality of layers of said printed circuit board; wherein said outermost layer comprises a cutout connecting region to expose electrically conductive pads at a layer below said outermost layer; and an electrically conductive connecting element connected to said electrically conductive pads at said cutout connecting region, wherein said electrically conductive connecting element electrically connects said electrically conductive pads to at least one of (i) circuitry of another printed circuit board of said camera and (ii) circuitry of another printed circuit board of said vision system. 2. The camera of claim 1, wherein said electrically conductive connecting element comprises a flexible cable. 3. The camera of claim 1, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a soldering process. 4. The camera of claim 1, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a strain-relieving measure. 5. The camera of claim 1, wherein said electrically conductive connecting element is connected to said electrically conductive pads using component underfill. 6. The camera of claim 1, wherein said electrically conductive connecting element is disposed below an outer surface of said outermost layer of said printed circuit board. 7. The camera of claim 1, wherein said plurality of layers of said printed circuit board comprise FR4 material. 8. The camera of claim 1, wherein said exposed electrically conductive pads at said cutout connecting region are electrically connected to circuitry of said printed circuit board. 9. 
The camera of claim 8, wherein said electrically conductive pads are disposed at one of said plurality of layers of said printed circuit board, and wherein said circuitry of said printed circuit board is disposed at at least one other layer of said plurality of layers of said printed circuit board. 10. A camera for a vision system of a vehicle, said camera comprising: a printed circuit board comprising a plurality of layers laminated together; wherein said plurality of layers includes an outermost layer; a pixelated imaging array having a plurality of photosensing elements disposed at said outermost layer of said plurality of layers of said printed circuit board; wherein said outermost layer comprises a cutout connecting region to expose electrically conductive pads at a layer below said outermost layer; an electrically conductive connecting element connected to said electrically conductive pads at said cutout connecting region, wherein said electrically conductive connecting element electrically connects said electrically conductive pads to at least one of (i) circuitry of another printed circuit board of said camera and (ii) circuitry of another printed circuit board of said vision system; wherein said electrically conductive connecting element is disposed below an outer surface of said outermost layer of said printed circuit board; and wherein said electrically conductive connecting element comprises a flexible cable. 11. The camera of claim 10, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a soldering process. 12. The camera of claim 10, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a strain-relieving measure. 13. The camera of claim 10, wherein said electrically conductive connecting element is connected to said electrically conductive pads using component underfill. 14. 
The camera of claim 10, wherein said exposed electrically conductive pads at said cutout connecting region are electrically connected to circuitry of said printed circuit board. 15. The camera of claim 14, wherein said electrically conductive pads are disposed at one of said plurality of layers of said printed circuit board, and wherein said circuitry of said printed circuit board is disposed at at least one other layer of said plurality of layers of said printed circuit board. 16. A camera for a vision system of a vehicle, said camera comprising: a printed circuit board comprising a plurality of layers laminated together; wherein said plurality of layers of said printed circuit board comprise FR4 material; wherein said plurality of layers includes an outermost layer; a pixelated imaging array having a plurality of photosensing elements disposed at said outermost layer of said plurality of layers of said printed circuit board; wherein said outermost layer comprises a cutout connecting region to expose electrically conductive pads at a layer below said outermost layer; wherein said exposed electrically conductive pads at said cutout connecting region are electrically connected to circuitry of said printed circuit board; an electrically conductive connecting element connected to said electrically conductive pads at said cutout connecting region, wherein said electrically conductive connecting element electrically connects said electrically conductive pads to at least one of (i) circuitry of another printed circuit board of said camera and (ii) circuitry of another printed circuit board of said vision system; and wherein said electrically conductive connecting element is disposed below an outer surface of said outermost layer of said printed circuit board. 17. The camera of claim 16, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a soldering process. 18. 
The camera of claim 16, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a strain-relieving measure. 19. The camera of claim 16, wherein said electrically conductive connecting element is connected to said electrically conductive pads using component underfill. 20. The camera of claim 16, wherein said electrically conductive pads are disposed at one of said plurality of layers of said printed circuit board, and wherein said circuitry of said printed circuit board is disposed at at least one other layer of said plurality of layers of said printed circuit board.
A camera for a vision system of a vehicle includes a printed circuit board having a plurality of layers laminated together. The plurality of layers includes an outermost layer. A pixelated imaging array having a plurality of photosensing elements is disposed at the outermost layer of the plurality of layers of the printed circuit board. The outermost layer has a cutout connecting region to expose electrically conductive pads at a layer below the outermost layer. A discrete flex cable is connected to the electrically conductive pads at the cutout connecting region. The flex cable electrically connects the electrically conductive pads to at least one of (i) circuitry of another printed circuit board of the camera and (ii) circuitry of another printed circuit board of the vision system.1. A camera for a vision system of a vehicle, said camera comprising: a printed circuit board comprising a plurality of layers laminated together; wherein said plurality of layers includes an outermost layer; a pixelated imaging array having a plurality of photosensing elements disposed at said outermost layer of said plurality of layers of said printed circuit board; wherein said outermost layer comprises a cutout connecting region to expose electrically conductive pads at a layer below said outermost layer; and an electrically conductive connecting element connected to said electrically conductive pads at said cutout connecting region, wherein said electrically conductive connecting element electrically connects said electrically conductive pads to at least one of (i) circuitry of another printed circuit board of said camera and (ii) circuitry of another printed circuit board of said vision system. 2. The camera of claim 1, wherein said electrically conductive connecting element comprises a flexible cable. 3. The camera of claim 1, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a soldering process. 4. 
The camera of claim 1, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a strain-relieving measure. 5. The camera of claim 1, wherein said electrically conductive connecting element is connected to said electrically conductive pads using component underfill. 6. The camera of claim 1, wherein said electrically conductive connecting element is disposed below an outer surface of said outermost layer of said printed circuit board. 7. The camera of claim 1, wherein said plurality of layers of said printed circuit board comprise FR4 material. 8. The camera of claim 1, wherein said exposed electrically conductive pads at said cutout connecting region are electrically connected to circuitry of said printed circuit board. 9. The camera of claim 8, wherein said electrically conductive pads are disposed at one of said plurality of layers of said printed circuit board, and wherein said circuitry of said printed circuit board is disposed at at least one other layer of said plurality of layers of said printed circuit board. 10. 
A camera for a vision system of a vehicle, said camera comprising: a printed circuit board comprising a plurality of layers laminated together; wherein said plurality of layers includes an outermost layer; a pixelated imaging array having a plurality of photosensing elements disposed at said outermost layer of said plurality of layers of said printed circuit board; wherein said outermost layer comprises a cutout connecting region to expose electrically conductive pads at a layer below said outermost layer; an electrically conductive connecting element connected to said electrically conductive pads at said cutout connecting region, wherein said electrically conductive connecting element electrically connects said electrically conductive pads to at least one of (i) circuitry of another printed circuit board of said camera and (ii) circuitry of another printed circuit board of said vision system; wherein said electrically conductive connecting element is disposed below an outer surface of said outermost layer of said printed circuit board; and wherein said electrically conductive connecting element comprises a flexible cable. 11. The camera of claim 10, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a soldering process. 12. The camera of claim 10, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a strain-relieving measure. 13. The camera of claim 10, wherein said electrically conductive connecting element is connected to said electrically conductive pads using component underfill. 14. The camera of claim 10, wherein said exposed electrically conductive pads at said cutout connecting region are electrically connected to circuitry of said printed circuit board. 15. 
The camera of claim 14, wherein said electrically conductive pads are disposed at one of said plurality of layers of said printed circuit board, and wherein said circuitry of said printed circuit board is disposed at at least one other layer of said plurality of layers of said printed circuit board. 16. A camera for a vision system of a vehicle, said camera comprising: a printed circuit board comprising a plurality of layers laminated together; wherein said plurality of layers of said printed circuit board comprise FR4 material; wherein said plurality of layers includes an outermost layer; a pixelated imaging array having a plurality of photosensing elements disposed at said outermost layer of said plurality of layers of said printed circuit board; wherein said outermost layer comprises a cutout connecting region to expose electrically conductive pads at a layer below said outermost layer; wherein said exposed electrically conductive pads at said cutout connecting region are electrically connected to circuitry of said printed circuit board; an electrically conductive connecting element connected to said electrically conductive pads at said cutout connecting region, wherein said electrically conductive connecting element electrically connects said electrically conductive pads to at least one of (i) circuitry of another printed circuit board of said camera and (ii) circuitry of another printed circuit board of said vision system; and wherein said electrically conductive connecting element is disposed below an outer surface of said outermost layer of said printed circuit board. 17. The camera of claim 16, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a soldering process. 18. The camera of claim 16, wherein said electrically conductive connecting element is connected to said electrically conductive pads using a strain-relieving measure. 19. 
The camera of claim 16, wherein said electrically conductive connecting element is connected to said electrically conductive pads using component underfill. 20. The camera of claim 16, wherein said electrically conductive pads are disposed at one of said plurality of layers of said printed circuit board, and wherein said circuitry of said printed circuit board is disposed at at least one other layer of said plurality of layers of said printed circuit board.
2,400
8,190
8,190
14,330,982
2,442
Provided are techniques for grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment.
1. A method, comprising: grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources. 2. The method of claim 1, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment. 3. The method of claim 1, the grouping comprising identifying the ownership metadata associated with the resources. 4. The method of claim 1, the grouping comprising identifying the ownership by utilizing representational state transfer (REST) application programming interfaces (APIs). 5. The method of claim 1, further comprising: associating the two or more virtual machines with the composite application; grouping a plurality of agents associated with the two or more virtual machines into corresponding lists based upon each agent's correspondence to the composite application; and managing the addition and removal of particular virtual machines of the two or more virtual machines from the lists based upon scaling requirements. 6. The method of claim 1, further comprising: associating a user with the composite application; and providing, to the user, access to the monitored resources via the graphical user interface (GUI) for monitoring the monitored resources. 7. The method of claim 1, wherein the resources are selected from a group of resources, comprising: an operating system; a message queue; data storage; and an application server.
Provided are techniques for grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment.1. A method, comprising: grouping resources based upon ownership in a cloud environment into a collection representing a composite application corresponding to a particular user; automatically monitoring the resources across two or more virtual machines and two or more physical computing devices; and displaying the monitored resources in a graphical user interface (GUI) in a context associated with the composite application for managing the composite application and the resources. 2. The method of claim 1, wherein the resources are provided as one or more of a platform as a service (PaaS) environment and an infrastructure as a service (IaaS) environment. 3. The method of claim 1, the grouping comprising identifying the ownership metadata associated with the resources. 4. The method of claim 1, the grouping comprising identifying the ownership by utilizing representational state transfer (REST) application programming interfaces (APIs). 5. 
The method of claim 1, further comprising: associating the two or more virtual machines with the composite application; grouping a plurality of agents associated with the two or more virtual machines into corresponding lists based upon each agent's correspondence to the composite application; and managing the addition and removal of particular virtual machines of the two or more virtual machines from the lists based upon scaling requirements. 6. The method of claim 1, further comprising: associating a user with the composite application; and providing, to the user, access to the monitored resources via the graphical user interface (GUI) for monitoring the monitored resources. 7. The method of claim 1, wherein the resources are selected from a group of resources, comprising: an operating system; a message queue; data storage; and an application server.
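The grouping step in claim 1 (collecting resources by ownership into one composite application per user) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the patented system: the `owner` field stands in for whatever ownership metadata a REST API would return, and the resource dicts are invented examples:

```python
# Toy illustration of grouping cloud resources by ownership metadata into
# per-user composite-application collections, as described in claim 1.
from collections import defaultdict

# Hypothetical resources with ownership metadata (kinds taken from claim 7).
resources = [
    {"name": "app-server-1", "owner": "alice", "kind": "application server"},
    {"name": "queue-1", "owner": "alice", "kind": "message queue"},
    {"name": "db-vol-7", "owner": "bob", "kind": "data storage"},
]

# One collection (composite application) per owner.
composite_apps: dict = defaultdict(list)
for res in resources:
    composite_apps[res["owner"]].append(res["name"])

assert composite_apps["alice"] == ["app-server-1", "queue-1"]
assert composite_apps["bob"] == ["db-vol-7"]
```

A real implementation would fetch the ownership metadata over the REST APIs mentioned in claim 4 and feed each collection to the monitoring and GUI layers; this sketch shows only the grouping itself.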
2,400
8,191
8,191
15,351,722
2,458
The methods and systems described herein centralize simulation resources and effectively deliver training and simulation services to a broad set of distributed users at both the enterprise and operational levels. The cloud-based delivery of simulation applications described herein enables on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. In exemplary systems, users may provision computing capabilities, such as server time and network storage, as needed, automatically without requiring human interaction.
1. A process for dynamically managing a pool of virtual and physical resources accessible by multiple tenants running multiple instances of a distributable simulation application, the method comprising: monitoring, by at least one processing server in the pool, a processing load of each of the multiple instances of the distributable simulation application running within one or more virtual machines in the pool, the one or more virtual machines each comprising a plurality of nodes; determining, by the at least one processing server, that one or more nodes of the one or more virtual machines is overloaded or underloaded by a status of each of the multiple instances of the distributable simulation application; if overloaded then starting, by the at least one processing server, at least one new virtual machine; and instructing, by the at least one processing server, the one or more multiple instances of the application causing the overload to transfer a portion of its processing load from the one or more overloaded nodes to the at least one new virtual machine; if underloaded then instructing, by the at least one processing server, the one or more multiple instances of the application currently running on one or more underloaded nodes to transfer all of its processing load from the one or more underloaded nodes to one or more alternate nodes; and stopping, by the at least one processing server, the one or more of the underloaded nodes. 2. The process according to claim 1, wherein monitoring the processing loads includes determining, by an extension framework module for each of the multiple instances of the distributable simulation application, health of each of the multiple instances. 3. 
The process according to claim 2, wherein determining health includes determining a status of each of the multiple instances wherein the status indicates one of a limitation of a distributable simulation application instance's objectives or an excess processing capacity in view of the distributable simulation application instance's objectives. 4. The process according to claim 2, wherein the monitoring by the at least one processing server is initiated by launching the individual extension framework modules by each of the multiple instances of the application, wherein each of the individual extension framework modules facilitates reporting of status for an individual instance back to the at least one processing server. 5. The process according to claim 1, further comprising: monitoring, by the at least one processing server in the pool, a status of each of the one or more virtual machines in the pool to determine a level of performance thereof and increasing or decreasing assigned use thereof by instances of the application accordingly. 6. The process according to claim 2, further comprising: determining, by the extension framework module for at least one of the multiple instances of the distributable simulation application, occurrence of a gasp condition and a status of overloaded. 7. 
A process for dynamically managing a pool of virtual and physical resources accessible by multiple tenants running multiple instances of a distributable simulation application, the method comprising: monitoring by a processing server a status of each of the multiple instances of the distributable simulation application, wherein the monitoring is initiated by launching an individual extension framework module by each of the multiple instances of the distributable simulation application, wherein the individual extension framework modules facilitate reporting of the status of each of the multiple instances back to the processing server and further wherein monitoring the status of each of the multiple instances of the distributable simulation application includes monitoring the health of the distributable simulation application including gasp conditions; and managing by the processing server the pool of virtual and physical resources responsive to the status of each of the multiple instances of the distributable simulation application. 8. The process according to claim 7, wherein monitoring the status of the multiple instances of the distributable simulation application includes determining for each of the multiple instances a status causing a limitation of distributable simulation application objectives or excess processing capacity to occur during each of the multiple instances. 9. The process according to claim 7, wherein the extension framework modules provide transfer instructions for adjusting the pool of virtual and physical resources responsive to the reported status of the multiple instances of the distributable simulation application. 10. The process according to claim 8, wherein the extension framework modules provide transfer instructions for adjusting the pool of virtual and physical resources responsive to the reported status of the multiple instances of the distributable simulation application. 11. 
The method according to claim 10, wherein the transfer instructions include instructing one or more corresponding plugin modules to transfer a portion of a processing load for an instance of the distributable simulation application to one or more different virtual machines when a status of the instance of the distributable simulation application causes a limitation of distributable simulation application objectives. 12. The method according to claim 9, wherein the transfer instructions include instructing one or more corresponding plugin modules to transfer a portion of a processing load for an instance of the distributable simulation application to one or more different virtual machines when a status of the instance of the distributable simulation application causes a gasp condition. 13. The method according to claim 10, wherein the transfer instructions include: instructing one or more corresponding plugin modules to transfer all of a processing load for an instance of the distributable simulation application to one or more different virtual machines when a status of the instance of the distributable simulation application causes excess processing capacity to occur during the instance; and instructing a virtual machine on which the instance of the distributable simulation application was running to stop processing.
The methods and systems described herein centralize simulation resources and effectively deliver training and simulation services to a broad set of distributed users at both the enterprise and operational levels. The cloud-based delivery of simulation applications described herein enables on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. In exemplary systems, users may provision computing capabilities, such as server time and network storage, as needed, automatically without requiring human interaction. 1. A process for dynamically managing a pool of virtual and physical resources accessible by multiple tenants running multiple instances of a distributable simulation application, the method comprising: monitoring, by at least one processing server in the pool, a processing load of each of the multiple instances of the distributable simulation application running within one or more virtual machines in the pool, the one or more virtual machines each comprising a plurality of nodes; determining, by the at least one processing server, that one or more nodes of the one or more virtual machines is overloaded or underloaded by a status of each of the multiple instances of the distributable simulation application; if overloaded then starting, by the at least one processing server, at least one new virtual machine; and instructing, by the at least one processing server, the one or more multiple instances of the application causing the overload to transfer a portion of its processing load from the one or more overloaded nodes to the at least one new virtual machine; if underloaded then instructing, by the at least one processing server, the one or more multiple instances of the application currently running on one or more underloaded nodes to transfer all of its processing load 
from the one or more underloaded nodes to one or more alternate nodes; and stopping, by the at least one processing server, the one or more of the underloaded nodes. 2. The process according to claim 1, wherein monitoring the processing loads includes determining, by an extension framework module for each of the multiple instances of the distributable simulation application, health of each of the multiple instances. 3. The process according to claim 2, wherein determining health includes determining a status of each of the multiple instances, wherein status indicates one of a limitation of a distributable simulation application instance's objectives or an excess processing capacity in view of the distributable simulation application instance's objectives. 4. The process according to claim 2, wherein the monitoring by the at least one processing server is initiated by launching the individual extension framework modules by each of the multiple instances of the application, wherein each of the individual extension framework modules facilitates reporting of status for an individual multiple instance back to the at least one processing server. 5. The process according to claim 1, further comprising: monitoring, by the at least one processing server in the pool, a status of each of the one or more virtual machines in the pool to determine a level of performance thereof and increasing or decreasing assigned use thereof by instances of the application accordingly. 6. The process according to claim 2, further comprising: determining, by the extension framework module for at least one of the multiple instances of the distributable simulation application, occurrence of a gasp condition and a status of overloaded. 7. 
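The overload/underload handling recited in claim 1 (start a new virtual machine and move a portion of an overloaded node's load; move all load off an underloaded node, then stop it) can be sketched as a simple decision routine. This is an illustrative sketch only: the function name, the utilization thresholds, and the action tuples are assumptions, not taken from the claims.

```python
# Hypothetical sketch of claim 1's load-management decisions.
OVERLOAD = 0.85   # assumed utilization thresholds (the claims specify none)
UNDERLOAD = 0.20

def manage_pool(node_loads, next_vm_id):
    """Given per-node utilization in [0, 1], decide transfers and stops.

    Returns (actions, new_vms, stopped_nodes). Overloaded nodes get a new
    virtual machine and shed a portion of their load; underloaded nodes
    shed all load to an alternate node and are then stopped.
    """
    actions, new_vms, stopped = [], [], []
    for node, load in node_loads.items():
        if load > OVERLOAD:
            vm = f"vm-{next_vm_id}"          # start at least one new VM
            next_vm_id += 1
            new_vms.append(vm)
            actions.append(("transfer_partial", node, vm))
        elif load < UNDERLOAD:
            actions.append(("transfer_all", node, "alternate"))
            stopped.append(node)             # stop the drained node
    return actions, new_vms, stopped
```

For example, `manage_pool({"n1": 0.9, "n2": 0.1, "n3": 0.5}, 1)` starts one new VM for `n1`, drains and stops `n2`, and leaves `n3` alone.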
2,400
8,192
8,192
15,124,203
2,465
Methods and devices are presented using reference virtual noise for computing a bit loading.
1-27. (canceled) 28. A device to mitigate crosstalk noise on a signal to be transmitted over a transmission line of a network, the device comprising: a transceiver configured to receive a reference noise signal that is an estimation of noise on the transmission line, wherein the transceiver is configured to determine a bit loading value of the signal that mitigates the crosstalk noise based on the reference noise signal. 29. The device of claim 28, wherein the transceiver is configured to receive an update of the reference noise signal at showtime. 30. The device of claim 29, wherein the transceiver is configured to determine the bit loading value of the signal based on the update of the reference noise signal. 31. The device of claim 29, wherein the transceiver is configured to receive the update of the reference noise signal that represents an actual noise at the transceiver. 32. The device of claim 31, wherein the transceiver is configured to receive the update of the reference noise signal that represents the actual noise caused by crosstalk from cables situated adjacent each other. 33. The device of claim 29, wherein the transceiver is configured to receive the reference noise signal that represents a reference noise pattern of the actual noise at the transceiver. 34. The device of claim 33, wherein the transceiver is configured to receive the reference noise signal from measured noise values of the reference noise pattern. 35. The device of claim 28, wherein the transceiver receives the reference noise signal at initialization of the transceiver. 36. The device of claim 28, wherein the transceiver is configured to receive the reference noise signal that corresponds to a signal-to-noise-ratio margin used by the transceiver. 37. The device of claim 28, wherein the transceiver is configured to generate the bit loading value based on the reference noise signal for a plurality of modes of communication. 38. 
The device of claim 37, wherein the transceiver is configured to switch between the modes of communication, wherein at least one of the modes of communication is a power saving mode. 39. The device of claim 28, wherein the transceiver is configured to receive the reference noise signal that represents non-stationary noise arising from power transitions in other transmission lines. 40. The device of claim 28, wherein the transceiver is configured to receive the reference noise signal in the form of power spectral density signals sampled at breakpoints of a frequency band of the estimation of noise on the transmission line. 41. The device of claim 28, wherein the transceiver is configured to determine the bit loading such that any changes of a power spectral density in other transmission lines do not cause an increase in the crosstalk beyond a predefined signal-to-noise-ratio margin. 42. The device of claim 28, wherein the transceiver is configured to transmit the signal over the network in accordance with a DSL protocol. 43. The device of claim 28, wherein the device transmits the signal over the network in accordance with a G.fast protocol. 44. The device of claim 28, wherein the transceiver is part of a Customer Premises Equipment device. 45. The device of claim 28, wherein the reference noise signal represents a virtual noise. 46. The device of claim 28, wherein the transceiver is configured to generate transmission parameters of the signal to be transmitted over the transmission line. 47. The device of claim 28, wherein the transceiver includes a receiver that receives the reference noise signal
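Bit loading against a reference (virtual) noise PSD, as the claims describe, is commonly computed per tone with the Shannon gap approximation: bits = log2(1 + SNR / (gap x margin)), with the SNR taken against the reference noise rather than the instantaneous noise. A minimal sketch follows; the gap, margin, and bit-cap values are illustrative assumptions, not figures from the claims.

```python
import math

def bit_loading(signal_psd_dbm, ref_noise_psd_dbm,
                snr_gap_db=9.75, margin_db=6.0, max_bits=15):
    """Bits per tone from a reference-noise PSD (gap approximation).

    The SNR is formed against the reference (virtual) noise, so later
    crosstalk up to that reference does not violate the loaded margin.
    All dB figures here are illustrative defaults.
    """
    effective_snr_db = signal_psd_dbm - ref_noise_psd_dbm - snr_gap_db - margin_db
    snr_lin = 10 ** (effective_snr_db / 10)
    bits = int(math.log2(1 + snr_lin))       # round down to whole bits
    return max(0, min(max_bits, bits))       # clamp to constellation limits
```

A strong tone such as `bit_loading(-40, -110)` saturates at the 15-bit cap, while a tone with negative effective SNR, e.g. `bit_loading(-60, -70)`, carries 0 bits.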
2,400
8,193
8,193
15,412,532
2,487
A portable inspection unit is provided. The portable inspection unit may include a unit body, a flexible cable, and an imager housing. The flexible cable may extend from the unit body and the imager housing may be disposed at a distal end of the flexible cable. The portable inspection unit may be configured to receive an actuating accessory.
1-16. (canceled) 17. A portable inspection unit comprising: a unit body; a flexible cable having a proximal end portion and a distal end portion, the proximal end portion extending from the unit body; an imager housing disposed at the distal end of the flexible cable; a port disposed in a top surface portion of the unit body; and a translational motion assembly comprising a drive unit and an actuating unit, the scaling device being configured to cause greater output motion in the actuating unit than input to the drive unit. 18. The portable inspection unit of claim 17, wherein the ratio of the translational scaling device is greater than 1.5:1. 19. The portable inspection unit of claim 17, wherein the drive unit comprises a drive rack, the actuating unit comprises an actuating rack, and further comprising a plurality of gears configured to engage the drive rack and the actuating rack. 20. The portable inspection unit of claim 19, wherein the translational scaling device is configured to cause translation in the actuating rack in the same direction as the drive rack. 21. The portable inspection unit of claim 19, wherein the plurality of gears comprises dual spur gears. 22. The portable inspection unit of claim 17, further comprising a spring extending from the unit body to the drive unit, configured to extend the drive unit away from the unit body in a resting state. 23. A portable inspection unit comprising: a unit body; a flexible cable having a proximal end portion and a distal end portion, the proximal end portion extending from the unit body; an imager housing disposed at the distal end of the flexible cable; and a port disposed in a top surface portion of the unit body configured to receive a fluid dispersion tube through an internal passageway extending through the unit body. 24. The portable inspection unit of claim 23, wherein the fluid dispersion tube is configured to engage a nozzle of an aerosol canister. 25. 
The portable inspection unit of claim 23, wherein an internal passage extends through the port, into the unit body, through the flexible cable and imager housing, and outward proximate to a distal end portion of the imager housing. 26. The portable inspection unit of claim 23, wherein the internal passageway is coated in Teflon™. 27. The portable inspection unit of claim 23, wherein the internal passageway is also configured to receive an actuating device.
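Claims 19-21 describe the scaling mechanism as a drive rack and actuating rack coupled through dual spur gears on a common shaft, with an output:input ratio greater than 1.5:1 (claim 18). The geometry reduces to a tooth-count ratio, sketched below; the function name and example tooth counts are illustrative assumptions.

```python
def actuating_travel(drive_travel_mm, drive_gear_teeth, actuating_gear_teeth):
    """Output rack travel for dual spur gears on a common shaft.

    The drive rack turns a gear with `drive_gear_teeth`; a coaxial gear
    with `actuating_gear_teeth` drives the actuating rack, so for equal
    module (tooth pitch) the travel scales by the tooth ratio.
    """
    ratio = actuating_gear_teeth / drive_gear_teeth
    return drive_travel_mm * ratio, ratio
```

With assumed 20- and 36-tooth gears, 10 mm of drive-rack travel yields 18 mm at the actuating rack, a 1.8:1 ratio satisfying claim 18's greater-than-1.5:1 limitation.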
2,400
8,194
8,194
15,361,229
2,481
A sample testing system includes a sample analyzer that analyzes a particle in a sample, and an information processing apparatus that receives a result of an analysis on the sample from the sample analyzer. The information processing apparatus includes a controller, a display unit, and an input unit. The controller controls the display unit to display a screen including a reference-information display region for the result of the analysis on the sample and an input-value display region for a visual test to be performed on the sample with a microscope when retesting is determined as necessary based on the result of the analysis on the sample. With the screen displayed, the controller receives a count value of the particle counted in the visual test on the sample, through the input unit, and controls the display unit to display the received count value in the input-value display region.
1. A sample testing system comprising: a sample analyzer that analyzes a particle in a sample; and an information processing apparatus that receives a result of an analysis on the sample from the sample analyzer, wherein the information processing apparatus comprises a controller, a display unit, and an input unit, the controller controls the display unit to display a screen including a reference-information display region for the result of the analysis on the sample, received from the sample analyzer, and an input-value display region for a visual test to be performed on the sample with a microscope when retesting is determined as necessary based on the result of the analysis on the sample, and with the screen displayed, the controller receives a count value of the particle counted in the visual test on the sample, through the input unit, and controls the display unit to display the received count value in the input-value display region. 2. The sample testing system according to claim 1, wherein the sample analyzer analyzes the sample in terms of analysis items related to the particle, the reference-information display region displays analysis values of the respective analysis items as the result of the analysis on the sample, the input-value display region displays test items and value display regions for the respective test items, the test items including sub-classified items that are more detailed than the analysis items, and the controller receives, through the input unit, the count value of the particle counted in the visual test for each of the test items and displays the received count values in the value display regions. 3. The sample testing system according to claim 2, wherein when the analysis value of any of the analysis items for the sample represents low reliability or abnormality, the controller displays information indicating the low reliability or the abnormality, in association with the analysis item. 4. 
The sample testing system according to claim 1, further comprising a second sample analyzer that analyzes the sample with a measurement principle different from that of the sample analyzer, wherein the sample analyzer analyzes the sample as a retest after the second sample analyzer analyzes the sample, and the controller further displays a result of an analysis on the sample received from the second sample analyzer in the reference-information display region. 5. The sample testing system according to claim 4, wherein the controller displays a first region and a second region next to each other within the reference-information display region, the first region including the result of the analysis by the sample analyzer, the second region including the result of the analysis by the second sample analyzer. 6. The sample testing system according to claim 4, wherein the controller displays a first region and a third region next to each other within the reference-information display region, the first region including the result of the analysis by the sample analyzer, the third region including a result of the visual test on the sample, and when receiving a predetermined command with the count value displayed in the input-value display region, the controller displays the count value displayed in the input-value display region, as the result of the visual test in the third region. 7. The sample testing system according to claim 4, wherein the sample analyzer is a urinary sediment analyzer, and the second sample analyzer is a urine qualitative analyzer. 8. 
The sample testing system according to claim 7, wherein the controller displays analysis items in a first region and a second region within the reference-information display region such that the analysis items in the first region and the second region related to each other are arranged next to each other horizontally, the first region including the result of the analysis by the urinary sediment analyzer, the second region including the result of the analysis by the urine qualitative analyzer. 9. The sample testing system according to claim 8, wherein the related analysis items in the first region and the second region are one combination selected from a group of combinations of occult blood and red blood cell, protein concentration and cast, nitrite and bacterium, and specific gravity and electric conductivity. 10. The sample testing system according to claim 7, wherein the controller performs a cross-check between the result of the analysis by the urine qualitative analyzer and the result of the analysis by the urinary sediment analyzer, and displays a result of the cross-check in the reference-information display region. 11. The sample testing system according to claim 4, wherein the controller determines whether or not the visual test is necessary for the sample based on the result of the analysis on the sample by the sample analyzer and the result of the analysis on the sample by the second sample analyzer, and if determining that the visual test is necessary, the controller controls the display unit to display information indicating that the visual test is necessary for the sample. 12. The sample testing system according to claim 11, wherein if determining that the visual test is necessary for the sample, the controller displays information indicating a ground for the determination that the visual test is necessary, in the reference-information display region. 13. 
The sample testing system according to claim 1, wherein the controller displays a distribution chart indicating distribution of the particle in the sample, in the reference-information display region. 14. The sample testing system according to claim 1, wherein the controller displays results of analyses on the sample in time-series order in the reference-information display region. 15. The sample testing system according to claim 1, wherein the controller receives input of a comment on the visual test on the sample through the input unit, and displays the comment on the visual test on the sample in the reference-information display region. 16. The sample testing system according to claim 1, wherein the controller displays annotative information on the result of the analysis on the sample in the reference-information display region. 17. The sample testing system according to claim 16, wherein when a predetermined relation exists between analysis values of at least two analysis items obtained from the sample, the controller displays the annotative information on a disease based on the predetermined relation, in the reference-information display region. 18. 
An information processing apparatus that receives a result of an analysis on a sample from a sample analyzer that analyzes a particle in the sample, comprising: a controller; a display unit; and an input unit, wherein the controller controls the display unit to display a screen including a reference-information display region for the result of the analysis on the sample, received from the sample analyzer, and an input-value display region for a visual test to be performed on the sample with a microscope when retesting is determined as necessary based on the result of the analysis on the sample, and with the screen displayed, the controller receives a count value of the particle counted in the visual test on the sample, through the input unit, and controls the display unit to display the received count value in the input-value display region. 19. An information processing method of inputting a result of a visual test on a sample containing a particle, comprising: displaying a screen including a reference-information display region for a result of an analysis obtained by a sample analyzer analyzing the particle in the sample, and an input-value display region for the visual test to be performed on the sample with a microscope when retesting is determined as necessary based on the result of the analysis on the sample; and with the screen displayed, receiving a count value of the particle counted in the visual test on the sample and displaying the received count value in the input-value display region. 20. 
A non-transitory computer-readable storage medium storing a computer program capable of being executed by a computer to perform operations for inputting a result of a visual test on a sample containing a particle, the operations comprising: displaying a screen including a reference-information display region for a result of an analysis obtained by a sample analyzer analyzing the particle in the sample, and an input-value display region for the visual test to be performed on the sample with a microscope when retesting is determined as necessary based on the result of the analysis on the sample; and with the screen displayed, receiving a count value of the particle counted in the visual test on the sample and displaying the received count value in the input-value display region.
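Claim 10's cross-check compares the urine qualitative result with the urinary sediment result for the related item pairs listed in claim 9 (occult blood/red blood cell, protein/cast, nitrite/bacterium). A minimal sketch of such a check follows; the item keys, the "+" encoding, and the zero-count threshold are illustrative assumptions, not details from the claims.

```python
# Hypothetical cross-check between qualitative and sediment results.
# Item pairs follow claim 9; value encodings are illustrative assumptions.
RELATED_PAIRS = [
    ("occult_blood", "red_blood_cell"),
    ("protein", "cast"),
    ("nitrite", "bacterium"),
]

def cross_check(qualitative, sediment):
    """Return pairs where a positive qualitative result lacks any
    corresponding sediment finding, i.e., candidates for a visual retest."""
    mismatches = []
    for qual_item, sed_item in RELATED_PAIRS:
        if qualitative.get(qual_item) == "+" and sediment.get(sed_item, 0) == 0:
            mismatches.append((qual_item, sed_item))
    return mismatches
```

For example, a positive occult-blood test with zero counted red blood cells is flagged, while a positive nitrite test backed by a bacterium count is not.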
A sample testing system includes a sample analyzer that analyzes a particle in a sample, and an information processing apparatus that receives a result of an analysis on the sample from the sample analyzer. The information processing apparatus includes a controller, a display unit, and an input unit. The controller controls the display unit to display a screen including a reference-information display region for the result of the analysis on the sample and an input-value display region for a visual test to be performed on the sample with a microscope when retesting is determined as necessary, basing the result of the analysis on the sample. With the screen displayed, the controller receives a count value of the particle counted in the visual test on the sample, through the input unit, and controls the display unit to display the received count value in the input-value display region.1. A sample testing system comprising: a sample analyzer that analyzes a particle in a sample; and an information processing apparatus that receives a result of an analysis on the sample received from the sample analyzer, wherein the information processing apparatus comprises a controller, a display unit, and an input unit, the controller controls the display unit to display a screen including a reference-information display region for the result of the analysis on the sample, received from the sample analyzer, and an input-value display region for a visual test to be performed on the sample with a microscope when retesting is determined as necessary, basing the result of the analysis on the sample, and with the screen displayed, the controller receives a count value of the particle counted in the visual test on the sample, through the input unit, and controls the display unit to display the received count value in the input-value display region. 2. 
The sample testing system according to claim 1, wherein the sample analyzer analyzes the sample in terms of analysis items related to the particle, the reference-information display region displays analysis values of the respective analysis items as the result of the analysis on the sample, the input-value display region displays test items and value display regions for the respective test items, the test items including sub-classified items that are more detailed than the analysis items, and the controller receives, through the input unit, the count value of the particle counted in the visual test for each of the test items and displays the received count values in the value display regions. 3. The sample testing system according to claim 2, wherein when the analysis value of any of the analysis items for the sample represents low reliability or abnormality, the controller displays information indicating the low reliability or the abnormality, in association with the analysis item. 4. The sample testing system according to claim 1, further comprising a second sample analyzer that analyzes the sample with a measurement principle different from that of the sample analyzer, wherein the sample analyzer analyzes the sample as a retest after the second sample analyzer analyzes the sample, and the controller further displays a result of an analysis on the sample received from the second sample analyzer in the reference-information display region. 5. The sample testing system according to claim 4, wherein the controller displays a first region and a second region next to each other within the reference-information display region, the first region including the result of the analysis by the sample analyzer, the second region including the result of the analysis by the second sample analyzer. 6. 
The sample testing system according to claim 4, wherein the controller displays a first region and a third region next to each other within the reference-information display region, the first region including the result of the analysis by the sample analyzer, the third region including a result of the visual test on the sample, and when receiving a predetermined command with the count value displayed in the input-value display region, the controller displays the count value displayed in the input-value display region, as the result of the visual test in the third region. 7. The sample testing system according to claim 4, wherein the sample analyzer is a urinary sediment analyzer, and the second sample analyzer is a urine qualitative analyzer. 8. The sample testing system according to claim 7, wherein the controller displays analysis items in a first region and a second region within the reference-information display region such that the analysis items in the first region and the second region related to each other are arranged next to each other horizontally, the first region including the result of the analysis by the urinary sediment analyzer, the second region including the result of the analysis by the urine qualitative analyzer. 9. The sample testing system according to claim 8, wherein the related analysis items in the first region and the second region are one combination selected from a group of combinations of occult blood and red blood cell, protein concentration and cast, nitrite and bacterium, and specific gravity and electric conductivity. 10. The sample testing system according to claim 7, wherein the controller performs a cross-check between the result of the analysis by the urine qualitative analyzer and the result of the analysis by the urinary sediment analyzer, and displays a result of the cross-check in the reference-information display region. 11. 
The sample testing system according to claim 4, wherein the controller determines whether or not the visual test is necessary for the sample, based on the result of the analysis on the sample by the sample analyzer and the result of the analysis on the sample by the second sample analyzer, and if determining that the visual test is necessary, the controller controls the display unit to display information indicating that the visual test is necessary for the sample. 12. The sample testing system according to claim 11, wherein if determining that the visual test is necessary for the sample, the controller displays information indicating a ground for the determination that the visual test is necessary, in the reference-information display region. 13. The sample testing system according to claim 1, wherein the controller displays a distribution chart indicating distribution of the particle in the sample, in the reference-information display region. 14. The sample testing system according to claim 1, wherein the controller displays results of analyses on the sample in time-series order in the reference-information display region. 15. The sample testing system according to claim 1, wherein the controller receives input of a comment on the visual test on the sample through the input unit, and displays the comment on the visual test on the sample in the reference-information display region. 16. The sample testing system according to claim 1, wherein the controller displays annotative information on the result of the analysis on the sample in the reference-information display region. 17. The sample testing system according to claim 16, wherein when a predetermined relation exists between analysis values of at least two analysis items obtained from the sample, the controller displays the annotative information on a disease based on the predetermined relation, in the reference-information display region. 18. 
An information processing apparatus that receives a result of an analysis on a sample from a sample analyzer that analyzes a particle in the sample, comprising: a controller; a display unit; and an input unit, wherein the controller controls the display unit to display a screen including a reference-information display region for the result of the analysis on the sample, received from the sample analyzer, and an input-value display region for a visual test to be performed on the sample with a microscope when retesting is determined to be necessary based on the result of the analysis on the sample, and with the screen displayed, the controller receives a count value of the particle counted in the visual test on the sample, through the input unit, and controls the display unit to display the received count value in the input-value display region. 19. An information processing method of inputting a result of a visual test on a sample containing a particle, comprising: displaying a screen including a reference-information display region for a result of an analysis obtained by a sample analyzer analyzing the particle in the sample, and an input-value display region for the visual test to be performed on the sample with a microscope when retesting is determined to be necessary based on the result of the analysis on the sample; and with the screen displayed, receiving a count value of the particle counted in the visual test on the sample and displaying the received count value in the input-value display region. 20. 
A non-transitory computer-readable storage medium storing a computer program capable of being executed by a computer to perform operations for inputting a result of a visual test on a sample containing a particle, the operations comprising: displaying a screen including a reference-information display region for a result of an analysis obtained by a sample analyzer analyzing the particle in the sample, and an input-value display region for the visual test to be performed on the sample with a microscope when retesting is determined to be necessary based on the result of the analysis on the sample; and with the screen displayed, receiving a count value of the particle counted in the visual test on the sample and displaying the received count value in the input-value display region.
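The cross-check between the urine qualitative analyzer and the urinary sediment analyzer (claims 9 and 10 above) pairs related analysis items across the two measurement principles. A minimal sketch of that pairing logic, assuming illustrative item names and a simple positive/negative threshold (neither is specified in the record):

```python
# Related-item pairs from claim 9: (qualitative item, sediment item).
# Threshold semantics and item keys here are illustrative assumptions.
CROSS_CHECK_PAIRS = [
    ("occult_blood", "red_blood_cell"),
    ("protein_concentration", "cast"),
    ("nitrite", "bacterium"),
    ("specific_gravity", "electric_conductivity"),
]

def cross_check(qualitative, sediment, positive_threshold=1):
    """Flag pairs where one analyzer reports a positive finding and the
    related item on the other analyzer does not -- a discrepancy that may
    call for a visual retest with a microscope (claim 10's cross-check)."""
    discrepancies = []
    for q_item, s_item in CROSS_CHECK_PAIRS:
        q_pos = qualitative.get(q_item, 0) >= positive_threshold
        s_pos = sediment.get(s_item, 0) >= positive_threshold
        if q_pos != s_pos:
            discrepancies.append((q_item, s_item))
    return discrepancies
```

For example, a positive occult-blood result with no red blood cells counted in the sediment would be flagged, and the result of the cross-check could then be shown in the reference-information display region.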
2,400
8,195
8,195
14,654,690
2,461
In accordance with an embodiment, the method includes detecting an update event whereupon a precoder needs to be updated, sending signal adjustment information to a receiver remotely coupled to a subscriber line out of the plurality of subscriber lines indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel bias caused by the scheduled precoder update, and time-coordinating the precoder update with the enforcement of the signal compensation factor at the receiver.
1.-16. (canceled) 17. A method for controlling communications over a plurality of subscriber lines, the communications making use of communication signals that are jointly processed through a linear precoder for crosstalk pre-compensation, wherein the method comprises detecting an update event whereupon the precoder needs to be updated, determining a signal scaling factor to be applied to a transmit communication signal for conformance to a transmit Power Spectral Density PSD mask after joint processing of the communication signals through the updated precoder, sending signal adjustment information to a receiver remotely coupled to a subscriber line out of the plurality of subscriber lines indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel equalization bias at the receiver caused by a corresponding transmit signal scaling, and time-coordinating the precoder update and the corresponding transmit signal scaling with the enforcement of the signal compensation factor at the receiver. 18. A method according to claim 17, wherein the precoder update comprises determining one or more coupling coefficients of the precoder for mitigating crosstalk from the subscriber line into one or more victim lines, and wherein the signal compensation factor further compensates for a further channel equalization bias at the receiver caused by corresponding crosstalk pre-compensation signals superimposed over the one or more victim lines. 19. A method according to claim 17, wherein the signal compensation factor is a scalar factor that compensates for an amplitude bias. 20. A method according to claim 17, wherein the signal compensation factor is a complex factor that compensates for both an amplitude bias and a phase bias. 21. 
A method according to claim 17, wherein the sending step and the corresponding signal adjustment at the receiver are conditioned on the amount of channel equalization bias caused by the scheduled precoder update. 22. A method according to claim 21, wherein the precoder is updated in two steps, a first precoder update with partial precoding gains and limited channel equalization bias, and a second precoder update with full precoding gains, and wherein the sending step takes place between the first and second precoder updates, and the second precoder update is time-coordinated with the enforcement of the signal compensation factor at the receiver. 23. A method according to claim 17, wherein the update event is a new subscriber line joining or leaving the plurality of subscriber lines. 24. A method according to claim 17, wherein the update event is a substantial change in transmit power over a reconfigured subscriber line. 25. A method according to claim 17, wherein the method further comprises the step of, upon receipt of the gain adjustment information, returning an adapted bit loading value and/or an adapted fine gain tuning factor for a respective carrier to a corresponding transmitter. 26. A method according to claim 17, wherein the communication signals are multi-carrier signals, and wherein the signal compensation factor is determined on a per carrier basis. 27. A method according to claim 17, wherein the amplitude of the signal scaling factor is based on a multi-user fairness criterion. 28. 
A communication controller for controlling communications over a plurality of subscriber lines, the communications making use of communication signals that are jointly processed through a linear precoder for crosstalk pre-compensation, wherein the communication controller is configured to detect an update event whereupon the precoder needs to be updated, to determine a signal scaling factor to be applied to a transmit communication signal for conformance to a transmit Power Spectral Density PSD mask after joint processing of the communication signals through the updated precoder, to send signal adjustment information to a receiver remotely coupled to a subscriber line out of the plurality of subscriber lines indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel equalization bias caused by a corresponding transmit signal scaling, and to time-coordinate the precoder update and the corresponding transmit signal scaling with the enforcement of the signal compensation factor at the receiver. 29. An access node comprising a communication controller according to claim 28. 30. 
A communication controller for controlling a communication over a subscriber line out of a plurality of subscriber lines, the communications over the plurality of subscriber lines making use of communication signals that are jointly processed through a linear precoder for crosstalk pre-compensation, wherein the communication controller is configured to receive signal adjustment information from a transmitter remotely coupled to the subscriber line indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel equalization bias caused by a corresponding transmit signal scaling to be applied at the transmitter to a transmit communication signal upon a precoder update for conformance to a transmit Power Spectral Density PSD mask after joint processing of the communication signals through the updated precoder, and to time-coordinate the enforcement of the signal compensation factor with the precoder update and the corresponding transmit signal scaling. 31. A subscriber device comprising a communication controller according to claim 30.
In accordance with an embodiment, the method includes detecting an update event whereupon a precoder needs to be updated, sending signal adjustment information to a receiver remotely coupled to a subscriber line out of the plurality of subscriber lines indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel bias caused by the scheduled precoder update, and time-coordinating the precoder update with the enforcement of the signal compensation factor at the receiver.1.-16. (canceled) 17. A method for controlling communications over a plurality of subscriber lines, the communications making use of communication signals that are jointly processed through a linear precoder for crosstalk pre-compensation, wherein the method comprises detecting an update event whereupon the precoder needs to be updated, determining a signal scaling factor to be applied to a transmit communication signal for conformance to a transmit Power Spectral Density PSD mask after joint processing of the communication signals through the updated precoder, sending signal adjustment information to a receiver remotely coupled to a subscriber line out of the plurality of subscriber lines indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel equalization bias at the receiver caused by a corresponding transmit signal scaling, and time-coordinating the precoder update and the corresponding transmit signal scaling with the enforcement of the signal compensation factor at the receiver. 18. 
A method according to claim 17, wherein the precoder update comprises determining one or more coupling coefficients of the precoder for mitigating crosstalk from the subscriber line into one or more victim lines, and wherein the signal compensation factor further compensates for a further channel equalization bias at the receiver caused by corresponding crosstalk pre-compensation signals superimposed over the one or more victim lines. 19. A method according to claim 17, wherein the signal compensation factor is a scalar factor that compensates for an amplitude bias. 20. A method according to claim 17, wherein the signal compensation factor is a complex factor that compensates for both an amplitude bias and a phase bias. 21. A method according to claim 17, wherein the sending step and the corresponding signal adjustment at the receiver are conditioned on the amount of channel equalization bias caused by the scheduled precoder update. 22. A method according to claim 21, wherein the precoder is updated in two steps, a first precoder update with partial precoding gains and limited channel equalization bias, and a second precoder update with full precoding gains, and wherein the sending step takes place between the first and second precoder updates, and the second precoder update is time-coordinated with the enforcement of the signal compensation factor at the receiver. 23. A method according to claim 17, wherein the update event is a new subscriber line joining or leaving the plurality of subscriber lines. 24. A method according to claim 17, wherein the update event is a substantial change in transmit power over a reconfigured subscriber line. 25. A method according to claim 17, wherein the method further comprises the step of, upon receipt of the gain adjustment information, returning an adapted bit loading value and/or an adapted fine gain tuning factor for a respective carrier to a corresponding transmitter. 26. 
A method according to claim 17, wherein the communication signals are multi-carrier signals, and wherein the signal compensation factor is determined on a per carrier basis. 27. A method according to claim 17, wherein the amplitude of the signal scaling factor is based on a multi-user fairness criterion. 28. A communication controller for controlling communications over a plurality of subscriber lines, the communications making use of communication signals that are jointly processed through a linear precoder for crosstalk pre-compensation, wherein the communication controller is configured to detect an update event whereupon the precoder needs to be updated, to determine a signal scaling factor to be applied to a transmit communication signal for conformance to a transmit Power Spectral Density PSD mask after joint processing of the communication signals through the updated precoder, to send signal adjustment information to a receiver remotely coupled to a subscriber line out of the plurality of subscriber lines indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel equalization bias caused by a corresponding transmit signal scaling, and to time-coordinate the precoder update and the corresponding transmit signal scaling with the enforcement of the signal compensation factor at the receiver. 29. An access node comprising a communication controller according to claim 28. 30. 
A communication controller for controlling a communication over a subscriber line out of a plurality of subscriber lines, the communications over the plurality of subscriber lines making use of communication signals that are jointly processed through a linear precoder for crosstalk pre-compensation, wherein the communication controller is configured to receive signal adjustment information from a transmitter remotely coupled to the subscriber line indicative of a signal compensation factor to be applied to a receive communication signal to compensate for a channel equalization bias caused by a corresponding transmit signal scaling to be applied at the transmitter to a transmit communication signal upon a precoder update for conformance to a transmit Power Spectral Density PSD mask after joint processing of the communication signals through the updated precoder, and to time-coordinate the enforcement of the signal compensation factor with the precoder update and the corresponding transmit signal scaling. 31. A subscriber device comprising a communication controller according to claim 30.
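The coordination described in claim 17 can be pictured as a transmit-side scaling and a receive-side compensation that switch at the same instant, so the receiver's equalizer sees no amplitude or phase bias. A minimal sketch under simplifying assumptions: an ideal single-carrier channel and an arbitrary complex scaling value, whereas the actual method operates per carrier through the vectoring precoder:

```python
def apply_update(tx_symbols, scaling, compensation, switch_index):
    """Simulate an ideal channel: from switch_index on, the transmitter
    scales each symbol (for PSD-mask conformance after the precoder update)
    and the receiver multiplies by the announced compensation factor at the
    same instant. A complex factor covers both amplitude and phase bias
    (claim 20)."""
    received = []
    for n, s in enumerate(tx_symbols):
        if n >= switch_index:
            s = s * scaling          # transmit-side signal scaling
            s = s * compensation     # receive-side compensation, same instant
        received.append(s)
    return received

g = 0.5 - 0.5j                       # assumed per-carrier scaling factor
comp = 1 / g                         # compensation factor sent in advance
out = apply_update([1 + 0j, 1 + 0j, 1 + 0j], g, comp, switch_index=1)
# With both sides switching together, the equalized symbols stay unbiased.
```

If the receiver applied `comp` one symbol early or late, the mismatch would appear as exactly the channel equalization bias the time-coordination is meant to avoid.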
2,400
8,196
8,196
13,799,156
2,451
Embodiments of the invention relate to identifying users for initiating information spreading in a social network. In one embodiment, information for one or more users of a social network is collected and one or more features are computed for each of the one or more users based on the collected information. The one or more features are compared with a statistical model, and a probability that each of the one or more users will spread a message received from outside their social network is calculated based on the comparison.
1. A method for identifying users for initiating information spreading in a social network, the method comprising: collecting information for one or more users of a social network; computing one or more features for each of the one or more users based on the collected information; comparing the one or more features with a statistical model; and calculating a probability that each of the one or more users will spread a message received from outside their social network based on the comparison. 2. The method of claim 1, wherein the method further comprises creating the statistical model, the creating comprising: requesting that each of the one or more users of a social network spread a message; monitoring the social network to identify a subset of users that spread the message; and building the statistical model based on the one or more features of the subset of users. 3. The method of claim 2, wherein identifying the subset of users comprises determining which of the one or more users re-transmitted the message during a predetermined period of time. 4. The method of claim 3, wherein the one or more features comprises at least one of: a number of message shares per status message; a number of message shares per day during a predetermined period; a rate of sharing a directly requested message; and a rate of sharing a message from outside their social network. 5. The method of claim 1, wherein the statistical model is a support vector machine that is trained with historical data collected from users of the social network. 6. The method of claim 5, wherein calculating the probability that each of the one or more users will spread the message includes inputting the one or more features of each of the one or more users into the support vector machine. 7. 
The method of claim 1, wherein the one or more features include at least one of a personality feature, a profile feature, a social network feature, an activity feature, an information-spreading feature, a readiness feature, and a relatedness feature. 8. The method of claim 1, wherein the method further comprises classifying each of the one or more users as likely to re-transmit or unlikely to re-transmit based upon the probability. 9. The method of claim 1, wherein the method further comprises ranking the one or more users in descending order based on the probability. 10. A computer system for identifying users for initiating information spreading in a social network, the computer system comprising: a memory device, the memory device having computer readable instructions; and a processor for executing the computer readable instructions, the instructions including: collecting information for one or more users of a social network; computing one or more features for each of the one or more users based on the collected information; comparing the one or more features with a statistical model; and calculating a probability that each of the one or more users will spread a message received from outside their social network based on the comparison. 11. The computer system of claim 10, further comprising creating the statistical model by: requesting that each of the one or more users of a social network spread a message; monitoring the social network to identify a subset of users that spread the message; and building the statistical model based on the one or more features of the subset of users. 12. 
A computer program product for identifying users for initiating information spreading in a social network, the computer program product comprising: a computer readable storage medium having program code embodied therewith, the program code executable by a processor to: collect information for one or more users of a social network; compute one or more features for each of the one or more users based on the collected information; compare the one or more features with a statistical model; and calculate a probability that each of the one or more users will spread a message received from outside their social network based on the comparison. 13. The computer program product of claim 12, further comprising creating the statistical model by: requesting that each of the one or more users of a social network spread a message; monitoring the social network to identify a subset of users that spread the message; and building the statistical model based on the one or more features of the subset of users. 14. The computer program product of claim 13, wherein identifying the subset of users comprises determining which of the one or more users re-transmitted the message during a predetermined period of time. 15. The computer program product of claim 14, wherein the information-spreading feature comprises at least one of: a number of message shares per status message; a number of message shares per day during a predetermined period; a rate of sharing a directly requested message; and a rate of sharing a message from outside their social network. 16. The computer program product of claim 12, wherein the statistical model is a support vector machine that is trained with historical data collected from users of the social network. 17. 
The computer program product of claim 16, wherein calculating the probability that each of the one or more users will spread the message includes inputting the one or more features of each of the one or more users into the support vector machine. 18. The computer program product of claim 12, wherein the one or more features include a personality feature, a profile feature, a social network feature, an activity feature, an information-spreading feature, a readiness feature, and a relatedness feature. 19. The computer program product of claim 12, further comprising classifying each of the one or more users as likely to re-transmit or unlikely to re-transmit based upon the probability. 20. The computer program product of claim 12, further comprising ranking the one or more users in descending order based on the probability.
Embodiments of the invention relate to identifying users for initiating information spreading in a social network. In one embodiment, information for one or more users of a social network is collected and one or more features are computed for each of the one or more users based on the collected information. The one or more features are compared with a statistical model, and a probability that each of the one or more users will spread a message received from outside their social network is calculated based on the comparison. 1. A method for identifying users for initiating information spreading in a social network, the method comprising: collecting information for one or more users of a social network; computing one or more features for each of the one or more users based on the collected information; comparing the one or more features with a statistical model; and calculating a probability that each of the one or more users will spread a message received from outside their social network based on the comparison. 2. The method of claim 1, wherein the method further comprises creating the statistical model, the creating comprising: requesting that each of the one or more users of a social network spread a message; monitoring the social network to identify a subset of users that spread the message; and building the statistical model based on the one or more features of the subset of users. 3. The method of claim 2, wherein identifying the subset of users comprises determining which of the one or more users re-transmitted the message during a predetermined period of time. 4. The method of claim 3, wherein the one or more features comprises at least one of: a number of message shares per status message; a number of message shares per day during a predetermined period; a rate of sharing a directly requested message; and a rate of sharing a message from outside their social network. 5. 
The method of claim 1, wherein the statistical model is a support vector machine that is trained with historical data collected from users of the social network. 6. The method of claim 5, wherein calculating the probability that each of the one or more users will spread the message includes inputting the one or more features of each of the one or more users into the support vector machine. 7. The method of claim 1, wherein the one or more features include at least one of a personality feature, a profile feature, a social network feature, an activity feature, an information-spreading feature, a readiness feature, and a relatedness feature. 8. The method of claim 1, wherein the method further comprises classifying each of the one or more users as likely to re-transmit or unlikely to re-transmit based upon the probability. 9. The method of claim 1, wherein the method further comprises ranking the one or more users in descending order based on the probability. 10. A computer system for identifying users for initiating information spreading in a social network, the computer system comprising: a memory device, the memory device having computer readable instructions; and a processor for executing the computer readable instructions, the instructions including: collecting information for one or more users of a social network; computing one or more features for each of the one or more users based on the collected information; comparing the one or more features with a statistical model; and calculating a probability that each of the one or more users will spread a message received from outside their social network based on the comparison. 11. 
The computer system of claim 10, further comprising creating the statistical model by: requesting that each of the one or more users of a social network spread a message; monitoring the social network to identify a subset of users that spread the message; and building the statistical model based on the one or more features of the subset of users. 12. A computer program product for identifying users for initiating information spreading in a social network, the computer program product comprising: a computer readable storage medium having program code embodied therewith, the program code executable by a processor to: collect information for one or more users of a social network; compute one or more features for each of the one or more users based on the collected information; compare the one or more features with a statistical model; and calculate a probability that each of the one or more users will spread a message received from outside their social network based on the comparison. 13. The computer program product of claim 12, further comprising creating the statistical model by: requesting that each of the one or more users of a social network spread a message; monitoring the social network to identify a subset of users that spread the message; and building the statistical model based on the one or more features of the subset of users. 14. The computer program product of claim 13, wherein identifying the subset of users comprises determining which of the one or more users re-transmitted the message during a predetermined period of time. 15. The computer program product of claim 14, wherein the information-spreading feature comprises at least one of: a number of message shares per status message; a number of message shares per day during a predetermined period; a rate of sharing a directly requested message; and a rate of sharing a message from outside their social network. 16. 
The computer program product of claim 12, wherein the statistical model is a support vector machine that is trained with historical data collected from users of the social network. 17. The computer program product of claim 16, wherein calculating the probability that each of the one or more users will spread the message includes inputting the one or more features of each of the one or more users into the support vector machine. 18. The computer program product of claim 12, wherein the one or more features include a personality feature, a profile feature, a social network feature, an activity feature, an information-spreading feature, a readiness feature, and a relatedness feature. 19. The computer program product of claim 12, further comprising classifying each of the one or more users as likely to re-transmit or unlikely to re-transmit based upon the probability. 20. The computer program product of claim 12, further comprising ranking the one or more users in descending order based on the probability.
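The core loop described in claims 1, 8 and 9 — compute features, score them against a model, then classify and rank users by probability — can be sketched as follows. The record specifies a trained support vector machine; a fixed linear scorer with a logistic link stands in for it here, and the feature names, weights, and threshold are illustrative assumptions:

```python
import math

# Assumed model parameters; a trained SVM would replace this scorer.
WEIGHTS = {"shares_per_status": 1.5, "shares_per_day": 0.8,
           "external_share_rate": 2.0}

def spread_probability(features):
    """Score a user's feature vector and map it to [0, 1] with a
    logistic link (a stand-in for the SVM's probability output)."""
    score = sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS) - 2.0
    return 1.0 / (1.0 + math.exp(-score))

def rank_users(users, threshold=0.5):
    """Return (user, probability, likely_to_retransmit) tuples sorted in
    descending order of probability (claims 8 and 9)."""
    scored = [(u, spread_probability(f)) for u, f in users.items()]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [(u, p, p >= threshold) for u, p in scored]
```

A user with a high rate of sharing messages from outside their social network would rank first and be classified as likely to re-transmit; a user with no such activity would fall below the threshold.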
2,400
8,197
8,197
15,292,242
2,485
An on-loom fabric inspection system comprising at least one imaging device configured to collect images of at least one section of a weaving area of a loom including a shed region, a woven fabric region and a fell region. The system is operable to detect faults in the weaving area and to produce batches of woven fabric, each assigned an objective quality index.
1. An on-loom fabric inspection system comprising: at least one imaging device configured to collect images of at least one section of a weaving area of a loom and to detect at least one fault in said weaving area; wherein said section of the weaving area comprises a shed region, a woven fabric region and a fell region, said fell region being a section of the weaving area where a reed strikes a weft yarn along a fell line during operation of said loom. 2. The system of claim 1 further comprising at least one image processor configured to receive data pertaining to said images and to identify irregularities in said data. 3. The system of claim 1 wherein said imaging device comprises a camera. 4. The system of claim 1 wherein said imaging device is configured to image a plurality of weft yarns in the fell region. 5. The system of claim 4 further comprising an image processor operable to measure weft-spacing. 6. The system of claim 1 further comprising an image processor operable to detect irregularities in image data indicating the occurrence of weaving faults. 7. The system of claim 6 wherein said weaving faults are selected from a group consisting of: slubs, holes, missing yarns, yarn variation, end out, soiled yarns, wrong yarn faults, oil spots, loom-stop marks, thin place, smash marks, open reed, mixed filling, mixed end, knots, jerk-in, dropped picks, drawbacks, burl marks and combinations thereof. 8. The system of claim 1 further comprising a controller operable to respond to detection of weaving faults. 9. The system of claim 8 wherein said controller is operable to stop the loom upon detection of critical weaving faults. 10. The system of claim 8 wherein said controller is operable to adjust the loom settings to correct for weaving faults. 11. The system of claim 8 wherein said controller is operable to assign a quality index to a batch of woven fabric. 12. 
The system of claim 11 wherein said quality index is at least partially based upon deviation of weft-spacing in the fell region from a desired weft-spacing function. 13. The system of claim 2 wherein said image processor is configured to segment a frame of said image data and to analyze each segment separately. 14. The system of claim 13 wherein each segment is analyzed at a different rate. 15. The system of claim 13 wherein at least one segment shows the shed region. 16. The system of claim 13 wherein at least one segment shows the fell region. 17. The system of claim 13 wherein at least one segment shows the newly woven fabric region. 18. A batch of woven fabric assigned an objective quality index by a controller operable to receive data pertaining to images collected by an on-loom imaging device configured to image a plurality of weft yarns in a fell region section of weaving area where a reed strikes a weft yarn along a fell line during operation of a loom, said controller further operable to measure weft-spacing in the fell region, wherein said objective quality index is at least partially based upon deviation of said weft-spacing from a desired weft-spacing function. 19. 
A method for producing woven fabric comprising: providing at least one loom comprising at least one yarn roll, at least one take-up roll, at least one pair of heald frames and at least one reed; providing at least one imaging device configured to collect images of at least one section of a weaving area of said loom; threading an array of warp yarns through the heald frames and the reed; forming a shed region by raising at least one heald frame and lowering at least one other heald frame thereby separating the warp yarns threaded therethrough; inserting a filler yarn through said shed; battening, by said reed in a fell region, said filler yarn against weft yarns along a fell line of the newly woven fabric; said imaging device collecting image data from at least said shed region, said fell region and said woven fabric region; said imaging device transferring said image data to an image processor; said image processor analyzing said image data for irregularities indicative of weaving faults; recording said weaving faults; collecting said woven fabric on said take-up roll; and assigning a value to the quality of the woven fabric. 20. A woven fabric produced by the method of claim 19.
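Claims 5, 11-12 and 18 describe measuring weft-spacing in the fell region and assigning an objective quality index at least partially based on deviation from a desired weft-spacing function. The patent does not give a scoring formula, so the sketch below is only one plausible reading: the function name, the mean-relative-deviation metric, the tolerance, and the 0-100 scale are all assumptions for illustration.

```python
# Sketch: objective quality index from weft-spacing deviation.
# The scoring formula and tolerance here are hypothetical, not from the patent.
from typing import Sequence

def quality_index(measured: Sequence[float],
                  desired: Sequence[float],
                  tolerance: float = 0.05) -> float:
    """Return a 0-100 quality index; 100 means every measured weft-spacing
    matches the desired weft-spacing function, 0 means the mean relative
    deviation reaches the tolerance."""
    if len(measured) != len(desired) or not measured:
        raise ValueError("need equal-length, non-empty spacing sequences")
    # Mean relative deviation of each pick's spacing from its desired value.
    deviation = sum(abs(m - d) / d for m, d in zip(measured, desired)) / len(measured)
    # Map deviation onto 0-100: zero deviation -> 100, >= tolerance -> 0.
    return max(0.0, 100.0 * (1.0 - deviation / tolerance))

# Example: weft-spacings in millimetres for five consecutive picks, against a
# constant desired spacing of 0.50 mm.
desired = [0.50, 0.50, 0.50, 0.50, 0.50]
measured = [0.50, 0.51, 0.49, 0.50, 0.50]
index = quality_index(measured, desired)
```

A controller as in claim 11 could assign such an index to each batch collected on the take-up roll, with the per-pick spacings coming from the image processor of claim 5.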
2,400
8,198
8,198
14,957,708
2,483
A vision system of a vehicle includes a camera and a non-imaging sensor. With the camera and the non-imaging sensor disposed at the vehicle, the field of view of the camera at least partially overlaps the field of sensing of the non-imaging sensor at an overlapping region. A processor is operable to process image data captured by the camera and sensor data captured by the non-imaging sensor to determine a driving situation of the vehicle. Responsive to determination of the driving situation, Kalman Filter parameters associated with the determined driving situation are determined and, using the determined Kalman Filter parameters, a Kalman Filter fusion may be determined. The determined Kalman Filter fusion may be applied to captured image data and captured sensor data to determine an object present in the overlapping region.
1. A vision system of a vehicle, said vision system comprising: a camera configured to be disposed at a vehicle equipped with said vision system so as to have a field of view exterior of the equipped vehicle; wherein said camera comprises a pixelated imaging array having a plurality of photosensing elements; a non-imaging sensor configured to be disposed at the equipped vehicle so as to have a field of sensing exterior of the equipped vehicle; wherein, with said camera and said non-imaging sensor disposed at the equipped vehicle, the field of view of said camera at least partially overlaps the field of sensing of said non-imaging sensor at an overlapping region; a processor operable to process image data captured by said camera and sensor data captured by said non-imaging sensor; wherein, with said camera and said non-imaging sensor disposed at the equipped vehicle, said processor is operable to process captured image data and captured sensor data to determine a driving situation of the equipped vehicle; and wherein, responsive to determination by said processor of the driving situation by processing of captured image data and captured sensor data, Kalman Filter parameters associated with the determined driving situation are determined, and, using the determined Kalman Filter parameters, a Kalman Filter fusion is determined, and wherein the determined Kalman Filter fusion is applied to captured image data and captured sensor data to determine an object present in the overlapping region. 2. 
The vision system of claim 1, wherein said processor is operable to process captured image data and captured sensor data to match objects determined, via processing of captured image data, to be present in the overlapping region and objects determined, via processing of captured sensor data, to be present in the overlapping region, and wherein, responsive to matching of determined objects, said processor determines if the matched objects are stationary or moving and Kalman Filter parameters associated with the determined matched objects are determined. 3. The vision system of claim 2, wherein the Kalman Filter parameters comprise a gain and covariance. 4. The vision system of claim 3, wherein, responsive to matching of objects, said processor determines if a moving object is indicative of an approaching head-on vehicle and, responsive to determination that the moving object is indicative of an approaching head-on vehicle, a gain and covariance associated with an approaching head-on vehicle are determined, and wherein, using the determined gain and covariance, the Kalman Filter fusion is determined. 5. The vision system of claim 3, wherein, responsive to matching of objects, said processor determines if a moving object is not indicative of an approaching head-on vehicle and, responsive to determination that the moving object is not indicative of an approaching head-on vehicle, a gain and covariance associated with other object motion are determined, and wherein, using the determined gain and covariance, the Kalman Filter fusion is determined. 6. 
The vision system of claim 1, wherein said processor is operable to process captured image data and captured sensor data to match objects determined, via processing of captured image data, to be present in the overlapping region and objects determined, via processing of captured sensor data, to be present in the overlapping region, and wherein, responsive to matching of determined objects, said processor determines a classification of the matched objects and Kalman Filter parameters associated with the determined matched objects are determined. 7. The vision system of claim 6, wherein the determined classification comprises one of (i) a vehicle cutting in front of the equipped vehicle and (ii) a vehicle stopped in front of the equipped vehicle. 8. The vision system of claim 1, wherein the Kalman Filter parameters comprise a gain and covariance. 9. The vision system of claim 1, wherein said non-imaging sensor comprises a radar sensor. 10. The vision system of claim 1, wherein said non-imaging sensor comprises one of a lidar sensor and an ultrasonic sensor. 11. The vision system of claim 1, wherein said processor is operable to communicate via a vehicle-to-vehicle communication system of the equipped vehicle. 12. The vision system of claim 1, wherein said camera has a field of view forward of the equipped vehicle and wherein said non-imaging sensor has a field of sensing forward of the equipped vehicle. 13. 
A vision system of a vehicle, said vision system comprising: a camera configured to be disposed at a vehicle equipped with said vision system so as to have a field of view exterior and forward of the equipped vehicle; wherein said camera comprises a pixelated imaging array having a plurality of photosensing elements; a non-imaging sensor configured to be disposed at the equipped vehicle so as to have a field of sensing exterior and forward of the equipped vehicle, wherein said non-imaging sensor comprises one of a radar sensor and a lidar sensor; wherein, with said camera and said non-imaging sensor disposed at the equipped vehicle, the field of view of said camera at least partially overlaps the field of sensing of said non-imaging sensor at an overlapping region; a processor operable to process image data captured by said camera and sensor data captured by said non-imaging sensor; wherein, with said camera and said non-imaging sensor disposed at the equipped vehicle, said processor is operable to process captured image data and captured sensor data to determine a driving situation of the equipped vehicle; wherein the determined driving situation comprises a vehicle cutting in front of the equipped vehicle; and wherein, responsive to determination by said processor of the driving situation by processing of captured image data and captured sensor data, Kalman Filter parameters associated with the determined driving situation are determined. 14. 
The vision system of claim 13, wherein said processor is operable to process captured image data and captured sensor data to match objects determined, via processing of captured image data, to be present in the overlapping region and objects determined, via processing of captured sensor data, to be present in the overlapping region, and wherein, responsive to matching of determined objects, said processor determines if the matched objects are stationary or moving and Kalman Filter parameters associated with the determined matched objects are determined. 15. The vision system of claim 13, wherein the Kalman Filter parameters comprise a gain and covariance. 16. The vision system of claim 13, wherein said processor is operable to process captured image data and captured sensor data to match objects determined, via processing of captured image data, to be present in the overlapping region and objects determined, via processing of captured sensor data, to be present in the overlapping region, and wherein, responsive to matching of determined objects, said processor determines a classification of the matched objects and Kalman Filter parameters associated with the determined matched objects are determined. 17. The vision system of claim 13, wherein, using the determined Kalman Filter parameters, a Kalman Filter fusion is determined, and wherein the determined Kalman Filter fusion is applied to captured image data and captured sensor data to determine an object present in the overlapping region. 18. 
A vision system of a vehicle, said vision system comprising: a camera configured to be disposed at a vehicle equipped with said vision system so as to have a field of view exterior and forward of the equipped vehicle; wherein said camera comprises a pixelated imaging array having a plurality of photosensing elements; a non-imaging sensor configured to be disposed at the equipped vehicle so as to have a field of sensing exterior and forward of the equipped vehicle, wherein said non-imaging sensor comprises one of a radar sensor and a lidar sensor; wherein, with said camera and said non-imaging sensor disposed at the equipped vehicle, the field of view of said camera at least partially overlaps the field of sensing of said non-imaging sensor at an overlapping region; a processor operable to process image data captured by said camera and sensor data captured by said non-imaging sensor; wherein, with said camera and said non-imaging sensor disposed at the equipped vehicle, said processor is operable to process captured image data and captured sensor data to determine a driving situation of the equipped vehicle; wherein, responsive to determination by said processor of the driving situation by processing of captured image data and captured sensor data, Kalman Filter parameters associated with the determined driving situation are determined; and wherein said processor is operable to process captured image data and captured sensor data to match objects determined, via processing of captured image data, to be present in the overlapping region and objects determined, via processing of captured sensor data, to be present in the overlapping region. 19. The vision system of claim 18, wherein, responsive to matching of determined objects, said processor determines if the matched objects are stationary or moving and Kalman Filter parameters associated with the determined matched objects are determined. 20. 
The vision system of claim 18, wherein, using the determined Kalman Filter parameters, a Kalman Filter fusion is determined, and wherein the determined Kalman Filter fusion is applied to captured image data and captured sensor data to determine an object present in the overlapping region.
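The vision-system claims (claims 1, 3-5 and 8) describe determining a driving situation, selecting Kalman Filter parameters (a gain and covariance) associated with that situation, and applying the resulting fusion to camera and radar/lidar data. A minimal one-dimensional sketch of that idea is below; the situation names, covariance values, and the reduction to a single static gain are all assumptions, not the patent's implementation.

```python
# Sketch: covariance-weighted fusion of a camera range estimate and a radar
# range estimate, with measurement variances selected per driving situation.
# All numeric values are hypothetical.

SITUATION_COVARIANCES = {
    # (camera variance, radar variance) in m^2 for each driving situation.
    "head_on_approach": (4.0, 0.25),  # radar range trusted more head-on
    "cut_in":           (0.5, 1.0),   # camera trusted more for a cut-in
    "stationary":       (1.0, 1.0),
}

def fuse_range(camera_range: float, radar_range: float, situation: str) -> float:
    """Static Kalman-style update: weight each sensor by the inverse of its
    situation-dependent measurement variance."""
    var_cam, var_radar = SITUATION_COVARIANCES[situation]
    # Kalman gain pulling the camera estimate toward the radar measurement.
    gain = var_cam / (var_cam + var_radar)
    return camera_range + gain * (radar_range - camera_range)

# Head-on: the fused estimate sits close to the low-variance radar reading.
fused = fuse_range(52.0, 50.0, "head_on_approach")
```

In the full system the gain and covariance would evolve per time step inside a proper Kalman Filter; the point of the sketch is only the situation-dependent parameter lookup that the claims describe.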
2,400
8,199
8,199
14,513,751
2,465
A vehicle gateway module is configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks. The gateway module has a cellular data link which provides a direct connection between the gateway module and the Internet whereby communication between a vehicle device connected to a vehicle network connected to the gateway module and the Internet is enabled via the gateway module and the cellular data link.
1. A system for a vehicle comprising: a gateway module configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks, the gateway module having a cellular data link which provides a direct connection between the gateway module and the Internet whereby communication between a vehicle device connected to a vehicle network connected to the gateway module and the Internet is enabled via the gateway module and the cellular data link. 2. The system of claim 1 wherein: the gateway module is further configured to receive re-flash software via the cellular data link from a remote entity connected to the Internet for receipt by a recipient vehicle device connected to a vehicle network connected to the gateway module. 3. The system of claim 1 wherein: the gateway module is further configured to exchange communication communicated via the cellular data link between a remote entity connected to the Internet and a vehicle device connected to a vehicle network connected to the gateway module. 4. The system of claim 1 wherein: the vehicle networks are respectively one of a Controller Area Network (CAN), a Local Interconnect Network (LIN), an Ethernet network, a FlexRay™ network, and a Media Oriented Systems Transport (MOST) network. 5. The system of claim 1 wherein: the vehicle networks use different communication protocols from one another. 6. The system of claim 1 wherein: the gateway module is further configured to enable communication between vehicle devices connected to vehicle networks connected to the gateway module in which the vehicle networks use different communication protocols from one another. 7. The system of claim 1 wherein: the cellular data link is one of a 3G data link and a 4G data link. 8. The system of claim 1 wherein: the gateway module further includes a first wireless link, the first wireless link being one of a WiFi™ wireless link and a Bluetooth™ wireless link. 9. 
The system of claim 8 wherein: the gateway module further includes a second wireless link, the second wireless link being the other one of a WiFi™ wireless link and a Bluetooth™ wireless link. 10. A method for a vehicle comprising: providing a gateway module configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks; and communicating, via the gateway module and a cellular data link of the gateway module in which the cellular data link provides a direct connection between the gateway module and the Internet, between a vehicle device connected to a vehicle network connected to the gateway module and the Internet. 11. The method of claim 10 further comprising: receiving by the gateway module, via the cellular data link, re-flash software for receipt by a recipient vehicle device connected to a vehicle network connected to the gateway module. 12. The method of claim 11 further comprising: transferring by the gateway module the re-flash software to the recipient vehicle device over the vehicle network connected to the recipient vehicle device. 13. The method of claim 10 further comprising: transferring by a remote entity connected to the Internet, via the cellular data link and the gateway module, re-flash software to a recipient vehicle device connected to a vehicle network connected to the gateway module. 14. The method of claim 10 further comprising: communicating, via the cellular data link and the gateway module, between a vehicle device connected to a vehicle network connected to the gateway module and a remote entity connected to the Internet. 15. The method of claim 10 further comprising: controlling by a remote entity connected to the Internet, via the cellular data link and the gateway module, a targeted vehicle device connected to a vehicle network connected to the gateway module. 16. 
The method of claim 10 further comprising: receiving by a remote entity connected to the Internet, via the cellular data link and the gateway module, diagnostic information from a targeted vehicle device connected to a vehicle network connected to the gateway module. 17. A method for a vehicle comprising: providing a gateway module configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks and having a cellular data link which provides a direct connection between the gateway module and the Internet; and downloading from a remote entity connected to the Internet, via the cellular data link, re-flash software to a recipient vehicle device connected to a vehicle network connected to the gateway module. 18. The method of claim 17 wherein: the downloading includes communicating from the remote entity, via the cellular data link, the re-flash software to the gateway module, and subsequently transferring by the gateway module the re-flash software to the recipient vehicle device over the vehicle network connected to the recipient vehicle device. 19. The method of claim 17 wherein: the vehicle devices include vehicle controllers and vehicle sensors. 20. The method of claim 17 wherein: the cellular data link is one of a 3G data link and a 4G data link.
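The claims above describe the gateway at the level of function only. As a loose illustration of the routing behavior in claims 1 and 6 (forwarding traffic arriving over the cellular data link onto whichever vehicle network the target device is connected to), the following sketch models the gateway as a device-to-network routing table. All names here (`GatewayModule`, `VehicleNetwork`, `from_cellular`, the device IDs) are hypothetical and are not taken from the patent; this is a minimal stand-in, not the patented implementation.

```python
from dataclasses import dataclass, field


@dataclass
class VehicleNetwork:
    """Hypothetical in-memory stand-in for a CAN/LIN/Ethernet/etc. bus."""
    name: str
    protocol: str
    delivered: list = field(default_factory=list)

    def send(self, device_id: str, payload: bytes) -> None:
        # Record delivery; a real bus driver would frame and transmit here.
        self.delivered.append((device_id, payload))


class GatewayModule:
    """Routes frames between the cellular uplink and the vehicle networks."""

    def __init__(self) -> None:
        self._routes: dict[str, VehicleNetwork] = {}  # device id -> network

    def attach(self, network: VehicleNetwork, device_ids: list[str]) -> None:
        for dev in device_ids:
            self._routes[dev] = network

    def from_cellular(self, device_id: str, payload: bytes) -> str:
        # A frame arriving over the cellular data link is forwarded onto
        # whichever vehicle network the target device lives on, regardless
        # of that network's protocol (cf. claims 1 and 6).
        net = self._routes[device_id]
        net.send(device_id, payload)
        return net.name


gw = GatewayModule()
can0 = VehicleNetwork("can0", "CAN")
lin0 = VehicleNetwork("lin0", "LIN")
gw.attach(can0, ["engine_ecu"])
gw.attach(lin0, ["window_motor"])
print(gw.from_cellular("engine_ecu", b"\x01\x02"))  # → can0
```

The per-device routing table is one plausible design choice; the claims themselves do not specify how the gateway resolves which network a recipient device sits on.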
A vehicle gateway module is configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks. The gateway module has a cellular data link which provides a direct connection between the gateway module and the Internet whereby communication between a vehicle device connected to a vehicle network connected to the gateway module and the Internet is enabled via the gateway module and the cellular data link.
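Claims 11-12 and 17-18 have the gateway receive re-flash software over the cellular link and then transfer it to the recipient device over that device's vehicle network. One practical detail those claims leave open is that a downloaded image is far larger than a single bus frame (a classic CAN data frame carries at most 8 payload bytes), so the gateway must segment it. The sketch below shows that segmentation step under that assumption; the function names are hypothetical, and a real transfer would additionally use a transport protocol with sequencing and flow control.

```python
def chunk_for_can(image: bytes, frame_size: int = 8) -> list[bytes]:
    """Split a re-flash image into bus-sized chunks (8 bytes for classic CAN)."""
    return [image[i:i + frame_size] for i in range(0, len(image), frame_size)]


def reassemble(frames: list[bytes]) -> bytes:
    """Recipient-side reconstruction of the image from received frames."""
    return b"".join(frames)


image = bytes(range(20))          # stand-in for downloaded re-flash software
frames = chunk_for_can(image)     # 8 + 8 + 4 bytes -> 3 frames
print(len(frames))                # → 3
```

In practice this role is filled by ISO-TP (ISO 15765-2) or a similar segmented-transfer protocol on top of the raw bus, but the claims do not name one.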