Dataset schema (column · dtype · min · max; string columns report min/max text length):

Unnamed: 0          int64    0       350k
level_0             int64    0       351k
ApplicationNumber   int64    9.75M   96.1M
ArtUnit             int64    1.6k    3.99k
Abstract            string   1       8.37k
Claims              string   3       292k
abstract-claims     string   68      293k
TechCenter          int64    1.6k    3.9k
Unnamed: 0: 7,400 · level_0: 7,400 · ApplicationNumber: 14,320,582 · ArtUnit: 2,491

Abstract:
For a host that executes one or more guest virtual machines (GVMs), some embodiments provide a novel encryption method for encrypting the data messages sent by the GVMs. The method initially receives a data message to send for a GVM executing on the host. The method then determines whether it should encrypt the data message based on a set of one or more encryption rules. When the method determines that it should encrypt the received data message, it encrypts the data message and forwards the encrypted data message to its destination; otherwise, the method forwards the received data message unencrypted to its destination. In some embodiments, the host encrypts the data messages of different GVMs that execute on the host differently. When two different GVMs are part of two different logical overlay networks that are implemented on a common network fabric, the method in some embodiments encrypts the data messages exchanged between the GVMs of one logical network differently than the data messages exchanged between the GVMs of the other logical network. In some embodiments, the method can also encrypt different types of data messages from the same GVM differently. Also, in some embodiments, the method can dynamically enforce encryption rules in response to dynamically detected events, such as malware infections.
Claims:
1. A method of providing encryption services in a system with a plurality of computing machines, the method comprising: defining a plurality of encryption groups, each group having a set of computing-machine members; defining a set of encryption policies for each group; based on a dynamically detected event that relates to a particular machine, dynamically adding the particular machine as a member to at least one particular group; and applying the set of encryption policies for the particular group to the particular machine. 2. The method of claim 1, wherein applying the set of encryption policies comprises transmitting the set of encryption policies from a first device to a second device, so that the set of encryption policies will be enforced for the particular machine. 3. The method of claim 2, wherein the particular machine is a virtual machine, and the second device is a host computer on which the virtual machine executes. 4. The method of claim 3, wherein the first device is a computer that was used to define the set of encryption policies and that stores the set of encryption policies. 5. The method of claim 3, wherein the first device is a computer on which the set of policies is stored. 6. The method of claim 2, wherein the first device is a computer on which the set of policies is stored; wherein the second device is the particular machine. 7. The method of claim 1, wherein applying the set of encryption policies comprises transmitting, from a first device to a second device, an update to the membership of the particular group, so that the set of encryption policies will be enforced for the particular machine. 8. The method of claim 7, wherein the particular machine is a virtual machine, and the second device is a host computer on which the virtual machine executes. 9. The method of claim 8, wherein the first device is a computer that was used to define the set of encryption policies and that stores the set of encryption policies. 10. 
The method of claim 8, wherein the first device is a computer on which the set of policies is stored. 11. The method of claim 7, wherein the first device is a computer on which the set of policies is stored; wherein the second device is the particular machine. 12. The method of claim 1, wherein applying the set of encryption policies comprises transmitting, from a first device to a second device, at least one encryption rule related to the set of encryption policies, said encryption rule for applying to data messages of the particular machine. 13. The method of claim 12, wherein the particular machine is a virtual machine, and the second device is a host computer on which the virtual machine executes. 14. The method of claim 13, wherein the first device is a computer that was used to define the set of encryption policies and that stores the set of encryption policies. 15. The method of claim 13, wherein the first device is a computer on which the set of policies is stored. 16. The method of claim 12, wherein the first device is a computer on which the set of policies is stored; wherein the second device is the particular machine. 17. A non-transitory machine readable medium storing a program for providing encryption services in a system with a plurality of computing machines, the program comprising sets of instructions for: defining a plurality of encryption groups, each group having a set of computing-machine members; defining a set of encryption policies for each group; based on a dynamically detected event that relates to a particular machine, dynamically adding a particular machine as a member to at least one particular group; and applying the set of encryption policies for the particular group to the particular machine. 18-26. (canceled) 27. 
The non-transitory machine readable medium of claim 17, wherein the program further comprises sets of instructions for: based on another dynamically detected event that relates to the particular machine, dynamically removing the particular machine as the member of the group; and removing the applicability of the set of encryption policies to the particular machine. 28. The non-transitory machine readable medium of claim 27, wherein the program further comprises a set of instructions for discarding an encryption rule that was defined to enforce the set of encryption policies to the particular machine. 29. The non-transitory machine readable medium of claim 17, wherein when an encryption group is initially defined, the encryption group has no computing machine as a member. 30. The non-transitory machine readable medium of claim 17, wherein the set of encryption policies includes one or more encryption policies based on which one or more encryption rules have to be defined for a computing machine that is a member of an encryption group.
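The abstract above describes a per-message decision: match each outgoing data message against a set of encryption rules, encrypt on a hit, and otherwise forward it in the clear, with different rules possible per GVM or per logical overlay network. A minimal Python sketch of that rule-dispatch flow follows; all names are hypothetical, and a toy XOR stands in for a real cipher. This is an illustration of the described flow, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Message:
    src_vm: str            # sending GVM (hypothetical field names)
    logical_network: str   # logical overlay network the GVM belongs to
    payload: bytes


@dataclass
class EncryptionRule:
    matches: Callable[[Message], bool]  # predicate over message attributes
    key: bytes                          # key associated with this rule


def toy_encrypt(payload: bytes, key: bytes) -> bytes:
    # Toy repeating-key XOR, standing in for a real cipher (illustration only).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))


def process_message(msg: Message, rules: List[EncryptionRule]) -> bytes:
    """Encrypt with the first matching rule; otherwise forward unencrypted."""
    for rule in rules:
        if rule.matches(msg):
            return toy_encrypt(msg.payload, rule.key)
    return msg.payload
```

Per-network keys fall out naturally: a rule whose predicate tests `logical_network` encrypts one overlay's traffic under a key the other overlay never sees.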
TechCenter: 2,400

Unnamed: 0: 7,401 · level_0: 7,401 · ApplicationNumber: 14,180,548 · ArtUnit: 2,487

Abstract:
Detection of three dimensional obstacles using a system mountable in a host vehicle including a camera connectible to a processor. Multiple image frames are captured in the field of view of the camera. In the image frames, an imaged feature is detected of an object in the environment of the vehicle. The image frames are portioned locally around the imaged feature to produce imaged portions of the image frames including the imaged feature. The image frames are processed to compute a depth map locally around the detected imaged feature in the image portions. Responsive to the depth map, it is determined if the object is an obstacle to the motion of the vehicle.
Claims:
1. A method for detection of three dimensional obstacles, the method performed by a system mountable in a host vehicle, wherein the system includes a camera operatively connectible to a processor, the method comprising: capturing a plurality of image frames in the field of view of the camera; and in the image frames, detecting an imaged feature of an object in the environment of the vehicle; portioning the image frames locally around the imaged feature to produce image portions of the image frames including the imaged feature; processing the image frames thereby computing a depth map locally around the detected imaged feature in said image portions, wherein the depth map includes an image of the feature with a color or grayscale coordinate related to a function of distance from the camera to the object; and responsive to the depth map, determining if the object is an obstacle to the motion of the vehicle. 2. The method of claim 1, further comprising: representing the object with a plurality of models, computing a plurality of model depth maps of the respective models; comparing the depth map of the detected feature with the model depth maps; and based on the comparison, said determining that the object is an obstacle or not an obstacle to the motion of the vehicle. 3. The method of claim 2, wherein the models are selected from the group consisting of: a horizontal planar model, a vertical planar model, a mixed model including horizontal and vertical portions, a spherical model, a circular model, a model of a guard rail, a model of lane marker, a model of a road curb and a model of an upright pedestrian. 4. The method of claim 1, wherein said computing the depth map is performed only locally around the detected feature in said image portions. 5. The method of claim 1, further comprising: adjusting the resolution of the computation of the depth map only to achieve an accuracy required based on the imaged feature. 6. 
A system for detection of three dimensional obstacles, the system mountable in a host vehicle, the system including a camera operatively connectible to a processor, the system operable to: capture a plurality of image frames in the field of view of the camera; detect an imaged feature in the image frames of an object in the environment of the vehicle; portion the image frames locally around the imaged feature to produce imaged portions of the image frames including the imaged feature; process the image frames and compute thereby a depth map locally around the detected imaged feature in said image portions, wherein the depth map includes an image of the feature with a color or grayscale coordinate related to a function of distance from the camera to the object; and responsive to the depth map, determine if the object is an obstacle to the motion of the vehicle. 7. The system of claim 6, further operable to: represent the object with a plurality of models, compute a plurality of model depth maps of the respective models; compare the depth map of the detected feature with the model depth maps; and based on the comparison, determine that the object is an obstacle or not an obstacle to the motion of the vehicle. 8. The system of claim 7, wherein the models are selected from the group consisting of: a horizontal planar model, a vertical planar model, a mixed model including horizontal and vertical portions, a spherical model, a circular model, a model of a guard rail, a model of lane marker, a model of a road curb and a model of an upright pedestrian. 9. The system of claim 6, wherein the depth map is computed only locally around the detected feature in said image portions. 10. The system of claim 6, further operable to: adjust the resolution of the computation of the depth map only to achieve an accuracy required based on the imaged feature.
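Claim 2 above compares the depth map computed locally around a detected feature against candidate surface models (e.g., a horizontal road plane versus a vertical obstacle surface). A toy Python sketch of that comparison on a small depth patch follows, reducing the patch to row-mean depths and comparing least-squares residuals of a constant-depth (vertical) model versus a linear-in-row (horizontal) model. This is a simplified, hypothetical illustration, not the patent's implementation.

```python
def classify_patch(depth_patch):
    """Classify a local depth patch as 'obstacle' (vertical surface:
    roughly constant depth across image rows) or 'road' (horizontal
    plane: depth varies linearly with image row)."""
    row_means = [sum(row) / len(row) for row in depth_patch]
    n = len(row_means)
    mean_d = sum(row_means) / n

    # Residual of the constant-depth (vertical plane) model.
    vert_res = sum((d - mean_d) ** 2 for d in row_means)

    # Least-squares fit of depth as a linear function of row index
    # (horizontal plane model), and its residual.
    xs = list(range(n))
    mx = sum(xs) / n
    slope = sum((x - mx) * (d - mean_d) for x, d in zip(xs, row_means)) \
        / sum((x - mx) ** 2 for x in xs)
    horiz_res = sum((d - (mean_d + slope * (x - mx))) ** 2
                    for x, d in zip(xs, row_means))

    return "obstacle" if vert_res <= horiz_res else "road"
```

Restricting the depth computation to the patch around the detected feature, as claims 4 and 9 state, is what keeps this comparison cheap relative to a full-frame depth map.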
TechCenter: 2,400

Unnamed: 0: 7,402 · level_0: 7,402 · ApplicationNumber: 13,299,102 · ArtUnit: 2,477

Abstract:
A modular switching network node for a communications network, e.g., an industrial communications network, where the modular switching network node comprises a switching network node base unit and at least one port module, the at least one port module comprises at least one connection interface for coupling to the communications network, and where the modular switching network node is configured to forward communication data arriving over one of the connection interfaces of the modular switching network node to at least one additional connection interface of the modular switching network node. The switching network node base unit is configured such that at least one of the port modules is swappable for a functional module to expand the functionality of the switching network node.
Claims:
1. A modular switching network node for a communications network, comprising: a switching network node base unit; and at least one port module comprising at least one connection interface for coupling to the communications network; wherein the modular switching network node is configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node; and wherein the switching network node base unit is configured such that the at least one port module is swappable for a functional module to expand a functionality of the modular switching network node. 2. The modular switching network node as claimed in claim 1, wherein the switching network node base unit comprises at least one module receiving region configured to receive one of the at least one port module and the functional module. 3. The modular switching network node as claimed in claim 2, wherein the switching network node base unit comprises an internal network bus for forwarding communication data received over connection interfaces within the switching network node base unit, and a system bus; wherein the system bus is configured for communication with a central control unit of the modular switching network node; and wherein the switching network node base unit comprises, in a region of the at least one module receiving region, a network bus interface for contacting port modules located in the module-receiving region and a system bus interface for contacting functional modules located in the at least one module-receiving region. 4. The modular switching network node as claimed in claim 2, wherein the switching network node base unit is configured to at least one of detect and configure a functional module located in one of the at least one module receiving region. 5. 
The modular switching network node as claimed in claim 3, wherein the switching network node base unit is configured to at least one of detect and configure a functional module located in one of the at least one module-receiving region. 6. The modular switching network node as claimed in claim 2, wherein the switching network node base unit is configured to at least one of configure and diagnose functional modules located in one of the at least one module receiving region over a user interface of the switching network node. 7. The modular switching network node of claim 1, wherein the communications network comprises an industrial communications network. 8. A functional module for a switching network node base unit of a modular switching network node, wherein the switching network node base unit is configured such that a port module is swappable for the functional module to expand a functionality of the modular switching network node, the functional module comprising: at least one connection interface, wherein the functional module is configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node. 9. The functional module as claimed in claim 8, wherein the functional module comprises a module system bus interface for communication with a system bus interface of a module-receiving region of the switching network node base unit in communication with a system bus of the switching network node base unit, wherein the system bus is configured for communication with a central control unit of the modular switching network node. 10.
The functional module as claimed in claim 9, wherein the functional module further comprises a module network bus interface for communication with a network bus interface of the module-receiving region of the switching network node base unit, the network bus interface of the module-receiving region connected to an internal network bus of the switching network node base unit for forwarding communication data received over connection interfaces within the switching network node base unit. 11. The functional module as claimed in claim 8, wherein the functional module comprises one of an additional arithmetic logic unit and central unit configured to support a central control unit of the modular switching network node. 12. The functional module as claimed in claim 8, wherein the functional module comprises a server module for implementing a server functionality with an independent central unit. 13. The functional module as claimed in claim 8, wherein the functional module comprises at least one of a display module and an operating module for the modular switching network node. 14. The functional module as claimed in claim 8, wherein the functional module comprises an energy saving-module for at least one of setting up and operating energy saving functionalities of the modular switching network node or for the modular switching network node. 15. The functional module as claimed in claim 8, wherein the functional module comprises a communication security module including a functionality for at least one of setting up and increasing security functions during communication over at least one of the connection interfaces of the modular switching network node. 16. The functional module as claimed in claim 8, wherein the functional module comprises a real time-communication module configured to one of set up, expand and improve real-time communication over at least one of the connection interfaces of the modular switching network node. 17. 
A modular switching network node for a communications network, comprising: a switching network node base unit; at least one of: a port module comprising at least one connection interface for coupling to the communications network; and a functional module configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node; the switching network node base unit being configured such that the port module is swappable for the functional module to expand a functionality of the switching network node; wherein the switching network node base unit comprises at least one module receiving region configured to receive one of the port module and the functional module; and wherein the modular switching network node is configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node.
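The claims above describe a base unit whose module-receiving regions accept either a port module (adding connection interfaces) or a functional module (adding other functionality), with the two swappable in the same slot and data forwarded from one connection interface to the others. A hypothetical Python sketch of that slot model follows; all class and method names are invented, and forwarding is simplified to flooding. This is not the patent's implementation.

```python
class PortModule:
    """A port module exposing one or more connection interfaces."""
    def __init__(self, interfaces):
        self.interfaces = list(interfaces)


class FunctionalModule:
    """A functional module (e.g., security or server module); no ports."""
    def __init__(self, name):
        self.name = name


class SwitchBaseUnit:
    """Base unit with module-receiving regions ('slots') that accept
    either a port module or a functional module."""
    def __init__(self, n_slots):
        self.slots = [None] * n_slots

    def install(self, slot, module):
        self.slots[slot] = module

    def swap(self, slot, functional_module):
        # Swap out whatever occupies the slot for a functional module,
        # returning the displaced module.
        old = self.slots[slot]
        self.slots[slot] = functional_module
        return old

    def ports(self):
        # Connection interfaces contributed by installed port modules.
        return [i for m in self.slots if isinstance(m, PortModule)
                for i in m.interfaces]

    def forward(self, in_iface, payload):
        # Flood the payload to every other connection interface
        # (a real switch would consult a forwarding table).
        return {i: payload for i in self.ports() if i != in_iface}
```

Swapping a port module for a functional module shrinks the set of connection interfaces while the forwarding logic keeps working over the remaining ports, which is the trade the claims describe.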
A modular switching network node for a communications network, i.e., an industrial communications network, where the modular switching network node comprises a switching network node base unit and at least one port module, the at least one port module comprises at least one connection interface for coupling to the communications network, and where the modular switching network node is configured to forward communication data over one of the connection interfaces of the modular switching network node to at least one additional connection interface of the modular switching network node. The switching network node base unit is configured such that at least one of the port modules is swappable for a functional module to expand the functionality of the switching network node.1. A modular switching network node for a communications network, comprising: a switching network node base unit; and at least one port module comprising at least one connection interface for coupling to the communications network; wherein the modular switching network node is configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node; and wherein the switching network node base unit is configured such that the at least one port module is swappable for a functional module to expand a functionality of the modular switching network node. 2. The modular switching network node as claimed in claim 1, wherein the switching network node base unit comprises at least one module receiving region configured to receive one of the at least one port module and the functional module. 3. 
The modular switching network node as claimed in claim 2, wherein the switching network node base unit comprises an internal network bus for forwarding communication data received over connection interfaces within the switching network node base unit, and a system bus; wherein the system bus is configured for communication with a central control unit of the modular switching network node; and wherein the switching network node base unit comprises, in a region of the at least one module receiving region, a network bus interface for contacting port modules located in the module-receiving region and a system bus interface for contacting functional modules located in the at least one module-receiving region. 4. The modular switching network node as claimed in claim 2, wherein the switching network node base unit is configured to at least one of detect and configure a functional module located in one of the at least one module receiving region. 5. The modular switching network node as claimed in claim 3, wherein the switching network node base unit is configured to at least one of detect and configure a functional module located in one of the at least one module-receiving region. 6. The modular switching network node as claimed in claim 2, wherein the switching network node base unit is configured to at least one of configure and diagnose functional modules located in one of the at least one module receiving region over a user interface of the switching network node. 7. The modular switching network node of claim 1, wherein the communications network comprises an industrial communications network. 8. 
A functional module for a switching network node base unit of a modular switching network node, wherein the switching network node base unit is configured such that a port module is swappable for the functional module to expand a functionality of the modular switching network node, the functional module comprising: at least one connection interface, wherein the functional module is configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node. 9. The functional module as claimed in claim 8, wherein the functional module comprises a module system bus interface for communication with a system bus interface of a module-receiving region of the switching network node base unit in communication with a system bus of the switching network node base unit, wherein the system bus is configured for communication with a central control unit of the modular switching network node. 10. The functional module as claimed in claim 9, wherein the functional module further comprises a module network bus interface for communication with a network bus interface of the module-receiving region of the switching network node base unit, the network bus interface of the module-receiving region connected to an internal network bus of the switching network node base unit for forwarding communication data received over connection interfaces within the switching network node base unit. 11. The functional module as claimed in claim 8, wherein the functional module comprises one of an additional arithmetic logic unit and central unit configured to support a central control unit of the modular switching network node. 12. The functional module as claimed in claim 8, wherein the functional module comprises a server module for implementing a server functionality with an independent central unit. 13. 
The functional module as claimed in claim 8, wherein the functional module comprises at least one of a display module and an operating module for the modular switching network node. 14. The functional module as claimed in claim 8, wherein the functional module comprises an energy-saving module for at least one of setting up and operating energy saving functionalities of the modular switching network node or for the modular switching network node. 15. The functional module as claimed in claim 8, wherein the functional module comprises a communication security module including a functionality for at least one of setting up and increasing security functions during communication over at least one of the connection interfaces of the modular switching network node. 16. The functional module as claimed in claim 8, wherein the functional module comprises a real-time communication module configured to one of set up, expand and improve real-time communication over at least one of the connection interfaces of the modular switching network node. 17. 
A modular switching network node for a communications network, comprising: a switching network node base unit; at least one of: a port module comprising at least one connection interface for coupling to the communications network; and a functional module configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node; the switching network node base unit being configured such that the port module is swappable for the functional module to expand a functionality of the switching network node; wherein the switching network node base unit comprises at least one module receiving region configured to receive one of the port module and the functional module; and wherein the modular switching network node is configured to forward communication data arriving over the at least one connection interface to at least one additional connection interface of the modular switching network node.
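The claims above describe a base unit whose module-receiving regions accept either a port module (attached via a network bus interface) or a functional module (attached via a system bus interface), and a base unit that detects which kind is inserted. A minimal sketch of that structure, with all class and method names invented for illustration only (the patent prescribes no implementation):

```python
# Hypothetical model of the claimed modular switching node: module-receiving
# regions accept either a port module or a functional module, and the base
# unit detects the module type to select the matching internal bus.

class PortModule:
    """Carries connection interfaces toward the communications network."""
    def __init__(self, interfaces):
        self.interfaces = interfaces  # e.g. ["port0", "port1"]

class FunctionalModule:
    """Expands node functionality (server, display, security, ...)."""
    def __init__(self, function):
        self.function = function

class SwitchingNodeBaseUnit:
    def __init__(self, num_regions):
        # Each module-receiving region holds at most one module.
        self.regions = [None] * num_regions

    def insert(self, slot, module):
        """Swap whatever occupies the region for the given module."""
        self.regions[slot] = module
        return self.detect(slot)

    def detect(self, slot):
        """Detect the inserted module type to pick the right bus interface."""
        module = self.regions[slot]
        if isinstance(module, PortModule):
            return "network-bus"
        if isinstance(module, FunctionalModule):
            return "system-bus"
        return "empty"

node = SwitchingNodeBaseUnit(num_regions=4)
node.insert(0, PortModule(["port0"]))
# The port module is swappable for a functional module in the same region:
bus = node.insert(0, FunctionalModule("security"))
```

The swap in the last line mirrors claim 1: the same receiving region that held a port module now holds a functional module, and the base unit's detection routes it to the system bus instead of the network bus.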
2,400
7,403
7,403
14,568,621
2,439
One embodiment provides a method, including: accessing, on a mobile end user device, a media file; processing, using a processor of the mobile end user device, the media file to characterize the media file; detecting, using the processor, at least one privacy-sensitive characteristic of the media file; and setting an indicator, using the processor, denoting the media file as privacy-sensitive prior to permitting the media file to be stored on a cloud account device. Other embodiments are described and claimed.
1. A method, comprising: accessing, on a device, a media file; processing, using a processor of the device, data of the media file to characterize the media file as being privacy sensitive or not privacy sensitive; wherein the processing comprises processing the data of the media file to detect at least one privacy sensitive characteristic; setting an indicator, using the processor, denoting the media file as privacy-sensitive; and automatically encrypting the data of the media file in response to characterizing the media file as being privacy sensitive. 2. The method of claim 1, wherein the indicator is processed to exclude the media file from cloud synchronization. 3. The method of claim 1, further comprising prompting a user regarding the indicator prior to permitting the media file to be stored on a cloud account device. 4. (canceled) 5. The method of claim 1, wherein the media is automatically encrypted if it is determined to be privacy sensitive prior to storage in a location selected from the group consisting of local device storage, cloud storage, and removable storage. 6. The method of claim 5, wherein the encrypting applies an encryption factor matching an encryption factor used for the device and not a remote storage location. 7. The method of claim 6, wherein the encryption factor is a biometric encryption factor. 8. The method of claim 1, wherein the processing comprises processing the media file to recognize a characteristic selected from the group consisting of: a threshold amount of exposed skin, an individual, a geographic location, and a topic. 9. The method of claim 1, wherein the processing comprises image processing. 10. The method of claim 1, further comprising automatically obfuscating the media file based on the presence of the indicator. 11. 
A device, comprising: a network communication device for communicating with a networked device; a processor coupled to the network communication device; a memory that stores instructions executable by the processor to: access, on the device, a media file; process the data of the media file to characterize the media file as being privacy sensitive or not privacy sensitive; wherein the processing comprises processing the data of the media file to detect at least one privacy-sensitive characteristic; set an indicator denoting the media file as privacy-sensitive; and automatically encrypt the data of the media file in response to characterizing the media file as being privacy sensitive. 12. The device of claim 11, wherein the indicator is processed to exclude the media file from cloud synchronization. 13. The device of claim 11, wherein the instructions are executed by the processor to prompt a user regarding the indicator prior to permitting the media file to be stored on a cloud account device. 14. (canceled) 15. The device of claim 11, wherein the media is automatically encrypted if it is determined to be privacy sensitive prior to storage in a location selected from the group consisting of local device storage, cloud storage, and removable storage. 16. The device of claim 15, wherein encrypting applies an encryption factor matching an encryption factor used for the device and not a remote storage location. 17. The device of claim 16, wherein the encryption factor is a biometric encryption factor. 18. The device of claim 11, wherein processing the media file comprises processing the media file to recognize a characteristic selected from the group consisting of: a threshold amount of exposed skin, an individual, a geographic location, and a topic. 19. The device of claim 11, wherein processing the media file comprises image processing. 20. 
A product, comprising: a storage device having code stored therewith, the code being executable by a processor of a device and comprising: code that accesses, on a device, a media file; code that processes data of the media file to characterize the media file as being privacy sensitive or not privacy sensitive; wherein the processing comprises processing the data of the media file to detect at least one privacy-sensitive characteristic; code that sets an indicator, using the processor, denoting the media file as privacy-sensitive; and code that automatically encrypts the data of the media file in response to characterizing the media file as being privacy sensitive.
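The claimed flow is: process the media file, detect a privacy-sensitive characteristic, set an indicator, auto-encrypt, and exclude the file from cloud sync. A hedged sketch follows, in which a tag lookup stands in for the image processing and an XOR placeholder stands in for the cipher; `detect_privacy`, `PRIVACY_TAGS`, and the dict layout are all invented for illustration and are not the patent's method:

```python
# Minimal sketch of the claimed privacy pipeline. A real device would use
# a proper image classifier and e.g. AES with a device- or biometric-derived
# key; the keyword check and XOR below are placeholders only.

PRIVACY_TAGS = {"face", "location", "exposed_skin"}

def detect_privacy(media):
    """Detect at least one privacy-sensitive characteristic (here: via tags)."""
    return bool(PRIVACY_TAGS & set(media.get("tags", [])))

def process_media(media, key=0x5A):
    """Set the privacy indicator and auto-encrypt before any cloud sync."""
    media["privacy_sensitive"] = detect_privacy(media)
    if media["privacy_sensitive"]:
        # Placeholder cipher: XOR every byte with the key.
        media["data"] = bytes(b ^ key for b in media["data"])
        # The indicator excludes the file from cloud synchronization (claim 2).
        media["cloud_sync"] = False
    return media

photo = {"data": b"raw-bytes", "tags": ["face", "beach"]}
processed = process_media(photo)
```

Note the ordering the claims require: encryption and the sync flag are set in direct response to the characterization, before any storage decision is made.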
2,400
7,404
7,404
13,957,534
2,461
A method and apparatus for exchanging antenna capability information between a transmitting station (STA) and a receiving STA in a wireless communication system may include an antenna capability information element (IE) that includes information regarding the capability of the transmitting STA. The antenna capability IE may be transmitted from the transmitting STA to the receiving STA prior to data transmission between the transmitting STA and the receiving STA. When used in a wireless local area network, the antenna capability IE may be transmitted as part of a management frame, control frame, or data frame.
1. A first 802.11 station (STA) for exchanging antenna capability information in wireless communications, the first 802.11 STA comprising: a receiver configured to receive a probe request frame that contains an antenna capability information element that indicates a number of supported beams of a second 802.11 STA; and a transmitter configured to transmit a probe response frame that contains a second antenna capability information element that indicates a number of supported beams of the first 802.11 STA. 2. The first 802.11 STA of claim 1, wherein the first or the second antenna capability information element further includes at least one field selected from the group consisting of an indicator of support of transmit antenna information after physical layer convergence protocol header, an indicator of diversity technique, an indicator of antenna measurement signaling support, and an indicator of multiple input support. 3. The first 802.11 STA of claim 1, wherein the first 802.11 STA is a wireless local area network (WLAN) access point. 4. The first 802.11 STA of claim 1, wherein the first 802.11 STA is a STA in an 802.11 wireless local area network (WLAN). 5. The first 802.11 STA of claim 1, wherein the first or the second antenna capability information element includes an antenna technology field. 6. The first 802.11 STA of claim 5, wherein each of the at least one fields is derived by the second STA from the antenna technology field. 7. The first 802.11 STA of claim 1, wherein the probe response frame is transmitted at any time after association between the first 802.11 STA and the second 802.11 STA. 8. The first 802.11 STA of claim 1, wherein the probe response frame is transmitted at any time after a data transfer between the first 802.11 STA and the second 802.11 STA. 9. 
The first 802.11 STA of claim 1, wherein the processor is further configured to adjust antenna settings of the first 802.11 STA to use a set of antenna capabilities belonging to both the first and second 802.11 STAs on a condition that the first 802.11 STA supports at least one of the indicated plurality of antenna capabilities. 10. A method for exchanging antenna capability information at a first 802.11 station (STA) in wireless communications, the method comprising: receiving a probe request frame that contains an antenna capability information element that indicates a number of supported beams of a second 802.11 STA; and transmitting a probe response frame that contains a second antenna capability information element that indicates a number of supported beams of the first 802.11 STA. 11. The method of claim 10, further comprising: adjusting settings at the first STA to use a number of supported beams belonging to both the first and second STAs. 12. The method of claim 11, wherein the adjusting includes adjusting at least one setting selected from the group consisting of a number of antennas used, a diversity method, a smart antenna technology used, and additional antenna measurements. 13. The method of claim 11, wherein the transmitting is performed prior to the adjusting. 14. The method of claim 11, wherein the transmitting is performed after the adjusting. 15. The method of claim 10, wherein the transmitting includes notifying the second 802.11 STA that the first 802.11 STA does not support the requested number of beams. 16. The method of claim 10, wherein the receiving includes receiving data from the second 802.11 STA using a different number of supported beams than the indicated number of supported beams. 17. 
A method for exchanging antenna capability information at a first 802.11 station (STA) in wireless communications, the method comprising: transmitting a probe request frame containing an antenna capability information element relating to an antenna capability of the first 802.11 STA, prior to data transmission with a second 802.11 STA, wherein the antenna capability information element indicates a number of supported beams; receiving a probe response frame containing an antenna capability information element relating to an antenna capability of the second 802.11 STA; and determining which antenna capabilities to use for future transmission and reception. 18. The method of claim 17, wherein the determining is performed without any additional communication. 19. The method of claim 17, further comprising: exchanging measurement information with the second 802.11 STA, wherein the exchanging is performed prior to the determining. 20. The method of claim 17, further comprising: negotiating antenna capability information with the second 802.11 STA, wherein the negotiating is performed prior to the determining. 21. A first 802.11 station (STA) for exchanging antenna capability information in wireless communications, the first 802.11 STA comprising: a transmitter configured to transmit a probe request frame that contains an antenna capability information element relating to an antenna capability of the first 802.11 STA, prior to data transmission with a second 802.11 STA, wherein the antenna capability information element indicates a number of supported beams; a receiver configured to receive a probe response frame that contains an antenna capability information element relating to an antenna capability of the second 802.11 STA; and a processor configured to determine which antenna capabilities to use for future transmission and reception. 22. 
The first 802.11 STA of claim 21, wherein the processor is configured to determine which antenna capabilities to use for future transmission and reception without any additional communication. 23. The first 802.11 STA of claim 21, wherein the processor is configured to exchange measurement information with the second 802.11 STA before it determines which antenna capabilities to use for future transmission and reception. 24. The first 802.11 STA of claim 21, wherein the processor is configured to negotiate antenna capability information with the second 802.11 STA before it determines which antenna capabilities to use for future transmission and reception.
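The exchange above amounts to: each STA advertises its beam count in an antenna capability IE carried in a probe request or response, and both sides then settle on capabilities they have in common. A small sketch, assuming the common beam count is the minimum of the two advertised counts; the dict fields are illustrative stand-ins, not actual 802.11 IE syntax:

```python
# Illustrative probe-request/response capability negotiation: each STA
# advertises its number of supported beams, and both sides adopt the
# capabilities common to both (the smaller beam count, and diversity
# only if both support it).

def make_capability_ie(num_beams, diversity=False):
    """Build an antenna capability information element as a plain dict."""
    return {"num_beams": num_beams, "diversity": diversity}

def negotiate(request_ie, response_ie):
    """Determine which antenna capabilities to use for future transmission."""
    return {
        "num_beams": min(request_ie["num_beams"], response_ie["num_beams"]),
        "diversity": request_ie["diversity"] and response_ie["diversity"],
    }

# STA A probes with 4 supported beams; STA B responds supporting only 2.
probe_request = make_capability_ie(num_beams=4, diversity=True)
probe_response = make_capability_ie(num_beams=2, diversity=True)
agreed = negotiate(probe_request, probe_response)
```

Because both IEs are in hand after the probe exchange, each side can run this determination locally "without any additional communication", as claim 18 puts it.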
2,400
7,405
7,405
13,928,532
2,454
An event selection system for computing user migration patterns across social network pages is provided. The event selection system includes a monitor module to monitor predetermined activities of social media users on preconfigured resources. The event selection system further includes a profile module to build a social media user profile based on the monitored activities of the social media users. The event selection system further includes a computing module to compute user migration patterns based on the social media user profiles. The event selection system further includes a display module to display the user migration patterns on a system user terminal. The event selection system further includes a reporting module configured to produce a report based on the monitored social media user activities.
1. An event selection system for computing user migration patterns across social network pages, the event selection system comprising: a monitor module to monitor predetermined activities of social media users on preconfigured resources; a profile module to build a social media user profile for each user based on the monitored activities of social media users; a computing module to compute user migration patterns based on the social media user profiles; and a display module to display the user migration patterns on a system user terminal. 2. The event selection system of claim 1, wherein the monitoring module records social media users' activities in a repository. 3. The event selection system of claim 1, wherein the predetermined activities comprise posts, comments, likes or dislikes, posting or viewing of photos, tags, publicly visible user and page-owner expressions. 4. The event selection system of claim 1, wherein the preconfigured resources comprise a set of pages on a social networking site. 5. The event selection system of claim 4, wherein the set of pages comprises pages of business competitors on the social networking site. 6. The event selection system of claim 1, wherein the monitoring module utilizes social media observable events and mines social media user related data from the preconfigured resources. 7. The event selection system of claim 6, wherein the profile module is configured to update the social media user profile of each user based on the mined social media user related data. 8. The event selection system of claim 6, wherein the profile module is further configured to prepare an activity repository based on the mined social media user related data. 9. The event selection system of claim 8, wherein the computing module is further configured to analyze the activity repository of social media users while visiting social networking sites. 10. 
The event selection system of claim 9, wherein the computing module is further configured to compute temporal migration metrics indicating continuation of the migration patterns. 11. The event selection system of claim 6, wherein the profile module is further configured to prepare an extended attributes set based on the mined social media user related data. 12. The event selection system of claim 11, wherein the extended attributes set comprises social media users' interests, education and work histories, hobbies, locations, hometowns, favorite sport teams and TV shows, and cultural background. 13. The event selection system of claim 1, wherein the computing module is configured to maintain a list of entries, wherein each entry represents an abstract representation of an activity or interaction of a social media user on a monitored page at a configured time interval. 14. The event selection system of claim 1, further comprising a query module configured to receive a query from a system user. 15. The event selection system of claim 14, wherein the query module utilizes the social media user profiles to retrieve information corresponding to the query. 16. The event selection system of claim 14, further comprising a reporting module configured to produce a report that indicates desired information based on the query received from the system user. 17. A computer-implemented method for computing user migration patterns on pages of a social network, the computer-implemented method comprising: monitoring predetermined activities of social media users on preconfigured resources; building a social media user profile for each user based on the monitored activities of the social media users; analyzing and computing social media users migration patterns based on the social media user profiles; and reporting monitored activities of the social media users to a system user. 18. 
The computer-implemented method of claim 17, wherein the predetermined activities comprise posts, comments, likes or dislikes, posting or viewing of photos, tags, publicly visible user, and page-owner expressions. 19. The computer-implemented method of claim 17, further comprising receiving a query and utilizing the social media user profiles to retrieve information corresponding to the query. 20. A computer readable medium storing computer readable instructions that, when executed by a processor, perform a method comprising: monitoring predetermined activities of social media users on preconfigured resources; building a social media user profile based on the monitored activities of the social media users; analyzing and computing social media user migration patterns based on the social media user profiles; and reporting monitored activities of the social media user to a system user.
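The computing module's job, per the claims, is to turn an activity repository into migration patterns across monitored pages. One plausible reading is counting each user's consecutive page visits as directed page-to-page transitions; the log layout and the `Counter`-based metric below are assumptions for illustration, not the patent's algorithm:

```python
# Hedged sketch of computing migration patterns from an activity repository:
# for each user, time-ordered consecutive page visits are counted as
# directed page-to-page transitions.
from collections import Counter, defaultdict

def compute_migration_patterns(activity_log):
    """activity_log: list of (user, timestamp, page) tuples."""
    by_user = defaultdict(list)
    for user, ts, page in activity_log:
        by_user[user].append((ts, page))
    transitions = Counter()
    for visits in by_user.values():
        visits.sort()  # order each user's activity by time
        for (_, src), (_, dst) in zip(visits, visits[1:]):
            if src != dst:  # a repeat visit to the same page is not a migration
                transitions[(src, dst)] += 1
    return transitions

# Two of three users move from a competitor's page to another page.
log = [
    ("alice", 1, "brandA"), ("alice", 2, "brandB"),
    ("bob",   1, "brandA"), ("bob",   3, "brandB"),
    ("carol", 1, "brandB"), ("carol", 2, "brandB"),
]
patterns = compute_migration_patterns(log)
```

Re-running the count over successive time windows would yield the "temporal migration metrics indicating continuation of the migration patterns" of claim 10.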
An event selection system for computing user migration patterns across social network pages is provided. The event selection system includes a monitor module to monitor predetermined activities of social media users on preconfigured resources. The event selection system further includes a profile module to build a social media user profile based on the monitored activities of the social media users. The event selection system further includes a computing module to compute user migration patterns based on the social media user profiles. The event selection system further includes a display module to display the user migration patterns on a system user terminal. The event selection system further includes a reporting module configured to produce a report based on the monitored social media user activities. 1. An event selection system for computing user migration patterns across social network pages, the event selection system comprising: a monitor module to monitor predetermined activities of social media users on preconfigured resources; a profile module to build a social media user profile for each user based on the monitored activities of social media users; a computing module to compute user migration patterns based on the social media user profiles; and a display module to display the user migration patterns on a system user terminal. 2. The event selection system of claim 1, wherein the monitoring module records social media users' activities in a repository. 3. The event selection system of claim 1, wherein the predetermined activities comprise posts, comments, likes or dislikes, posting or viewing of photos, tags, publicly visible user and page-owner expressions. 4. The event selection system of claim 1, wherein the preconfigured resources comprise a set of pages on a social networking site. 5. The event selection system of claim 4, wherein the set of pages comprises pages of business competitors on the social networking site. 6. 
The event selection system of claim 1, wherein the monitoring module utilizes social media observable events and mines social media user related data from the preconfigured resources. 7. The event selection system of claim 6, wherein the profile module is configured to update the social media user profile of each user based on the mined social media user related data. 8. The event selection system of claim 6, wherein the profile module is further configured to prepare an activity repository based on the mined social media user related data. 9. The event selection system of claim 8, wherein the computing module is further configured to analyze the activity repository of a social media user while visiting social networking sites. 10. The event selection system of claim 9, wherein the computing module is further configured to compute temporal migration metrics indicating continuation of the migration patterns. 11. The event selection system of claim 6, wherein the profile module is further configured to prepare an extended attributes set based on the mined social media user related data. 12. The event selection system of claim 11, wherein the extended attributes set comprises social media users' interests, education and work histories, hobbies, locations, hometowns, favorite sport teams and TV shows, and cultural background. 13. The event selection system of claim 1, wherein the computing module is configured to maintain a list of entries, wherein each entry represents an abstract representation of an activity or interaction of a social media user on a monitored page at the configured time interval. 14. The event selection system of claim 1, further comprising a query module configured to receive a query from a system user. 15. The event selection system of claim 14, wherein the query module utilizes the social media user profiles to retrieve information corresponding to the query. 16. 
The event selection system of claim 14, further comprising a reporting module configured to produce a report that indicates desired information based on the query received from the system user. 17. A computer-implemented method for computing user migration patterns on pages of a social network, the computer-implemented method comprising: monitoring predetermined activities of social media users on preconfigured resources; building a social media user profile for each user based on the monitored activities of the social media users; analyzing and computing social media user migration patterns based on the social media user profiles; and reporting monitored activities of the social media users to a system user. 18. The computer-implemented method of claim 17, wherein the predetermined activities comprise posts, comments, likes or dislikes, posting or viewing of photos, tags, publicly visible user, and page-owner expressions. 19. The computer-implemented method of claim 17, further comprising receiving a query and utilizing the social media user profiles to retrieve information corresponding to the query. 20. A computer readable medium storing computer readable instructions that, when executed by a processor, perform a method comprising: monitoring predetermined activities of social media users on preconfigured resources; building a social media user profile based on the monitored activities of the social media users; analyzing and computing social media user migration patterns based on the social media user profiles; and reporting monitored activities of the social media user to a system user.
2,400
7,406
7,406
15,144,541
2,466
A Layer 2 network switch is partitionable into a plurality of switch fabrics. The single-chassis switch is partitionable into a plurality of logical switches, each associated with one of the virtual fabrics. The logical switches behave as complete and self-contained switches. A logical switch fabric can span multiple single-chassis switch chassis. Logical switches are connected by inter-switch links that can be either dedicated single-chassis links or logical links. An extended inter-switch link can be used to transport traffic for one or more logical inter-switch links. Physical ports of the chassis are assigned to logical switches and are managed by the logical switch. Legacy switches that are not partitionable into logical switches can serve as transit switches between two logical switches.
1. A method of managing a network switch, comprising: partitioning a first network switch into a first plurality of logical switches comprising: partitioning physical ports of the first network switch among the first plurality of logical switches; and mapping each of the physical ports to a logical port of a logical switch of the first plurality of logical switches; and managing ports of each of the first plurality of logical switches independent of ports of each other of the first plurality of logical switches. 2. The method of claim 1, wherein partitioning a first network switch comprises: dedicating a resource of the first network switch to a logical switch of the first plurality of logical switches. 3. The method of claim 1, further comprising: isolating data traffic through a first logical switch of the first plurality of logical switches from the other logical switches of the first plurality of logical switches. 4. The method of claim 1, further comprising: defining a link between a first logical switch of the first plurality of logical switches and a second switch; and communicating data between the first logical switch and the second switch. 5. The method of claim 4, wherein the second switch is a second network switch. 6. The method of claim 4, further comprising: partitioning a second network switch into a second plurality of logical switches, wherein the second switch is a logical switch of the second plurality of logical switches. 7. 
A method of partitioning network switches, comprising: partitioning a first network switch into a first plurality of virtual switch fabrics; partitioning the first network switch into a first plurality of logical switches, comprising: partitioning physical ports of the first network switch among the first plurality of logical switches; and mapping each of the physical ports to a logical port of a first logical switch of the first plurality of logical switches; and associating a first logical switch of the first plurality of logical switches with a first virtual fabric of the first plurality of virtual switch fabrics. 8. The method of claim 7, further comprising: partitioning a second network switch into a second plurality of virtual switch fabrics; defining a multi-chassis virtual fabric, comprising the first virtual fabric of the first plurality of virtual switch fabrics and a second virtual fabric of the second plurality of virtual switch fabrics; and configuring the first virtual fabric and the second virtual fabric as a multi-chassis virtual fabric. 9. The method of claim 8, further comprising: partitioning the second network switch into a second plurality of logical switches; associating a second logical switch of the second plurality of logical switches with the multi-chassis virtual fabric; and communicating data between the first logical switch and the second logical switch across the multi-chassis virtual fabric. 10. 
A network switch, comprising: a switch, partitionable into a plurality of logical switches, wherein each of the plurality of logical switches is a complete and self-contained network switch; a processor; a storage medium, connected to the processor; a chassis management system, stored on the storage medium, wherein the chassis management system, when executed by the processor, causes the processor to perform actions that are associated with the switch as a whole; and a logical switch management system, stored on the storage medium, wherein the logical switch management system, when executed by the processor, causes the processor to perform actions associated with any of the plurality of logical switches, wherein the switch comprises a plurality of physical ports, wherein the plurality of physical ports are partitioned among the plurality of logical switches, and wherein each of the plurality of physical ports is mapped to a logical port of a logical switch of the plurality of logical switches. 11. The network switch of claim 10, wherein the switch comprises a plurality of network resources, wherein each of the plurality of logical switches is assigned a network resource of the plurality of network resources, wherein a first logical switch of the plurality of logical switches is defined as a default logical switch, and wherein the default logical switch is assigned any of the network resources not assigned to any other logical switch. 12. 
The network switch of claim 11, wherein the chassis management system comprises: a logical fabric manager, configured to create and maintain a virtual fabric topology, comprising: a controller, configured to handle incoming events; a fabric database, stored in the storage medium; a fabric database manager, configured to store configuration information for virtual fabrics in the fabric database; a logical topology database, stored in the storage medium; a logical topology manager, configured to store topology information for each virtual fabric in the logical topology database; a logical link database, stored in the storage medium; and a logical link manager, configured to store information about logical links associated with the plurality of logical switches in the logical link database. 13. The network switch of claim 10, wherein the switch is partitionable into a plurality of virtual fabrics, wherein each of the plurality of logical switches is assigned to one of the plurality of virtual fabrics. 14. The network switch of claim 13, wherein each of the virtual fabrics comprises: a virtual fabric identifier, wherein the virtual fabric identifier is associated with each of the plurality of logical switches to assign that logical switch to a virtual fabric. 15. A non-transitory computer readable medium on which is stored software for partitioning a network switch, the software for instructing a processor of the network switch to perform actions comprising: partitioning the network switch into a first plurality of logical switches, comprising: partitioning physical ports of the network switch among the first plurality of logical switches; and mapping each of the physical ports to a logical port of a logical switch of the first plurality of logical switches; and managing each of the first plurality of logical switches independent of each other of the first plurality of logical switches. 16. 
The computer readable medium of claim 15, wherein the actions further comprise: defining a link between a first logical switch of the first plurality of logical switches and a second switch; and communicating data between the first logical switch and the second switch. 17. The computer readable medium of claim 15, wherein the actions further comprise: partitioning a second network switch into a second plurality of logical switches, defining a link between a first logical switch of the first plurality of logical switches and a second switch; and wherein the second switch is a logical switch of the second plurality of logical switches. 18. The computer readable medium of claim 15, wherein the actions further comprise: partitioning the network switch into a first plurality of virtual switch fabrics; and associating a first logical switch of the first plurality of logical switches with a first virtual fabric of the first plurality of virtual switch fabrics. 19. The computer readable medium of claim 15, wherein the actions further comprise: partitioning the network switch into a first plurality of virtual switch fabrics; partitioning a second network switch into a second plurality of virtual switch fabrics; defining a multi-chassis virtual fabric, comprising a virtual fabric of the first plurality of virtual switch fabrics and a virtual fabric of the second plurality of virtual switch fabrics; and associating a first logical switch of the first plurality of logical switches with the multi-chassis virtual fabric. 20. The computer readable medium of claim 19, wherein the actions further comprise: partitioning the second network switch into a second plurality of logical switches; associating a second logical switch of the second plurality of logical switches with the multi-chassis virtual fabric; and communicating data between the first logical switch and the second logical switch across the multi-chassis virtual fabric. 21. 
A network comprising: a plurality of external devices; a plurality of chassis, each comprising: a single-chassis fabric; and a switch configured for use with the single-chassis fabric; a multi-chassis virtual fabric coupling the plurality of external devices, wherein the multi-chassis virtual fabric comprises: a first virtual single-chassis fabric to which are coupled a first portion of the plurality of external devices, the first virtual single-chassis fabric selected from a plurality of virtual fabrics configured from the single-chassis fabric of a first chassis of the plurality of chassis; and a second virtual single-chassis fabric to which are coupled a second portion of the plurality of external devices, the second virtual single-chassis fabric selected from a plurality of virtual fabrics configured from the single-chassis fabric of a second chassis of the plurality of chassis; and software stored on a storage medium of each of the plurality of chassis, the software for instructing a processor of the corresponding chassis to perform actions comprising: partitioning the single-chassis fabric of the chassis into a plurality of virtual single-chassis fabrics; associating a virtual single-chassis fabric of the plurality of virtual single-chassis fabrics with the multi-chassis virtual fabric; partitioning the switch into a plurality of logical switches, comprising: partitioning physical ports of the chassis among the plurality of logical switches; and mapping each physical port to a logical port of a logical switch of the plurality of logical switches; and assigning a first logical switch of the plurality of logical switches to the multi-chassis virtual fabric. 22. 
The network of claim 21, the software for instructing the processor of the corresponding chassis to perform actions further comprising: linking the first logical switch with a second logical switch of another of the plurality of chassis and assigned to the multi-chassis virtual fabric; and communicating data between the first logical switch and the second logical switch. 23. The method of claim 1, further comprising: defining a logical port of a first logical switch of the first plurality of logical switches, the logical port not mapped to a physical port of the first network switch; and associating a link between the first logical switch and another switch to the logical port.
A Layer 2 network switch is partitionable into a plurality of switch fabrics. The single-chassis switch is partitionable into a plurality of logical switches, each associated with one of the virtual fabrics. The logical switches behave as complete and self-contained switches. A logical switch fabric can span multiple single-chassis switch chassis. Logical switches are connected by inter-switch links that can be either dedicated single-chassis links or logical links. An extended inter-switch link can be used to transport traffic for one or more logical inter-switch links. Physical ports of the chassis are assigned to logical switches and are managed by the logical switch. Legacy switches that are not partitionable into logical switches can serve as transit switches between two logical switches. 1. A method of managing a network switch, comprising: partitioning a first network switch into a first plurality of logical switches comprising: partitioning physical ports of the first network switch among the first plurality of logical switches; and mapping each of the physical ports to a logical port of a logical switch of the first plurality of logical switches; and managing ports of each of the first plurality of logical switches independent of ports of each other of the first plurality of logical switches. 2. The method of claim 1, wherein partitioning a first network switch comprises: dedicating a resource of the first network switch to a logical switch of the first plurality of logical switches. 3. The method of claim 1, further comprising: isolating data traffic through a first logical switch of the first plurality of logical switches from the other logical switches of the first plurality of logical switches. 4. The method of claim 1, further comprising: defining a link between a first logical switch of the first plurality of logical switches and a second switch; and communicating data between the first logical switch and the second switch. 5. 
The method of claim 4, wherein the second switch is a second network switch. 6. The method of claim 4, further comprising: partitioning a second network switch into a second plurality of logical switches, wherein the second switch is a logical switch of the second plurality of logical switches. 7. A method of partitioning network switches, comprising: partitioning a first network switch into a first plurality of virtual switch fabrics; partitioning the first network switch into a first plurality of logical switches, comprising: partitioning physical ports of the first network switch among the first plurality of logical switches; and mapping each of the physical ports to a logical port of a first logical switch of the first plurality of logical switches; and associating a first logical switch of the first plurality of logical switches with a first virtual fabric of the first plurality of virtual switch fabrics. 8. The method of claim 7, further comprising: partitioning a second network switch into a second plurality of virtual switch fabrics; defining a multi-chassis virtual fabric, comprising the first virtual fabric of the first plurality of virtual switch fabrics and a second virtual fabric of the second plurality of virtual switch fabrics; and configuring the first virtual fabric and the second virtual fabric as a multi-chassis virtual fabric. 9. The method of claim 8, further comprising: partitioning the second network switch into a second plurality of logical switches; associating a second logical switch of the second plurality of logical switches with the multi-chassis virtual fabric; and communicating data between the first logical switch and the second logical switch across the multi-chassis virtual fabric. 10. 
A network switch, comprising: a switch, partitionable into a plurality of logical switches, wherein each of the plurality of logical switches is a complete and self-contained network switch; a processor; a storage medium, connected to the processor; a chassis management system, stored on the storage medium, wherein the chassis management system, when executed by the processor, causes the processor to perform actions that are associated with the switch as a whole; and a logical switch management system, stored on the storage medium, wherein the logical switch management system, when executed by the processor, causes the processor to perform actions associated with any of the plurality of logical switches, wherein the switch comprises a plurality of physical ports, wherein the plurality of physical ports are partitioned among the plurality of logical switches, and wherein each of the plurality of physical ports is mapped to a logical port of a logical switch of the plurality of logical switches. 11. The network switch of claim 10, wherein the switch comprises a plurality of network resources, wherein each of the plurality of logical switches is assigned a network resource of the plurality of network resources, wherein a first logical switch of the plurality of logical switches is defined as a default logical switch, and wherein the default logical switch is assigned any of the network resources not assigned to any other logical switch. 12. 
The network switch of claim 11, wherein the chassis management system comprises: a logical fabric manager, configured to create and maintain a virtual fabric topology, comprising: a controller, configured to handle incoming events; a fabric database, stored in the storage medium; a fabric database manager, configured to store configuration information for virtual fabrics in the fabric database; a logical topology database, stored in the storage medium; a logical topology manager, configured to store topology information for each virtual fabric in the logical topology database; a logical link database, stored in the storage medium; and a logical link manager, configured to store information about logical links associated with the plurality of logical switches in the logical link database. 13. The network switch of claim 10, wherein the switch is partitionable into a plurality of virtual fabrics, wherein each of the plurality of logical switches is assigned to one of the plurality of virtual fabrics. 14. The network switch of claim 13, wherein each of the virtual fabrics comprises: a virtual fabric identifier, wherein the virtual fabric identifier is associated with each of the plurality of logical switches to assign that logical switch to a virtual fabric. 15. A non-transitory computer readable medium on which is stored software for partitioning a network switch, the software for instructing a processor of the network switch to perform actions comprising: partitioning the network switch into a first plurality of logical switches, comprising: partitioning physical ports of the network switch among the first plurality of logical switches; and mapping each of the physical ports to a logical port of a logical switch of the first plurality of logical switches; and managing each of the first plurality of logical switches independent of each other of the first plurality of logical switches. 16. 
The computer readable medium of claim 15, wherein the actions further comprise: defining a link between a first logical switch of the first plurality of logical switches and a second switch; and communicating data between the first logical switch and the second switch. 17. The computer readable medium of claim 15, wherein the actions further comprise: partitioning a second network switch into a second plurality of logical switches, defining a link between a first logical switch of the first plurality of logical switches and a second switch; and wherein the second switch is a logical switch of the second plurality of logical switches. 18. The computer readable medium of claim 15, wherein the actions further comprise: partitioning the network switch into a first plurality of virtual switch fabrics; and associating a first logical switch of the first plurality of logical switches with a first virtual fabric of the first plurality of virtual switch fabrics. 19. The computer readable medium of claim 15, wherein the actions further comprise: partitioning the network switch into a first plurality of virtual switch fabrics; partitioning a second network switch into a second plurality of virtual switch fabrics; defining a multi-chassis virtual fabric, comprising a virtual fabric of the first plurality of virtual switch fabrics and a virtual fabric of the second plurality of virtual switch fabrics; and associating a first logical switch of the first plurality of logical switches with the multi-chassis virtual fabric. 20. The computer readable medium of claim 19, wherein the actions further comprise: partitioning the second network switch into a second plurality of logical switches; associating a second logical switch of the second plurality of logical switches with the multi-chassis virtual fabric; and communicating data between the first logical switch and the second logical switch across the multi-chassis virtual fabric. 21. 
A network comprising: a plurality of external devices; a plurality of chassis, each comprising: a single-chassis fabric; and a switch configured for use with the single-chassis fabric; a multi-chassis virtual fabric coupling the plurality of external devices, wherein the multi-chassis virtual fabric comprises: a first virtual single-chassis fabric to which are coupled a first portion of the plurality of external devices, the first virtual single-chassis fabric selected from a plurality of virtual fabrics configured from the single-chassis fabric of a first chassis of the plurality of chassis; and a second virtual single-chassis fabric to which are coupled a second portion of the plurality of external devices, the second virtual single-chassis fabric selected from a plurality of virtual fabrics configured from the single-chassis fabric of a second chassis of the plurality of chassis; and software stored on a storage medium of each of the plurality of chassis, the software for instructing a processor of the corresponding chassis to perform actions comprising: partitioning the single-chassis fabric of the chassis into a plurality of virtual single-chassis fabrics; associating a virtual single-chassis fabric of the plurality of virtual single-chassis fabrics with the multi-chassis virtual fabric; partitioning the switch into a plurality of logical switches, comprising: partitioning physical ports of the chassis among the plurality of logical switches; and mapping each physical port to a logical port of a logical switch of the plurality of logical switches; and assigning a first logical switch of the plurality of logical switches to the multi-chassis virtual fabric. 22. 
The network of claim 21, the software for instructing the processor of the corresponding chassis to perform actions further comprising: linking the first logical switch with a second logical switch of another of the plurality of chassis and assigned to the multi-chassis virtual fabric; and communicating data between the first logical switch and the second logical switch. 23. The method of claim 1, further comprising: defining a logical port of a first logical switch of the first plurality of logical switches, the logical port not mapped to a physical port of the first network switch; and associating a link between the first logical switch and another switch to the logical port.
2,400
7,407
7,407
15,010,106
2,443
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reducing redirects. In one aspect, a method includes receiving request data indicating that a user device has requested a content item. The request data specifies other data processing apparatus to which user interactions with the content item are to be reported. The content item includes a reference to a resource that is requested in response to user interaction with the content item. Response data is provided. The response data includes data that cause presentation of the content item. Interaction data is received specifying user interaction with the content item occurred. Redirect data is provided that cause the user device to be redirected to the resource. Reporting data is provided to the other data processing apparatus, specifying user interaction with the content item occurred. The reporting data is provided asynchronously relative to the redirect data.
1. (canceled) 2. A method performed by one or more data processing apparatus, the method comprising: receiving, from a remote user device and by one or more first servers, interaction data specifying that user interaction with a given portion of content occurred; determining, based on the interaction data, that the user interaction is to be reported to multiple different tracking servers that are remote to the one or more first servers; determining, based on the interaction data, a destination page that is associated with the given portion of content; redirecting, by the one or more first servers, the user device to the destination page, including not redirecting the user device to at least one of the multiple different tracking servers; and transmitting, by the one or more first servers and independent of the user device, reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers. 3. The method of claim 2, wherein determining that the user interaction is to be reported to multiple different tracking servers comprises identifying the multiple different tracking servers based on the interaction data. 4. The method of claim 3, wherein identifying the multiple different tracking servers based on the interaction data comprises identifying a server unique identifier for each of the multiple different tracking servers. 5. The method of claim 4, wherein the server unique identifier for each of the multiple different tracking servers is embedded in a URL that is included in the interaction data. 6. The method of claim 5, wherein the URL includes a network location of the destination page. 7. The method of claim 2, wherein redirecting the user device to the destination page comprises transmitting, to the user device, a redirect instruction that includes a URL of the destination page. 8. 
The method of claim 7, wherein transmitting reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers comprises transmitting an encrypted shared identifier corresponding to the user device. 9. A system, comprising: a user device; and one or more first servers operated by a first entity, the one or more first servers being operable to interact with the user device and further operable to perform operations including: receiving, from a remote user device, interaction data specifying that user interaction with a given portion of content occurred; determining, based on the interaction data, that the user interaction is to be reported to multiple different tracking servers that are remote to the one or more first servers; determining, based on the interaction data, a destination page that is associated with the given portion of content; redirecting the user device to the destination page, including not redirecting the user device to at least one of the multiple different tracking servers; and transmitting, independent of the user device, reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers. 10. The system of claim 9, wherein determining that the user interaction is to be reported to multiple different tracking servers comprises identifying the multiple different tracking servers based on the interaction data. 11. The system of claim 10, wherein identifying the multiple different tracking servers based on the interaction data comprises identifying a server unique identifier for each of the multiple different tracking servers. 12. The system of claim 11, wherein the server unique identifier for each of the multiple different tracking servers is embedded in a URL that is included in the interaction data. 13. 
The system of claim 12, wherein the URL includes a network location of the destination page. 14. The system of claim 9, wherein redirecting the user device to the destination page comprises transmitting, to the user device, a redirect instruction that includes a URL of the destination page. 15. The system of claim 9, wherein transmitting reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers comprises transmitting an encrypted shared identifier corresponding to the user device. 16. A non-transitory computer storage medium encoded with a computer program, the program comprising instructions that when executed by one or more first servers operated by a first entity cause the one or more first servers to perform operations comprising: receiving, from a remote user device, interaction data specifying that user interaction with a given portion of content occurred; determining, based on the interaction data, that the user interaction is to be reported to multiple different tracking servers that are remote to the one or more first servers; determining, based on the interaction data, a destination page that is associated with the given portion of content; redirecting the user device to the destination page, including not redirecting the user device to at least one of the multiple different tracking servers; and transmitting, independent of the user device, reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers. 17. The computer storage medium of claim 16, wherein determining that the user interaction is to be reported to multiple different tracking servers comprises identifying the multiple different tracking servers based on the interaction data. 18. 
The computer storage medium of claim 17, wherein identifying the multiple different tracking servers based on the interaction data comprises identifying a server unique identifier for each of the multiple different tracking servers. 19. The computer storage medium of claim 18, wherein the server unique identifier for each of the multiple different tracking servers is embedded in a URL that is included in the interaction data. 20. The computer storage medium of claim 19, wherein the URL includes a network location of the destination page. 21. The computer storage medium of claim 16, wherein transmitting reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers comprises transmitting an encrypted shared identifier corresponding to the user device.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reducing redirects. In one aspect, a method includes receiving request data indicating that a user device has requested a content item. The request data specifies other data processing apparatus to which user interactions with the content item are to be reported. The content item includes a reference to a resource that is requested in response to user interaction with the content item. Response data is provided. The response data includes data that cause presentation of the content item. Interaction data is received specifying user interaction with the content item occurred. Redirect data is provided that cause the user device to be redirected to the resource. Reporting data is provided to the other data processing apparatus, specifying user interaction with the content item occurred. The reporting data is provided asynchronously relative to the redirect data.1. (canceled) 2. A method performed by one or more data processing apparatus, the method comprising: receiving, from a remote user device and by one or more first servers, interaction data specifying that user interaction with a given portion of content occurred; determining, based on the interaction data, that the user interaction is to be reported to multiple different tracking servers that are remote to the one or more first servers; determining, based on the interaction data, a destination page that is associated with the given portion of content; redirecting, by the one or more first servers, the user device to the destination page, including not redirecting the user device to at least one of the multiple different tracking servers; and transmitting, by the one or more first servers and independent of the user device, reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers. 3. 
The method of claim 2, wherein determining that the user interaction is to be reported to multiple different tracking servers comprises identifying the multiple different tracking servers based on the interaction data. 4. The method of claim 3, wherein identifying the multiple different tracking servers based on the interaction data comprises identifying a server unique identifier for each of the multiple different tracking servers. 5. The method of claim 4, wherein the server unique identifier for each of the multiple different tracking servers is embedded in a URL that is included in the interaction data. 6. The method of claim 5, wherein the URL includes a network location of the destination page. 7. The method of claim 2, wherein redirecting the user device to the destination page comprises transmitting, to the user device, a redirect instruction that includes a URL of the destination page. 8. The method of claim 7, wherein transmitting reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers comprises transmitting an encrypted shared identifier corresponding to the user device. 9. 
A system, comprising: a user device; and one or more first servers operated by a first entity, the one or more first servers being operable to interact with the user device and further operable to perform operations including: receiving, from a remote user device, interaction data specifying that user interaction with a given portion of content occurred; determining, based on the interaction data, that the user interaction is to be reported to multiple different tracking servers that are remote to the one or more first servers; determining, based on the interaction data, a destination page that is associated with the given portion of content; redirecting the user device to the destination page, including not redirecting the user device to at least one of the multiple different tracking servers; and transmitting, independent of the user device, reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers. 10. The system of claim 9, wherein determining that the user interaction is to be reported to multiple different tracking servers comprises identifying the multiple different tracking servers based on the interaction data. 11. The system of claim 10, wherein identifying the multiple different tracking servers based on the interaction data comprises identifying a server unique identifier for each of the multiple different tracking servers. 12. The system of claim 11, wherein the server unique identifier for each of the multiple different tracking servers is embedded in a URL that is included in the interaction data. 13. The system of claim 12, wherein the URL includes a network location of the destination page. 14. The system of claim 9, wherein redirecting the user device to the destination page comprises transmitting, to the user device, a redirect instruction that includes a URL of the destination page. 15. 
The system of claim 9, wherein transmitting reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers comprises transmitting an encrypted shared identifier corresponding to the user device. 16. A non-transitory computer storage medium encoded with a computer program, the program comprising instructions that when executed by one or more first servers operated by a first entity cause the one or more first servers to perform operations comprising: receiving, from a remote user device, interaction data specifying that user interaction with a given portion of content occurred; determining, based on the interaction data, that the user interaction is to be reported to multiple different tracking servers that are remote to the one or more first servers; determining, based on the interaction data, a destination page that is associated with the given portion of content; redirecting the user device to the destination page, including not redirecting the user device to at least one of the multiple different tracking servers; and transmitting, independent of the user device, reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers. 17. The computer storage medium of claim 16, wherein determining that the user interaction is to be reported to multiple different tracking servers comprises identifying the multiple different tracking servers based on the interaction data. 18. The computer storage medium of claim 17, wherein identifying the multiple different tracking servers based on the interaction data comprises identifying a server unique identifier for each of the multiple different tracking servers. 19. 
The computer storage medium of claim 18, wherein the server unique identifier for each of the multiple different tracking servers is embedded in a URL that is included in the interaction data. 20. The computer storage medium of claim 19, wherein the URL includes a network location of the destination page. 21. The computer storage medium of claim 16, wherein transmitting reporting data specifying the occurrence of the user interaction with the given portion of content to at least some of the multiple different tracking servers comprises transmitting an encrypted shared identifier corresponding to the user device.
2,400
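The redirect-reduction flow claimed above (claims 2-8) can be sketched in Python. This is a hypothetical illustration, not part of the application: the query-parameter names (`trackers`, `dest`), the queue-based reporting, and the response shape are all assumptions. The point it shows is the claimed division of work: the first server redirects the user device straight to the destination page and reports the interaction to the tracking servers itself, independent of the device.

```python
from urllib.parse import urlparse, parse_qs

def handle_interaction(interaction_url):
    """Parse interaction data: per claims 4-6, tracking-server identifiers
    and the destination page location are embedded in the URL."""
    params = parse_qs(urlparse(interaction_url).query)
    trackers = params.get("trackers", [""])[0].split(",")
    destination = params.get("dest", [""])[0]
    return trackers, destination

def process_click(interaction_url, report_queue):
    """Handle one click on a portion of content at the first server."""
    trackers, destination = handle_interaction(interaction_url)
    # Report independently of the user device (asynchronously in practice;
    # modeled here as a queue the server drains on its own schedule).
    for tracker_id in trackers:
        report_queue.append((tracker_id, destination))
    # Redirect instruction sent to the user device: straight to the
    # destination page, never chained through the tracking servers.
    return {"status": 302, "location": destination}
```

Because the reports leave from the server rather than the browser, the user device performs a single redirect hop regardless of how many tracking servers are registered for the content item.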
7,408
7,408
14,387,965
2,426
To allow intelligent and/or resource-efficient delivery of video content and/or determination of video content popularity and/or user behavior, a method for determining user behavior during delivery of video content is claimed, wherein a user requests delivery of a video content from a provider via a network. The method is characterized in that the provider exploits information transmitted between the user and the provider due to the execution of a trick play function by the user for scheduling the delivery and/or determining video content popularity and/or user behavior. Further, a corresponding network is claimed, preferably for carrying out the above-mentioned method.
1. A method for determining user behavior during delivery of video content, wherein a user is requesting a delivery of a video content from a provider via a network, characterized in that the provider exploits information transmitted between the user and the provider due to the execution of a trick play function by the user for scheduling the delivery and/or determining video content popularity and/or user behavior. 2. A method according to claim 1, wherein the exploitation will be performed on a per user basis. 3. A method according to claim 1, wherein the exploitation will be performed on a per video content basis. 4. A method according to claim 1, wherein the provider records user behavior during trick play while the user is downloading and/or viewing the video content. 5. A method according to claim 1, wherein the information comprises a message or messages that a browser or an appropriate application on the user side sends towards the provider when the user executes a trick play function. 6. A method according to claim 5, wherein the message will be generated each time a track slider is advanced beyond or outside a play-out buffer. 7. A method according to claim 1, wherein the exploitation will be performed statistically. 8. A method according to claim 1, wherein the exploitation comprises the determination of a user or group of user behavior or viewing behavior in relation to the video content or video content type. 9. A method according to claim 1, wherein the exploitation comprises the determination of a user or group of user behavior or viewing behavior by setting thresholds on number of skips and subsequently classifying the user and/or classifying the video content. 10. A method according to claim 1, wherein the exploitation comprises the determination of video content popularity based on the number of times a user executes the trick play function. 11. 
A method according to claim 1, wherein the exploitation comprises the determination of the popularity of sub-segments within a video content based on how many users and/or how many times a user advances forwards and/or backwards to re-view a particular segment of a scene or video content and/or skip over a particular segment of a segment, scene or video content. 12. A method according to claim 1, wherein the exploitation comprises the possibility of delivering the video content in a differentiated and/or personalized pacing. 13. A method according to claim 1, wherein the exploitation comprises an estimation of a buffer utilization at a user or UE (User Equipment). 14. A method according to claim 1, wherein the provider delivers content chunks of a definable and/or popular segment of the video content at a higher rate than of other segments. 15. A method according to claim 14, wherein the definable and/or popular segment is a segment after which the user is expected to or might skip forward or backward. 16. A method according to claim 14, wherein the definable and/or popular segment is a segment into which the user is expected to or might skip forward or backward. 17. A method according to claim 1, wherein the information comprises a number of skip events and/or a skip location within the video content. 18. A method according to claim 1, wherein the information comprises the starting and the end point of a skip event within the video content. 19. A method according to claim 1, wherein the information comprises the skip direction within the video content. 20. A method according to claim 1, wherein the user receives a video content stream at a normal, variable, increased or decreased pace according to the exploitation by the provider and/or according to optimization target settings and/or according to its subscription profile. 21. 
A network, preferably for carrying out the method for determining user behavior during delivery of video content according to claim 1, wherein a user is requesting a delivery of a video content from a provider via the network, characterized in that the provider comprises means for exploiting information transmitted between the user and the provider due to the execution of a trick play function by the user for scheduling the delivery and/or determining video content popularity and/or user behavior.
To allow intelligent and/or resource-efficient delivery of video content and/or determination of video content popularity and/or user behavior, a method for determining user behavior during delivery of video content is claimed, wherein a user requests delivery of a video content from a provider via a network. The method is characterized in that the provider exploits information transmitted between the user and the provider due to the execution of a trick play function by the user for scheduling the delivery and/or determining video content popularity and/or user behavior. Further, a corresponding network is claimed, preferably for carrying out the above-mentioned method.1. A method for determining user behavior during delivery of video content, wherein a user is requesting a delivery of a video content from a provider via a network, characterized in that the provider exploits information transmitted between the user and the provider due to the execution of a trick play function by the user for scheduling the delivery and/or determining video content popularity and/or user behavior. 2. A method according to claim 1, wherein the exploitation will be performed on a per user basis. 3. A method according to claim 1, wherein the exploitation will be performed on a per video content basis. 4. A method according to claim 1, wherein the provider records user behavior during trick play while the user is downloading and/or viewing the video content. 5. A method according to claim 1, wherein the information comprises a message or messages that a browser or an appropriate application on the user side sends towards the provider when the user executes a trick play function. 6. A method according to claim 5, wherein the message will be generated each time a track slider is advanced beyond or outside a play-out buffer. 7. A method according to claim 1, wherein the exploitation will be performed statistically. 8. 
A method according to claim 1, wherein the exploitation comprises the determination of a user or group of user behavior or viewing behavior in relation to the video content or video content type. 9. A method according to claim 1, wherein the exploitation comprises the determination of a user or group of user behavior or viewing behavior by setting thresholds on number of skips and subsequently classifying the user and/or classifying the video content. 10. A method according to claim 1, wherein the exploitation comprises the determination of video content popularity based on the number of times a user executes the trick play function. 11. A method according to claim 1, wherein the exploitation comprises the determination of the popularity of sub-segments within a video content based on how many users and/or how many times a user advances forwards and/or backwards to re-view a particular segment of a scene or video content and/or skip over a particular segment of a segment, scene or video content. 12. A method according to claim 1, wherein the exploitation comprises the possibility of delivering the video content in a differentiated and/or personalized pacing. 13. A method according to claim 1, wherein the exploitation comprises an estimation of a buffer utilization at a user or UE (User Equipment). 14. A method according to claim 1, wherein the provider delivers content chunks of a definable and/or popular segment of the video content at a higher rate than of other segments. 15. A method according to claim 14, wherein the definable and/or popular segment is a segment after which the user is expected to or might skip forward or backward. 16. A method according to claim 14, wherein the definable and/or popular segment is a segment into which the user is expected to or might skip forward or backward. 17. A method according to claim 1, wherein the information comprises a number of skip events and/or a skip location within the video content. 18. 
A method according to claim 1, wherein the information comprises the starting and the end point of a skip event within the video content. 19. A method according to claim 1, wherein the information comprises the skip direction within the video content. 20. A method according to claim 1, wherein the user receives a video content stream at a normal, variable, increased or decreased pace according to the exploitation by the provider and/or according to optimization target settings and/or according to its subscription profile. 21. A network, preferably for carrying out the method for determining user behavior during delivery of video content according to claim 1, wherein a user is requesting a delivery of a video content from a provider via the network, characterized in that the provider comprises means for exploiting information transmitted between the user and the provider due to the execution of a trick play function by the user for scheduling the delivery and/or determining video content popularity and/or user behavior.
2,400
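The trick-play analytics in claims 9-11 of the record above can be sketched in Python: bucket viewers by skip count and rank sub-segments by how often skips land in them. The thresholds, labels, segment length, and `(start_sec, end_sec)` event format are illustrative assumptions, not values from the application.

```python
from collections import Counter

def classify_user(skip_events, light=3, heavy=10):
    """Classify a viewer by number of trick-play skips (claim 9).
    Threshold values and category names are hypothetical."""
    n = len(skip_events)
    if n < light:
        return "linear viewer"
    if n < heavy:
        return "browser"
    return "skimmer"

def segment_popularity(skip_events, segment_len=30):
    """Count how often skips land in each fixed-length segment (claim 11).
    Each event is (start_sec, end_sec), where end_sec is the point at
    which the user resumed playback after the skip."""
    return Counter(end // segment_len for _, end in skip_events)
```

A provider could feed such per-user counts into delivery scheduling, for example delivering chunks of frequently skipped-into segments at a higher rate, as claims 14-16 describe.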
7,409
7,409
15,070,045
2,462
Aspects of the subject disclosure may include, for example, a client node device having a radio configured to wirelessly receive downstream channel signals from a communication network. An access point repeater (APR) launches the downstream channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a transmission medium and wirelessly transmits the downstream channel signals to at least one client device. Other embodiments are disclosed.
1. A client node device comprising: a communication interface configured to receive first channel signals from a communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves at non-optical frequencies that are bound to a physical structure of a transmission medium and to wirelessly transmit the first channel signals to at least one client device via an antenna. 2. The client node device of claim 1 wherein the transmission medium includes a dielectric member and the guided electromagnetic waves propagate along an outer surface of the dielectric member. 3. The client node device of claim 1, wherein the APR comprises: an amplifier configured to amplify the first channel signals to generate amplified first channel signals; a channel selection filter configured to select one or more of the amplified first channel signals to wirelessly communicate with the at least one client device; a coupler configured to guide the amplified first channel signals to the transmission medium of the guided wave communication system; and a channel duplexer configured to transfer the amplified first channel signals to the coupler and to the channel selection filter. 4. The client node device of claim 1, wherein the transmission medium includes a dielectric core surrounded by cladding and the guided electromagnetic waves are bound to an outer surface of the dielectric core. 5. The client node device of claim 1, wherein the communication interface is an analog radio that generates the first channel signals by downconverting radio frequency (RF) signals that have higher carrier frequencies relative to carrier frequencies of the first channel signals. 6. 
The client node device of claim 1, wherein the APR is further configured to extract second channel signals from the guided wave communication system; and wherein the communication interface wirelessly transmits the second channel signals to the communication network. 7. The client node device of claim 6, wherein the APR wirelessly receives third channel signals from the at least one client device; and wherein the communication interface wirelessly transmits the third channel signals to the communication network. 8. The client node device of claim 1 wherein the transmission medium is conductorless and the guided electromagnetic waves propagate along the transmission medium without an electrical return path. 9. The client node device of claim 1 wherein at least a portion of the first channel signals is formatted in accordance with a data over cable system interface specification (DOCSIS) protocol. 10. The client node device of claim 1, wherein at least a portion of the first channel signals is formatted in accordance with a fifth generation (5G) mobile wireless protocol. 11. A method comprising: receiving first channel signals from a communication network; launching the first channel signals on a guided wave communication system as guided electromagnetic waves that are bound to a transmission medium; and wirelessly transmitting the first channel signals to at least one client device via an antenna. 12. The method of claim 11, wherein the transmission medium includes a dielectric member and the guided electromagnetic waves propagate along the dielectric member. 13. The method of claim 11, wherein the wirelessly transmitting the first channel signals to the at least one client device comprises: amplifying the first channel signals to generate amplified first channel signals; selecting one or more of the amplified first channel signals; and wirelessly transmitting the one or more of the amplified first channel signals to the at least one client device via the antenna. 
14. The method of claim 11, wherein the launching the first channel signals on the guided wave communication system as guided electromagnetic waves comprises: amplifying the first channel signals to generate amplified first channel signals; and guiding the amplified first channel signals to the transmission medium of the guided wave communication system. 15. The method of claim 11, wherein the receiving the first channel signals from the communication network includes: downconverting radio frequency (RF) signals that have higher carrier frequencies compared with carrier frequencies of the first channel signals. 16. The method of claim 11, further comprising: extracting second channel signals from the guided wave communication system; and transmitting the second channel signals to the communication network. 17. The method of claim 11, wherein the transmission medium includes a dielectric core surrounded by cladding and the guided electromagnetic waves are bound to an outer surface of the dielectric core. 18. The method of claim 11, wherein the transmission medium is conductorless and the guided electromagnetic waves propagate along the transmission medium without an electrical return path. 19. A client node device comprising: a radio configured to receive first channel signals from a communication network and to transmit second channel signals and third channel signals to the communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a dielectric transmission medium, to extract the second channel signals from the guided wave communication system, to wirelessly transmit the first channel signals to at least one client device via an antenna and to receive the third channel signals from the communication network. 20. 
The client node device of claim 19, wherein the dielectric transmission medium includes a dielectric core surrounded by cladding and the guided electromagnetic waves are bound to an outer surface of the dielectric core.
Aspects of the subject disclosure may include, for example, a client node device having a radio configured to wirelessly receive downstream channel signals from a communication network. An access point repeater (APR) launches the downstream channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a transmission medium and wirelessly transmits the downstream channel signals to at least one client device. Other embodiments are disclosed.1. A client node device comprising: a communication interface configured to receive first channel signals from a communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves at non-optical frequencies that are bound to a physical structure of a transmission medium and to wirelessly transmit the first channel signals to at least one client device via an antenna. 2. The client node device of claim 1 wherein the transmission medium includes a dielectric member and the guided electromagnetic waves propagate along an outer surface of the dielectric member. 3. The client node device of claim 1, wherein the APR comprises: an amplifier configured to amplify the first channel signals to generate amplified first channel signals; a channel selection filter configured to select one or more of the amplified first channel signals to wirelessly communicate with the at least one client device; a coupler configured to guide the amplified first channel signals to the transmission medium of the guided wave communication system; and a channel duplexer configured to transfer the amplified first channel signals to the coupler and to the channel selection filter. 4. The client node device of claim 1, wherein the transmission medium includes a dielectric core surrounded by cladding and the guided electromagnetic waves are bound to an outer surface of the dielectric core. 5. 
The client node device of claim 1, wherein the communication interface is an analog radio that generates the first channel signals by downconverting radio frequency (RF) signals that have higher carrier frequencies relative to carrier frequencies of the first channel signals. 6. The client node device of claim 1, wherein the APR is further configured to extract second channel signals from the guided wave communication system; and wherein the communication interface wirelessly transmits the second channel signals to the communication network. 7. The client node device of claim 6, wherein the APR wirelessly receives third channel signals from the at least one client device; and wherein the communication interface wirelessly transmits the third channel signals to the communication network. 8. The client node device of claim 1 wherein the transmission medium is conductorless and the guided electromagnetic waves propagate along the transmission medium without an electrical return path. 9. The client node device of claim 1 wherein at least a portion of the first channel signals is formatted in accordance with a data over cable system interface specification (DOCSIS) protocol. 10. The client node device of claim 1, wherein at least a portion of the first channel signals is formatted in accordance with a fifth generation (5G) mobile wireless protocol. 11. A method comprising: receiving first channel signals from a communication network; launching the first channel signals on a guided wave communication system as guided electromagnetic waves that are bound to a transmission medium; and wirelessly transmitting the first channel signals to at least one client device via an antenna. 12. The method of claim 11, wherein the transmission medium includes a dielectric member and the guided electromagnetic waves propagate along the dielectric member. 13. 
The method of claim 11, wherein the wirelessly transmitting the first channel signals to the at least one client device comprises: amplifying the first channel signals to generate amplified first channel signals; selecting one or more of the amplified first channel signals; and wirelessly transmitting the one or more of the amplified first channel signals to the at least one client device via the antenna. 14. The method of claim 11, wherein the launching the first channel signals on the guided wave communication system as guided electromagnetic waves comprises: amplifying the first channel signals to generate amplified first channel signals; and guiding the amplified first channel signals to the transmission medium of the guided wave communication system. 15. The method of claim 11, wherein the receiving the first channel signals from the communication network includes: downconverting radio frequency (RF) signals that have higher carrier frequencies compared with carrier frequencies of the first channel signals. 16. The method of claim 11, further comprising: extracting second channel signals from the guided wave communication system; and transmitting the second channel signals to the communication network. 17. The method of claim 11, wherein the transmission medium includes a dielectric core surrounded by cladding and the guided electromagnetic waves are bound to an outer surface of the dielectric core. 18. The method of claim 11, wherein the transmission medium is conductorless and the guided electromagnetic waves propagate along the transmission medium without an electrical return path. 19. 
A client node device comprising: a radio configured to receive first channel signals from a communication network and to transmit second channel signals and third channel signals to the communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a dielectric transmission medium, to extract the second channel signals from the guided wave communication system, to wirelessly transmit the first channel signals to at least one client device via an antenna and to receive the third channel signals from the communication network. 20. The client node device of claim 19, wherein the dielectric transmission medium includes a dielectric core surrounded by cladding and the guided electromagnetic waves are bound to an outer surface of the dielectric core.
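The claims above describe a client node device with two signal paths: a radio facing the communication network, and an access point repeater (APR) that launches downlink ("first channel") signals as guided electromagnetic waves while relaying uplink ("second"/"third" channel) signals back toward the network. A minimal sketch, with hypothetical class and attribute names standing in for the claimed components:

```python
# Hypothetical model of the claimed client node device. The list attributes
# stand in for the dielectric transmission medium and the antenna's air
# interface; no real signal processing is performed.

class ClientNodeDevice:
    def __init__(self):
        self.guided_wave_medium = []   # stand-in for the dielectric transmission medium
        self.air_interface = []        # stand-in for wireless transmission via the antenna

    def handle_downlink(self, first_channel_signals):
        # Claim 11 steps: receive first channel signals, launch them as guided
        # waves bound to the transmission medium, and wirelessly transmit them.
        self.guided_wave_medium.extend(first_channel_signals)
        self.air_interface.extend(first_channel_signals)

    def handle_uplink(self, second_channel_signals, third_channel_signals):
        # Claims 6-7: extract second channel signals from the guided wave
        # system, receive third channel signals from the client device, and
        # forward both toward the communication network.
        return list(second_channel_signals) + list(third_channel_signals)

node = ClientNodeDevice()
node.handle_downlink(["s1", "s2"])
uplink = node.handle_uplink(["u1"], ["u2"])
```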
2,400
7,410
7,410
12,329,890
2,473
Methods and apparatus for reverse link acknowledgement in a wireless local area network. A method includes receiving, at a first node, a data communication over a common channel, the data communication being decodable by other nodes. The method also includes determining transmission resources from the data communication, wherein the transmission resources are different for each node, and transmitting a response over the common channel using the determined transmission resources. An apparatus includes a transmitter configured to transmit to a plurality of nodes a data communication over the common channel, and a receiver configured to receive responses from the plurality of nodes, wherein each response was sent using different transmission resources determined from the data communication.
1. A method for communication using a channel that is common to a plurality of nodes, the method comprising: receiving, at a first node of the plurality of nodes, a data communication over the common channel, the data communication being decodable by other nodes of the plurality of nodes; determining transmission resources from the data communication, wherein the transmission resources are different for each node; and transmitting a response over the common channel using the determined transmission resources. 2. The method of claim 1, wherein the transmission resources comprise some of space, frequency, time, data rate, and code resources. 3. The method of claim 1, wherein the determination comprises determining the transmission resources based on a protocol that is known to both the first node and a node that transmitted the data communication. 4. The method of claim 1, wherein the determination comprises determining the transmission resources based on a frame of the data communication in which transmission resources to be used by all nodes are specified. 5. The method of claim 1, wherein the determination comprises determining the transmission resources based on a frame of the data communication in which transmission resources specific to only the first node are specified. 6. An apparatus for communication using a channel that is common to a plurality of nodes and the apparatus, the apparatus comprising: a receiver configured to receive a data communication over the common channel, the data communication being decodable by the plurality of nodes; a controller configured to determine transmission resources from the data communication, wherein the transmission resources are different for each of the nodes and the apparatus; and a transmitter configured to transmit a response over the common channel using the determined transmission resources. 7. 
The apparatus of claim 6, wherein the transmission resources comprise some of space, frequency, time, data rate, and code resources. 8. The apparatus of claim 6, wherein the controller is configured to determine the transmission resources based on a protocol that is known to both the apparatus and a node that transmitted the data communication. 9. The apparatus of claim 6, wherein the controller is configured to determine the transmission resources based on a frame of the data communication in which transmission resources to be used by the plurality of nodes and the apparatus are specified. 10. The apparatus of claim 6, wherein the controller is configured to determine the transmission resources based on a frame of the data communication in which transmission resources specific to only the apparatus are specified. 11. An apparatus for communication using a channel that is common to a plurality of nodes and the apparatus, the apparatus comprising: means for receiving a data communication over the common channel, the data communication being decodable by the plurality of nodes; means for determining transmission resources from the data communication, wherein the transmission resources are different for each of the nodes and the apparatus; and means for transmitting a response over the common channel using the determined transmission resources. 12. The apparatus of claim 11, wherein the transmission resources comprise some of space, frequency, time, data rate, and code resources. 13. The apparatus of claim 11, wherein said means for determining comprises means for determining the transmission resources based on a protocol that is known to both the apparatus and a node that transmitted the data communication. 14. 
The apparatus of claim 11, wherein said means for determining comprises means for determining the transmission resources based on a frame of the data communication in which transmission resources to be used by the plurality of nodes and the apparatus are specified. 15. The apparatus of claim 11, wherein said means for determining comprises means for determining the transmission resources based on a frame of the data communication in which transmission resources specific to only the apparatus are specified. 16. A computer program product for communication using a channel that is common to a plurality of nodes, the computer program product comprising: a computer-readable medium encoded with codes executable to: receive, at a first node of the plurality of nodes, a data communication over the common channel, the data communication being decodable by other nodes of the plurality of nodes; determine transmission resources from the data communication, wherein the transmission resources are different for each node; and transmit a response over the common channel using the determined transmission resources. 17. An access terminal for communication using a channel that is common to a plurality of nodes and the access terminal, the access terminal comprising: an antenna; a receiver configured to receive, via the antenna, a data communication over the common channel, the data communication being decodable by the plurality of nodes; a controller configured to determine transmission resources from the data communication, wherein the transmission resources are different for each of the plurality of nodes and the access terminal; and a transmitter configured to transmit a response over the common channel using the determined transmission resources. 18. 
A method for communication using a channel that is common to a plurality of nodes, the method comprising: transmitting to the plurality of nodes a data communication over the common channel; and receiving responses from the plurality of nodes, wherein each response was sent using different transmission resources determined from the data communication. 19. The method of claim 18, wherein the data communication comprises information specifying selected transmission resources to be used by each node. 20. The method of claim 18, wherein the transmission resources comprise some of space, frequency, time, data rate, and code resources. 21. The method of claim 18, wherein the data communication comprises a frame in which transmission resources to be used by all nodes are specified. 22. The method of claim 18, wherein the data communication comprises a first frame in which transmission resources to be used by only a selected node are specified. 23. The method of claim 22, wherein the data communication further comprises a second frame in which transmission resources to be used by only a selected second node are specified, and wherein the transmission comprises transmitting the first and second frames at different rates or at the same rate. 24. The method of claim 22, wherein the data communication further comprises a second frame in which transmission resources to be used by only a selected second node are specified, and wherein the transmission comprises transmitting the first and second frames at different times or at the same time. 25. An apparatus for communication using a channel that is common to a plurality of nodes and the apparatus, the apparatus comprising: a transmitter configured to transmit to the plurality of nodes a data communication over the common channel; and a receiver configured to receive responses from the plurality of nodes, wherein each response was sent using different transmission resources determined from the data communication. 26. 
The apparatus of claim 25, wherein the data communication comprises information specifying a selected transmission resource to be used by each node. 27. The apparatus of claim 25, wherein the transmission resources comprise some of space, frequency, time, data rate, and code resources. 28. The apparatus of claim 25, wherein the data communication comprises a frame in which transmission resources to be used by all nodes are specified. 29. The apparatus of claim 25, wherein the data communication comprises a first frame in which transmission resources to be used by only a selected node are specified. 30. The apparatus of claim 29, wherein the data communication further comprises a second frame in which transmission resources to be used by only a selected second node are specified, and wherein said transmitter is configured to transmit the first and second frames at different rates or at the same rate. 31. The apparatus of claim 29, wherein the data communication further comprises a second frame in which transmission resources to be used by only a selected second node are specified, and wherein said transmitter is configured to transmit the first and second frames at different times or at the same time. 32. An apparatus for communication using a channel that is common to a plurality of nodes and the apparatus, the apparatus comprising: means for transmitting to the plurality of nodes a data communication over the common channel; and means for receiving responses from the plurality of nodes, wherein each response was sent using different transmission resources determined from the data communication. 33. The apparatus of claim 32, wherein the data communication comprises information specifying selected transmission resources to be used by each node. 34. The apparatus of claim 32, wherein the transmission resources comprise some of space, frequency, time, data rate, and code resources. 35. 
The apparatus of claim 32, wherein the data communication comprises a frame in which transmission resources to be used by all nodes are specified. 36. The apparatus of claim 32, wherein the data communication comprises a first frame in which transmission resources to be used by only a selected node are specified. 37. The apparatus of claim 36, wherein the data communication further comprises a second frame in which transmission resources to be used by only a selected second node are specified, and wherein said means for transmitting comprises means for transmitting the first and second frames at different rates or at the same rate. 38. The apparatus of claim 36, wherein the data communication further comprises a second frame in which transmission resources to be used by only a selected second node are specified, and wherein said means for transmitting comprises means for transmitting the first and second frames at different times or at the same time. 39. A computer program product for communication using a channel that is common to a plurality of nodes, the computer program product comprising: a computer-readable medium encoded with codes executable to: transmit to the plurality of nodes a data communication over the common channel; and receive responses from the plurality of nodes, wherein each response was sent using different transmission resources determined from the data communication. 40. An access point for communication using a channel that is common to a plurality of nodes and the access point, the access point comprising: an antenna; a transmitter configured to transmit, via the antenna, a data communication over the common channel to the plurality of nodes; and a receiver configured to receive responses from the plurality of nodes, wherein each response was sent using different transmission resources determined from the data communication.
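The core mechanism of the reverse-link acknowledgement claims (in particular claims 1, 4-5, and 21-22) is that the data communication itself carries a frame specifying distinct transmission resources for each node, and every node derives its own resources from that frame before responding on the common channel. An illustrative sketch, with hypothetical data shapes:

```python
# Hypothetical frame layout: a per-node table mapping each node to distinct
# transmission resources (time slot, frequency). Each node extracts its own
# entry and uses it for its response on the common channel.

def determine_resources(frame, node_id):
    # Claim 1: the transmission resources are different for each node.
    return frame["resources"][node_id]

def respond(frame, node_id):
    res = determine_resources(frame, node_id)
    return {
        "node": node_id,
        "ack": True,
        "time_slot": res["time_slot"],
        "frequency": res["frequency"],
    }

frame = {
    "payload": b"data",
    "resources": {
        "node-a": {"time_slot": 0, "frequency": 5180},
        "node-b": {"time_slot": 1, "frequency": 5180},
    },
}
responses = [respond(frame, node_id) for node_id in frame["resources"]]
```

Because the resources are disjoint per node, the responses do not collide even though all nodes answer over the same common channel.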
2,400
7,411
7,411
14,053,260
2,453
Generating a user unavailability alert in a collaborative environment. An embodiment can include receiving a user input from a user indicating an unavailability of the user. Responsive to the user input, activity of the user in the collaborative environment can be analyzed to identify whether any pending actions are allocated to the user which relate to other people identified by the user's participation in the collaborative environment. Responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, a first message can be generated to be communicated to the at least one other person indicating the unavailability of the user, and the first message can be communicated to the at least one other person.
1. A method of generating a user unavailability alert in a collaborative environment, the method comprising: receiving a user input from a user indicating an unavailability of the user; responsive to the user input, via a processor, analyzing activity of the user in the collaborative environment to identify whether any pending actions are allocated to the user which relate to other people identified by the user's participation in the collaborative environment; responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, generating a first message to be communicated to the at least one other person indicating the unavailability of the user; and communicating the first message to the at least one other person. 2. The method of claim 1, wherein the first message further indicates that a completion time or date of the pending action allocated to the user will be affected by the unavailability of the user. 3. The method of claim 2, further comprising: automatically estimating a new completion time or date for the pending action allocated to the user; wherein the first message further indicates the new completion time or date for the pending action. 4. The method of claim 1, further comprising: responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, automatically determining a probability the user will be able to complete the pending action by an assigned time or date, wherein the first message further indicates the probability the user will be able to complete the pending action by the assigned time or date. 5. 
The method of claim 1, further comprising: responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, automatically re-scheduling the pending action. 6. The method of claim 1, wherein: the first message is a personalized audio message to be presented to the at least one other person identified by the user's participation in the collaborative environment; and the personalized audio message is telephonically communicated to the at least one other person in response to a telephone call to the user being received from the at least one other person. 7. The method of claim 1, further comprising: responsive to determining a user input indicating an availability of the user is not received within a specified period of time subsequent to the user input, generating a second message to the at least one other person indicating the unavailability of the user. 8. The method of claim 7, further comprising: automatically estimating a new completion time or date of the pending action allocated to the user; wherein the second message further indicates the new completion time or date. 9. 
A method of generating a user unavailability alert in a collaborative environment, the method comprising: receiving a user input from a user indicating an unavailability of the user; responsive to the user input, via a processor, analyzing activity of the user in the collaborative environment to identify whether any pending actions are allocated to the user which relate to other people identified by the user's participation in the collaborative environment; responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, generating a first message to be communicated to the at least one other person indicating the unavailability of the user; and communicating the first message to the at least one other person, the first message indicating a new completion time or date for the pending action. 10-25. (canceled)
Generating a user unavailability alert in a collaborative environment. An embodiment can include receiving a user input from a user indicating an unavailability of the user. Responsive to the user input, activity of the user in the collaborative environment can be analyzed to identify whether any pending actions are allocated to the user which relate to other people identified by the user's participation in the collaborative environment. Responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, a first message can be generated to be communicated to the at least one other person indicating the unavailability of the user, and the first message can be communicated to the at least one other person.1. A method of generating a user unavailability alert in a collaborative environment, the method comprising: receiving a user input from a user indicating an unavailability of the user; responsive to the user input, via a processor, analyzing activity of the user in the collaborative environment to identify whether any pending actions are allocated to the user which relate to other people identified by the user's participation in the collaborative environment; responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, generating a first message to be communicated to the at least one other person indicating the unavailability of the user; and communicating the first message to the at least one other person. 2. The method of claim 1, wherein the first message further indicates that a completion time or date of the pending action allocated to the user will be affected by the unavailability of the user. 3. 
The method of claim 2, further comprising: automatically estimating a new completion time or date for the pending action allocated to the user; wherein the first message further indicates the new completion time or date for the pending action. 4. The method of claim 1, further comprising: responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, automatically determining a probability the user will be able to complete the pending action by an assigned time or date, wherein the first message further indicates the probability the user will be able to complete the pending action by the assigned time or date. 5. The method of claim 1, further comprising: responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, automatically re-scheduling the pending action. 6. The method of claim 1, wherein: the first message is a personalized audio message to be presented to the at least one other person identified by the user's participation in the collaborative environment; and the personalized audio message is telephonically communicated to the at least one other person in response to a telephone call to the user being received from the at least one of the other person. 7. The method of claim 1, further comprising: responsive to determining a user input indicating an availability of the user is not received within a specified period of time subsequent to the user input, generating a second message to the at least one other person indicating the unavailability of the user. 8. The method of claim 7, further comprising: automatically estimating a new completion time or date of the pending action allocated to the user; wherein the second message further indicates the new completion time or date. 9. 
A method of generating a user unavailability alert in a collaborative environment, the method comprising: receiving a user input from a user indicating an unavailability of the user; responsive to the user input, via a processor, analyzing activity of the user in the collaborative environment to identify whether any pending actions are allocated to the user which relate to other people identified by the user's participation in the collaborative environment; responsive to determining at least one pending action is allocated to the user which relates to at least one other person identified by the user's participation in the collaborative environment, generating a first message to be communicated to the at least one other person indicating the unavailability of the user; and communicating the first message to the at least one other person, the first message indicating a new completion time or date for the pending action. 10-25. (canceled)
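The claimed method above boils down to a scan of the user's pending actions followed by a notification fan-out to the related people. A minimal Python sketch under stated assumptions (the `PendingAction` fields, the `send` callback, and the message wording are all illustrative, not taken from the patent):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PendingAction:
    assignee: str            # user the action is allocated to (assumed field)
    stakeholders: List[str]  # other people the action relates to (assumed field)
    description: str

def generate_unavailability_alerts(
    user: str,
    actions: List[PendingAction],
    send: Callable[[str, str], None],
) -> List[Tuple[str, str]]:
    """Analyze the user's pending actions; for each action allocated to the
    unavailable user, generate and communicate a message to each related person."""
    sent = []
    for action in actions:
        if action.assignee != user:
            continue  # not allocated to the unavailable user
        for person in action.stakeholders:
            msg = f"{user} is unavailable; '{action.description}' may be delayed."
            send(person, msg)  # communicate the first message
            sent.append((person, msg))
    return sent
```

A delivery channel (e-mail, telephony per claim 6) would be plugged in via the `send` callback; here it is left abstract so the analysis step stands alone.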
2,400
7,412
7,412
14,935,791
2,425
Roughly described, a system and method for delivering video content to a user's client device in a video-on-demand (VOD) system, which includes providing a collection of video segments, the segments having a predefined default sequence; establishing a streaming video session according to a session-oriented protocol; transmitting toward the client device a script executable by the client device, the script operable to transmit navigational codes toward the head-end equipment in response to and indicating user selection among navigational choices; beginning transmission of the video segments in the collection toward the client device in accordance with the default sequence of segments; and in response to receipt of one of the navigational codes, and without tearing down the streaming video session, altering the transmission sequence to jump to the segment that the user selected.
1-31. (canceled) 32. One or more non-transitory computer-readable storage mediums storing one or more sequences of instructions for playing video content on a user's client device in a video-on-demand system, the video content including a first collection of video segments, the video segments having a predefined default sequence in the first collection, wherein execution of the one or more sequences by one or more processors cause: establishing a streaming video session according to a session-oriented protocol for streaming video from head-end equipment; receiving from head-end equipment a script executable by the client device, the script operable to transmit codes toward the head-end equipment in response to and indicative of user selection among navigational choices; receiving transmission of video segments in the first collection in accordance with the default sequence for the first collection; in response to detection during the streaming video session of user selection of a selected one of the segments in the first collection, transmitting toward the head-end equipment a code indicated by the script that identifies the selected segment without tearing down the streaming video session; and subsequently, before tearing down the streaming video session, receiving transmission of the selected segment. 33. The one or more non-transitory computer-readable storage mediums according to claim 32, wherein the user selection of a selected segment is detected during transmission of a current one of the video segments in the first collection, wherein the script is operable to transmit a first code toward the head-end equipment in response to user selection of a previous segment in the predefined default sequence and a second code toward the head-end equipment in response to user selection of a next segment in the predefined default sequence, and wherein the transmitted code is a member of the group consisting of the first and second codes. 34. 
The one or more non-transitory computer-readable storage mediums according to claim 32, wherein the selected segment has an identifier associated therewith, and wherein the transmitted code includes the identifier. 35. The one or more non-transitory computer-readable storage mediums according to claim 32, wherein at least a subset of the segments in the first collection have associated therewith respective scripts each operable to transmit codes toward the head-end equipment in response to and indicative of user selection among navigational choices, and wherein receiving from head-end equipment a script executable by the client device comprises: in association with receipt of each current segment in the subset of segments, receiving from the head-end equipment the script associated with the current segment. 36. The one or more non-transitory computer-readable storage mediums according to claim 35, wherein the script associated with each given one of the segments in the subset includes one or more of a first identifier for the segment that precedes the given segment in the default sequence or a second identifier for the segment that follows the given segment in the default sequence, and wherein the transmitted code comprises or corresponds to the first identifier or the second identifier. 37. The one or more non-transitory computer-readable storage mediums according to claim 32, wherein a navigational code is transmitted during transmission of a current one of the segments, and wherein the step of receiving transmission of the selected segment begins before completing receipt of the current segment. 38. 
The one or more non-transitory computer-readable storage mediums according to claim 32, wherein the video content further includes a second collection of video segments, the segments in the second collection being addressably stored to head-end equipment, and wherein execution of the one or more sequences of instructions further cause: in response to detection during the streaming video session of user selection of the second collection, transmitting toward the head-end equipment a code indicated by the script that identifies the second collection without tearing down the streaming video session. 39. The one or more non-transitory computer-readable storage mediums according to claim 32, wherein a particular one of the segments in the first collection is associated with a particular video asset, and wherein execution of the one or more sequences of instructions further cause: in response to a user asset selection request detected during receipt of the particular segment: tearing down the streaming video session according to the session-oriented protocol, and initiating a new streaming video session for receiving the particular video asset from the head-end. 40. The one or more non-transitory computer-readable storage mediums according to claim 32, wherein at least a subset of the video segments in the first collection are associated with respective video assets deliverable toward the client device, and wherein execution of the one or more sequences of instructions further cause: receiving, in association with receipt of each given one of the video segments in the subset, an asset identifier for the video asset associated with the given video segment. 41. 
The one or more non-transitory computer-readable storage mediums according to claim 32, wherein each segment, in at least a subset of the first collection, includes spatially composited therewith one or more of a first cue that visibly pre-indicates the segment that is next in the default sequence or a second cue that visibly pre-indicates the segment that is previous in the default sequence. 42. An apparatus for playing video content on a user's client device in a video-on-demand system, the video content including a first collection of video segments, the video segments having a predefined default sequence in the first collection, comprising: one or more processors; and one or more non-transitory computer-readable storage mediums storing one or more sequences of instructions, which when executed, cause: establishing a streaming video session according to a session-oriented protocol for streaming video from head-end equipment; receiving from head-end equipment a script executable by the client device, the script operable to transmit codes toward the head-end equipment in response to and indicative of user selection among navigational choices; receiving transmission of video segments in the first collection in accordance with the default sequence for the first collection; in response to detection during the streaming video session of user selection of a selected one of the segments in the first collection, transmitting toward the head-end equipment a code indicated by the script that identifies the selected segment without tearing down the streaming video session; and subsequently, before tearing down the streaming video session, receiving transmission of the selected segment. 43. 
The apparatus according to claim 42, wherein the user selection of a selected segment is detected during transmission of a current one of the video segments in the first collection, wherein the script is operable to transmit a first code toward the head-end equipment in response to user selection of a previous segment in the predefined default sequence and a second code toward the head-end equipment in response to user selection of a next segment in the predefined default sequence, and wherein the transmitted code is a member of the group consisting of the first and second codes. 44. The apparatus according to claim 42, wherein the selected segment has an identifier associated therewith, and wherein the transmitted code includes the identifier. 45. The apparatus according to claim 42, wherein at least a subset of the segments in the first collection have associated therewith respective scripts each operable to transmit codes toward the head-end equipment in response to and indicative of user selection among navigational choices, and wherein receiving from head-end equipment a script executable by the client device comprises: in association with receipt of each current segment in the subset of segments, receiving from the head-end equipment the script associated with the current segment. 46. The apparatus according to claim 45, wherein the script associated with each given one of the segments in the subset includes one or more of a first identifier for the segment that precedes the given segment in the default sequence or a second identifier for the segment that follows the given segment in the default sequence, and wherein the transmitted code comprises or corresponds to the first identifier or the second identifier. 47. 
The apparatus according to claim 42, wherein a navigational code is transmitted during transmission of a current one of the segments, and wherein the step of receiving transmission of the selected segment begins before completing receipt of the current segment. 48. The apparatus according to claim 42, wherein the video content further includes a second collection of video segments, the segments in the second collection being addressably stored to head-end equipment, and wherein execution of the one or more sequences of instructions further cause: in response to detection during the streaming video session of user selection of the second collection, transmitting toward the head-end equipment a code indicated by the script that identifies the second collection without tearing down the streaming video session. 49. The apparatus according to claim 42, wherein a particular one of the segments in the first collection is associated with a particular video asset, and wherein execution of the one or more sequences of instructions further cause: in response to a user asset selection request detected during receipt of the particular segment: tearing down the streaming video session according to the session-oriented protocol, and initiating a new streaming video session for receiving the particular video asset from the head-end. 50. The apparatus according to claim 42, wherein at least a subset of the video segments in the first collection are associated with respective video assets deliverable toward the client device, and wherein execution of the one or more sequences of instructions further cause: receiving, in association with receipt of each given one of the video segments in the subset, an asset identifier for the video asset associated with the given video segment. 51. 
The apparatus according to claim 42, wherein each segment, in at least a subset of the first collection, includes spatially composited therewith one or more of a first cue that visibly pre-indicates the segment that is next in the default sequence or a second cue that visibly pre-indicates the segment that is previous in the default sequence.
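The core of the claimed VOD flow is that segments stream in their default order and a navigational code reorders the stream mid-session, without tearing the session down. A toy Python model under stated assumptions (the `VodSession` class, its method names, and the string segment identifiers are all illustrative; a real system would carry these codes over a session-oriented protocol such as RTSP):

```python
class VodSession:
    """Toy model: segments are transmitted in the predefined default
    sequence; a navigational code jumps to the selected segment while
    the session stays up. All names here are illustrative assumptions."""

    def __init__(self, segments):
        self.segments = list(segments)  # predefined default sequence
        self.index = 0                  # next segment to transmit
        self.torn_down = False

    def next_segment(self):
        """Transmit the next segment in the current sequence, if any."""
        if self.torn_down or self.index >= len(self.segments):
            return None
        seg = self.segments[self.index]
        self.index += 1
        return seg

    def navigate(self, segment_id):
        """Handle a navigational code from the client-side script:
        alter the transmission sequence to jump to the selected segment."""
        self.index = self.segments.index(segment_id)  # no teardown

    def tear_down(self):
        self.torn_down = True
```

Claim 37's behavior (the jump taking effect before the current segment finishes) corresponds here to calling `navigate` between `next_segment` calls: the very next transmission is the selected segment.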
2,400
7,413
7,413
14,319,736
2,451
Implementations described and claimed herein provide systems and methods for investigating, tracking, preventing, and providing accountability for telecommunication network outages, particularly human error outages. In one implementation, a ticket is received for a network outage. The ticket specifies a responsible team and an estimated outage reason for the network outage. The estimated outage reason indicates a human error by the responsible team. A notification of the ticket is provided to the responsible team. The notification prompts an action by the responsible team. The action specifies whether the team made the human error. The ticket is completed based on the action and stored. The completed ticket details a root cause of the network outage and a performance management strategy for preventing future network outages similar to the network outage.
1. A method for investigating a network outage of one or more telecommunication clients comprising: receiving a ticket for the network outage, the ticket specifying a responsible team and an estimated outage reason for the network outage, the estimated outage reason indicating a human error by the responsible team; providing a notification of the ticket to the responsible team using at least one computing unit, the notification prompting an action by the responsible team, the action specifying whether the responsible team made the human error; and storing a completed ticket based on the action using the at least one computing unit, the completed ticket detailing a root cause of the network outage and a performance management strategy for preventing future network outages similar to the network outage. 2. The method of claim 1, wherein the action includes validating the ticket where the responsible team made the human error. 3. The method of claim 2, wherein the root cause detailed in the completed ticket includes one or more causes of the human error made by the responsible team. 4. The method of claim 1, wherein the action includes disputing the ticket where the responsible team did not make the human error, the action specifying one or more reasons for disputing the ticket. 5. The method of claim 4, wherein the one or more reasons for disputing the ticket include that another team made the human error. 6. The method of claim 5, wherein disputing the ticket assigns the ticket to the other team for investigation. 7. The method of claim 5, wherein the root cause detailed in the completed ticket includes one or more causes of the human error made by the other team. 8. The method of claim 4, wherein the root cause detailed in the completed ticket was not the human error. 9. The method of claim 1, wherein the performance management strategy includes at least one preventative measure and a deadline for completing the at least one preventative measure. 10. 
One or more non-transitory tangible computer-readable storage media storing computer-executable instructions for performing a computer process on a computing system, the computer process comprising: receiving a plurality of tickets, each of the tickets including an assigned team for investigating a network outage, the assigned team selected from a plurality of teams; storing the tickets in one or more databases; tracking a status of each of the tickets, the status designated based on a responsibility of the assigned team for causing the network outage; generating consolidated analytics based on the tracked statuses of the tickets; and outputting the consolidated analytics. 11. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the status is designated new where the assigned team is estimated to be responsible for causing the network outage. 12. The one or more non-transitory tangible computer-readable storage media of claim 11, wherein the consolidated analytics includes a correlation of the tracked statuses that are designated new with at least one of the teams. 13. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the status is designated validated where the assigned team confirms responsibility for causing the network outage. 14. The one or more non-transitory tangible computer-readable storage media of claim 13, wherein the consolidated analytics includes a correlation of the tracked statuses that are designated validated with at least one of: one or more root causes; one or more preventative measures; one or more of the teams. 15. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the status is designated disputed where the assigned team disputes responsibility for causing the network outage. 16. 
The one or more non-transitory tangible computer-readable storage media of claim 15, wherein the consolidated analytics includes a correlation of the tracked statuses that are designated disputed with at least one of the teams and a dispute status. 17. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the network outage is a human error outage. 18. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the consolidated analytics are output for display on a graphical user interface. 19. A system for improving client satisfaction comprising: one or more databases storing a plurality of tickets, each of the tickets including an assigned team for investigating a service event, the assigned team determined based on an estimated cause of the service event; and at least one server in communication with the one or more databases, the at least one server configured to track investigation information and performance management information for each of the tickets, the investigation information including a root cause of the service event and a responsible team accountable for the root cause, the performance management information including at least one preventative measure to prevent similar service events. 20. The system of claim 19, wherein the service event is a network outage.
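The ticket lifecycle described in these claims is a small state machine (new → validated or disputed, with disputed tickets optionally reassigned to another team) plus an analytics pass that correlates statuses with teams. A minimal Python sketch under stated assumptions (the `OutageTicket` class and its method names are illustrative, not from the patent):

```python
from collections import Counter

class OutageTicket:
    """Ticket whose status tracks the assigned team's responsibility:
    'new' on creation, 'validated' if the team confirms the human error,
    'disputed' (optionally reassigned to another team) if it does not."""

    def __init__(self, team, outage_reason):
        self.team = team
        self.outage_reason = outage_reason
        self.status = "new"

    def validate(self):
        self.status = "validated"

    def dispute(self, other_team=None):
        self.status = "disputed"
        if other_team is not None:
            self.team = other_team  # assign to the other team for investigation

def consolidated_analytics(tickets):
    """Correlate tracked statuses with teams, per claim 10's consolidated
    analytics; a fuller system would also correlate root causes and
    preventative measures (claim 14)."""
    return Counter((t.team, t.status) for t in tickets)
```

The `Counter` output can then be rendered on a graphical user interface (claim 18) or otherwise output for review.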
Implementations described and claimed herein provide systems and methods for investigating, tracking, preventing, and providing accountability for telecommunication network outages, particularly human error outages. In one implementation, a ticket is received for a network outage. The ticket specifies a responsible team and an estimated outage reason for the network outage. The estimated outage reason indicates a human error by the responsible team. A notification of the ticket is provided to the responsible team. The notification prompts an action by the responsible team. The action specifies whether the team made the human error. The ticket is completed based on the action and stored. The completed ticket details a root cause of the network outage and a performance management strategy for preventing future network outages similar to the network outage.1. A method for investigating a network outage of one or more telecommunication clients comprising: receiving a ticket for the network outage, the ticket specifying a responsible team and an estimated outage reason for the network outage, the estimated outage reason indicating a human error by the responsible team; providing a notification of the ticket to the responsible team using at least one computing unit, the notification prompting an action by the responsible team, the action specifying whether the responsible team made the human error; and storing a completed ticket based on the action using the at least one computing unit, the completed ticket detailing a root cause of the network outage and a performance management strategy for preventing future network outages similar to the network outage. 2. The method of claim 1, wherein the action includes validating the ticket where the responsible team made the human error. 3. The method of claim 2, wherein the root cause detailed in the completed ticket includes one or more causes of the human error made by the responsible team. 4. 
The method of claim 1, wherein the action includes disputing the ticket where the responsible team did not make the human error, the action specifying one or more reasons for disputing the ticket. 5. The method of claim 4, wherein the one or more reasons for disputing the ticket include that another team made the human error. 6. The method of claim 5, wherein disputing the ticket assigns the ticket to the other team for investigation. 7. The method of claim 5, wherein the root cause detailed in the completed ticket includes one or more causes of the human error made by the other team. 8. The method of claim 4, wherein the root cause detailed in the completed ticket was not the human error. 9. The method of claim 1, wherein the performance management strategy includes at least one preventative measure and a deadline for completing the at least one preventative measure. 10. One or more non-transitory tangible computer-readable storage media storing computer-executable instructions for performing a computer process on a computing system, the computer process comprising: receiving a plurality of tickets, each of the tickets including an assigned team for investigating a network outage, the assigned team selected from a plurality of teams; storing the tickets in one or more databases; tracking a status of each of the tickets, the status designated based on a responsibility of the assigned team for causing the network outage; generating consolidated analytics based on the tracked statuses of the tickets; and outputting the consolidated analytics. 11. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the status is designated new where the assigned team is estimated to be responsible for causing the network outage. 12. 
The one or more non-transitory tangible computer-readable storage media of claim 11, wherein the consolidated analytics includes a correlation of the tracked statuses that are designated new with at least one of the teams. 13. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the status is designated validated where the assigned team confirms responsibility for causing the network outage. 14. The one or more non-transitory tangible computer-readable storage media of claim 13, wherein the consolidated analytics includes a correlation of the tracked statuses that are designated validated with at least one of: one or more root causes; one or more preventative measures; one or more of the teams. 15. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the status is designated disputed where the assigned team disputes responsibility for causing the network outage. 16. The one or more non-transitory tangible computer-readable storage media of claim 15, wherein the consolidated analytics includes a correlation of the tracked statuses that are designated disputed with at least one of the teams and a dispute status. 17. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the network outage is a human error outage. 18. The one or more non-transitory tangible computer-readable storage media of claim 10, wherein the consolidated analytics are output for display on a graphical user interface. 19. 
A system for improving client satisfaction comprising: one or more databases storing a plurality of tickets, each of the tickets including an assigned team for investigating a service event, the assigned team determined based on an estimated cause of the service event; and at least one server in communication with the one or more databases, the at least one server configured to track investigation information and performance management information for each of the tickets, the investigation information including a root cause of the service event and a responsible team accountable for the root cause, the performance management information including at least one preventative measure to prevent similar service events. 20. The system of claim 19, wherein the service event is a network outage.
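The claims above describe a ticket lifecycle: a ticket starts assigned to the team estimated to have caused the outage, and that team either validates it (confirming the human error and recording a root cause plus preventative measures) or disputes it (stating reasons, possibly reassigning it to another team), with consolidated analytics correlating tracked statuses to teams. A minimal sketch of that workflow, with all class, method, and team names hypothetical and not drawn from the patent:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Illustrative sketch of the claimed ticket lifecycle: new -> validated | disputed.
# All names here are assumptions for illustration, not the patent's own API.

@dataclass
class Ticket:
    assigned_team: str
    estimated_outage_reason: str
    status: str = "new"                       # designated new on assignment (claim 11)
    root_cause: Optional[str] = None
    preventative_measures: List[str] = field(default_factory=list)
    dispute_reasons: List[str] = field(default_factory=list)

    def validate(self, root_cause: str, measures: List[str]) -> None:
        """Assigned team confirms responsibility for the outage (claim 13)."""
        self.status = "validated"
        self.root_cause = root_cause
        self.preventative_measures = measures

    def dispute(self, reasons: List[str], other_team: Optional[str] = None) -> None:
        """Assigned team disputes responsibility; the ticket may be
        reassigned to another team for investigation (claims 5-6, 15)."""
        self.status = "disputed"
        self.dispute_reasons = reasons
        if other_team is not None:
            self.assigned_team = other_team

def consolidated_analytics(tickets: List[Ticket]) -> Dict[str, Dict[str, int]]:
    """Correlate tracked statuses with teams (claims 12, 14, 16)."""
    summary: Dict[str, Dict[str, int]] = {}
    for t in tickets:
        counts = summary.setdefault(t.assigned_team,
                                    {"new": 0, "validated": 0, "disputed": 0})
        counts[t.status] += 1
    return summary
```

A completed ticket in this sketch is simply one whose status has left "new" and whose root cause and preventative measures have been filled in, matching the investigation and performance-management information tracked per ticket in claim 19.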
2,400
7,414
7,414
13,977,936
2,482
Multi-layer dependencies are signaled in an efficient way for a multi-view video stream (1). Coding or decoding relationship information defining coding or decoding dependencies is represented in bit-efficient syntax code, preferably through usage of hierarchical layer dependencies using layer indices for representing layer dependencies.
1-25. (canceled) 26. A method, in a processing circuit, of determining a decoding relationship for a digitally coded multi-layer video stream defining multiple layers of pictures, said method comprising: retrieving, based on said digitally coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and determining information defining any decoding relationship between said multiple layers based on said at least one direct decoding flag. 27. The method of claim 26, wherein retrieving said at least one direct decoding flag comprises retrieving, for said layer with layer index i, a respective direct dependency flag for each layer index j based on said coded multi-layer video stream, wherein j<i and said direct dependency flag indicates whether said layer with layer index j is a direct reference layer for said layer with layer index i; and wherein determining said information comprises determining information defining any layer with layer index j<i which said layer with layer index i depends on, based on said direct dependency flags. 28. The method of claim 26, wherein retrieving said at least one direct decoding flag comprises retrieving, from a video parameter set or video parameter set extension associated with said coded multi-layer video stream, said at least one direct decoding flag indicating said direct coding relationship between said layer with layer index i and said layer with layer index j. 29. 
A method, in a processing circuit, of decoding a digitally coded multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said method comprising: retrieving, for a layer with a layer index of said multiple layers, decoding relationship information based on said digitally coded multi-layer video stream, said decoding relationship information defining a respective layer index of any reference layer of said multiple layers on which said layer directly depends; mapping, for each reference layer and for said layer, its layer index to a layer identifier based on mapping information of a hierarchical mapping relationship between layer identifiers and layer indices, wherein said mapping information is retrieved based on said digitally coded multi-layer video stream; and decoding a picture of said layer based on at least one previously decoded picture in a layer of said multiple layers identified based on said layer identifiers mapped from layer indices. 30. The method of claim 29, wherein retrieving decoding relationship information comprises: retrieving, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and determining said decoding relationship information based on said at least one direct decoding flag. 31. 
The method of claim 29, wherein mapping its layer index to a layer identifier comprises: retrieving a flag vps_nuh_layer_id_present_flag based on said coded multi-layer video stream; setting, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=0, its layer identifier equal to its layer index; and retrieving, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=1, its layer identifier from a vector layer_id_in_nuh[i], i∈[1, vps_max_layers_minus1], wherein vps_max_layers_minus1+1 indicates a maximum number of layers and layer_id_in_nuh[i] indicates a layer identifier for a layer with layer index i. 32. A device for determining a decoding relationship for a digitally coded multi-layer video stream defining multiple layers of pictures, said device comprising: a flag retriever configured to retrieve, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and an information determiner configured to determine information defining any decoding relationship between said multiple layers based on said at least one direct decoding flag. 33. The device of claim 32, wherein said flag retriever is configured to retrieve, for said layer with layer index i, a respective direct dependency flag for each layer index j based on said coded multi-layer video stream, wherein j is less than i and wherein said direct dependency flag indicates whether said layer with layer index j is a direct reference layer for said layer with layer index i; and wherein said information determiner is configured to determine information defining any layer with layer index j less than i which said layer with layer index i depends on, based on said direct dependency flags. 34. 
The device of claim 32, wherein said flag retriever is configured to retrieve, from a video parameter set or video parameter set extension associated with said coded multi-layer video stream, said at least one direct decoding flag indicating said direct coding relationship between said layer with layer index i and said layer with layer index j. 35. A device for determining a decoding relationship for a digitally coded multi-layer video stream defining multiple layers of pictures, said device comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: retrieve, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and determine information defining any decoding relationship between said multiple layers based on said at least one direct decoding flag. 36. 
A decoder device configured to decode a coded multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said decoder comprising: a decoding relationship information retriever configured to retrieve, for a layer with a layer index of said multiple layers, decoding relationship information based on said coded multi-layer video stream, said decoding relationship information defining a respective layer index of any reference layer of said multiple layers, on which said layer directly depends; an index-to-identifier mapping unit configured to map, for each reference layer and for said layer, its layer index to a layer identifier based on mapping information of a hierarchical mapping relationship between layer identifiers and layer indices, said mapping information is retrieved based on said coded multi-layer video stream; and a decoding unit configured to decode a picture of said layer based on at least one previously decoded picture in a layer of said multiple layers identified based on said layer identifiers mapped from layer indices. 37. The decoder device of claim 36, wherein said decoding relationship information retriever comprises: a flag retriever configured to retrieve, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and an information determiner configured to determine said decoding relationship information based on said at least one direct decoding flag. 38. 
The decoder device of claim 36, wherein said index-to-identifier mapping unit is configured to a) retrieve a flag vps_nuh_layer_id_present_flag based on said coded multi-layer video stream; b) set, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=0, its layer identifier equal to its layer index; and c) retrieve, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=1, its layer identifier from a vector layer_id_in_nuh[i], i∈[1, vps_max_layers_minus1], wherein vps_max_layers_minus1+1 indicates a maximum number of layers and layer_id_in_nuh[i] indicates a layer identifier for a layer with layer index i. 39. A decoder device configured to decode a coded multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said decoder comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: retrieve, for a layer with a layer index of said multiple layers, decoding relationship information based on said coded multi-layer video stream, said decoding relationship information defining a respective layer index of any reference layer of said multiple layers, on which said layer directly depends; map, for each reference layer and for said layer, its layer index to a layer identifier based on mapping information of a hierarchical mapping relationship between layer identifiers and layer indices, said mapping information is retrieved based on said coded multi-layer video stream; and decode a picture of said layer based on at least one previously decoded picture in a layer of said multiple layers identified based on said layer identifiers mapped from layer indices. 40. 
A method of determining a coding relationship for a multi-layer video stream defining multiple layers of pictures, said method comprising: determining any coding relationship between said multiple layers; determining, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and associating said at least one direct decoding flag with a coded representation of said multi-layer video stream. 41. A method of encoding a multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said method comprising: hierarchically mapping, for each layer of said multiple layers, a layer identifier of said layer to a layer index based on coding dependencies between said multiple layers; determining coding relationship information defining a respective layer index of any reference layer of said multiple layers, on which a layer of said multiple layers directly depends; generating a coded multi-layer video stream by encoding said pictures of said multiple layers based on said coding dependencies; and associating said coding relationship information with said coded multi-layer video stream. 42. 
The method of claim 41, wherein determining said coding relationship information comprises: determining any coding relationship between said multiple layers and determining, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and associating said coding relationship information comprises associating said at least one direct decoding flag with said coded multi-layer video stream. 43. A device for determining a coding relationship for a multi-layer video stream defining multiple layers of pictures, said device comprising: a relationship determiner configured to determine any coding relationship between said multiple layers; a flag determiner configured to determine, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and an associating unit configured to associate said at least one direct decoding flag with a coded representation of said multi-layer video stream. 44. 
A device for determining a coding relationship for a multi-layer video stream defining multiple layers of pictures, said device comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: determine any coding relationship between said multiple layers; determine, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and associate said at least one direct decoding flag with a coded representation of said multi-layer video stream. 45. An encoder device configured to digitally encode a multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said encoder device comprising: a mapping unit configured to hierarchically map, for each layer of said multiple layers, a layer identifier of said layer to a layer index based on coding dependencies between said multiple layers; an information determiner configured to determine coding relationship information defining a respective layer index of any reference layer of said multiple layers, on which a layer of said multiple layers directly depends; an encoding unit configured to generate a coded multi-layer video stream by encoding said pictures of said multiple layers based on said coding dependencies; and an associating unit configured to associate said coding relationship information with said coded multi-layer video stream. 46. 
The encoder device of claim 45, wherein said information determiner comprises: a relationship determiner configured to determine any coding relationship between said multiple layers; and a flag determiner configured to determine, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and said associating unit is configured to associate said at least one direct decoding flag with said coded multi-layer video stream. 47. An encoder device configured to encode a multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said encoder comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: hierarchically map, for each layer of said multiple layers, a layer identifier of said layer to a layer index based on coding dependencies between said multiple layers; determine coding relationship information defining a respective layer index of any reference layer of said multiple layers, on which a layer of said multiple layers directly depends; generate a coded multi-layer video stream by encoding said pictures of said multiple layers based on said coding dependencies; and associate said coding relationship information with said coded multi-layer video stream. 48. A user device comprising the decoder device of claim 36. 49. A user device comprising the device of claim 43. 50. A network node comprising the decoder device of claim 36.
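Two concrete derivations run through these claims: collecting, for each layer index i, every direct reference layer j < i signalled by a direct dependency flag (claims 26-27), and mapping a layer index to a layer identifier depending on vps_nuh_layer_id_present_flag (claims 31, 38). An illustrative sketch of both, not the normative HEVC derivation process, with function names assumed for illustration:

```python
from typing import Dict, List

# Illustrative sketch of the claimed signalling, not the normative spec text.

def direct_reference_layers(direct_dependency_flag: List[List[bool]]) -> Dict[int, List[int]]:
    """For each layer index i, collect every j < i whose direct dependency
    flag is set, i.e. the layers that layer i directly depends on
    (claims 26-27)."""
    refs: Dict[int, List[int]] = {}
    for i, row in enumerate(direct_dependency_flag):
        refs[i] = [j for j in range(i) if row[j]]
    return refs

def layer_index_to_id(index: int,
                      vps_nuh_layer_id_present_flag: int,
                      layer_id_in_nuh: List[int]) -> int:
    """Map a layer index to its layer identifier (claims 31, 38): equal to
    the index when the present flag is 0, otherwise read from the signalled
    layer_id_in_nuh vector."""
    if vps_nuh_layer_id_present_flag == 0:
        return index
    return layer_id_in_nuh[index]
```

The hierarchical mapping lets the dependency structure be signalled compactly over small layer indices while the stream itself carries arbitrary nuh_layer_id values; the decoder resolves indices to identifiers only when selecting previously decoded reference pictures.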
Multi-layer dependencies are signaled in an efficient way for a multi-view video stream (1). Coding or decoding relationship information defining coding or decoding dependencies is represented in bit-efficient syntax code, preferably through usage of hierarchical layer dependencies using layer indices for representing layer dependencies.1-25. (canceled) 26. A method, in a processing circuit, of determining a decoding relationship for a digitally coded multi-layer video stream defining multiple layers of pictures, said method comprising: retrieving, based on said digitally coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and determining information defining any decoding relationship between said multiple layers based on said at least one direct decoding flag. 27. The method of claim 26, wherein retrieving said at least one direct decoding flag comprises retrieving, for said layer with layer index i, a respective direct dependency flag for each layer index j based on said coded multi-layer video stream, wherein j<i and said direct dependency flag indicates whether said layer with layer index j is a direct reference layer for said layer with layer index i; and wherein determining said information comprises determining information defining any layer with layer index j<i which said layer with layer index i depends on, based on said direct dependency flags. 28. The method of claim 26, wherein retrieving said at least one direct decoding flag comprises retrieving, from a video parameter set or video parameter set extension associated with said coded multi-layer video stream, said at least one direct decoding flag indicating said direct coding relationship between said layer with layer index i and said layer with layer index j. 29. 
A method, in a processing circuit, of decoding a digitally coded multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said method comprising: retrieving, for a layer with a layer index of said multiple layers, decoding relationship information based on said digitally coded multi-layer video stream, said decoding relationship information defining a respective layer index of any reference layer of said multiple layers on which said layer directly depends; mapping, for each reference layer and for said layer, its layer index to a layer identifier based on mapping information of a hierarchical mapping relationship between layer identifiers and layer indices, wherein said mapping information is retrieved based on said digitally coded multi-layer video stream; and decoding a picture of said layer based on at least one previously decoded picture in a layer of said multiple layers identified based on said layer identifiers mapped from layer indices. 30. The method of claim 29, wherein retrieving decoding relationship information comprises: retrieving, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and determining said decoding relationship information based on said at least one direct decoding flag. 31. 
The method of claim 29, wherein mapping its layer index to a layer identifier comprises: retrieving a flag vps_nuh_layer_id_present_flag based on said coded multi-layer video stream; setting, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=0, its layer identifier equal to its layer index; and retrieving, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=1, its layer identifier from a vector layer_id_in_nuh[i], i∈[1, vps_max_layers_minus1], wherein vps_max_layers_minus1+1 indicates a maximum number of layers and layer_id_in_nuh[i] indicates a layer identifier for a layer with layer index i. 32. A device for determining a decoding relationship for a digitally coded multi-layer video stream defining multiple layers of pictures, said device comprising: a flag retriever configured to retrieve, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and an information determiner configured to determine information defining any decoding relationship between said multiple layers based on said at least one direct decoding flag. 33. The device of claim 32, wherein said flag retriever is configured to retrieve, for said layer with layer index i, a respective direct dependency flag for each layer index j based on said coded multi-layer video stream, wherein j is less than i and wherein said direct dependency flag indicates whether said layer with layer index j is a direct reference layer for said layer with layer index i; and wherein said information determiner is configured to determine information defining any layer with layer index j less than i which said layer with layer index i depends on, based on said direct dependency flags. 34. 
The device of claim 32, wherein said flag retriever is configured to retrieve, from a video parameter set or video parameter set extension associated with said coded multi-layer video stream, said at least one direct decoding flag indicating said direct coding relationship between said layer with layer index i and said layer with layer index j. 35. A device for determining a decoding relationship for a digitally coded multi-layer video stream defining multiple layers of pictures, said device comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: retrieve, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and determine information defining any decoding relationship between said multiple layers based on said at least one direct decoding flag. 36. 
A decoder device configured to decode a coded multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said decoder comprising: a decoding relationship information retriever configured to retrieve, for a layer with a layer index of said multiple layers, decoding relationship information based on said coded multi-layer video stream, said decoding relationship information defining a respective layer index of any reference layer of said multiple layers, on which said layer directly depends; an index-to-identifier mapping unit configured to map, for each reference layer and for said layer, its layer index to a layer identifier based on mapping information of a hierarchical mapping relationship between layer identifiers and layer indices, said mapping information is retrieved based on said coded multi-layer video stream; and a decoding unit configured to decode a picture of said layer based on at least one previously decoded picture in a layer of said multiple layers identified based on said layer identifiers mapped from layer indices. 37. The decoder device of claim 36, wherein said decoding relationship information retriever comprises: a flag retriever configured to retrieve, based on said coded multi-layer video stream, at least one direct decoding flag indicating a direct coding relationship between a layer with layer index i of said multiple layers and a layer with layer index j of said multiple layers, where i is not equal to j; and an information determiner configured to determine said decoding relationship information based on said at least one direct decoding flag. 38. 
The decoder device of claim 36, wherein said index-to-identifier mapping unit is configured to a) retrieve a flag vps_nuh_layer_id_present_flag based on said coded multi-layer video stream; b) set, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=0, its layer identifier equal to its layer index; and c) retrieve, for each reference layer and for said layer and if vps_nuh_layer_id_present_flag=1, its layer identifier from a vector layer_id_in_nuh[i], i∈[1, vps_max_layers_minus1], wherein vps_max_layers_minus1+1 indicates a maximum number of layers and layer_id_in_nuh[i] indicates a layer identifier for a layer with layer index i. 39. A decoder device configured to decode a coded multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said decoder comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: retrieve, for a layer with a layer index of said multiple layers, decoding relationship information based on said coded multi-layer video stream, said decoding relationship information defining a respective layer index of any reference layer of said multiple layers, on which said layer directly depends; map, for each reference layer and for said layer, its layer index to a layer identifier based on mapping information of a hierarchical mapping relationship between layer identifiers and layer indices, said mapping information is retrieved based on said coded multi-layer video stream; and decode a picture of said layer based on at least one previously decoded picture in a layer of said multiple layers identified based on said layer identifiers mapped from layer indices. 40. 
A method of determining a coding relationship for a multi-layer video stream defining multiple layers of pictures, said method comprising: determining any coding relationship between said multiple layers; determining, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and associating said at least one direct decoding flag with a coded representation of said multi-layer video stream. 41. A method of encoding a multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said method comprising: hierarchically mapping, for each layer of said multiple layers, a layer identifier of said layer to a layer index based on coding dependencies between said multiple layers; determining coding relationship information defining a respective layer index of any reference layer of said multiple layers, on which a layer of said multiple layers directly depends; generating a coded multi-layer video stream by encoding said pictures of said multiple layers based on said coding dependencies; and associating said coding relationship information with said coded multi-layer video stream. 42. 
The method of claim 41, wherein determining said coding relationship information comprises: determining any coding relationship between said multiple layers and determining, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and associating said coding relationship information comprises associating said at least one direct decoding flag with said coded multi-layer video stream. 43. A device for determining a coding relationship for a multi-layer video stream defining multiple layers of pictures, said device comprising: a relationship determiner configured to determine any coding relationship between said multiple layers; a flag determiner configured to determine, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and an associating unit configured to associate said at least one direct decoding flag with a coded representation of said multi-layer video stream. 44. 
A device for determining a coding relationship for a multi-layer video stream defining multiple layers of pictures, said device comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: determine any coding relationship between said multiple layers; determine, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and associate said at least one direct decoding flag with a coded representation of said multi-layer video stream. 45. An encoder device configured to digitally encode a multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said encoder device comprising: a mapping unit configured to hierarchically map, for each layer of said multiple layers, a layer identifier of said layer to a layer index based on coding dependencies between said multiple layers; an information determiner configured to determine coding relationship information defining a respective layer index of any reference layer of said multiple layers, on which a layer of said multiple layers directly depends; an encoding unit configured to generate a coded multi-layer video stream by encoding said pictures of said multiple layers based on said coding dependencies; and an associating unit configured to associate said coding relationship information with said coded multi-layer video stream. 46. 
The encoder device of claim 45, wherein said information determiner comprises: a relationship determiner configured to determine any coding relationship between said multiple layers; and a flag determiner configured to determine, for a layer with layer index i of said multiple layers and based on said coding relationship, at least one direct decoding flag indicating a direct coding relationship between said layer with layer index i and a layer with layer index j of said multiple layers, where i is not equal to j; and said associating unit is configured to associate said at least one direct decoding flag with said coded multi-layer video stream. 47. An encoder device configured to encode a multi-layer video stream defining multiple layers of pictures, each layer of said multiple layers having a respective layer identifier, said encoder comprising a non-transitory computer readable medium and an associated processor configured to process computer program instructions stored in the computer readable medium, wherein the stored computer program instructions, when run on said processor, are configured to cause the processor to: hierarchically map, for each layer of said multiple layers, a layer identifier of said layer to a layer index based on coding dependencies between said multiple layers; determine coding relationship information defining a respective layer index of any reference layer of said multiple layers, on which a layer of said multiple layers directly depends; generate a coded multi-layer video stream by encoding said pictures of said multiple layers based on said coding dependencies; and associate said coding relationship information with said coded multi-layer video stream. 48. A user device comprising the decoder device of claim 36. 49. A user device comprising the device of claim 43. 50. A network node comprising the decoder device of claim 36.
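The index-to-identifier mapping in claim 38 reduces to a small conditional: when vps_nuh_layer_id_present_flag is 0, a layer's identifier equals its layer index; otherwise the identifier is read from the layer_id_in_nuh vector. A minimal Python sketch of that rule (the function name and the convention that layer index 0 always maps to identifier 0 are assumptions for illustration, not stated in the claims):

```python
def map_layer_indices_to_ids(vps_nuh_layer_id_present_flag,
                             vps_max_layers_minus1,
                             layer_id_in_nuh=None):
    """Map each layer index to a layer identifier per claim 38.

    flag == 0: identifier equals index.
    flag == 1: identifier taken from layer_id_in_nuh[i] for
               i in [1, vps_max_layers_minus1].
    Index 0 -> identifier 0 is an assumed base-layer convention.
    """
    mapping = {0: 0}  # assumed: base layer keeps identifier 0
    for i in range(1, vps_max_layers_minus1 + 1):
        if vps_nuh_layer_id_present_flag == 0:
            mapping[i] = i
        else:
            mapping[i] = layer_id_in_nuh[i]
    return mapping
```

With the flag set, arbitrary identifiers can be signalled while the indices stay dense, which is what lets the decoding-relationship information refer to compact layer indices.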
2,400
7,415
7,415
14,450,676
2,458
Methods and systems may provide for joining an overlay network of a plurality of peer devices and identifying a local preference for an area service available to the plurality of peer devices. Additionally, the local preference may be used to negotiate a common preference for the area service with the plurality of peer devices. In one example, the common preference is a best fit value for the plurality of peer devices on the overlay network.
1. A computer program product to negotiate preferences, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a local peer device to cause the local peer device to: join an overlay network of a plurality of peer devices; obtain a local preference for an area service available to the plurality of peer devices from one or more of a user interface of the local peer device or a profile stored on the local peer device; use the local preference to negotiate a common preference for the area service with the plurality of peer devices, wherein the common preference is to be a best fit value for the plurality of peer devices on the overlay network; communicate the common preference to an infrastructure component that provides the area service; and re-negotiate the common preference in response to one or more of an additional peer device joining the overlay network, a remaining peer device leaving the overlay network or an expiration of a periodic timer. 2. The computer program product of claim 1, wherein the program instructions are executable to cause the local peer device to send the local preference from the local peer device to one or more remaining peer devices on the overlay network. 3. The computer program product of claim 1, wherein the program instructions are executable to cause the local peer device to: receive one or more remote preferences from one or more remaining peer devices on the overlay network; and compare the one or more remote preferences to the local preference. 4. The computer program product of claim 1, wherein the local preference and the common preference are to include one or more of a crosswalk timer setting, a public safety staffing level, a temperature setting, a lighting setting, a music genre setting or a listening volume setting. 5-12. (canceled) 13. 
A computer program product to negotiate preferences, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a local peer device to cause the local peer device to: join an overlay network of a plurality of peer devices; identify a local preference for an area service available to the plurality of peer devices; and use the local preference to negotiate a common preference for the area service with the plurality of peer devices. 14. The computer program product of claim 13, wherein the common preference is to be a best fit value for the plurality of peer devices on the overlay network. 15. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to send the local preference from the local peer device to one or more remaining peer devices on the overlay network. 16. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to: receive one or more remote preferences from one or more remaining peer devices on the overlay network; and compare the one or more remote preferences to the local preference. 17. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to obtain the local preference from one or more of a user interface of the local peer device or a profile stored on the local peer device. 18. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to communicate the common preference to an infrastructure component that provides the area service. 19.
The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to re-negotiate the common preference in response to one or more of an additional peer device joining the overlay network, a remaining peer device leaving the overlay network or an expiration of a periodic timer. 20. The computer program product of claim 13, wherein the local preference and the common preference are to include one or more of a crosswalk timer setting, a public safety staffing level, a temperature setting, a lighting setting, a music genre setting or a listening volume setting.
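The negotiation in claim 1 combines the local preference with the remote preferences received from the remaining peers into a single "best fit" value. The claims leave "best fit" unspecified; the sketch below assumes the median as one plausible choice for numeric preferences such as a temperature or listening-volume setting, since it resists outlier preferences (function name and aggregation rule are illustrative, not from the source):

```python
import statistics

def negotiate_common_preference(local_preference, remote_preferences):
    """Return a 'best fit' common preference for the overlay network.

    Assumed aggregation: the median of the local preference and all
    remote preferences received from the remaining peer devices.
    """
    all_prefs = [local_preference] + list(remote_preferences)
    return statistics.median(all_prefs)
```

Per claim 19, this function would be re-run whenever a peer joins or leaves the overlay network or a periodic timer expires, and the result communicated to the infrastructure component that provides the area service.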
Methods and systems may provide for joining an overlay network of a plurality of peer devices and identifying a local preference for an area service available to the plurality of peer devices. Additionally, the local preference may be used to negotiate a common preference for the area service with the plurality of peer devices. In one example, the common preference is a best fit value for the plurality of peer devices on the overlay network.1. A computer program product to negotiate preferences, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a local peer device to cause the local peer device to: join an overlay network of a plurality of peer devices; obtain a local preference for an area service available to the plurality of peer devices from one or more of a user interface of the local peer device or a profile stored on the local peer device; use the local preference to negotiate a common preference for the area service with the plurality of peer devices, wherein the common preference is to be a best fit value for the plurality of peer devices on the overlay network; communicate the common preference to an infrastructure component that provides the area service; and re-negotiate the common preference in response to one or more of an additional peer device joining the overlay network, a remaining peer device leaving the overlay network or an expiration of a periodic timer. 2. The computer program product of claim 1, wherein the program instructions are executable to cause the local peer device to send the local preference from the local peer device to one or more remaining peer devices on the overlay network. 3. 
The computer program product of claim 1, wherein the program instructions are executable to cause the local peer device to: receive one or more remote preferences from one or more remaining peer devices on the overlay network; and compare the one or more remote preferences to the local preference. 4. The computer program product of claim 1, wherein the local preference and the common preference are to include one or more of a crosswalk timer setting, a public safety staffing level, a temperature setting, a lighting setting, a music genre setting or a listening volume setting. 5-12. (canceled) 13. A computer program product to negotiate preferences, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a local peer device to cause the local peer device to: join an overlay network of a plurality of peer devices; identify a local preference for an area service available to the plurality of peer devices; and use the local preference to negotiate a common preference for the area services with the plurality of peer devices. 14. The computer program product of claim 13, wherein the common preference is to be a best fit value for the plurality of peer devices on the overlay network. 15. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to send the local preference from the local peer device to one or more remaining peer devices on the overlay network. 16. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to: receive one or more remote preferences from one or more remaining peer devices on the overlay network; and compare the one or more remote preferences to the local preference. 17. 
The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to obtain the local preference from one or more of a user interface of the local peer device or a profile stored on the local peer device. 18. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to communicate the common preference to an infrastructure component that provides the area service. 19. The computer program product of claim 13, wherein the program instructions are executable to cause the local peer device to re-negotiate the common preference in response to one or more of an additional peer device joining the overlay network, a remaining peer device leaving the overlay network or an expiration of a periodic timer. 20. The computer program product of claim 13, wherein the local preference and the common preference are to include one or more of a crosswalk timer setting, a public safety staffing level, a temperature setting, a lighting setting, a music genre setting or a listening volume setting.
2,400
7,416
7,416
13,724,405
2,442
A method and system for implementing domain name services (DNS) is described. In one aspect a query from a user device for access to a particular resource record may be received and forwarded to an authoritative DNS device. A reply to the query may be received from the authoritative DNS device. Information of the reply also may be distributed to other DNS devices.
1. A method comprising: receiving, at a first domain name services (DNS) device of a plurality of DNS devices, a reply to a query from a user device for access to a particular resource record; caching, at the first DNS device, the reply; and distributing, from the first DNS device to at least one other DNS device in the plurality of DNS devices, information of the reply to the query. 2. The method of claim 1, further comprising receiving, at the first DNS device, the query from the user device for access to the particular resource record. 3. The method of claim 2, further comprising forwarding, by the first DNS device, the query to an authoritative DNS server. 4. The method of claim 2, wherein the receiving the query from the user device for access to the particular resource record includes receiving, from a load balancer, the query from the user device. 5. The method of claim 1, wherein the caching is based on a time to live value associated with the particular resource record. 6. The method of claim 1, wherein the particular resource record is a website, wherein the query includes a request to resolve the website into an internet protocol address, and wherein the reply is the internet protocol address. 7. The method of claim 1, wherein the distributing includes distributing from the first DNS device to all other DNS devices in the plurality of DNS devices, the information of the reply to the query. 8. The method of claim 1, wherein the distributing includes: creating, by the first DNS device, transaction signature key information for authentication of communication between DNS devices in the plurality of DNS devices; and distributing, from the first DNS device to at least one other DNS device in the plurality of DNS devices, the created transaction signature key information. 9. The method of claim 1, further comprising forwarding the reply to the user device. 10.
A method comprising: receiving at a first DNS device of a plurality of DNS devices from a second DNS device of the plurality of DNS devices, information of a reply to a first query from a first user device for access to a particular resource record of an authoritative DNS device; and caching, at the first DNS device, the information of the reply. 11. The method of claim 10, further comprising receiving, at the first DNS device, a second query from a second user device for access to the particular resource record. 12. The method of claim 11, wherein the receiving the second query from the second user device for access to the particular resource record includes receiving, from a load balancer, the second query from the second user device. 13. The method of claim 11, further comprising forwarding the information of the reply to the second user device. 14. The method of claim 10, further comprising distributing, from the first DNS device to at least one third DNS device in the plurality of DNS devices, the information of the reply to the first query. 15. The method of claim 10, further comprising receiving, at the first DNS device from the second DNS device, transaction signature key information for authentication of communication between DNS devices in the plurality of DNS devices. 16. A system comprising: a first domain name services (DNS) server and at least one second DNS server each configured to communicate with the other DNS server; the first DNS server configured to: receive, from an authoritative DNS server, a reply to a query from a user device for access to a particular resource record, cache the reply, and distribute, from the first DNS server to the at least one second DNS server, information of the reply to the query; and the at least one second DNS server configured to: receive, from the first caching DNS server, the information of the reply, and cache the information of the reply. 17. 
The system of claim 16, the first DNS server further configured to: receive the query from the user device for access to the particular resource record, and forward the query to the authoritative DNS server. 18. The system of claim 16, wherein the distribute includes: create transaction signature key information for authentication of communication between DNS servers, and distribute, from the first DNS server to the at least one other DNS server, the created transaction signature key information. 19. The system of claim 16, the at least one other DNS server further configured to: receive a second query from a second user device for access to the particular resource record, and forwarding the information of the reply to the second user device. 20. The system of claim 16, the at least one other DNS server further configured to distribute, from the at least one second DNS server to at least one third DNS server, the information of the reply to the first query.
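The core of claims 1, 5, and 7 is a caching DNS device that stores a reply subject to the record's time-to-live and pushes the reply to the other DNS devices in the plurality so they can answer later queries locally. A minimal sketch under those claims (class and method names are illustrative, and the TSIG key distribution of claim 8 is omitted):

```python
import time

class CachingDNSDevice:
    """Sketch of one DNS device in the claimed plurality."""

    def __init__(self, peers=None):
        self.cache = {}           # name -> (ip, expiry timestamp)
        self.peers = peers or []  # the other DNS devices

    def receive_reply(self, name, ip, ttl, distribute=True):
        # Cache subject to the record's time-to-live (claim 5).
        self.cache[name] = (ip, time.monotonic() + ttl)
        if distribute:
            # Distribute the reply to all other DNS devices (claim 7);
            # peers cache it without re-distributing.
            for peer in self.peers:
                peer.receive_reply(name, ip, ttl, distribute=False)

    def resolve(self, name):
        entry = self.cache.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]  # cache hit within TTL
        return None          # miss: would forward to the authoritative server
```

Any device in the plurality can then serve a second user's query for the same resource record (claims 10-13) without contacting the authoritative DNS server again.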
A method and system for implementing domain name services (DNS) is described. In one aspect a query from a user device for access to a particular resource record may be received and forwarded to an authoritative DNS device. A reply to the query may be received from the authoritative DNS device. Information of the reply also may be distributed to other DNS devices.1. A method comprising: receiving, at a first domain name services (DNS) device of a plurality of DNS devices, a reply to a query from a user device for access to a particular resource record; caching, at the first caching DNS server, the reply; and distributing from the first DNS server to at least one other DNS device in the plurality of DNS devices, information of the reply to the query. 2. The method of claim 1, further comprising receiving, at the first DNS device, the query from the user device for access to the particular resource record. 3. The method of claim 2, further comprising forwarding, by the first DNS device, the query to an authoritative DNS server. 4. The method of claim 2, wherein the receiving the query from the user device for access to the particular resource record includes receiving, from a load balancer, the query from the user device. 5. The method of claim 1, wherein the caching is based on a time to live value associated with the particular resource record. 6. The method of claim 1, wherein the particular resource record is a website, wherein the query includes a request to resolve the website into an internet protocol address, and wherein the reply is the internet protocol address. 7. The method of claim 1, wherein the distributing includes distributing from the first DNS device to all other DNS devices in the plurality of DNS devices, the information of the reply to the query. 8. 
The method of claim 1, wherein the distributing includes: creating, by the first DNS device, transaction signature key information for authentication of communication between DNS devices in the plurality of DNS devices; and distributing, from the first DNS device to at least one other DNS device in the plurality of DNS devices, the created transaction signature key information. 9. The method of claim 1, further comprising forwarding the reply to the user device. 10. A method comprising: receiving at a first DNS device of a plurality of DNS devices from a second DNS device of the plurality of DNS devices, information of a reply to a first query from a first user device for access to a particular resource record of an authoritative DNS device; and caching, at the first DNS device, the information of the reply. 11. The method of claim 10, further comprising receiving, at the first DNS device, a second query from a second user device for access to the particular resource record. 12. The method of claim 11, wherein the receiving the second query from the second user device for access to the particular resource record includes receiving, from a load balancer, the second query from the second user device. 13. The method of claim 11, further comprising forwarding the information of the reply to the second user device. 14. The method of claim 10, further comprising distributing, from the first DNS device to at least one third DNS device in the plurality of DNS devices, the information of the reply to the first query. 15. The method of claim 10, further comprising receiving, at the first DNS device from the second DNS device, transaction signature key information for authentication of communication between DNS devices in the plurality of DNS devices. 16. 
A system comprising: a first domain name services (DNS) server and at least one second DNS server each configured to communicate with the other DNS server; the first DNS server configured to: receive, from an authoritative DNS server, a reply to a query from a user device for access to a particular resource record, cache the reply, and distribute, from the first DNS server to the at least one second DNS server, information of the reply to the query; and the at least one second DNS server configured to: receive, from the first caching DNS server, the information of the reply, and cache the information of the reply. 17. The system of claim 16, the first DNS server further configured to: receive the query from the user device for access to the particular resource record, and forward the query to the authoritative DNS server. 18. The system of claim 16, wherein the distribute includes: create transaction signature key information for authentication of communication between DNS servers, and distribute, from the first DNS server to the at least one other DNS server, the created transaction signature key information. 19. The system of claim 16, the at least one other DNS server further configured to: receive a second query from a second user device for access to the particular resource record, and forwarding the information of the reply to the second user device. 20. The system of claim 16, the at least one other DNS server further configured to distribute, from the at least one second DNS server to at least one third DNS server, the information of the reply to the first query.
2,400
7,417
7,417
13,206,886
2,473
Apparatuses and methods for transmitting and receiving signals in a mobile communication system are provided. A method for transmitting a signal by an evolved Node B (eNB) in a mobile communication system includes transmitting a same control channel signal to each of a plurality of Radio Units (RUs), and transmitting a different data channel signal to each of the plurality of RUs. A data channel signal transmitted to each of the plurality of RUs may be determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing.
1. A method for transmitting a signal by an evolved Node B (eNB) in a mobile communication system, the method comprising: transmitting a same control channel signal to each of a plurality of Radio Units (RUs); and transmitting a different data channel signal to each of the plurality of RUs, wherein a data channel signal transmitted to each of the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 2. The method of claim 1, further comprising: transmitting a control channel reference signal to each of the plurality of RUs; and transmitting a data channel reference signal to each of the plurality of RUs. 3. An evolved Node B (eNB) in a mobile communication system, the eNB comprising: a digital unit for transmitting a same control channel signal to each of a plurality of Radio Units (RUs), and for transmitting a different data channel signal to each of the plurality of RUs, wherein a data channel signal transmitted to each of the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 4. The eNB of claim 3, wherein the digital unit comprises: a control channel manager for generating a control channel signal to be equally transmitted to each of the plurality of RUs; a data channel manager for generating a different data channel signal to be transmitted to each of the plurality of RUs; and a multiplexing and connection unit for multiplexing the control channel signal and the data channel signal, and for transmitting the multiplexed signal to the plurality of RUs. 5. 
The eNB of claim 4, wherein the control channel manager further generates a control channel reference signal to be transmitted to the plurality of RUs, wherein the data channel manager further generates a data channel reference signal to be transmitted to the plurality of RUs, and wherein the multiplexing and connection unit multiplexes the control channel signal and the data channel signal with the control channel reference signal and the data channel reference signal, and transmits the multiplexed signals to the plurality of RUs. 6. A method for transmitting and receiving a signal by a Radio Unit (RU) in a mobile communication system, the method comprising: receiving a control channel signal and a data channel signal from an evolved Node B (eNB), wherein the control channel signal is equal to control channel signals that the eNB transmits to a plurality of RUs except for the RU, wherein the data channel signal is different from data channel signals that the eNB transmits to the plurality of RUs except for the RU, and wherein a data channel signal transmitted to each of the RU and the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 7. The method of claim 6, further comprising receiving a control channel reference signal and a data channel reference signal from the eNB. 8. The method of claim 7, further comprising transmitting at least one of the control channel reference signal and the data channel reference signal to the UE. 9. 
A Radio Unit (RU) in a mobile communication system, the RU comprising: a receiver for receiving a control channel signal and a data channel signal from an evolved Node B (eNB), wherein the control channel signal is equal to control channel signals that the eNB transmits to a plurality of RUs except for the RU, wherein the data channel signal is different from data channel signals that the eNB transmits to the plurality of RUs except for the RU, and wherein a data channel signal transmitted to each of the RU and the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 10. The RU of claim 9, wherein the receiver receives a control channel reference signal and a data channel reference signal from the eNB. 11. The RU of claim 10, further comprising a transmitter for transmitting at least one of the control channel reference signal and the data channel reference signal to the UE. 12. A method for receiving a signal by a User Equipment (UE) in a mobile communication system, the method comprising: receiving a control channel signal and a data channel signal from each of a plurality of Radio Units (RUs), wherein a control channel signal received from each of the plurality of RUs is equal, wherein a data channel signal received from each of the plurality of RUs is different, and wherein a data channel signal received from each of the plurality of RUs is determined taking into account at least one of a location of the UE, and load balancing. 13. The method of claim 12, further comprising: receiving a control channel reference signal from each of the plurality of RUs; and estimating the data channel signal based on the control channel reference signal and beamforming weight information. 14. 
The method of claim 12, further comprising: receiving a data channel reference signal from each of the plurality of RUs; and estimating the data channel signal based on the data channel reference signal. 15. A User Equipment (UE) in a mobile communication system, the UE comprising: a receiver for receiving a control channel signal and a data channel signal from each of a plurality of Radio Units (RUs), wherein a control channel signal received from each of the plurality of RUs is equal, wherein a data channel signal received from each of the plurality of RUs is different, and wherein a data channel signal received from each of the plurality of RUs is determined taking into account at least one of a location of the UE, and load balancing. 16. The UE of claim 15, further comprising an estimator; wherein the receiver receives a control channel reference signal from each of the plurality of RUs, and wherein the estimator estimates the data channel signal based on the control channel reference signal and beamforming weight information. 17. The UE of claim 15, further comprising an estimator: wherein the receiver receives a data channel reference signal from each of the plurality of RUs, and wherein the estimator estimates the data channel signal based on the data channel reference signal.
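The transmission scheme in claims 1 and 3 sends an identical control channel signal to every RU while routing each UE's data channel signal to a particular RU chosen from the UE's location and load balancing. The sketch below assumes one plausible selection rule (nearest RU, ties broken by current load) on simplified one-dimensional positions; the data structures and the rule itself are illustrative, not specified in the claims:

```python
def assign_data_channels(rus, ues, control_signal):
    """Build a per-RU transmission plan.

    rus: {ru_id: position}
    ues: {ue_id: (position, data_signal)}
    Every RU receives the same control channel signal; each UE's data
    channel signal goes to the nearest RU (location), with ties broken
    by the RU carrying the fewest UEs so far (load balancing).
    """
    load = {ru: 0 for ru in rus}
    plan = {ru: {"control": control_signal, "data": []} for ru in rus}
    for ue, (pos, signal) in ues.items():
        best = min(rus, key=lambda r: (abs(rus[r] - pos), load[r]))
        plan[best]["data"].append((ue, signal))
        load[best] += 1
    return plan
```

From the UE side (claims 12-17), the equal control channel signals combine across RUs, while the data channel signal is estimated from either the control channel reference signal plus beamforming weights or a dedicated data channel reference signal.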
Apparatuses and methods for transmitting and receiving signals in a mobile communication system are provided. A method for transmitting a signal by an evolved Node B (eNB) in a mobile communication system includes transmitting a same control channel signal to each of a plurality of Radio Units (RUs), and transmitting a different data channel signal to each of the plurality of RUs. A data channel signal transmitted to each of the plurality of RUs may be determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing.1. A method for transmitting a signal by an evolved Node B (eNB) in a mobile communication system, the method comprising: transmitting a same control channel signal to each of a plurality of Radio Units (RUs); and transmitting a different data channel signal to each of the plurality of RUs, wherein a data channel signal transmitted to each of the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 2. The method of claim 1, further comprising: transmitting a control channel reference signal to each of the plurality of RUs; and transmitting a data channel reference signal to each of the plurality of RUs. 3. An evolved Node B (eNB) in a mobile communication system, the eNB comprising: a digital unit for transmitting a same control channel signal to each of a plurality of Radio Units (RUs), and for transmitting a different data channel signal to each of the plurality of RUs, wherein a data channel signal transmitted to each of the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 4. 
The eNB of claim 3, wherein the digital unit comprises: a control channel manager for generating a control channel signal to be equally transmitted to each of the plurality of RUs; a data channel manager for generating a different data channel signal to be transmitted to each of the plurality of RUs; and a multiplexing and connection unit for multiplexing the control channel signal and the data channel signal, and for transmitting the multiplexed signal to the plurality of RUs. 5. The eNB of claim 4, wherein the control channel manager further generates a control channel reference signal to be transmitted to the plurality of RUs, wherein the data channel manager further generates a data channel reference signal to be transmitted to the plurality of RUs, and wherein the multiplexing and connection unit multiplexes the control channel signal and the data channel signal with the control channel reference signal and the data channel reference signal, and transmits the multiplexed signals to the plurality of RUs. 6. A method for transmitting and receiving a signal by a Radio Unit (RU) in a mobile communication system, the method comprising: receiving a control channel signal and a data channel signal from an evolved Node B (eNB), wherein the control channel signal is equal to control channel signals that the eNB transmits to a plurality of RUs except for the RU, wherein the data channel signal is different from data channel signals that the eNB transmits to the plurality of RUs except for the RU, and wherein a data channel signal transmitted to each of the RU and the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 7. The method of claim 6, further comprising receiving a control channel reference signal and a data channel reference signal from the eNB. 8. 
The method of claim 7, further comprising transmitting at least one of the control channel reference signal and the data channel reference signal to the UE. 9. A Radio Unit (RU) in a mobile communication system, the RU comprising: a receiver for receiving a control channel signal and a data channel signal from an evolved Node B (eNB), wherein the control channel signal is equal to control channel signals that the eNB transmits to a plurality of RUs except for the RU, wherein the data channel signal is different from data channel signals that the eNB transmits to the plurality of RUs except for the RU, and wherein a data channel signal transmitted to each of the RU and the plurality of RUs is determined taking into account at least one of a location of a User Equipment (UE) that will receive the data channel signal, and load balancing. 10. The RU of claim 9, wherein the receiver receives a control channel reference signal and a data channel reference signal from the eNB. 11. The RU of claim 10, further comprising a transmitter for transmitting at least one of the control channel reference signal and the data channel reference signal to the UE. 12. A method for receiving a signal by a User Equipment (UE) in a mobile communication system, the method comprising: receiving a control channel signal and a data channel signal from each of a plurality of Radio Units (RUs), wherein a control channel signal received from each of the plurality of RUs is equal, wherein a data channel signal received from each of the plurality of RUs is different, and wherein a data channel signal received from each of the plurality of RUs is determined taking into account at least one of a location of the UE, and load balancing. 13. The method of claim 12, further comprising: receiving a control channel reference signal from each of the plurality of RUs; and estimating the data channel signal based on the control channel reference signal and beamforming weight information. 14. 
The method of claim 12, further comprising: receiving a data channel reference signal from each of the plurality of RUs; and estimating the data channel signal based on the data channel reference signal. 15. A User Equipment (UE) in a mobile communication system, the UE comprising: a receiver for receiving a control channel signal and a data channel signal from each of a plurality of Radio Units (RUs), wherein a control channel signal received from each of the plurality of RUs is equal, wherein a data channel signal received from each of the plurality of RUs is different, and wherein a data channel signal received from each of the plurality of RUs is determined taking into account at least one of a location of the UE, and load balancing. 16. The UE of claim 15, further comprising an estimator; wherein the receiver receives a control channel reference signal from each of the plurality of RUs, and wherein the estimator estimates the data channel signal based on the control channel reference signal and beamforming weight information. 17. The UE of claim 15, further comprising an estimator; wherein the receiver receives a data channel reference signal from each of the plurality of RUs, and wherein the estimator estimates the data channel signal based on the data channel reference signal.
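The claimed eNB behavior can be sketched as follows: one control channel signal is broadcast identically to every RU, while each RU receives its own data channel signal chosen per the UE locations it serves and its load. All names here (RU ids, signal strings, the `build_downlink` helper) are illustrative assumptions, not from any real eNB implementation.

```python
# Hypothetical sketch: same control signal to all RUs, per-RU data signals
# determined by UE-to-RU assignment ("location") and per-RU load.

def build_downlink(rus, ues, control_signal, make_data_signal):
    """Return {ru_id: (control, data)} for one transmission interval.

    rus:  list of RU ids.
    ues:  {ue_id: ru_id} mapping each UE to its serving RU (a stand-in
          for UE location); load balancing here just counts UEs per RU.
    control_signal: object transmitted identically to all RUs.
    make_data_signal: callable (ru_id, served_ues, load) -> data signal.
    """
    load = {ru: 0 for ru in rus}
    served = {ru: [] for ru in rus}
    for ue, ru in ues.items():
        load[ru] += 1
        served[ru].append(ue)
    return {
        ru: (control_signal, make_data_signal(ru, served[ru], load[ru]))
        for ru in rus
    }

downlink = build_downlink(
    rus=["ru0", "ru1"],
    ues={"ueA": "ru0", "ueB": "ru1", "ueC": "ru1"},
    control_signal="PDCCH",
    make_data_signal=lambda ru, ues, load: f"PDSCH:{ru}:load={load}",
)
```

In this toy run both RUs get the same control signal but different data signals, mirroring the "same control channel, different data channel" split in claim 1.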
2,400
7,418
7,418
15,146,005
2,487
A video encoding device includes: transform means 11 for transforming an image block; entropy encoding means 12 for entropy-encoding transformed data of the image block transformed by the transform means 11; PCM encoding means 13 for PCM-encoding the image block; multiplex data selection means 14 for selecting output data of the entropy encoding means 12 or the PCM encoding means 13, in a block of a block size set from the outside; and multiplexing means 15 for embedding a PCM header in a bitstream, in the block of the block size set from the outside.
1. A video decoding device comprising: one or more processors, the one or more processors configured to: extract PCM block size information from a bitstream; compute a maximum value of a PCM block size for parsing a PCM header based on the PCM block size information; parse from the bitstream the PCM header whose block has a block size less than or equal to the maximum value of the PCM block size; parse transformed data of an image in the bitstream based on the PCM header; and decode by PCM decoding PCM data of the image in the bitstream based on the PCM header, wherein block size indicates a number of samples contained in a block. 2. A video decoding method comprising: extracting PCM block size information from a bitstream; computing a maximum value of a PCM block size for parsing a PCM header based on the PCM block size information; parsing from the bitstream the PCM header whose block has a block size less than or equal to the maximum value of the PCM block size; controlling an entropy decoding process and a PCM decoding process based on the PCM header; parsing transformed data of an image in the bitstream upon the entropy decoding process being controlled; and decoding by PCM-decoding PCM data of the image in the bitstream upon the PCM decoding process being controlled, wherein block size indicates a number of samples contained in a block. 3. 
A non-transitory computer readable information recording medium storing a video decoding program which, when executed by a processor, performs a method comprising: extracting PCM block size information from a bitstream; computing a maximum value of a PCM block size for parsing a PCM header based on the PCM block size information; parsing from the bitstream the PCM header whose block has a block size less than or equal to the maximum value of the PCM block size; controlling an entropy decoding process and a PCM decoding process based on the PCM header; parsing transformed data of an image in the bitstream upon the entropy decoding process being controlled; and decoding by PCM-decoding PCM data of the image in the bitstream upon the PCM decoding process being controlled, wherein block size indicates a number of samples contained in a block.
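The decoder-side rule in these claims can be sketched as: PCM block size information from the bitstream yields a maximum PCM block size, and a PCM header is parsed only for blocks at or below that maximum. The two-field size encoding below (a log2 minimum size plus a log2 difference, giving a square block in samples) is an assumption for illustration, not the claimed syntax.

```python
# Hypothetical sketch of the claimed gate on PCM header parsing.

def max_pcm_block_size(log2_min_size, log2_diff):
    """Maximum PCM block size in samples, computed from two assumed
    bitstream fields (log2 of the smallest side, plus a log2 range)."""
    log2_max = log2_min_size + log2_diff
    return (1 << log2_max) * (1 << log2_max)  # samples in a square block

def should_parse_pcm_header(block_width, block_height, max_size):
    """Per the claims, parse the PCM header only when the block's size
    (number of samples) is <= the computed maximum."""
    return block_width * block_height <= max_size
```

For example, with a minimum side of 2^3 = 8 and a range of 2, the largest PCM block side is 32, so a 32x32 block qualifies for PCM header parsing while a 64x32 block does not.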
2,400
7,419
7,419
14,513,121
2,481
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer (BL) and an enhancement layer (EL), the BL having a BL picture in a first access unit, and the EL having an EL picture in the first access unit. The BL picture may be associated with a flag. The processor is configured to determine a value of the flag associated with the BL picture, and perform, based on the value of the flag, one of (1) removing one or more EL pictures in a decoded picture buffer (DPB) without outputting the one or more EL pictures before the EL picture is coded, or (2) refraining from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. The processor may encode or decode the video information.
1. An apparatus configured to code video information, the apparatus comprising: a memory unit configured to store video information associated with a base layer (BL) and an enhancement layer (EL), the BL having a BL picture in a first access unit, and the EL having an EL picture in the first access unit, wherein the BL picture has a flag associated therewith; and a processor in communication with the memory unit, the processor configured to: determine a value of the flag associated with the BL picture; and perform, based on the value of the flag, one of (1) removing one or more EL pictures in a decoded picture buffer (DPB) without outputting the one or more EL pictures before the EL picture is coded, or (2) refraining from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 2. The apparatus of claim 1, wherein the flag associated with the BL picture indicates whether the first access unit immediately follows a splice point where two bitstreams are joined together into a single bitstream comprising the BL and the EL. 3. The apparatus of claim 1, wherein the processor is further configured to remove, based on determining that the value of the flag indicates that the first access unit immediately follows a splice point, the one or more EL pictures in the DPB without outputting the one or more EL pictures before the EL picture is coded. 4. The apparatus of claim 1, wherein the processor is further configured to refrain, based on determining that the value of the flag indicates that the first access unit does not immediately follow a splice point, from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 5. The apparatus of claim 1, wherein the EL picture is not an intra random access point (IRAP) picture. 6. The apparatus of claim 1, wherein the first access unit is an initial IRAP access unit. 7. The apparatus of claim 1, wherein the BL picture has NoRaslOutputFlag equal to 1. 8. 
The apparatus of claim 1, wherein the flag associated with the BL picture is NoClrasOutputFlag. 9. The apparatus of claim 1, wherein the BL picture is an IRAP picture. 10. The apparatus of claim 1, wherein the BL picture is associated with a smallest layer ID of all layer IDs used for the video information. 11. The apparatus of claim 1, wherein the apparatus comprises an encoder, and wherein the processor is further configured to encode the video information in a bitstream. 12. The apparatus of claim 1, wherein the apparatus comprises a decoder, and wherein the processor is further configured to decode the video information in a bitstream. 13. The apparatus of claim 1, wherein the apparatus comprises a device selected from a group consisting of one or more of: a computer, a notebook, a laptop computer, a tablet computer, a set-top box, a telephone handset, a smart phone, a smart pad, a television, a camera, a display device, a digital media player, a video gaming console, and an in-car computer. 14. A method of encoding video information, the method comprising: determining a value of a flag associated with a BL picture in a first access unit; and performing, based on the value of the flag, one of (1) removing one or more EL pictures in a decoded picture buffer (DPB) without outputting the one or more EL pictures before an EL picture in the first access unit is coded, or (2) refraining from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 15. The method of claim 14, wherein the flag associated with the BL picture indicates whether the first access unit immediately follows a splice point where two bitstreams are joined together into a single bitstream comprising the BL and the EL. 16. 
The method of claim 14, further comprising at least one of (1) removing, based on determining that the value of the flag indicates that the first access unit immediately follows a splice point, the one or more EL pictures in the DPB without outputting the one or more EL pictures before the EL picture is coded, or (2) refraining, based on determining that the value of the flag indicates that the first access unit does not immediately follow a splice point, from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 17. The method of claim 14, wherein the EL picture is not an intra random access point (IRAP) picture. 18. The method of claim 14, wherein the first access unit is an initial IRAP access unit. 19. The method of claim 14, wherein the BL picture has NoRaslOutputFlag equal to 1. 20. The method of claim 14, wherein the flag associated with the BL picture is NoClrasOutputFlag. 21. The method of claim 14, wherein the BL picture is an IRAP picture. 22. The method of claim 14, wherein the BL picture is associated with a smallest layer ID of all layer IDs used for the video information. 23. A non-transitory computer readable medium comprising code that, when executed, causes an apparatus to perform a process comprising: storing video information associated with a base layer (BL) and an enhancement layer (EL), the BL having a BL picture in a first access unit, and the EL having an EL picture in the first access unit, wherein the BL picture has a flag associated therewith; determining a value of the flag associated with the BL picture; and performing, based on the value of the flag, one of (1) removing one or more EL pictures in a decoded picture buffer (DPB) without outputting the one or more EL pictures before the EL picture is coded, or (2) refraining from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 24. 
The computer readable medium of claim 23, wherein the flag associated with the BL picture indicates whether the first access unit immediately follows a splice point where two bitstreams are joined together into a single bitstream comprising the BL and the EL. 25. The computer readable medium of claim 23, wherein the process further comprises at least one of (1) removing, based on determining that the value of the flag indicates that the first access unit immediately follows a splice point, the one or more EL pictures in the DPB without outputting the one or more EL pictures before the EL picture is coded, or (2) refraining, based on determining that the value of the flag indicates that the first access unit does not immediately follow a splice point, from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 26. A video coding device configured to code video information, the video coding device comprising: means for storing video information associated with a base layer (BL) and an enhancement layer (EL), the BL having a BL picture in a first access unit, and the EL having an EL picture in the first access unit, wherein the BL picture has a flag associated therewith; means for determining a value of the flag associated with the BL picture; and means for performing, based on the value of the flag, one of (1) removing one or more EL pictures in a decoded picture buffer (DPB) without outputting the one or more EL pictures before the EL picture is coded, or (2) refraining from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures. 27. The video coding device of claim 26, wherein the flag associated with the BL picture indicates whether the first access unit immediately follows a splice point where two bitstreams are joined together into a single bitstream comprising the BL and the EL. 28. 
The video coding device of claim 26, further comprising at least one of (1) means for removing, based on determining that the value of the flag indicates that the first access unit immediately follows a splice point, the one or more EL pictures in the DPB without outputting the one or more EL pictures before the EL picture is coded, or (2) means for refraining, based on determining that the value of the flag indicates that the first access unit does not immediately follow a splice point, from removing the one or more EL pictures in the DPB without outputting the one or more EL pictures.
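The DPB behavior claimed above can be sketched with a minimal stand-in: when the base-layer picture's flag (NoClrasOutputFlag in the claims) signals that the access unit immediately follows a splice point, the enhancement-layer pictures are removed from the DPB without being output; otherwise they are left in place. The DPB model below is deliberately minimal and only illustrative.

```python
# Hedged sketch of the claimed remove-without-output decision.

def update_dpb(dpb_el_pictures, no_clras_output_flag):
    """Return (remaining_el_pictures, output_pictures).

    dpb_el_pictures: EL pictures currently held in the DPB.
    no_clras_output_flag: flag carried by the BL picture of the access
        unit (true when the unit immediately follows a splice point).
    """
    if no_clras_output_flag:
        # Splice point: drop the EL pictures, outputting none of them.
        return [], []
    # No splice point: refrain from removing; nothing is forced out.
    return list(dpb_el_pictures), []
```

Either branch outputs no pictures; the flag only decides whether the EL pictures survive in the DPB, matching the (1)/(2) alternatives of claim 1.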
2,400
7,420
7,420
14,307,380
2,449
A computer-implemented method includes collecting information about at least one resource, the information having been determined based on (i) a set of features of the at least one resource and on (ii) information about previous requests for the at least one resource; determining a computable function of the set of features of the at least one resource, the computable function having been determined based on the information about the at least one resource, the function defining a peering policy for the at least one resource; and providing the function to at least one service endpoint in a cluster or supercluster.
1. A computer-implemented method, the method operable on one or more devices comprising hardware including memory and at least one processor, the method comprising: (A) collecting information about at least one resource, said information having been determined based on (i) a set of features of said at least one resource and on (ii) information about previous requests for said at least one resource; (B) determining a computable function of said set of features of said at least one resource, said computable function having been determined based on said information about said at least one resource, said function defining a peering policy for said at least one resource; and (C) providing said function to at least one service endpoint in a cluster or supercluster. 2. The method of claim 1 wherein the peering policy is for said cluster or super-cluster, and wherein said peering policy defines: (i) a number of cache-responsible nodes in the cluster or super-cluster for the one or more resources. 3. The method of claim 2 wherein the peering policy also defines: (ii) a number of fill-responsible nodes in the cluster or super-cluster for the one or more resources. 4. The method of claim 3 wherein the number of fill-responsible nodes is one. 5. The method of claim 1 wherein the one or more resources comprise a property. 6. The method of claim 1 wherein the computable function is determined using at least one machine learning technique. 7. The method of claim 1 wherein the at least one service endpoint in the cluster or supercluster uses the function to determine a number of cache responsible nodes in the cluster for the one or more resources. 8. The method of claim 1 wherein the at least one service endpoint in the cluster or supercluster uses the function to determine a number of fill responsible nodes in the cluster for the one or more resources. 9. The method of claim 1 further comprising: repeating acts (A) to (C). 10. 
The method of claim 1 wherein each feature in the set of features of said at least one resource is determinable at request time of the at least one resource. 11. The method of claim 1 wherein the information about previous requests for said at least one resource comprises information about the number of requests for the at least one resource. 12. The method of claim 1 wherein the set of features of the at least one resource includes a size of the at least one resource. 13. The method of claim 1 further comprising: (A)(2) collecting second information about at least one resource, said second information having been determined based on (i) said set of features of said at least one resource and on (ii) second information about previous requests for said at least one resource; (B)(2) determining a second computable function of said set of features of said at least one resource, said computable function having been determined based on said second information about said at least one resource, said second function defining a second peering policy for said at least one resource; and (C) providing said second function to said at least one service endpoint in said cluster or supercluster. 14. The method of claim 13 wherein said second function is distinct from said first function. 15. The method of claim 13 wherein the at least one service endpoint in the cluster or supercluster uses the function to determine a first number of cache responsible nodes in the cluster for the one or more resources, and the at least one service endpoint uses the second function to determine a second number of cache responsible nodes in the cluster for the one or more resources. 16. The method of claim 15 wherein the first number of cache responsible nodes is distinct from the second number of cache responsible nodes. 17. 
A system, operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, the system comprising: (a) hardware including memory and at least one processor, and (b) one or more services running on said hardware, wherein said one or more services are configured to: (A) collect information about at least one resource, said information having been determined based on (i) a set of features of said at least one resource and on (ii) information about previous requests for said at least one resource; (B) determine a computable function of said set of features of said at least one resource, said computable function having been determined based on said information about said at least one resource, said function defining a peering policy for said at least one resource; and (C) provide said function to at least one service endpoint in a cluster or supercluster. 18. The system of claim 17 wherein the peering policy is for said cluster or super-cluster, and wherein said peering policy defines: (i) a number of cache-responsible nodes in the cluster or super-cluster for the one or more resources. 19. The system of claim 18 wherein the peering policy also defines: (ii) a number of fill-responsible nodes in the cluster or super-cluster for the one or more resources. 20. 
A computer program product having computer readable instructions stored on non-transitory computer readable media, the computer readable instructions including instructions for implementing a computer-implemented method, said method operable on one or more devices comprising hardware including memory and at least one processor and running one or more services on said hardware, said method operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, and said method comprising: (A) collecting information about at least one resource, said information having been determined based on (i) a set of features of said at least one resource and on (ii) information about previous requests for said at least one resource; (B) determining a computable function of said set of features of said at least one resource, said computable function having been determined based on said information about said at least one resource, said function defining a peering policy for said at least one resource; and (C) providing said function to at least one service endpoint in a cluster or supercluster. 21. The computer program product of claim 20 wherein the peering policy is for said cluster or super-cluster, and wherein said peering policy defines: (i) a number of cache-responsible nodes in the cluster or super-cluster for the one or more resources. 22. The computer program product of claim 21 wherein the peering policy also defines: (ii) a number of fill-responsible nodes in the cluster or super-cluster for the one or more resources.
A computer-implemented method includes collecting information about at least one resource, the information having been determined based on (i) a set of features of the at least one resource and on (ii) information about previous requests for the at least one resource; determining a computable function of the set of features of the at least one resource, the computable function having been determined based on the information about the at least one resource, the function defining a peering policy for the at least one resource; and providing the function to at least one service endpoint in a cluster or supercluster.
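The claimed flow above — (A) collect per-resource request statistics, (B) derive a computable function of resource features that defines the peering policy, and (C) provide that function to service endpoints — can be sketched as follows. All names are hypothetical, and the "machine learning technique" of dependent claim 6 is reduced to a simple popularity threshold for illustration.

```python
# Hypothetical sketch of the claimed peering-policy flow. The policy function
# maps resource features to a number of cache-responsible nodes; every
# resource gets exactly one fill-responsible node (cf. dependent claim 4).

from collections import Counter

def collect_info(request_log):
    """(A) Aggregate information about previous requests per resource."""
    return Counter(request_log)

def determine_policy_function(request_counts, cluster_size):
    """(B) Return a computable function of resource features that defines
    the peering policy for the cluster."""
    def policy(resource, size_bytes):
        # size_bytes is accepted as a request-time feature (cf. claim 12),
        # though this toy policy keys only on popularity.
        popularity = request_counts.get(resource, 0)
        if popularity >= 100:           # hot: replicate on every node
            cache_nodes = cluster_size
        elif popularity >= 10:          # warm: a few replicas
            cache_nodes = min(3, cluster_size)
        else:                           # cold: a single copy
            cache_nodes = 1
        return {"cache_responsible": cache_nodes, "fill_responsible": 1}
    return policy

class ServiceEndpoint:
    """(C) A service endpoint that receives and applies the function."""
    def install_policy(self, fn):
        self.policy = fn

log = ["/a.css"] * 150 + ["/b.mp4"] * 12 + ["/c.iso"]
endpoint = ServiceEndpoint()
endpoint.install_policy(determine_policy_function(collect_info(log), cluster_size=8))
print(endpoint.policy("/a.css", size_bytes=4096))
```

Repeating acts (A) to (C) (claim 9) would simply re-derive and re-install the function from fresh request statistics, possibly yielding a distinct second function as in claims 13-16.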
2,400
7,421
7,421
14,847,913
2,482
An imaging and analysis system for a component of a rotary machine includes an image capture device operable to capture image data from at least one selected type of electromagnetic radiation that is at least one of reflected from and transmitted through the component. The system also includes an image processor configured to generate processed data from the captured image data. The system further includes a control system configured to automatically identify a condition of the component by comparing the processed data to stored reference data. The reference data is stored in a format that enables direct comparison to the processed data.
1. An imaging and analysis system for a component of a rotary machine, said system comprising: an image capture device operable to capture image data from at least one selected type of electromagnetic radiation that is at least one of reflected from and transmitted through the component; an image processor configured to generate processed data from the captured image data; and a control system configured to automatically identify a condition of the component by comparing the processed data to stored reference data, wherein the reference data is stored in a format that enables direct comparison to the processed data. 2. The system of claim 1, further comprising a mounting rig configured to present the component in at least one preselected orientation relative to said image capture device. 3. The system of claim 2, wherein the at least one preselected orientation includes a plurality of preselected orientations, said mounting rig comprises a component mounting system configured to fixedly receive the component, said component mounting system is rotatable to selectively present the component in each of the plurality of preselected orientations. 4. The system of claim 3, wherein said component mounting system includes a plurality of interchangeable mounting fixtures, each of said interchangeable mounting fixtures is configured to receive a different size and/or type of component of the rotary machine. 5. The system of claim 2, wherein said mounting rig further comprises: a base; and a positioning system configured to selectively position said image capture device relative to said base. 6. The system of claim 1, wherein said image processor is configured to generate the processed data as at least one of a filtered image, a composite image from a plurality of captured images of the component, and a mathematical model of the component. 7. 
The system of claim 1, wherein the stored reference data includes an ideal set of data associated with a reference component that is at least one of new and undamaged. 8. The system of claim 7, wherein said control system is further configured to generate and store the ideal set of data from the reference component. 9. The system of claim 1, wherein the stored reference data includes a condition-type set of data associated with a reference component that has a specified condition. 10. The system of claim 9, wherein said control system is further configured to generate and store the condition-type set of data from the reference component. 11. A method of analyzing a component of a rotary machine, said method comprising: capturing image data from at least one selected type of electromagnetic radiation that is at least one of reflected from and transmitted through the component; generating processed data from the captured image data; and automatically identifying a condition of the component by comparing the processed data to stored reference data, wherein the reference data is stored in a format that enables direct comparison to the processed data. 12. The method of claim 11, further comprising receiving the component in a mounting rig configured to present the component in at least one preselected orientation relative to the image capture device. 13. The method of claim 12, wherein the at least one preselected orientation includes a plurality of preselected orientations, said method further comprising rotating a component mounting system of the mounting rig to selectively present the component in each of the plurality of preselected orientations. 14. 
The method of claim 13, wherein said receiving the component further comprises receiving the component in one of a plurality of interchangeable mounting fixtures of the component mounting system, each of the interchangeable mounting fixtures is configured to receive a different size and/or type of component of the rotary machine. 15. The method of claim 12, further comprising selectively positioning the image capture device relative to a base of the mounting rig using a positioning system of the mounting rig. 16. The method of claim 11, wherein said generating processed data further comprises generating at least one of a filtered image, a composite image from a plurality of captured images of the component, and a mathematical model of the component. 17. The method of claim 11, wherein said comparing the processed data to stored reference data further comprises comparing the processed data to an ideal set of data associated with a reference component that is at least one of new and undamaged. 18. The method of claim 17, further comprising generating and storing the ideal set of data from the reference component. 19. The method of claim 11, wherein said comparing the processed data to stored reference data further comprises comparing the processed data to a condition-type set of data associated with a reference component that has a specified condition. 20. The method of claim 19, further comprising generating and storing the condition-type set of data from the reference component.
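Claim 1's control system compares processed data to reference data stored "in a format that enables direct comparison." A minimal sketch of that idea, assuming the processed data and each stored reference are feature vectors of the same length, is nearest-reference matching; everything here is an illustrative assumption, not the patented implementation.

```python
# Illustrative sketch: the control system stores reference data in the same
# feature-vector format as the processed data, so identifying a condition is
# a direct distance comparison against each stored reference.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class ControlSystem:
    def __init__(self):
        self.references = {}   # condition label -> reference feature vector

    def store_reference(self, condition, features):
        """Generate and store an ideal or condition-type data set
        from a reference component (cf. claims 7-10)."""
        self.references[condition] = features

    def identify_condition(self, processed, tolerance=1.0):
        """Return the condition of the nearest stored reference, or
        'unknown' if no reference is within tolerance."""
        label, ref = min(self.references.items(),
                         key=lambda kv: squared_distance(processed, kv[1]))
        return label if squared_distance(processed, ref) <= tolerance else "unknown"

cs = ControlSystem()
cs.store_reference("undamaged", [1.0, 0.0, 0.0])   # ideal data set
cs.store_reference("cracked",   [0.0, 1.0, 0.5])   # condition-type data set
print(cs.identify_condition([0.1, 0.9, 0.4]))
```

The image processor's outputs named in claim 6 (filtered image, composite image, mathematical model) would each be reduced to such a vector before comparison.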
2,400
7,422
7,422
13,931,708
2,491
A method, an apparatus, and a computer program product for wireless communication are provided in connection with providing efficient SE functionality. In one example, a communications device includes an SE, which includes a processor, RAM, and NVM, as well as secured and unsecured components. The SE may be equipped to receive a request to access a function that is accessible through information stored in the SE, retrieve a first portion of the information associated with the function that is stored in the secured component, obtain a second portion of the information associated with the function that is stored in the unsecured component, and facilitate access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information. In an aspect, the secured component may include the processor and the RAM, and the unsecured component may include substantially all of the NVM.
1. An apparatus for communications, comprising: a secure element (SE) comprising a processor, random access memory (RAM), and non-volatile memory (NVM), wherein the SE further comprises a secured component of the SE and an unsecured component of the SE, wherein the unsecured component and the secured component are coupled through an interface, and wherein the SE is configured to: receive a request to access a function that is accessible through information stored in the SE; retrieve a first portion of the information associated with the function that is stored in the secured component of the SE, wherein the secured component comprises the processor and the RAM; obtain a second portion of the information associated with the function that is stored in the unsecured component of the SE, wherein the unsecured component comprises substantially all of the NVM; and facilitate access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information. 2. The apparatus of claim 1, wherein the function is an application stored on a communications device, and wherein the request is received through a cryptographically secure interface between the SE and the communications device. 3. The apparatus of claim 1, wherein the NVM included in the unsecured component of the SE comprises standard NVM. 4. The apparatus of claim 1, wherein the secured component of the SE is secured using a security shielding. 5. The apparatus of claim 1, wherein the secured component of the SE is integrated into a system on chip (SoC). 6. The apparatus of claim 5, wherein the SoC is a near field communication controller (NFCC). 7. The apparatus of claim 5, wherein the SoC is a mobile station modem (MSM) chip. 8. The apparatus of claim 5, wherein a footprint of the SE on the SoC is minimized by integrating only the secured component of the SE into the SoC. 9. 
The apparatus of claim 8, wherein the secured component of the SE has a geometry less than or equal to 65 nm. 10. The apparatus of claim 5, wherein a security shielding for the secured component includes one or more existing metal layers associated with the SoC. 11. The apparatus of claim 1, wherein the SE is further configured to use a high speed interface between the unsecured component of the SE and the secured component of the SE. 12. The apparatus of claim 1, wherein the second portion of the information associated with the function that is stored in the unsecured component of the SE is stored in an encrypted format based on the first portion of the information associated with the function that is stored in the secured component. 13. The apparatus of claim 12, wherein the SE is further configured to decrypt the second portion of the information using the processor included in the secured component of the SE, based on one or more ciphers included in the first portion of the information. 14. A method of communication using a secure element (SE), comprising: receiving a request to access a function that is accessible through information stored in the SE, wherein the SE comprises a processor, random access memory (RAM), and non-volatile memory (NVM); retrieving a first portion of the information associated with the function that is stored in a secured component of the SE, wherein the secured component comprises the processor and the RAM; obtaining a second portion of the information associated with the function that is stored in an unsecured component of the SE, wherein the unsecured component comprises substantially all of the NVM; and facilitating access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information. 15. 
The method of claim 14, wherein the function is an application stored on a communications device, and wherein the request is received through a cryptographically secure interface between the SE and the communications device. 16. The method of claim 14, wherein the NVM included in the unsecured component of the SE comprises standard NVM. 17. The method of claim 14, wherein the secured component of the SE is secured using a security shielding. 18. The method of claim 14, wherein the secured component of the SE is integrated into a system on chip (SoC). 19. The method of claim 18, wherein the SoC is a near field communication controller (NFCC). 20. The method of claim 18, wherein the SoC is a mobile station modem (MSM) chip. 21. The method of claim 18, wherein a footprint of the SE on the SoC is minimized by integrating only the secured component of the SE into the SoC. 22. The method of claim 21, wherein the secured component of the SE has a geometry less than or equal to 65 nm. 23. The method of claim 18, wherein a security shielding for the secured component includes one or more existing metal layers associated with the SoC. 24. The method of claim 14, wherein the obtaining comprises using a high speed interface between the unsecured component of the SE and the secured component of the SE. 25. The method of claim 14, wherein the second portion of the information associated with the function that is stored in the unsecured component of the SE is stored in an encrypted format based on the first portion of the information associated with the function that is stored in the secured component. 26. The method of claim 25, wherein the accessing further comprises decrypting the second portion of the information, by the processor included in the secured component of the SE, based on one or more ciphers included in the first portion of the information. 27. 
An apparatus for communications, comprising: means for receiving a request to access a function that is accessible through information stored in a secure element (SE), wherein the SE comprises a processor, random access memory (RAM), and non-volatile memory (NVM); means for retrieving a first portion of the information associated with the function that is stored in a secured component of the SE, wherein the secured component comprises the processor and the RAM; means for obtaining a second portion of the information associated with the function that is stored in an unsecured component of the SE, wherein the unsecured component comprises substantially all of the NVM; and means for facilitating access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information. 28. The apparatus of claim 27, wherein the function is an application stored on a communications device, and wherein the request is received through a cryptographically secure interface between the SE and the communications device. 29. The apparatus of claim 27, wherein the NVM included in the unsecured component of the SE comprises standard NVM. 30. The apparatus of claim 27, wherein the secured component of the SE is secured using a security shielding. 31. The apparatus of claim 27, wherein the secured component of the SE is integrated into a system on chip (SoC). 32. The apparatus of claim 31, wherein the SoC is a near field communication controller (NFCC). 33. The apparatus of claim 31, wherein the SoC is a mobile station modem (MSM) chip. 34. The apparatus of claim 31, wherein a footprint of the SE on the SoC is minimized by integrating only the secured component of the SE into the SoC. 35. The apparatus of claim 34, wherein the secured component of the SE has a geometry less than or equal to 65 nm. 36. 
The apparatus of claim 31, wherein a security shielding for the secured component includes one or more existing metal layers associated with the SoC. 37. The apparatus of claim 36, wherein the means for obtaining are further configured to use a high speed interface between the unsecured component of the SE and the secured component of the SE. 38. The apparatus of claim 27, wherein the second portion of the information associated with the function that is stored in the unsecured component of the SE is stored in an encrypted format based on the first portion of the information associated with the function that is stored in the secured component. 39. The apparatus of claim 38, wherein the means for facilitating access are further configured to decrypt the second portion of the information, based on one or more ciphers included in the first portion of the information. 40. A computer program product, comprising: a computer-readable medium comprising code for: receiving a request to access a function that is accessible through information stored in a secure element (SE), wherein the SE comprises a processor, random access memory (RAM), and non-volatile memory (NVM); retrieving a first portion of the information associated with the function that is stored in a secured component of the SE, wherein the secured component comprises the processor and the RAM; obtaining a second portion of the information associated with the function that is stored in an unsecured component of the SE, wherein the unsecured component comprises substantially all of the NVM; and facilitating access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information.
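The split-SE design these claims describe — a small first portion held in the secured component, a bulk second portion kept encrypted in unsecured NVM (claims 12, 25, 38), and access that decrypts the second portion using ciphers from the first — can be sketched as below. This is a hedged toy model: the XOR-keystream "cipher" stands in for real SE cryptography, and all names are illustrative.

```python
# Toy sketch of the claimed split secure element. The secured component
# (processor + RAM side) holds only a small key; the unsecured NVM holds the
# bulk payload in encrypted form. The SHA-256 XOR keystream is a stand-in
# cipher for illustration, not real secure-element crypto.

import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n keystream bytes from the key (counter-mode style)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

class SecureElement:
    def __init__(self):
        self.secured = {}     # small, shielded store: first portions (ciphers)
        self.unsecured = {}   # bulk NVM: second portions, ciphertext only

    def provision(self, function, key: bytes, payload: bytes):
        self.secured[function] = key                                   # first portion
        self.unsecured[function] = xor(payload, keystream(key, len(payload)))

    def access(self, function) -> bytes:
        key = self.secured[function]                  # retrieve first portion
        blob = self.unsecured[function]               # obtain second portion
        return xor(blob, keystream(key, len(blob)))   # facilitate access

se = SecureElement()
se.provision("payment-app", key=b"\x01" * 16, payload=b"card credentials")
print(se.access("payment-app"))
```

Because the unsecured NVM never sees plaintext, only the small secured component needs shielding, which is the footprint-minimization point of claims 8 and 21.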
The method of claim 14, wherein the obtaining comprises using a high speed interface between the unsecured component of the SE and the secured component of the SE. 25. The method of claim 14, wherein the second portion of the information associated with the function that is stored in the unsecured component of the SE is stored in an encrypted format based on the first portion of the information associated with the function that is stored in the secured component. 26. The method of claim 25, wherein the accessing further comprises decrypting the second portion of the information, by the processor included in the secured component of the SE, based on one or more ciphers included in the first portion of the information. 27. An apparatus for communications, comprising: means for receiving a request to access a function that is accessible through information stored in a secure element (SE), wherein the SE comprises a processor, random access memory (RAM), and non-volatile memory (NVM); means for retrieving a first portion of the information associated with the function that is stored in a secured component of the SE, wherein the secured component comprises the processor and the RAM; means for obtaining a second portion of the information associated with the function that is stored in an unsecured component of the SE, wherein the unsecured component comprises substantially all of the NVM; and means for facilitating access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information. 28. The apparatus of claim 27, wherein the function is an application stored on a communications device, and wherein the request is received through a cryptographically secure interface between the SE and the communications device. 29. The apparatus of claim 27, wherein the NVM included in the unsecured component of the SE comprises standard NVM. 30. 
The apparatus of claim 27, wherein the secured component of the SE is secured using a security shielding. 31. The apparatus of claim 27, wherein the secured component of the SE is integrated into a system on chip (SoC). 32. The apparatus of claim 31, wherein the SoC is a near field communication controller (NFCC). 33. The apparatus of claim 31, wherein the SoC is a mobile station modem (MSM) chip. 34. The apparatus of claim 31, wherein a footprint of the SE on the SoC is minimized by integrating only the secured component of the SE into the SoC. 35. The apparatus of claim 34, wherein the secured component of the SE has a geometry less than or equal to 65 nm. 36. The apparatus of claim 31, wherein a security shielding for the secured component includes one or more existing metal layers associated with the SoC. 37. The apparatus of claim 36, wherein the means for obtaining are further configured to use a high speed interface between the unsecured component of the SE and the secured component of the SE. 38. The apparatus of claim 27, wherein the second portion of the information associated with the function that is stored in the unsecured component of the SE is stored in an encrypted format based on the first portion of the information associated with the function that is stored in the secured component. 39. The apparatus of claim 38, wherein the means for facilitating access are further configured to decrypt the second portion of the information, based on one or more ciphers included in the first portion of the information. 40. 
A computer program product, comprising: a computer-readable medium comprising code for: receiving a request to access a function that is accessible through information stored in a secure element (SE), wherein the SE comprises a processor, random access memory (RAM), and non-volatile memory (NVM); retrieving a first portion of the information associated with the function that is stored in a secured component of the SE, wherein the secured component comprises the processor and the RAM; obtaining a second portion of the information associated with the function that is stored in an unsecured component of the SE, wherein the unsecured component comprises substantially all of the NVM; and facilitating access to the function using the first retrieved portion of the information to enable access to the second obtained portion of the information.
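The split-storage scheme in the claims above (a small first portion, including cipher material, held in the secured component; the bulk second portion held only in encrypted form in unsecured NVM, per claims 12-13) can be sketched as follows. This is a minimal illustration, not the claimed implementation: the class names, the per-function key model, and the SHA-256-chained keystream standing in for a real cipher are all assumptions.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream from the key by chained SHA-256.
    # Stand-in for a real cipher; illustration only.
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

class SecuredComponent:
    """Holds the small first portion: per-function cipher keys (processor + RAM)."""
    def __init__(self):
        self._keys = {}  # function name -> cipher key (first portion)

    def store_key(self, func: str, key: bytes):
        self._keys[func] = key

    def decrypt(self, func: str, ciphertext: bytes) -> bytes:
        ks = _keystream(self._keys[func], len(ciphertext))
        return bytes(c ^ k for c, k in zip(ciphertext, ks))

class UnsecuredNVM:
    """Holds the bulk second portion, only in encrypted form."""
    def __init__(self):
        self._blobs = {}

    def store(self, func: str, blob: bytes):
        self._blobs[func] = blob

    def load(self, func: str) -> bytes:
        return self._blobs[func]

def provision(sec, nvm, func, payload, key):
    # Encrypt the bulk payload under the key kept in the secured part.
    sec.store_key(func, key)
    ks = _keystream(key, len(payload))
    nvm.store(func, bytes(p ^ k for p, k in zip(payload, ks)))

def access_function(sec, nvm, func):
    # The first portion (key) enables access to the second portion (blob).
    return sec.decrypt(func, nvm.load(func))
```

Under this sketch, only cipher material and the processor live in the (small, shielded) secured component, which is what lets the claims place "substantially all of the NVM" outside the security boundary.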
2,400
7,423
7,423
14,151,798
2,449
Embodiments of the present invention provide a method, system and computer program product for the alternate playback of streaming media segments. In an embodiment of the invention, a method for alternate playback of streaming media segments includes receiving different segments of streaming media for playback in sequence in a media player executing in memory of a playback device and rejecting playback of one of the different segments during playback of the one of the different segments. The method also includes selecting an alternate segment for playback in the media player in place of the rejected one of the different segments. Finally, the method includes playing back in the media player the alternate segment in place of the rejected one of the different segments, and subsequently playing back in the media player a next one of the different segments in the sequence.
1. A method for alternate playback of streaming media segments, the method comprising: receiving from a streaming media source different segments of streaming media for playback in sequence in a media player executing in memory of a playback device; rejecting playback of one of the different segments during playback of the one of the different segments; selecting an alternate segment for playback in the media player in place of the rejected one of the different segments; and, playing back in the media player the alternate segment in place of the rejected one of the different segments, and subsequently playing back in the media player a next one of the different segments in the sequence. 2. The method of claim 1, wherein the streaming media source is satellite radio. 3. The method of claim 2, wherein the set of alternate segments are stored in fixed storage of the playback device. 4. The method of claim 2, wherein the set of alternate segments are stored in fixed storage of a remote computing system. 5. The method of claim 1, wherein the alternate segment is selected based upon a playback time of the alternate selection most closely matching a remaining playback time of the rejected one of the different segments. 6. The method of claim 5, wherein a buffer segment is played back after playback of the alternate segment and before the next one of the different segments, the buffer segment having a playback time comparable to a difference between the remaining playback time and the playback time of the alternate selection. 7-20. (canceled)
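The selection rule in claims 5 and 6 (pick the alternate whose playback time most closely matches the remaining time of the rejected segment, then pad with a buffer segment so the next scheduled segment starts on time) can be sketched as below. The dict-based segment model and function names are illustrative assumptions, not from the claims.

```python
def choose_alternate(alternates, remaining_secs):
    """Pick the alternate whose duration most closely matches the
    remaining playback time of the rejected segment (claim 5)."""
    return min(alternates, key=lambda seg: abs(seg["duration"] - remaining_secs))

def buffer_needed(alternate, remaining_secs):
    """Duration of the buffer segment played after the alternate and
    before the next scheduled segment (claim 6)."""
    return max(0.0, remaining_secs - alternate["duration"])

alternates = [
    {"title": "filler-a", "duration": 95.0},
    {"title": "filler-b", "duration": 180.0},
    {"title": "filler-c", "duration": 240.0},
]

pick = choose_alternate(alternates, remaining_secs=200.0)
pad = buffer_needed(pick, remaining_secs=200.0)
# pick is "filler-b" (180 s is closest to 200 s); pad is 20.0 s
```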
2,400
7,424
7,424
14,255,914
2,485
The invention relates to a digital microscope and to a method for optimizing a work process in such a digital microscope. The digital microscope comprises according to the invention at least one first monitoring sensor for observing a sample ( 08 ), a sample table ( 04 ), an optics unit ( 02 ) or a user, and a monitoring unit. In the method according to the invention, first observation data of the first monitoring sensor are acquired and analyzed and evaluated in an automated manner in the monitoring unit, in order to generate control data and to use said data for controlling the work process of the digital microscope.
1. A digital microscope comprising: an optics unit (02) and a digital image processing unit, which are arranged on a microscope body (03); and a microscope image sensor for capturing an image of a sample (08) to be arranged on a sample table (04); at least one first monitoring sensor for observing the sample (08), the sample table (04), the optics unit (02) or a user; and a monitoring unit, wherein, in the monitoring unit, data of the monitoring sensor are evaluated in an automated manner and used for automated control of the digital microscope. 2. A digital microscope according to claim 1, further comprising: a second monitoring sensor, wherein the first monitoring sensor and the second monitoring sensor are arranged at spatially different sites on the digital microscope and data of the two monitoring sensors are processed in the monitoring unit into three-dimensional overview information. 3. A digital microscope according to claim 1, further comprising: a third monitoring sensor, wherein the first monitoring sensor and the third monitoring sensor are arranged at spatially different sites on the digital microscope, and data of the first and of the third monitoring sensor are processed in the monitoring unit into collision control information. 4. A digital microscope according to claim 1, wherein the first monitoring sensor is an image sensor (06, 07, 09) or a camera (41). 5. A digital microscope according to claim 2, wherein the second monitoring sensor is an image sensor (06, 07, 09). 6. A digital microscope according to claim 1, further comprising an auxiliary illumination device (21). 7. 
A method for optimizing a work process in a digital microscope with an optics unit (02) and with a first monitoring sensor, comprising the following steps: acquiring, from the first monitoring sensor, first observation data of a sample table (04), of the optics unit (02) or of a user, at the time of observation of a sample (08) arranged on the sample table (04); automatically analyzing and evaluating the first observation data of the first monitoring sensor and generating control data; using the control data to control the work process of the digital microscope. 8. A method according to claim 7, further comprising: acquiring second observation data from a second monitoring sensor, which is arranged with spatial offset relative to the first monitoring sensor in the digital microscope; generating a three-dimensional overview image or an elevation map of the sample from the first and second observation data. 9. A method according to claim 8, further comprising at least one of: using the first and second observation data for an approximate positioning of the sample table (04); and using the first and second observation data for an automated adjustment of a focus of a lens (01). 10. A method according to claim 7, wherein an illumination of the sample table (04) occurs during the acquisition of illumination data.
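Claims 2 and 8 only say that data from two spatially offset monitoring sensors are "processed into three-dimensional overview information" or an elevation map; one common way to do that (an assumption here, not stated in the claims) is stereo triangulation, where depth follows z = f·B/d from the disparity d between the two views. A minimal sketch with illustrative parameter values:

```python
def elevation_map(disparities, baseline_mm, focal_px):
    """Convert per-pixel disparities between two offset sensors into
    depths via the standard stereo relation z = f * B / d.
    Triangulation and these parameter names are assumptions for
    illustration; zero disparity maps to infinite depth."""
    depths = []
    for row in disparities:
        depths.append([focal_px * baseline_mm / d if d > 0 else float("inf")
                       for d in row])
    return depths

# Hypothetical setup: sensors 50 mm apart, focal length 800 px.
dmap = elevation_map([[4.0, 8.0], [10.0, 0.0]],
                     baseline_mm=50.0, focal_px=800.0)
# Larger disparity -> nearer point -> smaller depth value.
```

Such a coarse elevation map is exactly what claim 9 then reuses for approximate stage positioning and automated focus adjustment.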
2,400
7,425
7,425
14,133,881
2,449
Embodiments of the present invention provide a method, system and computer program product for the alternate playback of streaming media segments. In an embodiment of the invention, a method for alternate playback of streaming media segments includes receiving different segments of streaming media for playback in sequence in a media player executing in memory of a playback device and rejecting playback of one of the different segments during playback of the one of the different segments. The method also includes selecting an alternate segment for playback in the media player in place of the rejected one of the different segments. Finally, the method includes playing back in the media player the alternate segment in place of the rejected one of the different segments, and subsequently playing back in the media player a next one of the different segments in the sequence.
1-6. (canceled) 7. A streaming media data processing system comprising: a media player executing in memory of a computer and communicatively coupled to a streaming media server over a computer communications network, the media player receiving from the streaming media server different segments of streaming media for playback in sequence; and, an alternate media playback module executing in the memory of the computer, the module comprising program code enabled to reject playback of one of the different segments during playback of the one of the different segments, to select an alternate segment for playback in the media player in place of the rejected one of the different segments, and to direct the media player to play back the alternate segment in place of the rejected one of the different segments. 8. The system of claim 7, wherein the alternate segment is selected from a set of alternate segments. 9. The system of claim 8, wherein the set of alternate segments are stored in a local media store of the computer. 10. The system of claim 8, wherein the set of alternate segments are stored in a remote media store accessible over the computer communications network. 11. The system of claim 7, wherein the alternate segment is selected based upon a playback time of the alternate selection most closely matching a remaining playback time of the rejected one of the different segments. 12. The system of claim 11, wherein a buffer segment is played back after playback of the alternate segment and before the next one of the different segments, the buffer segment having a playback time comparable to a difference between the remaining playback time and the playback time of the alternate selection. 13. The system of claim 7, wherein the streaming media is audio. 14. The system of claim 7, wherein the streaming media is video. 15. 
A computer program product for alternate playback of streaming media segments, the computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code for receiving different segments of streaming media for playback in sequence in a media player executing in memory of a computer; computer readable program code for rejecting playback of one of the different segments during playback of the one of the different segments; computer readable program code for selecting an alternate segment for playback in the media player in place of the rejected one of the different segments; and, computer readable program code for playing back in the media player the alternate segment in place of the rejected one of the different segments, and subsequently playing back in the media player a next one of the different segments in the sequence. 16. The computer program product of claim 15, wherein the alternate segment is selected from a set of alternate segments. 17. The computer program product of claim 16, wherein the set of alternate segments are stored in fixed storage of the computer. 18. The computer program product of claim 16, wherein the set of alternate segments are stored in fixed storage of a remote computing system. 19. The computer program product of claim 15, wherein the alternate segment is selected based upon a playback time of the alternate selection most closely matching a remaining playback time of the rejected one of the different segments. 20. The computer program product of claim 19, wherein a buffer segment is played back after playback of the alternate segment and before the next one of the different segments, the buffer segment having a playback time comparable to a difference between the remaining playback time and the playback time of the alternate selection.
2,400
7,426
7,426
13,034,115
2,445
A first server computing system receives a request from a second server computing system over a first network. The request is for product asset management data relating to an owner of product assets. The product asset management data comprises product business model data of a third party. The first computing system collects the product asset management data relating to the owner and sends the product asset management data to the second server computing system to allow the second server computing system to manage the product assets of the owner as specified by the third party via a second network.
1. A method, implemented by a first server computing system programmed to perform the following, comprising: receiving, by the first server computing system, a request from a second server computing system over a first network, the request being for product asset management data relating to an owner of product assets, the product asset management data comprising product business model data of a third party; collecting, by the first server computing system, the product asset management data relating to the owner; and sending, by the first server computing system, the product asset management data to the second server computing system to allow the second server computing system to manage the product assets of the owner as specified by the third party via a second network. 2. The method of claim 1, wherein the product asset management data further comprises at least one of: third party extensions and client configuration data. 3. The method of claim 1, further comprising: determining entitlements that are available to the owner; determining entitlements to send to the second server computing system; and marking the entitlements to be sent to the second server computing system as consumed. 4. The method of claim 1, wherein the second network is a private network. 5. 
A method, implemented by a first server computing system programmed to perform the following, comprising: receiving, by the first server computing system, product asset management data from a second server computing system over a first network, the product asset management data relating to an owner of product assets and comprising product business model data of a third party; storing, by the first server computing system, the product asset management data in a data store that is coupled to the first server computing system; and configuring, by the first server computing system, one or more entities in a second network to communicate with the first server computing system over the second network to manage the product assets of the owner as specified by the third party via the second network using the locally stored product asset management data, the second network being a private network. 6. The method of claim 5, wherein the product asset management data further comprises at least one of: third party extensions and client configuration data. 7. The method of claim 5, wherein configuring comprises: configuring an entity, using the client configuration data, to communicate with the first server computing system over the second network to consume a product. 8. The method of claim 5, further comprising: sending a request to the second computing system over the first network for updated product asset management data. 9. 
A server computing system comprising: a persistent storage unit to store product asset management data relating to an owner of product assets and comprising product business model data of a third party; and a processor coupled to the persistent storage unit to receive a request from a second server computing system over a first network, the request being for product asset management data relating to an owner of product assets, to collect the product asset management data relating to the owner, and to send the product asset management data to the second server computing system to allow the second server computing system to manage the product assets of the owner as specified by the third party via a second network. 10. The system of claim 9, wherein the product asset management data further comprises at least one of: third party extensions and client configuration data. 11. The system of claim 9, further comprising the processor: to determine entitlements that are available to the owner, to determine entitlements to send to the second server computing system, and to mark the entitlements to be sent to the second server computing system as consumed. 12. The system of claim 9, wherein the second network is a private network. 13. 
A first server computing system comprising: a persistent storage unit to store product asset management data from a second server computing system over a first network, the product asset management data relating to an owner of product assets and comprising product business model data of a third party; and a processor coupled to the persistent storage unit to receive the product asset management data from the second server computing system over the first network, and to configure one or more entities in a second network to communicate with the first server computing system to manage the product assets of the owner as specified by the third party via the second network using the product asset management data stored in the persistent storage unit, the second network being a private network. 14. The system of claim 13, wherein the product asset management data further comprises at least one of: third party extensions and client configuration data. 15. The system of claim 13, wherein to configure one or more entities comprises the processor: to configure an entity, using the client configuration data, to communicate with the first server computing system over the second network to consume a product. 16. The system of claim 14, further comprising the processor: to send a request to the second computing system over the first network for updated product asset management data. 17. 
A non-transitory computer-readable storage medium including instructions that, when executed by a computer system, cause the computer system to perform a set of operations comprising: receiving a request from a server computing system over a first network, the request being for product asset management data relating to an owner of product assets, the product asset management data comprising product business model data of a third party; collecting the product asset management data relating to the owner; and sending the product asset management data to the second server computing system to allow the second server computing system to manage the product assets of the owner as specified by the third party via a second network. 18. A non-transitory computer-readable storage medium including instructions that, when executed by a computer system, cause the computer system to perform a set of operations comprising: receiving product asset management data from a server computing system over a first network, the product asset management data relating to an owner of product assets and comprising product business model data of a third party; storing the product asset management data in a data store; and managing the product assets of the owner as specified by the third party via a second network using the locally stored product asset management data. 19. The non-transitory computer-readable storage medium of claim 18, wherein configuring comprises: configuring an entity, using the client configuration data, to communicate with the first server computing system over the second network to consume a product. 20. The non-transitory computer-readable storage medium of claim 18, further comprising: sending a request to the second computing system over the first network for updated product asset management data.
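The request/collect/send flow of claims 1-3 can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation; all names (`collect_asset_data`, `handle_request`, the dictionary keys) are assumptions introduced here, and the two "networks" are abstracted away.

```python
# Illustrative sketch of claims 1-3: the first server receives a request,
# collects product asset management data for the owner (including the
# third party's product business model data), marks the entitlements it
# sends as consumed, and returns the data for the second server to use.
# All identifiers are hypothetical, not taken from the patent.

def collect_asset_data(store, owner):
    """Collect product asset management data relating to an owner,
    including the third party's business model data and the optional
    extensions / client configuration data of claim 2."""
    return {
        "owner": owner,
        "business_model": store["business_model"],    # third-party data
        "extensions": store.get("extensions", []),    # claim 2 (optional)
        "client_config": store.get("client_config"),  # claim 2 (optional)
    }

def handle_request(store, request):
    """First server: receive a request over the first network, collect
    the data, and return it to the second server (claim 1)."""
    data = collect_asset_data(store, request["owner"])
    # Claim 3: determine the owner's entitlements to send and mark
    # the ones sent to the second server as consumed.
    sent = [e for e in store["entitlements"] if e["owner"] == request["owner"]]
    for e in sent:
        e["consumed"] = True
    data["entitlements"] = sent
    return data
```

A caller on the second server would invoke `handle_request` with the owner it is managing and then apply the returned business model data locally over the private second network.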
A first server computing system receives a request from a second server computing system over a first network. The request is for product asset management data relating to an owner of product assets. The product asset management data comprises product business model data of a third party. The first computing system collects the product asset management data relating to the owner and sends the product asset management data to the second server computing system to allow the second server computing system to manage the product assets of the owner as specified by the third party via a second network.
2,400
7,427
7,427
14,394,834
2,425
A method of video coding in respect of a 4:2:2 chroma subsampling format comprises dividing image data into transform units; in the case of a non-square transform unit, splitting the non-square transform unit into square blocks prior to applying a spatial frequency transform; and applying a spatial frequency transform to the square blocks to generate corresponding sets of spatial frequency coefficients.
1. A method of video coding, the method comprising: dividing image data into transform units; in the case of a non-square transform unit, splitting the non-square transform unit into square blocks prior to applying a spatial frequency transform; and applying a spatial frequency transform to the square blocks to generate corresponding sets of spatial frequency coefficients. 2. A method according to claim 1, comprising the step of: combining the sets of spatial frequency coefficients relating to the square blocks derived from a transform unit. 3. A method according to claim 1, in which the splitting step comprises applying a Haar transform. 4. A method according to claim 1, in which the non-square transform unit is rectangular, and the splitting step comprises selecting respective square blocks either side of a centre axis of the rectangular transform unit. 5. A method according to claim 1, in which the non-square transform unit is rectangular, and the splitting step comprises selecting alternate rows or columns of samples of the transform unit. 6. A method according to claim 1, in which the transform unit has twice as many samples in a vertical direction as in a horizontal direction. 7. A method according to claim 1, in which, in respect of transform units of an intra-prediction unit, performing the splitting step before generating predicted image data in respect of that prediction unit. 8. A method according to claim 1, comprising the steps of: enabling non-square quad-tree transforms; enabling asymmetric motion partitioning; and selecting transform unit block sizes to align with a resulting asymmetric prediction unit block layout. 9. A method according to claim 1, comprising the step of: associating intra-prediction mode angles for square prediction units with different intra-prediction mode angles for non-square prediction units. 10. 
A method according to claim 1, in respect of a 4:2:2 chroma subsampling format, the method comprising the steps of: interpolating a chroma intra-prediction unit having a height twice that of a corresponding 4:2:0 format prediction unit using the chroma filter employed for the corresponding 4:2:0 format prediction unit; and using only alternate vertical values of the interpolated chroma prediction unit. 11. A method according to claim 1, comprising the steps of: deriving a luma motion vector for a prediction unit; and independently deriving a chroma motion vector for that prediction unit. 12. A method according to claim 1, comprising the steps of: indicating that luma residual data is to be included in a bitstream losslessly; and independently indicating that chroma residual data is to be included in the bitstream losslessly. 13. A method according to claim 1, comprising the step of: providing a quantisation parameter association table between luma and chroma quantisation parameters, where the maximum chroma quantisation parameter value is 6 smaller than the maximum luma quantisation parameter. 14. A method according to claim 1, comprising the step of: defining one or more quantisation matrices as difference values with respect to quantisation matrices defined for a different chroma subsampling format. 15. A method according to claim 1, comprising the steps of: mapping an entropy encoding context variable from a luma context variable map for use with a chroma transform unit; and entropy encoding one or more coefficients of a chroma transform unit using the mapped context variable. 16. A method according to claim 1, comprising the steps of: enabling adaptive loop filtering; and categorising respective chroma samples into one of a plurality of categories each having a respective filter. 17. 
A method according to claim 1, comprising the steps of: enabling adaptive loop filtering; and providing at least a first adaptive loop filtering control flag for the chroma channels. 18. A method of video decoding, the method comprising: applying a spatial frequency transform to blocks of spatial frequency coefficients to generate two or more corresponding square blocks of samples; and combining the two or more square blocks of samples into a non-square transform unit. 19. A method according to claim 18, comprising the step of: splitting a block of spatial frequency coefficients into two or more sub-blocks; and applying the spatial frequency transform separately to each of the sub-blocks. 20. A method according to claim 18, in which the combining step comprises applying an inverse Haar transform. 21. A method according to claim 18, in which the non-square transform unit is rectangular, and the combining step comprises concatenating the respective square blocks either side of a centre axis of the rectangular transform unit. 22. A method according to claim 18, in which the non-square transform unit is rectangular, and the combining step comprises selecting alternate rows or columns of samples of the transform unit from alternate ones of the square blocks. 23. A method according to claim 18, in which the transform unit has twice as many samples in a vertical direction as in a horizontal direction. 24. A method according to claim 18, in which the video coding is in respect of a 4:2:2 chroma subsampling format. 25. Computer software which, when executed by a computer, causes the computer to implement the method of claim 18. 26. 
Video coding apparatus, the apparatus comprising: a divider configured to divide image data into transform units; a splitter operable in the case of a non-square transform unit and configured to split the non-square transform unit into square blocks prior to applying a spatial frequency transform; and a spatial frequency transformer configured to apply a spatial frequency transform to the square blocks to generate corresponding sets of spatial frequency coefficients. 27. Video decoding apparatus, the apparatus comprising: a spatial frequency transformer configured to apply a spatial frequency transform to blocks of spatial frequency coefficients to generate two or more corresponding square blocks of samples; and a combiner configured to combine the two or more square blocks of samples into a non-square transform unit. 28. Apparatus according to claim 26, operable in respect of a 4:2:2 chroma subsampling format. 29. Video capture, storage, display, transmission and/or reception apparatus comprising decoding apparatus according to claim 27.
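Claims 3, 6, and 20 describe splitting a tall rectangular (2N×N) transform unit into two square blocks via a Haar transform, and recombining them with the inverse Haar transform at the decoder. A minimal sketch of that split/combine pair, using an integer sum/difference Haar step along the vertical axis (the function names are assumptions, not from the patent):

```python
def haar_split(tu):
    """Split a 2N x N transform unit into two N x N square blocks
    using a vertical Haar step: a low-pass (sum) block from each pair
    of vertically adjacent rows, and a high-pass (difference) block
    (cf. claims 3 and 6). Each square block can then be given to a
    square spatial frequency transform."""
    n = len(tu) // 2
    low  = [[tu[2*i][j] + tu[2*i+1][j] for j in range(n)] for i in range(n)]
    high = [[tu[2*i][j] - tu[2*i+1][j] for j in range(n)] for i in range(n)]
    return low, high

def haar_combine(low, high):
    """Inverse Haar step: recombine the two N x N square blocks into
    the original 2N x N transform unit (decoder side, cf. claim 20).
    Exact for integers since sum and difference share parity."""
    n = len(low)
    tu = [[0] * n for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            s, d = low[i][j], high[i][j]
            tu[2*i][j] = (s + d) // 2
            tu[2*i + 1][j] = (s - d) // 2
    return tu
```

The split is lossless: `haar_combine(*haar_split(tu))` reproduces the original transform unit, which is what lets the encoder apply an ordinary square transform to each half.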
2,400
7,428
7,428
14,569,816
2,431
In a procedure for handling security settings of a mobile end device, the operating conditions of the end device are determined. Then minimum security requirements are established according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device. Next it is determined whether the security settings on the end device comply at least with the minimum security requirements. Access to applications is allowed or denied according to the security settings on the mobile end device. Should the end device not meet the minimum security requirements, the user may be prompted to change the security settings on the end device. The method may involve locating the end device and issuing a warning if the end device does not meet the minimum security settings.
1. A method for the handling of security settings of a mobile end device, comprising: (a) determining operating conditions of the end device; (b) establishing minimum security requirements according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device; (c) automatically determining whether security settings on the end device comply at least with the minimum security requirements; and (d) controlling applications according to the security settings on the end device, wherein implementation of the above steps (a) to (d) is controlled by at least one end device enabled agent. 2. The method according to claim 1, also comprising presenting at least one of the security settings and the minimum security requirements for a use of the end device in a recognizable way on a display of the end device. 3. The method according to claim 1, also comprising tagging applications in terms of restrictions, which result from the security settings, by changing an ideograph of the respective application on a display of the end device. 4. The method according to claim 1, also comprising: changing the security settings to prevent circumventing of the minimum security requirements in response to at least one user interaction: starting an application; terminating an application; and changing the operating conditions. 5. The method according to claim 1, wherein compliance with the minimum security requirements is ensured by at least one of: switching off the end device; terminating, disabling, or blocking applications that do not meet the minimum security requirements; terminating, deactivating, or blocking functions which would violate or infringe on the minimum security requirements; and ignoring user settings or user input which would violate the minimum security requirements. 6. 
A method for the control of security settings of a mobile end device comprising: (a) locating the end device; (b) contacting the end device; (c) determining if an agent for the implementation of the procedure for the handling of security settings is installed and activated on the end device; and (d) when an answer in regard to step (c) is no: issuing a warning, and where the steps (a) to (d) are executed by an instance external to the end device. 7. An improved mobile end device of the type having a memory containing programs and a processor connected to the memory which executes programs contained in the memory, wherein the improvement comprises the memory containing a program defining a method that is executable by the mobile end device, the method comprising: (a) determining operating conditions of the end device; (b) establishing minimum security requirements according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device; (c) automatically determining whether security settings on the end device comply at least with the minimum security requirements; and (d) controlling applications according to the security settings, wherein implementation of the above steps (a) to (d) is controlled by at least one end device enabled agent. 8. The mobile end device of claim 7, wherein the mobile end device is a mobile device selected from the group consisting of cell phones, personal digital assistants, tablets, computers, laptop computers and desktop computers. 9. 
A non-transitory computer readable medium having a program defining a method that is executable by a mobile end device, the method comprising: (a) determining operating conditions of the end device; (b) establishing minimum security requirements according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device; (c) automatically determining whether security settings on the end device comply at least with the minimum security requirements; and (d) controlling applications according to the security settings, wherein implementation of the above steps (a) to (d) is controlled by at least one end device enabled agent.
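Steps (a) through (d) of claim 1 amount to a lookup-and-compare loop on the device-side agent. The following is a minimal sketch under stated assumptions: the context keys (`location`), the policy table `MIN_REQUIREMENTS`, and the setting names are hypothetical examples, not defined by the patent.

```python
# Illustrative agent sketch for steps (a)-(d) of claim 1. The operating
# conditions, the requirements table, and the setting names below are
# assumptions made for the example.

MIN_REQUIREMENTS = {
    # (b) minimum security requirements per operating condition
    "public_wifi": {"screen_lock": True, "encryption": True},
    "home":        {"screen_lock": True, "encryption": False},
}

def agent_check(context, settings):
    """(a) Take the determined operating conditions, (b) look up the
    minimum security requirements for them, (c) automatically compare
    them with the device's current security settings, and (d) report
    whether applications may be allowed under these settings."""
    required = MIN_REQUIREMENTS.get(context["location"], {})
    violations = [name for name, needed in required.items()
                  if needed and not settings.get(name, False)]
    return {"compliant": not violations, "violations": violations}
```

An agent enforcing claim 5 would then act on a non-compliant result, for example by blocking the offending applications or prompting the user to change the settings, as the abstract describes.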
In a procedure for handling security settings of a mobile end device the operating conditions of the end device are determined Then minimum security requirements are established according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device. Next it is determined whether the security settings on the end device comply with at a least with the minimum security requirements. Access to applications is allowed or denied according to the security settings on the mobile end device. Should the end device not meet minimum security requirements the user may be prompted to change the security settings on the end device. The method may involve locating the end device and issuing of a warning in the end device does not meet minimum security settings.1. A method for the handling of security settings of a mobile end device comprising (a) determining operating conditions of the end device; (b) establishing minimum security requirements according to the operating conditions by evaluating of contextual data regarding the operating conditions of end device; (c) automatically determining whether security settings on the end device, comply with at a least the minimum security requirements; and (d) controlling applications according to the security settings on the end device, wherein implementation of the above steps (a) to (d) is controlled by at least one end device enabled agent. 2. The method according to claim 1, also comprising presenting at least one of the security settings and the minimum security requirements for a use of the end device in a recognizable way on a display of the end device. 3. The method according to claim 1, also comprising tagging applications in terms of restrictions, which result from the security settings by changing an ideograph of the respective application on a display of the end device. 4. 
The method according to claim 1, also comprising: changing the security settings to prevent circumventing of the minimum security requirements in response to at least one user interaction: starting an application; terminating an application; and changing the operating conditions. 5. The method according to claim 1, wherein compliance with the minimum security requirements is ensured by at least one of: switching off the end device; terminating, disabling, or blocking applications that do not meet the minimum security requirements; terminating, deactivating, or blocking functions which would violate or infringe on the minimum security requirements; and ignoring user settings or user input which would violate the minimum security requirements. 6. A method for the control of security settings of a mobile end device comprising: (a) locating the end device; (b) contacting the end device; (c) determining whether an agent for the implementation of the procedure for handling security settings is installed and activated on the end device; and (d) when the answer in regard to step (c) is no, issuing a warning, wherein steps (a) to (d) are executed by an instance external to the end device. 7. 
An improved mobile end device of the type having a memory containing programs and a processor connected to the memory which executes programs contained in the memory, wherein the improvement comprises the memory containing a program defining a method that is executable by the mobile end device, the method comprising: (a) determining operating conditions of the end device; (b) establishing minimum security requirements according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device; (c) automatically determining whether security settings on the end device comply with at least the minimum security requirements; and (d) controlling applications according to the security settings, wherein implementation of the above steps (a) to (d) is controlled by at least one end device enabled agent. 8. The mobile end device of claim 7, wherein the mobile end device is a device selected from the group consisting of cell phones, personal digital assistants, tablets, computers, laptop computers and desktop computers. 9. A non-transitory computer readable medium having a program defining a method that is executable by a mobile end device, the method comprising: (a) determining operating conditions of the end device; (b) establishing minimum security requirements according to the operating conditions by evaluating contextual data regarding the operating conditions of the end device; (c) automatically determining whether security settings on the end device comply with at least the minimum security requirements; and (d) controlling applications according to the security settings, wherein implementation of the above steps (a) to (d) is controlled by at least one end device enabled agent.
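The agent-controlled steps (a) to (d) recited in the claims above can be sketched in code. This is a minimal illustration only: the profile table and all names (`OPERATING_PROFILES`, `determine_operating_condition`, and so on) are assumptions for the sketch, not anything defined in the application.

```python
# Hypothetical sketch of the end-device agent: determine the operating
# condition, look up minimum requirements, check compliance, and allow
# or block applications accordingly.

# Minimum security requirements keyed by operating condition (assumed).
OPERATING_PROFILES = {
    "office":  {"pin_lock": True, "encryption": False},
    "roaming": {"pin_lock": True, "encryption": True},
}

def determine_operating_condition(context):
    """Step (a): derive the operating condition from contextual data."""
    return "roaming" if context.get("outside_home_network") else "office"

def minimum_requirements(condition):
    """Step (b): establish minimum requirements for that condition."""
    return OPERATING_PROFILES[condition]

def complies(settings, requirements):
    """Step (c): check current device settings against the minimum."""
    return all(settings.get(k, False) or not v
               for k, v in requirements.items())

def control_applications(apps, settings, requirements):
    """Step (d): allow or deny each application based on compliance."""
    allowed = complies(settings, requirements)
    return {app: allowed for app in apps}
```

A real agent would also cover claim 5's enforcement actions (terminating or blocking non-compliant applications); here a non-compliant device simply denies access to every application.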
2,400
7,429
7,429
14,233,508
2,482
A vehicular vision system includes a plurality of imaging sensors disposed at the vehicle and a display screen disposed in the vehicle. A processing system is operable to process captured image data and to combine and/or manipulate captured image data to provide a three-dimensional representation of the exterior scene for display at the display screen. The processing system is operable to process the captured image data in accordance with a curved surface model, and is operable to process the image data to provide the three-dimensional representation as if seen by a virtual observer from a first virtual viewing point exterior of the vehicle having a first viewing direction. The processing system is operable to adjust the curved surface model when displaying the three-dimensional representation from a second virtual viewing point exterior of the vehicle having a second viewing direction to provide enhanced display of the images.
1. A vision system for a vehicle, said vision system comprising: a plurality of imaging sensors disposed at the vehicle, each having a respective exterior field of view and each capturing respective image data; a display screen disposed in the vehicle and operable to display images for viewing by a driver of the vehicle, wherein said display screen is operable to display images derived from image data captured by said imaging sensors; a processing system operable to process image data captured by said imaging sensors and to at least one of combine and manipulate image data captured by said imaging sensors to provide a three-dimensional representation of the exterior scene for display at said display screen; wherein said processing system is operable to process said captured image data in accordance with a curved surface model; wherein said processing system is operable to process said captured image data to provide the three-dimensional representation as if seen by a virtual observer from a first virtual viewing point exterior of the vehicle having a first viewing direction; and wherein said processing system is operable to adjust said curved surface model when displaying the three-dimensional representation from a second virtual viewing point exterior of the vehicle having a second viewing direction to provide enhanced display of the images as if viewed from the second virtual viewing point. 2. 
The vision system of claim 1, wherein said first virtual viewing point is generally above the vehicle and said first viewing direction is substantially horizontal and said curved surface model has substantially curved surfaces around the vehicle, and wherein said second virtual viewing point is generally above the vehicle and said second viewing direction is vertically downward towards the top of the vehicle, and wherein said processing system is operable to adjust said curved surface model to have substantially planar surfaces for displaying the three-dimensional representation from said second virtual viewing point. 3. The vision system of claim 2, wherein a surface curvature of said curved surface model is adjusted by said processing system depending on a vertical (y) component of said second virtual viewing direction. 4. The vision system of claim 3, wherein said dependency of said vertical (y) component of said second virtual viewing direction is linear. 5. The vision system of claim 3, wherein said dependency of said vertical (y) component of said second virtual viewing direction is exponential. 6. The vision system of claim 2, wherein a curve characteristic of said surface curvature of said curved surface model is given by an at least partially continuous function. 7. The vision system of claim 6, wherein said at least partially continuous function has one substantially static area and at least one substantially exponential area. 8. The vision system of claim 6, wherein said at least partially continuous function has one substantially static area and at least one substantially cosine area. 9. The vision system of claim 6, wherein said at least partially continuous function has one substantially static area and at least one substantially polynomial area. 10. The vision system of claim 1, wherein said display comprises a display screen disposed in one of (i) an interior rearview mirror assembly of the vehicle and (ii) a head unit assembly of the vehicle. 11. 
The vision system of claim 1, wherein said display screen comprises a video display screen operable to display video images captured by a portion of said imaging sensors. 12. The vision system of claim 11, wherein said display screen comprises a video mirror display screen and wherein video information displayed by said display screen is viewable through a transflective mirror reflector of the mirror reflective element of an interior rearview mirror assembly of the vehicle. 13. A vision system for a vehicle, said vision system comprising: a plurality of imaging sensors disposed at the vehicle and having exterior fields of view, said imaging sensors capturing image data; a display screen for displaying images derived from said captured image data; a processing system that is operable to store raw image data in a main memory device to reduce an amount of data to be moved to the memory device; and wherein said processing system accesses and processes blocks of data and wherein said processing of said blocks of data comprises at least one of (a) de-mosaic processing of said image data to convert to RGB, YUV or YCrCb color space, (b) visibility enhancement processing and (c) merging of image data from two or more of said imaging sensors. 14. The vision system of claim 13, wherein said processing of said blocks of data comprises visibility enhancement processing and wherein said visibility enhancement processing comprises at least one of (a) gamma correction, (b) tone mapping, (c) color correction, (d) white balance correction or brightness, (e) contrast, (f) saturation correction and (g) exposure correction. 15. 
A vision system for a vehicle, said vision system comprising: a plurality of imaging sensors disposed at the vehicle and having exterior fields of view, said imaging sensors capturing image data; a display screen for displaying images derived from said captured image data; a processing system that is operable to transform image data to produce a view of the exterior area surrounding the vehicle, and wherein said processing system is operable to select a portion of said transformed image data for transmitting to said display screen for displaying images at said display screen; and wherein, responsive to an indication that information outside of the selected portion of said transformed image data is to be displayed on said display screen, said processing system selects another portion of said image data and transmits said another portion of said image data for displaying images at said display screen. 16. The vision system of claim 15, wherein said vision system provides for reduced bandwidth requirements by transmitting only data appropriate for providing the selected image display. 17. The vision system of claim 15, wherein a subset of captured image data is transmitted by each of said imaging sensors and processed by said processing system. 18. The vision system of claim 17, wherein said processing system is operable to set a resolution of subsets of captured image data and wherein at least one subset of captured image data is transmitted at a lower resolution as compared to others of said subsets of captured image data. 19. The vision system of claim 13, wherein said processing of said blocks of data comprises de-mosaic processing of said image data to convert to RGB, YUV or YCrCb color space. 20. The vision system of claim 13, wherein said processing of said blocks of data comprises merging of image data from two or more of said imaging sensors.
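Claims 2 through 5 above describe flattening the curved surface model as the virtual view turns downward, with the surface curvature depending linearly or exponentially on the vertical (y) component of the viewing direction. The following is a rough sketch of such a dependency, assuming y runs from 0 (horizontal view) to -1 (straight down) and that curvature vanishes for the top-down planar view; the normalization and the constant `k` are illustrative assumptions, not values from the application.

```python
import math

def curvature_linear(y, max_curvature=1.0):
    """Linear dependency (as in claim 4): curvature falls to zero as the
    virtual view turns straight down (y = -1), giving planar surfaces."""
    return max_curvature * (1.0 + y)

def curvature_exponential(y, max_curvature=1.0, k=3.0):
    """Exponential dependency (as in claim 5) on the same vertical
    component, normalized so y = 0 gives max_curvature and y = -1 gives 0."""
    return max_curvature * (1.0 - math.exp(-k * (1.0 + y))) / (1.0 - math.exp(-k))
```

Either function could drive the surface-model update each time the virtual viewing point moves, so the displayed three-dimensional representation transitions smoothly between the curved and planar models.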
2,400
7,430
7,430
12,907,420
2,485
A robot system that includes a robot face with a monitor, a camera, a speaker and a microphone. The robot face is connected to a stand that can be placed in a chair. The stand is configured so that the robot face is at a height that approximates the location of a person's head if they were sitting in the chair. The robot face is coupled to a remote station that can be operated by a user. The face includes a monitor that displays a video image of a user of the remote station. The stand may be coupled to the robot face with articulated joints that can be controlled by the remote station. By way of example, the user at the remote station can cause the face to pan and/or tilt.
1. A robot face system that can be placed on a chair, comprising: a robot face that includes a camera, a monitor, a microphone and a speaker; and a stand that is connected to said robot face and adapted to support said robot face on the chair. 2. The robot system of claim 1, wherein said robot face includes at least one joint that can move said camera and said monitor. 3. The robot system of claim 1, wherein said robot includes a tilt joint that can move said camera and said monitor about a tilt axis and a pan joint that can move said camera and said monitor about a pan axis. 4. The robot system of claim 1, wherein said stand has a length in a range between 20 and 40 inches. 5. The robot system of claim 1, wherein said stand includes two spaced-apart leg portions. 6. The robot system of claim 1, wherein said robot includes an input port. 7. The robot system of claim 6, wherein said input port is located on an arm adapted to be placed on a table. 8. The robot system of claim 1, wherein said robot includes a video output port. 9. A robot system that includes a robot face that can be placed on a chair, comprising: a robot face that includes a camera, a monitor, a microphone and a speaker; a stand that is connected to said robot face and adapted to support said robot face on the chair; and a remote station that includes a camera coupled to said robot monitor, a monitor coupled to said robot camera, a microphone coupled to said robot speaker and a speaker coupled to said robot microphone. 10. The robot system of claim 9, wherein said robot face includes at least one joint that can move said camera and said monitor and be controlled by said remote station. 11. The robot system of claim 9, wherein said robot includes a tilt joint that can move said camera and said monitor about a tilt axis and a pan joint that can move said camera and said monitor about a pan axis, wherein said tilt and pan joints can be controlled by said remote station. 12. 
The robot system of claim 9, wherein said stand has a length in a range between 20 and 40 inches. 13. The robot system of claim 9, wherein said stand includes two spaced-apart leg portions. 14. The robot system of claim 9, wherein said remote station is connected to said robot through a WiFi link and through a cellular link. 15. The robot system of claim 9, wherein said robot includes an input port. 16. The robot system of claim 15, wherein said input port is located on an arm adapted to be placed on a table. 17. The robot system of claim 9, wherein said robot includes a video output port. 18. A method for conducting a teleconference, comprising: placing a robot face onto a chair, wherein the robot face includes a camera, a monitor, a microphone and a speaker, and is coupled to a remote station that includes a camera coupled to the robot monitor, a monitor coupled to the robot camera, a microphone coupled to the robot speaker and a speaker coupled to the robot microphone; and communicating between the robot face and the remote station. 19. The method of claim 18, further comprising moving the robot face in at least one degree of freedom with commands from the remote station. 20. The method of claim 18, further comprising moving the robot face in at least two degrees of freedom with commands from the remote station. 21. The method of claim 18, further comprising transmitting pre-existing information from the remote station to the robot and displaying that information on a projector connected to a video output port disposed on said robot. 22. A method of sharing a physical whiteboard remotely, comprising: transmitting an image of a physical whiteboard through a camera; displaying the image on a remote station; telestrating the image by a remote user; and projecting telestration lines onto the physical whiteboard by a projector proximate to the physical whiteboard. 23. 
The method of claim 22, further comprising aligning the shared physical whiteboard, whereby said projector projects registration marks on said whiteboard surface, said whiteboard and registration marks are visible on the remote station via transmission from a camera adjacent to the whiteboard, and the user of said remote station selects the location of said registration marks as seen on the remote station image. 24. The method of claim 22, further comprising aligning said shared physical whiteboard, whereby said projector projects registration marks on said whiteboard surface, said whiteboard and registration marks are captured by a camera adjacent to the whiteboard, and said registration marks are automatically detected.
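Claims 22 through 24 above align the remote station's telestration with the physical whiteboard using projected registration marks. One plausible sketch of the alignment math, assuming three registration marks and an affine model (a real system might use four marks and a full homography); every name here is hypothetical:

```python
# Sketch: solve an affine transform from three registration-mark
# correspondences (remote-station image coordinates -> whiteboard
# projector coordinates), then map telestration points through it.

def _det3(m):
    """3x3 determinant by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve_affine(src, dst):
    """Solve x' = a*x + b*y + tx, y' = c*x + d*y + ty from three
    (x, y) point pairs using Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = src
    det = _det3([[x0, y0, 1], [x1, y1, 1], [x2, y2, 1]])

    def solve_row(col):
        u0, u1, u2 = (p[col] for p in dst)
        a = _det3([[u0, y0, 1], [u1, y1, 1], [u2, y2, 1]]) / det
        b = _det3([[x0, u0, 1], [x1, u1, 1], [x2, u2, 1]]) / det
        t = _det3([[x0, y0, u0], [x1, y1, u1], [x2, y2, u2]]) / det
        return a, b, t

    return solve_row(0), solve_row(1)

def map_point(transform, pt):
    """Carry a remote telestration point onto whiteboard coordinates."""
    (a, b, tx), (c, d, ty) = transform
    x, y = pt
    return a * x + b * y + tx, c * x + d * y + ty
```

The registration marks' positions on the remote image come either from user clicks (claim 23) or automatic detection (claim 24); in both variants the recovered transform is what lets the projector draw the telestration lines at the right spot on the physical whiteboard.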
2,400
7,431
7,431
14,502,617
2,419
An approach for implementing a network of cameras for providing one or more services in a neighborhood is provided. The approach includes creating a network of cameras, wherein the cameras are associated with one or more users participating in the network. The approach also includes stitching one or more images, one or more videos, or a combination thereof captured by the cameras to generate a composite image, a composite video, or a combination thereof. The approach also includes providing access to the composite image, the composite video, the network, or a combination thereof to the one or more users.
1. A method comprising: creating a network of a plurality of cameras, wherein the plurality of cameras are associated with a plurality of users participating in the network; stitching one or more images, one or more videos, or a combination thereof captured by the plurality of cameras to generate a composite image, a composite video, or a combination thereof; and providing access to the composite image, the composite video, the network, or a combination thereof to the plurality of users. 2. A method of claim 1, further comprising: determining whether to include one or more candidate cameras in the plurality of cameras based on orientation information, field of view information, location information, or a combination thereof associated with the one or more candidate cameras. 3. A method of claim 2, further comprising: determining a sequence of the one or more images, the one or more videos, or a combination thereof for generating the composite image, the composite video, or a combination thereof based on the orientation information, the field of view information, the location information, or a combination thereof. 4. A method of claim 2, further comprising: determining the orientation information, the field of view information, the location information, or a combination thereof based on image processing information, location sensor information, micro-location information, or a combination thereof. 5. A method of claim 1, further comprising: initiating a tracking of an object by processing the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof to determine a presence, a movement, or a combination thereof of the object; and generating an alert message based on the tracking. 6. 
A method of claim 5, further comprising: receiving a request to initiate the tracking, wherein the request specifies object identifying information, and wherein the processing of the composite image, the composite video, or a combination thereof is further based on the object identifying information. 7. A method of claim 5, further comprising: granting access to the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof to an entity other than the plurality of users based on the tracking of the object, a detected event associated with the object, or a combination thereof. 8. A method of claim 1, further comprising: initiating an activation or a deactivation of at least one of the plurality of cameras based on presence information associated with the one or more users. 9. A method of claim 1, further comprising: initiating a storage of the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof independently or redundantly across one or more data storage repositories associated with the plurality of users. 10. An apparatus comprising: a processor; and a memory including computer program code for one or more programs, the memory and the computer program code configured to, with the processor, cause the apparatus to perform at least the following: create a network of a plurality of cameras, wherein the plurality of cameras are associated with a plurality of users participating in the network; stitch one or more images, one or more videos, or a combination thereof captured by the plurality of cameras to generate a composite image, a composite video, or a combination thereof; and provide access to the composite image, the composite video, the network, or a combination thereof to the plurality of users. 11. 
An apparatus of claim 10, wherein the apparatus is further caused to: determine whether to include one or more candidate cameras in the plurality of cameras based on orientation information, field of view information, location information, or a combination thereof associated with the one or more candidate cameras. 12. An apparatus of claim 11, wherein the apparatus is further caused to: determine a sequence of the one or more images, the one or more videos, or a combination thereof for generating the composite image, the composite video, or a combination thereof based on the orientation information, the field of view information, the location information, or a combination thereof. 13. An apparatus of claim 11, wherein the apparatus is further caused to: determine the orientation information, the field of view information, the location information, or a combination thereof based on image processing information, location sensor information, micro-location information, or a combination thereof. 14. An apparatus of claim 10, wherein the apparatus is further caused to: initiate a tracking of an object by processing the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof to determine a presence, a movement, or a combination thereof of the object; and generate an alert message based on the tracking. 15. An apparatus of claim 14, wherein the apparatus is further caused to: receive a request to initiate the tracking, wherein the request specifies object identifying information, and wherein the processing of the composite image, the composite video, or a combination thereof is further based on the object identifying information. 16. 
An apparatus of claim 14, wherein the apparatus is further caused to: grant access to the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof to an entity other than the plurality of users based on the tracking of the object, a detected event associated with the object, or a combination thereof. 17. A system comprising: a networked camera platform configured to: create a network of a plurality of cameras, wherein the plurality of cameras are associated with a plurality of users participating in the network; stitch one or more images, one or more videos, or a combination thereof captured by the plurality of cameras to generate a composite image, a composite video, or a combination thereof; and provide access to the composite image, the composite video, the network, or a combination thereof to the plurality of users. 18. A system of claim 17, wherein the networked camera platform is further configured to: determine whether to include one or more candidate cameras in the plurality of cameras based on orientation information, field of view information, location information, or a combination thereof associated with the one or more candidate cameras. 19. A system of claim 18, wherein the networked camera platform is further configured to: determine a sequence of the one or more images, the one or more videos, or a combination thereof for generating the composite image, the composite video, or a combination thereof based on the orientation information, the field of view information, the location information, or a combination thereof. 20. A system of claim 18, wherein the networked camera platform is further configured to: determine the orientation information, the field of view information, the location information, or a combination thereof based on image processing information, location sensor information, micro-location information, or a combination thereof. 21.
A system of claim 17, wherein the networked camera platform is further configured to: initiate a tracking of an object by processing the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof to determine a presence, a movement, or a combination thereof of the object; and generate an alert message based on the tracking. 22. A system of claim 21, wherein the networked camera platform is further configured to: receive a request to initiate the tracking, wherein the request specifies object identifying information, and wherein the processing of the composite image, the composite video, or a combination thereof is further based on the object identifying information. 23. A system of claim 21, wherein the networked camera platform is further configured to: grant access to the composite image, the composite video, the one or more images, the one or more videos, or a combination thereof to an entity other than the plurality of users based on the tracking of the object, a detected event associated with the object, or a combination thereof.
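The candidate-selection and sequencing logic of claims 2–3 (admit a camera based on its location/field-of-view information, then order the admitted cameras' frames before compositing) can be sketched roughly as follows. This is an illustrative assumption, not the patent's implementation: a 1-D street position stands in for real location data, a simple position sort stands in for orientation-aware sequencing, and row concatenation stands in for actual image stitching.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    owner: str
    bearing_deg: float   # orientation information (unused in this toy sort)
    fov_deg: float       # field-of-view information
    position_m: float    # illustrative 1-D street position

def include_candidate(cam: Camera, region: tuple[float, float]) -> bool:
    """Claim 2 sketch: admit a candidate camera only if its location
    falls inside the neighborhood region being covered."""
    lo, hi = region
    return lo <= cam.position_m <= hi

def sequence_cameras(cams: list[Camera]) -> list[Camera]:
    """Claim 3 sketch: order admitted cameras by position so their
    frames can be composited left-to-right."""
    return sorted(cams, key=lambda c: c.position_m)

def composite(frames: list[list[int]]) -> list[int]:
    """Stand-in for image stitching: concatenate per-camera pixel rows."""
    out: list[int] = []
    for f in frames:
        out.extend(f)
    return out

cams = [
    Camera("bob", 90.0, 60.0, 30.0),
    Camera("ann", 90.0, 60.0, 10.0),
    Camera("eve", 90.0, 60.0, 500.0),   # outside the covered region
]
admitted = [c for c in cams if include_candidate(c, (0.0, 100.0))]
ordered = sequence_cameras(admitted)
print([c.owner for c in ordered])   # ann before bob; eve excluded
```

A real system would replace `composite` with feature-based stitching and derive `position_m` from the location-sensor or micro-location information named in claim 4.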
An approach for implementing a network of cameras for providing one or more services in a neighborhood is provided. The approach includes creating a network of cameras, wherein the cameras are associated with one or more users participating in the network. The approach also includes stitching one or more images, one or more videos, or a combination thereof captured by the cameras to generate a composite image, a composite video, or a combination thereof. The approach also includes providing access to the composite image, the composite video, the network, or a combination thereof to the one or more users.
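The tracking service of claim 5 (detect presence or movement of an object in the composite frames, then generate an alert) might look like the sketch below. The pixel-difference threshold is a crude, assumed stand-in for real object detection, and the network identifier in the alert is hypothetical.

```python
def detect_presence(prev_frame: list[int], cur_frame: list[int],
                    threshold: int = 10) -> bool:
    """Claim 5 sketch: flag movement when enough pixels change between
    successive composite frames (a stand-in for object detection)."""
    changed = sum(1 for a, b in zip(prev_frame, cur_frame) if abs(a - b) > 5)
    return changed >= threshold

def make_alert(camera_net_id: str, obj: str = "unknown object") -> str:
    """Generate the alert message of claim 5."""
    return f"[{camera_net_id}] movement detected: {obj}"

prev = [0] * 100
cur = [0] * 80 + [255] * 20      # 20 pixels changed
if detect_presence(prev, cur):
    print(make_alert("maple-street-net"))
```

The same boolean could gate the claim 7 behavior, granting an outside entity access to the footage only while a tracked event is active.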
TechCenter: 2,400
Unnamed: 0: 7,432
level_0: 7,432
ApplicationNumber: 12,730,124
ArtUnit: 2,437
Disclosed is a method for user identity authentication for a peer device joining a peer-to-peer overlay network. In the method, a credential server of the overlay network receives a registered user identity from a joining peer device. The credential server verifies the registered user identity with an identity provider. Upon receiving, at the credential server, successful verification of the registered user identity from the identity provider, the credential server issues to the joining peer device a signed certificate for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server.
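The enrollment flow in this abstract can be sketched end to end. Everything below is an assumption for illustration: the identity-provider round-trip (OpenID in the patent) is mocked by a dictionary of known identities, and an HMAC over the certificate fields stands in for the server's private-key signature so the sketch stays standard-library only; a real deployment would use asymmetric signing (e.g., RSA or Ed25519 via a crypto library).

```python
import hashlib
import hmac
import json

SERVER_SIGNING_KEY = b"credential-server-private-key"   # stand-in for a real private key
IDENTITY_PROVIDER = {"alice@example.com": True}          # mocked identity-provider lookup

def verify_with_idp(user_id: str) -> bool:
    """Stand-in for the credential server's OpenID verification call."""
    return IDENTITY_PROVIDER.get(user_id, False)

def issue_certificate(user_id: str, peer_pubkey: str, node_id: str) -> dict:
    """Sketch of certificate issuance: the verified registered user
    identity, the joining peer's public key, and a server-assigned node
    identity, bound together by the server's signature."""
    if not verify_with_idp(user_id):
        raise PermissionError(f"identity provider rejected {user_id}")
    payload = json.dumps(
        {"user": user_id, "peer_pubkey": peer_pubkey, "node_id": node_id},
        sort_keys=True,
    ).encode()
    sig = hmac.new(SERVER_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

cert = issue_certificate("alice@example.com", "peer-pub-key-123", "node-42")
print(sorted(cert))   # payload and signature fields
```

An unregistered identity is refused before any certificate is produced, matching the abstract's "upon receiving successful verification" condition.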
1. A method for user identity authentication of a peer device joining a peer-to-peer overlay network, comprising: a credential server of the overlay network receiving a registered user identity from a joining peer device; the credential server verifying the registered user identity with an identity provider; and upon receiving, at the credential server, successful verification of the registered user identity from the identity provider, the credential server issuing to the joining peer device a signed certificate for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server. 2. A method for user identity authentication as defined in claim 1, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 3. A method for user identity authentication as defined in claim 1, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 4. A method for user identity authentication as defined in claim 1, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 5. A method for user identity authentication as defined in claim 4, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 6. A method for user identity authentication as defined in claim 1, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 7. 
A method for user identity authentication as defined in claim 1, wherein the registered user identity of the joining peer device is a globally unique identifier. 8. A method for user identity authentication as defined in claim 7, wherein the registered user identity is registered with a third-party identity provider. 9. A method for user identity authentication as defined in claim 7, wherein the registered user identity is an email address. 10. A credential server having user identity authentication of a peer device joining a peer-to-peer overlay network, the credential server comprising: means for receiving a registered user identity from a joining peer device; means for verifying the registered user identity with an identity provider; and means for issuing to the joining peer device a signed certificate, upon receiving successful verification of the registered user identity from the identity provider, for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server. 11. A credential server as defined in claim 10, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 12. A credential server as defined in claim 10, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 13. A credential server as defined in claim 10, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 14. A credential server as defined in claim 13, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 15.
A credential server as defined in claim 10, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 16. A credential server as defined in claim 10, wherein the registered user identity of the joining peer device is a globally unique identifier. 17. A credential server as defined in claim 16, wherein the registered user identity is registered with a third-party identity provider. 18. A credential server as defined in claim 16, wherein the registered user identity is an email address. 19. A credential server having user identity authentication of a peer device joining a peer-to-peer overlay network, the credential server comprising: a processor configured to: receive a registered user identity from a joining peer device; verify the registered user identity with an identity provider; and issue to the joining peer device a signed certificate, upon receiving successful verification of the registered user identity from the identity provider, for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server. 20. A credential server as defined in claim 19, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 21. A credential server as defined in claim 19, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 22. A credential server as defined in claim 19, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 23. 
A credential server as defined in claim 22, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 24. A credential server as defined in claim 19, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 25. A credential server as defined in claim 19, wherein the registered user identity of the joining peer device is a globally unique identifier. 26. A credential server as defined in claim 25, wherein the registered user identity is registered with a third-party identity provider. 27. A credential server as defined in claim 25, wherein the registered user identity is an email address. 28. A computer program product, comprising: computer readable medium, comprising: code for causing a computer to receive a registered user identity from a joining peer device; code for causing a computer to verify the registered user identity with an identity provider; and code for causing a computer to issue to the joining peer device a signed certificate, upon receiving successful verification of the registered user identity from the identity provider, for use by an authenticated peer device in an overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of a credential server. 29. A computer program product as defined in claim 28, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 30. A computer program product as defined in claim 28, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 31. 
A computer program product as defined in claim 28, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 32. A computer program product as defined in claim 31, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 33. A computer program product as defined in claim 28, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 34. A computer program product as defined in claim 28, wherein the registered user identity of the joining peer device is a globally unique identifier. 35. A computer program product as defined in claim 34, wherein the registered user identity is registered with a third-party identity provider. 36. A computer program product as defined in claim 34, wherein the registered user identity is an email address. 37. A method for user identity authentication for a peer device joining a peer-to-peer overlay network, comprising: a joining peer device providing a registered user identity to a credential server, wherein the credential server provides a public key to each authenticated peer device in the overlay network that allows each authenticated peer device to verify messages from the credential server; the credential server verifying the registered user identity with an identity provider; and upon receiving, at the credential server, successful verification of the registered user identity from the identity provider, the credential server issuing to the joining peer device a certificate for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the certificate is signed by a private key of the credential server. 38. 
A method for user identity authentication as defined in claim 37, wherein the credential server uses an OpenID protocol to verify the registered user identity with the identity provider. 39. A method for user identity authentication as defined in claim 37, wherein the signed certificate comprises the verified registered user identity, the public key of the joining peer device, and the public key of the credential server. 40. A method for user identity authentication as defined in claim 39, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 41. A method for user identity authentication as defined in claim 37, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 42. An apparatus having user identity authentication for joining a peer-to-peer overlay network, comprising: means for providing a registered user identity of a joining peer device to a credential server, wherein the credential server provides a public key to each authenticated peer device in the overlay network that allows each authenticated peer device to verify messages from the credential server; and means for receiving a certificate from the credential server upon successful verification of the registered user identity with an identity provider, wherein the certificate is for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, and wherein the certificate is signed by a private key of the credential server. 43. An apparatus having user identity authentication as defined in claim 42, wherein the signed certificate comprises the verified registered user identity, the public key of the joining peer device, and the public key of the credential server. 44. 
An apparatus having user identity authentication as defined in claim 43, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 45. An apparatus having user identity authentication as defined in claim 42, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 46. An apparatus having user identity authentication as defined in claim 42, wherein the apparatus comprises a watch, a headset, or a sensing device. 47. An apparatus having user identity authentication for joining a peer-to-peer overlay network, comprising: a processor configured to: provide a registered user identity of a joining peer device to a credential server, wherein the credential server provides a public key to each authenticated peer device in a peer-to-peer overlay network that allows each authenticated peer device to verify messages from the credential server; and receive a certificate from the credential server upon successful verification of the registered user identity with an identity provider, wherein the certificate is for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, and wherein the certificate is signed by a private key of the credential server. 48. An apparatus having user identity authentication as defined in claim 47, wherein the signed certificate comprises the verified registered user identity, the public key of the joining peer device, and the public key of the credential server. 49. An apparatus having user identity authentication as defined in claim 48, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 50. 
An apparatus having user identity authentication as defined in claim 47, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 51. A computer program product, comprising: computer readable medium, comprising: code for causing a computer to provide a registered user identity of a joining peer device to a credential server, wherein the credential server provides a public key to each authenticated peer device in a peer-to-peer overlay network that allows each authenticated peer device to verify messages from the credential server; and code for causing a computer to receive a certificate from the credential server upon successful verification of the registered user identity with an identity provider, wherein the certificate is for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, and wherein the certificate is signed by a private key of the credential server. 52. A computer program product as defined in claim 51, wherein the signed certificate comprises the verified registered user identity and the public key of the credential server. 53. A computer program product as defined in claim 51, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations.
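The peer-side check described in claims 2, 11, and 20 (an authenticated peer verifies that the certificate came from the credential server before trusting the joining peer) can be sketched as below. One loud caveat: with a real asymmetric scheme the peer would hold only the server's public key; the HMAC recomputation here requires the same shared key and is only a standard-library stand-in for public-key signature verification.

```python
import hashlib
import hmac
import json

SERVER_SIGNING_KEY = b"credential-server-private-key"   # real peers would hold only the public half

def verify_certificate(cert: dict) -> bool:
    """Sketch of the peer-side check: recompute the signature over the
    certificate payload and compare in constant time.  In the actual
    scheme this would be a public-key signature verification."""
    expected = hmac.new(SERVER_SIGNING_KEY, cert["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

payload = json.dumps({"user": "alice@example.com", "node_id": "node-42"},
                     sort_keys=True)
good = {"payload": payload,
        "signature": hmac.new(SERVER_SIGNING_KEY, payload.encode(),
                              hashlib.sha256).hexdigest()}
bad = dict(good, signature="0" * 64)
print(verify_certificate(good), verify_certificate(bad))   # True False
```

This local check is what lets peers authenticate one another even when, as claim 6 notes, a peer cannot reach the identity provider directly.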
Disclosed is a method for user identity authentication for a peer device joining a peer-to-peer overlay network. In the method, a credential server of the overlay network receives a registered user identity from a joining peer device. The credential server verifies the registered user identity with an identity provider. Upon receiving, at the credential server, successful verification of the registered user identity from the identity provider, the credential server issues to the joining peer device a signed certificate for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server.1. A method for user identity authentication of a peer device joining a peer-to-peer overlay network, comprising: a credential server of the overlay network receiving a registered user identity from a joining peer device; the credential server verifying the registered user identity with an identity provider; and upon receiving, at the credential server, successful verification of the registered user identity from the identity provider, the credential server issuing to the joining peer device a signed certificate for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server. 2. A method for user identity authentication as defined in claim 1, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 3. A method for user identity authentication as defined in claim 1, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 4. 
A method for user identity authentication as defined in claim 1, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 5. A method for user identity authentication as defined in claim 4, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 6. A method for user identity authentication as defined in claim 1, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 7. A method for user identity authentication as defined in claim 1, wherein the registered user identity of the joining peer device is a globally unique identifier. 8. A method for user identity authentication as defined in claim 7, wherein the registered user identity is registered with a third-party identity provider. 9. A method for user identity authentication as defined in claim 7, wherein the registered user identity is an email address. 10. A credential server having user identity authentication of a peer device joining a peer-to-peer overlay network, the credential server comprising: means for receiving a registered user identity from a joining peer device; means for verifying the registered user identity with an identity provider; and means for issuing to the joining peer device a signed certificate, upon receiving successful verification of the registered user identity from the identity provider, for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server. 11. 
A credential server as defined in claim 10, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 12. A credential server as defined in claim 10, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 13. A credential server as defined in claim 10, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 14. A credential server as defined in claim 13, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 15. A credential server as defined in claim 10, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 16. A credential server as defined in claim 10, wherein the registered user identity of the joining peer device is a globally unique identifier. 17. A credential server as defined in claim 16, wherein the registered user identity is registered with a third-party identity provider. 18. A credential server as defined in claim 16, wherein the registered user identity is an email address. 19. 
A credential server having user identity authentication of a peer device joining a peer-to-peer overlay network, the credential server comprising: a processor configured to: receive a registered user identity from a joining peer device; verify the registered user identity with an identity provider; and issue to the joining peer device a signed certificate, upon receiving successful verification of the registered user identity from the identity provider, for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of the credential server. 20. A credential server as defined in claim 19, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 21. A credential server as defined in claim 19, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 22. A credential server as defined in claim 19, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 23. A credential server as defined in claim 22, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 24. A credential server as defined in claim 19, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 25. A credential server as defined in claim 19, wherein the registered user identity of the joining peer device is a globally unique identifier. 26. 
A credential server as defined in claim 25, wherein the registered user identity is registered with a third-party identity provider. 27. A credential server as defined in claim 25, wherein the registered user identity is an email address. 28. A computer program product, comprising: computer readable medium, comprising: code for causing a computer to receive a registered user identity from a joining peer device; code for causing a computer to verify the registered user identity with an identity provider; and code for causing a computer to issue to the joining peer device a signed certificate, upon receiving successful verification of the registered user identity from the identity provider, for use by an authenticated peer device in an overlay network to authenticate the registered user identity of the joining peer device, wherein the signed certificate is signed by a private key of a credential server. 29. A computer program product as defined in claim 28, wherein each authenticated peer device in the overlay network has a public key of the credential server that allows each authenticated peer device to verify that the source of the signed certificate for the joining peer device is the credential server. 30. A computer program product as defined in claim 28, wherein the credential server uses an OpenID protocol for verifying the registered user identity with the identity provider. 31. A computer program product as defined in claim 28, wherein the signed certificate comprises the verified registered user identity, a public key of the joining peer device, and a public key of the credential server. 32. A computer program product as defined in claim 31, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 33. 
A computer program product as defined in claim 28, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 34. A computer program product as defined in claim 28, wherein the registered user identity of the joining peer device is a globally unique identifier. 35. A computer program product as defined in claim 34, wherein the registered user identity is registered with a third-party identity provider. 36. A computer program product as defined in claim 34, wherein the registered user identity is an email address. 37. A method for user identity authentication for a peer device joining a peer-to-peer overlay network, comprising: a joining peer device providing a registered user identity to a credential server, wherein the credential server provides a public key to each authenticated peer device in the overlay network that allows each authenticated peer device to verify messages from the credential server; the credential server verifying the registered user identity with an identity provider; and upon receiving, at the credential server, successful verification of the registered user identity from the identity provider, the credential server issuing to the joining peer device a certificate for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, wherein the certificate is signed by a private key of the credential server. 38. A method for user identity authentication as defined in claim 37, wherein the credential server uses an OpenID protocol to verify the registered user identity with the identity provider. 39. A method for user identity authentication as defined in claim 37, wherein the signed certificate comprises the verified registered user identity, the public key of the joining peer device, and the public key of the credential server. 40. 
A method for user identity authentication as defined in claim 39, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 41. A method for user identity authentication as defined in claim 37, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 42. An apparatus having user identity authentication for joining a peer-to-peer overlay network, comprising: means for providing a registered user identity of a joining peer device to a credential server, wherein the credential server provides a public key to each authenticated peer device in the overlay network that allows each authenticated peer device to verify messages from the credential server; and means for receiving a certificate from the credential server upon successful verification of the registered user identity with an identity provider, wherein the certificate is for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, and wherein the certificate is signed by a private key of the credential server. 43. An apparatus having user identity authentication as defined in claim 42, wherein the signed certificate comprises the verified registered user identity, the public key of the joining peer device, and the public key of the credential server. 44. An apparatus having user identity authentication as defined in claim 43, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 45. An apparatus having user identity authentication as defined in claim 42, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 46. 
An apparatus having user identity authentication as defined in claim 42, wherein the apparatus comprises a watch, a headset, or a sensing device. 47. An apparatus having user identity authentication for joining a peer-to-peer overlay network, comprising: a processor configured to: provide a registered user identity of a joining peer device to a credential server, wherein the credential server provides a public key to each authenticated peer device in a peer-to-peer overlay network that allows each authenticated peer device to verify messages from the credential server; and receive a certificate from the credential server upon successful verification of the registered user identity with an identity provider, wherein the certificate is for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, and wherein the certificate is signed by a private key of the credential server. 48. An apparatus having user identity authentication as defined in claim 47, wherein the signed certificate comprises the verified registered user identity, the public key of the joining peer device, and the public key of the credential server. 49. An apparatus having user identity authentication as defined in claim 48, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations. 50. An apparatus having user identity authentication as defined in claim 47, wherein at least one authenticated peer device in the overlay network is unable to establish a connection with the identity provider for verifying a registered user identity. 51. 
A computer program product, comprising: computer readable medium, comprising: code for causing a computer to provide a registered user identity of a joining peer device to a credential server, wherein the credential server provides a public key to each authenticated peer device in a peer-to-peer overlay network that allows each authenticated peer device to verify messages from the credential server; and code for causing a computer to receive a certificate from the credential server upon successful verification of the registered user identity with an identity provider, wherein the certificate is for use by an authenticated peer device in the overlay network to authenticate the registered user identity of the joining peer device, and wherein the certificate is signed by a private key of the credential server. 52. A computer program product as defined in claim 51, wherein the signed certificate comprises the verified registered user identity and the public key of the credential server. 53. A computer program product as defined in claim 51, wherein the signed certificate further comprises a node identity assigned by the credential server for network operations.
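The certificate-issuance flow claimed above can be sketched in a few lines. This is a minimal illustrative sketch, not the patented implementation: the identity-provider check is stubbed (a real server would run an OpenID flow against a third party), and the HMAC "signature" is a symmetric stand-in for the asymmetric private-key signature the claims describe. All class and field names here are hypothetical.

```python
import hashlib
import hmac
import json

class CredentialServer:
    """Toy credential server: verifies a registered user identity with a
    (stubbed) identity provider and, on success, issues a signed certificate
    containing the identity, the peer's public key, and an assigned node id."""

    def __init__(self, private_key: bytes, registered_users: set):
        self._private_key = private_key             # signing secret (stand-in for a real private key)
        self._identity_provider = registered_users  # stand-in for an OpenID identity provider
        self._next_node_id = 0

    def _verify_with_identity_provider(self, user_identity: str) -> bool:
        # A real server would perform an OpenID verification round-trip here.
        return user_identity in self._identity_provider

    def issue_certificate(self, user_identity: str, peer_public_key: str):
        """Return a signed certificate for the joining peer, or None on failure."""
        if not self._verify_with_identity_provider(user_identity):
            return None
        self._next_node_id += 1
        cert = {
            "user_identity": user_identity,      # the verified registered identity
            "peer_public_key": peer_public_key,  # public key of the joining peer
            "node_id": self._next_node_id,       # assigned for network operations
        }
        payload = json.dumps(cert, sort_keys=True).encode()
        cert["signature"] = hmac.new(self._private_key, payload, hashlib.sha256).hexdigest()
        return cert

    def verify_certificate(self, cert: dict) -> bool:
        """What an authenticated peer does with the server's key: recompute
        the signature over the certificate body and compare."""
        body = {k: v for k, v in cert.items() if k != "signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(self._private_key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cert["signature"])

server = CredentialServer(b"server-secret", {"alice@example.com"})
cert = server.issue_certificate("alice@example.com", "alice-pubkey")
assert cert is not None and server.verify_certificate(cert)
# An identity the provider cannot verify yields no certificate.
assert server.issue_certificate("mallory@example.com", "m-pubkey") is None
```

Because peers hold only the server's verification key in the claimed design, any peer can authenticate a joining peer's identity offline from the certificate alone, without contacting the identity provider.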
2,400
7,433
7,433
14,732,393
2,483
Coding techniques for image data may cause a still image to be converted to a “phantom” video sequence, which is coded by motion compensated prediction techniques. Thus, coded video data obtained from the coding operation may include temporal prediction references between frames of the video sequence. Metadata may be generated that identifies allocations of content from the still image to the frames of the video sequence. The coded data and the metadata may be transmitted to another device, whereupon the coded data may be decoded by motion compensated prediction techniques and converted back to still image data. Other techniques may involve coding an image in both a base layer representation and at least one coded enhancement layer representation. The enhancement layer representation may be coded predictively with reference to the base layer representation. The coded base layer representation may be partitioned into a plurality of individually-transmittable segments and stored. Prediction references of elements of the enhancement layer representation may be confined to segments of the base layer representation that correspond to a location of those elements. That is, when a pixel block of an enhancement layer maps to a given segment of the base layer representation, prediction references are confined to that segment and do not reference portions of the base layer representation that may be found in other segment(s).
1. A coding method, comprising: converting a still image to be coded to a video sequence; coding the video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence; generating metadata identifying allocations of content from the still image to the frames of the video sequence; and transmitting coded data of the video sequence and the metadata to a channel. 2. The method of claim 1, wherein the converting includes transforming spatial correlation among the content of the still image to temporal correlation between frames of the video sequence. 3. The method of claim 1, wherein sizes of the frames of the video sequence increase incrementally through at least a portion of the video sequence. 4. The method of claim 1, wherein the converting comprises assigning pixel blocks of the still image to frames of the video sequence. 5. The method of claim 1, wherein the converting comprises partitioning the image into tiles and allocating pixel blocks of different tiles to different frames of the video sequence. 6. The method of claim 1, wherein the converting comprises: identifying regions of spatial correlation within the still image, and distributing pixel blocks of each region among the frames. 7. The method of claim 1, wherein a first frame of the video sequence is coded by intra-coding and other frames of the video sequence are coded by inter-coding. 8. The method of claim 1, wherein the frames of the video sequence have smaller spatial sizes than a size of the still image. 9. The method of claim 1, wherein coding comprises quantizing an intermediate coded representation of a pixel block by a quantization parameter, which is selected from an analysis of brightness level, spatial complexity and edge structures within and around the pixel block. 10. 
The method of claim 1, wherein coding comprises quantizing an intermediate coded representation of a frame by a quantization parameter, which is selected based on a classification of the pixel block as one of the low-loss, medium-loss, or high-loss categories. 11. An image coder comprising: a pre-processor to convert a still image to be coded to a video sequence and to generate metadata representing the conversion from the still image to the video sequence; a motion compensated prediction-based video coder that receives the video sequence from the pre-processor and outputs coded video data having temporal prediction references between frames of the video sequence; and a transmitter to transmit coded data of the video sequence and the metadata to a channel. 12. The coder of claim 11, wherein the pre-processor's conversion includes transforming spatial correlation among the content of the still image to temporal correlation between frames of the video sequence. 13. The coder of claim 11, wherein the conversion assigns pixel blocks of the still image to frames of the video sequence. 14. The coder of claim 11, wherein the conversion identifies regions of spatial correlation within the still image and distributes pixel blocks of each region among the frames. 15. The coder of claim 11, wherein the video coder codes the first frame of the video sequence by intra-coding and other frames of the video sequence by inter-coding. 16. A computer readable medium storing program instructions that, when executed by a processing device, cause the device to: convert a still image to be coded to a video sequence; code the video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence; generate metadata identifying allocations of content from the still image to the frames of the video sequence; and transmit coded data of the video sequence and the metadata to a channel. 17. 
A decoding method, comprising: decoding a coded video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence, and responsive to metadata, received in a channel with the coded video sequence, converting the decoded video sequence to a still image. 18. The method of claim 17, wherein the converting transforms temporal correlation between frames of the decoded video sequence to spatial correlation among the content of the still image. 19. The method of claim 17, wherein the converting comprises assigning pixel blocks of the frames of the video sequence to the still image. 20. The method of claim 17, wherein a first frame of the coded video sequence is decoded by intra-coding and other frames of the video sequence are decoded by inter-coding. 21. An image decoder, comprising: a motion compensated prediction-based video decoder that receives a coded video sequence having temporal prediction references between frames of a video sequence and outputs a decoded video sequence therefrom; and a post-processor to convert the decoded video sequence to a still image in response to metadata received in a channel with the coded video sequence. 22. The decoder of claim 21, wherein the post-processor transforms temporal correlation between frames of the decoded video sequence to spatial correlation among the content of the still image. 23. The decoder of claim 21, wherein the post-processor assigns pixel blocks of the frames of the video sequence to the still image. 24. The decoder of claim 21, wherein a first frame of the coded video sequence is decoded by intra-coding and other frames of the video sequence are decoded by inter-coding. 25-35. (canceled)
Coding techniques for image data may cause a still image to be converted to a “phantom” video sequence, which is coded by motion compensated prediction techniques. Thus, coded video data obtained from the coding operation may include temporal prediction references between frames of the video sequence. Metadata may be generated that identifies allocations of content from the still image to the frames of the video sequence. The coded data and the metadata may be transmitted to another device, whereupon the coded data may be decoded by motion compensated prediction techniques and converted back to still image data. Other techniques may involve coding an image in both a base layer representation and at least one coded enhancement layer representation. The enhancement layer representation may be coded predictively with reference to the base layer representation. The coded base layer representation may be partitioned into a plurality of individually-transmittable segments and stored. Prediction references of elements of the enhancement layer representation may be confined to segments of the base layer representation that correspond to a location of those elements. That is, when a pixel block of an enhancement layer maps to a given segment of the base layer representation, prediction references are confined to that segment and do not reference portions of the base layer representation that may be found in other segment(s). 1. A coding method, comprising: converting a still image to be coded to a video sequence; coding the video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence; generating metadata identifying allocations of content from the still image to the frames of the video sequence; and transmitting coded data of the video sequence and the metadata to a channel. 2. 
The method of claim 1, wherein the converting includes transforming spatial correlation among the content of the still image to temporal correlation between frames of the video sequence. 3. The method of claim 1, wherein sizes of the frames of the video sequence increase incrementally through at least a portion of the video sequence. 4. The method of claim 1, wherein the converting comprises assigning pixel blocks of the still image to frames of the video sequence. 5. The method of claim 1, wherein the converting comprises partitioning the image into tiles and allocating pixel blocks of different tiles to different frames of the video sequence. 6. The method of claim 1, wherein the converting comprises: identifying regions of spatial correlation within the still image, and distributing pixel blocks of each region among the frames. 7. The method of claim 1, wherein a first frame of the video sequence is coded by intra-coding and other frames of the video sequence are coded by inter-coding. 8. The method of claim 1, wherein the frames of the video sequence have smaller spatial sizes than a size of the still image. 9. The method of claim 1, wherein coding comprises quantizing an intermediate coded representation of a pixel block by a quantization parameter, which is selected from an analysis of brightness level, spatial complexity and edge structures within and around the pixel block. 10. The method of claim 1, wherein coding comprises quantizing an intermediate coded representation of a frame by a quantization parameter, which is selected based on a classification of the pixel block as one of the low-loss, medium-loss, or high-loss categories. 11. 
An image coder comprising: a pre-processor to convert a still image to be coded to a video sequence and to generate metadata representing the conversion from the still image to the video sequence; a motion compensated prediction-based video coder that receives the video sequence from the pre-processor and outputs coded video data having temporal prediction references between frames of the video sequence; and a transmitter to transmit coded data of the video sequence and the metadata to a channel. 12. The coder of claim 11, wherein the pre-processor's conversion includes transforming spatial correlation among the content of the still image to temporal correlation between frames of the video sequence. 13. The coder of claim 11, wherein the conversion assigns pixel blocks of the still image to frames of the video sequence. 14. The coder of claim 11, wherein the conversion identifies regions of spatial correlation within the still image and distributes pixel blocks of each region among the frames. 15. The coder of claim 11, wherein the video coder codes the first frame of the video sequence by intra-coding and other frames of the video sequence by inter-coding. 16. A computer readable medium storing program instructions that, when executed by a processing device, cause the device to: convert a still image to be coded to a video sequence; code the video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence; generate metadata identifying allocations of content from the still image to the frames of the video sequence; and transmit coded data of the video sequence and the metadata to a channel. 17. 
A decoding method, comprising: decoding a coded video sequence by motion compensated prediction that includes temporal prediction references between frames of the video sequence, and responsive to metadata, received in a channel with the coded video sequence, converting the decoded video sequence to a still image. 18. The method of claim 17, wherein the converting transforms temporal correlation between frames of the decoded video sequence to spatial correlation among the content of the still image. 19. The method of claim 17, wherein the converting comprises assigning pixel blocks of the frames of the video sequence to the still image. 20. The method of claim 17, wherein a first frame of the coded video sequence is decoded by intra-coding and other frames of the video sequence are decoded by inter-coding. 21. An image decoder, comprising: a motion compensated prediction-based video decoder that receives a coded video sequence having temporal prediction references between frames of a video sequence and outputs a decoded video sequence therefrom; and a post-processor to convert the decoded video sequence to a still image in response to metadata received in a channel with the coded video sequence. 22. The decoder of claim 21, wherein the post-processor transforms temporal correlation between frames of the decoded video sequence to spatial correlation among the content of the still image. 23. The decoder of claim 21, wherein the post-processor assigns pixel blocks of the frames of the video sequence to the still image. 24. The decoder of claim 21, wherein a first frame of the coded video sequence is decoded by intra-coding and other frames of the video sequence are decoded by inter-coding. 25-35. (canceled)
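The conversion the claims describe, allocating pixel blocks of a still image to frames of a phantom sequence and recording metadata that maps each block back to its origin, can be sketched as below. This is an illustrative sketch only: it does the tile allocation and metadata round-trip from claims 4, 5, and 17-19, with a simple round-robin allocation standing in for the correlation-based distribution of claim 6, and it omits the motion compensated coding itself. Function names and the metadata layout are assumptions.

```python
def image_to_phantom_sequence(image, tile_h, tile_w, num_frames):
    """Partition a still image (2D list) into tiles and allocate them
    round-robin to the frames of a 'phantom' video sequence, recording
    metadata that maps each tile back to its origin in the still image."""
    frames = [[] for _ in range(num_frames)]
    metadata = []  # one (frame_index, position_in_frame, row, col) entry per tile
    rows, cols = len(image), len(image[0])
    idx = 0
    for r in range(0, rows, tile_h):
        for c in range(0, cols, tile_w):
            tile = [row[c:c + tile_w] for row in image[r:r + tile_h]]
            f = idx % num_frames                 # round-robin allocation
            metadata.append((f, len(frames[f]), r, c))
            frames[f].append(tile)
            idx += 1
    return frames, metadata

def phantom_sequence_to_image(frames, metadata, rows, cols):
    """Invert the allocation using the metadata, recovering the still image
    (the decoder-side conversion of claims 17 and 19)."""
    image = [[None] * cols for _ in range(rows)]
    for f, pos, r, c in metadata:
        tile = frames[f][pos]
        for dr, tile_row in enumerate(tile):
            for dc, px in enumerate(tile_row):
                image[r + dr][c + dc] = px
    return image

# Round-trip an 8x8 image through a 4-frame phantom sequence of 2x2 tiles.
img = [[r * 8 + c for c in range(8)] for r in range(8)]
frames, meta = image_to_phantom_sequence(img, 2, 2, 4)
assert phantom_sequence_to_image(frames, meta, 8, 8) == img
```

In the full scheme, the intermediate `frames` would be fed to an ordinary motion compensated video coder, so spatial correlation between neighboring tiles of the still image becomes temporal correlation between frames that the coder can exploit.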
2,400
7,434
7,434
14,724,654
2,474
A method for connecting a plurality of networks. The method may include establishing a first network link between a first network element and a second network element. The first network link may implement an interconnection between a first network and a second network. The method may include establishing a second network link between the first network element and a third network element. The first network element, the second network element, and the third network element may be located on a virtual network. The method may include detecting, over the virtual network, a first network event regarding the first network link. The method may include, in response to detecting the first network event, adjusting the interconnection between the first network and the second network. The method may include disregarding a second network event regarding the second network link.
1. A method for connecting a plurality of networks, comprising: establishing a first network link between a first network element and a second network element, wherein the first network link implements an interconnection between a first network and a second network; establishing a second network link between the first network element and a third network element, wherein the first network element, the second network element, and the third network element are located on a virtual network; detecting, over the virtual network, a first network event regarding the first network link; in response to detecting the first network event, adjusting the interconnection between the first network and the second network; and disregarding a second network event regarding the second network link. 2. The method of claim 1, further comprising: receiving, before disregarding the second network event, an event message regarding the second network event from the third network element. 3. The method of claim 1, further comprising: monitoring the first network link for a first fault, wherein the second network event is a second fault that occurred on the second network link. 4. The method of claim 1, further comprising: detecting a third network event; determining whether the third network event corresponds to one selected from a group consisting of the first network link or the second network link; and in response to a determination that the third network event corresponds to the second network link, disregarding the third network event. 5. The method of claim 1, wherein the first network element and the third network element are located within the first network and the second network element is located in the second network. 6. The method of claim 1, further comprising: establishing a third network link between the third network element and a fourth network element, wherein the fourth network element is located in the second network. 7. 
The method of claim 6, wherein adjusting the interconnection comprises: terminating communication over the first network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the third network link between the first plurality of endpoints and the second plurality of endpoints. 8. The method of claim 6, wherein adjusting the interconnection comprises: terminating communication over the third network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the first network link between the first plurality of endpoints and the second plurality of endpoints. 9. A system for connecting a plurality of networks, comprising: a first network element; a second network element connected to the first network element by a first network link, wherein the first network link implements an interconnection between a first network and a second network; and a third network element connected to the first network element by a second network link, wherein the first network element, the second network element, and the third network element are located in a virtual network, wherein the first network element is configured to adjust the interconnection in response to detecting, over the virtual network, a first network event on the first network link, and wherein the first network element is configured to disregard a second network event regarding the second network link. 10. The system of claim 9, wherein the first network element is configured to terminate, in response to detecting the first network event, communication over the first network link between a first plurality of endpoints in the first network and a second plurality of endpoints of the second network. 11. 
The system of claim 9, wherein the first network element and the third network element are located within the first network and the second network element is located in the second network. 12. The system of claim 9, further comprising: a fourth network element connected to the third network element by a third network link, wherein the fourth network element is located in the second network, and wherein the first network element is configured to adjust the interconnection in response to detecting a third network event on the third network link. 13. The system of claim 9, wherein the second network event is a fault on the second network link. 14. A non-transitory computer readable medium storing instructions for connecting a plurality of networks, the instructions, when executed by a computer processor, comprising functionality for: establishing a first network link between a first network element and a second network element, wherein the first network link implements an interconnection between a first network and a second network; establishing a second network link between the first network element and a third network element, wherein the first network element, the second network element, and the third network element are located on a virtual network; detecting, over the virtual network, a first network event regarding the first network link; in response to detecting the first network event, adjusting the interconnection between the first network and the second network; and disregarding a second network event regarding the second network link. 15. The non-transitory computer readable medium of claim 14, receiving, before disregarding the second network event, an event message regarding the second network event from the third network element. 16. 
The non-transitory computer readable medium of claim 14, further comprising instructions, when executed by the computer processor, comprising functionality for: monitoring the first network link for a first fault, wherein the second network event is a second fault that occurred on the second network link. 17. The non-transitory computer readable medium of claim 14, further comprising instructions, when executed by the computer processor, comprising functionality for: detecting a third network event; determining whether the third network event corresponds to one selected from a group consisting of the first network link and the second network link; and in response to a determination that the third network event corresponds to the second network link, disregarding the third network event. 18. The non-transitory computer readable medium of claim 14, further comprising instructions, when executed by the computer processor, comprising functionality for: establishing a third network link between the third network element and a fourth network element, wherein the fourth network element is located in the second network. 19. The non-transitory computer readable medium of claim 18, the instructions for adjusting the interconnection comprising functionality for: terminating communication over the first network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the third network link between the first plurality of endpoints and the second plurality of endpoints. 20. 
The non-transitory computer readable medium of claim 18, the instructions for adjusting the interconnection comprising functionality for: terminating communication over the third network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the first network link between the first plurality of endpoints and the second plurality of endpoints.
A method for connecting a plurality of networks. The method may include establishing a first network link between a first network element and a second network element. The first network link may implement an interconnection between a first network and a second network. The method may include establishing a second network link between the first network element and a third network element. The first network element, the second network element, and the third network element may be located on a virtual network. The method may include detecting, over the virtual network, a first network event regarding the first network link. The method may include, in response to detecting the first network event, adjusting the interconnection between the first network and the second network. The method may include disregarding a second network event regarding the second network link.1. A method for connecting a plurality of networks, comprising: establishing a first network link between a first network element and a second network element, wherein the first network link implements an interconnection between a first network and a second network; establishing a second network link between the first network element and a third network element, wherein the first network element, the second network element, and the third network element are located on a virtual network; detecting, over the virtual network, a first network event regarding the first network link; in response to detecting the first network event, adjusting the interconnection between the first network and the second network; and disregarding a second network event regarding the second network link. 2. The method of claim 1, further comprising: receiving, before disregarding the second network event, an event message regarding the second network event from the third network element. 3. 
The method of claim 1, further comprising: monitoring the first network link for a first fault, wherein the second network event is a second fault that occurred on the second network link. 4. The method of claim 1, further comprising: detecting a third network event; determining whether the third network event corresponds to one selected from a group consisting of the first network link or the second network link; and in response to a determination that the third network event corresponds to the second network link, disregarding the third network event. 5. The method of claim 1, wherein the first network element and the third network element are located within the first network and the second network element is located in the second network. 6. The method of claim 1, further comprising: establishing a third network link between the third network element and a fourth network element, wherein the fourth network element is located in the second network. 7. The method of claim 6, wherein adjusting the interconnection comprises: terminating communication over the first network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the third network link between the first plurality of endpoints and the second plurality of endpoints. 8. The method of claim 6, wherein adjusting the interconnection comprises: terminating communication over the third network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the first network link between the first plurality of endpoints and the second plurality of endpoints. 9. 
A system for connecting a plurality of networks, comprising: a first network element; a second network element connected to the first network element by a first network link, wherein the first network link implements an interconnection between a first network and a second network; and a third network element connected to the first network element by a second network link, wherein the first network element, the second network element, and the third network element are located in a virtual network, wherein the first network element is configured to adjust the interconnection in response to detecting, over the virtual network, a first network event on the first network link, and wherein the first network element is configured to disregard a second network event regarding the second network link. 10. The system of claim 9, wherein the first network element is configured to terminate, in response to detecting the first network event, communication over the first network link between a first plurality of endpoints in the first network and a second plurality of endpoints of the second network. 11. The system of claim 9, wherein the first network element and the third network element are located within the first network and the second network element is located in the second network. 12. The system of claim 9, further comprising: a fourth network element connected to the third network element by a third network link, wherein the fourth network element is located in the second network, and wherein the first network element is configured to adjust the interconnection in response to detecting a third network event on the third network link. 13. The system of claim 9, wherein the second network event is a fault on the second network link. 14. 
A non-transitory computer readable medium storing instructions for connecting a plurality of networks, the instructions, when executed by a computer processor, comprising functionality for: establishing a first network link between a first network element and a second network element, wherein the first network link implements an interconnection between a first network and a second network; establishing a second network link between the first network element and a third network element, wherein the first network element, the second network element, and the third network element are located on a virtual network; detecting, over the virtual network, a first network event regarding the first network link; in response to detecting the first network event, adjusting the interconnection between the first network and the second network; and disregarding a second network event regarding the second network link. 15. The non-transitory computer readable medium of claim 14, receiving, before disregarding the second network event, an event message regarding the second network event from the third network element. 16. The non-transitory computer readable medium of claim 14, further comprising instructions, when executed by the computer processor, comprising functionality for: monitoring the first network link for a first fault, wherein the second network event is a second fault that occurred on the second network link. 17. The non-transitory computer readable medium of claim 14, further comprising instructions, when executed by the computer processor, comprising functionality for: detecting a third network event; determining whether the third network event corresponds to one selected from a group consisting of the first network link and the second network link; and in response to a determination that the third network event corresponds to the second network link, disregarding the third network event. 18. 
The non-transitory computer readable medium of claim 14, further comprising instructions, when executed by the computer processor, comprising functionality for: establishing a third network link between the third network element and a fourth network element, wherein the fourth network element is located in the second network. 19. The non-transitory computer readable medium of claim 18, the instructions for adjusting the interconnection comprising functionality for: terminating communication over the first network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the third network link between the first plurality of endpoints and the second plurality of endpoints. 20. The non-transitory computer readable medium of claim 18, the instructions for adjusting the interconnection comprising functionality for: terminating communication over the third network link between a first plurality of endpoints connected to the first network and a second plurality of endpoints connected to the second network; and establishing communication over the first network link between the first plurality of endpoints and the second plurality of endpoints.
2,400
7,435
7,435
14,729,178
2,462
Aspects of the subject disclosure may include, for example, a client node device having a radio configured to wirelessly receive downstream channel signals from a communication network. An access point repeater (APR) launches the downstream channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a transmission medium and wirelessly transmits the downstream channel signals to at least one client device. Other embodiments are disclosed.
1. A client node device comprising: a radio configured to wirelessly receive first channel signals from a communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves at non-optical frequencies that are bound to a physical structure of a transmission medium and to wirelessly transmit the first channel signals to at least one client device. 2. The client node device of claim 1 wherein the transmission medium includes a wire and the guided electromagnetic waves are bound to an outer surface of the wire. 3. The client node device of claim 1, wherein the APR comprises: an amplifier configured to amplify the first channel signals to generate amplified first channel signals; a channel selection filter configured to select one or more of the amplified first channel signals to wirelessly communicate with the at least one client device; a coupler configured to guide the amplified first channel signals to the transmission medium of the guided wave communication system; and a channel duplexer configured to transfer the amplified first channel signals to the coupler and to the channel selection filter. 4. The client node device of claim 1 wherein the radio receives the first channel signals from the communication network via one of: a host node device coupled to the communication network or another client node device. 5. The client node device of claim 1, wherein the radio is an analog radio that generates the first channel signals by downconverting RF signals that have higher carrier frequencies relative to carrier frequencies of the first channel signals. 6. The client node device of claim 1, wherein the APR is further configured to extract second channel signals from the guided wave communication system; and wherein the radio wirelessly transmits the second channel signals to the communication network. 7. 
The client node device of claim 6, wherein the APR wirelessly receives third channel signals from the at least one client device; and wherein the radio wirelessly transmits the third channel signals to the communication network. 8. The client node device of claim 1 wherein the transmission medium includes one of: a power line of a public utility or a dielectric core surrounded by cladding and a jacket. 9. The client node device of claim 1 wherein at least a portion of the first channel signals are formatted in accordance with a data over cable service interface specification (DOCSIS) protocol. 10. The client node device of claim 1 wherein at least a portion of the first channel signals are formatted in accordance with an 802.11 protocol or a fourth generation or higher mobile wireless protocol. 11. A method comprising: wirelessly receiving downstream channel signals from a communication network; launching the downstream channel signals on a guided wave communication system as guided electromagnetic waves that are bound to a surface of a transmission medium; and wirelessly transmitting the downstream channel signals to at least one client device. 12. The method of claim 11 wherein the transmission medium includes a wire and the guided electromagnetic waves are bound to an outer surface of the wire. 13. The method of claim 11, wherein wirelessly transmitting the downstream channel signals to at least one client device comprises: amplifying the downstream channel signals to generate amplified downstream channel signals; selecting one or more of the amplified downstream channel signals; and wirelessly transmitting the one or more of the amplified downstream channel signals to the at least one client device via an antenna. 14. 
The method of claim 11, wherein launching the downstream channel signals on the guided wave communication system as guided electromagnetic waves that propagate along the transmission medium comprises: amplifying the downstream channel signals to generate amplified downstream channel signals; and guiding the amplified downstream channel signals to the transmission medium of the guided wave communication system. 15. The method of claim 11, wherein wirelessly receiving downstream channel signals from the communication network includes: downconverting RF signals that have higher carrier frequencies compared with carrier frequencies of the downstream channel signals. 16. The method of claim 11, further comprising: extracting first upstream channel signals from the guided wave communication system; and wirelessly transmitting the first upstream channel signals to the communication network. 17. The method of claim 16, further comprising: wirelessly receiving second upstream channel signals from the at least one client device; and wirelessly transmitting the second upstream channel signals to the communication network. 18. The method of claim 11 wherein the transmission medium includes one of: a power line of a public utility or a dielectric core surrounded by cladding and a jacket. 19. A client node device comprising: a radio configured to wirelessly receive first channel signals from a communication network and to wirelessly transmit second channel signals and third channel signals to the communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a transmission medium, to extract the second channel signals from the guided wave communication system, to wirelessly transmit the first channel signals to at least one client device and to wirelessly receive the third channel signals from the at least one client device. 20. 
The client node device of claim 19 wherein the transmission medium includes one of: a wire and the guided electromagnetic wave is bound to an outer surface of the wire, a power line of a public utility and the guided electromagnetic wave is bound to an outer surface of the power line or a dielectric core surrounded by cladding and a jacket and the guided electromagnetic wave is bound to an outer surface of the dielectric core.
Aspects of the subject disclosure may include, for example, a client node device having a radio configured to wirelessly receive downstream channel signals from a communication network. An access point repeater (APR) launches the downstream channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a transmission medium and wirelessly transmits the downstream channel signals to at least one client device. Other embodiments are disclosed.1. A client node device comprising: a radio configured to wirelessly receive first channel signals from a communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves at non-optical frequencies that are bound to a physical structure of a transmission medium and to wirelessly transmit the first channel signals to at least one client device. 2. The client node device of claim 1 wherein the transmission medium includes a wire and the guided electromagnetic waves are bound to an outer surface of the wire. 3. The client node device of claim 1, wherein the APR comprises: an amplifier configured to amplify the first channel signals to generate amplified first channel signals; a channel selection filter configured to select one or more of the amplified first channel signals to wirelessly communicate with the at least one client device; a coupler configured to guide the amplified first channel signals to the transmission medium of the guided wave communication system; and a channel duplexer configured to transfer the amplified first channel signals to the coupler and to the channel selection filter. 4. The client node device of claim 1 wherein the radio receives the first channel signals from the communication network via one of: a host node device coupled to the communication network or another client node device. 5. 
The client node device of claim 1, wherein the radio is an analog radio that generates the first channel signals by downconverting RF signals that have higher carrier frequencies relative to carrier frequencies of the first channel signals. 6. The client node device of claim 1, wherein the APR is further configured to extract second channel signals from the guided wave communication system; and wherein the radio wirelessly transmits the second channel signals to the communication network. 7. The client node device of claim 6, wherein the APR wirelessly receives third channel signals from the at least one client device; and wherein the radio wirelessly transmits the third channel signals to the communication network. 8. The client node device of claim 1 wherein the transmission medium includes one of: a power line of a public utility or a dielectric core surrounded by cladding and a jacket. 9. The client node device of claim 1 wherein at least a portion of the first channel signals are formatted in accordance with a data over cable service interface specification (DOCSIS) protocol. 10. The client node device of claim 1 wherein at least a portion of the first channel signals are formatted in accordance with an 802.11 protocol or a fourth generation or higher mobile wireless protocol. 11. A method comprising: wirelessly receiving downstream channel signals from a communication network; launching the downstream channel signals on a guided wave communication system as guided electromagnetic waves that are bound to a surface of a transmission medium; and wirelessly transmitting the downstream channel signals to at least one client device. 12. The method of claim 11 wherein the transmission medium includes a wire and the guided electromagnetic waves are bound to an outer surface of the wire. 13. 
The method of claim 11, wherein wirelessly transmitting the downstream channel signals to at least one client device comprises: amplifying the downstream channel signals to generate amplified downstream channel signals; selecting one or more of the amplified downstream channel signals; and wirelessly transmitting the one or more of the amplified downstream channel signals to the at least one client device via an antenna. 14. The method of claim 11, wherein launching the downstream channel signals on the guided wave communication system as guided electromagnetic waves that propagate along the transmission medium comprises: amplifying the downstream channel signals to generate amplified downstream channel signals; and guiding the amplified downstream channel signals to the transmission medium of the guided wave communication system. 15. The method of claim 11, wherein wirelessly receiving downstream channel signals from the communication network includes: downconverting RF signals that have higher carrier frequencies compared with carrier frequencies of the downstream channel signals. 16. The method of claim 11, further comprising: extracting first upstream channel signals from the guided wave communication system; and wirelessly transmitting the first upstream channel signals to the communication network. 17. The method of claim 16, further comprising: wirelessly receiving second upstream channel signals from the at least one client device; and wirelessly transmitting the second upstream channel signals to the communication network. 18. The method of claim 11 wherein the transmission medium includes one of: a power line of a public utility or a dielectric core surrounded by cladding and a jacket. 19. 
A client node device comprising: a radio configured to wirelessly receive first channel signals from a communication network and to wirelessly transmit second channel signals and third channel signals to the communication network; and an access point repeater (APR) configured to launch the first channel signals on a guided wave communication system as guided electromagnetic waves that propagate along a transmission medium, to extract the second channel signals from the guided wave communication system, to wirelessly transmit the first channel signals to at least one client device and to wirelessly receive the third channel signals from the at least one client device. 20. The client node device of claim 19 wherein the transmission medium includes one of: a wire and the guided electromagnetic wave is bound to an outer surface of the wire, a power line of a public utility and the guided electromagnetic wave is bound to an outer surface of the power line or a dielectric core surrounded by cladding and a jacket and the guided electromagnetic wave is bound to an outer surface of the dielectric core.
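Claim 3 of the record above describes the APR's downstream path: amplify the received channel signals, pass them through a channel duplexer to both a coupler (onto the guided wave transmission medium) and a channel-selection filter feeding an antenna. A hedged sketch of that signal routing, where the function name, gain value, and channel representation are all invented for illustration:

```python
# Illustrative model of the claimed APR downstream path (claim 3):
# amplify, then duplex the amplified signals to two outputs -- the coupler
# (guided wave medium) gets all channels, the antenna gets only the
# channels passed by the channel-selection filter.

def apr_downstream(channels, gain=2.0, selected=None):
    # Amplifier: amplify the first channel signals.
    amplified = {ch: level * gain for ch, level in channels.items()}
    # Channel duplexer: transfer amplified signals to the coupler and to
    # the channel selection filter.
    to_coupler = dict(amplified)
    # Channel selection filter: keep only channels chosen for wireless
    # communication with the client device(s).
    to_antenna = {ch: lvl for ch, lvl in amplified.items()
                  if selected is None or ch in selected}
    return to_coupler, to_antenna

coupled, radiated = apr_downstream({"ch1": 1.0, "ch2": 0.5}, selected={"ch1"})
assert coupled == {"ch1": 2.0, "ch2": 1.0}
assert radiated == {"ch1": 2.0}
```

This captures only the routing topology; the actual device operates on analog RF signals, not dictionaries of power levels.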
2,400
7,436
7,436
14,342,917
2,426
[Object] To expand end control of an application related to a broadcast content. [Solving Means] In an information processing system capable of controlling an application related to a broadcast content by an AIT, “SUSPEND” is newly added to the set of application control codes of the ETSI standard. Executing “SUSPEND” adds a new function that sets an application from an executed state to a pause state. As a result, control can be realized in which the processing result up to the point the application ends is stored, and processing resumes, taking over that past result, the next time the application is activated.
1. An information processing apparatus, comprising: a broadcast content processing unit that receives and processes a broadcast content; and a controller that acquires an application information table with which suspend information can be set as control information for controlling an application related to the broadcast content, and sets, when the application information table including the suspend information is acquired, an application being executed to a pause state based on the suspend control information. 2. The information processing apparatus according to claim 1, further comprising a memory capable of storing information, wherein the controller sets the application being executed to the pause state by storing the application being executed in the memory for each execution state. 3. The information processing apparatus according to claim 2, wherein the controller cancels, when the application information table including predetermined control information is acquired, the pause state of the application based on the predetermined control information. 4. An information processing method, comprising: receiving and processing a broadcast content; acquiring, by a controller, an application information table with which suspend information can be set as control information for controlling an application related to the broadcast content; and setting, by the controller, when the application information table including the suspend information is acquired, an application being executed to a pause state based on the suspend control information. 5. A program that causes a computer to function as a controller that acquires an application information table with which suspend information can be set as control information for controlling an application related to a broadcast content, and sets, when the application information table including the suspend information is acquired, an application being executed to a pause state based on the suspend control information.
[Object] To expand end control of an application related to a broadcast content. [Solving Means] In an information processing system capable of controlling an application related to a broadcast content by an AIT, “SUSPEND” is newly added to the set of application control codes of the ETSI standard. Executing “SUSPEND” adds a new function that sets an application from an executed state to a pause state. As a result, control can be realized in which the processing result up to the point the application ends is stored, and processing resumes, taking over that past result, the next time the application is activated.1. An information processing apparatus, comprising: a broadcast content processing unit that receives and processes a broadcast content; and a controller that acquires an application information table with which suspend information can be set as control information for controlling an application related to the broadcast content, and sets, when the application information table including the suspend information is acquired, an application being executed to a pause state based on the suspend control information. 2. The information processing apparatus according to claim 1, further comprising a memory capable of storing information, wherein the controller sets the application being executed to the pause state by storing the application being executed in the memory for each execution state. 3. The information processing apparatus according to claim 2, wherein the controller cancels, when the application information table including predetermined control information is acquired, the pause state of the application based on the predetermined control information. 4. 
An information processing method, comprising: receiving and processing a broadcast content; acquiring, by a controller, an application information table with which suspend information can be set as control information for controlling an application related to the broadcast content; and setting, by the controller, when the application information table including the suspend information is acquired, an application being executed to a pause state based on the suspend control information. 5. A program that causes a computer to function as a controller that acquires an application information table with which suspend information can be set as control information for controlling an application related to a broadcast content, and sets, when the application information table including the suspend information is acquired, an application being executed to a pause state based on the suspend control information.
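The record above describes a controller that, on receiving an AIT carrying the new "SUSPEND" control code, stores the running application's execution state and pauses it, so a later activation can take over the stored result. A minimal sketch of that control flow; the class, state names, and the use of "AUTOSTART" as the resume trigger are assumptions for illustration (the claims only say "predetermined control information" cancels the pause):

```python
# Hypothetical controller interpreting application control codes from an
# application information table (AIT). "SUSPEND" stores the execution
# state in memory and pauses the application; a later control code
# resumes it, taking over the stored state.

class AppController:
    def __init__(self):
        self.state = "stopped"
        self.saved = None          # memory storing the execution state

    def apply_ait(self, control_code, app_state=None):
        if control_code == "SUSPEND" and self.state == "running":
            self.saved = app_state  # store processing result so far
            self.state = "paused"
        elif control_code == "AUTOSTART":
            # Cancel the pause state; resume with the past result.
            restored = self.saved
            self.state = "running"
            return restored
        return None

ctl = AppController()
ctl.state = "running"
ctl.apply_ait("SUSPEND", app_state={"progress": 42})
assert ctl.state == "paused"
resumed = ctl.apply_ait("AUTOSTART")
assert ctl.state == "running" and resumed == {"progress": 42}
```

"AUTOSTART" is an existing ETSI application control code; whether it is the code that cancels the pause here is an assumption, not something the claims specify.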
2,400
7,437
7,437
15,184,502
2,488
A temporal merging motion information candidate generation unit derives, when information indicating whether or not to derive a temporal merging motion information candidate shared for all prediction blocks in a coding block is information indicating the derivation of a temporal merging motion information candidate shared for all the prediction blocks in the coding block, a temporal merging motion information candidate shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding. A merging motion information candidate list construction unit generates a plurality of merging motion information candidates including a temporal merging motion information candidate.
1. A moving picture coding device adapted to code a coding block consisting of greater than or equal to one prediction block, comprising: a temporal merging motion information candidate generation unit configured to derive, when information indicating whether or not to derive a temporal merging motion information candidate shared for all prediction blocks in the coding block is information indicating the derivation of a temporal merging motion information candidate shared for all the prediction blocks in the coding block, a temporal merging motion information candidate shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding; a merging motion information candidate generation unit configured to generate a plurality of merging motion information candidates including the temporal merging motion information candidate; a merging motion information selection unit configured to select one merging motion information candidate from the plurality of merging motion information candidates and to use the selected merging motion information candidate as motion information of the prediction block subject to coding; and a coding unit configured to code an index for specifying the selected merging motion information candidate as a candidate specifying index. 2. 
A transmitting device comprising: a packet processing unit configured to packetize a bitstream coded by a moving picture coding method adapted to code a coding block consisting of greater than or equal to one prediction block so as to obtain coding data; and a transmitting unit configured to transmit the coding data that has been packetized, wherein the moving picture coding method includes: deriving, when information indicating whether or not to derive a temporal merging motion information candidate shared for all prediction blocks in the coding block is information indicating the derivation of a temporal merging motion information candidate shared for all the prediction blocks in the coding block, a temporal merging motion information candidate shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding; generating a plurality of merging motion information candidates including the temporal merging motion information candidate; selecting one merging motion information candidate from the plurality of merging motion information candidates and using the selected merging motion information candidate as motion information of the prediction block subject to coding; and coding an index for specifying the selected merging motion information candidate as a candidate specifying index. 3. 
A transmitting method comprising: packetizing a bitstream coded by a moving picture coding method adapted to code a coding block consisting of greater than or equal to one prediction block so as to obtain coding data; and transmitting the coding data that has been packetized, wherein the moving picture coding method includes: deriving, when information indicating whether or not to derive a temporal merging motion information candidate shared for all prediction blocks in the coding block is information indicating the derivation of a temporal merging motion information candidate shared for all the prediction blocks in the coding block, a temporal merging motion information candidate shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding; generating a plurality of merging motion information candidates including the temporal merging motion information candidate; selecting one merging motion information candidate from the plurality of merging motion information candidates and using the selected merging motion information candidate as motion information of the prediction block subject to coding; and coding an index for specifying the selected merging motion information candidate as a candidate specifying index.
A temporal merging motion information candidate generation unit derives, when information indicating whether or not to derive a temporal merging motion information candidate shared for all prediction blocks in a coding block is information indicating the derivation of a temporal merging motion information candidate shared for all the prediction blocks in the coding block, a temporal merging motion information candidate shared for all the prediction blocks in the coding block from a prediction block of a coded picture different from a picture having a prediction block subject to coding. A merging motion information candidate list construction unit generates a plurality of merging motion information candidates including a temporal merging motion information candidate.
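The claimed scheme, one temporal merge candidate derived once and shared by every prediction block, then selected by a signaled index, can be sketched as follows. This is a simplified illustration, not the actual standard's derivation (availability checks, motion-vector scaling, and list-size rules are omitted), and all function names and data shapes are hypothetical.

```python
def build_merge_candidates(spatial_mvs, colocated_mv, share_temporal):
    """Build a merge candidate list for a coding block.

    If share_temporal is set, the single temporal candidate (taken from the
    co-located prediction block of an already-coded picture) is derived once
    and reused for all prediction blocks in the coding block.
    """
    # De-duplicate spatial candidates while keeping their order.
    candidates = list(dict.fromkeys(spatial_mvs))
    if share_temporal and colocated_mv is not None:
        candidates.append(colocated_mv)
    return candidates


def select_candidate(candidates, index):
    """The encoder codes only this index; the decoder rebuilds the same
    list and recovers the same motion information."""
    return candidates[index]
```

Because both sides construct an identical list, only the small candidate-specifying index needs to be transmitted rather than the motion vector itself.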
2,400
7,438
7,438
14,683,964
2,439
Implementations described and claimed herein provide systems and methods for generating threat intelligence based on network security data. In one implementation, a network traffic dataset representative of network traffic for an Internet Protocol address across one or more ports of a primary network is obtained. A content distribution network log associated with a content distribution network is obtained. The content distribution network log includes a history of content requests by the Internet Protocol address. The network traffic dataset is correlated with the content distribution network log based on the Internet Protocol address to obtain network security data. One or more threat attributes representative of malicious activity are identified from the network security data. The one or more threat attributes are weighted. Network threat intelligence is generated based on the weighted threat attributes using a processing cluster.
1. A method for identifying network threats, the method comprising: obtaining a network traffic dataset representative of network traffic for an Internet Protocol address across one or more ports of a primary network, the primary network in communication with a content distribution network, the Internet Protocol address corresponding to a computing device; obtaining a content distribution network log associated with the content distribution network, the content distribution network log including a history of content requests by the Internet Protocol address; correlating the network traffic dataset with the content distribution network log based on the Internet Protocol address to obtain network security data; identifying one or more threat attributes representative of malicious activity from the network security data; weighting the one or more threat attributes; and generating network threat intelligence based on the weighted threat attributes using a processing cluster. 2. The method of claim 1, wherein the one or more threat attributes are weighted using machine learning. 3. The method of claim 1, wherein the one or more threat attributes are weighted based on at least one of a type of activity of the malicious activity or a source reporting the malicious activity. 4. The method of claim 1, wherein the network traffic dataset and the content distribution network log are further correlated with a domain name system log associated with the content distribution network based on the Internet Protocol address. 5. The method of claim 1, wherein the network traffic dataset and the content distribution network log are further correlated with other data from one or more enrichment feeds based on the Internet Protocol address. 6. The method of claim 1, wherein the network threat intelligence includes a reputation score for the Internet Protocol address. 7. 
The method of claim 6, wherein the reputation score is normalized based on one or more neighborhood scores, each corresponding to an internet neighborhood of the Internet Protocol address. 8. The method of claim 7, wherein the internet neighborhood is a netblock, an autonomous system, a region, or a country. 9. The method of claim 1, wherein the network threat intelligence includes threat analytics. 10. The method of claim 9, wherein the threat analytics includes at least one of: network threat trends; maps providing visual representations of the network threats; predictions of future malicious activity; proposed responses to the network threats; or an effectiveness of responses to the network threats. 11. The method of claim 1, further comprising: responding to a threat by the Internet Protocol address based on the network threat intelligence. 12. The method of claim 11, wherein the response includes at least one of: filtering future network traffic sent from the Internet Protocol address; null routing future network traffic associated with the threat; logically separating a malicious network associated with the Internet Protocol address; pushing data relating to the threat to firewalls on a friendly network; using Access Control List blocks; providing information regarding the Internet Protocol address to other networks for use in blocking future network traffic; publishing a list of malicious actors, including the Internet Protocol address; or not responding to a future content request by the Internet Protocol address to the content distribution network. 13. 
One or more non-transitory tangible computer-readable storage media storing computer-executable instructions for performing a computer process on a computing system, the computer process comprising: extracting network traffic patterns for an Internet Protocol address from a network traffic dataset representative of network traffic for an Internet Protocol address across one or more ports of a primary network, the primary network in communication with a content distribution network, the Internet Protocol address corresponding to a computing device; extracting a user agent for the Internet Protocol address and a history of content requests by the Internet Protocol address from a content distribution log associated with the content distribution network; correlating the network traffic patterns with the user agent and the history of content requests to obtain network security data for the Internet Protocol address; and generating network threat intelligence based on the network security data. 14. The one or more non-transitory tangible computer-readable storage media of claim 13, wherein the network threat intelligence includes a reputation score for the Internet Protocol address. 15. The one or more non-transitory tangible computer-readable storage media of claim 14, wherein the reputation score is generated based on one or more weighted threat attributes identified from the network security data. 16. The one or more non-transitory tangible computer-readable storage media of claim 14, wherein the reputation score is normalized based on one or more neighborhood scores, each corresponding to an internet neighborhood of the Internet Protocol address. 17. The one or more non-transitory tangible computer-readable storage media of claim 13, further comprising: responding to a threat by the Internet Protocol address based on the network threat intelligence. 18. 
A system for identifying network threats, the system comprising: a primary network in communication with a content distribution network, the primary network having one or more router interfaces through which network traffic for an Internet Protocol address is transceived, the Internet Protocol address corresponding to a computing device; and a processing cluster configured to generate network threat intelligence based on network security data obtained from an interaction of the Internet Protocol address with the primary network and the content distribution network, the network security data including a network traffic dataset corresponding to the network traffic transceived over the one or more router interfaces for the Internet Protocol address and a content distribution log including a history of content requests from the Internet Protocol address over the primary network. 19. The system of claim 18, wherein the network threat intelligence includes a reputation score for the Internet Protocol address. 20. The system of claim 18, wherein the network threat intelligence includes a proposed response to a threat by the Internet Protocol address.
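The core pipeline of these claims, joining a per-IP traffic dataset with a CDN request log, then combining weighted threat attributes into a reputation score, can be sketched as below. The record fields, attribute names, and weight values are hypothetical placeholders, not figures from the patent.

```python
def correlate(traffic, cdn_log):
    """Join per-IP traffic records with CDN request history, keyed by the
    Internet Protocol address, to form the combined network security data."""
    by_ip = {}
    for rec in traffic:
        by_ip.setdefault(rec["ip"], {"traffic": [], "requests": []})["traffic"].append(rec)
    for req in cdn_log:
        if req["ip"] in by_ip:  # only IPs seen on the primary network
            by_ip[req["ip"]]["requests"].append(req)
    return by_ip


# Illustrative per-attribute weights (e.g., by activity type or reporting source).
WEIGHTS = {"port_scan": 0.6, "malware_beacon": 0.9, "bad_user_agent": 0.3}


def reputation(attributes):
    """Combine weighted threat attributes into a 0-1 reputation score,
    treating each attribute as independent evidence of malicious activity."""
    benign = 1.0
    for attr in attributes:
        benign *= 1.0 - WEIGHTS.get(attr, 0.0)
    return 1.0 - benign
```

The score could then be normalized against neighborhood scores (netblock, autonomous system, region) as in claims 7 and 8, which this sketch omits.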
Implementations described and claimed herein provide systems and methods for generating threat intelligence based on network security data. In one implementation, a network traffic dataset representative of network traffic for an Internet Protocol address across one or more ports of a primary network is obtained. A content distribution network log associated with a content distribution network is obtained. The content distribution network log includes a history of content requests by the Internet Protocol address. The network traffic dataset is correlated with the content distribution network log based on the Internet Protocol address to obtain network security data. One or more threat attributes representative of malicious activity are identified from the network security data. The one or more threat attributes are weighted. Network threat intelligence is generated based on the weighted threat attributes using a processing cluster.1. A method for identifying network threats, the method comprising: obtaining a network traffic dataset representative of network traffic for an Internet Protocol address across one or more ports of a primary network, the primary network in communication with a content distribution network, the Internet Protocol address corresponding to a computing device; obtaining a content distribution network log associated with the content distribution network, the content distribution network log including a history of content requests by the Internet Protocol address; correlating the network traffic dataset with the content distribution network log based on the Internet Protocol address to obtain network security data; identifying one or more threat attributes representative of malicious activity from the network security data; weighting the one or more threat attributes; and generating network threat intelligence based on the weighted threat attributes using a processing cluster. 2. 
The method of claim 1, wherein the one or more threat attributes are weighted using machine learning. 3. The method of claim 1, wherein the one or more threat attributes are weighted based on at least one of a type of activity of the malicious activity or a source reporting the malicious activity. 4. The method of claim 1, wherein the network traffic dataset and the content distribution network log are further correlated with domain name system log associated with the content distribution network based on the Internet Protocol address. 5. The method of claim 1, wherein the network traffic dataset and the content distribution network log are further correlated with other data from one or more enrichment feeds based on the Internet Protocol address. 6. The method of claim 1, wherein the network threat intelligence includes a reputation score for the Internet Protocol address. 7. The method of claim 6, the reputation score is normalized based on one or more neighborhood scores, each of corresponding to an internet neighborhood of the IP address. 8. The method of claim 7, wherein the internet neighborhood is a netblock, an autonomous system, a region, or a country. 9. The method of claim 1, wherein the network threat intelligence includes threat analytics. 10. The method of claim 9, wherein the threat analytics includes at least one of: network threat trends; maps providing visual representations of the network threats; predictions of future malicious activity; proposed responses to the network threats; or an effectiveness of responses to the network threats. 11. The method of claim 1, further comprising: responding to a threat by the Internet Protocol address based on the network threat intelligence. 12. 
The method of claim 11, wherein the response includes at least one of: filtering future network traffic sent from the Internet Protocol address; null routing future network traffic associated with the threat; logically separating a malicious network associated with the Internet Protocol address; pushing data relating to the threat to firewalls on a friendly network; using Access Control List blocks; providing information regarding the Internet Protocol address to other networks for use in blocking future network traffic; publishing a list of malicious actors, including the Internet Protocol address; or not responding to a future content request by the Internet Protocol address to the content distribution network. 13. One or more non-transitory tangible computer-readable storage media storing computer-executable instructions for performing a computer process on a computing system, the computer process comprising: extracting network traffic patterns for an Internet Protocol address from a network traffic dataset representative of network traffic for an Internet Protocol address across one or more ports of a primary network, the primary network in communication with a content distribution network, the Internet Protocol address corresponding to a computing device; extracting a user agent for the Internet Protocol address and a history of content requests by the Internet Protocol address from a content distribution log associated with the content distribution network; correlating the network traffic patterns with the user agent and the history of content requests to obtain network security data for the Internet Protocol address; and generating network threat intelligence based on the network security data. 14. The one or more non-transitory tangible computer-readable storage media of claim 13, wherein the network threat intelligence includes a reputation score for the Internet Protocol address. 15. 
The one or more non-transitory tangible computer-readable storage media of claim 14, wherein the reputation score is generated based on one or more weighted threat attributes identified from the network security data. 16. The one or more non-transitory tangible computer-readable storage media of claim 14, wherein the reputation score is normalized based on one or more neighborhood scores, each of corresponding to an internet neighborhood of the IP address. 17. The one or more non-transitory tangible computer-readable storage media of claim 13, further comprising: responding to a threat by the Internet Protocol address based on the network threat intelligence. 18. A system for identifying network threats, the system comprising: a primary network in communication with a content distribution network, the primary network having one or more router interfaces through which network traffic for an Internet Protocol address is transceived, the Internet Protocol address corresponding to a computing device; and a processing cluster configured to generate network threat intelligence based on network security data obtained from an interaction of the Internet Protocol address with the primary network and the content distribution network, the network security data including a network traffic dataset corresponding to the network traffic transceived over the one or more router interfaces for the Internet Protocol address and a content distribution log including a history of content requests from the Internet Protocol address over the primary network. 19. The system of claim 18, wherein the network threat intelligence includes a reputation score for the Internet Protocol address. 20. The system of claim 18, wherein the network threat intelligence includes a proposed response to a threat by the Internet Protocol address.
2,400
7,439
7,439
13,058,962
2,482
A camera system comprises a 3D TOF camera for acquiring a camera-perspective range image of a scene and an image processor for processing the range image. The image processor contains a position and orientation calibration routine implemented therein in hardware and/or software, which position and orientation calibration routine, when executed by the image processor, detects one or more planes within a range image acquired by the 3D TOF camera, selects a reference plane among the at least one or more planes detected and computes position and orientation parameters of the 3D TOF camera with respect to the reference plane, such as, e.g., elevation above the reference plane and/or camera roll angle and/or camera pitch angle.
1-15. (canceled) 16. Position and orientation calibration method for a camera system including a 3D time-of-flight camera, said method comprising acquiring a camera-perspective range image of a scene using said 3D time-of-flight camera, detecting one or more planes within said range image and selecting a reference plane among said one or more planes detected, computing position and orientation parameters of said 3D time-of-flight camera with respect to said reference plane. 17. The method as claimed in claim 16, wherein said position and orientation parameters of said 3D time-of-flight camera include at least one of height above said reference plane, camera roll angle and camera pitch angle. 18. The method as claimed in claim 16, wherein said detection of one or more planes within said range image comprises RANSAC-based plane detection. 19. The method as claimed in claim 16, wherein said selecting of a reference plane comprises identifying a floor plane and fixing said floor plane as said reference plane. 20. The method as claimed in claim 19, wherein said selection of said reference plane is effected by said camera based upon and following input of user-defined limits of at least one of camera roll angle and camera pitch angle with respect to said floor plane. 21. The method as claimed in claim 16, wherein said selection of said reference plane comprises presenting said one or more detected planes using a user interface and fixing said reference plane based upon user interaction. 22. The method as claimed in claim 16, comprising computing coordinate transformation parameters of a coordinate transformation, said coordinate transformation being such that it transforms, when applied to a camera-perspective range image of said scene, such range image into a Cartesian representation of said scene, in which coordinates are defined with respect to said reference plane, and storing said coordinate transformation parameters within a memory of said camera. 23. 
The method as claimed in claim 16, wherein said detection of one or more planes comprises at least one of compensating for range errors induced by light spreading in said 3D time-of-flight camera and discarding image pixels containing range information deemed unreliable. 24. A camera system comprising a 3D time-of-flight camera for acquiring a camera-perspective range image of a scene, and an image processor for processing said range image, wherein said image processor comprises a position and orientation calibration routine implemented in at least one of hardware and software, wherein said position and orientation calibration routine, when executed by said image processor, detects one or more planes within a range image acquired by said 3D time-of-flight camera, selects a reference plane among said one or more planes detected and computes position and orientation parameters of said 3D time-of-flight camera with respect to said reference plane. 25. The camera system according to claim 24, wherein said position and orientation parameters comprise at least one of height above said reference plane, camera roll angle and camera pitch angle. 26. The camera system as claimed in claim 24, comprising a user interface for presenting output data to or receiving input data from a user. 27. The camera system as claimed in claim 24, wherein said position and orientation calibration routine, when executed by said image processor, selects said reference plane by identifying a floor plane and fixing said floor plane as said reference plane. 28. The camera system as claimed in claim 26, comprising a user interface for presenting output data to and receiving input data from a user, wherein said calibration routine, when executed by said image processor, selects said reference plane based upon and following input, via said user interface, of user-defined limits of at least one of camera roll angle and camera pitch angle with respect to said floor plane. 29. 
The camera system as claimed in claim 25, wherein said selection of said reference plane comprises presenting said one or more detected planes using a user interface and fixing said reference plane based upon user interaction. 30. The camera system as claimed in claim 24, comprising a memory, and wherein said calibration routine, when executed by said image processor, computes coordinate transformation parameters of a coordinate transformation from a camera reference system into a world reference system, in which coordinates are defined with respect to said reference plane, and stores said coordinate transformation parameters within said memory. 31. A pedestrian detection system comprising a camera system as claimed in claim 24. 32. A camera system comprising a 3D time-of-flight camera for acquiring a camera-perspective range image of a scene, and an image processor for processing said range image, wherein said image processor is configured to detect one or more planes within a range image acquired by said 3D time-of-flight camera, to select a reference plane among said one or more planes detected and to compute position and orientation parameters of said 3D time-of-flight camera with respect to said reference plane, wherein said camera system comprises a memory, and wherein said processor is further configured to compute coordinate transformation parameters of a coordinate transformation from a camera reference system into a world reference system, in which coordinates are defined with respect to said reference plane, and to store said coordinate transformation parameters within said memory.
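The RANSAC-based plane detection named in claim 18 can be sketched as follows: repeatedly fit a plane to three randomly sampled range points and keep the plane with the most inliers, which for a floor-mounted scene is typically the floor plane used as reference. This is a generic RANSAC sketch, not the patent's implementation; sample counts and tolerances are arbitrary placeholders.

```python
import random


def plane_from_points(p1, p2, p3):
    """Plane (unit normal n, offset d) through three 3-D points, or None if collinear."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)  # cross product
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0.0:
        return None
    n = tuple(c / norm for c in n)
    d = -(n[0] * p1[0] + n[1] * p1[1] + n[2] * p1[2])
    return n, d


def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Return the plane with the most inliers (points within tol of the plane)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue  # degenerate sample
        n, d = plane
        inliers = sum(
            1 for p in points
            if abs(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d) < tol
        )
        if inliers > best_inliers:
            best, best_inliers = plane, inliers
    return best, best_inliers
```

Once the reference plane is fixed, camera height and roll/pitch angles follow directly from the plane's normal and offset in the camera frame.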
A camera system comprises a 3D TOF camera for acquiring a camera-perspective range image of a scene and an image processor for processing the range image. The image processor contains a position and orientation calibration routine implemented therein in hardware and/or software, which position and orientation calibration routine, when executed by the image processor, detects one or more planes within a range image acquired by the 3D TOF camera, selects a reference plane among the at least one or more planes detected and computes position and orientation parameters of the 3D TOF camera with respect to the reference plane, such as, e.g., elevation above the reference plane and/or camera roll angle and/or camera pitch angle.1-15. (canceled) 16. Position and orientation calibration method for a camera system including a 3D time-of-flight camera, said method comprising acquiring a camera-perspective range image of a scene using said 3D time-of-flight camera, detecting one or more planes within said range image and selecting a reference plane among said one or more planes detected, computing position and orientation parameters of said 3D time-of-flight camera with respect to said reference plane. 17. The method as claimed in claim 16, wherein said position and orientation parameters of said 3D time-of flight camera include at least one of height above said reference plane, camera roll angle and camera pitch angle. 18. The method as claimed in claim 16, wherein said detection of one or more planes within said range image comprises RANSAC-based plane detection. 19. The method as claimed in claim 16, wherein said selecting of a reference plane comprises identifying a floor plane and fixing said floor plane as said reference plane. 20. 
The method as claimed in claim 19, wherein said selection of said reference plane is effected by said camera based upon and following input of user-defined limits of at least one of camera roll angle and camera pitch angle with respect to said floor plane. 21. The method as claimed in claim 16, wherein said selection of said reference plane comprises presenting said one or more detected planes using a user interface and fixing said reference plane based upon user interaction. 22. The method as claimed in claim 16, comprising computing coordinate transformation parameters of a coordinate transformation, said coordinate transformation being such that it transforms, when applied to a camera-perspective range image of said scene, such range image into a Cartesian representation of said scene, in which coordinates are defined with respect to said reference plane, and storing said coordinate transformation parameters within a memory of said camera. 23. The method as claimed in claim 16, wherein said detection of one or more planes comprises at least one of compensating for range errors induced by light spreading in said 3D time-of-flight camera and discarding image pixels containing range information deemed unreliable. 24. A camera system comprising a 3D time-of-flight camera for acquiring a camera-perspective range image of a scene, and an image processor for processing said range image, wherein said image processor comprises a position and orientation calibration routine implemented in at least one of hardware and software, wherein said position and orientation calibration routine, when executed by said image processor, detects one or more planes within a range image acquired by said 3D time-of-flight camera, selects a reference plane among said one or more planes detected and computes position and orientation parameters of said 3D time-of-flight camera with respect to said reference plane. 25. 
The camera system according to claim 24, wherein said position and orientation parameters comprise at least one of height above said reference plane, camera roll angle and camera pitch angle. 26. The camera system as claimed in any one of claims 24, comprising a user interface for presenting output data to or receiving input data from a user. 27. The camera system as claimed in claim 24, wherein said position and orientation calibration routine, when executed by said image processor, selects said reference plane by identifying a floor plane and fixing said floor plane as said reference plane. 28. The camera system as claimed in claim 26, comprising a user interface for presenting output data to and receiving input data from a user, wherein said calibration routine, when executed by said image processor, selects said reference plane based upon and following input, via said user interface, of user-defined limits of at least one of camera roll angle and camera pitch angle with respect to said floor plane. 29. The camera system as claimed in claim 25, wherein said selection of said reference plane comprises presenting said one or more detected planes using a user interface and fixing said reference plane based upon user interaction. 30. The camera system as claimed in claim 24, comprising a memory, and wherein said calibration routine, when executed by said image processor, computes coordinate transformation parameters of a coordinate transformation from a camera reference system into a world reference system, in which coordinates are defined with respect to said reference plane, and stores said coordinate transformation parameters within said memory. 31. Pedestrian detection system comprising a camera system as claimed in claim 24. 32. 
A camera system comprising a 3D time-of-flight camera for acquiring a camera-perspective range image of a scene, and an image processor for processing said range image, wherein said image processor is configured to detect one or more planes within a range image acquired by said 3D time-of-flight camera, to select a reference plane among said one or more planes detected and to compute position and orientation parameters of said 3D time-of-flight camera with respect to said reference plane, wherein said camera system comprises a memory, and wherein said processor is further configured to compute coordinate transformation parameters of a coordinate transformation from a camera reference system into a world reference system, in which coordinates are defined with respect to said reference plane, and to store said coordinate transformation parameters within said memory.
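The position and orientation parameters named in claims 24-25 (height above the reference plane, camera roll and pitch) can be recovered directly from the detected plane's equation. The sketch below is a minimal illustration, not the patent's implementation; the plane form `a*x + b*y + c*z = d` and the camera axis convention (x right, y down, z along the optical axis) are assumptions for the example.

```python
import math

def camera_pose_from_plane(a, b, c, d):
    """Given a floor plane a*x + b*y + c*z = d expressed in the camera
    frame (assumed axes: x right, y down, z forward), return the camera's
    (height, pitch_deg, roll_deg) with respect to that plane."""
    norm = math.sqrt(a * a + b * b + c * c)
    nx, ny, nz = a / norm, b / norm, c / norm   # unit normal of the plane
    height = abs(d) / norm                      # point-to-plane distance of the camera origin
    pitch = math.degrees(math.asin(nz))         # tilt of the optical axis toward the floor
    roll = math.degrees(math.atan2(nx, -ny))    # rotation about the optical axis
    return height, pitch, roll
```

A level camera 2 m above the floor sees the floor normal as (0, -1, 0) with d = 2, giving zero pitch and roll; these values are what the calibration routine of claim 30 would fold into its camera-to-world transformation.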
2,400
7,440
7,440
14,940,098
2,447
An intelligent lighting system employs energy efficient outdoor lighting and intelligent sensor technology in cooperation with video analytics processing. The lighting system selectively illuminates outdoor spaces and identifies and evaluates events in a scene monitored by a video camera, thereby to facilitate proactive and appropriate security responses to those events. Selective use of advanced lighting fixtures may significantly reduce costs of lighting areas that are monitored by security systems such as streets, public parks, and parking lots, while simultaneously improving security, safety, and traffic control. Energy savings alone, for a properly designed system, are estimated at 50%-90% of current usage. When combined with remote monitoring, such systems may prevent accidents and criminal activity.
1. An illumination system, comprising: a first light source configured to illuminate a first area of a location; a first image sensor configured to acquire mega-pixel resolution image data corresponding to the location; video analytics configured to detect an event in the location using the acquired mega-pixel resolution image data and to provide information about the detected event; and a controller configured to direct the first light source to assume one extended illumination state among a plurality of extended states of illumination in response to the provided information about the detected event, the plurality of extended states of illumination including a non-illuminated state and a fully illuminated state. 2. The illumination system of claim 1, wherein the video analytics is further configured to detect a blob using the acquired mega-pixel resolution image data, the blob representing a group of pixels within the acquired mega-pixel resolution image data identified as related by the video analytics. 3. The illumination system of claim 2, wherein the video analytics is further configured to determine a distance from the first image sensor to the detected blob using the acquired mega-pixel resolution image data. 4. The illumination system of claim 3, wherein the controller is further configured to direct the first light source to assume the non-illuminated state until the determined distance of the detected blob satisfies a predefined threshold distance. 5. The illumination system of claim 2, wherein the video analytics is further configured to determine a speed of the detected blob using the acquired mega-pixel resolution image data. 6. The illumination system of claim 5, wherein the speed of the detected blob determines a timing of when the controller directs the first light source to assume the one extended illumination state. 7. 
The illumination system of claim 1, further comprising: a video camera comprising the first image sensor, the acquired mega-pixel resolution image data corresponding to a field of view of the video camera. 8. The illumination system of claim 7, wherein the video camera includes one or more additional image sensors configured to detect electromagnetic radiation at wavelengths outside of the visible spectrum. 9. The illumination system of claim 7, wherein the video camera includes the video analytics. 10. The illumination system of claim 1, wherein the video analytics is further configured to distinguish between background lighting changes in the first area. 11. A light fixture, comprising: a first light source configured to illuminate a first area of a location; and a lighting controller operatively coupled to the first light source and configured to direct the first light source to assume one extended illumination state among a plurality of extended states of illumination using light control signals in response to information received from a video analytics over a communication medium, the plurality of extended states of illumination including a non-illuminated state and a fully illuminated state, the received information associated with events detected by the video analytics using mega-pixel resolution image data corresponding to the location acquired by one or more image sensors. 12. The light fixture of claim 11, wherein the lighting controller is further configured to direct the first light source to assume the non-illuminated state until the received information indicates the video analytics has detected an event in the location using the acquired mega-pixel resolution image data. 13. 
The light fixture of claim 11, further comprising: a second light source to illuminate a second area of the location, the first and second areas being different areas of the location, the light fixture being configured to provide composite illumination patterns, the first area being a near field area of the composite illumination patterns and the second area being a far field area of the composite illumination patterns. 14. The light fixture of claim 13, wherein the composite illumination patterns change over time in response to a direction of travel corresponding to a blob detected by the video analytics using the acquired mega-pixel resolution image data. 15. The light fixture of claim 11, wherein the communication medium is a power line, a wireless communication link, a wired communication link, or a combination thereof. 16. A method, comprising: receiving mega-pixel resolution image data from a first image sensor corresponding to a location proximate an area that a first light source is configured to illuminate; detecting a first blob using the received mega-pixel resolution image data; recognizing the first detected blob as a first object in the location; and providing information about the first recognized object to a controller, the controller configured to deliver light control signals that direct the first light source to assume one extended illumination state among a plurality of extended states of illumination in response to the provided information about the first recognized object, the plurality of extended states of illumination including a non-illuminated state and a fully illuminated state. 17. The method of claim 16, wherein detecting a first blob using the received mega-pixel resolution image data further comprises: identifying a first group of related pixels within the received mega-pixel resolution image data. 18. 
The method of claim 16, wherein the provided information about the first recognized object comprises a distance from the first image sensor to the first recognized object determined using the received mega-pixel resolution image data, and wherein the controller is further configured to determine a timing of the light control signals based on the determined distance. 19. The method of claim 16, further comprising: tracking the recognized first object by analyzing the received mega-pixel resolution image data over time; and detecting an event while tracking the recognized first object. 20. The method of claim 19, wherein the controller is further configured to trigger a prerecorded announcement over an audio output associated with the location based on the detected event.
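Claims 4-6 describe a controller that keeps a light non-illuminated until a tracked blob crosses a threshold distance, with the blob's speed adjusting the timing. The function below is a hedged sketch of such gating logic; the state names, thresholds, and the speed-to-threshold rule are illustrative assumptions, not values from the claims.

```python
def illumination_state(distance_m, speed_mps,
                       threshold_m=30.0, full_m=10.0):
    """Map a detected blob's distance and speed to one of several
    extended illumination states (claims 1 and 4-6). Thresholds and
    the 2 m-per-(m/s) speed lead are illustrative assumptions."""
    # A faster-moving blob widens the threshold so the light comes on earlier.
    effective_threshold = threshold_m + 2.0 * speed_mps
    if distance_m > effective_threshold:
        return "non-illuminated"            # claim 4: stay dark beyond the threshold
    if distance_m > full_m:
        return "partially-illuminated"      # one of the intermediate extended states
    return "fully-illuminated"
```

A stationary observer 50 m away leaves the fixture dark, while the same distance with a fast-approaching vehicle can trigger illumination sooner, matching the timing behavior of claims 5-6.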
2,400
7,441
7,441
15,416,402
2,482
A temporal sequence of pictures is generated in a method for encoding of a first video stream. To do so, a synchronization signal can be used, which can be derived from a second video stream independently of the first video stream. Alternatively, the encoding of a second video stream independent of the first video stream can be based on the same principle as for the encoding of the first video stream.
1.-10. (canceled) 11. A conference system comprising: a first end point; a second end point; and a mixing device; the first end point configured to generate a first video stream comprising a first temporal sequence of frames, the first end point configured to send the first temporal sequence of frames to the mixing device; the second end point configured to generate a second video stream comprising a second temporal sequence of frames, the second end point configured to send the second temporal sequence of frames to the mixing device; and the mixing device configured to derive a synchronization signal from at least one of: the first temporal sequence of frames, the second temporal sequence of frames, and a timing signal, the mixing device configured to send the synchronization signal to at least one of the first end point and the second end point so that subsequent frames of the first temporal sequence and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other via synchronized encoding of the first and second subsequent frames by the first and second end points. 12. The system of claim 11, wherein at least one of the first end point and the second end point is configured to adjust encoding such that subsequent frames of the first temporal sequence sent to the mixing device and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other to have a same predictive structure based on the received synchronization signal in response to receiving the synchronization signal. 13. 
The system of claim 12, wherein the mixing device is configured to receive the subsequent frames of the first temporal sequence and the subsequent frames of the second temporal sequence after having sent the synchronization signal and the mixing device is configured to mix the received subsequent frames of the first temporal sequence with the received subsequent frames of the second temporal sequence to generate a mixed video stream. 14. The system of claim 13, wherein the mixing device is a server or a central server, the first end point is a terminal and the second end point is a terminal. 15. The system of claim 11, wherein the synchronization signal identifies a predictive structure defining a group of pictures for encoding of at least one of the first video stream and the second video stream such that the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device have a same length for individual picture groups; and wherein the synchronization signal is configured so that an encoder of the first endpoint encodes the subsequent frames of the first temporal sequence and an encoder of the second end point encodes the subsequent frames of the second temporal sequence in a corresponding manner so that I-frames within the subsequent frames of the first temporal sequence correspond with I-frames within the subsequent frames of the second temporal sequence and P-frames within the subsequent frames of the first temporal sequence correspond with P-frames within the subsequent frames of the second temporal sequence. 16. The system of claim 15, wherein the synchronization signal contains an information bit identifying a time offset between the positions of I-frames within the subsequent frames of the first temporal sequence sent to the mixing device or within the subsequent frames of the second temporal sequence sent to the mixing device. 17. 
A conference apparatus comprising: a mixing device having non-transitory memory and a processor; the mixing device configured to receive a first video stream comprising a first temporal sequence of frames from a first end point and a second video stream comprising a second temporal sequence of frames from a second end point; and the mixing device configured to derive a synchronization signal from at least one of: the first temporal sequence of frames, the second temporal sequence of frames, and a timing signal, the mixing device configured to send the synchronization signal to at least one of the first end point and the second end point so that subsequent frames of the first temporal sequence and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other via synchronized encoding of the first and second subsequent frames by encoders of the first and second end points. 18. The conference apparatus of claim 17, wherein the mixing device is a server or a central server. 19. The conference apparatus of claim 17, wherein the apparatus also comprises the first and second end points, the first end point being a terminal device and the second end point being a terminal device. 20. 
The conference apparatus of claim 17, wherein the synchronization signal is configured so that the first and second end points adjust encoding such that subsequent frames of the first temporal sequence sent to the mixing device and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other to have a same predictive structure based on the received synchronization signal; and the mixing device is configured to receive the subsequent frames of the first temporal sequence and the subsequent frames of the second temporal sequence after having sent the synchronization signal and the mixing device is configured to mix the received subsequent frames of the first temporal sequence with the received subsequent frames of the second temporal sequence to generate a mixed video stream. 21. The conference apparatus of claim 17, wherein the synchronization signal identifies a predictive structure defining a group of pictures for encoding of at least one of the first video stream and the second video stream such that the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device have a same length for individual picture groups; and wherein the synchronization signal is configured so that an encoder of the first endpoint encodes the subsequent frames of the first temporal sequence and an encoder of the second end point encodes the subsequent frames of the second temporal sequence in a corresponding manner so that I-frames within the subsequent frames of the first temporal sequence correspond with I-frames within the subsequent frames of the second temporal sequence and P-frames within the subsequent frames of the first temporal sequence correspond with P-frames within the subsequent frames of the second temporal sequence. 22. 
The system of claim 21, wherein the synchronization signal contains an information bit identifying a time offset between the positions of I-frames within the subsequent frames of the first temporal sequence sent to the mixing device or within the subsequent frames of the second temporal sequence sent to the mixing device. 23. A method for mixing at least two video streams comprising: a mixing device deriving a synchronization signal from at least one of: a first temporal sequence of frames of a first video stream received from a first end point, a second temporal sequence of frames of a second video stream received from a second end point, and a timing signal; and the mixing device sending the synchronization signal to at least one of the first end point and the second end point so that subsequent frames of the first temporal sequence and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other via synchronized encoding of the first and second subsequent frames by encoders of the first and second end points. 24. The method of claim 23, comprising: the first end point generating the first video stream comprising the first temporal sequence of frames and sending the first temporal sequence of frames to the mixing device; and the second end point generating the second video stream comprising the second temporal sequence of frames and sending the second temporal sequence of frames to the mixing device. 25. The method of claim 23, wherein the synchronization signal is sent to both the first end point and the second end point. 26. The method of claim 23, wherein the synchronization signal is only sent to the second end point and is derived from the first temporal sequence of frames. 27. 
The method of claim 23, comprising: the first end point sending the subsequent frames of the first temporal sequence to the mixing device; the second end point sending the subsequent frames of the second temporal sequence to the mixing device; and wherein the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device have a synchronized predictive structure that is configured so that each P-frame of a sequence of P-frames in the subsequent frames of the first temporal sequence has a position corresponding to a respective P-frame of a sequence of P-frames in the subsequent frames of the second temporal sequence. 28. The method of claim 23, wherein the mixing device is a server of a video teleconference system, the first end point is a subscriber terminal and the second end point is a subscriber terminal. 29. The method of claim 23, wherein the synchronization signal contains an information bit identifying a time offset between the positions of I-frames within the subsequent frames of the first temporal sequence sent to the mixing device or within the subsequent frames of the second temporal sequence sent to the mixing device. 30. The method of claim 23, wherein the synchronization signal contains an information item identifying a number of P-frames or number of B-frames to follow an I-frame in at least one of the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device.
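The synchronization described in claims 22-23 and 27-30 amounts to aligning the group-of-pictures boundaries of two independently encoded streams: the mixing device measures the offset between the endpoints' I-frame positions and one encoder shifts its GOP so that I-frames (and hence P-frames) coincide. The sketch below is an illustrative model of that bookkeeping, not the patent's implementation; the frame-type lists and the fixed I/P GOP shape are assumptions for the example.

```python
def sync_offset(frame_types_a, frame_types_b):
    """Time offset, in frames, between the first I-frames of two streams'
    recent frame-type sequences (e.g. ['I','P','P','P',...]). This is the
    kind of 'information bit' the mixing device of claim 22 could send."""
    return frame_types_b.index('I') - frame_types_a.index('I')

def gop_schedule(gop_length, offset, n_frames):
    """Frame types an endpoint emits after shifting its GOP boundary by
    `offset`, so both streams place I-frames at the same positions and
    each P-frame lines up with a corresponding P-frame (claim 27)."""
    return ['I' if (k - offset) % gop_length == 0 else 'P'
            for k in range(n_frames)]
```

With both streams on the same schedule, the mixer can combine corresponding frames directly (I with I, P with P) instead of decoding and re-encoding to reconcile mismatched predictive structures.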
A temporal sequence of pictures is generated in a method for encoding of a first video stream. To do so, a synchronization signal can be used, which can be derived from a second video stream independently of the first video stream. Alternatively, the encoding of a second video stream independent of the first video stream can be based on the same principle as for the encoding of the first video stream.1.-10. (canceled) 11. A conference system comprising: a first end point; a second end point; and a mixing device; the first end point configured to generate a first video stream comprising a first temporal sequence of frames, the first end point configured to send the first temporal sequence of frames to the mixing device; the second end point configured to generate a second video stream comprising a second temporal sequence of frames, the second end point configured to send the second temporal sequence of frames to the mixing device; and the mixing device configured to derive a synchronization signal from at least one of: the first temporal sequence of frames, the second temporal sequence of frames, and a timing signal, the mixing device configured to send the synchronization signal to at least one of the first end point and the second end point so that subsequent frames of the first temporal sequence and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other via synchronized encoding of the first and second subsequent frames by the first and second end points. 12. The system of claim 11, wherein at least one of the first end point and the second end point configured to adjust encoding such that subsequent frames of the first temporal sequence sent to the mixing device and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other to have a same predictive structure based on the received synchronization signal in response to receiving the synchronization signal. 13. 
The system of claim 12, wherein the mixing device is configured to receive the subsequent frames of the first temporal sequence and the subsequent frames of the second temporal sequence after having sent the synchronization signal and the mixing device is configured to mix the received subsequent frames of the first temporal sequence with the received subsequent frames of the second temporal sequence to generate a mixed video stream. 14. The system of claim 13, wherein the mixing device is a server or a central server, the first end point is a terminal and the second end point is a terminal. 15. The system of claim 11, wherein the synchronization signal identifies a predictive structure defining a group of pictures for encoding of at least one of the first video stream and the second video stream such that the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device have a same length for individual picture groups; and wherein the synchronization signal is configured so that an encoder of the first endpoint encodes the subsequent frames of the first temporal sequence and an encoder of the second end point encodes the subsequent frames of the second temporal sequence in a corresponding manner so that I-frames within the subsequent frames of the first temporal sequence correspond with I-frames within the subsequent frames of the second temporal sequence and P-frames within the subsequent frames of the first temporal sequence correspond with P-frames within the subsequent frames of the second temporal sequence. 16. The system of claim 15, wherein the synchronization signal contains an information bit identifying a time offset between the positions of I-frames within the subsequent frames of the first temporal sequence sent to the mixing device or within the subsequent frames of the second temporal sequence sent to the mixing device. 17. 
A conference apparatus comprising: a mixing device having non-transitory memory and a processor; the mixing device configured to receive first video stream comprising a first temporal sequence of frames from a first end point and a second video stream comprising a second temporal sequence of frames from a second end point; and the mixing device configured to derive a synchronization signal from at least one of: the first temporal sequence of frames, the second temporal sequence of frames, and a timing signal, the mixing device configured to send the synchronization signal to at least one of the first end point and the second end point so that subsequent frames of the first temporal sequence and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other via synchronized encoding of the first and second subsequent frames by encoders of the first and second end points. 18. The conference apparatus of claim 17, wherein the mixing device is a server or a central server. 19. The conference apparatus of claim 17, wherein the apparatus also comprises the first and second end points, the first end point being a terminal device and the second end point being a terminal device. 20. 
The conference apparatus of claim 17, wherein the synchronization signal is configured so that the first and second end points adjust encoding such that subsequent frames of the first temporal sequence sent to the mixing device and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other to have a same predictive structure based on the received synchronization signal; and the mixing device is configured to receive the subsequent frames of the first temporal sequence and the subsequent frames of the second temporal sequence after having sent the synchronization signal and the mixing device is configured to mix the received subsequent frames of the first temporal sequence with the received subsequent frames of the second temporal sequence to generate a mixed video stream. 21. The conference apparatus of claim 17, wherein the synchronization signal identifies a predictive structure defining a group of pictures for encoding of at least one of the first video stream and the second video stream such that the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device have a same length for individual picture groups; and wherein the synchronization signal is configured so that an encoder of the first endpoint encodes the subsequent frames of the first temporal sequence and an encoder of the second end point encodes the subsequent frames of the second temporal sequence in a corresponding manner so that I-frames within the subsequent frames of the first temporal sequence correspond with I-frames within the subsequent frames of the second temporal sequence and P-frames within the subsequent frames of the first temporal sequence correspond with P-frames within the subsequent frames of the second temporal sequence. 22. 
The conference apparatus of claim 21, wherein the synchronization signal contains an information bit identifying a time offset between the positions of I-frames within the subsequent frames of the first temporal sequence sent to the mixing device or within the subsequent frames of the second temporal sequence sent to the mixing device. 23. A method for mixing at least two video streams comprising: a mixing device deriving a synchronization signal from at least one of: a first temporal sequence of frames of a first video stream received from a first end point, a second temporal sequence of frames of a second video stream received from a second end point, and a timing signal; and the mixing device sending the synchronization signal to at least one of the first end point and the second end point so that subsequent frames of the first temporal sequence and subsequent frames of the second temporal sequence sent to the mixing device are synchronized with each other via synchronized encoding of the first and second subsequent frames by encoders of the first and second end points. 24. The method of claim 23, comprising: the first end point generating the first video stream comprising the first temporal sequence of frames and sending the first temporal sequence of frames to the mixing device; and the second end point generating the second video stream comprising the second temporal sequence of frames and sending the second temporal sequence of frames to the mixing device. 25. The method of claim 23, wherein the synchronization signal is sent to both the first end point and the second end point. 26. The method of claim 23, wherein the synchronization signal is only sent to the second end point and is derived from the first temporal sequence of frames. 27. 
The method of claim 23, comprising: the first end point sending the subsequent frames of the first temporal sequence to the mixing device; the second end point sending the subsequent frames of the second temporal sequence to the mixing device; and wherein the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device have a synchronized predictive structure that is configured so that each P-frame of a sequence of P-frames in the subsequent frames of the first temporal sequence has a position corresponding to a respective P-frame of a sequence of P-frames in the subsequent frames of the second temporal sequence. 28. The method of claim 23, wherein the mixing device is a server of a video teleconference system, the first end point is a subscriber terminal and the second end point is a subscriber terminal. 29. The method of claim 23, wherein the synchronization signal contains an information bit identifying a time offset between the positions of I-frames within the subsequent frames of the first temporal sequence sent to the mixing device or within the subsequent frames of the second temporal sequence sent to the mixing device. 30. The method of claim 23, wherein the synchronization signal contains an information item identifying a number of P-frames or number of B-frames to follow an I-frame in at least one of the subsequent frames of the first temporal sequence sent to the mixing device and the subsequent frames of the second temporal sequence sent to the mixing device.
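The claims above describe aligning the GOP structures of two independently encoded streams by signalling an I-frame time offset. A minimal sketch of how a mixing device might compute such an offset, assuming both encoders use the same fixed GOP length as claim 21 requires (the function name and frame-index bookkeeping are assumptions, not taken from the claims):

```python
def gop_offset(last_iframe_a: int, last_iframe_b: int, gop_length: int) -> int:
    """Number of frames encoder B should delay its next I-frame so that
    I-frames of streams A and B coincide (the claimed 'time offset').

    Assumes a shared, fixed GOP length for both streams.
    """
    return (last_iframe_a - last_iframe_b) % gop_length
```

For example, if stream A's last I-frame arrived at frame 30 and stream B's at frame 27 with a 15-frame GOP, the synchronization signal would carry an offset of 3; once applied, I-frames and P-frames of the two streams line up position for position, which is what lets the mixer combine them without re-encoding.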
2,400
7,442
7,442
15,146,324
2,488
A wearable terminal device includes circuitry configured to receive sensor data from one or more sensors, wherein the sensor data corresponds to a behavior of a user in possession of the wearable terminal device. The circuitry is configured to determine, based on the sensor data, the behavior of the user. The circuitry is configured to control, based on the determined behavior of the user, a photographing interval of a camera.
1. A wearable terminal device comprising: circuitry configured to receive sensor data from one or more sensors, wherein the sensor data corresponds to a psychological state of a user in possession of the wearable terminal device; determine, based on the sensor data, the psychological state of the user; and control, based on the determined psychological state of the user, a photographing interval of a camera. 2. The wearable terminal device of claim 1, wherein the one or more sensors include at least one biological sensor. 3. The wearable terminal device of claim 2, wherein the at least one biological sensor measures physiological conditions of the user's body. 4. The wearable terminal device of claim 3, wherein the physiological conditions include one or more of heart rate, temperature, perspiration, breathing rate, and blood pressure. 5. The wearable terminal device of claim 3, wherein the one or more sensors include at least one motion sensor; the sensor data includes an indication of a motion of one or more of the wearable terminal device and the user, and the circuitry is further configured to control the photographing interval of the camera based on the psychological state of the user and the motion. 6. The wearable terminal device of claim 1, wherein the camera is included in the wearable terminal device. 7. 
A photographing system comprising: a wearable terminal device including a camera configured to capture image data at a photographing interval; one or more sensors configured to generate sensor data, wherein the sensor data corresponds to a behavior of a user in possession of the wearable terminal device; and circuitry configured to transmit the sensor data to one or more external devices, receive an indication of a behavior of the user, and determine, based on the indication of the behavior of the user, the photographing interval of the camera, and control the camera to capture the image data at the determined photographing interval; and a communication device including circuitry configured to receive the sensor data from the wearable terminal device, determine, based on the sensor data, the behavior of the user, and output the determination result of the behavior of the user to the wearable terminal device. 8. The photographing system of claim 7, wherein the circuitry of the communication device is further configured to: determine, based on the indication of the behavior of the user, the photographing interval of the camera, and control the camera to capture the image data at the determined photographing interval.
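The core loop of these claims — classify the user's state from sensor data, then map the state to a photographing interval — can be sketched as follows. The thresholds, state labels, and interval values are invented for illustration; the claims leave the classification method open:

```python
# Illustrative state-to-interval table; values are assumptions, not from
# the claims.
INTERVALS_SEC = {"calm": 60.0, "active": 10.0, "excited": 2.0}

def photographing_interval(heart_rate_bpm: float, motion_level: float):
    """Classify the user's state from biological and motion sensor data
    (claims 1 and 5) and map it to a camera interval (the control step
    of claim 1)."""
    if heart_rate_bpm > 120 or motion_level > 0.8:
        state = "excited"
    elif heart_rate_bpm > 90 or motion_level > 0.4:
        state = "active"
    else:
        state = "calm"
    return state, INTERVALS_SEC[state]
```

In the system of claim 7, the classification half of this function would run on the communication device and only the resulting state indication would travel back to the wearable terminal.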
2,400
7,443
7,443
14,148,129
2,435
A system includes a processor configured to receive a notification request from an entity in communication with a vehicle computing system (VCS). The processor is also configured to receive notification content and parameters. The processor is further configured to validate a right of the entity to display a notification on the VCS. Also, the processor is configured to validate the content based on permitted content. The processor is additionally configured to validate the parameters based on permitted parameters and queue a notification for display following successful right, content, and parameter verification.
1. A system comprising: a processor configured to: receive a notification request from an entity in communication with a vehicle computing system (VCS); receive notification content and parameters; validate a right of the entity to display a notification on the VCS; validate the content based on permitted content; validate the parameters based on permitted parameters; and queue a notification for display following successful right, the content and parameter verification. 2. The system of claim 1, wherein the entity includes a radar detector. 3. The system of claim 1, wherein the entity includes an application running on a remote device. 4. The system of claim 1, wherein the permitted content includes certain types of content. 5. The system of claim 1, wherein the permitted content includes certain types of content in predefined situations. 6. The system of claim 1, wherein the processor is further configured to receive replacement content to replace content that is invalidated. 7. The system of claim 6, wherein the processor is further configured to replace a queued notification with a new notification generated based on received replacement content. 8. The system of claim 1, wherein the processor is further configured to generate a notification according to a predefined format based on the received notification request and content. 9. The system of claim 1, wherein the processor is further configured to report invalidated content to the entity. 10. The system of claim 1, wherein the processor is further configured to replace invalidated parameters with predefined generic parameters. 11. 
A computer-implemented method comprising: receiving a notification request from an entity in communication with a vehicle computing system (VCS); receiving notification content and parameters; validating a right of the entity to display a notification on the VCS; validating the content based on permitted content; validating the parameters based on permitted parameters; and queuing a notification for display following successful right, the content and parameter verification. 12. The method of claim 11, wherein the entity includes a radar detector. 13. The method of claim 11, wherein the entity includes an application running on a remote device. 14. The method of claim 11, wherein the permitted content includes certain types of content. 15. The method of claim 11, wherein the permitted content includes certain types of content in predefined situations. 16. The method of claim 11, wherein the method includes receiving replacement content to replace content that is invalidated. 17. The method of claim 16, wherein the method includes replacing a queued notification with a new notification generated based on received replacement content. 18. The method of claim 11, wherein the processor is further configured to report invalidated content to the entity. 19. The method of claim 11, wherein the processor is further configured to replace invalidated parameters with predefined generic parameters. 20. 
A non-transitory computer-readable storage medium, storing instructions that, when executed by a processor, cause the processor to perform a method comprising: receiving a notification request from an entity in communication with a vehicle computing system (VCS); receiving notification content and parameters; validating a right of the entity to display a notification on the VCS; validating the content based on permitted content; validating the parameters based on permitted parameters; and queuing a notification for display following successful right, the content and parameter verification.
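The three-stage validation these claims describe — right, content, parameters — followed by queuing can be sketched as a small pipeline. All the registries and default values below are assumptions for illustration; the claims only require that permitted entities, permitted content, and permitted parameters exist:

```python
from collections import deque

# Hypothetical registries standing in for the claimed "permitted" sets.
PERMITTED_ENTITIES = {"nav_app", "radar_detector"}
PERMITTED_CONTENT_TYPES = {"text", "icon"}
PERMITTED_PARAM_KEYS = {"duration", "priority"}
GENERIC_PARAMS = {"duration": 5, "priority": "low"}  # claim 10's generic fallback

display_queue = deque()

def handle_notification(entity, content_type, content, params):
    if entity not in PERMITTED_ENTITIES:              # validate display right
        return "rejected: no display right"
    if content_type not in PERMITTED_CONTENT_TYPES:   # validate content
        return "rejected: content invalidated"
    # Per claim 10, invalidated parameters are replaced with predefined
    # generic parameters rather than causing rejection.
    clean = {k: v for k, v in params.items() if k in PERMITTED_PARAM_KEYS}
    display_queue.append((entity, content, {**GENERIC_PARAMS, **clean}))
    return "queued"
```

A rejected-content result would, per claims 6 and 9, be reported back to the entity, which could then supply replacement content for a second pass through the same pipeline.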
2,400
7,444
7,444
13,596,222
2,442
A system is for pushing information from a host system to a mobile data communication device upon sensing a triggering event. A redirector program operating at the host system enables a user to continuously redirect user-selected data items from the host system to the user's mobile data communication device upon detecting that one or more user-defined triggering events has occurred. The redirector program operates in connection with event generating applications and repackaging systems at the host system to configure and detect particular user-defined events, and to repackage the user-selected data items in an electronic wrapper prior to pushing the data items to the mobile device.
1. A method comprising: receiving, by a mobile communication device, a message that included an attachment before being received by the mobile communication device; creating, on the mobile communication device, an electronic message that includes a control file; and transmitting, by the mobile communication device, the electronic message to a wireless network for delivery to a relay system that provides a communication link between the wireless network and an attachment processor via a computer network; the control file being configured to control the attachment processor to process the attachment. 2. The method of claim 1 wherein the attachment processor is a printer for which processing the attachment entails printing the attachment. 3. The method of claim 1 wherein the attachment is a video file and the attachment processor is a video displayer. 4. The method of claim 1 wherein the attachment processor is a projector for projecting the attachment. 5. The method of claim 1 wherein the received message is an email message. 6. The method of claim 1 wherein the control file is an attachment to the electronic message. 7. The method of claim 1 wherein the control file is converted into an executable format by the attachment processor. 8. The method of claim 1 further comprising, after the transmitting step: receiving, by the mobile communication device, a notification that the processing of the attachment has been performed. 9. The method of claim 6 wherein the notification is received by the mobile communication device in an encrypted format, and the message is decrypted by the mobile communication device. 10. The method of claim 1 wherein the message is stripped of the attachment before being received by the mobile communication device. 11. The method of claim 1 wherein only a portion of the attachment is included in the message when the message is received by the mobile communication device. 12. 
The method of claim 1 further comprising, between the receiving step and the creating step: querying, by the mobile communication device, attachment processors including said attachment processor; receiving, by the mobile communication device, responses from the attachment processors; selecting, at the mobile communication device, said attachment processor from among the responding attachment processors; and transmitting, by the mobile communication device to a host system, a request to forward the attachment to the selected attachment processor. 13. The method of claim 12 wherein the querying is via short range wireless communication from the mobile communication device to the attachment processor. 14. The method of claim 12 wherein the responses include unique identifiers of the respective attachment processors for addressing communications to the attachment processors through the communication link. 15. The method of claim 1 further comprising: using a bar code scanner on the mobile communication device to scan a bar code on the attachment processor, the bar code indicating a unique identifier of the attachment processor for addressing communications to the attachment processor through the communication link. 16. The method of claim 1 further comprising, between the creating step and the receiving step: receiving, by the mobile communication device from a host system, a list of attachment processors including said attachment processor; selecting, at the mobile communication device, said attachment processor from among the attachment processors; and transmitting, by the mobile communication device to the host system, a request to forward the attachment to the selected attachment processor. 17. 
A system comprising: a mobile communication device configured to create an electronic message that includes a control file; the mobile communication device being further configured to transmit the electronic message to a wireless network for delivery to a relay system that provides a communication link between the wireless network and a computer network; the electronic message being for transmission by the relay system to the computer network for delivery to an attachment processor; and the control file being included in the electronic message for use by the attachment processor to control processing of an attachment that was included in a message that was received by the mobile communication device. 18. The system of claim 17 wherein the control file is an attachment to the electronic message. 19. The system of claim 17 wherein the control file is converted into an executable format by the attachment processor. 20. The system of claim 17 wherein the mobile communication device is configured to, automatically upon receipt of the message, query attachment processors including said attachment processor, receive responses from the attachment processors, and select said attachment processor from among the responding attachment processors.
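The central object in these claims is an electronic message carrying a control file that directs a remote attachment processor to act on an attachment the mobile device may never have fully downloaded. A minimal sketch of building such a message, assuming a JSON control file and the field names shown (both are assumptions; the claims do not fix a format):

```python
import json

def build_control_message(attachment_id: str, processor_id: str,
                          action: str = "print") -> dict:
    """Sketch of claim 1's electronic message: a control file, attached to
    an outgoing message, telling the attachment processor what to do with
    an attachment the device may hold only a stripped portion of
    (claims 10-11). Field names are hypothetical."""
    control = {"attachment": attachment_id, "action": action}
    return {
        # Unique processor ID, e.g. learned from a query response (claim 14)
        # or a scanned bar code (claim 15).
        "to": processor_id,
        "attachments": [("control.json", json.dumps(control))],
    }
```

The relay system would carry this message from the wireless network to the computer network, where the processor converts the control file into executable form (claim 7 / claim 19) and, for a printer, prints the referenced attachment.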
2,400
7,445
7,445
14,396,713
2,466
The invention relates to methods for interference reporting by a mobile terminal in a mobile communication system. The invention also provides apparatus for performing these methods, and computer-readable media whose instructions cause the apparatus to perform the methods described herein. In order to allow for interference reporting, the mobile terminal detects an interference condition between the communication with the base station via the first resource and a communication with the wireless communication device via the second resource, reports on the interference condition to the base station, receives reconfiguration information indicating a third resource, and reconfigures the communication with the base station to the third resource. Further, the mobile terminal detects whether or not the interference condition persists for potential communications with the base station via the first resource and, in case the interference condition has been resolved, reports on an interference resolution to the base station.
1. A method for interference reporting by a mobile terminal in a mobile communication system including a base station and a wireless communication device, the mobile terminal being in communication with the base station via a first resource and being configured for communication with the wireless communication device via a second resource, the method comprising the steps of: detecting, by the mobile terminal, an interference condition between the communication with the base station via the first resource and a communication with the wireless communication device via the second resource; reporting, by the mobile terminal, on the interference condition to the base station; receiving, by the mobile terminal, reconfiguration information indicating a third resource for communication with the base station and reconfiguring the communication with the base station to the third resource; and detecting, by the mobile terminal, whether or not the interference condition persists for potential communications with the base station via the first resource, and, in case the interference condition has been resolved, reporting, by the mobile terminal, on an interference resolution to the base station for communications via the first resource. 2. The method according to claim 1, wherein the step of detecting whether or not the interference condition persists for potential communications with the base station via the first resource is performed by the mobile terminal after a step of: reporting, by the mobile terminal, on an interference avoidance to the base station, in case there is no interference condition between the reconfigured communication with the base station via the third resource and the communication with the wireless communication device via the second resource. 3. 
The method according to claim 1, wherein the step of detecting whether or not the interference condition persists for potential communications with the base station via the first resource is repeatedly performed until detection that the interference condition has been resolved. 4. The method according to claim 1, wherein the step of detecting whether or not the interference condition persists for potential communications with the base station via the first resource includes determining by the mobile terminal: if the communication with the wireless communication device via the second resource has been terminated, and/or if the communication with the wireless communication device has been reconfigured. 5. The method according to claim 1, wherein the step of detecting, by the mobile terminal, the interference condition includes detecting interference conditions between uplink and/or downlink transmissions with the base station via the first resource and uplink and/or downlink transmissions with the wireless communication device via the second resource. 6. The method according to claim 1, wherein the step of reporting, by the mobile terminal, on the interference condition includes transmitting an indication of the first resource to the base station and/or transmitting a radio resource control (RRC) message via an uplink distributed control channel (UL-DCCH) message to the base station. 7. (canceled) 8. The method according to claim 2, wherein the step of reporting, by the mobile terminal, on an interference avoidance includes transmitting another radio resource control (RRC) message via an uplink distributed control channel, UL-DCCH, message to the base station, wherein the another RRC message optionally includes an indication of the first resource for which the interference condition was detected. 9. 
The method according to claim 1, wherein the step of reporting on the interference resolution is performed by the mobile terminal including an indication on the interference resolution for communications via the first resource within: a further radio resource control (RRC) message, a power headroom report (PHR) message, an extended power headroom report (e-PHR) message, a channel quality identifier (CQI) message, or a buffer status report (BSR) message, to be transmitted by the mobile terminal to the base station. 10. The method according to claim 9, wherein, in case a PHR/e-PHR message is used by the mobile terminal for reporting on the interference resolution to the base station, a first value of a reserved bit (R-bit) of the PHR/e-PHR message indicates a resolved interference condition to the base station for communications via the first resource, and wherein optionally, the transmission of the PHR/e-PHR message including the first value is triggered upon detection, by the mobile terminal, that the interference condition has been resolved for communications via the first resource, a second value of a reserved bit (R-bit) of the PHR/e-PHR message indicates a persisting interference condition for communications via the first resource, and/or PHR/e-PHR messages including the second value are to be transmitted by the mobile terminal during a time period between the detection of the interference condition for communications with the base station via the first resource and the detection that the interference condition has been resolved for communications via the first resource. 11. (canceled) 12. 
The method according to claim 9, wherein, in case an e-PHR message is used by the mobile terminal for reporting on the interference resolution to the base station, another reserved bit of the e-PHR message indicates the communication technology used for communicating with the wireless communication device via the second resource, and/or at least one further reserved bit of the e-PHR message indicates the interference condition on a per-cell basis of the cells included in the first resource for which the interference condition has been detected. 13. A mobile terminal for interference reporting in a mobile communication system including a base station and a wireless communication device, the mobile terminal being in communication with the base station via a first resource and being configured for communication with the wireless communication device via a second resource, the mobile terminal comprising: a processor configured to detect an interference condition between the communication with the base station via the first resource and a communication with the wireless communication device via the second resource; a transmitting circuit configured to report on the interference condition to the base station; and a receiving circuit configured to receive reconfiguration information indicating a third resource for communication with the base station, the processor being configured to reconfigure the communication with the base station to the third resource; wherein the processor is further configured to detect whether or not the interference condition persists for potential communications with the base station via the first resource, and, in case the interference condition has been resolved, the transmitting circuit is configured to report on an interference resolution to the base station for communications via the first resource. 14. 
The mobile terminal according to claim 13, wherein the transmitting circuit is further configured to report on an interference avoidance to the base station, in case there is no interference condition between the reconfigured communication with the base station via the third resource and the communication with the wireless communication device via the second resource, and the processor is configured to detect whether or not the interference condition persists for potential communications with the base station via the first resource after the transmitting circuit reports on the interference avoidance to the base station. 15. The mobile terminal according to claim 13, wherein the processor of the mobile terminal is configured to repeatedly detect whether or not the interference condition persists for potential communications with the base station via the first resource until detection that the interference condition has been resolved. 16. The mobile terminal according to claim 13, wherein the processor of the mobile terminal is configured to detect whether or not the interference condition persists for potential communications with the base station via the first resource by determining if the communication with the wireless communication device via the second resource has been terminated, and/or if the communication with the wireless communication device has been reconfigured. 17. The mobile terminal according to claim 13, wherein the processor of the mobile terminal is configured to detect interference conditions between uplink and/or downlink transmissions with the base station via the first resource and uplink and/or downlink transmissions with the wireless communication device via the second resource. 18. 
The mobile terminal according to claim 13, wherein the transmitting circuit of the mobile terminal is configured to report on the interference condition by transmitting an indication of the first resource to the base station and/or transmitting a radio resource control (RRC) message via an uplink distributed control channel (UL-DCCH) message to the base station. 19. (canceled) 20. The mobile terminal according to claim 13, wherein the transmitting circuit of the mobile terminal is configured to report on an interference avoidance by transmitting another radio resource control (RRC) message via an uplink distributed control channel (UL-DCCH) message to the base station, wherein the another RRC message optionally includes an indication of the first resource for which the interference condition was detected. 21. The mobile terminal according to claim 13, wherein the transmitting circuit of the mobile terminal is configured to report on the interference resolution by including an indication on the interference resolution for communications via the first resource within: a further radio resource control (RRC) message, a power headroom report (PHR) message, an extended power headroom report (e-PHR) message, a channel quality identifier (CQI) message, or a buffer status report (BSR) message to be transmitted by the mobile terminal to the base station. 22. 
The mobile terminal according to claim 21, wherein, in case the transmitting circuit of the mobile terminal is configured to use a PHR/e-PHR message for reporting on the interference resolution to the base station, a first value of a reserved bit (R-bit) of the PHR/e-PHR message indicates a resolved interference condition to the base station for communications via the first resource, and wherein optionally, the transmission of the PHR/e-PHR message including the first value is triggered upon detection, by the processor of mobile terminal, that the interference condition has been resolved for communications via the first resource, a second value of a reserved bit (R-bit) of the PHR/e-PHR message indicates a persisting interference condition for communications via the first resource, and/or the transmitting circuit of the mobile terminal is configured to transmit PHR/e-PHR messages including the second value during a time period between the detection of the interference condition for communications with the base station via the first resource and the detection that the interference condition has been resolved for communications via the first resource. 23-24. (canceled) 25. The mobile terminal according to claim 21, wherein, in case the transmitting circuit of the mobile terminal is configured to use an e-PHR message for reporting on the interference resolution to the base station, another reserved bit of the e-PHR message indicates the communication technology used for communicating with the wireless communication device via the second resource, and/or at least one further reserved bit of the e-PHR message indicates the interference condition on a per-cell basis of the cells included in the first resource for which the interference condition has been detected. 26. (canceled) 27. 
A computer readable medium storing instructions that, when executed by a processor of a mobile terminal, cause the mobile terminal to report on interference in a mobile communication system including a base station and a wireless communication device, the mobile terminal being in communication with the base station via a first resource and being configured for communication with the wireless communication device via a second resource, by: detecting, by the mobile terminal, an interference condition between the communication with the base station via the first resource and a communication with the wireless communication device via the second resource; reporting on the interference condition to the base station; receiving reconfiguration information indicating a third resource for communication with the base station and reconfiguring the communication with the base station to the third resource; and detecting whether or not the interference condition persists for potential communications with the base station via the first resource, and, in case the interference condition has been resolved, reporting on an interference resolution to the base station for communications via the first resource. 28. (canceled)
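Claims 10 and 12 reuse reserved bits of a PHR/e-PHR message to signal the interference status: one bit value means the condition is resolved, the other means it persists. A minimal bit-packing sketch follows; the concrete layout (status bit in the top bit of a single octet, 6-bit power headroom field) is an assumption made for illustration only and is not fixed by the claims.

```python
# Illustrative packing of the interference status into a reserved bit of a
# single PHR octet. The bit positions below are assumptions for this sketch.

RESOLVED_BIT = 0x80  # "first value": interference resolved
# cleared status bit ("second value"): interference still persists


def pack_phr_octet(power_headroom, resolved):
    """Pack a 6-bit power headroom value plus the status bit into one octet."""
    if not 0 <= power_headroom < 64:
        raise ValueError("power headroom must fit in 6 bits")
    octet = power_headroom & 0x3F
    if resolved:
        octet |= RESOLVED_BIT
    return octet


def unpack_phr_octet(octet):
    """Return (power_headroom, resolved) decoded from one octet."""
    return octet & 0x3F, bool(octet & RESOLVED_BIT)
```

Claim 12's extra reserved bits (communication technology, per-cell status) could be packed the same way into further bit positions of an e-PHR octet.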
2,400
7,446
7,446
13,240,572
2,492
A method and system for creating a composite security rating from security characterization data of a third party computer system. The security characterization data is derived from externally observable characteristics of the third party computer system. Advantageously, the composite security score has a relatively high likelihood of corresponding to an internal audit score despite use of externally observable security characteristics. Also, the method and system may include use of multiple security characterizations all solely derived from externally observable characteristics of the third party computer system.
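The composite security rating described in this abstract amounts to a weighted aggregation of several externally observable characterizations. The sketch below is hypothetical: the metric names, weights, and the plain weighted-average formula are invented for illustration and are not taken from the patent.

```python
# Illustrative weighted composite of externally observable security metrics.
# Metric names and weights are invented for this sketch; the method derives
# its inputs from observable characteristics of the third-party system.

def composite_rating(metrics, weights):
    """Weighted average of metric values normalized to [0, 1]."""
    total_weight = sum(weights[name] for name in metrics)
    score = sum(metrics[name] * weights[name] for name in metrics)
    return score / total_weight


weights = {"malware_servings": 3.0, "spam_activity": 1.0, "patch_latency": 2.0}
metrics = {"malware_servings": 0.9, "spam_activity": 0.5, "patch_latency": 0.8}
```

With these example numbers the composite is (0.9*3 + 0.5*1 + 0.8*2) / 6 = 0.8, a single score that can be compared against an internal audit score.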
1. A method comprising: collecting information about two or more companies or other organizations that have computer systems, network resources, and employees, the organizations posing risks to themselves or to other parties through business relationships of the organizations with the other parties, the information collected about the organizations being indicative of compromises or of vulnerabilities of technology systems, data, or other information of the organizations, and indicative of resiliencies of the organizations to recover from security breaches including compromises or vulnerabilities or configurations, at least some of the information about each of the organizations being collected automatically by computer using sensors on the Internet, the information about each of the organizations being collected from two or more sources, one or more of the sources not being controlled by the organization, the information from at least the one or more sources that are not controlled by the organization being collected without permission of the organization, at least partly automatically gathering information about assets that each of the organizations owns, controls, uses, or is affiliated with, including IP network address ranges, computer services residing within address ranges, or domain names, at least one of the sources for each of the organizations comprising a public source or a commercial source, processing by computer the information from the two or more sources for each of the organizations to form a composite rating of the organization that is indicative of a degree of risk to the organization or to a party through a business relationship with the organization, the composite rating comprising a calculated composite of metrics and data derived or collected from the sources, the processing comprising applying transformations to the data and metrics, and the processing comprising applying weights to the data and the metrics, the metrics including a measure 
of the extent of or the frequency of or duration of compromise of the computer systems or data of the organization, or of a configuration or vulnerability of the organization, and a measure of the resilience of the organization to recover from a security breach or vulnerability, and in connection with assessing a business risk to the organization or to a party through a business relationship with at least one of the organizations, delivering a report of the composite ratings of the organizations through a reporting facility to enable a user of the reporting facility to monitor, assess, and mitigate the risks, based on the security vulnerabilities and resiliencies, in doing business with the organization and to compare the composite ratings of the organizations. 2. (canceled) 3. (canceled) 4. The method of claim 1, wherein the collected information is represented by at least two data types. 5. The method of claim 4, wherein the at least two data types include at least one of breach disclosures, block lists, configuration parameters, an identification of malware servers, an identification of a reputation, an identification of suspicious activity, an identification of spyware, white lists, an identification of compromised hosts, an identification of malicious activity, an identification of spam activity, an identification of vulnerable hosts, an identification of phishing activity, or an identification of e-mail viruses. 6. (canceled) 7. The method of claim 1, wherein the collected information evidences internal security controls. 8. The method of claim 1, wherein the collected information comprises outcomes of each of the organizations. 9. The method of claim 1, wherein the collected information evidences operational execution of security measures of each of the organizations. 10. The method of claim 1, wherein the collected information indicates whether a computer system of each of the organizations served malicious code to another system. 11. 
The method of claim 1, wherein the collected information indicates whether a computer system of each of the organizations communicated with a known attacker controlled network or sensor outside the control or network of the organization. 12. The method of claim 1, comprising: forming a series of the security ratings of each of the organizations. 13. The method of claim 12, comprising: determining a trend from the series of ratings. 14. The method of claim 12, comprising displaying the series of composite ratings. 15. (canceled) 16. The method of claim 14, wherein displaying the series of composite ratings for each of the organizations comprises posting the series of composite ratings for the organization to a web portal. 17. (canceled) 18. (canceled) 19-120. (canceled) 121. The method of claim 1, wherein the collected information represents externally observable outcome information. 122. The method of claim 121, wherein the outcome information comprises outcome information associated with security vulnerability or resilience of each of the organizations. 123. (canceled) 124. The method of claim 1, comprising: determining a badness score that corresponds to an intensity or duration of malicious activity determined from the collected information. 125. (canceled) 126. The method of claim 1, wherein: the collected information comprises at least two security characterizations of a computer system of each of the organizations. 127. The method of claim 1, wherein the collected information comprises a characterization of behavior of an employee of at least one of the organizations. 128. The method of claim 1, wherein the collected information comprises characterizations other than characterizations about a computer system of at least one of the organizations. 129. The method of claim 1, wherein the collected information comprises characterizations about policies of at least one of the organizations. 130. 
The method of claim 1, wherein the collected information comprises characterizations about information technology assets that at least one of the organizations owns, controls, uses, or is affiliated with. 131. The method of claim 1 in which collected information represents: (a) physical states, (b) technical states, (c) organizational states, or (d) cultural states of at least one of the organizations which can be exploited to create a security breach; or (e) the organization's ability to recover from a security breach; or any combination of two or more of these. 132. The method of claim 1 in which processing the information to form a composite rating comprises correlating data across the sources of information. 133. (canceled) 134. The method of claim 1 in which processing the information to form a composite rating comprises statistically correlating the composite rating with actual outcomes. 135. The method of claim 1 in which the vulnerability comprises physical, technical, organizational, or cultural states that can be exploited to create a security breach. 136. 
A method comprising collecting, from at least two sources, information about two or more companies or other organizations, the collected information representing at least two data types, the collected information comprising outcomes of each of the organizations, at least some of the information for each of the organizations being collected automatically by computer from at least two sources, one or more of the sources not controlled by the organization, the information from at least the one or more sources that are not controlled by the organization being collected without permission of the organization, at least one of the sources including a commercial data source, processing the information from both of the two sources for each of the organizations to form a composite rating of a security vulnerability of the organization to creation of a security breach or a compromise of a system of the organization or of a resilience of the organization to recover from a security breach or both, the processing including applying models that account for differences in the respective sources, normalizing the composite rating of each of the organizations based on a size characteristic of the organization to enable comparisons of composite ratings between the two or more organizations, forming a series of the security ratings of each of the organizations, determining a trend from the series of ratings, and displaying the series of composite ratings through a portal, determining a badness score that corresponds to an intensity or duration of malicious activity determined from the collected information, and reporting the composite ratings of the organizations through a portal to enable customers to monitor, assess, and mitigate risk in doing business with the organizations and to compare the composite ratings across the organizations. 137. 
The system of claim 1, determining a confidence range of the composite rating, the size of the range varying inversely with the level of confidence in the composite rating. 138. The system of claim 137, wherein the confidence range is based on redundancy of the security characterizations. 139. The system of claim 137, wherein the confidence range is based on a size of the third party computer system. 140. The method of claim 1 comprising ranking the organization and other organizations within sectors or peer-groups or globally. 141. A method comprising: collecting information about an organization that has computer systems, network resources, and employees, the organization posing risks to itself or to other parties through business relationships of the organization with the other parties, the information collected about the organization including (a) information collected automatically by computer on the Internet without permission of the organization, and (b) information indicative of resiliencies of the organization to recover from a security breach associated with a compromise or a vulnerability or a configuration, processing the information by computer to form a composite rating of the organization that is indicative of a degree of risk to the organization or to a party through a business relationship with the organization, the composite rating being based on metrics that include a measure of the resiliencies of the organization to recover from a security breach associated with a compromise or a vulnerability or a configuration, and in connection with assessing a business risk to the organization or to a party through a business relationship with the organization, delivering a report of the composite rating of the organization through a reporting facility to enable a user of the reporting facility to assess the risks, based at least in part on the resiliencies, in doing business with the organization.
A method and system for creating a composite security rating from security characterization data of a third party computer system. The security characterization data is derived from externally observable characteristics of the third party computer system. Advantageously, the composite security score has a relatively high likelihood of corresponding to an internal audit score despite use of externally observable security characteristics. Also, the method and system may include use of multiple security characterizations all solely derived from externally observable characteristics of the third party computer system.1. A method comprising: collecting information about two or more companies or other organizations that have computer systems, network resources, and employees, the organizations posing risks to themselves or to other parties through business relationships of the organizations with the other parties, the information collected about the organizations being indicative of compromises or of vulnerabilities of technology systems, data, or other information of the organizations, and indicative of resiliencies of the organizations to recover from security breaches including compromises or vulnerabilities or configurations, at least some of the information about each of the organizations being collected automatically by computer using sensors on the Internet, the information about each of the organizations being collected from two or more sources, one or more of the sources not being controlled by the organization, the information from at least the one or more sources that are not controlled by the organization being collected without permission of the organization, at least partly automatically gathering information about assets that each of the organizations owns, controls, uses, or is affiliated with, including IP network address ranges, computer services residing within address ranges, or domain names, at least one of the sources for each of the organizations 
comprising a public source or a commercial source, processing by computer the information from the two or more sources for each of the organizations to form a composite rating of the organization that is indicative of a degree of risk to the organization or to a party through a business relationship with the organization, the composite rating comprising a calculated composite of metrics and data derived or collected from the sources, the processing comprising applying transformations to the data and metrics, and the processing comprising applying weights to the data and the metrics, the metrics including a measure of the extent of or the frequency of or duration of compromise of the computer systems or data of the organization, or of a configuration or vulnerability of the organization, and a measure of the resilience of the organization to recover from a security breach or vulnerability, and in connection with assessing a business risk to the organization or to a party through a business relationship with at least one of the organizations, delivering a report of the composite ratings of the organizations through a reporting facility to enable a user of the reporting facility to monitor, assess, and mitigate the risks, based on the security vulnerabilities and resiliencies, in doing business with the organization and to compare the composite ratings of the organizations. 2. (canceled) 3. (canceled) 4. The method of claim 1, wherein the collected information is represented by at least two data types. 5. 
The method of claim 4, wherein the at least two data types include at least one of breach disclosures, block lists, configuration parameters, an identification of malware servers, an identification of a reputation, an identification of suspicious activity, an identification of spyware, white lists, an identification of compromised hosts, an identification of malicious activity, an identification of spam activity, an identification of vulnerable hosts, an identification of phishing activity, or an identification of e-mail viruses. 6. (canceled) 7. The method of claim 1, wherein the collected information evidences internal security controls. 8. The method of claim 1, wherein the collected information comprises outcomes of each of the organizations. 9. The method of claim 1, wherein the collected information evidences operational execution of security measures of each of the organizations. 10. The method of claim 1, wherein the collected information indicates whether a computer system of each of the organizations served malicious code to another system. 11. The method of claim 1, wherein the collected information indicates whether a computer system of each of the organizations communicated with a known attacker controlled network or sensor outside the control or network of the organization. 12. The method of claim 1, comprising: forming a series of the security ratings of each of the organizations. 13. The method of claim 12, comprising: determining a trend from the series of ratings. 14. The method of claim 12, comprising displaying the series of composite ratings. 15. (canceled) 16. The method of claim 14, wherein displaying the series of composite ratings for each of the organizations comprises posting the series of composite ratings for the organization to a web portal. 17. (canceled) 18. (canceled) 19-120. (canceled) 121. The method of claim 1, wherein the collected information represents externally observable outcome information. 122. 
The method of claim 121, wherein the outcome information comprises outcome information associated with security vulnerability or resilience of each of the organizations. 123. (canceled) 124. The method of claim 1, comprising: determining a badness score that corresponds to an intensity or duration of malicious activity determined from the collected information. 125. (canceled) 126. The method of claim 1, wherein: the collected information comprises at least two security characterizations of a computer system of each of the organizations. 127. The method of claim 1, wherein the collected information comprises a characterization of behavior of an employee of at least one of the organizations. 128. The method of claim 1, wherein the collected information comprises characterizations other than characterizations about a computer system of at least one of the organizations. 129. The method of claim 1, wherein the collected information comprises characterizations about policies of at least one of the organizations. 130. The method of claim 1, wherein the collected information comprises characterizations about information technology assets that at least one of the organizations owns, controls, uses, or is affiliated with. 131. The method of claim 1 in which collected information represents: (a) physical states, (b) technical states, (c) organizational states, or (d) cultural states of at least one of the organizations which can be exploited to create a security breach; or (e) the organization's ability to recover from a security breach; or any combination of two or more of these. 132. The method of claim 1 in which processing the information to form a composite rating comprises correlating data across the sources of information. 133. (canceled) 134. The method of claim 1 in which processing the information to form a composite rating comprises statistically correlating the composite rating with actual outcomes. 135. 
The method of claim 1 in which the vulnerability comprises physical, technical, organizational, or cultural states that can be exploited to create a security breach. 136. A method comprising collecting, from at least two sources, information about two or more companies or other organizations, the collected information representing at least two data types, the collected information comprising outcomes of each of the organizations, at least some of the information for each of the organizations being collected automatically by computer from at least two sources, one or more of the sources not controlled by the organization, the information from at least the one or more sources that are not controlled by the organization being collected without permission of the organization, at least one of the sources including a commercial data source, processing the information from both of the two sources for each of the organizations to form a composite rating of a security vulnerability of the organization to creation of a security breach or a compromise of a system of the organization or of a resilience of the organization to recover from a security breach or both, the processing including applying models that account for differences in the respective sources, normalizing the composite rating of each of the organizations based on a size characteristic of the organization to enable comparisons of composite ratings between the two or more organizations, forming a series of the security ratings of each of the organizations, determining a trend from the series of ratings, and displaying the series of composite ratings through a portal, determining a badness score that corresponds to an intensity or duration of malicious activity determined from the collected information, and reporting the composite ratings of the organizations through a portal to enable customers to monitor, assess, and mitigate risk in doing business with the organizations and to compare the composite ratings 
across the organizations. 137. The system of claim 1, determining a confidence range of the composite rating, the size of the range varying inversely with the level of confidence in the composite rating. 138. The system of claim 137, wherein the confidence range is based on redundancy of the security characterizations. 139. The system of claim 137, wherein the confidence range is based on a size of the third party computer system. 140. The method of claim 1 comprising ranking the organization and other organizations within sectors or peer-groups or globally. 141. A method comprising: collecting information about an organization that has computer systems, network resources, and employees, the organization posing risks to itself or to other parties through business relationships of the organization with the other parties, the information collected about the organization including (a) information collected automatically by computer on the Internet without permission of the organization, and (b) information indicative of resiliencies of the organization to recover from a security breach associated with a compromise or a vulnerability or a configuration, processing the information by computer to form a composite rating of the organization that is indicative of a degree of risk to the organization or to a party through a business relationship with the organization, the composite rating being based on metrics that include a measure of the resiliencies of the organization to recover from a security breach associated with a compromise or a vulnerability or a configuration, and in connection with assessing a business risk to the organization or to a party through a business relationship with the organization, delivering a report of the composite rating of the organization through a reporting facility to enable a user of the reporting facility to assess the risks, based at least in part on the resiliencies, in doing business with the organization.
2,400
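The composite-rating claims above describe applying weights and transformations to collected metrics, normalizing by a size characteristic of the organization, and deriving a trend from a series of ratings. A minimal sketch of that pipeline follows; the metric names, weight values, and square-root size normalization are illustrative assumptions, not details taken from the claims.

```python
# Hypothetical sketch of a weighted composite security rating with size
# normalization and a trend over a series of ratings. All names and the
# normalization choice are assumptions for illustration.

def composite_rating(metrics, weights, employee_count):
    """Weighted sum of security metrics, normalized by organization size."""
    score = sum(weights[name] * value for name, value in metrics.items())
    # Normalize so organizations of different sizes are comparable
    # (sqrt of headcount is an arbitrary illustrative size characteristic).
    return score / max(employee_count, 1) ** 0.5

def trend(series):
    """Sign of the simple slope over a series of ratings: +1, 0, or -1."""
    if len(series) < 2:
        return 0
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return (slope > 0) - (slope < 0)
```

In this sketch a "badness" metric such as compromise frequency would carry a positive weight and a resilience metric a negative one, so that a higher composite value indicates higher risk.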
7,447
7,447
13,996,577
2,486
Systems, devices and methods are described including performing scalable video coding using inter-layer residual prediction. Inter-layer residual prediction in an enhancement layer coding unit, prediction unit, or transform unit may use residual data obtained from a base layer or from a lower enhancement layer. The residual may be subjected to upsample filtering and/or refinement filtering. The upsample or refinement filter coefficients may be predetermined or may be adaptively determined.
1.-34. (canceled) 35. A method, comprising: at an enhancement layer (EL) video decoder: determining a predicted residual for a block of an EL frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 36. The method of claim 35, further comprising accessing the residual in memory. 37. The method of claim 35, wherein determining the predicted residual comprises determining the predicted residual in response to an indicator included in a bitstream received at the EL video decoder. 38. The method of claim 37, wherein, in a first state, the indicator specifies that the EL video decoder is to perform inter-layer residual prediction, and wherein, in a second state, the indicator specifies that the EL video decoder is not to perform inter-layer residual prediction. 39. The method of claim 38, wherein the indicator has been placed in one of the first state or the second state based on a rate-distortion cost. 40. The method of claim 35, wherein the residual corresponds to one or more co-located blocks of the lower EL or of the BL. 41. The method of claim 35, further comprising applying at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 42. At least one machine-readable medium comprising a plurality of instructions that in response to being executed on a computing device, cause the computing device to determine a predicted residual for a block of an enhancement layer (EL) frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 43. 
The at least one machine-readable medium of claim 42, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to access the residual in memory. 44. The at least one machine-readable medium of claim 42, wherein determining the predicted residual comprises determining the predicted residual in response to an indicator included in a bitstream received at the computing device, wherein, in a first state, the indicator specifies that the computing device is to perform inter-layer residual prediction, and wherein, in a second state, the indicator specifies that the computing device is not to perform inter-layer residual prediction. 45. The at least one machine-readable medium of claim 42, wherein the indicator has been placed in one of the first state or the second state based on a rate-distortion cost. 46. The at least one machine-readable medium of claim 42, wherein the residual corresponds to one or more co-located blocks of the lower EL or of the BL. 47. The at least one machine-readable medium of claim 42, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to apply at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 48. An apparatus, comprising: memory to store video content including at least one residual; and circuitry to access the memory, and to determine a predicted residual of a block of an enhancement layer (EL) frame based, at least in part, on the at least one residual, wherein the at least one residual comprises at least one residual obtained from at least one of a lower EL frame or a base layer (BL) frame. 49. 
The apparatus of claim 48, wherein determining a predicted residual comprises determining a predicted residual based on an indicator stored in the memory, wherein, in a first state, the indicator specifies that the circuitry is to perform inter-layer residual prediction, and wherein, in a second state, the indicator specifies that the circuitry is not to perform inter-layer residual prediction. 50. The apparatus of claim 49, wherein the indicator has been placed in one of the first state or the second state in response to a rate-distortion cost. 51. The apparatus of claim 48, wherein the at least one residual corresponds to one or more co-located blocks of the lower EL frame or the BL frame. 52. The apparatus of claim 48, wherein the circuitry is to apply at least one of an upsample filter or a refining filter to the at least one residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 53. A method, comprising: at an EL video encoder: determining a predicted residual for a block of an EL frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 54. The method of claim 53, further comprising accessing the residual in memory. 55. The method of claim 53, further comprising: entropy encoding the EL frame after determining the predicted residual; and generating a bitstream that includes the entropy encoded EL frame. 56. The method of claim 55, further comprising: generating an indicator, wherein, in a first state, the indicator specifies that inter-layer residual prediction is to be performed for the block, and wherein, in a second state, the indicator specifies that inter-layer residual prediction is not to be performed for the block; and including the indicator in the bitstream. 57. 
The method of claim 56, further comprising: placing the indicator in one of the first state or the second state based on a rate-distortion cost. 58. The method of claim 53, wherein the residual corresponds to one or more co-located blocks of the lower level EL frame or the BL frame. 59. The method of claim 53, further comprising applying at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 60. At least one machine-readable medium comprising a plurality of instructions that in response to being executed on a computing device, cause the computing device to determine a predicted residual for a block of an enhancement layer (EL) frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 61. The at least one machine-readable medium of claim 60, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to access the residual in memory. 62. The at least one machine-readable medium of claim 60, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to: entropy encode the EL frame after determining the predicted residual; and generate a bitstream that includes the entropy encoded EL frame. 63. 
The at least one machine-readable medium of claim 62, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to: generate an indicator, wherein, in a first state, the indicator specifies that inter-layer residual prediction is to be performed for the block, and wherein, in a second state, the indicator specifies that inter-layer residual prediction is not to be performed for the block; and include the indicator in the bitstream. 64. The at least one machine-readable medium of claim 63, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to place the indicator in one of the first state or the second state based on a rate-distortion cost. 65. The at least one machine-readable medium of claim 60, wherein the residual corresponds to one or more co-located blocks of the lower level EL frame or the BL frame. 66. The at least one machine-readable medium of claim 60, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to apply at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients.
Systems, devices and methods are described including performing scalable video coding using inter-layer residual prediction. Inter-layer residual prediction in an enhancement layer coding unit, prediction unit, or transform unit may use residual data obtained from a base layer or from a lower enhancement layer. The residual may be subjected to upsample filtering and/or refinement filtering. The upsample or refinement filter coefficients may be predetermined or may be adaptively determined.1.-34. (canceled) 35. A method, comprising: at an enhancement layer (EL) video decoder: determining a predicted residual for a block of an EL frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 36. The method of claim 35, further comprising accessing the residual in memory. 37. The method of claim 35, wherein determining the predicted residual comprises determining the predicted residual in response to an indicator included in a bitstream received at the EL video decoder. 38. The method of claim 37, wherein, in a first state, the indicator specifies that the EL video decoder is to perform inter-layer residual prediction, and wherein, in a second state, the indicator specifies that the EL video decoder is not to perform inter-layer residual prediction. 39. The method of claim 38, wherein the indicator has been placed in one of the first state or the second state based on a rate-distortion cost. 40. The method of claim 35, wherein the residual corresponds to one or more co-located blocks of the lower EL or of the BL. 41. The method of claim 35, further comprising applying at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 42. 
At least one machine-readable medium comprising a plurality of instructions that in response to being executed on a computing device, cause the computing device to determine a predicted residual for a block of an enhancement layer (EL) frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 43. The at least one machine-readable medium of claim 42, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to access the residual in memory. 44. The at least one machine-readable medium of claim 42, wherein determining the predicted residual comprises determining the predicted residual in response to an indicator included in a bitstream received at the computing device, wherein, in a first state, the indicator specifies that the computing device is to perform inter-layer residual prediction, and wherein, in a second state, the indicator specifies that the computing device is not to perform inter-layer residual prediction. 45. The at least one machine-readable medium of claim 42, wherein the indicator has been placed in one of the first state or the second state based on a rate-distortion cost. 46. The at least one machine-readable medium of claim 42, wherein the residual corresponds to one or more co-located blocks of the lower EL or of the BL. 47. The at least one machine-readable medium of claim 42, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to apply at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 48. 
An apparatus, comprising: memory to store video content including at least one residual; and circuitry to access the memory, and to determine a predicted residual of a block of an enhancement layer (EL) frame based, at least in part, on the at least one residual, wherein the at least one residual comprises at least one residual obtained from at least one of a lower EL frame or a base layer (BL) frame. 49. The apparatus of claim 48, wherein determining a predicted residual comprises determining a predicted residual based on an indicator stored in the memory, wherein, in a first state, the indicator specifies that the circuitry is to perform inter-layer residual prediction, and wherein, in a second state, the indicator specifies that the circuitry is not to perform inter-layer residual prediction. 50. The apparatus of claim 49, wherein the indicator has been placed in one of the first state or the second state in response to a rate-distortion cost. 51. The apparatus of claim 48, wherein the at least one residual corresponds to one or more co-located blocks of the lower EL frame or the BL frame. 52. The apparatus of claim 48, wherein the circuitry is to apply at least one of an upsample filter or a refining filter to the at least one residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 53. A method, comprising: at an EL video encoder: determining a predicted residual for a block of an EL frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 54. The method of claim 53, further comprising accessing the residual in memory. 55. 
The method of claim 53, further comprising: entropy encoding the EL frame after determining the predicted residual; and generating a bitstream that includes the entropy encoded EL frame. 56. The method of claim 55, further comprising: generating an indicator, wherein, in a first state, the indicator specifies that inter-layer residual prediction is to be performed for the block, and wherein, in a second state, the indicator specifies that inter-layer residual prediction is not to be performed for the block; and including the indicator in the bitstream. 57. The method of claim 56, further comprising: placing the indicator in one of the first state or the second state based on a rate-distortion cost. 58. The method of claim 53, wherein the residual corresponds to one or more co-located blocks of the lower level EL frame or the BL frame. 59. The method of claim 53, further comprising applying at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients. 60. At least one machine-readable medium comprising a plurality of instructions that in response to being executed on a computing device, cause the computing device to determine a predicted residual for a block of an enhancement layer (EL) frame based, at least in part, on a residual obtained from at least one of a lower EL or a base layer (BL) frame. 61. The at least one machine-readable medium of claim 60, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to access the residual in memory. 62. 
The at least one machine-readable medium of claim 60, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to: entropy encode the EL frame after determining the predicted residual; and generate a bitstream that includes the entropy encoded EL frame. 63. The at least one machine-readable medium of claim 62, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to: generate an indicator, wherein, in a first state, the indicator specifies that inter-layer residual prediction is to be performed for the block, and wherein, in a second state, the indicator specifies that inter-layer residual prediction is not to be performed for the block; and include the indicator in the bitstream. 64. The at least one machine-readable medium of claim 63, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to place the indicator in one of the first state or the second state based on a rate-distortion cost. 65. The at least one machine-readable medium of claim 60, wherein the residual corresponds to one or more co-located blocks of the lower level EL frame or the BL frame. 66. The at least one machine-readable medium of claim 60, further comprising one or more instructions that in response to being executed on the computing device, cause the computing device to apply at least one of an upsample filter or a refining filter to the residual prior to determining the predicted residual, wherein the upsample filter comprises one of fixed upsample coefficients or adaptive upsample coefficients, and wherein the refining filter comprises one of fixed refining coefficients or adaptive refining coefficients.
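The inter-layer residual prediction described in these claims can be sketched briefly. This is a minimal illustration, not the claimed implementation: the nearest-neighbor upsample stands in for the fixed/adaptive upsample filter the claims name, and a residual-energy comparison stands in for a true rate-distortion cost; all function names are hypothetical.

```python
import numpy as np

def upsample2x(residual):
    """Nearest-neighbor 2x upsample -- a stand-in for the fixed or
    adaptive upsample filter named in the claims."""
    return np.repeat(np.repeat(residual, 2, axis=0), 2, axis=1)

def encode_el_residual(el_residual, bl_residual):
    """Subtract the upsampled base-layer residual from the enhancement-layer
    residual, and keep inter-layer prediction only when it lowers residual
    energy (a crude proxy for the rate-distortion cost in the claims).
    Returns the residual to entropy-code plus the per-block indicator
    (1 = first state, perform prediction; 0 = second state, do not)."""
    predicted = upsample2x(bl_residual)
    difference = el_residual - predicted
    use_prediction = np.sum(difference ** 2) < np.sum(el_residual ** 2)
    return (difference, 1) if use_prediction else (el_residual, 0)
```

Only the difference signal (and the one-bit indicator) would then be entropy-encoded into the bitstream, which is where the coding gain comes from when layers are correlated.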
TechCenter: 2,400
Unnamed: 0: 7,448 | level_0: 7,448 | ApplicationNumber: 15,245,638 | ArtUnit: 2,458
Disclosed are various embodiments that facilitate sending input commands to an application over a network that may have variable latency characteristics. A first computing device sends a request to initiate a remote session with the application being executed by a second computing device. Upon initiation of the remote session, the first computing device receives application output data associated with the application for display via the first computing device. The first computing device may capture an input command associated with a video frame of the application output data being displayed. The input command is transmitted to the second computing device. To account for latency characteristics associated with the network, the second computing device provides the input command to the application after a delay.
1. A system, comprising: a first computing device; and a first application executable in the first computing device, wherein, when executed, the first application causes the first computing device to at least: initiate a remote session over a network with a second application being executed in a hosted environment by at least one second computing device; receive a video stream associated with a video signal being generated by the second application; render the video stream on a display associated with the first computing device; capture a first input command associated with a first input and a second input command associated with a second input; and transmit application input data comprising the first input command and the second input command to the at least one second computing device, the second input command being provided to the second application after a delay based at least in part on a latency characteristic of the network. 2. The system of claim 1, wherein the delay preserves a relative temporal relationship between the first input command and the second input command. 3. The system of claim 1, wherein the first input command is associated with a first video frame of the video stream and the second input command is associated with a second video frame of the video stream, the first video frame being displayed when the first input command is generated and the second video frame being displayed when the second input command is generated. 4. The system of claim 1, wherein the network has a variable amount of latency. 5. 
The system of claim 1, wherein the delay is based at least in part on a comparison of a first time period and a second time period, the first time period being between when the first input command was generated by the first computing device and when the second input command was generated by the first computing device and the second time period being between when the first input command is received by the at least one second computing device and when the second input command is received by the at least one second computing device. 6. The system of claim 1, wherein the first input command and the second input command are transmitted to the at least one second computing device in a batch. 7. A method, comprising: initiating, by a first computing device, a remote session with an application being executed in a hosted environment by at least one second computing device; rendering, by the first computing device, an application video output generated by the application on a display, the application video output being received from the at least one second computing device over a network; receiving, by the first computing device, an input command from an input device, the input command corresponding to a video frame of the application video output being displayed when the input command is generated; and transmitting, by the first computing device, the input command and a video frame identifier to the at least one second computing device, the input command being provided to the application via the at least one second computing device after a delay that accounts for a latency characteristic of the network. 8. The method of claim 7, further comprising receiving another input command from the input device, the other input command corresponding to another video frame of the application video output, and the input command and the other input command being transmitted to the at least one second computing device in a batch. 9. 
The method of claim 7, wherein the delay is further determined based at least in part on an input command type associated with the input command. 10. The method of claim 7, wherein the network has a variable amount of latency. 11. The method of claim 7, further comprising encoding, via the first computing device, the input command prior to transmitting to the at least one second computing device. 12. The method of claim 7, wherein the input command is associated with a video frame generated by the application, the video frame having been displayed by the first computing device relative to the input command being generated. 13. A non-transitory computer-readable medium embodying a program executable in a first computing device, wherein, when executed, the program causes the first computing device to at least: render one or more frames of a video stream on a display associated with the first computing device, the video stream being generated by an application being executed in a hosted environment by at least one second computing device; capture a plurality of input commands via one or more input devices, individual input commands of the plurality of input commands corresponding to a respective interaction by a user with the application; generate application input data comprising the individual input commands; and transmit the application input data including the individual input commands over a network to the at least one second computing device in response to determining that the individual input commands are ready to be transmitted. 14. The non-transitory computer-readable medium of claim 13, wherein the video stream is included in application output data received from the at least one second computing device. 15. The non-transitory computer-readable medium of claim 14, wherein the application output data further comprises force feedback data for the one or more input devices. 16. 
The non-transitory computer-readable medium of claim 14, wherein the application output data further comprises an audio stream multiplexed with the video stream. 17. The non-transitory computer-readable medium of claim 13, wherein the individual input commands are transmitted to the at least one second computing device at a predefined interval. 18. The non-transitory computer-readable medium of claim 13, wherein the application input data includes the plurality of input commands and a corresponding timestamp for the individual input commands that indicates when a respective input command was generated by the first computing device. 19. The non-transitory computer-readable medium of claim 13, wherein the application input data includes the individual input commands and a corresponding video frame identifier that correlates a particular video frame with a respective input command, the particular video frame being rendered on the display when the respective input command was generated. 20. The non-transitory computer-readable medium of claim 13, wherein the input command is provided to the application via the at least one second computing device after a delay in order to preserve a relative temporal relationship between the input command and another input command.
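The delay described in claim 5 above, comparing the generation-side time period against the receipt-side time period to preserve the relative temporal relationship between two input commands, can be sketched as follows. This is an illustrative sketch only; the function name is hypothetical and the timestamps (seconds) are assumed inputs.

```python
def relay_delay(gen_first, gen_second, recv_first, recv_second):
    """Delay (seconds) to apply before handing the second input command to
    the application, so that the gap between the two commands at the
    application matches the gap at which the user generated them."""
    generated_gap = gen_second - gen_first   # first time period (claim 5)
    received_gap = recv_second - recv_first  # second time period (claim 5)
    # network jitter compressed the gap; pad it back out (never negative)
    return max(0.0, generated_gap - received_gap)
```

If the network stretched the gap rather than compressed it, no further delay is added, since holding the command back longer would only worsen responsiveness.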
TechCenter: 2,400
Unnamed: 0: 7,449 | level_0: 7,449 | ApplicationNumber: 12,974,041 | ArtUnit: 2,455
In some embodiments, an apparatus includes a server that stores a set of media files. The server is configured to send an authentication code to a first communication device in response to a request from the first communication device to access the set of media files such that the first communication device can present the authentication code to a user. The server is configured to associate an identifier of a second communication device with the first communication device such that a user of the second communication device can authorize access to the set of media files from the first communication device by sending the authentication code to the server using the second communication device.
1. A non-transitory processor-readable medium storing code representing instructions to cause a processor to: receive, from a first communication device, a request to access a media file stored on a server; send an authentication code to the first communication device such that the first communication device can present the authentication code to a user of the first communication device; receive the authentication code from a second communication device; and provide access to the media file from the first communication device based on receiving the authentication code from the second communication device. 2. The non-transitory processor-readable medium of claim 1, wherein the media file is associated with a user account. 3. The non-transitory processor-readable medium of claim 1, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 4. The non-transitory processor-readable medium of claim 1, wherein the authentication code includes a plurality of alpha-numeric characters. 5. The non-transitory processor-readable medium of claim 2, further comprising code representing instructions to cause the processor to: receive a request from the second communication device to establish the user account; and associate an identifier of the second communication device with the user account. 6. The non-transitory processor-readable medium of claim 2, further comprising code representing instructions to cause the processor to: receive, from the second communication device, a signal instructing the server to store the media file; and associate the media file with the user account. 7. 
The non-transitory processor-readable medium of claim 1, wherein the code representing instructions to receive the authentication code includes code representing instructions to cause the processor to receive the authentication code from the second communication device in response to a user of the second communication device providing the authentication code to a media account application executing on the second communication device. 8. The non-transitory processor-readable medium of claim 1, further comprising code representing instructions to cause the processor to: receive, from the second communication device, a first signal associated with a presentation of the media file on the first communication device; and send, in response to the first signal, a second signal associated with the presentation of the media file on the first communication device to the first communication device such that the first communication device modifies the presentation of the media file on the first communication device based on the second signal. 9. An apparatus, comprising: a server storing a plurality of media files, the server configured to send an authentication code to a first communication device in response to a request from the first communication device to access the plurality of media files such that the first communication device can present the authentication code to a user, the server configured to associate an identifier of a second communication device with the first communication device such that a user of the second communication device can authorize access to the plurality of media files from the first communication device by sending the authentication code to the server using the second communication device. 10. The apparatus of claim 9, wherein the plurality of media files are associated with a user account. 11. The apparatus of claim 9, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 12. 
The apparatus of claim 9, wherein the authentication code includes a plurality of alpha-numeric characters. 13. The apparatus of claim 9, wherein the server is configured to receive the authentication code from the second communication device in response to the user of the second communication device providing the authentication code to a media account application executing on the second communication device. 14. The apparatus of claim 9, wherein the server is configured to send at least one media file from the plurality of media files to the first communication device in response to receiving the authentication code from the second communication device. 15. The apparatus of claim 9, wherein the server is configured to receive, from the second communication device, a first signal associated with a presentation of at least one media file from the plurality of media files on the first communication device, the server configured to send, in response to the first signal, a second signal associated with the presentation of the at least one media file on the first communication device to the first communication device such that the first communication device modifies the presentation of the at least one media file based on the second signal. 16. The apparatus of claim 9, wherein the server is configured to route at least one control signal from the second communication device to the first communication device such that the second communication device can control the first communication device via the server. 17. 
A method, comprising: receiving, from a first communication device, a request to access a media file stored on a server; sending an authentication code to the first communication device such that the first communication device can present the authentication code to a user of the first communication device; receiving the authentication code from a second communication device; and providing access to the media file from the first communication device based on receiving the authentication code from the second communication device. 18. The method of claim 17, wherein the media file is associated with a user account. 19. The method of claim 17, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 20. The method of claim 17, wherein the authentication code includes a plurality of alpha-numeric characters. 21. The method of claim 18, further comprising: receiving a request from the second communication device to establish the user account; and associating an identifier of the second communication device with the user account. 22. The method of claim 18, further comprising: receiving, from the second communication device, a signal instructing the server to store the media file; and associating the media file with the user account. 23. The method of claim 17, wherein the receiving the authentication code includes receiving the authentication code from the second communication device in response to a user of the second communication device providing the authentication code to a media account application executing on the second communication device. 24. 
The method of claim 17, further comprising: receiving, from the second communication device, a first signal associated with a presentation of the media file on the first communication device; and sending, in response to the first signal, a second signal associated with the presentation of the media file on the first communication device to the first communication device such that the first communication device modifies the presentation of the media file on the first communication device based on the second signal. 25. A method, comprising: sending, from a first communication device, a request to access a media file stored on a server; receiving, from the server and in response to the request, an authentication code associated with the request; presenting the authentication code to a user such that the user can send the authentication code to the server via a second communication device; and receiving, from the server and in response to the server receiving the authentication code from the second communication device, a signal indicating that the server has granted, to the first communication device, access to the media file. 26. The method of claim 25, wherein the media file is associated with a user account. 27. The method of claim 25, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 28. The method of claim 25, wherein the authentication code includes a plurality of alpha-numeric characters. 29. The method of claim 26, wherein the media file is a first media file, the method further comprising: receiving, from the first communication device and after receiving the signal indicating that the server has granted access to the first media file, a signal instructing the server to store a second media file; and associating the second media file with the user account. 30. 
The method of claim 25, wherein the signal is a first signal, the method further comprising: receiving, from the server, a second signal associated with controlling a presentation of the media file on the first communication device, the second signal being sent by the server in response to the server receiving, from the second communication device, a third signal associated with controlling the presentation of the media file on the first communication device; and modifying, based on the second signal, the presentation of the media file on the first communication device.
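The authorization flow in the claims above (first device requests access and displays a code; the user submits that code from a second device; the server then grants the first device access) resembles a device-code pairing flow and can be sketched minimally. All class and method names here are illustrative assumptions, not from the original.

```python
import secrets

class MediaAccountServer:
    """Minimal sketch of the code-based authorization flow described above."""

    def __init__(self):
        self._pending = {}    # authentication code -> requesting (first) device
        self._granted = set() # devices granted access to the media files

    def request_access(self, first_device_id):
        # Issue a short alphanumeric code for the first device to display;
        # a deployed system would also expire and rate-limit these codes.
        code = secrets.token_hex(4)
        self._pending[code] = first_device_id
        return code

    def submit_code(self, code, second_device_id):
        # The user relays the displayed code from a trusted second device
        # (e.g. by typing it in or scanning a barcode, per the claims).
        first_device_id = self._pending.pop(code, None)
        if first_device_id is None:
            return False
        self._granted.add(first_device_id)
        return True

    def has_access(self, device_id):
        return device_id in self._granted
```

Popping the code on first use makes it single-use, which is the usual safeguard in this kind of out-of-band pairing.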
In some embodiments, an apparatus includes a server that stores a set of media files. The server is configured to send an authentication code to a first communication device in response to a request from the first communication device to access the set of media files such that the first communication device can present the authentication code to a user. The server is configured to associate an identifier of a second communication device with the first communication device such that a user of the second communication device can authorize access to the set of media files from the first communication device by sending the authentication code to the server using the second communication device.1. A non-transitory processor-readable medium storing code representing instructions to cause a processor to: receive, from a first communication device, a request to access a media file stored on a server; send an authentication code to the first communication device such that the first communication device can present the authentication code to a user of the first communication device; receive the authentication code from a second communication device; and provide access to the media file from the first communication device based on receiving the authentication code from the second communication device. 2. The non-transitory processor-readable medium of claim 1, wherein the media file is associated with a user account. 3. The non-transitory processor-readable medium of claim 1, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 4. The non-transitory processor-readable medium of claim 1, wherein the authentication code includes a plurality of alpha-numeric characters. 5. 
The non-transitory processor-readable medium of claim 2, further comprising code representing instructions to cause the processor to: receive a request from the second communication device to establish the user account; and associate an identifier of the second communication device with the user account. 6. The non-transitory processor-readable medium of claim 2, further comprising code representing instructions to cause the processor to: receive, from the second communication device, a signal instructing the server to store the media file; and associate the media file with the user account. 7. The non-transitory processor-readable medium of claim 1, wherein the code representing instructions to receive the authentication code includes code representing instructions to cause the processor to receive the authentication code from the second communication device in response to a user of the second communication device providing the authentication code to a media account application executing on the second communication device. 8. The non-transitory processor-readable medium of claim 1, further comprising code representing instructions to cause the processor to: receive, from the second communication device, a first signal associated with a presentation of the media file on the first communication device; and send, in response to the first signal, a second signal associated with the presentation of the media file on the first communication device to the first communication device such that the first communication device modifies the presentation of the media file on the first communication device based on the second signal. 9. 
An apparatus, comprising: a server storing a plurality of media files, the server configured to send an authentication code to a first communication device in response to a request from the first communication device to access the plurality of media files such that the first communication device can present the authentication code to a user, the server configured to associate an identifier of a second communication device with the first communication device such that a user of the second communication device can authorize access to the plurality of media files from the first communication device by sending the authentication code to the server using the second communication device. 10. The apparatus of claim 9, wherein the plurality of media files are associated with a user account. 11. The apparatus of claim 9, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 12. The apparatus of claim 9, wherein the authentication code includes a plurality of alpha-numeric characters. 13. The apparatus of claim 9, wherein the server is configured to receive the authentication code from the second communication device in response to the user of the second communication device providing the authentication code to a media account application executing on the second communication device. 14. The apparatus of claim 9, wherein the server is configured to send at least one media file from the plurality of media files to the first communication device in response to receiving the authentication code from the second communication device. 15. 
The apparatus of claim 9, wherein the server is configured to receive, from the second communication device, a first signal associated with a presentation of at least one media file from the plurality of media files on the first communication device, the server configured to send, in response to the first signal, a second signal associated with the presentation of the at least one media file on the first communication device to the first communication device such that the first communication device modifies the presentation of the at least one media file based on the second signal. 16. The apparatus of claim 9, wherein the server is configured to route at least one control signal from the second communication device to the first communication device such that the second communication device can control the first communication device via the server. 17. A method, comprising: receiving, from a first communication device, a request to access a media file stored on a server; sending an authentication code to the first communication device such that the first communication device can present the authentication code to a user of the first communication device; receiving the authentication code from a second communication device; and providing access to the media file from the first communication device based on receiving the authentication code from the second communication device. 18. The method of claim 17, wherein the media file is associated with a user account. 19. The method of claim 17, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 20. The method of claim 17, wherein the authentication code includes a plurality of alpha-numeric characters. 21. The method of claim 18, further comprising: receiving a request from the second communication device to establish the user account; and associating an identifier of the second communication device with the user account. 22. 
The method of claim 18, further comprising: receiving, from the second communication device, a signal instructing the server to store the media file; and associating the media file with the user account. 23. The method of claim 17, wherein the receiving the authentication code includes receiving the authentication code from the second communication device in response to a user of the second communication device providing the authentication code to a media account application executing on the second communication device. 24. The method of claim 17, further comprising: receiving, from the second communication device, a first signal associated with a presentation of the media file on the first communication device; and sending, in response to the first signal, a second signal associated with the presentation of the media file on the first communication device to the first communication device such that the first communication device modifies the presentation of the media file on the first communication device based on the second signal. 25. A method, comprising: sending, from a first communication device, a request to access a media file stored on a server; receiving, from the server and in response to the request, an authentication code associated with the request; presenting the authentication code to a user such that the user can send the authentication code to the server via a second communication device; and receiving, from the server and in response to the server receiving the authentication code from the second communication device, a signal indicating that the server has granted, to the first communication device, access to the media file. 26. The method of claim 25, wherein the media file is associated with a user account. 27. The method of claim 25, wherein the authentication code is a barcode, the second communication device being configured to scan the barcode. 28. 
The method of claim 25, wherein the authentication code includes a plurality of alpha-numeric characters. 29. The method of claim 26, wherein the media file is a first media file, the method further comprising: receiving, from the first communication device and after receiving the signal indicating that the server has granted access to the first media file, a signal instructing the server to store a second media file; and associating the second media file with the user account. 30. The method of claim 25, wherein the signal is a first signal, the method further comprising: receiving, from the server, a second signal associated with controlling a presentation of the media file on the first communication device, the second signal being sent by the server in response to the server receiving, from the second communication device, a third signal associated with controlling the presentation of the media file on the first communication device; and modifying, based on the second signal, the presentation of the media file on the first communication device.
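The two-device authorization flow recited in the apparatus and method claims above (the server issues a code to the first device, and grants access once a second device returns it) can be sketched as follows. This is a minimal illustration; the class, its method names, and the single-use-code behavior are assumptions, not terminology from the claims:

```python
import secrets

class MediaServer:
    """Minimal sketch of the two-device authorization flow: the server
    issues an authentication code to the requesting (first) device and
    grants it access only after a second, trusted device returns that
    code. Names and data layout are illustrative only."""

    def __init__(self):
        self.pending = {}     # auth code -> first device awaiting approval
        self.granted = set()  # devices with access to the media files

    def request_access(self, first_device_id):
        # First device asks for access; the server answers with a code
        # the device can present (e.g., as a barcode or alphanumerics).
        code = secrets.token_hex(4)
        self.pending[code] = first_device_id
        return code

    def submit_code(self, second_device_id, code):
        # Second device sends the code back; if it matches a pending
        # request, the first device it was issued to is granted access.
        first_device_id = self.pending.pop(code, None)
        if first_device_id is None:
            return False  # unknown or already-used code
        self.granted.add(first_device_id)
        return True

    def has_access(self, device_id):
        return device_id in self.granted
```

For example, a set-top box could call `request_access("tv")`, display the returned code, and gain access after a phone submits the code via `submit_code("phone", code)`.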
2,400
7,450
7,450
14,877,294
2,466
A network customer may support a plurality of network connectivity services (such as an E-line). A network connectivity service may experience spikes of traffic, and therefore spikes of bandwidth usage. Dynamic capacity allows a network connectivity service to increase its available bandwidth during such traffic spikes. A computer-implemented method is disclosed that facilitates identifying network customers that might be interested in purchasing dynamic capacity. The method comprises collecting bandwidth utilization data of the network connectivity services supported by each network customer, and identifying those connectivity services that exhibit patterns (e.g., cogent peaks) in their utilization data indicating the network connectivity service is a candidate for dynamic capacity. A trained pattern recognition algorithm is applied to the collected utilization data of all network connectivity services and identifies those connectivity services whose utilization matches the patterns within a range of tolerance.
1. A computer-implemented method for determining when a network connectivity service is a candidate for dynamic capacity, comprising: (a) receiving data describing utilization of the network connectivity service, the data including a series of data points describing utilization at respective times during a time period; (b) applying a pattern recognition algorithm to the received data, the pattern recognition algorithm trained to determine whether a pattern exists in a series of data points indicating the network connectivity service is a candidate for dynamic capacity; and (c) when the pattern recognition algorithm indicates that the received data includes the pattern, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 2. The method of claim 1, wherein the pattern comprises a peak. 3. The method of claim 2, further comprising: (d) before applying the pattern recognition algorithm, normalizing the received data according to an amount of usage allowed for the network connectivity service. 4. The method of claim 2, further comprising: (d) receiving a plurality of series of data points, each series corresponding to a different network connectivity service, wherein the data points describe utilization at respective times during a time period for the corresponding network connectivity service; (e) for each of the plurality of series of data points, receiving an indication whether the series includes a peak; and (f) using the received plurality of series and the received indications, training the pattern recognition algorithm to determine whether a peak exists in a series of data points. 5. The method of claim 2, wherein the receiving (a) comprises collecting data from a network device used to provide the network connectivity service. 6. The method of claim 2, wherein the pattern recognition algorithm is a neural network. 7. 
The method of claim 2, wherein the sending the message (c) comprises sending the message to a customer utilizing the network connectivity service. 8. The method of claim 2, further comprising: (d) receiving another data set describing utilization of the network connectivity service, the data set including a series of data points describing utilization at respective times during a different time period; (e) applying the pattern recognition algorithm to the received other data set; and (f) when the pattern recognition algorithm indicates that the received other data set includes a peak, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 9. A program storage device tangibly embodying a program of instructions executable by at least one machine to perform a method for determining when a network connectivity service is a candidate for dynamic capacity, comprising: (a) receiving data describing utilization of the network connectivity service, the data including a series of data points describing utilization at respective times during a time period; (b) applying a pattern recognition algorithm to the received data, the pattern recognition algorithm trained to determine whether a pattern exists in a series of data points indicating the network connectivity service is a candidate for dynamic capacity; and (c) when the pattern recognition algorithm indicates that the received data includes the pattern, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 10. The program storage device of claim 9, wherein the pattern comprises a peak. 11. The program storage device of claim 10, the method further comprising: (d) before applying the pattern recognition algorithm, normalizing the received data according to an amount of usage allowed for the network connectivity service. 12. 
The program storage device of claim 10, the method further comprising: (d) receiving a plurality of series of data points, each series corresponding to a different network connectivity service, wherein the data points describe utilization at respective times during a time period for the corresponding network connectivity service; (e) for each of the plurality of series of data points, receiving an indication whether the series includes a peak; and (f) using the received plurality of series and the received indications, training the pattern recognition algorithm to determine whether a peak exists in a series of data points. 13. The program storage device of claim 10, wherein the receiving (a) comprises collecting data from a network device used to provide the network connectivity service. 14. The program storage device of claim 10, wherein the pattern recognition algorithm is a neural network. 15. The program storage device of claim 10, wherein the sending the message (c) comprises sending the message to a customer utilizing the network connectivity service. 16. The program storage device of claim 10, the method further comprising: (d) receiving another data set describing utilization of the network connectivity service, the data set including a series of data points describing utilization at respective times during a different time period; (e) applying the pattern recognition algorithm to the received other data set; and (f) when the pattern recognition algorithm indicates that the received other data set includes a peak, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 17. 
A system for determining when a network connectivity service is a candidate for dynamic capacity, comprising: a monitor module configured to receive data describing utilization of the network connectivity service, the data including a series of data points describing utilization at respective times during a time period; a pattern recognition module configured to apply a pattern recognition algorithm to the received data, the pattern recognition algorithm trained to determine whether a pattern exists in a series of data points indicating the network connectivity service is a candidate for dynamic capacity; and a notification module configured to, when the pattern recognition algorithm indicates that the received data includes the pattern, send a message identifying the network connectivity service as a candidate for dynamic capacity. 18. The system of claim 17, wherein the pattern comprises a peak. 19. The system of claim 18, further comprising: a normalization module that, before applying the pattern recognition algorithm, normalizes the received data according to an amount of usage allowed for the network connectivity service. 20. The system of claim 18, further comprising: a historical database that stores a plurality of series of data points, each series corresponding to a different network connectivity service, wherein the data points describe utilization at respective times during a time period for the corresponding network connectivity service; an operator interface module that, for each of the plurality of series of data points, receives an indication whether the series includes a peak; and a training module that, using the received plurality of series and the received indications, trains the pattern recognition algorithm to determine whether a peak exists in a series of data points. 21. The system of claim 18, wherein the monitor module collects data from a network device used to provide the network connectivity service. 22. 
The system of claim 18, wherein the pattern recognition algorithm is a neural network. 23. The system of claim 18, wherein the monitor module receives another data set describing utilization of the network connectivity service, the data set including a series of data points describing utilization at respective times during a different time period, wherein the pattern recognition module applies the pattern recognition algorithm to the received other data set; and wherein the notification module, when the pattern recognition algorithm indicates that the received other data set includes a peak, sends a message identifying the network connectivity service as a candidate for dynamic capacity.
A network customer may support a plurality of network connectivity services (such as an E-line). A network connectivity service may experience spikes of traffic, and therefore spikes of bandwidth usage. Dynamic capacity allows a network connectivity service to increase its available bandwidth during such traffic spikes. A computer-implemented method is disclosed that facilitates identifying network customers that might be interested in purchasing dynamic capacity. The method comprises collecting bandwidth utilization data of the network connectivity services supported by each network customer, and identifying those connectivity services that exhibit patterns (e.g., cogent peaks) in their utilization data indicating the network connectivity service is a candidate for dynamic capacity. A trained pattern recognition algorithm is applied to the collected utilization data of all network connectivity services and identifies those connectivity services whose utilization matches the patterns within a range of tolerance. 1. A computer-implemented method for determining when a network connectivity service is a candidate for dynamic capacity, comprising: (a) receiving data describing utilization of the network connectivity service, the data including a series of data points describing utilization at respective times during a time period; (b) applying a pattern recognition algorithm to the received data, the pattern recognition algorithm trained to determine whether a pattern exists in a series of data points indicating the network connectivity service is a candidate for dynamic capacity; and (c) when the pattern recognition algorithm indicates that the received data includes the pattern, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 2. The method of claim 1, wherein the pattern comprises a peak. 3. 
The method of claim 2, further comprising: (d) before applying the pattern recognition algorithm, normalizing the received data according to an amount of usage allowed for the network connectivity service. 4. The method of claim 2, further comprising: (d) receiving a plurality of series of data points, each series corresponding to a different network connectivity service, wherein the data points describe utilization at respective times during a time period for the corresponding network connectivity service; (e) for each of the plurality of series of data points, receiving an indication whether the series includes a peak; and (f) using the received plurality of series and the received indications, training the pattern recognition algorithm to determine whether a peak exists in a series of data points. 5. The method of claim 2, wherein the receiving (a) comprises collecting data from a network device used to provide the network connectivity service. 6. The method of claim 2, wherein the pattern recognition algorithm is a neural network. 7. The method of claim 2, wherein the sending the message (c) comprises sending the message to a customer utilizing the network connectivity service. 8. The method of claim 2, further comprising: (d) receiving another data set describing utilization of the network connectivity service, the data set including a series of data points describing utilization at respective times during a different time period; (e) applying the pattern recognition algorithm to the received other data set; and (f) when the pattern recognition algorithm indicates that the received other data set includes a peak, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 9. 
A program storage device tangibly embodying a program of instructions executable by at least one machine to perform a method for determining when a network connectivity service is a candidate for dynamic capacity, comprising: (a) receiving data describing utilization of the network connectivity service, the data including a series of data points describing utilization at respective times during a time period; (b) applying a pattern recognition algorithm to the received data, the pattern recognition algorithm trained to determine whether a pattern exists in a series of data points indicating the network connectivity service is a candidate for dynamic capacity; and (c) when the pattern recognition algorithm indicates that the received data includes the pattern, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 10. The program storage device of claim 9, wherein the pattern comprises a peak. 11. The program storage device of claim 10, the method further comprising: (d) before applying the pattern recognition algorithm, normalizing the received data according to an amount of usage allowed for the network connectivity service. 12. The program storage device of claim 10, the method further comprising: (d) receiving a plurality of series of data points, each series corresponding to a different network connectivity service, wherein the data points describe utilization at respective times during a time period for the corresponding network connectivity service; (e) for each of the plurality of series of data points, receiving an indication whether the series includes a peak; and (f) using the received plurality of series and the received indications, training the pattern recognition algorithm to determine whether a peak exists in a series of data points. 13. The program storage device of claim 10, wherein the receiving (a) comprises collecting data from a network device used to provide the network connectivity service. 14. 
The program storage device of claim 10, wherein the pattern recognition algorithm is a neural network. 15. The program storage device of claim 10, wherein the sending the message (c) comprises sending the message to a customer utilizing the network connectivity service. 16. The program storage device of claim 10, the method further comprising: (d) receiving another data set describing utilization of the network connectivity service, the data set including a series of data points describing utilization at respective times during a different time period; (e) applying the pattern recognition algorithm to the received other data set; and (f) when the pattern recognition algorithm indicates that the received other data set includes a peak, sending a message identifying the network connectivity service as a candidate for dynamic capacity. 17. A system for determining when a network connectivity service is a candidate for dynamic capacity, comprising: a monitor module configured to receive data describing utilization of the network connectivity service, the data including a series of data points describing utilization at respective times during a time period; a pattern recognition module configured to apply a pattern recognition algorithm to the received data, the pattern recognition algorithm trained to determine whether a pattern exists in a series of data points indicating the network connectivity service is a candidate for dynamic capacity; and a notification module configured to, when the pattern recognition algorithm indicates that the received data includes the pattern, send a message identifying the network connectivity service as a candidate for dynamic capacity. 18. The system of claim 17, wherein the pattern comprises a peak. 19. The system of claim 18, further comprising: a normalization module that, before applying the pattern recognition algorithm, normalizes the received data according to an amount of usage allowed for the network connectivity service. 20. 
The system of claim 18, further comprising: a historical database that stores a plurality of series of data points, each series corresponding to a different network connectivity service, wherein the data points describe utilization at respective times during a time period for the corresponding network connectivity service; an operator interface module that, for each of the plurality of series of data points, receives an indication whether the series includes a peak; and a training module that, using the received plurality of series and the received indications, trains the pattern recognition algorithm to determine whether a peak exists in a series of data points. 21. The system of claim 18, wherein the monitor module collects data from a network device used to provide the network connectivity service. 22. The system of claim 18, wherein the pattern recognition algorithm is a neural network. 23. The system of claim 18, wherein the monitor module receives another data set describing utilization of the network connectivity service, the data set including a series of data points describing utilization at respective times during a different time period, wherein the pattern recognition module applies the pattern recognition algorithm to the received other data set; and wherein the notification module, when the pattern recognition algorithm indicates that the received other data set includes a peak, sends a message identifying the network connectivity service as a candidate for dynamic capacity.
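The method steps in the claims above (receive utilization data, normalize it by the allowed usage, apply a trained recognizer, flag peaked services as candidates) can be sketched as below. The threshold test in `has_peak` is a deliberately simple stand-in for the trained pattern recognition algorithm of the claims (claim 6 contemplates a neural network); all function names and the 3x-mean threshold are assumptions for illustration:

```python
def normalize(series, allowed_bandwidth):
    # Scale raw utilization samples by the service's allowed usage
    # (the normalization step of claims 3, 11, and 19).
    return [point / allowed_bandwidth for point in series]

def has_peak(series, ratio=3.0):
    # Stand-in for the trained recognizer: flag a series whose maximum
    # sample exceeds `ratio` times its mean (an assumed heuristic).
    mean = sum(series) / len(series)
    return mean > 0 and max(series) > ratio * mean

def candidates_for_dynamic_capacity(services):
    # services: mapping of service id -> (utilization series, allowed usage).
    # Returns the ids whose normalized utilization exhibits a peak,
    # i.e., the services to name in the notification message.
    return [service_id
            for service_id, (series, allowed) in services.items()
            if has_peak(normalize(series, allowed))]
```

A service with flat utilization is left alone, while one with an occasional large spike relative to its average is reported as a dynamic-capacity candidate.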
2,400
7,451
7,451
15,398,019
2,436
The invention relates to a method for providing a wireless local network in which stationary communication devices and mobile communication devices are connected as a mesh sub-network, which is in particular connected to an infrastructure network and is configured to exchange authentication messages with at least one communication device that, in particular, is disposed in the infrastructure network and provides an authentication function. During an attempt by a first communication device, connected to a communication device providing the authentication function, to establish a first link to a second communication device, likewise connected to the communication device providing the authentication function, an authenticator role to be assigned as part of an authentication process is allocated to one of the first and second communication devices, wherein at least one property correlating with the connection is analyzed for fulfillment of a criterion. The invention further relates to an arrangement comprising means for carrying out the method.
1.-15. (canceled) 16. A method for providing a local area network comprising a plurality of communication devices, the method comprising: attempting to establish a first link from a first communication device of the communication devices to a second communication device of the communication devices, at least one of the first and second communication devices having a connection to at least one authentication communication device; testing at least one property correlating to connections that the first and second communication devices have to any of the at least one authentication communication device for fulfillment of at least one criterion for assigning an authenticator role to be assigned as part of authentication to one of the first communication device and the second communication device by a process comprising: upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices: the first communication device comparing the at least one authentication communication device to which the second communication device is connected with the at least one authentication communication device connected to the first communication device, the at least one authentication communication device to which the second communication device is connected being made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and the second communication device comparing the at least one authentication communication device to which the first communication device is connected to the at least one authentication communication device connected to the second communication device, the at least one authentication communication device to which the first communication device is connected being made known to the second communication device prior to the comparing of the at least one authentication communication 
device performed by the second communication device; and assigning the authenticator role to the first communication device or the second communication device based upon which of (i) a first connection of the first communication device to one of the at least one authentication communication device and (ii) a second connection of the second communication device to a different one of the at least one authentication communication device has the at least one property that best fulfills the at least one criterion. 17. The method of claim 16 wherein the at least one criterion is comprised of a lowest number of hops along a route forming a connection to the at least one authentication communication device performing the authentication function and the at least one property is comprised of a metric defining a number of hops to an authentication communication device. 18. The method of claim 16 wherein the at least one criterion comprises at least one value that is determinable based on data of a routing log. 19. The method of claim 18 wherein the routing log is a mesh routing log. 20. The method of claim 16 wherein the at least one criterion is a minimum number of hops along a route forming a connection to the at least one authentication communication device performing the authentication function. 21. The method of claim 16 wherein the at least one criterion comprises a value indicating a best quality of a physical property, the physical property comprising a signal quality and the value indicating best quality of the physical property comprising a value indicating superior signal quality. 22. The method of claim 16 wherein the at least one criterion comprises a value indicating a minimum capacity for a connection to any of the at least one authentication communication device. 23. The method of claim 16 wherein the at least one criterion comprises a power supply for the subnet or the infrastructure network. 24. 
The method of claim 16 wherein the at least one criterion comprises a minimum processor use. 25. The method of claim 16 further comprising: upon a determination that the testing shows that the at least one property for the first connection and the second connection coincide, assigning the authenticator role based on a comparison of Media Access Control (“MAC”) addresses of the first communication device and the second communication device. 26. The method of claim 25 wherein the authenticator role is assigned to whichever of the first communication device and the second communication device has a smaller MAC address. 27. The method of claim 16 further comprising: upon determining that results of the testing cannot be determined, assigning the authenticator role based on a comparison of Media Access Control (“MAC”) addresses of the first communication device and the second communication device. 28. The method of claim 27 wherein the authenticator role is assigned to whichever of the first communication device and the second communication device has a smaller MAC address. 29. The method of claim 16 wherein an adaptation of an authentication method takes place based on results of the testing. 30. The method of claim 29 wherein the adaptation of the authentication method comprises a selection of an authentication method defined in accordance with Extensible Authentication Protocol (“EAP”). 31. The method of claim 16 wherein the at least one authentication communication device comprises at least one mesh key distributor and at least one authentication server. 32. 
A communication apparatus comprising: a first communication device that is connectable to at least one authentication communication device; the first communication device configured to establish a first link to a second communication device, the first communication device configured to test at least one property correlating to connections that the first and second communication devices have to the at least one authentication communication device for fulfillment of at least one criterion for assigning an authenticator role to be assigned as part of authentication to one of the first communication device and the second communication device by a process comprising: upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices: the first communication device comparing the at least one authentication communication device to which the second communication device is connected with the at least one authentication communication device connected to the first communication device, the at least one authentication communication device to which the second communication device is connected being made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and assigning the authenticator role to the first communication device or the second communication device based upon which of (i) a first connection of the first communication device to one of the at least one authentication communication device and (ii) a second connection of the second communication device to a different one of the at least one authentication communication device has the at least one property that best fulfills the at least one criterion. 33. 
The communication apparatus of claim 32, comprising the second communication device, the second communication device configured such that, upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices, the second communication device compares the at least one authentication communication device to which the first communication device is connected to at least one authentication communication device connected to the second communication device. 34. The communication apparatus of claim 33, comprising: at least one authentication communication device connected to the second communication device and at least one authentication communication device connected to the first communication device; and wherein the first and second communication devices are configured so that: the at least one authentication communication device to which the first communication device is connected is made known to the second communication device prior to the comparing of the at least one authentication communication device performed by the second communication device; and the at least one authentication communication device to which the second communication device is connected is made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and wherein the apparatus is configured as a network. 35. 
A network comprising: a first communication device; a second communication device; a plurality of authentication communication devices, each of the authentication communication devices connectable to one of the first communication device and the second communication device; the first and second communication devices configured to connect to each other via an establishment of a first link, the first and second communication device configured so that the establishment of the first link is performed such that testing of at least one property correlating to connections that the first and second communication devices have to any of the authentication communication devices for fulfillment of at least one criterion for assigning an authenticator role to be assigned as part of authentication to one of the first communication device and the second communication device is performed by a process comprising: upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices: the first communication device comparing at least one authentication communication device to which the second communication device is connected with at least one authentication communication device connected to the first communication device, the at least one authentication communication device to which the second communication device is connected being made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and the second communication device comparing the at least one authentication communication device to which the first communication device is connected to the at least one authentication communication device connected to the second communication device, the at least one authentication communication device to which the first communication device is connected being made known to the second communication device 
prior to the comparing of the at least one authentication communication device performed by the second communication device; and assigning the authenticator role to the first communication device or the second communication device based upon which of (i) a first connection of the first communication device to one of the authentication communication devices and (ii) a second connection of the second communication device to a different one of the authentication communication devices that has the at least one property that best fulfills the at least one criterion.
The invention relates to a method for providing a wireless local area network in which stationary and mobile communication devices are connected in the manner of a mesh to form a sub-network, which is in particular connected to an infrastructure network and is configured to exchange authentication messages with at least one communication device that provides an authentication function and is in particular disposed in the infrastructure network. During an attempt by a first communication device, connected to a communication device providing the authentication function, to establish a first link to a second communication device connected to the communication device providing the authentication function, an authenticator role to be assigned as part of an authentication process is associated with one of the first and second communication devices, wherein at least one property correlating with the connection is analyzed for fulfillment of a criterion. The invention further relates to an arrangement comprising means for carrying out the method.1.-15. (canceled) 16. 
A method for providing a local area network comprising a plurality of communication devices, the method comprising: attempting to establish a first link from a first communication device of the communication devices to a second communication device of the communication devices, at least one of the first and second communication devices having a connection to at least one authentication communication device; testing at least one property correlating to connections that the first and second communication devices have to any of the at least one authentication communication device for fulfillment of at least one criterion for assigning an authenticator role to be assigned as part of authentication to one of the first communication device and the second communication device by a process comprising: upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices: the first communication device comparing the at least one authentication communication device to which the second communication device is connected with the at least one authentication communication device connected to the first communication device, the at least one authentication communication device to which the second communication device is connected being made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and the second communication device comparing the at least one authentication communication device to which the first communication device is connected to the at least one authentication communication device connected to the second communication device, the at least one authentication communication device to which the first communication device is connected being made known to the second communication device prior to the comparing of the at least one authentication communication device performed by the 
second communication device; and assigning the authenticator role to the first communication device or the second communication device based upon which of (i) a first connection of the first communication device to one of the at least one authentication communication device and (ii) a second connection of the second communication device to a different one of the at least one authentication communication device that has the at least one property that best fulfills the at least one criterion. 17. The method of claim 16 wherein the at least one criterion is comprised of a lowest number of hops along a route forming a connection to the at least one authentication communication device performing the authentication function and the at least one property is comprised of a metric defining a number of hops to an authentication communication device. 18. The method of claim 16 wherein the at least one criterion comprises at least one value that is determinable based on data of a routing log. 19. The method of claim 18 wherein the routing log is a mesh routing log. 20. The method of claim 16 wherein the at least one criterion is a minimum number of hops along a route forming a connection to the at least one authentication communication device performing the authentication function. 21. The method of claim 16 wherein the at least one criterion comprises a value indicating a best quality of a physical property, the physical property comprising a signal quality and the value indicating best quality of the physical property comprising a value indicating superior signal quality. 22. The method of claim 16 wherein the at least one criterion comprises a value indicating a minimum capacity for a connection to any of the at least one authentication communication device. 23. The method of claim 16 wherein the at least one criterion comprises a power supply for the subnet or the infrastructure network. 24. 
The method of claim 16 wherein the at least one criterion comprises a minimum processor use. 25. The method of claim 16 further comprising: upon a determination that the testing shows that the at least one property for the first connection and the second connection coincide, assigning the authenticator role based on a comparison of Media Access Control (“MAC”) addresses of the first communication device and the second communication device. 26. The method of claim 25 wherein the authenticator role is assigned to which of the first communication device and the second communication device has a smaller MAC address. 27. The method of claim 16 further comprising: upon determining that results of the testing cannot be determined, assigning the authenticator role based on a comparison of Media Access Control (“MAC”) addresses of the first communication device and the second communication device. 28. The method of claim 27 wherein the authenticator role is assigned to which of the first communication device and the second communication device has a smaller MAC address. 29. The method of claim 16 wherein an adaptation of an authentication method takes place based on results of the testing. 30. The method of claim 29 wherein the adaptation of the authentication method comprises a selection of an authentication method defined in accordance with Extensible Authentication Protocol (“EAP”). 31. The method of claim 16 wherein the at least one authentication communication device comprises at least one mesh key distributor and at least one authentication server. 32. 
A communication apparatus comprising: a first communication device that is connectable to at least one authentication communication device; the first communication device configured to establish a first link to a second communication device, the first communication device configured to test at least one property correlating to connections that the first and second communication devices have to the at least one authentication communication device for fulfillment of at least one criterion for assigning an authenticator role to be assigned as part of authentication to one of the first communication device and the second communication device by a process comprising: upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices: the first communication device comparing the at least one authentication communication device to which the second communication device is connected with the at least one authentication communication device connected to the first communication device, the at least one authentication communication device to which the second communication device is connected being made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and assigning the authenticator role to the first communication device or the second communication device based upon which of (i) a first connection of the first communication device to one of the at least one authentication communication device and (ii) a second connection of the second communication device to a different one of the at least one authentication communication device that has the at least one property that best fulfills the at least one criterion. 33. 
The communication apparatus of claim 32, comprising the second communication device, the second communication device configured such that, upon a determination that there is no authentication communication device that has a connection with each of the first and second communication devices, the second communication device compares the at least one authentication communication device to which the first communication device is connected to at least one authentication communication device connected to the second communication device. 34. The communication apparatus of claim 33, comprising: at least one authentication communication device connected to the second communication device and at least one authentication communication device connected to the first communication device; and wherein the first and second communication devices are configured so that: the at least one authentication communication device to which the first communication device is connected is made known to the second communication device prior to the comparing of the at least one authentication communication device performed by the second communication device; and the at least one authentication communication device to which the second communication device is connected is made known to the first communication device prior to the comparing of the at least one authentication communication device performed by the first communication device; and wherein the apparatus is configured as a network. 35. 
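The role-assignment logic of claims 16-28 can be illustrated with a short sketch: each device's best (lowest-hop) connection to an authentication communication device is compared, and when the property values coincide or cannot be determined, the smaller MAC address decides. This is a hedged illustration; the `Device` structure and the function name are assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Device:
    mac: str
    # Hops to each known authentication communication device, keyed by its
    # identifier (illustrative representation, not from the patent).
    auth_hops: Dict[str, int] = field(default_factory=dict)


def assign_authenticator(a: Device, b: Device) -> Device:
    """Return the device that takes the authenticator role."""
    best_a: Optional[int] = min(a.auth_hops.values(), default=None)
    best_b: Optional[int] = min(b.auth_hops.values(), default=None)
    if best_a is not None and best_b is not None and best_a != best_b:
        # Criterion of claim 17: lowest number of hops along a route to an
        # authentication communication device.
        return a if best_a < best_b else b
    # Property values coincide, or results cannot be determined: fall back
    # to comparing MAC addresses and assign the role to the smaller one
    # (claims 25-28).
    return a if a.mac < b.mac else b
```

A tiebreak on equal hop counts, or on missing routing data, therefore always resolves deterministically, which matches the intent of claims 25 and 27.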
TechCenter: 2400
Unnamed: 0: 7452
level_0: 7452
ApplicationNumber: 14854173
ArtUnit: 2426
In one embodiment, a method is provided. The method includes using a processor to receive pre-defined rules related to one or more desired characteristics of available digital content. The pre-defined rules indicate user preferences for receiving the available digital content with a personal computing device. The method further includes using a processor to determine current operating conditions corresponding to the pre-defined rules and to identify the one or more desired characteristics of available digital content based on the pre-defined rules and the current operating conditions. The method also includes using a processor to identify a data channel from one or more data channels. The identified data channel is configured to receive the available digital content from a content delivery system based on the pre-defined rules and the current operating conditions. The method also includes using the processor to receive digital content with the personal computing device via the identified data channel, where the received digital content includes the one or more desired characteristics.
1. A method, comprising: using a processor configured to: receive pre-defined rules related to one or more desired characteristics of available digital content, wherein the pre-defined rules indicate user preferences for receiving the available digital content with a personal computing device; determine current operating conditions corresponding to the pre-defined rules; identify the one or more desired characteristics of available digital content based on the pre-defined rules and the current operating conditions; identify a data channel from one or more data channels, wherein the identified data channel is configured to receive the available digital content from a content delivery system based on the pre-defined rules and the current operating conditions; and receive digital content with the personal computing device via the identified data channel, wherein the received digital content comprises the one or more desired characteristics. 2. The method of claim 1, wherein determining the current operating conditions corresponding to the pre-defined rules comprises determining device-related data or network-related data. 3. The method of claim 1, wherein determining the current operating conditions corresponding to the pre-defined rules comprises determining a current time or a current geo-location. 4. The method of claim 1, wherein the pre-defined rules comprise rules related to time, location, device configurations, or network configurations. 5. The method of claim 1, wherein the pre-defined rules comprise custom rules that combine one or more rules. 6. The method of claim 1, wherein the one or more desired characteristics of available digital content comprises audio quality, video quality, a content provider, sensory information, delivery costs, quality costs, or information related to advertisements. 7. 
The method of claim 1, wherein the one or more data channels comprise a long-term evolution (LTE) channel, a wireless local area network (WiFi) channel, a Bluetooth low energy (BLE) channel, an ultra high frequency (UHF) channel, or a near field communication (NFC) channel. 8. The method of claim 1, comprising identifying and receiving software based on the pre-defined rules and the current operating conditions. 9. The method of claim 1, comprising switching between the one or more data channels based on the pre-defined rules and the current operating conditions. 10. The method of claim 9, comprising switching from a first data channel when a current location is within an entertainment venue to a second data channel when the current location is outside of the entertainment venue. 11. A system, comprising: one or more content delivery systems configured to provide digital content via one or more data channels, wherein the digital content comprises one or more available content characteristics; and a processor-based personal computing device configured to receive the digital content, wherein the processor-based personal computing device is configured to: receive pre-defined rules from a user related to one or more desired content characteristics of the digital content; determine current operating conditions corresponding to the pre-defined rules, wherein the current operating conditions comprise device-related conditions or network-related conditions; and identify the one or more desired characteristics of the digital content from among the available content characteristics based on the pre-defined rules and the current operating conditions. 12. The system of claim 11, wherein the processor-based personal computing device is configured to receive the one or more desired characteristics of the digital content based on the pre-defined rules and the current operating conditions. 13. 
The system of claim 11, wherein the pre-defined rules comprise a geo-location rule and the current operating conditions comprise a current geo-location, and wherein the processor-based personal computing device is configured to receive the one or more desired characteristics of the digital content based on the geo-location rule and the current geo-location. 14. The system of claim 13, wherein the one or more desired or available content characteristics of the digital content comprise audio quality, video quality, a content provider, sensory information, delivery costs, quality costs, or information related to advertisements. 15. The system of claim 11, wherein the processor-based personal computing device comprises a smart-phone, a computer, a tablet, a hand-held computer, a laptop, a television set, or any combination thereof. 16. The system of claim 11, wherein the processor-based personal computing device comprises a wearable device, and wherein the wearable device comprises a wristband, a watch, goggles, glasses, a necklace, a heads-up display, or a combination thereof. 17. 
A tangible, non-transitory, computer-readable medium configured to store instructions executable by a processor of a personal computing device, wherein the instructions, when executed, are configured to: receive pre-defined rules from a user related to one or more desired content characteristics of digital content, wherein the digital content comprises one or more available content characteristics; determine current operating conditions corresponding to the pre-defined rules, wherein the current operating conditions comprise device-related conditions or network-related conditions; identify the one or more desired content characteristics among the one or more available content characteristics based on the pre-defined rules and the current operating conditions; and receive digital content with the personal computing device, wherein the received digital content comprises the one or more desired characteristics. 18. The computer-readable medium of claim 17, wherein the pre-defined rules comprise a location rule and the current operating conditions comprise a current geo-location of the personal computing device, and wherein the instructions are configured to identify and receive a high quality digital content when the personal computing device is in a first location. 19. The computer-readable medium of claim 18, wherein the executed instructions are configured to identify and receive a low quality digital content when the personal computing device is in a second location. 20. The computer-readable medium of claim 18, wherein the executed instructions are configured to identify and receive software based on the pre-defined rules and the current operating conditions.
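The channel-switching behavior of claims 9 and 10 (use one data channel while inside an entertainment venue and another outside, driven by pre-defined rules and current operating conditions) can be sketched as below. The rule representation, condition keys, and channel names are illustrative assumptions, not taken from the patent.

```python
def select_channel(rules, conditions):
    """Return the channel of the first pre-defined rule that matches the
    current operating conditions; fall back to a default channel."""
    for predicate, channel in rules:
        if predicate(conditions):
            return channel
    return "LTE"  # assumed default wide-area channel


# Pre-defined location rule (claim 10): use WiFi while the current
# operating conditions indicate the device is inside the venue.
venue_rules = [(lambda c: c.get("in_venue"), "WiFi")]

channel = select_channel(venue_rules, {"in_venue": True})   # "WiFi"
channel = select_channel(venue_rules, {"in_venue": False})  # "LTE"
```

Ordering the rules list gives a simple priority scheme: the first matching rule wins, which is one plausible way to combine rules as in claim 5.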
TechCenter: 2400
Unnamed: 0: 7453
level_0: 7453
ApplicationNumber: 13042375
ArtUnit: 2443
In one embodiment, a geo-social networking system records location data of a user, generates a set of recommendations based on the user's location data, and presents one or more recommendations of the set of recommendations to the user based on the user's current location.
1. A method, comprising: accessing, at a computer system, a data store for location data of a first user; accessing one or more data stores for one or more places in proximity to the first user's location data; generating a set of recommendations based on the one or more places in proximity to the first user's location data; and presenting one or more recommendations of the set of recommendations to the first user. 2. The method of claim 1 wherein a recommendation of the set of recommendations is an advertisement. 3. The method of claim 1 wherein a recommendation of the set of recommendations is a recommended action for the first user. 4. The method of claim 3 wherein the recommended action for the first user is a place check-in. 5. The method of claim 4 wherein the recommended action is enabling an automatic check-in to a place identified in a geo-social networking system. 6. The method of claim 1 wherein the presenting one or more recommendations of the set of recommendations to the first user further comprises: receiving from a remote client device, at the computer system, a current location of the first user; selecting one or more recommendations of the set of recommendations based on the current location of the first user; and transmitting the selected one or more recommendations to the remote client device, causing the remote client device to present the selected one or more recommendations to the first user. 7. The method of claim 1 wherein the location data of the first user comprises route information. 8. 
A system, comprising: a memory; one or more processors; and a non-transitory, storage medium storing computer-readable instructions operative, when executed, to cause the one or more processors to: access a data store for location data of a first user; access one or more data stores for one or more places in proximity to the first user's location data; generate a set of recommendations based on the one or more places in proximity to the first user's location data; and present one or more recommendations of the set of recommendations to the first user. 9. The system of claim 8 wherein a recommendation of the set of recommendations is an advertisement. 10. The system of claim 8 wherein a recommendation of the set of recommendations is a recommended action for the first user. 11. The system of claim 10 wherein the recommended action for the first user is a place check-in. 12. The system of claim 11 wherein the recommended action is enabling an automatic check-in to a place identified in a geo-social networking system. 13. The system of claim 8, wherein to present one or more recommendations of the set of recommendations to the first user, further comprising instructions operable to cause the one or more processors to: receive from a remote client device a current location of the first user; select one or more recommendations of the set of recommendations based on the current location of the first user; and transmit the selected one or more recommendations to the remote client device, causing the remote client device to present the selected one or more recommendations to the first user. 14. The system of claim 8 wherein the location data of the first user comprises route information.
In one embodiment, a geo-social networking system records location data of a user, generate a set of recommendations based on the user's location data, and present one or more recommendations of the set of recommendations to the user based on the user's current location.1. A method, comprising accessing, at a computer system, a data store for location data of a first user; accessing one or more data stores for one or more places in proximity to the first user's location data; generating a set of recommendations based on the one or more places in proximity to the first user's location data; and presenting one or more recommendations of the set of recommendations to the first user. 2. The method of claim 1 wherein a recommendation of the set of recommendations is an advertisement. 3. The method of claim 1 wherein a recommendation of the set of recommendations is a recommended action for the first user. 4. The method of claim 3 wherein the recommended action for the first user is a place check-in. 5. The method of claim 4 wherein the recommending action is enabling an automatic check-in to a place identified in a geo-social networking system. 6. The method of claim 1 wherein the presenting one or more recommendations of the set of recommendations to the first user further comprises: receiving from a remote client device, at the computer system, a current location of the first user; selecting one or more recommendations of the set of recommendations based on the current location of the first user; and transmitting the selected one or more recommendations to the remote client device, causing the remote client device to present the selected one or more recommendations to the first user. 7. The method of claim 1 wherein the location data of the first user comprises route information. 8. 
A system, comprising: a memory; one or more processors; and a non-transitory storage medium storing computer-readable instructions operative, when executed, to cause the one or more processors to: access a data store for location data of a first user; access one or more data stores for one or more places in proximity to the first user's location data; generate a set of recommendations based on the one or more places in proximity to the first user's location data; and present one or more recommendations of the set of recommendations to the first user. 9. The system of claim 8 wherein a recommendation of the set of recommendations is an advertisement. 10. The system of claim 8 wherein a recommendation of the set of recommendations is a recommended action for the first user. 11. The system of claim 10 wherein the recommended action for the first user is a place check-in. 12. The system of claim 11 wherein the recommended action is enabling an automatic check-in to a place identified in a geo-social networking system. 13. The system of claim 8, wherein to present one or more recommendations of the set of recommendations to the first user, further comprising instructions operable to cause the one or more processors to: receive from a remote client device a current location of the first user; select one or more recommendations of the set of recommendations based on the current location of the first user; and transmit the selected one or more recommendations to the remote client device, causing the remote client device to present the selected one or more recommendations to the first user. 14. The system of claim 8 wherein the location data of the first user comprises route information.
2,400
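The geo-social recommendation claims above reduce to a proximity query over stored place data: find places near the user's recorded location, rank them, and surface the top ones. A minimal Python sketch of that step, assuming a haversine distance and an illustrative 1 km radius (all function names, place data, and thresholds are hypothetical, not from the application):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometers between two (lat, lon) points.
    earth_radius_km = 6371.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def recommend_places(user_location, places, radius_km=1.0):
    # Return names of places within radius_km of the user, nearest first.
    lat, lon = user_location
    scored = sorted(
        (haversine_km(lat, lon, p["lat"], p["lon"]), p["name"]) for p in places
    )
    return [name for dist, name in scored if dist <= radius_km]

places = [
    {"name": "Cafe A", "lat": 37.7750, "lon": -122.4195},
    {"name": "Museum B", "lat": 37.8000, "lon": -122.4000},
]
```

Here `recommend_places((37.7749, -122.4194), places)` keeps only the nearby cafe; the advertisement and check-in variants in the dependent claims would filter or annotate the same ranked list.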
7,454
7,454
14,713,808
2,473
A method of signaling is disclosed. A terminal receives downlink control signaling from a base station and sends an uplink packet to the base station. The uplink packet is sent according to the received downlink control signaling. The downlink control signaling includes a field that includes N bits. The field itself is dynamically indicative of one of a payload size or a redundancy version such that the field itself is indicative of the payload size if a packet that the terminal will send to a base station is an initial transmission and the field itself is indicative of the redundancy version if the packet that the terminal will send to the base station is a retransmission.
1. A method of signaling comprising: receiving, by a terminal, downlink control signaling from a base station (BS); and sending, by the terminal, an uplink packet to the BS, the uplink packet being sent according to the received downlink control signaling, wherein the downlink control signaling comprises a field, wherein the field includes N bits, wherein the field itself is dynamically indicative of a payload size or a Redundancy Version (RV), wherein the field itself is indicative of the payload size if the uplink packet is an initial transmission, and wherein the field itself is indicative of the RV if the uplink packet is a retransmission. 2. The method of claim 1, further comprising: determining, by the terminal, that the RV is a default value if the received downlink control signaling is indicative of the payload size on the field. 3. The method of claim 1, wherein receiving the downlink control signaling comprises: sending, by the terminal, a detecting Discontinuous Transmission signal to the BS; and receiving, by the terminal, the downlink control signaling indicative of the payload size or RV in the field. 4. The method of claim 3, wherein receiving the downlink control signaling comprises: receiving, by the terminal, the downlink control signaling indicating the RV in the field of the downlink control signaling to be received a next time if a packet transmission count reaches or exceeds a pre-defined value; and receiving, by the terminal, the downlink control signaling indicating the payload size in the field of the downlink control signaling to be received the next time if the packet transmission count does not reach the pre-defined value. 5. 
An apparatus for signaling comprising: a downlink control signaling receiver, configured to receive downlink control signaling from a base station (BS), wherein the downlink control signaling comprises a field, wherein the field includes N bits, wherein the field itself is dynamically indicative of one of a payload size or a Redundancy Version (RV) such that the field itself is indicative of the payload size if an uplink packet being sent to a base station is an initial transmission, and the field itself is indicative of the RV if the uplink packet being sent to the base station is a retransmission; and a transmitter, configured to send the uplink packet to the BS, the uplink packet sent according to the received downlink control signaling. 6. The apparatus of claim 5, further comprising a processor, configured to determine that the RV is a default value if the received downlink control signaling is indicative of the payload size on the field. 7. The apparatus of claim 6, wherein the apparatus is integrated into a terminal.
A method of signaling is disclosed. A terminal receives downlink control signaling from a base station and sends an uplink packet to the base station. The uplink packet is sent according to the received downlink control signaling. The downlink control signaling includes a field that includes N bits. The field itself is dynamically indicative of one of a payload size or a redundancy version such that the field itself is indicative of the payload size if a packet that the terminal will send to a base station is an initial transmission and the field itself is indicative of the redundancy version if the packet that the terminal will send to the base station is a retransmission.1. A method of signaling comprising: receiving, by a terminal, downlink control signaling from a base station (BS); and sending, by the terminal, an uplink packet to the BS, the uplink packet being sent according to the received downlink control signaling, wherein the downlink control signaling comprises a field, wherein the field includes N bits, wherein the field itself is dynamically indicative of a payload size or a Redundancy Version (RV), wherein the field itself is indicative of the payload size if the uplink packet is an initial transmission, and wherein the field itself is indicative of the RV if the uplink packet is a retransmission. 2. The method of claim 1, further comprising: determining, by the terminal, that the RV is a default value if the received downlink control signaling is indicative of the payload size on the field. 3. The method of claim 1, wherein receiving the downlink control signaling comprises: sending, by the terminal, a detecting Discontinuous Transmission signal to the BS; and receiving, by the terminal, the downlink control signaling indicative of the payload size or RV in the field. 4. 
The method of claim 3, wherein receiving the downlink control signaling comprises: receiving, by the terminal, the downlink control signaling indicating the RV in the field of the downlink control signaling to be received a next time if a packet transmission count reaches or exceeds a pre-defined value; and receiving, by the terminal, the downlink control signaling indicating the payload size in the field of the downlink control signaling to be received the next time if the packet transmission count does not reach the pre-defined value. 5. An apparatus for signaling comprising: a downlink control signaling receiver, configured to receive downlink control signaling from a base station (BS), wherein the downlink control signaling comprises a field, wherein the field includes N bits, wherein the field itself is dynamically indicative of one of a payload size or a Redundancy Version (RV) such that the field itself is indicative of the payload size if an uplink packet being sent to a base station is an initial transmission, and the field itself is indicative of the RV if the uplink packet being sent to the base station is a retransmission; and a transmitter, configured to send the uplink packet to the BS, the uplink packet sent according to the received downlink control signaling. 6. The apparatus of claim 5, further comprising a processor, configured to determine that the RV is a default value if the received downlink control signaling is indicative of the payload size on the field. 7. The apparatus of claim 6, wherein the apparatus is integrated into a terminal.
2,400
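The signaling claims above hinge on one N-bit downlink control field being reused: it carries a payload size when the terminal's uplink packet is an initial transmission, and a Redundancy Version when the packet is a retransmission. A small Python sketch of the terminal-side interpretation, assuming N = 2 and a hypothetical index-to-size table (the mapping is illustrative, not from the application):

```python
# Hypothetical mapping from a 2-bit field value to a payload size in bits.
PAYLOAD_SIZES = {0b00: 120, 0b01: 256, 0b10: 512, 0b11: 1024}

def decode_field(field_bits, is_initial_transmission):
    # The same N-bit field is read two ways depending on whether the
    # uplink packet is an initial transmission or a retransmission.
    if is_initial_transmission:
        return ("payload_size", PAYLOAD_SIZES[field_bits])
    return ("redundancy_version", field_bits)
```

On an initial transmission `decode_field(0b01, True)` yields a 256-bit payload size; on a retransmission the same bits are read as RV 1. Claim 2's rule, that the RV takes a default value whenever the field carries a payload size, would sit alongside this decoder.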
7,455
7,455
15,201,171
2,439
Examples relate to computer attack model management. In one example, a computing device may: identify a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtain, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system; and update the first set of attack models based on the performance data.
1. A computing device for computer attack model management, the computing device comprising: a hardware processor; and a data storage device storing instructions that, when executed by the hardware processor, cause the hardware processor to: identify a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtain, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system; and update the first set of attack models based on the performance data. 2. The computing device of claim 1, wherein the performance data includes at least one of: resource usage measurements that indicate computing resources used to execute actions specified by the corresponding attack model; analytics results data that indicates a frequency with which the corresponding attack model successfully detected the particular attack; or user feedback. 3. The computing device of claim 1, wherein the first set of attack models is updated in response to a triggering event. 4. The computing device of claim 3, wherein the triggering event includes at least one of: user input; a time-based threshold being met; a resource usage threshold being met; or performance data indicating a predetermined triggering condition. 5. The computing device of claim 1, wherein the first set of attack models is updated by at least one of: adding an attack model to the first set; removing an attack model from the first set; or changing an attack model included in the first set. 6. 
The computing device of claim 1, wherein the first set of attack models is updated by changing an attack model included in the first set, and wherein changing the attack model includes: removing an attack action from the attack model; adding an attack action to the attack model; or changing an existing attack action specified by the attack model. 7. The computing device of claim 1, wherein the instructions further cause the hardware processor to: determine, based on the performance data, that a particular attack model in the first set performed worse than at least one other attack model included in the first set; and in response to the determination, remove or change the particular attack model. 8. A method for computer attack model management, implemented by a hardware processor, the method comprising: identifying a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtaining, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system, the performance data including: resource usage measurements that indicate computing resources used to execute actions specified by the corresponding attack model; and analytics results data that indicates whether the corresponding attack model successfully detected the particular attack; and in response to a triggering event, update the first set of attack models based on the performance data. 9. The method of claim 8, wherein the triggering event includes at least one of: user input; a time-based threshold being met; a resource usage threshold being met; or performance data indicating a predetermined triggering condition. 10. 
The method of claim 8, wherein the first set of attack models is updated by at least one of: adding an attack model to the first set; removing an attack model from the first set; or changing an attack model included in the first set. 11. The method of claim 8, wherein the first set of attack models is updated by changing an attack model included in the first set, and wherein changing the attack model includes: removing an attack action from the attack model; adding an attack action to the attack model; or changing an existing attack action specified by the attack model. 12. The method of claim 8, further comprising: determining, based on the performance data, that a particular attack model in the first set performed worse than at least one other attack model included in the first set; and in response to the determination, removing or changing the particular attack model. 13. The method of claim 8, further comprising: clustering attack models included in the first set to create at least two subsets of attack models, the clustering being based on at least one of performance or attack model characteristics of the attack models in the first set. 14. The method of claim 13, wherein the first set of attack models is updated by: removing at least one attack model from at least one of the at least two subsets. 15. 
A non-transitory machine-readable storage medium encoded with instructions executable by a hardware processor of a computing device for computer attack model management, the machine-readable storage medium comprising instructions to cause the hardware processor to: identify a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtain, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system; and in response to a triggering event, update the first set of attack models based on the performance data. 16. The storage medium of claim 15, wherein the performance data includes at least one of: resource usage measurements that indicate computing resources used to execute actions specified by the corresponding attack model; analytics results data that indicates a frequency with which the corresponding attack model successfully detected the particular attack; or user feedback. 17. The storage medium of claim 15, wherein the triggering event includes at least one of: user input; a time-based threshold being met; a resource usage threshold being met; or performance data indicating a predetermined triggering condition. 18. The storage medium of claim 15, wherein the first set of attack models is updated by at least one of: removing an attack model from the first set; or changing an attack model included in the first set. 19. The storage medium of claim 15, wherein the first set of attack models is updated by changing an attack model included in the first set, and wherein changing the attack model includes: removing an attack action from the attack model; adding an attack action to the attack model; or changing an existing attack action specified by the attack model. 20. 
The storage medium of claim 15, wherein the instructions further cause the hardware processor to: determine, based on the performance data, that a particular attack model in the first set performed worse than at least one other attack model included in the first set; and in response to the determination, remove or change the particular attack model.
Examples relate to computer attack model management. In one example, a computing device may: identify a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtain, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system; and update the first set of attack models based on the performance data.1. A computing device for computer attack model management, the computing device comprising: a hardware processor; and a data storage device storing instructions that, when executed by the hardware processor, cause the hardware processor to: identify a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtain, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system; and update the first set of attack models based on the performance data. 2. The computing device of claim 1, wherein the performance data includes at least one of: resource usage measurements that indicate computing resources used to execute actions specified by the corresponding attack model; analytics results data that indicates a frequency with which the corresponding attack model successfully detected the particular attack; or user feedback. 3. The computing device of claim 1, wherein the first set of attack models is updated in response to a triggering event. 4. The computing device of claim 3, wherein the triggering event includes at least one of: user input; a time-based threshold being met; a resource usage threshold being met; or performance data indicating a predetermined triggering condition. 5. 
The computing device of claim 1, wherein the first set of attack models is updated by at least one of: adding an attack model to the first set; removing an attack model from the first set; or changing an attack model included in the first set. 6. The computing device of claim 1, wherein the first set of attack models is updated by changing an attack model included in the first set, and wherein changing the attack model includes: removing an attack action from the attack model; adding an attack action to the attack model; or changing an existing attack action specified by the attack model. 7. The computing device of claim 1, wherein the instructions further cause the hardware processor to: determine, based on the performance data, that a particular attack model in the first set performed worse than at least one other attack model included in the first set; and in response to the determination, remove or change the particular attack model. 8. A method for computer attack model management, implemented by a hardware processor, the method comprising: identifying a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtaining, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system, the performance data including: resource usage measurements that indicate computing resources used to execute actions specified by the corresponding attack model; and analytics results data that indicates whether the corresponding attack model successfully detected the particular attack; and in response to a triggering event, update the first set of attack models based on the performance data. 9. 
The method of claim 8, wherein the triggering event includes at least one of: user input; a time-based threshold being met; a resource usage threshold being met; or performance data indicating a predetermined triggering condition. 10. The method of claim 8, wherein the first set of attack models is updated by at least one of: adding an attack model to the first set; removing an attack model from the first set; or changing an attack model included in the first set. 11. The method of claim 8, wherein the first set of attack models is updated by changing an attack model included in the first set, and wherein changing the attack model includes: removing an attack action from the attack model; adding an attack action to the attack model; or changing an existing attack action specified by the attack model. 12. The method of claim 8, further comprising: determining, based on the performance data, that a particular attack model in the first set performed worse than at least one other attack model included in the first set; and in response to the determination, removing or changing the particular attack model. 13. The method of claim 8, further comprising: clustering attack models included in the first set to create at least two subsets of attack models, the clustering being based on at least one of performance or attack model characteristics of the attack models in the first set. 14. The method of claim 13, wherein the first set of attack models is updated by: removing at least one attack model from at least one of the at least two subsets. 15. 
A non-transitory machine-readable storage medium encoded with instructions executable by a hardware processor of a computing device for computer attack model management, the machine-readable storage medium comprising instructions to cause the hardware processor to: identify a first set of attack models, each attack model in the first set specifying behavior of a particular attack on a computing system; obtain, for each attack model in the first set, performance data that indicates at least one measure of attack model performance for a previous use of the attack model in determining whether the particular attack occurred on the computing system; and in response to a triggering event, update the first set of attack models based on the performance data. 16. The storage medium of claim 15, wherein the performance data includes at least one of: resource usage measurements that indicate computing resources used to execute actions specified by the corresponding attack model; analytics results data that indicates a frequency with which the corresponding attack model successfully detected the particular attack; or user feedback. 17. The storage medium of claim 15, wherein the triggering event includes at least one of: user input; a time-based threshold being met; a resource usage threshold being met; or performance data indicating a predetermined triggering condition. 18. The storage medium of claim 15, wherein the first set of attack models is updated by at least one of: removing an attack model from the first set; or changing an attack model included in the first set. 19. The storage medium of claim 15, wherein the first set of attack models is updated by changing an attack model included in the first set, and wherein changing the attack model includes: removing an attack action from the attack model; adding an attack action to the attack model; or changing an existing attack action specified by the attack model. 20. 
The storage medium of claim 15, wherein the instructions further cause the hardware processor to: determine, based on the performance data, that a particular attack model in the first set performed worse than at least one other attack model included in the first set; and in response to the determination, remove or change the particular attack model.
2,400
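The attack-model claims above describe pruning a model set based on per-model performance data. A toy Python sketch of one possible update policy, removing models whose detection rate falls below the set's mean (the claims only require acting on models that performed worse than at least one other; the mean threshold, names, and data here are illustrative assumptions):

```python
def update_attack_models(models, detection_rates):
    # models: {name: model object}; detection_rates: {name: rate in [0, 1]}.
    # Keep only models whose detection rate is at least the set average.
    mean_rate = sum(detection_rates.values()) / len(detection_rates)
    return {name: m for name, m in models.items()
            if detection_rates[name] >= mean_rate}

models = {"sql_injection": object(), "port_scan": object(), "phishing": object()}
rates = {"sql_injection": 0.9, "port_scan": 0.4, "phishing": 0.8}
```

In a fuller implementation, the triggering events from the dependent claims (user input, time or resource thresholds, performance conditions) would gate when this update runs, and "changing" a model would edit its attack actions rather than drop it.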
7,456
7,456
13,600,749
2,421
A method for provisioning of media is disclosed and may include detecting a plurality of users located at a common location. A personal profile for each of the plurality of users may be accessed. The personal profile for each of the plurality of users may include at least one personal media preference related to consumption of media items. Each of the plurality of available media items may be weighted for each of the plurality of users based on the at least one personal media preference. One of the plurality of available media items may be selected for consumption by the plurality of users at the common location, based on the weighting of each of the plurality of available media items. The plurality of users may be detected at the common location by receiving a near field communication (NFC) signal from a user device for each of the plurality of users.
1. A method for provisioning of media, the method comprising: detecting a plurality of users located at a common location; accessing a personal profile for each of the plurality of users, wherein the personal profile for each of the plurality of users comprises at least one personal media preference related to consumption of media items; weighting for each of the plurality of users, each of a plurality of available media items based on the at least one personal media preference; and selecting one of the plurality of available media items for consumption by the plurality of users at the common location, based on the weighting of each of the plurality of available media items. 2. The method according to claim 1, comprising: detecting the plurality of users at the common location by receiving a near field communication (NFC) signal from a user device for each of the plurality of users. 3. The method according to claim 1, comprising: detecting a network-enabled media device at the common location. 4. The method according to claim 3, comprising: providing the selection of one of the plurality of available media items to the network-enabled media device for consumption by the plurality of users at the common location. 5. The method according to claim 1, comprising: detecting whether the plurality of users form a social group, wherein the social group is associated with the consumption of media items. 6. The method according to claim 5, wherein the detecting whether the plurality of users form the social group comprises receiving login information to a social network from each of the plurality of users. 7. The method according to claim 6, wherein the social network hosts the personal profile for each of the plurality of users. 8. 
The method according to claim 6, comprising: if the plurality of users form a social group, detecting the plurality of users at the common location by receiving a GPS signal from a user device for each of the plurality of users, the GPS signal indicating proximity to the common location. 9. The method according to claim 1, comprising: ranking the plurality of available media items based on the weighting for each of the plurality of users. 10. The method according to claim 9, comprising: selecting a highest ranked media item from the ranked plurality of available media items for consumption by the plurality of users at the common location. 11. A system for provisioning of media, the system comprising: a network device, the network device being operable to: detect a plurality of users located at a common location; access a personal profile for each of the plurality of users, wherein the personal profile for each of the plurality of users comprises at least one personal media preference related to consumption of media items; weight for each of the plurality of users, each of a plurality of available media items based on the at least one personal media preference; and select one of the plurality of available media items for consumption by the plurality of users at the common location, based on the weighting of each of the plurality of available media items. 12. The system according to claim 11, wherein the network device is operable to: detect the plurality of users at the common location by receiving a near field communication (NFC) signal from a user device for each of the plurality of users. 13. The system according to claim 11, wherein the network device is operable to: detect a network-enabled media device at the common location. 14. 
The system according to claim 13, wherein the network device is operable to: provide the selection of one of the plurality of available media items to the network-enabled media device for consumption by the plurality of users at the common location. 15. The system according to claim 11, wherein the network device is operable to: detect whether the plurality of users form a social group, wherein the social group is associated with the consumption of media items. 16. The system according to claim 15, wherein the detecting whether the plurality of users form the social group comprises receiving login information to a social network from each of the plurality of users. 17. The system according to claim 16, wherein the social network hosts the personal profile for each of the plurality of users. 18. The system according to claim 16, wherein, if the plurality of users form a social group, the network device is operable to: detect the plurality of users at the common location by receiving a GPS signal from a user device for each of the plurality of users, the GPS signal indicating proximity to the common location. 19. The system according to claim 11, wherein the network device is operable to: rank the plurality of available media items based on the weighting for each of the plurality of users; and select a highest ranked media item from the ranked plurality of available media items for consumption by the plurality of users at the common location. 20. 
A system for provisioning of media, the system comprising: a network device, the network device being operable to: receive login credentials from a plurality of users, the login credentials providing access to each of the plurality of users to a social group hosted by a network; detect at least a portion of the plurality of users are logged into the social group hosted by the network based on the received login credentials; receive location information from the at least a portion of the plurality of users; determine whether the at least a portion of the plurality of users are at a common location based on the received location information; and if the at least a portion of the plurality of users are at a common location: access a personal profile for each of the at least a portion of the plurality of users, wherein the personal profile for each of the at least a portion of the plurality of users comprises at least one personal media preference related to consumption of media items; weight for each of the at least a portion of the plurality of users, each of a plurality of available media items based on the at least one personal media preference; and select one of the plurality of available media items for consumption by the at least a portion of the plurality of users at the common location, based on the weighting of each of the plurality of available media items.
A method for provisioning of media is disclosed and may include detecting a plurality of users located at a common location. A personal profile for each of the plurality of users may be accessed. The personal profile for each of the plurality of users may include at least one personal media preference related to consumption of media items. Each of the plurality of available media items may be weighted for each of the plurality of users based on the at least one personal media preference. One of the plurality of available media items may be selected for consumption by the plurality of users at the common location, based on the weighting of each of the plurality of available media items. The plurality of users may be detected at the common location by receiving a near field communication (NFC) signal from a user device for each of the plurality of users.1. A method for provisioning of media, the method comprising: detecting a plurality of users located at a common location; accessing a personal profile for each of the plurality of users, wherein the personal profile for each of the plurality of users comprises at least one personal media preference related to consumption of media items; weighting for each of the plurality of users, each of a plurality of available media items based on the at least one personal media preference; and selecting one of the plurality of available media items for consumption by the plurality of users at the common location, based on the weighting of each of the plurality of available media items. 2. The method according to claim 1, comprising: detecting the plurality of users at the common location by receiving a near field communication (NFC) signal from a user device for each of the plurality of users. 3. The method according to claim 1, comprising: detecting a network-enabled media device at the common location. 4. 
The method according to claim 3, comprising: providing the selection of one of the plurality of available media items to the network-enabled media device for consumption by the plurality of users at the common location. 5. The method according to claim 1, comprising: detecting whether the plurality of users form a social group, wherein the social group is associated with the consumption of media items. 6. The method according to claim 5, wherein the detecting whether the plurality of users form the social group comprises receiving login information to a social network from each of the plurality of users. 7. The method according to claim 6, wherein the social network hosts the personal profile for each of the plurality of users. 8. The method according to claim 6, comprising: if the plurality of users form a social group, detecting the plurality of users at the common location by receiving a GPS signal from a user device for each of the plurality of users, the GPS signal indicating proximity to the common location. 9. The method according to claim 1, comprising: ranking the plurality of available media items based on the weighting for each of the plurality of users. 10. The method according to claim 9, comprising: selecting a highest ranked media item from the ranked plurality of available media items for consumption by the plurality of users at the common location. 11. 
A system for provisioning of media, the system comprising: a network device, the network device being operable to: detect a plurality of users located at a common location; access a personal profile for each of the plurality of users, wherein the personal profile for each of the plurality of users comprises at least one personal media preference related to consumption of media items; weight for each of the plurality of users, each of a plurality of available media items based on the at least one personal media preference; and select one of the plurality of available media items for consumption by the plurality of users at the common location, based on the weighting of each of the plurality of available media items. 12. The system according to claim 11, wherein the network device is operable to: detect the plurality of users at the common location by receiving a near field communication (NFC) signal from a user device for each of the plurality of users. 13. The system according to claim 11, wherein the network device is operable to: detect a network-enabled media device at the common location. 14. The system according to claim 13, wherein the network device is operable to: provide the selection of one of the plurality of available media items to the network-enabled media device for consumption by the plurality of users at the common location. 15. The system according to claim 11, wherein the network device is operable to: detect whether the plurality of users form a social group, wherein the social group is associated with the consumption of media items. 16. The system according to claim 15, wherein the detecting whether the plurality of users form the social group comprises receiving login information to a social network from each of the plurality of users. 17. The system according to claim 16, wherein the social network hosts the personal profile for each of the plurality of users. 18. 
The system according to claim 16, wherein, if the plurality of users form a social group, the network device is operable to: detect the plurality of users at the common location by receiving a GPS signal from a user device for each of the plurality of users, the GPS signal indicating proximity to the common location. 19. The system according to claim 11, wherein the network device is operable to: rank the plurality of available media items based on the weighting for each of the plurality of users; and select a highest ranked media item from the ranked plurality of available media items for consumption by the plurality of users at the common location. 20. A system for provisioning of media, the system comprising: a network device, the network device being operable to: receive login credentials from a plurality of users, the login credentials providing access to each of the plurality of users to a social group hosted by a network; detect at least a portion of the plurality of users are logged into the social group hosted by the network based on the received login credentials; receive location information from the at least a portion of the plurality of users; determine whether the at least a portion of the plurality of users are at a common location based on the received location information; and if the at least a portion of the plurality of users are at a common location: access a personal profile for each of the at least a portion of the plurality of users, wherein the personal profile for each of the at least a portion of the plurality of users comprises at least one personal media preference related to consumption of media items; weight for each of the at least a portion of the plurality of users, each of a plurality of available media items based on the at least one personal media preference; and select one of the plurality of available media items for consumption by the at least a portion of the plurality of users at the common location, based on the weighting of 
each of the plurality of available media items.
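The weighting-and-ranking selection recited in claims 1 and 9-10 of this record (weight each available item per user preference, rank by aggregate weight, pick the highest-ranked item) can be sketched as follows. The profile layout, genre-keyed preferences, and function name below are hypothetical illustrations for one plausible reading of the claims, not anything specified in the application.

```python
from collections import defaultdict

def select_media(profiles, available_items):
    """Pick one media item for a co-located group.

    profiles: {user: {genre: preference weight}} -- one personal profile
        per detected user (claim 1's "personal media preference").
    available_items: list of (title, genre) tuples.
    """
    scores = defaultdict(float)
    for title, genre in available_items:
        # Weight each item once per user, based on that user's preference.
        for prefs in profiles.values():
            scores[title] += prefs.get(genre, 0.0)
    # Rank by aggregate weight (claim 9) and select the top item (claim 10).
    return max(available_items, key=lambda item: scores[item[0]])[0]
```

For example, two users who jointly prefer jazz over rock would be served the jazz item even if one of them individually leans toward rock.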
2,400
7,457
7,457
14,250,193
2,482
A portable device includes a sensor, a video capture module, a processor, and a computer-readable memory that stores instructions. When executed on the processor, the instructions operate to cause the sensor to generate raw sensor data indicative of a physical quantity, cause the video capture module to capture video imagery of a reference object concurrently with the sensor generating raw sensor data when the portable device is moving relative to the reference object, and cause the processor to calculate correction parameters for the sensor based on the captured video imagery of the reference object and the raw sensor data.
1. A portable device comprising: a sensor; a video capture module; a processor; and a computer-readable memory that stores instructions thereon, wherein the instructions, when executed by the processor, operate to: cause the sensor to generate raw sensor data indicative of a physical quantity, cause the video capture module to capture video imagery of a reference object concurrently with the sensor generating raw sensor data when the portable device is moving relative to the reference object, and cause the processor to calculate correction parameters for the sensor based on the captured video imagery of the reference object and the raw sensor data. 2. The portable device of claim 1, wherein the instructions, when executed by the processor, further operate to identify the reference object as a standard real-world object having known geometric properties. 3. The portable device of claim 2, wherein the standard real-world object is a two-dimensional (2D) image on a 2D surface. 4. The portable device of claim 1, wherein the instructions, when executed by the processor, further operate to match the captured video imagery to a digital 3D model of the reference object, wherein: the digital 3D model is stored in a database to which the portable device is coupled via a communication network, and the digital 3D model specifies geometric properties of the reference object. 5. The portable device of claim 4, wherein to match the captured video imagery to the digital 3D model of the reference object, the instructions operate to transmit at least part of the captured video imagery to a reference object server coupled to the database, via the communication network. 6. The portable device of claim 4, wherein the instructions, when executed by the processor, further operate to generate an approximate position fix of the portable device for matching with a geolocation data of the digital 3D model. 7. 
The portable device of claim 1, wherein the sensor is one of: (i) an accelerometer, (ii) a gyroscope, or (iii) a magnetometer. 8. The portable device of claim 1, wherein the instructions, when executed by the processor, further cause the processor to apply the correction parameters to subsequent raw sensor data output of the sensor. 9. The portable device of claim 1, wherein to calculate the correction parameters, the instructions operate to: obtain geometric properties of the reference object, apply a 3D reconstruction technique to the captured video imagery using the geometric properties of the reference object, and calculate a plurality of position and orientation fixes of the portable device at respective times based on the captured video imagery. 10. The portable device of claim 9, wherein to calculate the correction parameters, the instructions operate to determine vector a and matrix C in xraw=a+Cx, wherein: vector xraw represents raw sensor data, the vector a represents drift errors, the matrix C represents cross-axis errors, and x represents corrected raw sensor data; wherein the instructions operate to determine the vector a and the matrix C using the plurality of position and orientation fixes of the portable device. 11. The portable device of claim 1, wherein the instructions, when executed by the processor, further operate to update the correction parameters periodically at a regular interval. 12. The portable device of claim 1, wherein the video capture module is configured to capture video imagery continuously while the portable device is operational. 13. 
A method implemented on one or more processors for efficiently developing sensor error corrections in a portable device having a sensor and a camera, the method comprising: while the portable device is moving relative to a reference object, causing the sensor to generate raw sensor data indicative of a physical quantity; causing the camera to capture a plurality of images of the reference object concurrently with the sensor generating the raw sensor data; determining a plurality of position and orientation fixes of the portable device based on the plurality of images and geometric properties of the reference object; and calculating correction parameters for the sensor using the plurality of position and orientation fixes and the raw sensor data. 14. The method of claim 13, the method further comprising transmitting the plurality of images to a reference object server via a communication network, wherein the reference object server matches the plurality of images to the reference object. 15. The method of claim 14, the method further comprising transmitting the raw sensor data and sensor information to the reference object server. 16. The method of claim 13, further comprising identifying the reference object as a standard real-world object having known geometric properties. 17. The method of claim 13, further comprising matching the plurality of images to a digital 3D model of the reference object, wherein the digital 3D model is stored in a database. 18. 
The method of claim 13, wherein matching the plurality of images to a digital 3D model of the reference object includes: generating a set of one or more approximate positioning fixes of the portable device, transmitting the set of one or more approximate positioning fixes to a reference object server via a communication network, and receiving a geolocated digital 3D model of the reference object from the reference object server, wherein the geolocated digital 3D model is indicative of geometric properties of the reference object. 19. A tangible computer-readable medium storing thereon instructions that, when executed on one or more processors, cause the one or more processors to: receive raw sensor data generated by a sensor operating in a portable device; receive video imagery of a reference object captured by a video capture module operating in the portable device, wherein the raw sensor data and the video imagery are captured concurrently while the portable device is moving relative to the reference object; calculate correction parameters for the sensor using the captured video imagery of the reference object and the raw sensor data. 20. The computer-readable medium of claim 19, wherein to calculate the correction parameters, the instructions cause the one or more processors to: determine geometric properties of the reference object, determine position and orientation fixes of the portable device based on the geometric properties of the reference object and the video imagery, determine correct sensor data corresponding to the raw sensor data based on the determined position and orientation fixes, and calculate the correction parameters based on a difference between the correct sensor data and the raw sensor data. 21. The computer-readable medium of claim 20, wherein: the sensor is an accelerometer, and to calculate the correction parameters, the instructions cause the one or more processors to calculate average acceleration based on the plurality of position fixes. 
22. The computer-readable medium of claim 20, wherein: the sensor is a gyroscope, and to calculate the correction parameters, the instructions cause the one or more processors to calculate a numerical derivative of a time-dependent rotation matrix associated with the plurality of orientation fixes. 23. The computer-readable medium of claim 20, wherein to determine the position and orientation fixes of the portable device, the instructions cause the one or more processors to apply 3D reconstruction. 24. The computer-readable medium of claim 19, wherein the movement of the portable device relative to the reference object includes a change in at least one of position and orientation relative to the reference object.
A portable device includes a sensor, a video capture module, a processor, and a computer-readable memory that stores instructions. When executed on the processor, the instructions operate to cause the sensor to generate raw sensor data indicative of a physical quantity, cause the video capture module to capture video imagery of a reference object concurrently with the sensor generating raw sensor data when the portable device is moving relative to the reference object, and cause the processor to calculate correction parameters for the sensor based on the captured video imagery of the reference object and the raw sensor data.1. A portable device comprising: a sensor; a video capture module; a processor; and a computer-readable memory that stores instructions thereon, wherein the instructions, when executed by the processor, operate to: cause the sensor to generate raw sensor data indicative of a physical quantity, cause the video capture module to capture video imagery of a reference object concurrently with the sensor generating raw sensor data when the portable device is moving relative to the reference object, and cause the processor to calculate correction parameters for the sensor based on the captured video imagery of the reference object and the raw sensor data. 2. The portable device of claim 1, wherein the instructions, when executed by the processor, further operate to identify the reference object as a standard real-world object having known geometric properties. 3. The portable device of claim 2, wherein the standard real-world object is a two-dimensional (2D) image on a 2D surface. 4. 
The portable device of claim 1, wherein the instructions, when executed by the processor, further operate to match the captured video imagery to a digital 3D model of the reference object, wherein: the digital 3D model is stored in a database to which the portable device is coupled via a communication network, and the digital 3D model specifies geometric properties of the reference object. 5. The portable device of claim 4, wherein to match the captured video imagery to the digital 3D model of the reference object, the instructions operate to transmit at least part of the captured video imagery to a reference object server coupled to the database, via the communication network. 6. The portable device of claim 4, wherein the instructions, when executed by the processor, further operate to generate an approximate position fix of the portable device for matching with a geolocation data of the digital 3D model. 7. The portable device of claim 1, wherein the sensor is one of: (i) an accelerometer, (ii) a gyroscope, or (iii) a magnetometer. 8. The portable device of claim 1, wherein the instructions, when executed by the processor, further cause the processor to apply the correction parameters to subsequent raw sensor data output of the sensor. 9. The portable device of claim 1, wherein to calculate the correction parameters, the instructions operate to: obtain geometric properties of the reference object, apply a 3D reconstruction technique to the captured video imagery using the geometric properties of the reference object, and calculate a plurality of position and orientation fixes of the portable device at respective times based on the captured video imagery. 10. 
The portable device of claim 9, wherein to calculate the correction parameters, the instructions operate to determine vector a and matrix C in xraw=a+Cx, wherein: vector xraw represents raw sensor data, the vector a represents drift errors, the matrix C represents cross-axis errors, and x represents corrected raw sensor data; wherein the instructions operate to determine the vector a and the matrix C using the plurality of position and orientation fixes of the portable device. 11. The portable device of claim 1, wherein the instructions, when executed by the processor, further operate to update the correction parameters periodically at a regular interval. 12. The portable device of claim 1, wherein the video capture module is configured to capture video imagery continuously while the portable device is operational. 13. A method implemented on one or more processors for efficiently developing sensor error corrections in a portable device having a sensor and a camera, the method comprising: while the portable device is moving relative to a reference object, causing the sensor to generate raw sensor data indicative of a physical quantity; causing the camera to capture a plurality of images of the reference object concurrently with the sensor generating the raw sensor data; determining a plurality of position and orientation fixes of the portable device based on the plurality of images and geometric properties of the reference object; and calculating correction parameters for the sensor using the plurality of position and orientation fixes and the raw sensor data. 14. The method of claim 13, the method further comprising transmitting the plurality of images to a reference object server via a communication network, wherein the reference object server matches the plurality of images to the reference object. 15. The method of claim 14, the method further comprising transmitting the raw sensor data and sensor information to the reference object server. 16. 
The method of claim 13, further comprising identifying the reference object as a standard real-world object having known geometric properties. 17. The method of claim 13, further comprising matching the plurality of images to a digital 3D model of the reference object, wherein the digital 3D model is stored in a database. 18. The method of claim 13, wherein matching the plurality of images to a digital 3D model of the reference object includes: generating a set of one or more approximate positioning fixes of the portable device, transmitting the set of one or more approximate positioning fixes to a reference object server via a communication network, and receiving a geolocated digital 3D model of the reference object from the reference object server, wherein the geolocated digital 3D model is indicative of geometric properties of the reference object. 19. A tangible computer-readable medium storing thereon instructions that, when executed on one or more processors, cause the one or more processors to: receive raw sensor data generated by a sensor operating in a portable device; receive video imagery of a reference object captured by a video capture module operating in the portable device, wherein the raw sensor data and the video imagery are captured concurrently while the portable device is moving relative to the reference object; calculate correction parameters for the sensor using the captured video imagery of the reference object and the raw sensor data. 20. 
The computer-readable medium of claim 19, wherein to calculate the correction parameters, the instructions cause the one or more processors to: determine geometric properties of the reference object, determine position and orientation fixes of the portable device based on the geometric properties of the reference object and the video imagery, determine correct sensor data corresponding to the raw sensor data based on the determined position and orientation fixes, and calculate the correction parameters based on a difference between the correct sensor data and the raw sensor data. 21. The computer-readable medium of claim 20, wherein: the sensor is an accelerometer, and to calculate the correction parameters, the instructions cause the one or more processors to calculate average acceleration based on the plurality of position fixes. 22. The computer-readable medium of claim 20, wherein: the sensor is a gyroscope, and to calculate the correction parameters, the instructions cause the one or more processors to calculate a numerical derivative of a time-dependent rotation matrix associated with the plurality of orientation fixes. 23. The computer-readable medium of claim 20, wherein to determine the position and orientation fixes of the portable device, the instructions cause the one or more processors to apply 3D reconstruction. 24. The computer-readable medium of claim 19, wherein the movement of the portable device relative to the reference object includes a change in at least one of position and orientation relative to the reference object.
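The correction model xraw = a + Cx in claim 10 of this record (drift vector a, cross-axis matrix C) lends itself to ordinary least squares once the video-derived position and orientation fixes supply ground-truth sensor values. The NumPy sketch below fits both parameters jointly for a 3-axis sensor; the estimation method, array shapes, and function names are assumptions for illustration, since the application does not prescribe how a and C are determined.

```python
import numpy as np

def fit_correction(x_true, x_raw):
    """Estimate a and C in x_raw = a + C @ x by least squares.

    x_true: (N, 3) corrected sensor values derived from video fixes.
    x_raw:  (N, 3) concurrently captured raw sensor readings.
    """
    # Augment with a column of ones so one solve yields both the drift
    # vector a (intercept) and the cross-axis matrix C (slope).
    design = np.hstack([np.ones((x_true.shape[0], 1)), x_true])  # (N, 4)
    params, *_ = np.linalg.lstsq(design, x_raw, rcond=None)      # (4, 3)
    a = params[0]        # drift errors (vector a)
    C = params[1:].T     # cross-axis errors (matrix C)
    return a, C

def correct(a, C, x_raw_sample):
    # Invert x_raw = a + C @ x to recover the corrected reading x
    # (claim 8's application of the parameters to subsequent raw data).
    return np.linalg.solve(C, x_raw_sample - a)
```

With enough well-spread fixes the system is overdetermined, and the fit also absorbs scale errors along the diagonal of C.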
2,400
7,458
7,458
14,643,802
2,436
In one implementation, a media stream is recorded using one or more keys. The one or more keys are also encrypted. The one or more encrypted keys may be stored with the encrypted media session at a cloud storage service. A network device receives a request to record a media stream and accesses at least one stream key for the media stream. The stream key is for encrypting the media stream. The network device encrypts the stream key with a master key. The encrypted stream key is stored in association with the encrypted media stream.
1. A method comprising: establishing a media call stream, wherein the media call stream is a voice over internet protocol call or video conference call and includes at least one parameter; receiving a request to record the media call stream; receiving at least one stream key associated with the media call stream and the at least one parameter for the voice over internet protocol call or video conference call, wherein the media call stream is encrypted with the at least one stream key; accessing a master key; encrypting, with a processor, the at least one stream key with the master key; and storing the encrypted at least one stream key in association with the encrypted media call stream. 2. The method of claim 1, further comprising: identifying a property of the media call stream; and accessing a rule in response to the media call stream, wherein the rule indicates the media call stream should be recorded based on the property. 3. The method of claim 2, wherein the property includes an endpoint of the media call stream, a content of the media call stream, or a user of the media call stream. 4. The method of claim 1, further comprising: generating metadata for the media call stream; and encrypting, with the processor, the metadata with the master key. 5. The method of claim 4, wherein the metadata includes data indicative of a time interval that the at least one stream key is valid. 6. The method of claim 4, wherein the metadata includes data indicative of an authorized user of the at least one stream key. 7. The method of claim 4, wherein the metadata includes data indicative of participants in the media call stream or subject matter of the media call stream. 8. The method of claim 4, wherein the at least one stream key includes multiple stream keys assigned to different time periods of the media call stream. 9. 
The method of claim 1 further comprising: generating an identifier for the media call stream; and storing the identifier at a cloud device in association with the encrypted media call stream. 10. The method of claim 1, further comprising: receiving a request for the encrypted media call stream; and providing the encrypted media call stream and the encrypted at least one stream key in response to the request for the encrypted media call stream. 11. An apparatus comprising: a processor; and a memory comprising one or more instructions executable by the processor to perform: establishing a media call stream, wherein the media call stream includes at least one parameter; receiving a request to record the media call stream; receiving at least one stream key associated with the media call stream and the at least one parameter, wherein the media call stream is encrypted with the at least one stream key; accessing a master key; encrypting the at least one stream key with the master key; and storing the encrypted at least one stream key in association with the encrypted media call stream at a cloud device. 12. The apparatus of claim 11, wherein the instructions are executable by the processor to perform: identifying a property of the media call stream; and accessing a rule in response to the media call stream, wherein the rule indicates the media call stream should be recorded based on the property. 13. The apparatus of claim 12, wherein the property includes an endpoint of the media call stream, a content of the media call stream, or a user of the media call stream. 14. 
The apparatus of claim 11, wherein the instructions are executable by the processor to perform: generating metadata for the media call stream; and encrypting the metadata with the master key, wherein the metadata includes data indicative of a time interval that the at least one stream key is valid, data indicative of an authorized user of the at least one stream key, or data indicative of participants in the media call stream or subject matter of the media call stream. 15. A method comprising: receiving a request to access a media call stream, wherein the media call stream is a voice over internet protocol call or a video conference call and includes at least one parameter; accessing, from a cloud device, an encrypted stream key and an encrypted recorded media call stream in response to the request and the at least one parameter; decrypting, with a processor, the encrypted stream key with a master key; decrypting, with the processor, the encrypted media call stream with the decrypted stream key; and providing the decrypted media call stream. 16. The method of claim 15, further comprising: receiving metadata for the media call stream; and decrypting, with the processor, the metadata using the master key. 17. The method of claim 16, wherein the metadata includes data indicative of a time interval during which the stream key is valid. 18. The method of claim 16, wherein the metadata includes data indicative of an authorized user of the stream key, data indicative of participants in the media call stream, or data indicative of subject matter of the media call stream. 19. 
An apparatus comprising: a processor; and a memory comprising one or more instructions executable by the processor to perform: receiving a request to access a media call stream, wherein the media call stream includes at least one parameter; accessing an encrypted stream key associated with the media call stream and the at least one parameter; decrypting, with a processor, the encrypted stream key with a master key; and decrypting, with the processor, an encrypted media call stream with the decrypted stream key. 20. The apparatus of claim 19, wherein the encrypted recorded media call stream is stored in association with the encrypted stream key.
In one implementation, a media stream is recorded using one or more keys. The one or more keys are also encrypted. The one or more encrypted keys may be stored with the encrypted media session at a cloud storage service. A network device receives a request to record a media stream and accesses at least one stream key for the media stream. The stream key is for encrypting the media stream. The network device encrypts the stream key with a master key. The encrypted stream key is stored in association with the encrypted media stream.1. A method comprising: establishing a media call stream, wherein the media call stream is a voice over internet protocol call or video conference call and includes at least one parameter; receiving a request to record the media call stream; receiving at least one stream key associated with the media call stream and the at least one parameter for the voice over internet protocol call or video conference call, wherein the media call stream is encrypted with the at least one stream key; accessing a master key; encrypting, with a processor, the at least one stream key with the master key; and storing the encrypted at least one stream key in association with the encrypted media call stream. 2. The method of claim 1, further comprising: identifying a property of the media call stream; and accessing a rule in response to the media call stream, wherein the rule indicates the media call stream should be recorded based on the property. 3. The method of claim 2, wherein the property includes an endpoint of the media call stream, a content of the media call stream, or a user of the media call stream. 4. The method of claim 1, further comprising: generating metadata for the media call stream; and encrypting, with the processor, the metadata with the master key. 5. The method of claim 4, wherein the metadata includes data indicative of a time interval that the at least one stream key is valid. 6. 
The method of claim 4, wherein the metadata includes data indicative of an authorized user of the at least one stream key. 7. The method of claim 4, wherein the metadata includes data indicative of participants in the media call stream or subject matter of the media call stream. 8. The method of claim 4, wherein the at least one stream key includes multiple stream keys assigned to different time periods of the media call stream. 9. The method of claim 1 further comprising: generating an identifier for the media call stream; and storing the identifier at a cloud device in association with the encrypted media call stream. 10. The method of claim 1, further comprising: receiving a request for the encrypted media call stream; and providing the encrypted media call stream and the encrypted at least one stream key in response to the request for the encrypted media call stream. 11. An apparatus comprising: a processor; and a memory comprising one or more instructions executable by the processor to perform: establishing a media call stream, wherein the media call stream includes at least one parameter; receiving a request to record the media call stream; receiving at least one stream key associated with the media call stream and the at least one parameter, wherein the media call stream is encrypted with the at least one stream key; accessing a master key; encrypting the at least one stream key with the master key; and storing the encrypted at least one stream key in association with the encrypted media call stream at a cloud device. 12. The apparatus of claim 11, wherein the instructions are executable by the processor to perform: identifying a property of the media call stream; and accessing a rule in response to the media call stream, wherein the rule indicates the media call stream should be recorded based on the property. 13. 
The apparatus of claim 12, wherein the property includes an endpoint of the media call stream, a content of the media call stream, or a user of the media call stream. 14. The apparatus of claim 11, wherein the instructions are executable by the processor to perform: generating metadata for the media call stream; and encrypting the metadata with the master key, wherein the metadata includes data indicative of a time interval that the at least one stream key is valid, data indicative of an authorized user of the at least one stream key, or data indicative of participants in the media call stream or subject matter of the media call stream. 15. A method comprising: receiving a request to access a media call stream, wherein the media call stream is a voice over internet protocol call or a video conference call and includes at least one parameter; accessing, from a cloud device, an encrypted stream key and an encrypted recorded media call stream in response to the request and the at least one parameter; decrypting, with a processor, the encrypted stream key with a master key; decrypting, with the processor, the encrypted media call stream with the decrypted stream key; and providing the decrypted media call stream. 16. The method of claim 15, further comprising: receiving metadata for the media call stream; and decrypting, with the processor, the metadata using the master key. 17. The method of claim 16, wherein the metadata includes data indicative of a time interval during which the stream key is valid. 18. The method of claim 16, wherein the metadata includes data indicative of an authorized user of the stream key, data indicative of participants in the media call stream, or data indicative of subject matter of the media call stream. 19. 
An apparatus comprising: a processor; and a memory comprising one or more instructions executable by the processor to perform: receiving a request to access a media call stream, wherein the media call stream includes at least one parameter; accessing an encrypted stream key associated with the media call stream and the at least one parameter; decrypting, with a processor, the encrypted stream key with a master key; and decrypting, with the processor, an encrypted media call stream with the decrypted stream key. 20. The apparatus of claim 19, wherein the encrypted recorded media call stream is stored in association with the encrypted stream key.
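The key-management scheme in the record above is an envelope-encryption pattern: the media stream is encrypted with a per-session stream key, and the stream key itself is encrypted ("wrapped") with a master key before being stored alongside the recording. The sketch below illustrates that flow only; the toy SHA-256-based XOR stream cipher, the function names, and the sample payload are all illustrative assumptions, not the patent's implementation — a real system would use an authenticated cipher such as AES-GCM.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream from SHA-256 in counter mode -- for illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Recording: encrypt the media with the stream key, then wrap the stream
# key with the master key and store the wrapped key with the recording.
master_key = secrets.token_bytes(32)
stream_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)

media = b"rtp payload bytes of the recorded call"   # hypothetical payload
encrypted_media = encrypt(stream_key, nonce, media)
wrapped_key = encrypt(master_key, nonce, stream_key)

# Playback: unwrap the stream key with the master key, then decrypt.
recovered_key = decrypt(master_key, nonce, wrapped_key)
assert decrypt(recovered_key, nonce, encrypted_media) == media
```

Storing only the wrapped key with the ciphertext means the cloud storage service never needs the master key; access control reduces to who can unwrap.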
2,400
7,459
7,459
14,341,562
2,468
Power optimization modes for communication between a device and a server are disclosed. The device can dynamically change between communication modes based on an application or quality of service, battery life, an amount of noise associated with the communications link, a frequency of messages, and a type of message received in a given time period. In some examples, the device can determine whether the number of pull messages is greater than the number of push messages. The device can select a push mode where a pull message can accompany a push message. In some examples, the device can determine that the number of push messages is greater than the number of pull messages, and the device can select a low-power associated sleep mode.
1. A device comprising: a transceiver configured for communicating through a communications link to a wireless access point; and a processor capable of dynamically switching between at least a first mode and a second mode of communication, each mode including a first period and a second period, wherein the processor is further capable of receiving or retrieving data during the first period and switching to an inactive state during the second period. 2. The device of claim 1, wherein the transceiver is configured to remain associated with the wireless access point during the inactive state. 3. The device of claim 1, wherein the second period of the second mode consumes a lower power than the second period of the first mode. 4. The device of claim 1, wherein the transceiver is configured to receive push messages during the second period of the first mode. 5. The device of claim 1, wherein the first period of the first mode is spaced at periodic intervals. 6. The device of claim 1, wherein the processor is further capable of determining a number of push messages and a number of pull messages within a given time period. 7. The device of claim 6, wherein the processor is further capable of switching to the first mode when the number of pull messages is greater than the number of push messages. 8. The device of claim 6, wherein the processor is further capable of switching to the second mode when the number of push messages is greater than the number of pull messages. 9. The device of claim 1, wherein the processor is further capable of selecting between the first mode and second mode based on at least one of an application, a battery life, and an amount of noise associated with the communications link. 10. The device of claim 1, wherein the processor is further capable of determining whether an application is a red-list application and selecting between the first mode and second mode based on the determination. 11. 
The device of claim 1, wherein the processor dynamically switches modes when running a same application. 12. The device of claim 1, wherein the processor dynamically switches modes when a noise associated with the communications link exceeds a predetermined value. 13. A method for communicating with a wireless access point, the method comprising: dynamically switching between a first mode and a second mode of communication, each mode including a first period and a second period; receiving or retrieving data during the first period; and switching to an inactive state during the second period. 14. The method of claim 13, further comprising maintaining association with the wireless access point during the inactive state. 15. The method of claim 13, wherein the second period of the second mode consumes a lower power than the second period of the first mode. 16. The method of claim 13, further comprising receiving push messages during the second period of the first mode. 17. The method of claim 13, wherein the first period of the first mode is spaced at periodic intervals. 18. The method of claim 13, further comprising determining a number of push messages and a number of pull messages within a given time period. 19. The method of claim 18, further comprising switching to the first mode when the number of pull messages is greater than the number of push messages. 20. The method of claim 18, further comprising switching to the second mode when the number of push messages is greater than the number of pull messages. 21. The method of claim 13, further comprising selecting between the first mode and second mode based on at least one of an application, a battery life, and an amount of noise associated with the communications link. 22. The method of claim 13, further comprising: determining whether an application is a red-list application; and selecting between the first mode and the second mode based on the determination. 23. 
The method of claim 13, wherein the processor dynamically switches modes when running a same application. 24. The method of claim 13, wherein the processor dynamically switches modes when a noise associated with the communications link exceeds a predetermined value.
Power optimization modes for communication between a device and a server are disclosed. The device can dynamically change between communication modes based on an application or quality of service, battery life, an amount of noise associated with the communications link, a frequency of messages, and a type of message received in a given time period. In some examples, the device can determine whether the number of pull messages is greater than the number of push messages. The device can select a push mode where a pull message can accompany a push message. In some examples, the device can determine that the number of push messages is greater than the number of pull messages, and the device can select a low-power associated sleep mode.1. A device comprising: a transceiver configured for communicating through a communications link to a wireless access point; and a processor capable of dynamically switching between at least a first mode and a second mode of communication, each mode including a first period and a second period, wherein the processor is further capable of receiving or retrieving data during the first period and switching to an inactive state during the second period. 2. The device of claim 1, wherein the transceiver is configured to remain associated with the wireless access point during the inactive state. 3. The device of claim 1, wherein the second period of the second mode consumes a lower power than the second period of the first mode. 4. The device of claim 1, wherein the transceiver is configured to receive push messages during the second period of the first mode. 5. The device of claim 1, wherein the first period of the first mode is spaced at periodic intervals. 6. The device of claim 1, wherein the processor is further capable of determining a number of push messages and a number of pull messages within a given time period. 7. 
The device of claim 6, wherein the processor is further capable of switching to the first mode when the number of pull messages is greater than the number of push messages. 8. The device of claim 6, wherein the processor is further capable of switching to the second mode when the number of push messages is greater than the number of pull messages. 9. The device of claim 1, wherein the processor is further capable of selecting between the first mode and second mode based on at least one of an application, a battery life, and an amount of noise associated with the communications link. 10. The device of claim 1, wherein the processor is further capable of determining whether an application is a red-list application and selecting between the first mode and second mode based on the determination. 11. The device of claim 1, wherein the processor dynamically switches modes when running a same application. 12. The device of claim 1, wherein the processor dynamically switches modes when a noise associated with the communications link exceeds a predetermined value. 13. A method for communicating with a wireless access point, the method comprising: dynamically switching between a first mode and a second mode of communication, each mode including a first period and a second period; receiving or retrieving data during the first period; and switching to an inactive state during the second period. 14. The method of claim 13, further comprising maintaining association with the wireless access point during the inactive state. 15. The method of claim 13, wherein the second period of the second mode consumes a lower power than the second period of the first mode. 16. The method of claim 13, further comprising receiving push messages during the second period of the first mode. 17. The method of claim 13, wherein the first period of the first mode is spaced at periodic intervals. 18. 
The method of claim 13, further comprising determining a number of push messages and a number of pull messages within a given time period. 19. The method of claim 18, further comprising switching to the first mode when the number of pull messages is greater than the number of push messages. 20. The method of claim 18, further comprising switching to the second mode when the number of push messages is greater than the number of pull messages. 21. The method of claim 13, further comprising selecting between the first mode and second mode based on at least one of an application, a battery life, and an amount of noise associated with the communications link. 22. The method of claim 13, further comprising: determining whether an application is a red-list application; and selecting between the first mode and the second mode based on the determination. 23. The method of claim 13, wherein the processor dynamically switches modes when running a same application. 24. The method of claim 13, wherein the processor dynamically switches modes when a noise associated with the communications link exceeds a predetermined value.
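The mode-selection rule described in this record — more pull than push messages in a time window favors the first mode, more push messages favors the second, lower-power mode — can be sketched as a small decision function. The `Message` type, mode labels, and tie-breaking toward the lower-power mode are assumptions for illustration; the patent does not specify these details.

```python
from dataclasses import dataclass

@dataclass
class Message:
    kind: str  # "push" or "pull" -- hypothetical classification of traffic

def select_mode(window: list) -> str:
    """Choose a communication mode from message counts in a time window.

    More pull than push messages selects the first (polling-style) mode;
    otherwise the second mode, whose inactive period draws less power while
    the transceiver stays associated with the access point.
    """
    pushes = sum(1 for m in window if m.kind == "push")
    pulls = sum(1 for m in window if m.kind == "pull")
    return "first" if pulls > pushes else "second"

# Pull-heavy traffic: polling mode pays off.
assert select_mode([Message("pull"), Message("pull"), Message("push")]) == "first"
# Push-heavy traffic: sleep longer, let the network wake the device.
assert select_mode([Message("push"), Message("push"), Message("pull")]) == "second"
```

In a real device this counter would be one input among several (battery life, link noise, application class), per the abstract above.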
2,400
7,460
7,460
15,203,449
2,456
Email tags are described. In embodiments, email messages are received for distribution to client devices that correspond to respective recipients of the email messages. Email routing decisions are applied to route an email message to an email folder for a recipient of the email message, where the email folder may include an email inbox, a junk folder, or a user-created folder. The email message is then tagged with an email tag to generate a tagged email message. The email tag includes a routing description that indicates why the email message was routed to the particular email folder.
1-20. (canceled) 21. A computing device, comprising: one or more processors; and a memory comprising instructions stored thereon that, responsive to execution by the one or more processors, implement an email application, the email application configured to: cause display of a tagged email message and a selectable information control that correlates to an email tag for the tagged email message; and in response to an initiation of the selectable information control, cause display of a textual description that indicates that an email filter was applied to the tagged email message. 22. The computing device as recited in claim 21, wherein the email application is configured to receive the tagged email message from an email distribution service. 23. The computing device as recited in claim 21, wherein the email application is configured to receive an input to initiate the selectable information control. 24. The computing device as recited in claim 21, wherein the email application is configured to receive an input to initiate the selectable information control by detecting a pointer that correlates to an input device when the pointer is displayed proximate the selectable information control. 25. The computing device as recited in claim 21, wherein the email application is configured to receive an input to initiate the selectable information control by detecting touch input to an area of the display proximate the selectable information control. 26. The computing device as recited in claim 21, wherein the email application is configured to receive an input to modify the email filter that was applied to the tagged email message. 27. The computing device as recited in claim 21, wherein the email application is configured to communicate the input as feedback to the email distribution service. 28. 
The computing device as recited in claim 21, wherein the textual description includes a selectable link to modify the email filter if the email filter was incorrectly applied to the tagged email message. 29. A computer-implemented method, comprising: displaying a tagged email message and a selectable information control that correlates to an email tag for the tagged email message; and in response to an initiation of the selectable information control, displaying a textual description that indicates that an email filter was applied to the tagged email message. 30. The computer-implemented method as recited in claim 29, further comprising receiving, at an email application implemented at a client device, the tagged email message from an email distribution service. 31. The computer-implemented method as recited in claim 29, further comprising detecting a pointer that correlates to an input device when the pointer is displayed proximate the selectable information control. 32. The computer-implemented method as recited in claim 29, further comprising receiving input to modify the email filter that was applied to the tagged email message. 33. The computer-implemented method as recited in claim 29, further comprising communicating the input as feedback to the email distribution service. 34. The computer-implemented method as recited in claim 29, further comprising displaying a visual indication of the email tag to warn the recipient that the email filter may have been incorrectly applied to the email message. 35. The computer-implemented method as recited in claim 29, wherein the textual description includes a selectable link to modify the email filter if the email filter was incorrectly applied to the tagged email message. 36. 
One or more computer-readable storage devices comprising instructions stored thereon that, responsive to execution by a processor, perform operations comprising: receiving, at an email application implemented at a client device, a tagged email message from an email distribution service; displaying the tagged email message and a selectable information control that correlates to an email tag for the tagged email message; and in response to an initiation of the selectable information control, displaying a textual description that indicates that an email filter was applied to the tagged email message. 37. The one or more computer-readable storage devices as recited in claim 36, wherein the instructions, responsive to execution by the processor, perform operations further comprising detecting a pointer that correlates to an input device when the pointer is displayed proximate the selectable information control. 38. The one or more computer-readable storage devices as recited in claim 36, wherein the instructions, responsive to execution by the processor, perform operations further comprising receiving input to modify the email filter that was applied to the tagged email message. 39. The one or more computer-readable storage devices as recited in claim 36, wherein the instructions, responsive to execution by the processor, perform operations further comprising displaying a visual indication of the email tag to warn the recipient that the email filter may have been incorrectly applied to the email message. 40. The computer-implemented method as recited in claim 36, wherein the textual description includes a selectable link to modify the email filter if the email filter was incorrectly applied to the tagged email message.
Email tags are described. In embodiments, email messages are received for distribution to client devices that correspond to respective recipients of the email messages. Email routing decisions are applied to route an email message to an email folder for a recipient of the email message, where the email folder may include an email inbox, a junk folder, or a user-created folder. The email message is then tagged with an email tag to generate a tagged email message. The email tag includes a routing description that indicates why the email message was routed to the particular email folder.1-20. (canceled) 21. A computing device, comprising: one or more processors; and a memory comprising instructions stored thereon that, responsive to execution by the one or more processors, implement an email application, the email application configured to: cause display of a tagged email message and a selectable information control that correlates to an email tag for the tagged email message; and in response to an initiation of the selectable information control, cause display of a textual description that indicates that an email filter was applied to the tagged email message. 22. The computing device as recited in claim 21, wherein the email application is configured to receive the tagged email message from an email distribution service. 23. The computing device as recited in claim 21, wherein the email application is configured to receive an input to initiate the selectable information control. 24. The computing device as recited in claim 21, wherein the email application is configured to receive an input to initiate the selectable information control by detecting a pointer that correlates to an input device when the pointer is displayed proximate the selectable information control. 25. 
The computing device as recited in claim 21, wherein the email application is configured to receive an input to initiate the selectable information control by detecting touch input to an area of the display proximate the selectable information control. 26. The computing device as recited in claim 21, wherein the email application is configured to receive an input to modify the email filter that was applied to the tagged email message. 27. The computing device as recited in claim 21, wherein the email application is configured to communicate the input as feedback to the email distribution service. 28. The computing device as recited in claim 21, wherein the textual description includes a selectable link to modify the email filter if the email filter was incorrectly applied to the tagged email message. 29. A computer-implemented method, comprising: displaying a tagged email message and a selectable information control that correlates to an email tag for the tagged email message; and in response to an initiation of the selectable information control, displaying a textual description that indicates that an email filter was applied to the tagged email message. 30. The computer-implemented method as recited in claim 29, further comprising receiving, at an email application implemented at a client device, the tagged email message from an email distribution service. 31. The computer-implemented method as recited in claim 29, further comprising detecting a pointer that correlates to an input device when the pointer is displayed proximate the selectable information control. 32. The computer-implemented method as recited in claim 29, further comprising receiving input to modify the email filter that was applied to the tagged email message. 33. The computer-implemented method as recited in claim 29, further comprising communicating the input as feedback to the email distribution service. 34. 
The computer-implemented method as recited in claim 29, further comprising displaying a visual indication of the email tag to warn the recipient that the email filter may have been incorrectly applied to the email message. 35. The computer-implemented method as recited in claim 29, wherein the textual description includes a selectable link to modify the email filter if the email filter was incorrectly applied to the tagged email message. 36. One or more computer-readable storage devices comprising instructions stored thereon that, responsive to execution by a processor, perform operations comprising: receiving, at an email application implemented at a client device, a tagged email message from an email distribution service; displaying the tagged email message and a selectable information control that correlates to an email tag for the tagged email message; and in response to an initiation of the selectable information control, displaying a textual description that indicates that an email filter was applied to the tagged email message. 37. The one or more computer-readable storage devices as recited in claim 36, wherein the instructions, responsive to execution by the processor, perform operations further comprising detecting a pointer that correlates to an input device when the pointer is displayed proximate the selectable information control. 38. The one or more computer-readable storage devices as recited in claim 36, wherein the instructions, responsive to execution by the processor, perform operations further comprising receiving input to modify the email filter that was applied to the tagged email message. 39. The one or more computer-readable storage devices as recited in claim 36, wherein the instructions, responsive to execution by the processor, perform operations further comprising displaying a visual indication of the email tag to warn the recipient that the email filter may have been incorrectly applied to the email message. 40. 
The computer-implemented method as recited in claim 36, wherein the textual description includes a selectable link to modify the email filter if the email filter was incorrectly applied to the tagged email message.
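The email-tag record above describes routing a message to a folder and attaching a tag whose textual description explains which filter fired, so the client can display it behind a selectable control. A minimal sketch of the server-side tagging step follows; the filter tuple shape, field names, and sample filter are hypothetical, not the patented format.

```python
def route_and_tag(message: dict, filters: list) -> dict:
    """Apply routing filters in order; tag the message with a routing
    description naming the first filter that matched, so the recipient's
    client can show why the message landed in that folder."""
    for name, predicate, folder in filters:
        if predicate(message):
            message["folder"] = folder
            message["tag"] = f"Routed to {folder}: filter '{name}' matched"
            return message
    message["folder"] = "inbox"
    message["tag"] = "No filter matched; delivered to inbox"
    return message

# Hypothetical filter list: (name, predicate, destination folder).
filters = [
    ("bulk-sender", lambda m: "unsubscribe" in m["body"].lower(), "junk"),
]

msg = route_and_tag({"body": "Click here to unsubscribe"}, filters)
assert msg["folder"] == "junk"
assert "bulk-sender" in msg["tag"]
```

The tag text is what claim 21's "selectable information control" would reveal, and keeping the filter name in it is what makes a "modify this filter" link (claim 28) possible.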
2,400
7,461
7,461
14,143,499
2,459
A capability is provided for performing distributed multi-level stateless load balancing. The stateless load balancing may be performed for load balancing of connections of a stateful-connection protocol (e.g., Transmission Control Protocol (TCP) connections, Stream Control Transmission Protocol (SCTP) connections, or the like). The stateless load balancing may be distributed across multiple hierarchical levels. The multiple hierarchical levels may be distributed across multiple network locations, geographic locations, or the like.
1. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: receive an initial connection packet of a stateful-connection protocol at a first load balancer configured to perform load balancing across a set of processing elements, the initial connection packet of the stateful-connection protocol configured to request establishment of a stateful connection; and perform a load balancing operation at the first load balancer to control forwarding of the initial connection packet of the stateful-connection protocol toward a set of second load balancers configured to perform load balancing across respective subsets of processing elements of the set of processing elements. 2. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: select one of the second load balancers in the set of second load balancers; and forward the initial connection packet toward the selected one of the second load balancers. 3. The apparatus of claim 2, wherein the processor is configured to select the one of the second load balancers based on at least one of a round-robin selection scheme, a calculation associated with the one of the second load balancers, or status information associated with the one of the second load balancers. 4. The apparatus of claim 2, wherein the processor is configured to: prior to forwarding the initial connection packet toward the selected one of the second load balancers, modify the initial connection packet to include an identifier of the first load balancer. 5. The apparatus of claim 2, wherein the processor is configured to: receive, from the selected second load balancer, an initial connection response packet generated by one of the processing elements based on the initial connection packet. 6. 
The apparatus of claim 5, wherein the initial connection packet is received from a client, wherein the processor is configured to: propagate the initial connection response packet toward the client. 7. The apparatus of claim 5, wherein the initial connection response packet comprises an identifier of the one of the processing elements. 8. The apparatus of claim 7, wherein the initial connection packet is received from a client, wherein the processor is configured to: receive, from the client, a subsequent packet of the stateful-connection protocol, the subsequent packet associated with a connection established between the client and the one of the processing elements based on the initial connection packet, wherein the subsequent packet comprises the identifier of the one of the processing elements; and forward the subsequent packet toward the one of the processing elements, based on the identifier of the one of the processing elements, independent of the set of second load balancers. 9. The apparatus of claim 5, wherein the initial connection response packet comprises status information for the one of the processing elements. 10. The apparatus of claim 9, wherein the processor is configured to: update aggregate status information for the selected second load balancer based on the status information for the one of the processing elements. 11. 
The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: initiate a query to obtain a set of addresses of the respective second load balancers in the set of second load balancers and status information associated with the respective second load balancers in the set of second load balancers; select one of the second load balancers in the set of second load balancers based on the status information associated with the second load balancers in the set of second load balancers; and forward the initial connection packet of the stateful-connection protocol toward the selected one of the second load balancers based on the address of the selected one of the second load balancers. 12. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: broadcast the initial connection packet of the stateful-connection protocol toward each of the second load balancers in the set of second load balancers based on a broadcast address assigned for the second load balancers in the set of second load balancers. 13. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: multicast the initial connection packet of the stateful-connection protocol toward a multicast group including two or more of the second load balancers in the set of second load balancers based on a forged multicast address assigned for the second load balancers in the multicast group. 14. 
The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: forward the initial connection packet of the stateful-connection protocol toward two or more of the second load balancers in the set of second load balancers; receive two or more initial connection response packets of the stateful-connection protocol responsive to forwarding of the initial connection packet of the stateful-connection protocol toward the two or more of the second load balancers; and forward one of the initial connection response packets that is received first without forwarding any other of the initial connection response packets. 15. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: forward the initial connection packet of the stateful-connection protocol toward a first one of the second load balancers in the set of second load balancers; and forward the initial connection packet of the stateful-connection protocol toward a second one of the second load balancers in the set of second load balancers based on a determination that a successful response to the initial connection packet of the stateful-connection protocol is not received responsive to forwarding of the initial connection packet of the stateful-connection protocol toward the first one of the second load balancers in the set of second load balancers. 16. The apparatus of claim 1, wherein the processor is configured to: determine, based on status information associated with at least one of the processing elements in the set of processing elements, whether to modify the set of processing elements. 17. 
The apparatus of claim 1, wherein the processor is configured to: based on a determination to terminate a given processing element from the set of processing elements: prevent forwarding of subsequent packets of the stateful-connection protocol toward the given processing element; monitor a number of open sockets of the given processing element; and initiate termination of the given processing element based on a determination that the number of open sockets of the given processing element is indicative that the given processing element is idle. 18. The apparatus of claim 1, wherein one of: the first load balancer is associated with a network device of a communication network and the second load balancers are associated with respective elements of one or more datacenters; the first load balancer is associated with a network device of a datacenter network and the second load balancers are associated with respective racks of the datacenter network; the first load balancer is associated with a rack of a datacenter network and the second load balancers are associated with respective servers of the rack; or the first load balancer is associated with a server of a datacenter network and the second load balancers are associated with respective processors of the server. 19. A method, comprising: using a processor and a memory for: receiving an initial connection packet of a stateful-connection protocol at a first load balancer configured to perform load balancing across a set of processing elements, the initial connection packet of the stateful-connection protocol configured to request establishment of a stateful connection; and performing a load balancing operation at the first load balancer to control forwarding of the initial connection packet of the stateful-connection protocol toward a set of second load balancers configured to perform load balancing across respective subsets of processing elements of the set of processing elements. 20. 
A computer-readable storage medium storing instructions which, when executed by a computer, cause the computer to perform a method, the method comprising: receiving an initial connection packet of a stateful-connection protocol at a first load balancer configured to perform load balancing across a set of processing elements, the initial connection packet of the stateful-connection protocol configured to request establishment of a stateful connection; and performing a load balancing operation at the first load balancer to control forwarding of the initial connection packet of the stateful-connection protocol toward a set of second load balancers configured to perform load balancing across respective subsets of processing elements of the set of processing elements.
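The drain-before-terminate behavior recited in claim 17 above (stop forwarding new packets to a processing element, monitor its open sockets, terminate once idle) can be sketched as follows. This is a minimal illustration only, assuming a simple polling model and an idle condition of zero open sockets; the claim itself fixes neither.

```python
# Hypothetical sketch (not the patented implementation): drain a processing
# element before terminating it, as in claim 17.
class ElementDrainer:
    def __init__(self, element_id):
        self.element_id = element_id
        self.draining = False
        self.open_sockets = 0   # would be fed by real socket monitoring

    def start_drain(self):
        # prevent forwarding of subsequent packets toward this element
        self.draining = True

    def accepts_new_connections(self):
        return not self.draining

    def poll(self):
        # monitor the number of open sockets; report True when the element
        # is idle and safe to terminate (assumed idle condition: 0 sockets)
        return self.draining and self.open_sockets == 0

d = ElementDrainer("pe-1")
d.open_sockets = 3
d.start_drain()
print(d.accepts_new_connections(), d.poll())  # False False
d.open_sockets = 0
print(d.poll())                               # True
```

The two-step check mirrors the claim's ordering: forwarding is cut off first, and termination is only initiated once the socket count indicates the element is idle.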
A capability is provided for performing distributed multi-level stateless load balancing. The stateless load balancing may be performed for load balancing of connections of a stateful-connection protocol (e.g., Transmission Control Protocol (TCP) connections, Stream Control Transmission Protocol (SCTP) connections, or the like). The stateless load balancing may be distributed across multiple hierarchical levels. The multiple hierarchical levels may be distributed across multiple network locations, geographic locations, or the like.1. An apparatus, comprising: a processor and a memory communicatively connected to the processor, the processor configured to: receive an initial connection packet of a stateful-connection protocol at a first load balancer configured to perform load balancing across a set of processing elements, the initial connection packet of the stateful-connection protocol configured to request establishment of a stateful connection; and perform a load balancing operation at the first load balancer to control forwarding of the initial connection packet of the stateful-connection protocol toward a set of second load balancers configured to perform load balancing across respective subsets of processing elements of the set of processing elements. 2. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: select one of the second load balancers in the set of second load balancers; and forward the initial connection packet toward the selected one of the second load balancers. 3. 
The apparatus of claim 2, wherein the processor is configured to select the one of the second load balancers based on at least one of a round-robin selection scheme, a calculation associated with the one of the second load balancers, or status information associated with the one of the second load balancers. 4. The apparatus of claim 2, wherein the processor is configured to: prior to forwarding the initial connection packet toward the selected one of the second load balancers, modify the initial connection packet to include an identifier of the first load balancer. 5. The apparatus of claim 2, wherein the processor is configured to: receive, from the selected second load balancer, an initial connection response packet generated by one of the processing elements based on the initial connection packet. 6. The apparatus of claim 5, wherein the initial connection packet is received from a client, wherein the processor is configured to: propagate the initial connection response packet toward the client. 7. The apparatus of claim 5, wherein the initial connection response packet comprises an identifier of the one of the processing elements. 8. The apparatus of claim 7, wherein the initial connection packet is received from a client, wherein the processor is configured to: receive, from the client, a subsequent packet of the stateful-connection protocol, the subsequent packet associated with a connection established between the client and the one of the processing elements based on the initial connection packet, wherein the subsequent packet comprises the identifier of the one of the processing elements; and forward the subsequent packet toward the one of the processing elements, based on the identifier of the one of the processing elements, independent of the set of second load balancers. 9. The apparatus of claim 5, wherein the initial connection response packet comprises status information for the one of the processing elements. 10. 
The apparatus of claim 9, wherein the processor is configured to: update aggregate status information for the selected second load balancer based on the status information for the one of the processing elements. 11. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: initiate a query to obtain a set of addresses of the respective second load balancers in the set of second load balancers and status information associated with the respective second load balancers in the set of second load balancers; select one of the second load balancers in the set of second load balancers based on the status information associated with the second load balancers in the set of second load balancers; and forward the initial connection packet of the stateful-connection protocol toward the selected one of the second load balancers based on the address of the selected one of the second load balancers. 12. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: broadcast the initial connection packet of the stateful-connection protocol toward each of the second load balancers in the set of second load balancers based on a broadcast address assigned for the second load balancers in the set of second load balancers. 13. 
The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: multicast the initial connection packet of the stateful-connection protocol toward a multicast group including two or more of the second load balancers in the set of second load balancers based on a forged multicast address assigned for the second load balancers in the multicast group. 14. The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: forward the initial connection packet of the stateful-connection protocol toward two or more of the second load balancers in the set of second load balancers; receive two or more initial connection response packets of the stateful-connection protocol responsive to forwarding of the initial connection packet of the stateful-connection protocol toward the two or more of the second load balancers; and forward one of the initial connection response packets that is received first without forwarding any other of the initial connection response packets. 15. 
The apparatus of claim 1, wherein, to perform the load balancing operation to control forwarding of the initial connection packet of the stateful-connection protocol toward the set of second load balancers, the processor is configured to: forward the initial connection packet of the stateful-connection protocol toward a first one of the second load balancers in the set of second load balancers; and forward the initial connection packet of the stateful-connection protocol toward a second one of the second load balancers in the set of second load balancers based on a determination that a successful response to the initial connection packet of the stateful-connection protocol is not received responsive to forwarding of the initial connection packet of the stateful-connection protocol toward the first one of the second load balancers in the set of second load balancers. 16. The apparatus of claim 1, wherein the processor is configured to: determine, based on status information associated with at least one of the processing elements in the set of processing elements, whether to modify the set of processing elements. 17. The apparatus of claim 1, wherein the processor is configured to: based on a determination to terminate a given processing element from the set of processing elements: prevent forwarding of subsequent packets of the stateful-connection protocol toward the given processing element; monitor a number of open sockets of the given processing element; and initiate termination of the given processing element based on a determination that the number of open sockets of the given processing element is indicative that the given processing element is idle. 18. 
The apparatus of claim 1, wherein one of: the first load balancer is associated with a network device of a communication network and the second load balancers are associated with respective elements of one or more datacenters; the first load balancer is associated with a network device of a datacenter network and the second load balancers are associated with respective racks of the datacenter network; the first load balancer is associated with a rack of a datacenter network and the second load balancers are associated with respective servers of the rack; or the first load balancer is associated with a server of a datacenter network and the second load balancers are associated with respective processors of the server. 19. A method, comprising: using a processor and a memory for: receiving an initial connection packet of a stateful-connection protocol at a first load balancer configured to perform load balancing across a set of processing elements, the initial connection packet of the stateful-connection protocol configured to request establishment of a stateful connection; and performing a load balancing operation at the first load balancer to control forwarding of the initial connection packet of the stateful-connection protocol toward a set of second load balancers configured to perform load balancing across respective subsets of processing elements of the set of processing elements. 20. 
A computer-readable storage medium storing instructions which, when executed by a computer, cause the computer to perform a method, the method comprising: receiving an initial connection packet of a stateful-connection protocol at a first load balancer configured to perform load balancing across a set of processing elements, the initial connection packet of the stateful-connection protocol configured to request establishment of a stateful connection; and performing a load balancing operation at the first load balancer to control forwarding of the initial connection packet of the stateful-connection protocol toward a set of second load balancers configured to perform load balancing across respective subsets of processing elements of the set of processing elements.
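The two-level stateless scheme described by this record's abstract and claims can be sketched as follows: a first-level balancer selects a second-level balancer for each initial connection packet (here by the round-robin scheme of claim 3), while subsequent packets carry the processing-element identifier returned in the initial connection response (claims 7 and 8) and are forwarded directly, independent of the second level. The packet representation and identifier field are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch (not the patented implementation) of a first-level
# stateless load balancer in the distributed multi-level scheme.
import itertools

class FirstLevelBalancer:
    def __init__(self, second_level_ids, elements):
        self._rr = itertools.cycle(second_level_ids)  # round-robin selection scheme
        self._elements = elements                     # element id -> element address

    def handle(self, packet):
        if packet.get("syn"):
            # initial connection packet: forward toward a second-level balancer
            return ("second_level", next(self._rr))
        # subsequent packet: route by the processing-element identifier echoed
        # back in the initial connection response, bypassing the second level
        return ("element", self._elements[packet["element_id"]])

lb = FirstLevelBalancer(["lb-a", "lb-b"], {"pe-1": "10.0.0.1"})
print(lb.handle({"syn": True}))           # ('second_level', 'lb-a')
print(lb.handle({"element_id": "pe-1"}))  # ('element', '10.0.0.1')
```

Because routing of established connections depends only on an identifier carried in the packets themselves, the first-level balancer keeps no per-connection state, which is the point of the "stateless" design.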
2,400
7,462
7,462
14,580,227
2,415
A wireless communication method applied to a beamformer includes: receiving a plurality of reference information corresponding to a plurality of stations, respectively; calculating an evaluation value for each of the stations according to at least one reference information of the plurality of reference information; and comparing a plurality of evaluation values respectively corresponding to the plurality of stations, to select specific stations from the plurality of stations for performing beamforming.
1. A wireless communication method applied to a beamformer, the method comprising: receiving a plurality of reference information corresponding to a plurality of stations, respectively; calculating an evaluation value for each of the stations according to at least one reference information of the plurality of reference information; and comparing a plurality of evaluation values respectively corresponding to the plurality of stations, to select specific stations from the plurality of stations for performing beamforming. 2. The method of claim 1, wherein a number of the plurality of stations is M, a number of the specific stations selected from the plurality of stations for performing beamforming is N, and M and N are both positive integers, wherein N>1 and M>N. 3. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining whether the station has a function of receiving beamforming according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 4. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining a data traffic amount between the station and the beamformer according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 5. 
The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining an encoding/decoding type of the station according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 6. The method of claim 5, wherein the encoding/decoding type of the station is binary convolutional code (BCC), block code, low density parity check (LDPC) code, or turbo code. 7. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining diversity ability of the station according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 8. The method of claim 7, wherein the diversity ability of the station comprises a number of antennas comprised in the station, and whether the station is compatible with maximum ratio combining (MRC) or space-time block code (STBC). 9. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining connection quality between the station and the beamformer according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 10. 
The method of claim 9, wherein the connection quality between the station and the beamformer is determined according to at least one of a received signal strength indicator (RSSI), signal quality (SQ), a signal-to-noise ratio (SNR), error vector magnitude (EVM), channel state information (CSI), a bit error rate (BER), and a packet error rate (PER). 11. The method of claim 1, wherein the step of receiving the plurality of reference information respectively corresponding to the plurality of stations comprises: dynamically updating the received plurality of reference information. 12. The method of claim 11, wherein the step of dynamically updating the received plurality of reference information comprises: updating the received plurality of reference information in each fixed time period. 13. The method of claim 11, wherein the step of dynamically updating the received plurality of reference information comprises: updating the received plurality of reference information when a number of stations compatible with beamforming changes. 14. The method of claim 11, wherein the step of dynamically updating the received plurality of reference information comprises: updating the received plurality of reference information when a state of at least one specific station which receives beamforming changes. 15. The method of claim 14, wherein the step of dynamically updating the received plurality of reference information comprises: determining that the state of the specific station receiving beamforming is changed when at least one of the following conditions is met: connection between the beamformer and the specific station is interrupted, a data traffic amount between the station and the specific station is lower than a threshold, connection quality between the beamformer and the specific station changes, and the specific station cannot receive a sounding packet. 16. 
The method of claim 1, wherein the beamformer is an access point (AP), a router, or a station set in an independent basic service set (IBSS) mode. 17. A beamformer comprising: a receiving circuit, arranged for receiving a plurality of reference information corresponding to a plurality of stations, respectively; and a control circuit, arranged for calculating an evaluation value for each of the stations according to at least one reference information of the plurality of reference information, and comparing a plurality of evaluation values respectively corresponding to the plurality of stations, to select specific stations from the plurality of stations for performing beamforming. 18. The beamformer of claim 17, wherein the plurality of reference information is whether a station has a function of beamforming, data traffic amount between a station and the beamformer, an encoding/decoding type of a station, diversity ability of a station, or connection quality between a station and the beamformer. 19. The beamformer of claim 17, wherein the control circuit dynamically updates the received plurality of reference information. 20. The beamformer of claim 17, being an access point (AP), a router, or a station set in an independent basic service set (IBSS) mode.
A wireless communication method applied to a beamformer includes: receiving a plurality of reference information corresponding to a plurality of stations, respectively; calculating an evaluation value for each of the stations according to at least one reference information of the plurality of reference information; and comparing a plurality of evaluation values respectively corresponding to the plurality of stations, to select specific stations from the plurality of stations for performing beamforming.1. A wireless communication method applied to a beamformer, the method comprising: receiving a plurality of reference information corresponding to a plurality of stations, respectively; calculating an evaluation value for each of the stations according to at least one reference information of the plurality of reference information; and comparing a plurality of evaluation values respectively corresponding to the plurality of stations, to select specific stations from the plurality of stations for performing beamforming. 2. The method of claim 1, wherein a number of the plurality of stations is M, a number of the specific stations selected from the plurality of stations for performing beamforming is N, and M and N are both positive integers, wherein N>1 and M>N. 3. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining whether the station has a function of receiving beamforming according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 4. 
The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining a data traffic amount between the station and the beamformer according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 5. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining an encoding/decoding type of the station according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 6. The method of claim 5, wherein the encoding/decoding type of the station is binary convolutional code (BCC), block code, low density parity check (LDPC) code, or turbo code. 7. The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining diversity ability of the station according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 8. The method of claim 7, wherein the diversity ability of the station comprises a number of antennas comprised in the station, and whether the station is compatible with maximum ratio combining (MRC) or space-time block code (STBC). 9. 
The method of claim 1, wherein the step of calculating the evaluation value for each of the stations according to the reference information of the plurality of reference information comprises: determining connection quality between the station and the beamformer according to the plurality of reference information, to generate a judgment result; and determining the evaluation value according to at least the judgment result. 10. The method of claim 9, wherein the connection quality between the station and the beamformer is determined according to at least one of a received signal strength indicator (RSSI), signal quality (SQ), a signal-to-noise ratio (SNR), error vector magnitude (EVM), channel state information (CSI), a bit error rate (BER), and a packet error rate (PER). 11. The method of claim 1, wherein the step of receiving the plurality of reference information respectively corresponding to the plurality of stations comprises: dynamically updating the received plurality of reference information. 12. The method of claim 11, wherein the step of dynamically updating the received plurality of reference information comprises: updating the received plurality of reference information in each fixed time period. 13. The method of claim 11, wherein the step of dynamically updating the received plurality of reference information comprises: updating the received plurality of reference information when a number of stations compatible with beamforming changes. 14. The method of claim 11, wherein the step of dynamically updating the received plurality of reference information comprises: updating the received plurality of reference information when a state of at least one specific station which receives beamforming changes. 15. 
The method of claim 14, wherein the step of dynamically updating the received plurality of reference information comprises: determining that the state of the specific station receiving beamforming is changed when at least one of the following conditions is met: connection between the beamformer and the specific station is interrupted, a data traffic amount between the station and the specific station is lower than a threshold, connection quality between the beamformer and the specific station changes, and the specific station cannot receive a sounding packet. 16. The method of claim 1, wherein the beamformer is an access point (AP), a router, or a station set in an independent basic service set (IBSS) mode. 17. A beamformer comprising: a receiving circuit, arranged for receiving a plurality of reference information corresponding to a plurality of stations, respectively; and a control circuit, arranged for calculating an evaluation value for each of the stations according to at least one reference information of the plurality of reference information, and comparing a plurality of evaluation values respectively corresponding to the plurality of stations, to select specific stations from the plurality of stations for performing beamforming. 18. The beamformer of claim 17, wherein the plurality of reference information is whether a station has a function of beamforming, data traffic amount between a station and the beamformer, an encoding/decoding type of a station, diversity ability of a station, or connection quality between a station and the beamformer. 19. The beamformer of claim 17, wherein the control circuit dynamically updates the received plurality of reference information. 20. The beamformer of claim 17, being an access point (AP), a router, or a station set in an independent basic service set (IBSS) mode.
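The station-selection step at the core of this record's abstract (score every station from its reference information, then keep the top N for beamforming) can be sketched as below. The scoring weights, field names, and reference-information fields are illustrative assumptions; the claims only require that an evaluation value be derived from reference information such as beamforming capability, traffic amount, and connection quality.

```python
# Hypothetical sketch (not the patented implementation) of beamformee
# selection: compute an evaluation value per station and pick the top N.

def evaluation_value(ref):
    score = 0.0
    if ref["supports_beamforming"]:   # capability judgment, as in claim 3
        score += 100.0                # assumed weight favoring capable stations
    score += 0.5 * ref["traffic_mbps"]  # data traffic amount, as in claim 4
    score += ref["snr_db"]              # connection quality, as in claims 9-10
    return score

def select_beamformees(stations, n):
    # compare the evaluation values and select N specific stations
    ranked = sorted(stations, key=lambda s: evaluation_value(s[1]), reverse=True)
    return [name for name, _ in ranked[:n]]

stations = [
    ("sta1", {"supports_beamforming": True,  "traffic_mbps": 20, "snr_db": 25}),
    ("sta2", {"supports_beamforming": False, "traffic_mbps": 50, "snr_db": 30}),
    ("sta3", {"supports_beamforming": True,  "traffic_mbps": 5,  "snr_db": 15}),
]
print(select_beamformees(stations, 2))  # ['sta1', 'sta3']
```

Under the assumed weights, a high-traffic station that cannot receive beamforming (sta2) loses out to two capable stations, matching the claims' emphasis on checking the beamforming-reception capability first.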
2,400
7,463
7,463
13,495,950
2,459
In one embodiment, a method receives a request for a video stream of video content from a client. A playlist for the video stream is retrieved. The playlist is for a plurality of portions of video content. A traffic shaping service adjusts the playlist for a set of portions in the plurality of portions according to a set of rules where adjusting allows the service to perform traffic shaping for the set of portions. The method then sends the adjusted playlist to the client. During playback of the video content at the client, the traffic shaping service receives a request for a portion in the set of portions from the client using the adjusted playlist. The method determines a rule to apply to the portion where the rule is associated with a network condition and simulates the network condition for the portion to perform the traffic shaping service.
1. A method comprising: receiving a request for a video stream of video content from a client; retrieving a playlist for the video stream, wherein the playlist is for a plurality of portions of video content for the video stream; adjusting, at a traffic shaping service, the playlist for a set of portions in the plurality of portions according to a set of rules, wherein adjusting allows the service to perform traffic shaping for the set of portions; sending the adjusted playlist to the client; during playback of the video content at the client, receiving, at the traffic shaping service, a request for a portion in the set of portions from the client using the adjusted playlist; determining a rule to apply to the portion, wherein the rule is associated with a network condition; and simulating the network condition for the portion to perform the traffic shaping service. 2. The method of claim 1, wherein adjusting comprises: determining portions associated with the set of rules; and for each portion associated with the set of rules, adjusting the playlist such that the client requests the portion from the service instead of a content delivery network, the content delivery network providing portions of the video content that are not associated with the set of rules. 3. The method of claim 2, wherein adjusting the playlist comprises changing a link for each portion associated with the set of rules such that the request for each portion is sent to the traffic shaping service instead of the content delivery network. 4. The method of claim 1, wherein the request for the video stream identifies the set of rules, the method further comprising: parsing the set of rules to determine a set of parameters for the set of rules; determining portions that are applicable to the set of rules based on the set of parameters; and performing the adjusting of the playlist for the determined portions based on the set of parameters. 5. 
The method of claim 1, wherein retrieving comprises requesting the playlist for the video stream from a server. 6. The method of claim 1, further comprising: requesting the set of portions from a content delivery network; receiving the set of portions from the content delivery network; sending the set of portions at the traffic shaping service, wherein the set of portions are sent from the traffic shaping service to the client and portions not in the set of portions are sent to the client from the content delivery network. 7. The method of claim 6, wherein portions not in the set of portions that are sent to the client from the content delivery network source are not traffic shaped. 8. The method of claim 1, wherein simulating the network condition comprises sending a traffic shaped portion to the client in a method that simulates the network condition. 9. The method of claim 1, wherein simulating the network condition comprises sending an error code instead of the portion. 10. A non-transitory computer-readable storage medium containing instructions, that when executed, control a computer system to be operable for: receiving a request for a video stream of video content from a client; retrieving a playlist for the video stream, wherein the playlist is for a plurality of portions of video content for the video stream; adjusting, at a traffic shaping service, the playlist for a set of portions in the plurality of portions according to a set of rules, wherein adjusting allows the service to perform traffic shaping for the set of portions; sending the adjusted playlist to the client; during playback of the video content at the client, receiving, at the traffic shaping service, a request for a portion in the set of portions from the client using the adjusted playlist; determining a rule to apply to the portion, wherein the rule is associated with a network condition; and simulating the network condition for the portion to perform the traffic shaping service. 11. 
The computer-readable storage medium of claim 10, wherein adjusting comprises: determining portions associated with the set of rules; and for each portion associated with the set of rules, adjusting the playlist such that the client requests the portion from the service instead of a content delivery network, the content delivery network providing portions of the video content that are not associated with the set of rules. 12. The computer-readable storage medium of claim 11, wherein adjusting the playlist comprises changing a link for each portion associated with the set of rules such that the request for each portion is sent to the traffic shaping service instead of the content delivery network. 13. The computer-readable storage medium of claim 10, wherein the request for the video stream identifies the set of rules, further operable for: parsing the set of rules to determine a set of parameters for the set of rules; determining portions that are applicable to the set of rules based on the set of parameters; and performing the adjusting of the playlist for the determined portions based on the set of parameters. 14. The computer-readable storage medium of claim 10, wherein retrieving comprises requesting the playlist for the video stream from a server. 15. The computer-readable storage medium of claim 10, further operable for: requesting the set of portions from a content delivery network; receiving the set of portions from the content delivery network; sending the set of portions at the traffic shaping service, wherein the set of portions are sent from the traffic shaping service to the client and portions not in the set of portions are sent to the client from the content delivery network. 16. The computer-readable storage medium of claim 15, wherein portions not in the set of portions that are sent to the client from the content delivery network source are not traffic shaped. 17. 
The computer-readable storage medium of claim 10, wherein simulating the network condition comprises sending a traffic shaped portion to the client in a method that simulates the network condition. 18. The computer-readable storage medium of claim 10, wherein simulating the network condition comprises sending an error code instead of the portion. 19. An apparatus comprising: one or more computer processors; and a computer-readable storage medium comprising instructions, that when executed, control the one or more computer processors to be operable for: receiving a request for a video stream of video content from a client; retrieving a playlist for the video stream, wherein the playlist is for a plurality of portions of video content for the video stream; adjusting, at a traffic shaping service, the playlist for a set of portions in the plurality of portions according to a set of rules, wherein adjusting allows the service to perform traffic shaping for the set of portions; sending the adjusted playlist to the client; during playback of the video content at the client, receiving, at the traffic shaping service, a request for a portion in the set of portions from the client using the adjusted playlist; determining a rule to apply to the portion, wherein the rule is associated with a network condition; and simulating the network condition for the portion to perform the traffic shaping service. 20. The apparatus of claim 19, wherein adjusting comprises: determining portions associated with the set of rules; and for each portion associated with the set of rules, adjusting the playlist such that the client requests the portion from the service instead of a content delivery network, the content delivery network providing portions of the video content that are not associated with the set of rules.
In one embodiment, a method receives a request for a video stream of video content from a client. A playlist for the video stream is retrieved. The playlist is for a plurality of portions of video content. A traffic shaping service adjusts the playlist for a set of portions in the plurality of portions according to a set of rules where adjusting allows the service to perform traffic shaping for the set of portions. The method then sends the adjusted playlist to the client. During playback of the video content at the client, the traffic shaping service receives a request for a portion in the set of portions from the client using the adjusted playlist. The method determines a rule to apply to the portion where the rule is associated with a network condition and simulates the network condition for the portion to perform the traffic shaping service.1. A method comprising: receiving a request for a video stream of video content from a client; retrieving a playlist for the video stream, wherein the playlist is for a plurality of portions of video content for the video stream; adjusting, at a traffic shaping service, the playlist for a set of portions in the plurality of portions according to a set of rules, wherein adjusting allows the service to perform traffic shaping for the set of portions; sending the adjusted playlist to the client; during playback of the video content at the client, receiving, at the traffic shaping service, a request for a portion in the set of portions from the client using the adjusted playlist; determining a rule to apply to the portion, wherein the rule is associated with a network condition; and simulating the network condition for the portion to perform the traffic shaping service. 2. 
The method of claim 1, wherein adjusting comprises: determining portions associated with the set of rules; and for each portion associated with the set of rules, adjusting the playlist such that the client requests the portion from the service instead of a content delivery network, the content delivery network providing portions of the video content that are not associated with the set of rules. 3. The method of claim 2, wherein adjusting the playlist comprises changing a link for each portion associated with the set of rules such that the request for each portion is sent to the traffic shaping service instead of the content delivery network. 4. The method of claim 1, wherein the request for the video stream identifies the set of rules, the method further comprising: parsing the set of rules to determine a set of parameters for the set of rules; determining portions that are applicable to the set of rules based on the set of parameters; and performing the adjusting of the playlist for the determined portions based on the set of parameters. 5. The method of claim 1, wherein retrieving comprises requesting the playlist for the video stream from a server. 6. The method of claim 1, further comprising: requesting the set of portions from a content delivery network; receiving the set of portions from the content delivery network; sending the set of portions at the traffic shaping service, wherein the set of portions are sent from the traffic shaping service to the client and portions not in the set of portions are sent to the client from the content delivery network. 7. The method of claim 6, wherein portions not in the set of portions that are sent to the client from the content delivery network source are not traffic shaped. 8. The method of claim 1, wherein simulating the network condition comprises sending a traffic shaped portion to the client in a manner that simulates the network condition. 9. 
The method of claim 1, wherein simulating the network condition comprises sending an error code instead of the portion. 10. A non-transitory computer-readable storage medium containing instructions, that when executed, control a computer system to be operable for: receiving a request for a video stream of video content from a client; retrieving a playlist for the video stream, wherein the playlist is for a plurality of portions of video content for the video stream; adjusting, at a traffic shaping service, the playlist for a set of portions in the plurality of portions according to a set of rules, wherein adjusting allows the service to perform traffic shaping for the set of portions; sending the adjusted playlist to the client; during playback of the video content at the client, receiving, at the traffic shaping service, a request for a portion in the set of portions from the client using the adjusted playlist; determining a rule to apply to the portion, wherein the rule is associated with a network condition; and simulating the network condition for the portion to perform the traffic shaping service. 11. The computer-readable storage medium of claim 10, wherein adjusting comprises: determining portions associated with the set of rules; and for each portion associated with the set of rules, adjusting the playlist such that the client requests the portion from the service instead of a content delivery network, the content delivery network providing portions of the video content that are not associated with the set of rules. 12. The computer-readable storage medium of claim 11, wherein adjusting the playlist comprises changing a link for each portion associated with the set of rules such that the request for each portion is sent to the traffic shaping service instead of the content delivery network. 13. 
The computer-readable storage medium of claim 10, wherein the request for the video stream identifies the set of rules, further operable for: parsing the set of rules to determine a set of parameters for the set of rules; determining portions that are applicable to the set of rules based on the set of parameters; and performing the adjusting of the playlist for the determined portions based on the set of parameters. 14. The computer-readable storage medium of claim 10, wherein retrieving comprises requesting the playlist for the video stream from a server. 15. The computer-readable storage medium of claim 10, further operable for: requesting the set of portions from a content delivery network; receiving the set of portions from the content delivery network; sending the set of portions at the traffic shaping service, wherein the set of portions are sent from the traffic shaping service to the client and portions not in the set of portions are sent to the client from the content delivery network. 16. The computer-readable storage medium of claim 15, wherein portions not in the set of portions that are sent to the client from the content delivery network source are not traffic shaped. 17. The computer-readable storage medium of claim 10, wherein simulating the network condition comprises sending a traffic shaped portion to the client in a manner that simulates the network condition. 18. The computer-readable storage medium of claim 10, wherein simulating the network condition comprises sending an error code instead of the portion. 19. 
An apparatus comprising: one or more computer processors; and a computer-readable storage medium comprising instructions, that when executed, control the one or more computer processors to be operable for: receiving a request for a video stream of video content from a client; retrieving a playlist for the video stream, wherein the playlist is for a plurality of portions of video content for the video stream; adjusting, at a traffic shaping service, the playlist for a set of portions in the plurality of portions according to a set of rules, wherein adjusting allows the service to perform traffic shaping for the set of portions; sending the adjusted playlist to the client; during playback of the video content at the client, receiving, at the traffic shaping service, a request for a portion in the set of portions from the client using the adjusted playlist; determining a rule to apply to the portion, wherein the rule is associated with a network condition; and simulating the network condition for the portion to perform the traffic shaping service. 20. The apparatus of claim 19, wherein adjusting comprises: determining portions associated with the set of rules; and for each portion associated with the set of rules, adjusting the playlist such that the client requests the portion from the service instead of a content delivery network, the content delivery network providing portions of the video content that are not associated with the set of rules.
2,400
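The playlist-adjustment step recited in the claims above can be sketched in code. This is a minimal illustration only, not the patented implementation: the HLS-style playlist format, the host names, and the rule (a set of shaped segment indices) are all assumptions.

```python
# Sketch: rewrite links for portions covered by the rules so the client
# requests them from the traffic shaping service; all other portions still
# go to the content delivery network. Host names are hypothetical.

CDN_HOST = "https://cdn.example.com"        # assumed CDN base URL
SHAPER_HOST = "https://shaper.example.com"  # assumed traffic shaping service

def adjust_playlist(playlist_lines, shaped_indices):
    """Return a copy of the playlist with shaped segments re-linked."""
    adjusted = []
    seg_idx = 0
    for line in playlist_lines:
        if line.startswith("#"):            # playlist tags pass through
            adjusted.append(line)
            continue
        if seg_idx in shaped_indices:       # portion matched by a rule
            line = line.replace(CDN_HOST, SHAPER_HOST)
        adjusted.append(line)
        seg_idx += 1
    return adjusted

playlist = [
    "#EXTM3U",
    "#EXTINF:4.0,",
    "https://cdn.example.com/video/seg0.ts",
    "#EXTINF:4.0,",
    "https://cdn.example.com/video/seg1.ts",
]
out = adjust_playlist(playlist, {1})  # only the second segment is shaped
```

After adjustment, the client's request for `seg1.ts` is sent to the shaping service, which can then delay the response or return an error code to simulate the rule's network condition, while `seg0.ts` is still fetched, unshaped, from the CDN.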
7,464
7,464
14,959,677
2,492
The present disclosure relates to systems and methods of protecting intellectual property and, more particularly to systems and methods of disabling electronic or software components. The system includes: a measurement component provided with an enclosure, which measures environmental conditions within the enclosure; and a countermeasure component provided with the enclosure, which renders inoperative at least one electronic or software component within the enclosure when the measurement component detects that the conditions within the enclosure have changed beyond a predetermined amount.
1. A system, comprising: a measurement component provided with an enclosure, which measures environmental conditions within the enclosure; and a countermeasure component provided with the enclosure, which renders inoperative at least one electronic or software component within the enclosure when the measurement component detects that the environmental conditions within the enclosure have changed beyond a predetermined amount. 2. The system of claim 1, wherein the environmental conditions measured include a controlled amount of substance within the enclosure. 3. The system of claim 1, wherein the measurement component detects one of air pressure, gas, an odor and vacuum leak within the enclosure. 4. The system of claim 3, wherein the measurement component comprises at least one of a pressure gauge, moisture detector and odor detector. 5. The system of claim 1, wherein the countermeasure component is at least one of an electrostatic discharge device (ESD), electromagnetic pulse device (EMP), a chemical device, a counter-intelligence device, an explosive device or software code to disable the electronic or software component. 6. The system of claim 1, wherein the enclosure is a hermetically sealed enclosure. 7. The system of claim 6, wherein the component, the at least one measurement component and the countermeasure component are a single component mounted on a board and provided within the hermetically sealed enclosure formed by a cover which is hermetically sealed to the board. 8. The system of claim 7, wherein the cover includes a temporary injection hole or valve which is plugged or sealed, respectively, after injection of the substance. 9. The system of claim 1, further comprising an additional component which provides the environmental conditions within the enclosure. 10. The system of claim 1, wherein the measurement component further comprises a monitoring function. 11. 
A system, comprising: an enclosure comprising a board and a cover hermetically sealed to the board; a first component provided with the enclosure and mounted to the board; a second component provided with the enclosure and mounted to the board, the second component detects an environmental condition within the enclosure; and a third component provided with the enclosure and mounted to the board, the third component disables or destroys the first component upon the second component detecting that the environmental condition within the enclosure has changed. 12. The system of claim 11, wherein the first component is at least one of a software component and a hardware component and the second component is a monitoring and measurement component which monitors the environmental condition within the enclosure and measures when the environmental condition changes a predetermined amount. 13. The system of claim 12, wherein the environmental condition is one of air pressure, a gas, an odor and a vacuum condition. 14. The system of claim 11, wherein the environmental condition is invisible. 15. The system of claim 11, wherein the third component is a countermeasure component which provides at least one of an explosion, application of high currents, disabling software code, electromagnetic pulse, or a counter-intelligence input to render useless the first component. 16. The system of claim 11, further comprising a fourth component which provides the environmental condition within the enclosure. 17. A method comprising: monitoring an environmental condition within an enclosure; detecting that the environmental condition has changed; and disabling or altering an electronic or software component, upon the detecting of the change of the environmental condition. 18. The method of claim 17, wherein the environmental condition is one of air pressure, presence of a gas, an odor and a vacuum. 19. 
The method of claim 17, further comprising introducing the environmental condition within the enclosure. 20. The method of claim 19, wherein the introducing of the environmental condition occurs after the enclosure is hermetically sealed.
The present disclosure relates to systems and methods of protecting intellectual property and, more particularly to systems and methods of disabling electronic or software components. The system includes: a measurement component provided with an enclosure, which measures environmental conditions within the enclosure; and a countermeasure component provided with the enclosure, which renders inoperative at least one electronic or software component within the enclosure when the measurement component detects that the conditions within the enclosure have changed beyond a predetermined amount.1. A system, comprising: a measurement component provided with an enclosure, which measures environmental conditions within the enclosure; and a countermeasure component provided with the enclosure, which renders inoperative at least one electronic or software component within the enclosure when the measurement component detects that the environmental conditions within the enclosure have changed beyond a predetermined amount. 2. The system of claim 1, wherein the environmental conditions measured include a controlled amount of substance within the enclosure. 3. The system of claim 1, wherein the measurement component detects one of air pressure, gas, an odor and vacuum leak within the enclosure. 4. The system of claim 3, wherein the measurement component comprises at least one of a pressure gauge, moisture detector and odor detector. 5. The system of claim 1, wherein the countermeasure component is at least one of an electrostatic discharge device (ESD), electromagnetic pulse device (EMP), a chemical device, a counter-intelligence device, an explosive device or software code to disable the electronic or software component. 6. The system of claim 1, wherein the enclosure is a hermetically sealed enclosure. 7. 
The system of claim 6, wherein the component, the at least one measurement component and the countermeasure component are a single component mounted on a board and provided within the hermetically sealed enclosure formed by a cover which is hermetically sealed to the board. 8. The system of claim 7, wherein the cover includes a temporary injection hole or valve which is plugged or sealed, respectively, after injection of the substance. 9. The system of claim 1, further comprising an additional component which provides the environmental conditions within the enclosure. 10. The system of claim 1, wherein the measurement component further comprises a monitoring function. 11. A system, comprising: an enclosure comprising a board and a cover hermetically sealed to the board; a first component provided with the enclosure and mounted to the board; a second component provided with the enclosure and mounted to the board, the second component detects an environmental condition within the enclosure; and a third component provided with the enclosure and mounted to the board, the third component disables or destroys the first component upon the second component detecting that the environmental condition within the enclosure has changed. 12. The system of claim 11, wherein the first component is at least one of a software component and a hardware component and the second component is a monitoring and measurement component which monitors the environmental condition within the enclosure and measures when the environmental condition changes a predetermined amount. 13. The system of claim 12, wherein the environmental condition is one of air pressure, a gas, an odor and a vacuum condition. 14. The system of claim 11, wherein the environmental condition is invisible. 15. 
The system of claim 11, wherein the third component is a countermeasure component which provides at least one of an explosion, application of high currents, disabling software code, electromagnetic pulse, or a counter-intelligence input to render useless the first component. 16. The system of claim 11, further comprising a fourth component which provides the environmental condition within the enclosure. 17. A method comprising: monitoring an environmental condition within an enclosure; detecting that the environmental condition has changed; and disabling or altering an electronic or software component, upon the detecting of the change of the environmental condition. 18. The method of claim 17, wherein the environmental condition is one of air pressure, presence of a gas, an odor and a vacuum. 19. The method of claim 17, further comprising introducing the environmental condition within the enclosure. 20. The method of claim 19, wherein the introducing of the environmental condition occurs after the enclosure is hermetically sealed.
2,400
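The monitor/countermeasure logic of claim 1 can be sketched in software terms: a measurement that drifts beyond a predetermined amount from its baseline triggers the countermeasure. The pressure units, thresholds, and the software stand-in for an ESD/EMP/explosive countermeasure are illustrative assumptions, not the disclosed hardware.

```python
# Minimal sketch of the measurement-plus-countermeasure loop from claim 1.
# Baseline and drift values are hypothetical.

class EnclosureMonitor:
    def __init__(self, baseline_pressure_kpa, max_drift_kpa):
        self.baseline = baseline_pressure_kpa   # condition set at sealing time
        self.max_drift = max_drift_kpa          # the "predetermined amount"
        self.component_disabled = False

    def check(self, measured_kpa):
        """Fire the countermeasure if the measured condition has changed
        beyond the predetermined amount (e.g. a leak after tampering)."""
        if abs(measured_kpa - self.baseline) > self.max_drift:
            self.trigger_countermeasure()
        return self.component_disabled

    def trigger_countermeasure(self):
        # Stand-in for rendering the protected component inoperative.
        self.component_disabled = True

monitor = EnclosureMonitor(baseline_pressure_kpa=120.0, max_drift_kpa=5.0)
ok = monitor.check(121.0)       # within tolerance, component untouched
tripped = monitor.check(110.0)  # leak-sized drift, countermeasure fires
```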
7,465
7,465
12,341,948
2,468
A system and method are disclosed that allow for uplink resource reuse. A user device is provided an uplink resource for a first data type. If the user device does not have enough data of the first data type to fill the granted uplink resource, the user device fills the granted uplink resource with a second data type.
1. A method for uplink resource reuse comprising: receiving an uplink resource allocation for a first data type session; determining that an amount of data to be sent for the first data type session is less than the uplink resource allocation; and assigning an unused portion of the uplink resource allocation to a second data type session. 2. The method of claim 1, wherein receiving an uplink resource allocation for a first data type session comprises receiving an uplink resource allocation for a voice over internet protocol session. 3. The method of claim 1, wherein the uplink resource allocation is a semi-persistent resource allocation. 4. The method of claim 1, wherein assigning the unused portion of the uplink resource allocation to a second data type session further comprises assigning the unused portion of the uplink resource allocation to one of the following services: email; and HTTP. 5. A user device comprising: a processor configured to receive an uplink resource allocation for a first data type session; the processor further configured to determine that an amount of data to be sent for the first data type session is less than the uplink resource allocation; and the processor further configured to assign an unused portion of the uplink resource allocation to a second data type session. 6. The device of claim 5, wherein the processor is configured to receive an uplink resource allocation for a voice over internet protocol session. 7. The device of claim 5, wherein the processor is configured to receive a semi-persistent resource allocation. 8. The device of claim 5, wherein the processor is configured to assign the unused portion of the uplink resource allocation to one of the following services: email; and HTTP.
A system and method are disclosed that allow for uplink resource reuse. A user device is provided an uplink resource for a first data type. If the user device does not have enough data of the first data type to fill the granted uplink resource, the user device fills the granted uplink resource with a second data type.1. A method for uplink resource reuse comprising: receiving an uplink resource allocation for a first data type session; determining that an amount of data to be sent for the first data type session is less than the uplink resource allocation; and assigning an unused portion of the uplink resource allocation to a second data type session. 2. The method of claim 1, wherein receiving an uplink resource allocation for a first data type session comprises receiving an uplink resource allocation for a voice over internet protocol session. 3. The method of claim 1, wherein the uplink resource allocation is a semi-persistent resource allocation. 4. The method of claim 1, wherein assigning the unused portion of the uplink resource allocation to a second data type session further comprises assigning the unused portion of the uplink resource allocation to one of the following services: email; and HTTP. 5. A user device comprising: a processor configured to receive an uplink resource allocation for a first data type session; the processor further configured to determine that an amount of data to be sent for the first data type session is less than the uplink resource allocation; and the processor further configured to assign an unused portion of the uplink resource allocation to a second data type session. 6. The device of claim 5, wherein the processor is configured to receive an uplink resource allocation for a voice over internet protocol session. 7. The device of claim 5, wherein the processor is configured to receive a semi-persistent resource allocation. 8. 
The device of claim 5, wherein the processor is configured to assign the unused portion of the uplink resource allocation to one of the following services: email; and HTTP.
2,400
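The grant-filling behavior claimed above (first data type first, then the unused portion reused for a second data type) can be sketched as a packing loop. The byte-level packet model and queue names are assumptions for illustration; a real user device would operate on MAC-layer PDUs.

```python
def fill_uplink_grant(grant_bytes, voip_queue, best_effort_queue):
    """Pack a granted uplink allocation: first-data-type (e.g. VoIP) packets
    go in first; any unused portion is assigned to second-data-type traffic
    (e.g. email or HTTP), per claims 1 and 4."""
    payload, remaining = [], grant_bytes
    for queue in (voip_queue, best_effort_queue):
        # Take packets in order while they still fit in the grant.
        while queue and len(queue[0]) <= remaining:
            pkt = queue.pop(0)
            payload.append(pkt)
            remaining -= len(pkt)
    return payload, remaining

voip = [b"v" * 40]                    # 40 bytes of VoIP data this interval
best_effort = [b"e" * 50, b"h" * 30]  # email/HTTP packets awaiting uplink
sent, unused = fill_uplink_grant(100, voip, best_effort)
# The 40-byte VoIP packet goes first; the 50-byte email packet reuses part
# of the otherwise-wasted allocation; the 30-byte packet waits for the next grant.
```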
7,466
7,466
14,811,797
2,432
One example method includes configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; and initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application.
1. A computer-implemented method executed by one or more processors for analyzing software application behavior within a virtual machine environment, the method comprising: configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application. 2. The method of claim 1, wherein the analysis action includes initiating a capture of a video signal from the virtual machine environment. 3. The method of claim 1, wherein the analysis action includes initiating a trace on actions performed by the software application within the virtual machine environment. 4. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes placing sensitive data in an unsecure location within the virtual machine environment, and the malicious actions include accessing the sensitive data. 5. The method of claim 4, wherein the unsecure location is a location particular to a type of data associated with the sensitive data. 6. The method of claim 4, wherein the sensitive data includes at least one of passwords, credit card numbers, account information, or social security numbers. 7. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a default password. 8. 
The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a particular security feature disabled. 9. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment without a specific security update installed. 10. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a particular version of a particular software component. 11. A system comprising: one or more processors configured to execute computer program instructions; and computer storage media encoded with computer program instructions that, when executed by one or more processors, cause a computer device to perform operations comprising: configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application. 12. The system of claim 11, wherein the analysis action includes initiating a capture of a video signal from the virtual machine environment. 13. The system of claim 11, wherein the analysis action includes initiating a trace on actions performed by the software application within the virtual machine environment. 14. 
The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes placing sensitive data in an unsecure location within the virtual machine environment, and the malicious actions include accessing the sensitive data. 15. The system of claim 14, wherein the unsecure location is a location particular to a type of data associated with the sensitive data. 16. The system of claim 14, wherein the sensitive data includes at least one of passwords, credit card numbers, account information, or social security numbers. 17. The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a default password. 18. The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a particular security feature disabled. 19. The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment without a specific security update installed. 20. 
A computer storage media encoded with computer program instructions that, when executed by one or more processors, cause a computer device to perform operations comprising: configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application.
One example method includes configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; and initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application.1. A computer-implemented method executed by one or more processors for analyzing software application behavior within a virtual machine environment, the method comprising: configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application. 2. The method of claim 1, wherein the analysis action includes initiating a capture of a video signal from the virtual machine environment. 3. The method of claim 1, wherein the analysis action includes initiating a trace on actions performed by the software application within the virtual machine environment. 4. 
The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes placing sensitive data in an unsecure location within the virtual machine environment, and the malicious actions include accessing the sensitive data. 5. The method of claim 4, wherein the unsecure location is a location particular to a type of data associated with the sensitive data. 6. The method of claim 4, wherein the sensitive data includes at least one of passwords, credit card numbers, account information, or social security numbers. 7. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a default password. 8. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a particular security feature disabled. 9. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment without a specific security update installed. 10. The method of claim 1, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a particular version of a particular software component. 11. 
A system comprising: one or more processors configured to execute computer program instructions; and computer storage media encoded with computer program instructions that, when executed by one or more processors, cause a computer device to perform operations comprising: configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application. 12. The system of claim 11, wherein the analysis action includes initiating a capture of a video signal from the virtual machine environment. 13. The system of claim 11, wherein the analysis action includes initiating a trace on actions performed by the software application within the virtual machine environment. 14. The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes placing sensitive data in an unsecure location within the virtual machine environment, and the malicious actions include accessing the sensitive data. 15. The system of claim 14, wherein the unsecure location is a location particular to a type of data associated with the sensitive data. 16. The system of claim 14, wherein the sensitive data includes at least one of passwords, credit card numbers, account information, or social security numbers. 17. 
The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a default password. 18. The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment with a particular security feature disabled. 19. The system of claim 11, wherein configuring the virtual machine environment to introduce the one or more security issues includes configuring the virtual machine environment without a specific security update installed. 20. A computer storage media encoded with computer program instructions that, when executed by one or more processors, cause a computer device to perform operations comprising: configuring the virtual machine environment to introduce one or more security issues within the virtual machine environment, wherein each security issue elicits a particular malicious application to perform malicious actions when introduced during execution of the particular malicious application; executing a software application within the virtual machine environment; detecting at least one of the malicious actions being performed by the software application during execution within the virtual machine environment; initiating an analysis action in response to detecting at least one of the malicious actions being performed by the software application
2,400
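The claims in the record above describe configuring a virtual machine environment with deliberate security issues (for example, decoy sensitive data in an unsecure location), running an application inside it, and initiating an analysis action when the application performs a malicious action such as accessing the decoy. A minimal sketch of that detect-and-respond loop follows; every name here (`HoneypotEnvironment`, `plant_decoy`, the file path) is a hypothetical illustration, not part of the application, and no real VM is involved:

```python
# Hypothetical sketch of the claimed honeypot pattern: plant decoy
# "sensitive data" in an unsecure location, run an application, and
# initiate an analysis action if the decoy is accessed.
class HoneypotEnvironment:
    def __init__(self):
        self.decoys = {}    # location -> decoy payload
        self.accessed = []  # decoy locations touched by the app

    def plant_decoy(self, location, payload):
        """Introduce a security issue: sensitive-looking data in plain sight."""
        self.decoys[location] = payload

    def read(self, location):
        """A file access as observed from inside the environment."""
        if location in self.decoys:
            self.accessed.append(location)  # malicious action detected
        return self.decoys.get(location)

    def run(self, application):
        """Execute the application, then decide whether to start analysis."""
        application(self)
        if self.accessed:
            return self.initiate_analysis()
        return None

    def initiate_analysis(self):
        # Placeholder for e.g. starting a video capture or an action trace.
        return {"action": "trace", "triggered_by": list(self.accessed)}


def suspicious_app(env):
    env.read("/home/user/passwords.txt")  # touches the decoy


env = HoneypotEnvironment()
env.plant_decoy("/home/user/passwords.txt", "hunter2")
result = env.run(suspicious_app)
```

A benign application that never reads a decoy location leaves `accessed` empty, so `run` returns `None` and no analysis action is initiated.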
7,467
7,467
14,088,356
2,454
A system has multiple service endpoints running on a plurality of devices, wherein each particular service endpoint consumes control resources specifying a configuration for the particular service endpoint. A method includes evaluating a configuration object using first external resource inputs to produce an evaluated configuration object; generating a template, the generating using the evaluated configuration object and second external resource inputs; rendering the template with a set of actual parameter values to produce a localized control resource, the rendering using third external resource inputs; and providing the localized control resource to at least one service endpoint in the system.
1. A computer-implemented method, operable in a system comprising multiple service endpoints, said service endpoints running on a plurality of devices, wherein each particular service endpoint consumes control resources specifying a configuration for said particular service endpoint, the method operable on one or more devices comprising hardware including memory and at least one processor, the method comprising: (A) evaluating a configuration object using first external resource inputs to produce an evaluated configuration object; (B) generating a template, said generating using said evaluated configuration object and second external resource inputs; (C) rendering said template with a set of actual parameter values to produce a localized control resource, said rendering using third external resource inputs; and (D) providing said localized control resource to at least one service endpoint in said system. 2. The method of claim 1 wherein said first external resource inputs comprise global external resource inputs and said third external resource inputs comprise local external resource inputs. 3. The method of claim 2 wherein at least some of said first external resource inputs comprise global external resource inputs and said second external resource inputs and said third external resource inputs comprise: information from a collector system, said information comprising information about at least some of said multiple service endpoints in said network. 4. The method of claim 3 wherein said information about at least some of said multiple service endpoints was determined, at least in part, by said collector system based on information obtained by said collector system during operation of said at least some of said multiple service endpoints. 5. 
The method of claim 4 wherein the information obtained by said collector system during operation of said at least some of said multiple service endpoints comprises information about ongoing operation of said at least some of said multiple service endpoints. 6. The method of claim 1 wherein said first external resource inputs comprise: information from a collector system, said information comprising information about at least some of said multiple service endpoints in said network. 7. The method of claim 1 wherein said evaluating in (A) is performed by a configuration service. 8. The method of claim 1 wherein said generating in (B) is performed by a configuration service. 9. The method of claim 6 wherein said rendering in (C) is performed by a control service. 10. The method of claim 1 wherein said providing in (D) distributes said localized control resource via a caching network. 11. The method of claim 1 wherein said configuration object is a global configuration object (GCO). 12. The method of claim 1 wherein said configuration object is layer configuration objects (LCO). 13. The method of claim 1 wherein said providing in (D) comprises: providing said localized control resource in response to a request from a particular service endpoint. 14. The method of claim 1 further comprising: (E) by a particular service endpoint, obtaining said localized control resource; and (F) operating in accordance with configuration information in said localized control resource. 15. The method of claim 1 wherein said localized control resource is subject to invalidation. 16. The method of claim 1 wherein said rendering of said template comprises: rendering said template with said set of actual parameters in (C) produces said localized control resource as a ground control resource directly consumable by a target service. 17. The method of claim 1 wherein said configuration object was determined based on a user input. 18. 
The method of claim 1 wherein said generating in (B) further comprises: generating a localizable parameter set representing a family of control resources. 19. The method of claim 1 wherein said at least one service endpoint in (D) is a content delivery (CD) service selected from: collector services, reducer services, control services, configuration services, delivery services, rendezvous services, caching services, and streaming services. 20. The method of claim 1 further comprising: (E) invalidating said localized control resource. 21. A computer-implemented method, operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, wherein each service endpoint consumes control resources specifying local configuration for said service endpoint, the method operable on a device comprising hardware including memory and at least one processor, the method comprising, by a particular service endpoint: (A) obtaining a localized control resource; and (B) operating in accordance with information in said localized control resource, wherein said localized control resource was formed by rendering a template with a set of actual parameter values, wherein said rendering used local external resource inputs. 22. The method of claim 21 wherein said template was formed from an evaluated configuration object and first external resource inputs, said evaluated configuration object having been generated from a configuration object determined based on a user input. 23. The method of claim 21 wherein said particular service endpoint obtains said localized control resource from a caching network. 24. The method of claim 21 wherein said particular service endpoint obtains said localized control resource from a control service. 25. The method of claim 24 wherein said particular service endpoint obtains said localized control resource from said control service via a caching network. 26. 
The method of claim 21 wherein said particular service endpoint is a content delivery (CD) service selected from: collector services, reducer services, control services, configuration services, delivery services, rendezvous services, caching services, and streaming services. 27. The method of claim 21 further comprising: (C) invalidating said localized control resource. 28. The method of claim 27 further comprising: in response to said invalidating in (C): (D) obtaining a second localized control resource; and (E) operating in accordance with information in said second localized control resource. 29. A system, operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, the system comprising: (a) hardware including memory and at least one processor, and (b) one or more services running on said hardware, wherein said one or more services are configured to: (A) evaluate a configuration object using first external resource inputs to produce an evaluated configuration object; (B) generate a template, said generating using said evaluated configuration object and second external resource inputs; (C) render said template with a set of actual parameter values to produce a localized control resource, said rendering using third external resource inputs; and (D) provide said localized control resource to at least one service endpoint in said system. 30. 
A computer program product having computer readable instructions stored on non-transitory computer readable media, the computer readable instructions including instructions for implementing a computer-implemented method, said method operable on one or more devices comprising hardware including memory and at least one processor and running one or more services on said hardware, said method operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, and said method comprising: (A) evaluating a configuration object using first external resource inputs to produce an evaluated configuration object; (B) generating a template, said generating using said evaluated configuration object and second external resource inputs; (C) rendering said template with a set of actual parameter values to produce a localized control resource, said rendering using third external resource inputs; and (D) providing said localized control resource to at least one service endpoint in said system.
A system has multiple service endpoints running on a plurality of devices, wherein each particular service endpoint consumes control resources specifying a configuration for the particular service endpoint. A method includes evaluating a configuration object using first external resource inputs to produce an evaluated configuration object; generating a template, the generating using the evaluated configuration object and second external resource inputs; rendering the template with a set of actual parameter values to produce a localized control resource, the rendering using third external resource inputs; and providing the localized control resource to at least one service endpoint in the system.1. A computer-implemented method, operable in a system comprising multiple service endpoints, said service endpoints running on a plurality of devices, wherein each particular service endpoint consumes control resources specifying a configuration for said particular service endpoint, the method operable on one or more devices comprising hardware including memory and at least one processor, the method comprising: (A) evaluating a configuration object using first external resource inputs to produce an evaluated configuration object; (B) generating a template, said generating using said evaluated configuration object and second external resource inputs; (C) rendering said template with a set of actual parameter values to produce a localized control resource, said rendering using third external resource inputs; and (D) providing said localized control resource to at least one service endpoint in said system. 2. The method of claim 1 wherein said first external resource inputs comprise global external resource inputs and said third external resource inputs comprise local external resource inputs. 3. 
The method of claim 2 wherein at least some of said first external resource inputs comprise global external resource inputs and said second external resource inputs and said third external resource inputs comprise: information from a collector system, said information comprising information about at least some of said multiple service endpoints in said network. 4. The method of claim 3 wherein said information about at least some of said multiple service endpoints was determined, at least in part, by said collector system based on information obtained by said collector system during operation of said at least some of said multiple service endpoints. 5. The method of claim 4 wherein the information obtained by said collector system during operation of said at least some of said multiple service endpoints comprises information about ongoing operation of said at least some of said multiple service endpoints. 6. The method of claim 1 wherein said first external resource inputs comprise: information from a collector system, said information comprising information about at least some of said multiple service endpoints in said network. 7. The method of claim 1 wherein said evaluating in (A) is performed by a configuration service. 8. The method of claim 1 wherein said generating in (B) is performed by a configuration service. 9. The method of claim 6 wherein said rendering in (C) is performed by a control service. 10. The method of claim 1 wherein said providing in (D) distributes said localized control resource via a caching network. 11. The method of claim 1 wherein said configuration object is a global configuration object (GCO). 12. The method of claim 1 wherein said configuration object is layer configuration objects (LCO). 13. The method of claim 1 wherein said providing in (D) comprises: providing said localized control resource in response to a request from a particular service endpoint. 14. 
The method of claim 1 further comprising: (E) by a particular service endpoint, obtaining said localized control resource; and (F) operating in accordance with configuration information in said localized control resource. 15. The method of claim 1 wherein said localized control resource is subject to invalidation. 16. The method of claim 1 wherein said rendering of said template comprises: rendering said template with said set of actual parameters in (C) produces said localized control resource as a ground control resource directly consumable by a target service. 17. The method of claim 1 wherein said configuration object was determined based on a user input. 18. The method of claim 1 wherein said generating in (B) further comprises: generating a localizable parameter set representing a family of control resources. 19. The method of claim 1 wherein said at least one service endpoint in (D) is a content delivery (CD) service selected from: collector services, reducer services, control services, configuration services, delivery services, rendezvous services, caching services, and streaming services. 20. The method of claim 1 further comprising: (E) invalidating said localized control resource. 21. A computer-implemented method, operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, wherein each service endpoint consumes control resources specifying local configuration for said service endpoint, the method operable on a device comprising hardware including memory and at least one processor, the method comprising, by a particular service endpoint: (A) obtaining a localized control resource; and (B) operating in accordance with information in said localized control resource, wherein said localized control resource was formed by rendering a template with a set of actual parameter values, wherein said rendering used local external resource inputs. 22. 
The method of claim 21 wherein said template was formed from an evaluated configuration object and first external resource inputs, said evaluated configuration object having been generated from a configuration object determined based on a user input. 23. The method of claim 21 wherein said particular service endpoint obtains said localized control resource from a caching network. 24. The method of claim 21 wherein said particular service endpoint obtains said localized control resource from a control service. 25. The method of claim 24 wherein said particular service endpoint obtains said localized control resource from said control service via a caching network. 26. The method of claim 21 wherein said particular service endpoint is a content delivery (CD) service selected from: collector services, reducer services, control services, configuration services, delivery services, rendezvous services, caching services, and streaming services. 27. The method of claim 21 further comprising: (C) invalidating said localized control resource. 28. The method of claim 27 further comprising: in response to said invalidating in (C): (D) obtaining a second localized control resource; and (E) operating in accordance with information in said second localized control resource. 29. 
A system, operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, the system comprising: (a) hardware including memory and at least one processor, and (b) one or more services running on said hardware, wherein said one or more services are configured to: (A) evaluate a configuration object using first external resource inputs to produce an evaluated configuration object; (B) generate a template, said generating using said evaluated configuration object and second external resource inputs; (C) render said template with a set of actual parameter values to produce a localized control resource, said rendering using third external resource inputs; and (D) provide said localized control resource to at least one service endpoint in said system. 30. A computer program product having computer readable instructions stored on non-transitory computer readable media, the computer readable instructions including instructions for implementing a computer-implemented method, said method operable on one or more devices comprising hardware including memory and at least one processor and running one or more services on said hardware, said method operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, and said method comprising: (A) evaluating a configuration object using first external resource inputs to produce an evaluated configuration object; (B) generating a template, said generating using said evaluated configuration object and second external resource inputs; (C) rendering said template with a set of actual parameter values to produce a localized control resource, said rendering using third external resource inputs; and (D) providing said localized control resource to at least one service endpoint in said system.
2,400
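The record above claims a four-step pipeline: (A) evaluate a configuration object using first (global) external resource inputs, (B) generate a template from the evaluated object and second external resource inputs, (C) render the template with actual parameter values using third (local) inputs to produce a localized control resource, and (D) provide that resource to a service endpoint. The sketch below mirrors those steps with Python's `string.Template`; all function names and the toy input values are illustrative assumptions, not taken from the application:

```python
from string import Template

# Hypothetical sketch of the claimed (A)-(D) pipeline.

def evaluate(config_object, global_inputs):           # step (A)
    """Resolve references in the configuration object against global inputs."""
    return {k: global_inputs.get(v, v) for k, v in config_object.items()}

def generate_template(evaluated, collector_inputs):   # step (B)
    """Produce a parameterized template (a family of control resources)."""
    origin = collector_inputs["origin"]
    return Template(f"service=${{name}} origin={origin} cache_ttl={evaluated['ttl']}")

def render(template, actual_params, local_inputs):    # step (C)
    """Localize: bind actual parameter values for one service endpoint."""
    return template.substitute(name=local_inputs["endpoint"], **actual_params)

def provide(resource, endpoints):                     # step (D)
    """Deliver the localized control resource, e.g. via a caching network."""
    return {ep: resource for ep in endpoints}

evaluated = evaluate({"ttl": "$default_ttl"}, {"$default_ttl": 300})
template = generate_template(evaluated, {"origin": "origin.example.com"})
localized = render(template, {}, {"endpoint": "cache-eu-1"})
delivered = provide(localized, ["cache-eu-1"])
```

The separation matters for the claims: the same template from step (B) can be rendered many times in step (C) with different local inputs, yielding a distinct localized control resource per endpoint.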
7,468
7,468
14,116,520
2,413
Interference management in a wireless network is disclosed in this document. Two networks with overlapping frequency bands and coverage areas are configured to cooperate in order to mitigate mutual interference. Some embodiments relate to medium reservation in a first network to protect a transmission in a second network, while other embodiments relate to aligning communication parameters between the networks.
1-31. (canceled) 32. A method, comprising: detecting, in a first access point of a first network of a wireless local area network, presence of a second network on a frequency band overlapping with a frequency band of the first network; configuring the first access point to utilize a distribution system signalling interface to negotiate about transmission medium reservation for a transmission in the first network or the second network so as to avoid transmission collisions between the first network and the second network; and configuring the first access point to utilize a second signalling interface, different from the distribution system signalling interface, to negotiate within the first network. 33. The method of claim 32, further comprising encapsulating a message related to the negotiation about the transmission medium reservation and transmitted over said distribution system signalling interface into a frame format used for handover messages transmitted over the distribution system signalling interface. 34. The method of claim 32, further comprising triggering said negotiation about the transmission medium reservation upon detection that the second access point utilizes a transmit power higher than a transmit power of the first access point. 35. The method of claim 32, the negotiation about the transmission medium reservation comprising: causing the first access point to request, through the distribution system signalling interface, the second network to be switched to a non-overlapping frequency band; and receiving an acknowledgment for the request through the distribution system signalling interface. 36. 
The method of claim 32, the negotiation about the transmission medium reservation comprising: causing the first access point to request, through the distribution system signalling interface, that the second access point schedules medium reservation for the first network in the second network on the overlapping frequency band; receiving, through the distribution system signalling interface, a response message indicating transmission opportunity scheduled to the first network; and causing radio transmission in the first network according to the scheduled transmission opportunity. 37. A method, comprising: detecting, in a network apparatus, presence of a first network and a second network on overlapping frequency bands; detecting that the first network and the second network utilize different radio transmission bandwidths; in response to the detection of the different radio transmission bandwidths between the first network and the second network, causing at least a first access point of the first network to initiate a procedure to align the radio transmission bandwidths between the first network and the second network. 38. The method of claim 37, wherein the procedure to align the radio transmission bandwidths between the first network and the second network comprises: determining the bandwidth of the second network; and configuring the first network to utilize the same bandwidth as the second network. 39. The method of claim 37, wherein the procedure to align the radio transmission bandwidths between the first network and the second network comprises in the first access point: determining the bandwidth of the second network; and causing the first access point to transmit to a second access point of the second network a request to change the bandwidth of the second network to match with a bandwidth of the first network. 40. 
The method of claim 37, wherein the network apparatus is an interworking network apparatus configuring operation of the first network and the second network, and wherein the procedure to align the radio transmission bandwidths between the first network and the second network comprises in the interworking network apparatus: instructing at least one of the first access point and the second access point to change the bandwidth so as to align the bandwidths of the first network and the second network. 41. The method of claim 37, further comprising: configuring the first access point to operate according to IEEE 802.11 wireless local area network specifications. 42. The method of claim 37, further comprising configuring the first network to operate on a frequency band operable by a primary radio system and to avoid utilization of frequency bands currently used by the primary radio system in a geographical area of the first network. 43. An apparatus comprising: at least one processor; and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: detect presence of a second network on a frequency band overlapping with a frequency band of a first network of a wireless local area network; configure a first access point of the first network to utilize a distribution system signalling interface to negotiate about transmission medium reservation for a transmission in the first network or the second network so as to avoid transmission collisions between the first network and the second network; and configure the first access point to utilize a second signalling interface, different from the distribution system signalling interface, to negotiate within the first network. 44.
The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to encapsulate a message related to the negotiation about the transmission medium reservation and transmitted over said distribution system signalling interface into a frame format used for handover messages transmitted over the distribution system signalling interface. 45. The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to trigger said negotiation about the transmission medium reservation upon detection that the second access point utilizes a transmit power higher than a transmit power of the first access point. 46. The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to cause the first access point to request, through the distribution system signalling interface, the second network to be switched to a non-overlapping frequency band and to receive an acknowledgment for the request through the distribution system signalling interface. 47. The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: cause the first access point to request, through the distribution system signalling interface, that the second access point schedules medium reservation for the first network in the second network on the overlapping frequency band; receive, through the distribution system signalling interface, a response message indicating transmission opportunity scheduled to the first network; and cause radio transmission in the first network according to the scheduled transmission opportunity. 48. 
An apparatus, comprising: at least one processor; and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: detect presence of a first network and a second network on overlapping frequency bands; detect that the first network and the second network utilize different radio transmission bandwidths; in response to the detection of the different radio transmission bandwidths between the first network and the second network, cause at least a first access point of the first network to initiate a procedure to align the radio transmission bandwidths between the first network and the second network. 49. The apparatus of claim 48, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: determine the bandwidth of the second network; and configure the first network to utilize the same bandwidth as the second network. 50. The apparatus of claim 48, wherein the apparatus is applicable to the first access point, and wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: determine the bandwidth of the second network; and cause the first access point to transmit to a second access point of the second network a request to change the bandwidth of the second network to match with a bandwidth of the first network. 51.
The apparatus of claim 48, wherein the apparatus is applicable to an interworking network apparatus configuring operation of the first network and the second network, and wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to instruct at least one of the first access point and the second access point to change the bandwidth so as to align the bandwidths of the first network and the second network. 52. The apparatus of claim 48, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to operate according to IEEE 802.11 wireless local area network specifications. 53. The apparatus of claim 48, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to configure the first network to operate on a frequency band operable by a primary radio system and to avoid utilization of frequency bands currently used by the primary radio system in a geographical area of the first network.
Interference management in a wireless network is disclosed in this document. Two networks with overlapping frequency bands and coverage areas are configured to cooperate in order to mitigate mutual interference. Some embodiments relate to medium reservation in a first network to protect a transmission in a second network, while other embodiments relate to aligning communication parameters between the networks.1-31. (canceled) 32. A method, comprising: detecting, in a first access point of a first network of a wireless local area network, presence of a second network on a frequency band overlapping with a frequency band of the first network; configuring the first access point to utilize a distribution system signalling interface to negotiate about transmission medium reservation for a transmission in the first network or the second network so as to avoid transmission collisions between the first network and the second network; and configuring the first access point to utilize a second signalling interface, different from the distribution system signalling interface, to negotiate within the first network. 33. The method of claim 32, further comprising encapsulating a message related to the negotiation about the transmission medium reservation and transmitted over said distribution system signalling interface into a frame format used for handover messages transmitted over the distribution system signalling interface. 34. The method of claim 32, further comprising triggering said negotiation about the transmission medium reservation upon detection that the second access point utilizes a transmit power higher than a transmit power of the first access point. 35. 
The method of claim 32, the negotiation about the transmission medium reservation comprising: causing the first access point to request, through the distribution system signalling interface, the second network to be switched to a non-overlapping frequency band; and receiving an acknowledgment for the request through the distribution system signalling interface. 36. The method of claim 32, the negotiation about the transmission medium reservation comprising: causing the first access point to request, through the distribution system signalling interface, that the second access point schedules medium reservation for the first network in the second network on the overlapping frequency band; receiving, through the distribution system signalling interface, a response message indicating transmission opportunity scheduled to the first network; and causing radio transmission in the first network according to the scheduled transmission opportunity. 37. A method, comprising: detecting, in a network apparatus, presence of a first network and a second network on overlapping frequency bands; detecting that the first network and the second network utilize different radio transmission bandwidths; in response to the detection of the different radio transmission bandwidths between the first network and the second network, causing at least a first access point of the first network to initiate a procedure to align the radio transmission bandwidths between the first network and the second network. 38. The method of claim 37, wherein the procedure to align the radio transmission bandwidths between the first network and the second network comprises: determining the bandwidth of the second network; and configuring the first network to utilize the same bandwidth as the second network. 39. 
The method of claim 37, wherein the procedure to align the radio transmission bandwidths between the first network and the second network comprises in the first access point: determining the bandwidth of the second network; and causing the first access point to transmit to a second access point of the second network a request to change the bandwidth of the second network to match with a bandwidth of the first network. 40. The method of claim 37, wherein the network apparatus is an interworking network apparatus configuring operation of the first network and the second network, and wherein the procedure to align the radio transmission bandwidths between the first network and the second network comprises in the interworking network apparatus: instructing at least one of the first access point and the second access point to change the bandwidth so as to align the bandwidths of the first network and the second network. 41. The method of claim 37, further comprising: configuring the first access point to operate according to IEEE 802.11 wireless local area network specifications. 42. The method of claim 37, further comprising configuring the first network to operate on a frequency band operable by a primary radio system and to avoid utilization of frequency bands currently used by the primary radio system in a geographical area of the first network. 43. 
An apparatus comprising: at least one processor; and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: detect presence of a second network on a frequency band overlapping with a frequency band of a first network of a wireless local area network; configure a first access point of the first network to utilize a distribution system signalling interface to negotiate about transmission medium reservation for a transmission in the first network or the second network so as to avoid transmission collisions between the first network and the second network; and configure the first access point to utilize a second signalling interface, different from the distribution system signalling interface, to negotiate within the first network. 44. The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to encapsulate a message related to the negotiation about the transmission medium reservation and transmitted over said distribution system signalling interface into a frame format used for handover messages transmitted over the distribution system signalling interface. 45. The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to trigger said negotiation about the transmission medium reservation upon detection that the second access point utilizes a transmit power higher than a transmit power of the first access point. 46. 
The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to cause the first access point to request, through the distribution system signalling interface, the second network to be switched to a non-overlapping frequency band and to receive an acknowledgment for the request through the distribution system signalling interface. 47. The apparatus of claim 43, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: cause the first access point to request, through the distribution system signalling interface, that the second access point schedules medium reservation for the first network in the second network on the overlapping frequency band; receive, through the distribution system signalling interface, a response message indicating transmission opportunity scheduled to the first network; and cause radio transmission in the first network according to the scheduled transmission opportunity. 48. An apparatus, comprising: at least one processor; and at least one memory including a computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: detect presence of a first network and a second network on overlapping frequency bands; detect that the first network and the second network utilize different radio transmission bandwidths; in response to the detection of the different radio transmission bandwidths between the first network and the second network, cause at least a first access point of the first network to initiate a procedure to align the radio transmission bandwidths between the first network and the second network. 49. 
The apparatus of claim 48, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: determine the bandwidth of the second network; and configure the first network to utilize the same bandwidth as the second network. 50. The apparatus of claim 48, wherein the apparatus is applicable to the first access point, and wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to: determine the bandwidth of the second network; and cause the first access point to transmit to a second access point of the second network a request to change the bandwidth of the second network to match with a bandwidth of the first network. 51. The apparatus of claim 48, wherein the apparatus is applicable to an interworking network apparatus configuring operation of the first network and the second network, and wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to instruct at least one of the first access point and the second access point to change the bandwidth so as to align the bandwidths of the first network and the second network. 52. The apparatus of claim 48, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to operate according to IEEE 802.11 wireless local area network specifications. 53. The apparatus of claim 48, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to configure the first network to operate on a frequency band operable by a primary radio system and to avoid utilization of frequency bands currently used by the primary radio system in a geographical area of the first network.
2,400
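The claims in the row above (claims 32, 34, and 36) describe a first access point that detects an overlapping second network, triggers a negotiation when the second access point transmits at higher power, and requests a scheduled transmission opportunity over a distribution-system signalling interface. A minimal Python sketch of that decision flow follows; all class and method names are hypothetical illustrations, not taken from the filing:

```python
from dataclasses import dataclass


@dataclass
class TxOpportunity:
    """A transmission opportunity granted to the first network (claim 36)."""
    start_us: int
    duration_us: int


class AccessPoint:
    """Toy model of an access point negotiating medium reservation with a
    second, overlapping network over a distribution-system (DS) signalling
    interface, per the claim language. Names are illustrative only."""

    def __init__(self, name: str, band_mhz: int, tx_power_dbm: int):
        self.name = name
        self.band_mhz = band_mhz
        self.tx_power_dbm = tx_power_dbm

    def overlaps(self, other: "AccessPoint") -> bool:
        # Claim 32: detect presence of a second network on an overlapping band.
        return self.band_mhz == other.band_mhz

    def should_negotiate(self, other: "AccessPoint") -> bool:
        # Claim 34: trigger negotiation when the second access point uses a
        # transmit power higher than that of the first access point.
        return self.overlaps(other) and other.tx_power_dbm > self.tx_power_dbm

    def request_medium_reservation(self, other: "AccessPoint",
                                   duration_us: int) -> TxOpportunity:
        # Claim 36: request, through the DS signalling interface, that the
        # second access point schedules medium reservation for this network,
        # then receive the scheduled transmission opportunity in response.
        return other.schedule_reservation(duration_us)

    def schedule_reservation(self, duration_us: int) -> TxOpportunity:
        # Second access point grants a transmission opportunity on the
        # shared band (scheduling policy is elided in this sketch).
        return TxOpportunity(start_us=0, duration_us=duration_us)
```

Usage: with `ap1` at 17 dBm and `ap2` at 23 dBm on the same band, `ap1.should_negotiate(ap2)` is true and `ap1.request_medium_reservation(ap2, 4000)` yields the scheduled opportunity that claim 36 says drives the first network's radio transmission.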
7,469
7,469
14,161,347
2,449
Techniques are disclosed for notifying network control software of new and moved source MAC addresses. In one embodiment, a switch may redirect a packet sent by a new or migrated virtual machine to the network control software as a notification. The switch does not forward the packet, thereby protecting against denial of service attacks. The switch further adds to a forwarding database a temporary entry which includes a “No_Redirect” flag for a new source MAC address, or updates an existing entry for a source MAC address that hits in the forwarding database by setting the “No_Redirect” flag. The “No_Redirect” flag indicates whether a notification has already been sent to the network control software for this source MAC address. The switch may periodically retry the notification to the network control software, until the network control software validates the source MAC address, depending on whether the “No_Redirect” is set.
1.-8. (canceled) 9. One or more non-transitory computer-readable storage media storing instructions, which when executed by a client device and a server system, performs operations for invalidating static entries in a forwarding database of a switch, comprising: inserting, by network control software, a static entry into the forwarding database; and setting, by the network control software, an age bit for the static entry, wherein the age bit is reset by the switch when a hit on the static entry occurs, and wherein the static entry is invalidated by the network control software if the network control software determines that the age bit is not reset for at least a threshold period of time. 10. The non-transitory computer-readable storage media of claim 9, wherein the network control software determines whether the age bit is reset by periodically polling the forwarding database. 11. The non-transitory computer-readable storage media of claim 9, wherein the static entry is inserted into the forwarding database after the network control software validates a new or moved source MAC address, wherein the validation includes: receiving, by the switch, the first packet; if the first packet includes a new source MAC address, inserting into the forwarding database a temporary entry which includes the source MAC address and a flag which is set to indicate that the network control software has been notified; if the first packet includes a moved source MAC address, updating an existing entry in the forwarding database which includes the source MAC address by setting the flag for the entry, and forwarding the first packet towards the network control software. 12. The non-transitory computer-readable storage media of claim 11, wherein the first packet is not forwarded towards a port associated with the target MAC address included in the first packet. 13. 
The non-transitory computer-readable storage media of claim 11, wherein the temporary entry includes a field indicating the temporary status of the entry and wherein the temporary entry does not include routing information. 14. The non-transitory computer-readable storage media of claim 11, the operations further comprising: determining that a second packet has a source MAC address that matches the temporary entry or the existing entry; and redirecting the received second packet to the network control software if the flag is reset for the temporary entry or the existing entry. 15. The non-transitory computer-readable storage media of claim 11, the operations further comprising, adding, by the network control software, an access control list (ACL) rule to block or discard packets received from the source MAC address of the first packet if the network control software does not validate the source MAC address. 16. The non-transitory computer-readable storage media of claim 11, wherein the first packet was transmitted by a new virtual machine or a moved virtual machine. 17. A system, comprising: a client device, having a processor and memory, configured to execute a program for invalidating static entries in a forwarding database of a switch, by performing operations comprising: inserting, by network control software, a static entry into the forwarding database; and setting, by the network control software, an age bit for the static entry, wherein the age bit is reset by the switch when a hit on the static entry occurs, and wherein the static entry is invalidated by the network control software if the network control software determines that the age bit is not reset for at least a threshold period of time. 18. The system of claim 17, wherein the network control software determines whether the age bit is reset by periodically polling the forwarding database. 19. 
The system of claim 17, wherein the static entry is inserted into the forwarding database after the network control software validates a new or moved source MAC address, wherein the validation includes: receiving, by the switch, the first packet; if the first packet includes a new source MAC address, inserting into the forwarding database a temporary entry which includes the source MAC address and a flag which is set to indicate that the network control software has been notified; if the first packet includes a moved source MAC address, updating an existing entry in the forwarding database which includes the source MAC address by setting the flag for the entry, and forwarding the first packet towards the network control software. 20. The system of claim 19, the operations further comprising: determining that a second packet has a source MAC address that matches the temporary entry or the existing entry; and redirecting the received second packet to the network control software if the flag is reset for the temporary entry or the existing entry.
Techniques are disclosed for notifying network control software of new and moved source MAC addresses. In one embodiment, a switch may redirect a packet sent by a new or migrated virtual machine to the network control software as a notification. The switch does not forward the packet, thereby protecting against denial of service attacks. The switch further adds to a forwarding database a temporary entry which includes a “No_Redirect” flag for a new source MAC address, or updates an existing entry for a source MAC address that hits in the forwarding database by setting the “No_Redirect” flag. The “No_Redirect” flag indicates whether a notification has already been sent to the network control software for this source MAC address. The switch may periodically retry the notification to the network control software, until the network control software validates the source MAC address, depending on whether the “No_Redirect” is set.1.-8. (canceled) 9. One or more non-transitory computer-readable storage media storing instructions, which when executed by a client device and a server system, performs operations for invalidating static entries in a forwarding database of a switch, comprising: inserting, by network control software, a static entry into the forwarding database; and setting, by the network control software, an age bit for the static entry, wherein the age bit is reset by the switch when a hit on the static entry occurs, and wherein the static entry is invalidated by the network control software if the network control software determines that the age bit is not reset for at least a threshold period of time. 10. The non-transitory computer-readable storage media of claim 9, wherein the network control software determines whether the age bit is reset by periodically polling the forwarding database. 11. 
The non-transitory computer-readable storage media of claim 9, wherein the static entry is inserted into the forwarding database after the network control software validates a new or moved source MAC address, wherein the validation includes: receiving, by the switch, the first packet; if the first packet includes a new source MAC address, inserting into the forwarding database a temporary entry which includes the source MAC address and a flag which is set to indicate that the network control software has been notified; if the first packet includes a moved source MAC address, updating an existing entry in the forwarding database which includes the source MAC address by setting the flag for the entry, and forwarding the first packet towards the network control software. 12. The non-transitory computer-readable storage media of claim 11, wherein the first packet is not forwarded towards a port associated with the target MAC address included in the first packet. 13. The non-transitory computer-readable storage media of claim 11, wherein the temporary entry includes a field indicating the temporary status of the entry and wherein the temporary entry does not include routing information. 14. The non-transitory computer-readable storage media of claim 11, the operations further comprising: determining that a second packet has a source MAC address that matches the temporary entry or the existing entry; and redirecting the received second packet to the network control software if the flag is reset for the temporary entry or the existing entry. 15. The non-transitory computer-readable storage media of claim 11, the operations further comprising, adding, by the network control software, an access control list (ACL) rule to block or discard packets received from the source MAC address of the first packet if the network control software does not validate the source MAC address. 16. 
The non-transitory computer-readable storage media of claim 11, wherein the first packet was transmitted by a new virtual machine or a moved virtual machine. 17. A system, comprising: a client device, having a processor and memory, configured to execute a program for invalidating static entries in a forwarding database of a switch, by performing operations comprising: inserting, by network control software, a static entry into the forwarding database; and setting, by the network control software, an age bit for the static entry, wherein the age bit is reset by the switch when a hit on the static entry occurs, and wherein the static entry is invalidated by the network control software if the network control software determines that the age bit is not reset for at least a threshold period of time. 18. The system of claim 17, wherein the network control software determines whether the age bit is reset by periodically polling the forwarding database. 19. The system of claim 17, wherein the static entry is inserted into the forwarding database after the network control software validates a new or moved source MAC address, wherein the validation includes: receiving, by the switch, the first packet; if the first packet includes a new source MAC address, inserting into the forwarding database a temporary entry which includes the source MAC address and a flag which is set to indicate that the network control software has been notified; if the first packet includes a moved source MAC address, updating an existing entry in the forwarding database which includes the source MAC address by setting the flag for the entry, and forwarding the first packet towards the network control software. 20. 
The system of claim 19, the operations further comprising: determining that a second packet has a source MAC address that matches the temporary entry or the existing entry; and redirecting the received second packet to the network control software if the flag is reset for the temporary entry or the existing entry.
2,400
7,470
7,470
14,177,409
2,449
Techniques are disclosed for notifying network control software of new and moved source MAC addresses. In one embodiment, a switch may redirect a packet sent by a new or migrated virtual machine to the network control software as a notification. The switch does not forward the packet, thereby protecting against denial of service attacks. The switch further adds to a forwarding database a temporary entry which includes a “No_Redirect” flag for a new source MAC address, or updates an existing entry for a source MAC address that hits in the forwarding database by setting the “No_Redirect” flag. The “No_Redirect” flag indicates whether a notification has already been sent to the network control software for this source MAC address. The switch may periodically retry the notification to the network control software, until the network control software validates the source MAC address, depending on whether the “No_Redirect” is set.
1. A computer-implemented method for invalidating static entries in a forwarding database of a switch, comprising: inserting, by network control software, a static entry into the forwarding database; and setting, by the network control software, an age bit for the static entry, wherein the age bit is reset by the switch when a hit on the static entry occurs, and wherein the static entry is invalidated by the network control software if the network control software determines that the age bit is not reset for at least a threshold period of time. 2. The method of claim 1, wherein the network control software determines whether the age bit is reset by periodically polling the forwarding database. 3. The method of claim 1, wherein the static entry is inserted into the forwarding database after the network control software validates a new or moved source MAC address, wherein the validation includes: receiving, by the switch, the first packet; if the first packet includes a new source MAC address, inserting into the forwarding database a temporary entry which includes the source MAC address and a flag which is set to indicate that the network control software has been notified; if the first packet includes a moved source MAC address, updating an existing entry in the forwarding database which includes the source MAC address by setting the flag for the entry, and forwarding the first packet towards the network control software. 4. The method of claim 3, wherein the first packet is not forwarded towards a port associated with the target MAC address included in the first packet. 5. The method of claim 3, wherein the temporary entry includes a field indicating the temporary status of the entry and wherein the temporary entry does not include routing information. 6. 
The method of claim 3, further comprising: determining that a second packet has a source MAC address that matches the temporary entry or the existing entry; and redirecting the received second packet to the network control software if the flag is reset for the temporary entry or the existing entry. 7. The method of claim 3, further comprising, adding, by the network control software, an access control list (ACL) rule to block or discard packets received from the source MAC address of the first packet if the network control software does not validate the source MAC address. 8. The method of claim 3, wherein the first packet was transmitted by a new virtual machine or a moved virtual machine.
Techniques are disclosed for notifying network control software of new and moved source MAC addresses. In one embodiment, a switch may redirect a packet sent by a new or migrated virtual machine to the network control software as a notification. The switch does not forward the packet, thereby protecting against denial of service attacks. The switch further adds to a forwarding database a temporary entry which includes a “No_Redirect” flag for a new source MAC address, or updates an existing entry for a source MAC address that hits in the forwarding database by setting the “No_Redirect” flag. The “No_Redirect” flag indicates whether a notification has already been sent to the network control software for this source MAC address. The switch may periodically retry the notification to the network control software, until the network control software validates the source MAC address, depending on whether the “No_Redirect” is set.1. A computer-implemented method for invalidating static entries in a forwarding database of a switch, comprising: inserting, by network control software, a static entry into the forwarding database; and setting, by the network control software, an age bit for the static entry, wherein the age bit is reset by the switch when a hit on the static entry occurs, and wherein the static entry is invalidated by the network control software if the network control software determines that the age bit is not reset for at least a threshold period of time. 2. The method of claim 1, wherein the network control software determines whether the age bit is reset by periodically polling the forwarding database. 3. 
The method of claim 1, wherein the static entry is inserted into the forwarding database after the network control software validates a new or moved source MAC address, wherein the validation includes: receiving, by the switch, the first packet; if the first packet includes a new source MAC address, inserting into the forwarding database a temporary entry which includes the source MAC address and a flag which is set to indicate that the network control software has been notified; if the first packet includes a moved source MAC address, updating an existing entry in the forwarding database which includes the source MAC address by setting the flag for the entry, and forwarding the first packet towards the network control software. 4. The method of claim 3, wherein the first packet is not forwarded towards a port associated with the target MAC address included in the first packet. 5. The method of claim 3, wherein the temporary entry includes a field indicating the temporary status of the entry and wherein the temporary entry does not include routing information. 6. The method of claim 3, further comprising: determining that a second packet has a source MAC address that matches the temporary entry or the existing entry; and redirecting the received second packet to the network control software if the flag is reset for the temporary entry or the existing entry. 7. The method of claim 3, further comprising, adding, by the network control software, an access control list (ACL) rule to block or discard packets received from the source MAC address of the first packet if the network control software does not validate the source MAC address. 8. The method of claim 3, wherein the first packet was transmitted by a new virtual machine or a moved virtual machine.
2,400
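The method claims of the row above (claims 1-2) describe a division of labor around a static forwarding-database entry: the network control software inserts the entry and sets an age bit, the switch resets the bit on each hit, and the control software invalidates the entry if the bit stays set (i.e., no hits occur) for a threshold period, checking by periodic polling. A minimal Python sketch of that ageing scheme follows; class and method names are hypothetical, not from the filing:

```python
class ForwardingDatabase:
    """Toy model of the static-entry ageing scheme in the claims.

    Control-software side: insert_static_entry() and poll_and_invalidate().
    Switch side: lookup(), which resets the age bit on a hit.
    """

    def __init__(self):
        # mac -> {"age_bit": bool, "set_at": seconds}
        self.entries = {}

    # -- network control software side --
    def insert_static_entry(self, mac: str, now: float) -> None:
        # Claim 1: insert the static entry and set its age bit.
        self.entries[mac] = {"age_bit": True, "set_at": now}

    def poll_and_invalidate(self, mac: str, now: float,
                            threshold_s: float) -> bool:
        # Claim 2: the control software periodically polls the database.
        entry = self.entries.get(mac)
        if entry is None:
            return False
        if entry["age_bit"]:
            # No hit has reset the bit since it was last set; invalidate
            # the entry once the threshold period has elapsed (claim 1).
            if now - entry["set_at"] >= threshold_s:
                del self.entries[mac]
                return True
        else:
            # The switch saw traffic; re-arm the age bit for the next period.
            entry["age_bit"] = True
            entry["set_at"] = now
        return False

    # -- switch side --
    def lookup(self, mac: str) -> bool:
        entry = self.entries.get(mac)
        if entry is not None:
            entry["age_bit"] = False  # hit: the switch resets the age bit
        return entry is not None
```

In use, a poll that finds the bit cleared merely re-arms it; only a full threshold period with no intervening `lookup()` hit causes the entry to be invalidated, matching the claim's "not reset for at least a threshold period of time" condition.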
7,471
7,471
14,458,488
2,487
A processor and projector images a coded projector pattern of light on a portion of an object providing a first coded surface pattern of light, images a first sequential projector pattern of light on another portion of the object providing a first sequential surface pattern of light, and images a second sequential projector pattern of light on the other portion providing a second sequential surface pattern of light. A camera forms a first coded image of the first coded surface pattern of light and generates a first coded array, forms a first sequential image of the first sequential surface pattern of light and generates a first sequential array, forms a second sequential image of the second sequential surface pattern of light and generates a second sequential array. The processor determines a correspondence between the camera and projector, and measures three-dimensional coordinates of the object.
1. A method for measuring three-dimensional (3D) coordinates of a surface of an object, the method comprising: providing a structured light scanner that includes a processor, a projector, and a camera; generating by the processor at a first time a first coded projector pattern of light on a plane of patterned illumination, the first coded projector pattern of light being confined to a window in the plane, the window including a collection of subwindows arranged in two dimensions on the plane with each subwindow within the collection of subwindows having a subwindow pattern of light different than and distinguishable from the subwindow pattern of light of each adjacent subwindow; imaging the first coded projector pattern of light using the projector onto a first portion of the surface of the object to obtain a first coded surface pattern of light on the first portion; forming using the camera a first coded image that is an image of the first coded surface pattern of light and generating in response a first coded array, the first coded array being an array of digital values; sending the first coded array to the processor; determining via the processor a correspondence between each element of the first coded array and respective ones of the subwindows; determining via the processor in a first frame of reference of the scanner first coded 3D coordinates, the first coded 3D coordinates being 3D coordinates of points on the first portion, the first coded 3D coordinates based at least in part on the first coded projector pattern of light, the first coded array, the correspondence, a length of a baseline distance between the camera and the projector, a pose of the camera, and a pose of the projector; moving the scanner and/or the object to change the object from a first pose to a second pose, the first pose and the second pose of the object being given in the first frame of reference, the second pose of the object based at least on the first coded 3D coordinates and on a 
definable scanner standoff distance relative to the object; generating via the processor at a second time a first sequential projector pattern of light on the plane of patterned illumination; imaging the first sequential projector pattern of light using the projector onto a second portion of the surface of the object to obtain a first sequential surface pattern of light on the second portion; forming using the camera a first sequential image that is an image of the first sequential surface pattern of light and generating in response a first sequential array, the first sequential array being an array of digital values; sending the first sequential array to the processor; generating via the processor at a third time a second sequential projector pattern of light on the plane of patterned illumination; imaging the second sequential projector pattern of light using the projector onto the second portion of the surface of the object to obtain a second sequential surface pattern of light on the second portion; forming using the camera a second sequential image that is an image of the second sequential surface pattern of light and generating in response a second sequential array, the second sequential array being an array of digital values; sending the second sequential array to the processor; determining via the processor in the first frame of reference sequential 3D coordinates, the sequential 3D coordinates being 3D coordinates of points on the second portion, the sequential 3D coordinates based at least in part on the first sequential projector pattern of light, the first sequential array, the second sequential projector pattern of light, the second sequential array, the length of the baseline, the camera pose, and the projector pose; and storing the sequential 3D coordinates. 2. 
The method of claim 1 wherein, in the step of generating by the processor at a first time a first coded projector pattern of light on the plane of patterned illumination, each subwindow within the collection of subwindows having a subwindow pattern of light is further different than and distinguishable from the subwindow pattern of light of each of the other subwindows. 3. The method of claim 1 wherein, in the step of moving the scanner and/or the object from a first pose to a second pose, the second pose is further based at least in part on identifying with the processor a first object feature. 4. The method of claim 3 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on matching the identified first object feature to a computer aided drawing (CAD) model of the object. 5. The method of claim 3 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on a scanned representation of a third portion of the surface of the object, the third portion including a region of the surface not included in the first portion or the second portion. 6. The method of claim 3 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on observing via the processor an edge of the object. 7. The method of claim 1 further including, displaying a representation of the surface of the object on a display, the representation based at least in part on the coded 3D coordinates. 8. The method of claim 7 wherein, in the step of moving the scanner and/or the object, the second pose is further based on moving the scanner and/or the object by a user, the moving based at least in part on the coded 3D coordinates on the display. 9. The method of claim 8 wherein, in the step of displaying the coded 3D coordinates on a display, the coded 3D coordinates are displayed in real time. 10. 
The method of claim 8 wherein, in the step of displaying the coded 3D coordinates on a display, the coded 3D coordinates are colored to indicate a relative position of each of the 3D coordinate points relative to one or more locations on the scanner. 11. The method of claim 1 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on a depth of field of the scanner. 12. The method of claim 1 wherein, in the step of moving the scanner and/or the object, the moving is performed by a motorized device. 13. The method of claim 12 wherein, in the step of moving the scanner and/or the object, the motorized device is a robot. 14. The method of claim 1 wherein the method further includes: generating via the processor at a fourth time a third sequential projector pattern of light on the plane of patterned illumination; imaging the third sequential projector pattern of light using the projector onto the second portion of the surface of the object to obtain a third sequential surface pattern of light on the second portion; forming using the camera a third sequential image that is an image of the third sequential surface pattern of light and generating in response a third sequential array, the third sequential array being an array of digital values; sending the third sequential array to the processor; and wherein, in the step of determining via the processor in the first frame of reference sequential 3D coordinates, the sequential 3D coordinates are further based at least in part on the third sequential surface pattern of light and the third sequential array. 15. 
The method of claim 14 wherein, in the step of determining via the processor in the first frame of reference sequential 3D coordinates, the determining further includes calculating a phase of a pixel in a photosensitive array of the camera, the phase based at least in part on a first level of light received by the pixel in the first sequential image, a second level of light received by the pixel in the second sequential image, and a third level of light received by the pixel in the third sequential image. 16. The method of claim 1 further including, adjusting via the processor an average level of optical power of the first sequential projector pattern of light, the adjusting based at least in part on the first coded array. 17. The method of claim 14 wherein, in the step of determining via the processor in the first frame of reference first coded 3D coordinates, the sequential 3D coordinates are based at least in part on the first coded 3D coordinates. 18. The method of claim 1, wherein: the scanner has a first frame of reference; the projector includes a plane of patterned illumination and a projector lens, the projector having a projector perspective center; the camera includes a photosensitive array and a camera lens, the camera having a camera perspective center, the photosensitive array including an array of pixels; the scanner having a baseline, the baseline being a straight line segment between the projector perspective center and the camera perspective center; the camera having a camera pose in the first frame of reference; the projector having a projector pose in the first frame of reference; and, the processor further configured to control the plane of patterned illumination. 19. 
The method of claim 18, wherein: the first coded image is formed on the photosensitive array using the camera lens; the first sequential image is formed on the photosensitive array using the camera lens; and, the second sequential image is formed on the photosensitive array using the camera lens. 20. The method of claim 18, wherein the step of determining via the processor a correspondence between each element of the first coded array and respective ones of the subwindows comprises: facilitating via the processor a search of pixel values on the photosensitive array that have one-to-one correspondence with uniquely identifiable element values of the illuminated pattern source. 21. The method of claim 1, wherein: in the step of determining via the processor in a first frame of reference of the scanner first coded 3D coordinates, and in the step of determining via the processor in the first frame of reference sequential 3D coordinates, each step comprises: executing via the processor triangulation calculations. 22. 
An apparatus for measuring three-dimensional (3D) coordinates of a surface of an object, the apparatus comprising: a structured light scanner comprising a processor, a projector, and a camera; wherein the processor is responsive to executable instructions which when executed by the processor facilitates the following method: generating by the processor at a first time a first coded projector pattern of light on a plane of patterned illumination, the first coded projector pattern of light being confined to a window in the plane, the window including a collection of subwindows arranged in two dimensions on the plane with each subwindow within the collection of subwindows having a subwindow pattern of light different than and distinguishable from the subwindow pattern of light of each adjacent subwindow; imaging the first coded projector pattern of light using the projector onto a first portion of the surface of the object to obtain a first coded surface pattern of light on the first portion; forming using the camera a first coded image that is an image of the first coded surface pattern of light and generating in response a first coded array, the first coded array being an array of digital values; sending the first coded array to the processor; determining via the processor a correspondence between each element of the first coded array and respective ones of the subwindows; determining via the processor in a first frame of reference of the scanner first coded 3D coordinates, the first coded 3D coordinates being 3D coordinates of points on the first portion, the first coded 3D coordinates based at least in part on the first coded projector pattern of light, the first coded array, the correspondence, a length of a baseline distance between the camera and the projector, a pose of the camera, and a pose of the projector; moving the scanner and/or the object to change the object from a first pose to a second pose, the first pose and the second pose of the object being 
given in the first frame of reference, the second pose of the object based at least on the first coded 3D coordinates and on a definable scanner standoff distance relative to the object; generating via the processor at a second time a first sequential projector pattern of light on the plane of patterned illumination; imaging the first sequential projector pattern of light using the projector onto a second portion of the surface of the object to obtain a first sequential surface pattern of light on the second portion; forming using the camera a first sequential image that is an image of the first sequential surface pattern of light and generating in response a first sequential array, the first sequential array being an array of digital values; sending the first sequential array to the processor; generating via the processor at a third time a second sequential projector pattern of light on the plane of patterned illumination; imaging the second sequential projector pattern of light using the projector onto the second portion of the surface of the object to obtain a second sequential surface pattern of light on the second portion; forming using the camera a second sequential image that is an image of the second sequential surface pattern of light and generating in response a second sequential array, the second sequential array being an array of digital values; sending the second sequential array to the processor; determining via the processor in the first frame of reference sequential 3D coordinates, the sequential 3D coordinates being 3D coordinates of points on the second portion, the sequential 3D coordinates based at least in part on the first sequential projector pattern of light, the first sequential array, the second sequential projector pattern of light, the second sequential array, the length of the baseline, the camera pose, and the projector pose; and storing the sequential 3D coordinates.
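The claims above resolve both coordinate-determining steps into "triangulation calculations" over a known baseline, camera pose, and projector pose. As a purely illustrative sketch, not part of the claims, the classic angle-angle triangulation those calculations rest on can be written as follows; the function name and the planar (single-baseline) geometry are assumptions for the illustration, with ray angles measured from the baseline at the camera and projector perspective centers.

```python
import math

def triangulate(baseline, cam_angle, proj_angle):
    """Return (x, z) of a surface point in the camera frame.

    cam_angle and proj_angle are the angles (in radians) between the
    baseline and the rays from the camera and projector perspective
    centers to the measured point.  The two rays and the baseline form
    a triangle, so the law of sines gives the range to the point.
    """
    # The third angle of the triangle, at the measured point.
    apex = math.pi - cam_angle - proj_angle
    # Range from the camera perspective center to the point.
    r = baseline * math.sin(proj_angle) / math.sin(apex)
    # Resolve the range into coordinates along and across the baseline.
    return (r * math.cos(cam_angle), r * math.sin(cam_angle))
```

With a 2.0-unit baseline and both rays at 45 degrees, the triangle is right-angled at the point and the sketch places it one unit along and one unit away from the baseline.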
A processor and projector images a coded projector pattern of light on a portion of an object providing a first coded surface pattern of light, images a first sequential projector pattern of light on another portion of the object providing a first sequential surface pattern of light, and images a second sequential projector pattern of light on the other portion providing a second sequential surface pattern of light. A camera forms a first coded image of the first coded surface pattern of light and generates a first coded array, forms a first sequential image of the first sequential surface pattern of light and generates a first sequential array, forms a second sequential image of the second sequential surface pattern of light and generates a second sequential array. The processor determines a correspondence between the camera and projector, and measures three-dimensional coordinates of the object.1. A method for measuring three-dimensional (3D) coordinates of a surface of an object, the method comprising: providing a structured light scanner that includes a processor, a projector, and a camera; generating by the processor at a first time a first coded projector pattern of light on a plane of patterned illumination, the first coded projector pattern of light being confined to a window in the plane, the window including a collection of subwindows arranged in two dimensions on the plane with each subwindow within the collection of subwindows having a subwindow pattern of light different than and distinguishable from the subwindow pattern of light of each adjacent subwindow; imaging the first coded projector pattern of light using the projector onto a first portion of the surface of the object to obtain a first coded surface pattern of light on the first portion; forming using the camera a first coded image that is an image of the first coded surface pattern of light and generating in response a first coded array, the first coded array being an array of digital values; 
sending the first coded array to the processor; determining via the processor a correspondence between each element of the first coded array and respective ones of the subwindows; determining via the processor in a first frame of reference of the scanner first coded 3D coordinates, the first coded 3D coordinates being 3D coordinates of points on the first portion, the first coded 3D coordinates based at least in part on the first coded projector pattern of light, the first coded array, the correspondence, a length of a baseline distance between the camera and the projector, a pose of the camera, and a pose of the projector; moving the scanner and/or the object to change the object from a first pose to a second pose, the first pose and the second pose of the object being given in the first frame of reference, the second pose of the object based at least on the first coded 3D coordinates and on a definable scanner standoff distance relative to the object; generating via the processor at a second time a first sequential projector pattern of light on the plane of patterned illumination; imaging the first sequential projector pattern of light using the projector onto a second portion of the surface of the object to obtain a first sequential surface pattern of light on the second portion; forming using the camera a first sequential image that is an image of the first sequential surface pattern of light and generating in response a first sequential array, the first sequential array being an array of digital values; sending the first sequential array to the processor; generating via the processor at a third time a second sequential projector pattern of light on the plane of patterned illumination; imaging the second sequential projector pattern of light using the projector onto the second portion of the surface of the object to obtain a second sequential surface pattern of light on the second portion; forming using the camera a second sequential image that is an image of 
the second sequential surface pattern of light and generating in response a second sequential array, the second sequential array being an array of digital values; sending the second sequential array to the processor; determining via the processor in the first frame of reference sequential 3D coordinates, the sequential 3D coordinates being 3D coordinates of points on the second portion, the sequential 3D coordinates based at least in part on the first sequential projector pattern of light, the first sequential array, the second sequential projector pattern of light, the second sequential array, the length of the baseline, the camera pose, and the projector pose; and storing the sequential 3D coordinates. 2. The method of claim 1 wherein, in the step of generating by the processor at a first time a first coded projector pattern of light on the plane of patterned illumination, each subwindow within the collection of subwindows having a subwindow pattern of light is further different than and distinguishable from the subwindow pattern of light of each of the other subwindows. 3. The method of claim 1 wherein, in the step of moving the scanner and/or the object from a first pose to a second pose, the second pose is further based at least in part on identifying with the processor a first object feature. 4. The method of claim 3 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on matching the identified first object feature to a computer aided drawing (CAD) model of the object. 5. The method of claim 3 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on a scanned representation of a third portion of the surface of the object, the third portion including a region of the surface not included in the first portion or the second portion. 6. 
The method of claim 3 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on observing via the processor an edge of the object. 7. The method of claim 1 further including, displaying a representation of the surface of the object on a display, the representation based at least in part on the coded 3D coordinates. 8. The method of claim 7 wherein, in the step of moving the scanner and/or the object, the second pose is further based on moving the scanner and/or the object by a user, the moving based at least in part on the coded 3D coordinates on the display. 9. The method of claim 8 wherein, in the step of displaying the coded 3D coordinates on a display, the coded 3D coordinates are displayed in real time. 10. The method of claim 8 wherein, in the step of displaying the coded 3D coordinates on a display, the coded 3D coordinates are colored to indicate a relative position of each of the 3D coordinate points relative to one or more locations on the scanner. 11. The method of claim 1 wherein, in the step of moving the scanner and/or the object, the second pose is further based at least in part on a depth of field of the scanner. 12. The method of claim 1 wherein, in the step of moving the scanner and/or the object, the moving is performed by a motorized device. 13. The method of claim 12 wherein, in the step of moving the scanner and/or the object, the motorized device is a robot. 14. 
The method of claim 1 wherein the method further includes: generating via the processor at a fourth time a third sequential projector pattern of light on the plane of patterned illumination; imaging the third sequential projector pattern of light using the projector onto the second portion of the surface of the object to obtain a third sequential surface pattern of light on the second portion; forming using the camera a third sequential image that is an image of the third sequential surface pattern of light and generating in response a third sequential array, the third sequential array being an array of digital values; sending the third sequential array to the processor; and wherein, in the step of determining via the processor in the first frame of reference sequential 3D coordinates, the sequential 3D coordinates are further based at least in part on the third sequential surface pattern of light and the third sequential array. 15. The method of claim 14 wherein, in the step of determining via the processor in the first frame of reference sequential 3D coordinates, the determining further includes calculating a phase of a pixel in a photosensitive array of the camera, the phase based at least in part on a first level of light received by the pixel in the first sequential image, a second level of light received by the pixel in the second sequential image, and a third level of light received by the pixel in the third sequential image. 16. The method of claim 1 further including, adjusting via the processor an average level of optical power of the first sequential projector pattern of light, the adjusting based at least in part on the first coded array. 17. The method of claim 14 wherein, in the step of determining via the processor in the first frame of reference first coded 3D coordinates, the sequential 3D coordinates are based at least in part on the first coded 3D coordinates. 18. 
The method of claim 1, wherein: the scanner has a first frame of reference; the projector includes a plane of patterned illumination and a projector lens, the projector having a projector perspective center; the camera includes a photosensitive array and a camera lens, the camera having a camera perspective center, the photosensitive array including an array of pixels; the scanner having a baseline, the baseline being a straight line segment between the projector perspective center and the camera perspective center; the camera having a camera pose in the first frame of reference; the projector having a projector pose in the first frame of reference; and, the processor further configured to control the plane of patterned illumination. 19. The method of claim 18, wherein: the first coded image is formed on the photosensitive array using the camera lens; the first sequential image is formed on the photosensitive array using the camera lens; and, the second sequential image is formed on the photosensitive array using the camera lens. 20. The method of claim 18, wherein the step of determining via the processor a correspondence between each element of the first coded array and respective ones of the subwindows comprises: facilitating via the processor a search of pixel values on the photosensitive array that have one-to-one correspondence with uniquely identifiable element values of the illuminated pattern source. 21. The method of claim 1, wherein: in the step of determining via the processor in a first frame of reference of the scanner first coded 3D coordinates, and in the step of determining via the processor in the first frame of reference sequential 3D coordinates, each step comprises: executing via the processor triangulation calculations. 22. 
An apparatus for measuring three-dimensional (3D) coordinates of a surface of an object, the apparatus comprising: a structured light scanner comprising a processor, a projector, and a camera; wherein the processor is responsive to executable instructions which when executed by the processor facilitates the following method: generating by the processor at a first time a first coded projector pattern of light on a plane of patterned illumination, the first coded projector pattern of light being confined to a window in the plane, the window including a collection of subwindows arranged in two dimensions on the plane with each subwindow within the collection of subwindows having a subwindow pattern of light different than and distinguishable from the subwindow pattern of light of each adjacent subwindow; imaging the first coded projector pattern of light using the projector onto a first portion of the surface of the object to obtain a first coded surface pattern of light on the first portion; forming using the camera a first coded image that is an image of the first coded surface pattern of light and generating in response a first coded array, the first coded array being an array of digital values; sending the first coded array to the processor; determining via the processor a correspondence between each element of the first coded array and respective ones of the subwindows; determining via the processor in a first frame of reference of the scanner first coded 3D coordinates, the first coded 3D coordinates being 3D coordinates of points on the first portion, the first coded 3D coordinates based at least in part on the first coded projector pattern of light, the first coded array, the correspondence, a length of a baseline distance between the camera and the projector, a pose of the camera, and a pose of the projector; moving the scanner and/or the object to change the object from a first pose to a second pose, the first pose and the second pose of the object being 
given in the first frame of reference, the second pose of the object based at least on the first coded 3D coordinates and on a definable scanner standoff distance relative to the object; generating via the processor at a second time a first sequential projector pattern of light on the plane of patterned illumination; imaging the first sequential projector pattern of light using the projector onto a second portion of the surface of the object to obtain a first sequential surface pattern of light on the second portion; forming using the camera a first sequential image that is an image of the first sequential surface pattern of light and generating in response a first sequential array, the first sequential array being an array of digital values; sending the first sequential array to the processor; generating via the processor at a third time a second sequential projector pattern of light on the plane of patterned illumination; imaging the second sequential projector pattern of light using the projector onto the second portion of the surface of the object to obtain a second sequential surface pattern of light on the second portion; forming using the camera a second sequential image that is an image of the second sequential surface pattern of light and generating in response a second sequential array, the second sequential array being an array of digital values; sending the second sequential array to the processor; determining via the processor in the first frame of reference sequential 3D coordinates, the sequential 3D coordinates being 3D coordinates of points on the second portion, the sequential 3D coordinates based at least in part on the first sequential projector pattern of light, the first sequential array, the second sequential projector pattern of light, the second sequential array, the length of the baseline, the camera pose, and the projector pose; and storing the sequential 3D coordinates.
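Claim 15 of the method bases the sequential determination on a phase calculated from three light levels received by a pixel across three sequential images. A standard way to recover such a phase is the three-step phase-shifting formula; the sketch below is illustrative only and assumes equal 120-degree shifts between the three sinusoidal patterns, an assumption the claims themselves do not specify.

```python
import math

def three_step_phase(i1, i2, i3):
    """Recover the phase at a pixel from three intensity samples taken
    under sinusoidal patterns shifted by -120, 0, and +120 degrees.

    For i_k = A + B*cos(phi + k*2*pi/3), k in {-1, 0, +1}:
      sqrt(3)*(i1 - i3) = 3*B*sin(phi) and 2*i2 - i1 - i3 = 3*B*cos(phi),
    so atan2 of the two recovers phi regardless of offset A and amplitude B.
    """
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Because the background level A and modulation B cancel in the ratio, the recovered phase is insensitive to ambient light and surface reflectance, which is the usual motivation for using three or more shifted patterns.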
2,400
7,472
7,472
14,922,379
2,468
Call numbers are recognized in order to establish a connection from a line-switched network to a packet-switched network. In one aspect, a device comprises a unit for detecting a selected string of digits as a selected call number, a unit for storing a plurality of authorized call numbers, a comparator unit for comparing the selected call number to the plurality of stored call numbers, and a unit for converting the selected call number into an associated IP address as soon as the comparator unit detects that the selected call number matches one of the stored call numbers.
1.-14. (canceled) 15. A method for detecting dialed digits of a subscriber number for establishing a connection from a circuit-switched network to a packet-switching network comprising the steps of: a) storing a plurality of authorized subscriber numbers; b) detecting a string of digits having a number of digits that is less than the complete number of digits of each of the plurality of stored call numbers; c) continuously comparing the detected string of digits that are not yet a fully dialed subscriber number with the plurality of stored authorized subscriber numbers; d) detecting an end of entering of the string of digits as a fully dialed subscriber number such that upon detecting the fully dialed subscriber number the fully dialed subscriber number is stored as a further authorized subscriber number; and e) converting a desired subscriber number identified from the continuously comparing of the detected string of digits or from the detecting of the fully dialed subscriber number into an associated IP address as soon as the dialed subscriber number matches one of the stored authorized subscriber numbers, wherein the converting of the dialed subscriber number into the IP address comprises converting the identified call number into the IP address based on a routing table received from a server. 16. The method of claim 15, wherein an exchange device performs the detecting of the string of digits, the continuously comparing, the detecting of the end of the entering of the string of digits, and the converting, and the method also comprising: detecting at least one additional digit entered into the string of digits to form an updated string of digits that is not yet a fully dialed subscriber number and the exchange device comparing the updated string of digits to the plurality of stored call numbers to identify a desired call number that corresponds to the updated string of digits from the plurality of stored call numbers. 17. 
The method of claim 16, wherein the end of a sequence of digits is identified on the basis of a predetermined specific character or of exceeding a predetermined period of time after a dialed number. 18. The method of claim 17, wherein the stored authorized subscriber numbers are sorted according to their frequency of use. 19. The method of claim 18, comprising: deleting a stored authorized subscriber number having a lowest frequency of use in response to a memory for storing the authorized subscriber numbers being at a pre-selected storage capacity. 20. The method of claim 15, wherein stored authorized subscriber numbers are sorted according to their storage age. 21. The method of claim 20, comprising: deleting an oldest stored authorized subscriber number in response to a memory for storing the authorized subscriber numbers being at a pre-selected storage capacity. 22. The method of claim 15, wherein at least one of the stored authorized subscriber numbers has at least one placeholder that is used when comparing the dialed subscriber number to the plurality of authorized stored subscriber numbers to only compare a number of digits that is less than a number of digits in a fully dialed subscriber number. 23. 
An apparatus for detecting dialed digits of a subscriber number for establishing a connection from a circuit-switched network to a packet-switching network, the apparatus comprising: a digit sequence detection unit configured to detect a string of digits as a dialed subscriber number; a non-transitory subscriber number storage unit configured to store a plurality of authorized subscriber numbers; a detection unit configured to detect an end of a selected number sequence such that when an end of a fully dialed subscriber number is recognized the fully dialed subscriber number is stored in the subscriber number storage unit as an authorized subscriber number; a comparator unit configured to continuously compare a not yet fully dialed subscriber number with the plurality of authorized subscriber numbers stored in the subscriber number storage unit; and a conversion unit configured to convert the dialed subscriber number into an associated IP address as soon as the comparator unit detects a match of the dialed subscriber number with one of the stored subscriber numbers, wherein converting of the dialed subscriber number into the IP address comprises converting the identified call number into the IP address based on a routing table received from a server. 24. The apparatus of claim 23, wherein the detection unit is configured to recognize the end of the digit sequence on the basis of a predetermined specific character or exceeding a predetermined period of time after a dialed number. 25. The apparatus of claim 23, wherein the storage unit is configured to sort the stored authorized subscriber numbers according to their frequency of use; and wherein the apparatus is configured to delete a stored authorized subscriber number having a lowest frequency of use in response to the storage unit being at a pre-selected storage capacity. 26. 
The apparatus of claim 23, wherein the storage unit is configured to sort the stored subscriber numbers according to their storage age; and wherein the apparatus is configured to delete an oldest stored authorized subscriber number in response to the storage unit being at a pre-selected storage capacity. 27. The apparatus of claim 23, wherein at least one of the stored authorized subscriber numbers contain at least one wildcard character that is used by the comparator unit to only compare a number of digits that is less than a number of digits in a fully dialed subscriber number. 28. The apparatus of claim 23, comprising: an IP-signaling unit configured to perform signaling in the packet-switching network based on the IP address. 29. A method for recognizing selected digits of a call number comprising: an exchange device storing a plurality of call numbers, each of the stored call numbers having a complete number of digits; the exchange device receiving a string of digits; the exchange device continuously comparing the received string of digits to the plurality of stored call numbers to identify a desired call number of the plurality of stored call numbers that corresponds to the received string of digits; and the exchange device sending an inquiry to a server to learn what IP address corresponds to the desired call number identified from the continuously comparing of the received string of digits as soon as the desired call number is identified; and the exchange device converting the desired call number into an IP address based on a response to the inquiry received from the server. 30. The method of claim 29, comprising: the exchange device deleting a stored call number of the plurality of stored call numbers based on a predetermined criteria, wherein the predetermined criteria for deleting of the stored call number being the stored call number having a lowest frequency of use or the stored call number being stored for a longest period of time. 31. 
The method of claim 29, wherein the string of digits comprises prefix digits for a private branch exchange. 32. The method of claim 29, wherein at least one call number of the plurality of stored call numbers has at least one placeholder as a digit in the at least one call number. 33. The method of claim 29, wherein at least one call number of the plurality of stored call numbers has at least one placeholder as a digit in the at least one call number. 34. The method of claim 29, wherein the string of digits has a number of digits that is less than the complete number of digits of each of the plurality of stored call numbers; and wherein the exchange device sending the inquiry to the server to learn what IP address corresponds to the desired call number identified from the continuously comparing of the received string of digits occurs as soon as the desired call number is identified and prior to the complete number of digits of the desired call number being entered.
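The claims above turn on continuously comparing a not-yet-complete digit string against stored authorized numbers, with placeholder digits letting fewer digits than a full number identify a match (claims 22, 32, 34). The following Python sketch is an assumption-laden illustration, not the claimed implementation: the function name is invented, '?' is assumed as the placeholder character, and a number is reported as identified as soon as the dialed prefix narrows the stored candidates to exactly one.

```python
def identify(dialed, stored_numbers, wildcard="?"):
    """Continuously compare a possibly incomplete dialed digit string
    against stored authorized numbers.

    A wildcard digit in a stored number matches any dialed digit.  The
    stored number is returned as soon as the dialed prefix matches
    exactly one candidate; None means no unique match yet.
    """
    candidates = [
        n for n in stored_numbers
        if len(dialed) <= len(n)
        # Compare only the digits dialed so far, position by position.
        and all(c == wildcard or c == d for c, d in zip(n, dialed))
    ]
    return candidates[0] if len(candidates) == 1 else None
```

For example, with stored numbers "4951234", "4959999", and "30111??", the prefix "495" is still ambiguous, while "4951" uniquely identifies the first number and "30" already identifies the wildcarded one, so conversion to an IP address could begin before the complete number is entered.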
Call numbers are recognized in order to establish a connection from a line-switched network to a packet-switched network. In one aspect, a device comprises a unit for detecting a selected string of digits as a selected call number, a unit for storing a plurality of authorized call numbers, a comparator unit for comparing the selected call number to the plurality of stored call numbers, and a unit for converting the selected call number into an associated IP address as soon as the comparator unit detects that the selected call number matches one of the stored call numbers.1.-14. (canceled) 15. A method for detecting dialed digits of a subscriber number for establishing a connection from a circuit-switched network to a packet-switching network comprising the steps of: a) storing a plurality of authorized subscriber numbers; b) detecting a string of digits having a number of digits that is less than the complete number of digits of each of the plurality of stored call numbers; c) continuously comparing the detected string of digits that are not yet a fully dialed subscriber number with the plurality of stored authorized subscriber numbers; d) detecting an end of entering of the string of digits as a fully dialed subscriber number such that upon detecting the fully dialed subscriber number the fully dialed subscriber number is stored as a further authorized subscriber number; and e) converting a desired subscriber number identified from the continuously comparing of the detected string of digits or from the detecting of the fully dialed subscriber number into an associated IP address as soon as the dialed subscriber number matches one of the stored authorized subscriber numbers, wherein the converting of the dialed subscriber number into the IP address comprises converting the identified call number into the IP address based on a routing table received from a server. 16. 
The method of claim 15, wherein an exchange device performs the detecting of the string of digits, the continuously comparing, the detecting of the end of the entering of the string of digits, and the converting, and the method also comprising: detecting at least one additional digit entered into the string of digits to form an updated string of digits that is not yet a fully dialed subscriber number and the exchange device comparing the updated string of digits to the plurality of stored call numbers to identify a desired call number that corresponds to the updated string of digits from the plurality of stored call numbers. 17. The method of claim 16, wherein the end of a sequence of digits is identified on the basis of a predetermined specific character or exceeding a predetermined period of time after a dialed number. 18. The method of claim 17, wherein the stored authorized subscriber numbers are sorted according to their frequency of use. 19. The method of claim 18, comprising: deleting a stored authorized subscriber number having a lowest frequency of use in response to a memory for storing the authorized subscriber numbers being at a pre-selected storage capacity. 20. The method of claim 15, wherein stored authorized subscriber numbers are sorted according to their storage age. 21. The method of claim 20, comprising: deleting an oldest stored authorized subscriber number in response to a memory for storing the authorized subscriber numbers being at a pre-selected storage capacity. 22. The method of claim 15, wherein at least one of the stored authorized subscriber numbers has at least one placeholder that is used when comparing the dialed subscriber number to the plurality of authorized stored subscriber numbers to only compare a number of digits that is less than a number of digits in a fully dialed subscriber number. 23. 
An apparatus for detecting dialed digits of a subscriber number for establishing a connection from a circuit-switched network to a packet-switching network, the apparatus comprising: a digit sequence detection unit configured to detect a string of digits as a dialed subscriber number; a non-transitory subscriber number storage unit configured to store a plurality of authorized subscriber numbers; a detection unit configured to detect an end of a selected number sequence such that when an end of a fully dialed subscriber number is recognized the fully dialed subscriber number is stored in the subscriber number storage unit as an authorized subscriber number; a comparator unit configured to continuously compare a not yet fully dialed subscriber number with the plurality of authorized subscriber numbers stored in the subscriber number storage unit; and a conversion unit configured to convert the dialed subscriber number into an associated IP address as soon as the comparator unit detects a match of the dialed subscriber number with one of the stored subscriber numbers, wherein the converting of the dialed subscriber number into the IP address comprises converting the identified call number into the IP address based on a routing table received from a server. 24. The apparatus of claim 23, wherein the detection unit is configured to recognize the end of the digit sequence on the basis of a predetermined specific character or exceeding a predetermined period of time after a dialed number. 25. The apparatus of claim 23, wherein the storage unit is configured to sort the stored authorized subscriber numbers according to their frequency of use; and wherein the apparatus is configured to delete a stored authorized subscriber number having a lowest frequency of use in response to the storage unit being at a pre-selected storage capacity. 26. 
The apparatus of claim 23, wherein the storage unit is configured to sort the stored subscriber numbers according to their storage age; and wherein the apparatus is configured to delete an oldest stored authorized subscriber number in response to the storage unit being at a pre-selected storage capacity. 27. The apparatus of claim 23, wherein at least one of the stored authorized subscriber numbers contains at least one wildcard character that is used by the comparator unit to only compare a number of digits that is less than a number of digits in a fully dialed subscriber number. 28. The apparatus of claim 23, comprising: an IP-signaling unit configured to perform signaling in the packet-switching network based on the IP address. 29. A method for recognizing selected digits of a call number comprising: an exchange device storing a plurality of call numbers, each of the stored call numbers having a complete number of digits; the exchange device receiving a string of digits; the exchange device continuously comparing the received string of digits to the plurality of stored call numbers to identify a desired call number of the plurality of stored call numbers that corresponds to the received string of digits; and the exchange device sending an inquiry to a server to learn what IP address corresponds to the desired call number identified from the continuously comparing of the received string of digits as soon as the desired call number is identified; and the exchange device converting the desired call number into an IP address based on a response to the inquiry received from the server. 30. The method of claim 29, comprising: the exchange device deleting a stored call number of the plurality of stored call numbers based on a predetermined criteria, wherein the predetermined criteria for deleting of the stored call number being the stored call number having a lowest frequency of use or the stored call number being stored for a longest period of time. 31. 
The method of claim 29, wherein the string of digits comprises prefix digits for a private branch exchange. 32. The method of claim 29, wherein at least one call number of the plurality of stored call numbers has at least one placeholder as a digit in the at least one call number. 33. The method of claim 29, wherein at least one call number of the plurality of stored call numbers has at least one placeholder as a digit in the at least one call number. 34. The method of claim 29, wherein the string of digits has a number of digits that is less than the complete number of digits of each of the plurality of stored call numbers; and wherein the exchange device sending the inquiry to the server to learn what IP address corresponds to the desired call number identified from the continuously comparing of the received string of digits occurs as soon as the desired call number is identified and prior to the complete number of digits of the desired call number being entered.
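The digit-by-digit matching recited in claims 15, 22, and 29 through 34 (continuously comparing a partially dialed string against stored numbers, honoring single-digit placeholders, and resolving as soon as a unique match emerges) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the class name, the '?' placeholder character, and the least-frequently-used eviction of claim 19 are assumptions.

```python
# Sketch of the continuous digit-matching scheme from the claims: each newly
# dialed digit narrows the set of stored authorized numbers, and conversion
# can begin as soon as the candidate set collapses to one number.
class CallNumberStore:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.numbers = {}  # call number -> frequency of use

    def add(self, number):
        # Evict the least frequently used entry when the store is full,
        # as in claim 19 (claims 20/21 describe age-based eviction instead).
        if number not in self.numbers and len(self.numbers) >= self.capacity:
            lfu = min(self.numbers, key=self.numbers.get)
            del self.numbers[lfu]
        self.numbers[number] = self.numbers.get(number, 0) + 1

    def match_prefix(self, digits):
        # Return stored numbers consistent with the partial dial string.
        # '?' stands in for the single-digit placeholder of claims 22/27/32.
        def digit_ok(stored, dialed):
            return stored == "?" or stored == dialed
        return [n for n in self.numbers
                if len(n) >= len(digits)
                and all(digit_ok(s, d) for s, d in zip(n, digits))]

store = CallNumberStore()
for n in ["0891234", "0895678", "030???9"]:
    store.add(n)

# Digits arrive one at a time; per claims 29/34, the inquiry to the server
# would be sent as soon as a single stored number is identified, before the
# complete number of digits has been entered.
dialed = ""
for digit in "08912":
    dialed += digit
    candidates = store.match_prefix(dialed)
    if len(candidates) == 1:
        break

print(candidates)  # ['0891234']
```

In this sketch the unique match appears after the fourth digit, so a lookup (the routing-table inquiry of claim 29) could be issued three digits early.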
2,400
7,473
7,473
14,020,295
2,482
Example methods and systems for displaying actionable elements over playing content, such as video content, are described. In some example embodiments, the methods and systems identify video content currently playing within a display environment provided by a playback device, and display an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content. Further, in some example embodiments, the methods and systems may perform an action (e.g., present supplemental content and/or information) in response to a selection of one or more of the user-selectable options.
1. A method, comprising: identifying video content currently playing within a display environment provided by a playback device; and displaying an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content. 2. The method of claim 1, wherein identifying video content currently playing within a display environment provided by a playback device includes identifying the video content by matching a fingerprint associated with the video content to one or more fingerprints associated with known video content; and wherein displaying an actionable element within the display environment provided by the playback device includes: determining a location within the display environment at which to display the actionable element based on information provided by the fingerprint associated with the video content; and displaying the actionable element at or proximate to the determined location. 3. The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element over at least a portion of a navigation element displayed by the playback device within the display environment. 4. The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element proximate to a navigation element displayed by the playback device within the display environment. 5. The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element at a location within the displayed environment that is selected by a viewer of the playback device. 6. 
The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element at a location within the displayed environment that is associated with a low contrast between pixels of the video content. 7. The method of claim 1, further comprising: identifying different video content currently playing within the display environment provided by the playback device; and modifying the displayed actionable element to include one or more user-selectable options to perform an action associated with the identified different video content. 8. The method of claim 1, further comprising: identifying different video content currently playing within the display environment provided by the playback device; and modifying a configuration of the displayed actionable element to a configuration that is associated with the identified different video content. 9. The method of claim 1, further comprising: receiving a selection of one of the one or more user-selectable options of the actionable element; and performing an action associated with the selected option. 10. The method of claim 1, further comprising: receiving a selection of one of the one or more user-selectable options of the actionable element that is associated with retrieving supplemental content from a web-based resource that is related to the video content currently playing via the playback device; and presenting the supplemental content within the display environment. 11. The method of claim 1, further comprising: receiving a selection of one of the one or more user-selectable options of the actionable element that is associated with providing information about the video content currently playing via the playback device; and presenting the information along with the video content within the display environment. 12. 
The method of claim 1, wherein the displayed actionable element includes one or more user-selectable options to perform an action associated with the identified video content and includes one or more user-selectable buttons configured to facilitate navigation of content provided within the display environment. 13. A system, comprising: a content identification module that is configured to identify video content currently playing within a display environment provided by a playback device; an element display module that is configured to display an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content; and an action module that is configured to receive a selection of one of the one or more user-selectable options of the actionable element and perform an action associated with the selected option. 14. The system of claim 13, wherein the element display module is configured to render the actionable element in a configuration that is based on one or more characteristics of the identified video content. 15. A computer-readable storage medium whose contents, when executed by a computing system, cause the computing system to perform operations comprising: identifying video content currently playing within a display environment provided by a playback device; displaying an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content; receiving a selection of one of the one or more user-selectable options of the actionable element; and performing an action associated with the selected option. 16. 
The computer-readable storage medium of claim 15, wherein identifying video content currently playing within a display environment provided by a playback device includes identifying the video content by matching a fingerprint associated with the video content to one or more fingerprints associated with known video content; and wherein displaying an actionable element within the display environment provided by the playback device includes: determining a location within the display environment at which to display the actionable element based on information provided by the fingerprint associated with the video content; and displaying the actionable element at or proximate to the determined location. 17. The computer-readable storage medium of claim 15, wherein performing an action associated with the selected option includes presenting supplemental video content having metadata that is similar to metadata for the video content currently playing within the display environment. 18. The computer-readable storage medium of claim 15, wherein performing an action associated with the selected option includes presenting informational content having metadata that is similar to metadata for the video content currently playing within the display environment. 19. The computer-readable storage medium of claim 15, wherein the video content currently playing within the display environment is a television show and the playback device is a television; and wherein performing an action associated with the selected option includes presenting video content retrieved from a web-based resource that is associated with the television show. 20. The computer-readable storage medium of claim 15, wherein the video content currently playing within the display environment is associated with a live sports event and the playback device is a television; and wherein performing an action associated with the selected option includes presenting statistical information related to the live sports event.
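Claims 2 and 16 identify the playing content by matching a fingerprint of the video against fingerprints of known content. A minimal sketch of that lookup, assuming 16-bit integer fingerprints compared by Hamming distance (real fingerprints are derived from audio or video features and are far larger), might look like this; the names, example titles, and the distance threshold are illustrative, not from the patent.

```python
# Fingerprint lookup sketch: the fingerprint sampled from the playing video
# is compared against fingerprints of known content, and the closest match
# under a threshold identifies the content.
def hamming(a, b):
    # Number of differing bits between two integer fingerprints.
    return bin(a ^ b).count("1")

known_content = {
    0b1011001011110000: "Cooking Show, S2E4",
    0b0100110100001111: "Live Football Match",
}

def identify(fingerprint, threshold=4):
    best = min(known_content, key=lambda k: hamming(k, fingerprint))
    if hamming(best, fingerprint) <= threshold:
        return known_content[best]
    return None  # unknown content: no actionable element is displayed

# A fingerprint captured from the display environment, off by two bits
# from the stored "Live Football Match" fingerprint.
sample = 0b0100110100001100
print(identify(sample))  # Live Football Match
```

Once the content is identified, the fingerprint metadata can also carry the placement hint that claims 2 and 16 mention for positioning the actionable element.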
Example methods and systems for displaying actionable elements over playing content, such as video content, are described. In some example embodiments, the methods and systems identify video content currently playing within a display environment provided by a playback device, and display an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content. Further, in some example embodiments, the methods and systems may perform an action (e.g., present supplemental content and/or information) in response to a selection of one or more of the user-selectable options.1. A method, comprising: identifying video content currently playing within a display environment provided by a playback device; and displaying an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content. 2. The method of claim 1, wherein identifying video content currently playing within a display environment provided by a playback device includes identifying the video content by matching a fingerprint associated with the video content to one or more fingerprints associated with known video content; and wherein displaying an actionable element within the display environment provided by the playback device includes: determining a location within the display environment at which to display the actionable element based on information provided by the fingerprint associated with the video content; and displaying the actionable element at or proximate to the determined location. 3. 
The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element over at least a portion of a navigation element displayed by the playback device within the display environment. 4. The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element proximate to a navigation element displayed by the playback device within the display environment. 5. The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element at a location within the displayed environment that is selected by a viewer of the playback device. 6. The method of claim 1, wherein displaying an actionable element within the display environment provided by the playback device includes displaying the actionable element at a location within the displayed environment that is associated with a low contrast between pixels of the video content. 7. The method of claim 1, further comprising: identifying different video content currently playing within the display environment provided by the playback device; and modifying the displayed actionable element to include one or more user-selectable options to perform an action associated with the identified different video content. 8. The method of claim 1, further comprising: identifying different video content currently playing within the display environment provided by the playback device; and modifying a configuration of the displayed actionable element to a configuration that is associated with the identified different video content. 9. The method of claim 1, further comprising: receiving a selection of one of the one or more user-selectable options of the actionable element; and performing an action associated with the selected option. 10. 
The method of claim 1, further comprising: receiving a selection of one of the one or more user-selectable options of the actionable element that is associated with retrieving supplemental content from a web-based resource that is related to the video content currently playing via the playback device; and presenting the supplemental content within the display environment. 11. The method of claim 1, further comprising: receiving a selection of one of the one or more user-selectable options of the actionable element that is associated with providing information about the video content currently playing via the playback device; and presenting the information along with the video content within the display environment. 12. The method of claim 1, wherein the displayed actionable element includes one or more user-selectable options to perform an action associated with the identified video content and includes one or more user-selectable buttons configured to facilitate navigation of content provided within the display environment. 13. A system, comprising: a content identification module that is configured to identify video content currently playing within a display environment provided by a playback device; an element display module that is configured to display an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content; and an action module that is configured to receive a selection of one of the one or more user-selectable options of the actionable element and perform an action associated with the selected option. 14. The system of claim 13, wherein the element display module is configured to render the actionable element in a configuration that is based on one or more characteristics of the identified video content. 15. 
A computer-readable storage medium whose contents, when executed by a computing system, cause the computing system to perform operations comprising: identifying video content currently playing within a display environment provided by a playback device; displaying an actionable element within the display environment provided by the playback device that is based on the identified video content and includes one or more user-selectable options to perform an action associated with the identified video content; receiving a selection of one of the one or more user-selectable options of the actionable element; and performing an action associated with the selected option. 16. The computer-readable storage medium of claim 15, wherein identifying video content currently playing within a display environment provided by a playback device includes identifying the video content by matching a fingerprint associated with the video content to one or more fingerprints associated with known video content; and wherein displaying an actionable element within the display environment provided by the playback device includes: determining a location within the display environment at which to display the actionable element based on information provided by the fingerprint associated with the video content; and displaying the actionable element at or proximate to the determined location. 17. The computer-readable storage medium of claim 15, wherein performing an action associated with the selected option includes presenting supplemental video content having metadata that is similar to metadata for the video content currently playing within the display environment. 18. The computer-readable storage medium of claim 15, wherein performing an action associated with the selected option includes presenting informational content having metadata that is similar to metadata for the video content currently playing within the display environment. 19. 
The computer-readable storage medium of claim 15, wherein the video content currently playing within the display environment is a television show and the playback device is a television; and wherein performing an action associated with the selected option includes presenting video content retrieved from a web-based resource that is associated with the television show. 20. The computer-readable storage medium of claim 15, wherein the video content currently playing within the display environment is associated with a live sports event and the playback device is a television; and wherein performing an action associated with the selected option includes presenting statistical information related to the live sports event.
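Claim 6 displays the actionable element at a location associated with low contrast between pixels of the video content. One way such a location could be chosen, sketched here under the assumption of a small grayscale frame tiled into a grid, is to pick the tile with the lowest pixel variance; the tiling and the variance measure are illustrative assumptions, not the patent's method.

```python
# Low-contrast placement sketch: tile the frame and position the actionable
# element over the tile whose pixels vary the least, so it obscures the
# least visual detail.
def contrast(cell):
    # Pixel variance of a rectangular cell (list of rows of gray values).
    flat = [p for row in cell for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def lowest_contrast_tile(frame, tiles=2):
    h, w = len(frame), len(frame[0])
    th, tw = h // tiles, w // tiles
    best, best_pos = None, None
    for ty in range(tiles):
        for tx in range(tiles):
            cell = [row[tx * tw:(tx + 1) * tw]
                    for row in frame[ty * th:(ty + 1) * th]]
            c = contrast(cell)
            if best is None or c < best:
                best, best_pos = c, (ty, tx)
    return best_pos  # (tile row, tile column) for the overlay

# Busy detail on the left and top; a nearly flat dark region bottom-right.
frame = [
    [10, 200, 30, 250, 5, 5],
    [240, 20, 210, 15, 5, 5],
    [30, 220, 40, 5, 5, 5],
    [250, 10, 200, 5, 5, 6],
]
print(lowest_contrast_tile(frame))  # (1, 1)
```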
2,400
7,474
7,474
14,341,323
2,431
Network management of a telecommunications network. An external system, such as a cloud computing environment, receives network element data from the network management system of the telecommunications network over a channel that may be encrypted. The network element data are parameter samples that the network management system has collected from one or more network elements within the telecommunications network. The external system then processes at least some of the received network element data. The external system might also receive network element data from other network management systems of other telecommunications networks. Furthermore, the external system might also have external information not received from the network management system. The external system may perform processing on all of this information in conjunction with the received network element data in order to perform sophisticated analytics.
1. A method for processing network management data in an external system that is outside of a network management system, the method comprising: an act of receiving network element data from a network management system over a channel into the external system, the network element data being parameter samples taken from one or more network elements that are within a telecommunications network that reports to the network management system; and an act of processing at least some of the received network element data in the external system. 2. The method in accordance with claim 1, further comprising: an act of receiving external information from external to the network management system; and an act of processing at least some of the received external information, the act of processing at least some of the received external information involves at least some of the act of processing at least some of the received network element data. 3. The method in accordance with claim 2, the received external information comprising manufacturer data for at least one network element having at least one reported parameter value within the received network element data. 4. The method in accordance with claim 2, the received external information comprising environmental data. 5. The method in accordance with claim 2, the received external information comprising positional data. 6. The method in accordance with claim 1, at least some of the plurality of network elements being optical network elements of an optical network. 7. The method in accordance with claim 1, the external system being a cloud computing environment. 8. The method in accordance with claim 1, the act of processing at least some of the received network element data comprising an act of storing the at least some of the received network element data. 9. 
The method in accordance with claim 1, the network management system being a first network management system, the telecommunications network being a first telecommunications network, and network element data being first network element data, and the one or more network elements being a first set of one or more network elements, the method further comprising: an act of receiving second network element data from a second network management system into the external system, the second network element data being parameter samples taken from a second set of one or more network elements that are within a second telecommunications network that reports to the second network management system; and an act of processing at least some of the received second network element data in the external system, wherein at least some of the act of processing at least some of the received second network element data involves at least some of the act of processing at least some of the received first network element data. 10. The method in accordance with claim 1, further comprising: an act of receiving an authentication request purported to be from an authorized administrator of the telecommunications network; an act of authenticating the authorized administrator as the issuer of the authentication request; and in response to the authentication request, permitting the authenticated authorized administrator access to at least one result from the act of processing at least some of the received network element data. 11. The method in accordance with claim 10, the act of processing comprising an act of using the at least some of the received network element data in order to identify trends in parameters measured within the received network element data. 12. 
A system comprising: a communication module configured to receive network element data from at least one network management system over respective channels, each of the at least one network management system being associated with a corresponding telecommunications network that has at least one network element that reports parameter samples to the corresponding network management system, the network element data for each of the at least one network management system includes at least a processed version of the reported parameter samples reported to that corresponding network management system; and a processing module configured to process at least some of the received network element data. 13. The system in accordance with claim 12, wherein for at least one channel for a corresponding network management system, the channel is encrypted and does not permit communications to go from the system to any network element within the associated telecommunications network. 14. The system in accordance with claim 12, the processing module further configured to process external information in association with at least some of the processing of at least some of the received network element data. 15. The system in accordance with claim 14, the external information comprising at least one of 1) manufacturer data for at least one of the network elements having at least one reported parameter value within the received network element data, 2) environmental data, and 3) positional data. 16. The system in accordance with claim 12, the system operating within a cloud computing environment. 17. The system in accordance with claim 12, further comprising: an authentication module configured to authenticate authorized administrators for a particular telecommunications network, and permit access to at least one result of the act of processing at least some of the received network element data corresponding to the particular telecommunications network. 18. 
The system in accordance with claim 12, further comprising: an encrypted channel corresponding to a respective network management system, the encrypted channel comprising a security appliance on a side of the encrypted channel proximate the respective network management system. 19. A computer program product comprising one or more computer-readable storage media having thereon computer-executable instructions that are structured such that, when executed by one or more processors of a computing system, cause the computing system to perform a method for processing network management data in an external system that is outside of a network management system, the method comprising: an act of processing network element data received from a network management system, the network element data being parameter samples taken from one or more network elements that are within a telecommunications network that reports to the network management system. 20. The computer program product in accordance with claim 19, the network element data being encrypted, the method further comprising: an act of decrypting the network element data received from the network management system, the network element data received over an encrypted channel.
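Claim 11 has the external system use the received parameter samples to identify trends in measured parameters. A minimal sketch of such trend detection, assuming time-stamped samples and an ordinary least-squares slope (the sample data and the drift threshold are invented for illustration, not values from the patent):

```python
# Trend identification sketch: fit a least-squares line to parameter samples
# reported for a network element and flag a sustained downward drift.
def slope(samples):
    # samples: list of (time, value) pairs; ordinary least-squares slope.
    n = len(samples)
    sx = sum(t for t, _ in samples)
    sy = sum(v for _, v in samples)
    sxx = sum(t * t for t, _ in samples)
    sxy = sum(t * v for t, v in samples)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Optical receive power (dBm) drifting downward over five polling intervals,
# as might be reported by an optical network element (claim 6).
rx_power = [(0, -10.0), (1, -10.2), (2, -10.4), (3, -10.6), (4, -10.8)]

trend = slope(rx_power)
print(round(trend, 3))  # -0.2
degrading = trend < -0.1  # assumed threshold for flagging a degradation
```

Combining slopes like this across many elements, networks, and the external information of claims 3 through 5 is where the "sophisticated analytics" of the abstract would come in.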
Network management of a telecommunications network. An external system, such as a cloud computing environment, receives network element data from the network management system of the telecommunications network over a channel that may be encrypted. The network element data are parameter samples that the network management system has collected from one or more network elements within the telecommunications network. The external system then processes at least some of the received network element data. The external system might also receive network element data from other network management systems of other telecommunications networks. Furthermore, the external system might also have external information not received from the network management system. The external system may perform processing on all of this information in conjunction with the received network element data in order to perform sophisticated analytics.1. A method for processing network management data in an external system that is outside of a network management system, the method comprising: an act of receiving network element data from a network management system over a channel into the external system, the network element data being parameter samples taken from one or more network elements that are within a telecommunications network that reports to the network management system; and an act of processing at least some of the received network element data in the external system. 2. The method in accordance with claim 1, further comprising: an act of receiving external information from external to the network management system; and an act of processing at least some of the received external information, the act of processing at least some of the received external information involves at least some of the act of processing at least some of the received network element data. 3. 
The method in accordance with claim 2, the received external information comprising manufacturer data for at least one network element having at least one reported parameter value within the received network element data. 4. The method in accordance with claim 2, the received external information comprising environmental data. 5. The method in accordance with claim 2, the received external information comprising positional data. 6. The method in accordance with claim 1, at least some of the plurality of network elements being optical network elements of an optical network. 7. The method in accordance with claim 1, the external system being a cloud computing environment. 8. The method in accordance with claim 1, the act of processing at least some of the received network element data comprising an act of storing the at least some of the received network element data. 9. The method in accordance with claim 1, the network management system being a first network management system, the telecommunications network being a first telecommunications network, and network element data being first network element data, and the one or more network elements being a first set of one or more network elements, the method further comprising: an act of receiving second network element data from a second network management system into the external system, the second network element data being parameter samples taken from a second set of one or more network elements that are within a second telecommunications network that reports to the second network management system; and an act of processing at least some of the received second network element data in the external system, wherein at least some of the act of processing at least some of the received second network element data involves at least some of the act of processing at least some of the received first network element data. 10. 
The method in accordance with claim 1, further comprising: an act of receiving an authentication request purported to be from an authorized administrator of the telecommunications network; an act of authenticating the authorized administrator as the issuer of the authentication request; and in response to the authentication request, permitting the authenticated authorized administrator access to at least one result from the act of processing at least some of the received network element data. 11. The method in accordance with claim 10, the act of processing comprising an act of using the at least some of the received network element data in order to identify trends in parameters measured within the received network element data. 12. A system comprising: a communication module configured to receive network element data from at least one network management system over respective channels, each of the at least one network management system being associated with a corresponding telecommunications network that has at least one network element that reports parameter samples to the corresponding network management system, the network element data for each of the at least one network management system includes at least a processed version of the reported parameter samples reported to that corresponding network management system; and a processing module configured to process at least some of the received network element data. 13. The system in accordance with claim 12, wherein for at least one of the channels for a corresponding network management system, the channel is encrypted and does not permit communications to go from the system to any network element within the associated telecommunications network. 14. The system in accordance with claim 12, the processing module further configured to process external information in association with at least some of the processing of at least some of the received network element data. 15. 
The system in accordance with claim 14, the external information comprising at least one of 1) manufacturer data for at least one of the network elements having at least one reported parameter value within the received network element data, 2) environmental data, and 3) positional data. 16. The system in accordance with claim 12, the system operating within a cloud computing environment. 17. The system in accordance with claim 12, further comprising: an authentication module configured to authenticate authorized administrators for a particular telecommunications network, and permit access to at least one result of the act of processing at least some of the received network element data corresponding to the particular telecommunications network. 18. The system in accordance with claim 12, further comprising: an encrypted channel corresponding to a respective network management system, the encrypted channel comprising a security appliance on a side of the encrypted channel proximate the respective network management system. 19. A computer program product comprising one or more computer-readable storage media having thereon computer-executable instructions that are structured such that, when executed by one or more processors of a computing system, cause the computing system to perform a method for processing network management data in an external system that is outside of a network management system, the method comprising: an act of processing network element data received from a network management system, the network element data being parameter samples taken from one or more network elements that are within a telecommunications network that reports to the network management system. 20. The computer program product in accordance with claim 19, the network element data being encrypted, the method further comprising: an act of decrypting the network element data received from the network management system, the network element data received over an encrypted channel.
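Claim 11 above recites using the received network element data to identify trends in parameters measured within that data. As a hedged illustration of what such trend identification could look like (the function name, sample values, and least-squares choice are assumptions for this sketch, not taken from the patent), a slope fitted over time-stamped parameter samples quantifies a drift:

```python
# Minimal sketch (illustrative names only): estimate a linear trend in
# parameter samples reported by a network element, as contemplated by claim 11.

def trend_slope(samples):
    """Least-squares slope over (time, value) parameter samples."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

# Hypothetical optical power readings drifting downward over time.
samples = [(0, -3.0), (1, -3.2), (2, -3.4), (3, -3.6)]
slope = trend_slope(samples)  # negative slope indicates a downward drift
```

A negative slope over, say, received optical power could flag a degrading link; the external system could run this across samples aggregated from many network management systems.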
2,400
7,475
7,475
13,477,643
2,457
A system is configured to receive, by a first server, a request, from a user device, for a first record stored by a cache associated with the first server, determine a first timestamp associated with the first record, determine that the first record is invalid based on the first timestamp, and determine, based on determining that the first record is invalid, whether the first record is out of date with respect to a corresponding second record stored by a second server by comparing a second timestamp of the first record with a timestamp of the second record. The system is further configured to update the first record with information from the second record to form an updated first record when the first record is out of date, and to send the updated first record to the user device associated with the request.
1. A method comprising: receiving, by a first server, a request, from a user device, for a first record stored by a cache associated with the first server; determining, by the first server, a first timestamp associated with the first record; determining, by the first server, that the first record is invalid based on the first timestamp; determining, by the first server and based on determining that the first record is invalid, whether the first record is out of date with respect to a corresponding second record stored by a second server by comparing a second timestamp of the first record with a timestamp of the second record; updating, by the first server, the first record with information from the second record to form an updated first record when the first record is out of date; and sending the updated first record to the user device associated with the request. 2. The method of claim 1, where determining whether the first record is out of date with respect to the corresponding second record includes: identifying that the first record is up to date based on comparing the second timestamp of the first record with the timestamp of the second record; and updating the first timestamp associated with the first record based on identifying that the first record is up to date, the first timestamp being updated based on a time value associated with the first record. 3. The method of claim 2, where the cache stores a plurality of records and the time value is defined per record for each of the plurality of records. 4. The method of claim 1, where determining whether the first record is out of date with respect to the corresponding second record is performed independent of the request. 5. 
The method of claim 1, where updating the first record with information from the second record when the first record is out of date includes: replacing the out-of-date first record with a copy of the second record to form the updated first record, the updated first record including a third timestamp corresponding to a time when the updated first record is formed, the first server being capable of using the third timestamp to validate the updated first record. 6. The method of claim 1, further comprising: receiving a request for a third record, the request for the first record further including the request for the third record and a third timestamp associated with the third record; determining that the third record is invalid based on the third timestamp; determining whether the third record is out of date with respect to a corresponding fourth record stored by a third server, based on determining that the third record is invalid; updating the third record with information from the fourth record to form an updated third record when the third record is out of date; and aggregating the updated first record and the updated third record to form an aggregated record, where sending the updated first record to the user device includes sending the aggregated record. 7. The method of claim 6, where the second server is associated with a different party than the third server. 8. 
A system comprising: one or more devices to: receive a request, from a user device, for a first record stored by a cache associated with the one or more devices; determine a first timestamp associated with the first record; determine that the first record is invalid based on the first timestamp; determine, based on determining that the first record is invalid, whether the first record is out of date with respect to a corresponding second record stored by a server group by comparing a second timestamp of the first record with a timestamp of the second record, the server group including a plurality of servers associated with different parties; update the first record with information from the second record to form an updated first record when the first record is out of date; and send the updated first record to the user device associated with the request. 9. The system of claim 8, where when determining whether the first record is out of date with respect to the corresponding second record, the one or more devices are further to: identify that the first record is up to date based on comparing the second timestamp of the first record with the timestamp of the second record; and update the first timestamp associated with the first record based on identifying that the first record is up to date, the first timestamp being updated based on a time value associated with the first record. 10. The system of claim 9, where the cache stores a plurality of records and the time value is defined per record for each of the plurality of records. 11. The system of claim 8, where the one or more devices are to determine whether the first record is out of date with respect to the second record independent of the request. 12. 
The system of claim 8, where when updating the first record with information from the second record, the one or more devices are further to: replace the out-of-date first record with a copy of the corresponding second record to form the updated first record, the updated first record including a third timestamp corresponding to the timestamp associated with the second record, and use the third timestamp to validate the updated first record. 13. The system of claim 8, where the one or more devices are further to: receive a request for a third record, the request for the first record further including the request for the third record and a third timestamp associated with the third record; determine that the third record is invalid based on the third timestamp; determine that the third record is out of date with respect to a corresponding fourth record stored by the server group, based on determining that the third record is invalid; update the third record with information from the fourth record to form an updated third record when the third record is out of date; and aggregate the updated first record and the updated third record to form an aggregated record, where sending the updated first record to the user device includes sending the aggregated record. 14. The system of claim 13, where the server group includes a first server and a second server, the first server storing the second record, and the second server storing the third record. 15. 
A computer-readable medium comprising: a plurality of instructions which, when executed by one or more processors of a first server, cause the one or more processors to: receive a request, from a user device, for a first record stored by a cache associated with the first server; determine a first timestamp associated with the first record; determine that the first record is invalid based on the first timestamp; determine, based on determining that the first record is invalid, whether the first record is out of date with respect to a corresponding second record stored by a server group by comparing a second timestamp of the first record with a timestamp of the second record, the server group including a plurality of servers; update the first record with information from the second record to form an updated first record when the first record is out of date; and send the updated first record to the user device associated with the request. 16. The computer-readable medium of claim 15, where one or more instructions, of the plurality of instructions to determine whether the first record is out of date with respect to the second record include: one or more instructions to: identify that the first record is up to date based on comparing the second timestamp of the first record with the timestamp of the second record; and update the first timestamp associated with the first record based on identifying that the first record is up to date, the first timestamp being updated based on a time value associated with the first record. 17. The computer-readable medium of claim 16, where the cache stores a plurality of records and the time value is defined per record for each of the plurality of records. 18. The computer-readable medium of claim 15, where one or more instructions, of the plurality of instructions, to determine whether the first record is out of date with respect to the corresponding second record are executed independent of the request. 19. 
The computer-readable medium of claim 15, where when updating the first record with information from the second record when the first record is out of date, the plurality of instructions further cause the one or more processors to: replace the out-of-date first record with a copy of the corresponding second record to form an updated first record, the updated first record including a third timestamp corresponding to the timestamp associated with the second record, the first server being capable of using the third timestamp to validate the updated first record. 20. The computer-readable medium of claim 15, where the plurality of instructions further cause the one or more processors to: receive a request for a third record, where the request for the first record further includes the request for the third record and a third timestamp associated with the third record; determine that the third record is invalid based on the third timestamp; determine that the third record is up to date with respect to a corresponding fourth record stored by the server group, based on determining that the third record is invalid; update the third timestamp associated with the third record to form an updated third record based on determining that the third record is up to date; and aggregate the updated first record and the updated third record to form an aggregated record, where one or more instructions, of the plurality of instructions, to send the updated first record to the user device include one or more instructions to send the aggregated record. 21. 
The computer-readable medium of claim 15, where the plurality of instructions further cause the one or more processors to: receive a request for a third record, where the request for the first record further includes the request for the third record and a third timestamp associated with the third record; determine that the third record is valid based on the third timestamp; and aggregate the updated first record and the third record to form an aggregated record, where one or more instructions, of the plurality of instructions, to send the updated first record to the user device include one or more instructions to send the aggregated record.
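The independent claims describe a timestamp-driven cache: a first timestamp records when the cached copy was last validated, a per-record time value bounds how long that validation holds (claim 3), and a second timestamp is compared against the origin server's copy when revalidation is needed. The following is a hedged sketch of that flow with invented names and no network I/O (the "second server" is simply another in-memory record), not the patented implementation:

```python
class CachedRecord:
    def __init__(self, data, origin_ts, checked_ts, ttl):
        self.data = data
        self.origin_ts = origin_ts    # second timestamp: when the origin copy was produced
        self.checked_ts = checked_ts  # first timestamp: when validity was last confirmed
        self.ttl = ttl                # per-record time value (claim 3)

def fetch(cache_rec, origin_rec, now):
    """Return record data, refreshing from the origin record when out of date."""
    if now - cache_rec.checked_ts <= cache_rec.ttl:
        return cache_rec.data                       # still valid; serve from cache
    if cache_rec.origin_ts < origin_rec.origin_ts:  # out of date vs. origin copy
        cache_rec.data = origin_rec.data            # replace with origin copy (claim 5)
        cache_rec.origin_ts = origin_rec.origin_ts
    cache_rec.checked_ts = now                      # revalidated either way (claim 2)
    return cache_rec.data
```

Within the TTL the cached data is returned untouched; after expiry the copy is either refreshed from the origin or merely re-stamped as validated, matching the two branches the claims distinguish.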
2,400
7,476
7,476
15,040,688
2,483
An imaging system includes an imaging scope, a camera, an image processor, and a system controller. The imaging scope is configured to illuminate an object and capture light reflected from the object. The camera has a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light. The image processor is configured to receive the digital image from the camera, and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary. The system controller is configured to receive the boundary data from the image processor, and use the boundary data to select and/or adjust a setting of the imaging system.
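Claim 5 below spells out the RANSAC variant: repeatedly fit a circle to a random three-point subset of detected boundary points, count how many points lie near that candidate circle, and keep the candidate with the most inliers. The sketch below illustrates those steps (a)-(d); the iteration count, tolerance, and all names are assumptions for illustration, and the three-point solve is the standard circumcircle formula rather than anything specified in the patent:

```python
import random

def circle_from_3(p1, p2, p3):
    """Circumcircle (center, radius) of three non-collinear points."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5

def ransac_circle(points, iters=200, tol=2.0, seed=0):
    """Steps (a)-(d) of claim 5: keep the candidate circle with most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = rng.sample(points, 3)             # (a) random 3-point subset
        try:
            (ux, uy), r = circle_from_3(*sample)
        except ZeroDivisionError:                  # collinear subset: no circle
            continue
        inliers = sum(                             # (b) points lying near the fit
            abs(((x - ux) ** 2 + (y - uy) ** 2) ** 0.5 - r) < tol
            for x, y in points)
        if inliers > best_inliers:                 # (d) retain the best fit so far
            best, best_inliers = ((ux, uy), r), inliers
    return best
```

Run against boundary points that mostly lie on the active-area circle plus a few spurious detections, the spurious points are simply outvoted, which is why the patent favors this over a single least-squares fit; the median-error variant of claim 7 and the Hough accumulator of claim 10 are alternative ways to achieve the same robustness.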
1. An imaging system, comprising: an imaging scope configured to illuminate an object and capture light reflected from the object; a camera having a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light; an image processor configured to receive the digital image from the camera, and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary; and a system controller configured to receive the boundary data from the image processor, and use the boundary data to select and/or adjust a setting of the imaging system. 2. The imaging system of claim 1, wherein the image processor includes a grayscale converter configured to convert the digital image to a grayscale digital image; wherein the image processor includes a boundary point detector configured to filter the grayscale digital image to detect boundary points within the grayscale digital image, each of the boundary points corresponding to at least one pixel of the grayscale digital image that might possibly form a portion of the boundary. 3. The imaging system of claim 2, wherein the image processor includes a boundary point thinner configured to perform a non-maximal suppression on the boundary points, and to eliminate any boundary points that are unlikely to correspond to an area of transition between the active portion and the inactive portion of the digital image. 4. The imaging system of claim 2, wherein the image processor includes a boundary identifier configured to fit a curve to the boundary points to thereby identify the boundary and generate the boundary data. 5. 
The imaging system of claim 4, wherein the image processor is configured to use a RANSAC technique to identify the boundary and generate the boundary data, the RANSAC technique including: (a) randomly selecting a boundary point subset having at least three of the boundary points and determining a fit for the boundary point subset; (b) checking the fit against all of the boundary points to determine an inlier number for the fit, the inlier number being the number of all of the boundary points that lie within the fit; (c) repeating steps (a) and (b) a plurality of times to determine a fit and an inlier number for a plurality of boundary point subsets; and (d) selecting a best fit, the best fit being the fit having the inlier number with the greatest magnitude. 6. The imaging system of claim 5, wherein the fit for the boundary point subset is determined using at least one of a sum of squared error metric, a sum of absolute error metric, and a maximum absolute error metric. 7. The imaging system of claim 4, wherein the image processor is configured to use a RANSAC technique to identify the boundary and generate the boundary data, the RANSAC technique including: (a) randomly selecting a boundary point subset having at least three of the boundary points and determining a fit for the boundary point subset; (b) repeating step (a) a plurality of times to determine a respective fit for each of a plurality of boundary point subsets; (c) for each of the respective fits determined in steps (a) and (b), comparing the respective fit with all of the boundary points to determine a median error measurement for the respective fit; and (d) selecting a best fit, the best fit being the respective fit having the lowest median error measurement. 8. The imaging system of claim 7, wherein the fit for the boundary point subset is determined using at least one of a sum of squared error metric, a sum of absolute error metric, and a maximum absolute error metric. 9. 
The imaging system of claim 2, wherein the image processor includes a boundary identifier configured to fit a curve to the boundary points using a best fit of boundary points detected from a previously-analyzed digital image. 10. The imaging system of claim 4, wherein the image processor is configured to use a Hough Transform technique to identify the boundary and generate the boundary data, the Hough Transform including: (a) providing an array of index points, each index point corresponding to a radius and center coordinates of a candidate fit for the boundary points; (b) initializing all of the index points with zeros; (c) for each of the boundary points, incrementing a count of each of the index points corresponding to candidate fits that include the respective boundary point; and (d) selecting a best fit, the best fit being the candidate fit corresponding to the index point with a count that is greatest in magnitude. 11. The imaging system of claim 4, wherein the image processor includes an error and reasonableness detector configured to receive the boundary data and the curve from the boundary identifier and determine whether or not a number of the boundary points lying within the curve satisfies a predetermined confidence measure. 12. The imaging system of claim 4, wherein the image processor includes an error and reasonableness detector configured to receive the boundary data and the curve and determine whether the curve is reasonable in view of a displayable area of the digital image. 13. The imaging system of claim 1, wherein the system controller is configured to use the boundary data to automatically select at least one of an exposure setting and a gain setting of the camera. 14. The imaging system of claim 1, wherein the system controller is configured to use the boundary data to automatically reduce an importance of pixel intensities that correspond to the inactive portion of the digital image. 15. 
The imaging system of claim 1, wherein the system controller is configured to use the boundary data to filter noise in the inactive portion of the digital image. 16. The imaging system of claim 1, wherein the system controller is configured to use the boundary data to at least one of (i) automatically adjust sharpness of the digital image, (ii) automatically perform digital zooming on the digital image, (iii) automatically re-center the digital image on a monitor and (iv) improve performance of a tone mapping technique performed by the imaging system. 17. The imaging system of claim 1, wherein the active portion of the digital image corresponds to an interior portion of the light-sensitive surface that was occupied by the captured light, and the inactive portion of the digital image corresponds to a peripheral portion of the light-sensitive surface that was not occupied by the captured light. 18. The imaging system of claim 1, wherein the characteristic of the boundary is at least one of a center, a radius, a position, a size, and a shape of the boundary. 19. The imaging system of claim 1, wherein the imaging scope includes an image transmission device that transmits the captured light from an objective lens located proximate a distal end of the imaging scope to a proximal end of the imaging scope, wherein the camera is disposed relative to the proximal end of the imaging scope. 20. The imaging system of claim 19, wherein the image transmission device transmits the captured light therethrough in the form of a captured light beam having a cross-sectional shape that is at least partially circular; and wherein the light-sensitive surface of the light sensor has a rectangular shape. 21. 
The imaging system of claim 1, wherein the camera is releasably connected to the imaging scope; and wherein the imaging scope is a first type of imaging scope, and the camera is configured to be releasably connected to a second type of imaging scope, the second type being different than the first type. 22. The imaging system of claim 1, wherein the camera is a video camera, and the digital image generated by the light sensor represents one of a plurality of time-sequenced frames of a digital video. 23. The imaging system of claim 1, wherein the camera includes a zoom device configured to receive the captured light from the imaging scope before the captured light is received by the light sensor; wherein the zoom device is selectively adjustable between a low magnification configuration, in which the zoom device magnifies the captured light such that the captured light occupies only a portion of the light-sensitive surface when received thereon, and a high magnification configuration, in which the zoom device magnifies the captured light such that the captured light occupies all of the light-sensitive surface of the light sensor when received thereon. 24. A method, comprising: receiving, by an image processor, a digital image generated by an imaging system; using, by the image processor, at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of the digital image; generating, by the image processor, boundary data indicative of a characteristic of the boundary; and automatically selecting and/or adjusting one or more settings of the imaging system based on the boundary data. 25. 
A computer-readable medium storing instructions, the instructions comprising: using at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to identify a boundary between an active portion and an inactive portion of a digital image generated using an imaging system; generating boundary data indicative of a characteristic of the boundary; automatically selecting and/or adjusting one or more settings of the imaging system based on the boundary data.
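The RANSAC procedure recited in claims 5-7 (randomly sample three boundary points, fit a circle, count inliers, repeat, keep the fit with the greatest inlier count) can be sketched in Python. This is a minimal illustration under stated assumptions, not the patent's implementation: the point representation, the `circle_from_3_points` helper, and the tolerance and iteration parameters are all illustrative choices.

```python
import math
import random

def circle_from_3_points(p1, p2, p3):
    """Solve for the center and radius of the circle through three points."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # collinear points define no circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.hypot(ax - ux, ay - uy))

def ransac_circle(points, iterations=200, tol=2.0, seed=0):
    """Steps (a)-(d) of claim 5: repeatedly fit a circle to a random
    3-point subset and keep the fit with the greatest inlier count."""
    rng = random.Random(seed)
    best_fit, best_inliers = None, -1
    for _ in range(iterations):
        fit = circle_from_3_points(*rng.sample(points, 3))
        if fit is None:
            continue
        ux, uy, r = fit
        # an inlier lies within tol of the candidate circle
        inliers = sum(1 for (x, y) in points
                      if abs(math.hypot(x - ux, y - uy) - r) <= tol)
        if inliers > best_inliers:
            best_fit, best_inliers = fit, inliers
    return best_fit, best_inliers
```

Claim 7's variant would replace the inlier count with a median error measurement over all boundary points; the loop structure is otherwise the same.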
An imaging system includes an imaging scope, a camera, an image processor, and a system controller. The imaging scope is configured to illuminate an object and capture light reflected from the object. The camera has a light sensor with a light-sensitive surface configured to receive the captured light from the imaging scope, and generate a digital image representative of the captured light. The image processor is configured to receive the digital image from the camera, and use at least one of a random sample consensus (RANSAC) technique and a Hough Transform technique to (i) identify a boundary between an active portion and an inactive portion of the digital image and (ii) generate boundary data indicative of a characteristic of the boundary. The system controller is configured to receive the boundary data from the image processor, and use the boundary data to select and/or adjust a setting of the imaging system.
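The Hough Transform voting scheme of claim 10 (an array of index points over candidate radii and center coordinates, initialized to zero, with each boundary point incrementing every candidate fit that includes it, and the highest count winning) can be sketched as a brute-force accumulator. The angular sampling step, the rounding to integer cells, and the sparse-dictionary layout are illustrative assumptions, not from the patent.

```python
import math
from collections import defaultdict

def hough_circle(points, r_range, angle_step=5):
    """Accumulate votes over (cx, cy, r) index points (claim 10).
    A sparse dict plays the role of the zero-initialized array:
    missing cells implicitly hold a count of zero."""
    acc = defaultdict(int)
    r_lo, r_hi = r_range
    for (x, y) in points:
        for r in range(r_lo, r_hi + 1):
            # candidate centers lie on a circle of radius r around the point
            for deg in range(0, 360, angle_step):
                t = math.radians(deg)
                cell = (round(x - r * math.cos(t)),
                        round(y - r * math.sin(t)), r)
                acc[cell] += 1
    # best fit: the index point whose count is greatest in magnitude
    return max(acc.items(), key=lambda kv: kv[1])
```

A production version would typically restrict the vote to gradient directions at each boundary point rather than sweeping all angles, which this sketch omits for clarity.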
2,400
7,477
7,477
11,872,030
2,419
A computer-implemented method of identifying a missing recipient of an electronic message can include identifying at least one user specified as a recipient of an electronic message and accessing a data store comprising measures of correlation between a plurality of users, wherein the plurality of users comprises the recipient of the electronic message. One or more users not designated as a recipient of the electronic message and having a measure of correlation, with at least one recipient of the electronic message, that exceeds a predetermined threshold can be identified as a potential missing recipient of the electronic message. An indication that a recipient may have been excluded from the electronic message can be output.
1. A computer-implemented method of identifying a missing recipient of an electronic message, the method comprising: determining measures of correlation between users from a plurality of electronic messages within an electronic messaging system; identifying at least one user designated as a recipient of an electronic message; identifying at least one user not designated as a recipient of the electronic message and having a measure of correlation, with the at least one recipient of the electronic message, that exceeds a predetermined threshold as a potential missing recipient of the electronic message; and outputting an indication that a recipient may have been excluded from the electronic message. 2. The computer-implemented method of claim 1, wherein outputting an indication comprises automatically suggesting that the potential missing recipient be added as a recipient of the electronic message. 3. The computer-implemented method of claim 1, further comprising presenting a list comprising each user identified as a potential missing recipient. 4. The computer-implemented method of claim 1, wherein outputting an indication comprises automatically adding the potential missing recipient as a recipient of the electronic message. 5. The computer-implemented method of claim 1, wherein identifying at least one user designated as a recipient comprises recursively expanding a distribution list specifying a plurality of recipients of an electronic message into constituent members of the distribution list. 6. The computer-implemented method of claim 1, wherein determining measures of correlation comprises determining frequency of co-occurrence of recipients within same electronic messages of the plurality of electronic messages. 7. 
The computer-implemented method of claim 1, wherein determining measures of correlation comprises determining frequency of co-occurrence of users specified as recipients or senders within same electronic messages of the plurality of electronic messages. 8. The computer-implemented method of claim 1, wherein determining measures of correlation comprises determining a distance between users within an organizational hierarchy. 9. The computer-implemented method of claim 8, wherein determining measures of correlation comprises determining that two users are within a same organizational sub-unit of an organizational hierarchy. 10. A computer-implemented method of identifying a missing recipient of an electronic message, the method comprising: identifying at least one user specified as a recipient of an electronic message; accessing a data store comprising measures of correlation between a plurality of users, wherein the plurality of users comprises the recipient of the electronic message; identifying at least one user not designated as a recipient of the electronic message and having a measure of correlation, with at least one recipient of the electronic message, that exceeds a predetermined threshold as a potential missing recipient of the electronic message; and outputting an indication that a recipient may have been excluded from the electronic message. 11. The computer-implemented method of claim 10, wherein outputting an indication comprises automatically suggesting that the potential missing recipient be added as a recipient of the electronic message. 12. The computer-implemented method of claim 10, further comprising presenting a list comprising each user identified as a potential missing recipient. 13. The computer-implemented method of claim 10, wherein outputting an indication comprises automatically adding the potential missing recipient as a recipient of the electronic message. 14. 
A computer program product comprising: a computer-usable medium comprising computer-usable program code that identifies a missing recipient of an electronic message, the computer-usable medium comprising: computer-usable program code that identifies at least one user specified as a recipient of an electronic message; computer-usable program code that accesses a data store comprising measures of correlation between a plurality of users, wherein the plurality of users comprises the recipient of the electronic message; computer-usable program code that identifies at least one user not designated as a recipient of the electronic message and having a measure of correlation, with at least one recipient of the electronic message, that exceeds a predetermined threshold as a potential missing recipient of the electronic message; and computer-usable program code that outputs an indication that a recipient may have been excluded from the electronic message. 15. The computer program product of claim 14, wherein the computer-usable program code that outputs an indication comprises computer-usable program code that automatically suggests that the potential missing recipient be added as a recipient of the electronic message. 16. The computer program product of claim 14, further comprising computer-usable program code that presents a list comprising each user identified as a potential missing recipient. 17. The computer program product of claim 14, wherein the computer-usable program code that outputs an indication comprises computer-usable program code that automatically adds the potential missing recipient as a recipient of the electronic message. 18. The computer program product of claim 14, wherein the measures of correlation depend upon frequency of co-occurrence of recipients within same electronic messages. 19. 
The computer program product of claim 14, wherein the measures of correlation depend upon frequency of co-occurrence of users specified as recipients or senders within same electronic messages. 20. The computer program product of claim 14, wherein the measures of correlation depend upon distances between users within an organizational hierarchy.
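The correlation-based detection described in claims 1, 6, and 10 can be sketched with co-occurrence counts as the measure of correlation: count how often two users appear as recipients of the same message, then flag any user absent from a new message whose correlation with a listed recipient exceeds the threshold. This is a minimal sketch; the data structures and function names are assumptions, and a real system would also weight sender co-occurrence and organizational distance (claims 7-9).

```python
from collections import defaultdict
from itertools import combinations

def build_correlations(messages):
    """Measure of correlation per claim 6: frequency of co-occurrence
    of two users as recipients of the same message."""
    cooc = defaultdict(int)
    for recipients in messages:
        for a, b in combinations(sorted(set(recipients)), 2):
            cooc[(a, b)] += 1  # keys are sorted pairs, order-independent
    return cooc

def missing_recipients(cooc, recipients, threshold):
    """Flag users not on the message whose correlation with any listed
    recipient exceeds the threshold (claims 1 and 10)."""
    everyone = {u for pair in cooc for u in pair}
    flagged = set()
    for u in everyone - set(recipients):
        for r in recipients:
            if cooc.get((min(u, r), max(u, r)), 0) > threshold:
                flagged.add(u)
                break
    return flagged
```

The flagged set would then drive the output step: suggesting the user (claims 2, 11), listing all candidates (claims 3, 12), or adding them automatically (claims 4, 13).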
2,400
7,478
7,478
15,221,608
2,463
In one exemplary aspect, an edge-gateway multipath method includes the step of providing an edge device in a local network communicatively coupled with a cloud-computing service in a cloud-computing network. A set of wide area network (WAN) links connected to the edge device is automatically detected. The WAN links are automatically measured without the need for an external router. The edge device is communicatively coupled with a central configuration point in the cloud-computing network. The method further includes the step of downloading, from the central configuration point, enterprise-specific configuration data into the edge device. The enterprise-specific configuration data includes gateway information. The edge device is communicatively coupled with a gateway in the cloud-computing network. The communicative coupling of the edge device with the gateway includes a multipath (MP) protocol.
1. A network-link method useful for a last-mile connectivity in an edge-gateway multipath comprising: identifying a network-traffic flow of a computer network using deep-packet inspection to determine an identity of an application type associated with the network-traffic flow; aggregating a bandwidth from a specified set of network links; intelligently load-balancing a traffic on the set of network links by sending successive packets belonging to a same traffic flow on a set of specified multiple-network links, wherein the set of specified multiple-network links is selected based on the identity of an application type associated with the network-traffic flow; identifying a set of active-network links in the set of specified multiple-network links; providing an in-order data delivery with an application persistence by sending data packets belonging to a same data-packet flow on the set of active links; and correcting an error on a lossy network link using an error-control mechanism for data transmission selectively based on the identified network-traffic flow and a current measured condition in the computer network. 2. The network-link method of claim 1, further comprising: identifying the network traffic using the deep-packet inspection to determine an identity of a specific application associated with the network traffic. 3. The network-link method of claim 2, wherein a network link comprises a communications channel that connects two or more communicating devices. 4. The network-link method of claim 3, wherein the step of intelligently load-balancing the traffic on the set of network links comprises using an application-aware intelligent network link characterization. 5. The network-link method of claim 2, wherein the error-control mechanism for data transmission comprises an Automatic Repeat-reQuest. 6. The network-link method of claim 5, wherein the error-control mechanism comprises a forward error correction (FEC). 7. 
The network-link method of claim 1, wherein the application type comprises a real-time application type, a transactional application type, or a bulk-file application type. 8. The network-link method of claim 7, wherein the network traffic is identified as a bulk file transfer network traffic, and wherein the bulk file transfer network traffic is set as a lowest priority traffic and uses a small portion of a network bandwidth. 9. The network-link method of claim 1, wherein the application type comprises a voice-application type, and wherein the forward-error correction is implemented as the error-control mechanism. 10. The network-link method of claim 1, wherein the application type comprises a social-network website browsing application type, and wherein the network traffic is switched to an Internet connection. 11. A computerized system comprising: a processor configured to execute instructions; a memory containing instructions that, when executed on the processor, cause the processor to perform operations that: identify a network-traffic flow of a computer network using deep-packet inspection to determine an identity of an application type associated with the network-traffic flow; aggregate a bandwidth from a specified set of network links; intelligently load-balance a traffic on the set of network links by sending successive packets belonging to a same traffic flow on a set of specified multiple-network links, wherein the set of specified multiple-network links is selected based on the identity of an application type associated with the network-traffic flow; identify a set of active-network links in the set of specified multiple-network links; provide an in-order data delivery with an application persistence by sending data packets belonging to a same data-packet flow on the set of active links; and correct an error on a lossy network link using an error-control mechanism for data transmission selectively based on the identified network-traffic flow and a current 
measured condition in the computer network. 12. The computerized system of claim 11, wherein a network link comprises a communications channel that connects two or more communicating devices. 13. The computerized system of claim 12, wherein the step of intelligently load-balancing the traffic on the set of network links comprises using an application-aware intelligent network link characterization. 14. The computerized system of claim 13, wherein the error-control mechanism for data transmission comprises an Automatic Repeat-reQuest. 15. The computerized system of claim 14, wherein the error-control mechanism comprises a forward error correction (FEC). 16. The computerized system of claim 15, wherein the application type comprises a real-time application type, a transactional application type, or a bulk-file application type. 17. The computerized system of claim 16, wherein the network traffic is identified as a bulk file transfer network traffic, and wherein the bulk file transfer network traffic is set as a lowest priority traffic and uses a small portion of a network bandwidth. 18. The computerized system of claim 17, wherein the application type comprises a voice-application type, and wherein the forward-error correction is implemented as the error-control mechanism. 19. The computerized system of claim 18, wherein the application type comprises a social-network website browsing application type, and wherein the network traffic is switched to an Internet connection.
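The link-selection and error-control logic of claims 1 and 5-10 can be sketched as follows. This is a minimal illustration, not the patented implementation: the flow classifier is a stand-in for real deep-packet inspection, and all names, thresholds, and payload signatures are assumptions.

```python
# Hypothetical sketch of application-aware multipath link selection:
# classify a flow by type, pick links per type, and choose FEC vs ARQ
# based on flow type and measured link loss (claims 1, 5-9).
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    loss_rate: float   # measured packet-loss fraction on this link
    active: bool

def classify_flow(payload: bytes) -> str:
    """Stand-in for DPI: map a payload signature to an application type."""
    if payload.startswith(b"RTP"):
        return "real-time"        # e.g. voice
    if payload.startswith(b"GET "):
        return "transactional"
    return "bulk-file"

def select_links(flow_type: str, links: list[Link]) -> list[Link]:
    """Pick active links for a flow; bulk transfers get lowest priority."""
    active = [l for l in links if l.active]
    if flow_type == "real-time":
        # voice traffic: single lowest-loss link, protected by FEC (claim 9)
        return sorted(active, key=lambda l: l.loss_rate)[:1]
    if flow_type == "bulk-file":
        # bulk transfer: confined to one link, i.e. a small portion of
        # the aggregate bandwidth (claim 8)
        return active[-1:]
    # transactional traffic: aggregate bandwidth across all active links
    return active

def error_control(flow_type: str, link: Link) -> str:
    """Choose the error-control mechanism selectively (claims 5-6, 9)."""
    if flow_type == "real-time" or link.loss_rate > 0.05:
        return "FEC"   # forward error correction avoids retransmit delay
    return "ARQ"       # Automatic Repeat-reQuest suffices on clean links
```

For example, a flow classified as real-time over links with loss rates 0.10 and 0.01 would be pinned to the 0.01-loss link with FEC enabled.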
2,400
7,479
7,479
14,357,041
2,483
Provided are a method and apparatus for determining a quantization parameter for quantization and inverse quantization performed during video encoding and decoding. The quantization parameter determination method includes determining transformation units of at least one size included in a coding unit; determining a default quantization parameter of the coding unit; reducing a quantization parameter of a transformation unit that is greater than a predetermined size, to be less than the default quantization parameter; and increasing a quantization parameter of a transformation unit that is less than the predetermined size, to be greater than the default quantization parameter.
1. A quantization parameter determination method, the method comprising: determining transformation units of at least one size included in a coding unit; determining a default quantization parameter of the coding unit; reducing a quantization parameter of a transformation unit that is greater than a predetermined size among the transformation units, to be less than the default quantization parameter; and increasing a quantization parameter of a transformation unit that is less than the predetermined size among the transformation units, to be greater than the default quantization parameter. 2. The method of claim 1, wherein the determining of the transformation units comprises determining transformation units of at least one transformation depth included in the coding unit, when the size of the transformation unit is determined by the level of the corresponding transformation depth, wherein the transformation depth denotes a number of splits of the coding unit, the determining of the default quantization parameter comprises determining the default quantization parameter allocated to a transformation unit of a predetermined depth in the at least one level of transformation depth, the reducing of the quantization parameter comprises reducing the quantization parameter of the transformation unit of a transformation depth that is lower than the predetermined depth, to be less than the default quantization parameter, and the increasing of the quantization parameter comprises increasing the quantization parameter of a transformation unit of a transformation depth that is higher than the predetermined depth, to be greater than the default quantization parameter. 3. 
The method of claim 1, wherein the reducing of the quantization parameter comprises reducing the quantization parameter by a difference value of the quantization parameter from the default quantization parameter, and the increasing of the quantization parameter comprises increasing the quantization parameter by a difference value of the quantization parameter from the default quantization parameter. 4. The method of claim 2, wherein the reducing of the quantization parameter comprises determining a reduction amount of the difference value of the quantization parameter reduced from the default quantization parameter in proportion to a reduction amount of the current transformation depth of the transformation unit from the predetermined transformation depth, and the increasing of the quantization parameter comprises determining an increase amount of the difference value of the quantization parameter increased from the default quantization parameter in proportion to an increase amount of the current transformation depth of the transformation unit from the predetermined transformation depth. 5. The method of claim 1, further comprising generating quantized transformation coefficients by performing quantization of the transformation units by using the determined quantization parameters. 6. The method of claim 1, further comprising restoring the transformation coefficients from the quantized transformation coefficients by performing an inverse quantization of the transformation units by using the determined quantization parameter. 7. The method of claim 1, further comprising encoding and transmitting information about the difference value of the quantization parameter increased or reduced from the default quantization parameter and the default quantization parameter. 8. 
The method of claim 1, further comprising receiving the information about the difference value of the quantization parameter increased or reduced from the default quantization parameter and the default quantization parameter. 9. A quantization parameter determination apparatus, the apparatus comprising: a transformation unit determiner for determining transformation units of at least one size included in a coding unit; and a quantization parameter determiner for determining a default quantization parameter of the coding unit, and determining quantization parameters of the transformation units by reducing a quantization parameter of a transformation unit that is greater than a predetermined size to be less than the default quantization parameter, and by increasing the quantization parameter of a transformation unit that is less than the predetermined size to be greater than the default quantization parameter. 10. The apparatus of claim 9, wherein the transformation unit determiner determines transformation units of at least one transformation depth included in the coding unit, when the size of the transformation unit is determined by the level of the corresponding transformation depth, wherein the transformation depth denotes a number of splits of the coding unit, the quantization parameter determiner determines the default quantization parameter allocated to a transformation unit of a predetermined depth in the at least one level of transformation depth, reduces the quantization parameter of the transformation unit of a transformation depth that is lower than the predetermined depth, to be less than the default quantization parameter, and increases the quantization parameter of a transformation unit of a transformation depth that is higher than the predetermined depth, to be greater than the default quantization parameter. 11. 
The apparatus of claim 9, wherein the quantization parameter determiner reduces or increases the quantization parameter by a difference value of the quantization parameter from the default quantization parameter. 12. The apparatus of claim 10, wherein the quantization parameter determiner determines a reduction amount of the difference value of the quantization parameter reduced from the default quantization parameter in proportion to a reduction amount of the current transformation depth of the transformation unit from the predetermined transformation depth, and determines an increase amount of the difference value of the quantization parameter increased from the default quantization parameter in proportion to an increase amount of the current transformation depth of the transformation unit from the predetermined transformation depth. 13. The apparatus of claim 9, further comprising: a predictor generating prediction data of a prediction unit by performing an intra prediction or a motion prediction of the at least one prediction unit in the current coding unit; a transformer generating transformation coefficients of the transformation units by transforming the determined transformation units included in the current coding unit that includes the generated prediction data; and a quantizer generating quantized transformation coefficients by performing quantization of the transformation units by using the determined quantization parameter. 14. 
The apparatus of claim 9, further comprising: an inverse quantizer restoring the transformation coefficients from the quantized transformation coefficients by performing an inverse quantization of the transformation units by using the determined quantization parameter; an inverse transformer restoring the prediction data by performing an inverse transformation of the transformation coefficients; and a prediction restoring unit for restoring image data of the prediction unit by performing an intra prediction or a motion compensation of the at least one prediction unit in the current coding unit, based on the restored prediction data included in the current coding unit. 15. A computer readable recording medium having recorded thereon a program for executing the quantization parameter determination method of claim 1.
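The depth-proportional adjustment described in claims 2, 4, 10, and 12 can be sketched as a single function. This is an illustrative reading of the claims, not the patented implementation; the function name, the per-level delta, and the clamping to the usual 0-51 QP range are assumptions.

```python
def adjust_qp(default_qp: int, tu_depth: int, predetermined_depth: int,
              delta_per_level: int = 2) -> int:
    """Adjust a transformation unit's QP around the coding unit's default QP.

    A lower transformation depth means a larger transformation unit, so its
    QP is reduced below the default; a higher depth (smaller unit) gets a
    QP above the default. The difference value is proportional to the
    distance from the predetermined depth (claims 2 and 4).
    """
    qp = default_qp + delta_per_level * (tu_depth - predetermined_depth)
    return max(0, min(51, qp))  # clamp to the conventional QP range
```

With a default QP of 26 and a predetermined depth of 1, a depth-0 (large) unit gets QP 24, a depth-1 unit keeps QP 26, and a depth-2 (small) unit gets QP 28.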
2,400
7,480
7,480
12,611,650
2,425
A content output device identifies a set of content including both high definition content and standard definition content. The content output device determines whether an associated presentation device is capable of presenting high definition content. The content output device then filters the set of content accordingly responsive to determining whether the presentation device is capable of presenting high definition content.
1. A video output device comprising: an output interface that communicatively couples to a display device; and a processor operable to: determine whether the display device is capable of presenting high definition content; identify a set of video programming available for output; filter the set of video programming responsive to a determination regarding whether the display device is capable of presenting high definition content; and output a selection menu, identifying the filtered set of video programming, for presentation by the display device. 2. The video output device of claim 1, wherein the set of video programming includes at least one pair of a standard definition service and a high definition service associated with a single broadcast channel and wherein the processor is operable to filter the standard definition service from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 3. The video output device of claim 1, wherein the set of video programming includes at least one pair of a standard definition service and a high definition service associated with a single broadcast channel and wherein the processor is operable to filter the high definition service from the set of video programming responsive to determining that the display device is not capable of presenting high definition content. 4. The video output device of claim 1, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein the processor is operable to filter the plurality of standard definition services from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 5. 
The video output device of claim 1, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein the processor is operable to filter the plurality of high definition services from the set of video programming responsive to determining that the display device is not capable of presenting high definition content. 6. The video output device of claim 1, wherein the output interface is operable to bi-directionally communicate with the display device and wherein the processor is operable to determine whether the display device is capable of presenting high definition content based on data received by the output interface from the display device. 7. The video output device of claim 1, wherein the output interface comprises a high definition multimedia interface (HDMI). 8. The video output device of claim 1, wherein the video output device comprises a television receiver and wherein the set of video programming comprises electronic programming guide data associated with a plurality of channels. 9. The video output device of claim 8, wherein the television receiver comprises a satellite television receiver. 10. The video output device of claim 8, wherein the television receiver comprises a cable television receiver. 11. The video output device of claim 1, wherein the video output device comprises a television receiver and wherein the set of video programming comprises video on-demand programming available through the television receiver. 12. The video output device of claim 1, further comprising a storage medium operable to store programming for subsequent viewing by a user, wherein the set of programming comprises a plurality of programs stored on the storage medium. 13. 
A method of operating a video output device, the method comprising: receiving, at a video output device, a set of video programming available for viewing through the video output device; determining whether a display device communicatively coupled to the video output device is capable of displaying high definition content; filtering the set of video programming responsive to determining that the display device is capable of presenting high definition content; and outputting the filtered set of video programming for presentation by the display device. 14. The method of claim 13, wherein determining whether the display device communicatively coupled to the video output device is capable of displaying high definition content further comprises: receiving data, at the video output device, from the display device; and processing the data to determine whether the display device is capable of displaying high definition content. 15. The method of claim 13, wherein the set of video programming includes at least one pair of a standard definition service and a high definition service associated with a single broadcast channel and wherein filtering the set of video programming further comprises: filtering the standard definition service from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 16. The method of claim 13, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein filtering the set of video programming further comprises: filtering the plurality of standard definition services from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 17. 
A television receiver comprising: a communication interface operable to receive electronic programming guide data, the electronic programming guide data including at least one pair of a standard definition service and a high definition service associated with a single broadcast channel; an output interface that communicatively couples to a display device; and a processor operable to: determine whether the display device is capable of presenting high definition content; responsive to identifying that the display device is capable of presenting high definition content, filter the standard definition service from the electronic programming guide data; and initiate output of the filtered electronic programming guide through the output interface for presentation by the display device. 18. The television receiver of claim 17, wherein the output interface is operable to bi-directionally communicate with the display device and wherein the processor is operable to determine whether the display device is capable of presenting high definition content based on data received by the output interface from the display device. 19. The television receiver of claim 17, wherein the output interface comprises a high definition multimedia interface (HDMI). 20. The television receiver of claim 17, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein the processor is operable to filter the plurality of standard definition services from the set of video programming responsive to determining that the display device is capable of presenting high definition content.
A content output device identifies a set of content including both high definition content and standard definition content. The content output device determines whether an associated presentation device is capable of presenting high definition content. The content output device then filters the set of content accordingly responsive to determining whether the presentation device is capable of presenting high definition content.1. A video output device comprising: an output interface that communicatively couples to a display device; and a processor operable to: determine whether the display device is capable of presenting high definition content; identify a set of video programming available for output; filter the set of video programming responsive to a determination regarding whether the display device is capable of presenting high definition content; and output a selection menu, identifying the filtered set of video programming, for presentation by the display device. 2. The video output device of claim 1, wherein the set of video programming includes at least one pair of a standard definition service and a high definition service associated with a single broadcast channel and wherein the processor is operable to filter the standard definition service from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 3. The video output device of claim 1, wherein the set of video programming includes at least one pair of a standard definition service and a high definition service associated with a single broadcast channel and wherein the processor is operable to filter the high definition service from the set of video programming responsive to determining that the display device is not capable of presenting high definition content. 4. 
The video output device of claim 1, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein the processor is operable to filter the plurality of standard definition services from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 5. The video output device of claim 1, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein the processor is operable to filter the plurality of high definition services from the set of video programming responsive to determining that the display device is not capable of presenting high definition content. 6. The video output device of claim 1, wherein the output interface is operable to bi-directionally communicate with the display device and wherein the processor is operable to determine whether the display device is capable of presenting high definition content based on data received by the output interface from the display device. 7. The video output device of claim 1, wherein the output interface comprises a high definition multimedia interface (HDMI). 8. The video output device of claim 1, wherein the video output device comprises a television receiver and wherein the set of video programming comprises electronic programming guide data associated with a plurality of channels. 9. The video output device of claim 8, wherein the television receiver comprises a satellite television receiver. 10. The video output device of claim 8, wherein the television receiver comprises a cable television receiver. 11. The video output device of claim 1, wherein the video output device comprises a television receiver and wherein the set of video programming comprises video on-demand programming available through the television receiver. 12. 
The video output device of claim 1, further comprising a storage medium operable to store programming for subsequent viewing by a user, wherein the set of programming comprises a plurality of programs stored on the storage medium. 13. A method of operating a video output device, the method comprising: receiving, at a video output device, a set of video programming available for viewing through the video output device; determining whether a display device communicatively coupled to the video output device is capable of displaying high definition content; filtering the set of video programming responsive to determining that the display device is capable of presenting high definition content; and outputting the filtered set of video programming for presentation by the display device. 14. The method of claim 13, wherein determining whether the display device communicatively coupled to the video output device is capable of displaying high definition content further comprises: receiving data, at the video output device, from the display device; and processing the data to determine whether the display device is capable of displaying high definition content. 15. The method of claim 13, wherein the set of video programming includes at least one pair of a standard definition service and a high definition service associated with a single broadcast channel and wherein filtering the set of video programming further comprises: filtering the standard definition service from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 16. 
The method of claim 13, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein filtering the set of video programming further comprises: filtering the plurality of standard definition services from the set of video programming responsive to determining that the display device is capable of presenting high definition content. 17. A television receiver comprising: a communication interface operable to receive electronic programming guide data, the electronic programming guide data including at least one pair of a standard definition service and a high definition service associated with a single broadcast channel; an output interface that communicatively couples to a display device; and a processor operable to: determine whether the display device is capable of presenting high definition content; responsive to identifying that the display device is capable of presenting high definition content, filter the standard definition service from the electronic programming guide data; and initiate output of the filtered electronic programming guide through the output interface for presentation by the display device. 18. The television receiver of claim 17, wherein the output interface is operable to bi-directionally communicate with the display device and wherein the processor is operable to determine whether the display device is capable of presenting high definition content based on data received by the output interface from the display device. 19. The television receiver of claim 17, wherein the output interface comprises a high definition multimedia interface (HDMI). 20. 
The television receiver of claim 17, wherein the set of video programming includes a plurality of high definition services and a plurality of standard definition services and wherein the processor is operable to filter the plurality of standard definition services from the set of video programming responsive to determining that the display device is capable of presenting high definition content.
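The filtering behavior the claims above describe (hide the standard-definition member of each SD/HD channel pair when the display supports HD, hide HD services otherwise) can be sketched in a few lines. This is an illustrative sketch only, not the patent's implementation; the `Service` record and `filter_epg` helper are hypothetical names.

```python
# Illustrative sketch of SD/HD service filtering as described in the
# claims; `Service` and `filter_epg` are hypothetical, not from the patent.
from dataclasses import dataclass

@dataclass
class Service:
    channel: int  # broadcast channel number
    hd: bool      # True if this entry is the high-definition service

def filter_epg(services, display_supports_hd):
    """Return the service list with the redundant SD/HD entries removed.

    When the display supports HD, drop SD services that have an HD
    counterpart on the same channel; otherwise drop all HD services.
    """
    hd_channels = {s.channel for s in services if s.hd}
    if display_supports_hd:
        return [s for s in services
                if s.hd or s.channel not in hd_channels]
    return [s for s in services if not s.hd]
```

With a pair on channel 2 and an SD-only channel 5, an HD-capable display sees the HD service plus the SD-only channel, while an SD-only display sees only the two SD services.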
2,400
7,481
7,481
14,307,404
2,449
A computer-implemented method operable in a content delivery network (CDN), includes receiving a request for a service in said CDN; determining a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and processing said request based on said particular classification of said IP address associated with said request. A location-specific response to a request may be based on a geographic location associated with said IP address associated with said request.
1. A computer-implemented method operable in a content delivery network (CDN), the method operable on one or more devices comprising hardware including memory and at least one processor, the method comprising: (A) receiving a request for a service in said CDN; (B) determining a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and (C) processing said request based on said particular classification of said IP address associated with said request. 2. The method of claim 1 wherein said first function maps IP address ranges to said one or more classifications. 3. The method of claim 1 wherein said one or more classifications comprise a plurality of discrete classifications. 4. The method of claim 1 wherein said one or more classifications are selected from the group comprising: accept and reject. 5. The method of claim 4 wherein said processing in (C) further comprises: (C)(1) when said particular classification is accept then continuing processing the request. 6. The method of claim 4 wherein said processing in (C) further comprises: (C)(1) when said particular classification is reject then rejecting the request. 7. The method of claim 1 wherein said one or more classifications comprise: modify or vary. 8. The method of claim 7 wherein said processing in (C) further comprises: (C)(1) when said particular classification is modify or vary then modifying the request prior to subsequent processing of the request. 9. The method of claim 1 wherein said first function uses a database comprising mappings from IP address ranges to corresponding one or more classifications. 10. The method of claim 1 wherein said first function is a geographic query function. 11. 
The method of claim 9 wherein said one or more classifications comprise a plurality of classifications, and wherein one classification of said classifications is omitted from said database. 12. The method of claim 11 wherein the one classification of said classifications that is omitted from said database is the most common classification of said IP addresses. 13. The method of claim 1 wherein said first function is encoded in embedded software that maps IP address ranges to corresponding one or more classifications. 14. The method of claim 1 wherein said processing in (C) comprises: (C)(1) providing a location-specific response to said request based on a geographic location associated with said IP address associated with said request. 15. The method of claim 14 wherein said location-specific response includes location-specific advertising. 16. A system, operable in a content delivery network (CDN) comprising multiple service endpoints, said service endpoints running on a plurality of devices, the system comprising: (a) hardware including memory and at least one processor, and (b) one or more services running on said hardware, wherein said one or more services are configured to: (A) receive a request for a service in said CDN; (B) determine a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and (C) process said request based on said particular classification of said IP address associated with said request. 17. The system of claim 16 wherein said first function maps IP address ranges to said one or more classifications. 18. The system of claim 16 wherein said one or more classifications comprise a plurality of discrete classifications. 19. 
The system of claim 16 wherein said one or more classifications are selected from the group comprising: accept and reject. 20. The system of claim 16 wherein said one or more services are configured to process said request in (C) by: (C)(1) providing a location-specific response to said request based on a geographic location associated with said IP address associated with said request. 21. The system of claim 20 wherein said location-specific response includes location-specific advertising. 22. A computer program product having computer readable instructions stored on non-transitory computer readable media, the computer readable instructions including instructions for implementing a computer-implemented method, said method operable on one or more devices comprising hardware including memory and at least one processor and running one or more services on said hardware, said method operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, and said method comprising: (A) receiving a request for a service in said CDN; (B) determining a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and (C) processing said request based on said particular classification of said IP address associated with said request. 23. The computer program product of claim 22 wherein said first function maps IP address ranges to said one or more classifications. 24. The computer program product of claim 22 wherein said one or more classifications comprise a plurality of discrete classifications. 25. 
The computer program product of claim 22 wherein said processing in (C) comprises: (C)(1) providing a location-specific response to said request based on a geographic location associated with said IP address associated with said request. 26. The computer program product of claim 25 wherein said location-specific response includes location-specific advertising.
A computer-implemented method operable in a content delivery network (CDN), includes receiving a request for a service in said CDN; determining a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and processing said request based on said particular classification of said IP address associated with said request. A location-specific response to a request may be based on a geographic location associated with said IP address associated with said request.1. A computer-implemented method operable in a content delivery network (CDN), the method operable on one or more devices comprising hardware including memory and at least one processor, the method comprising: (A) receiving a request for a service in said CDN; (B) determining a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and (C) processing said request based on said particular classification of said IP address associated with said request. 2. The method of claim 1 wherein said first function maps IP address ranges to said one or more classifications. 3. The method of claim 1 wherein said one or more classifications comprise a plurality of discrete classifications. 4. The method of claim 1 wherein said one or more classifications are selected from the group comprising: accept and reject. 5. The method of claim 4 wherein said processing in (C) further comprises: (C)(1) when said particular classification is accept then continuing processing the request. 6. 
The method of claim 4 wherein said processing in (C) further comprises: (C)(1) when said particular classification is reject then rejecting the request. 7. The method of claim 1 wherein said one or more classifications comprise: modify or vary. 8. The method of claim 7 wherein said processing in (C) further comprises: (C)(1) when said particular classification is modify or vary then modifying the request prior to subsequent processing of the request. 9. The method of claim 1 wherein said first function uses a database comprising mappings from IP address ranges to corresponding one or more classifications. 10. The method of claim 1 wherein said first function is a geographic query function. 11. The method of claim 9 wherein said one or more classifications comprise a plurality of classifications, and wherein one classification of said classifications is omitted from said database. 12. The method of claim 11 wherein the one classification of said classifications that is omitted from said database is the most common classification of said IP addresses. 13. The method of claim 1 wherein said first function is encoded in embedded software that maps IP address ranges to corresponding one or more classifications. 14. The method of claim 1 wherein said processing in (C) comprises: (C)(1) providing a location-specific response to said request based on a geographic location associated with said IP address associated with said request. 15. The method of claim 14 wherein said location-specific response includes location-specific advertising. 16. 
A system, operable in a content delivery network (CDN) comprising multiple service endpoints, said service endpoints running on a plurality of devices, the system comprising: (a) hardware including memory and at least one processor, and (b) one or more services running on said hardware, wherein said one or more services are configured to: (A) receive a request for a service in said CDN; (B) determine a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and (C) process said request based on said particular classification of said IP address associated with said request. 17. The system of claim 16 wherein said first function maps IP address ranges to said one or more classifications. 18. The system of claim 16 wherein said one or more classifications comprise a plurality of discrete classifications. 19. The system of claim 16 wherein said one or more classifications are selected from the group comprising: accept and reject. 20. The system of claim 16 wherein said one or more services are configured to process said request in (C) by: (C)(1) providing a location-specific response to said request based on a geographic location associated with said IP address associated with said request. 21. The system of claim 20 wherein said location-specific response includes location-specific advertising. 22. 
A computer program product having computer readable instructions stored on non-transitory computer readable media, the computer readable instructions including instructions for implementing a computer-implemented method, said method operable on one or more devices comprising hardware including memory and at least one processor and running one or more services on said hardware, said method operable in a network comprising multiple service endpoints, said service endpoints running on a plurality of devices, and said method comprising: (A) receiving a request for a service in said CDN; (B) determining a particular classification of an Internet Protocol (IP) address associated with said request, said determining using a first function that maps IP addresses to one or more classifications, said particular classification being one of said one or more classifications; and (C) processing said request based on said particular classification of said IP address associated with said request. 23. The computer program product of claim 22 wherein said first function maps IP address ranges to said one or more classifications. 24. The computer program product of claim 22 wherein said one or more classifications comprise a plurality of discrete classifications. 25. The computer program product of claim 22 wherein said processing in (C) comprises: (C)(1) providing a location-specific response to said request based on a geographic location associated with said IP address associated with said request. 26. The computer program product of claim 25 wherein said location-specific response includes location-specific advertising.
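The IP-classification function in the CDN claims (a mapping from IP address ranges to classifications such as accept, reject, and modify, with the most common classification omitted from the database and used as the default) can be sketched with the standard `ipaddress` module. This is an illustrative sketch under assumed names; the table contents and `classify_ip` are hypothetical, not the CDN's actual data or API.

```python
# Illustrative sketch of range-based IP classification as in claims
# 1, 9, 11, and 12; the table, ranges, and names are hypothetical.
import ipaddress

CLASSIFICATION_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "reject",
    ipaddress.ip_network("192.168.1.0/24"): "modify",
}
# The most common classification is omitted from the table (claims 11-12)
# and returned when no range matches.
DEFAULT_CLASSIFICATION = "accept"

def classify_ip(addr):
    """Map an IP address to its classification via the range table."""
    ip = ipaddress.ip_address(addr)
    for network, classification in CLASSIFICATION_TABLE.items():
        if ip in network:
            return classification
    return DEFAULT_CLASSIFICATION
```

Request processing then branches on the result: continue for "accept", refuse for "reject", or rewrite the request before further handling for "modify".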
2,400
7,482
7,482
14,398,283
2,485
A method of encoding a video sequence is provided. The method ( 300 ) comprises including ( 301 ) Network Abstraction Layer (NAL) unit type information with each picture of the video sequence, and identifying ( 302 ) a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture. The current picture is identified using the NAL unit type information, e.g., using a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. Further, a method of decoding a coded video sequence, a method of extracting a sub-bitstream from a coded video sequence, a method of processing a coded video sequence, corresponding computer programs and computer program products, a video encoder, a video decoder, and network elements are provided.
1. A method of encoding a video sequence, the method comprising: including Network Abstraction Layer (NAL) unit type information with each picture of the video sequence, and identifying, using the NAL unit type information, a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture. 2. The method according to claim 1, wherein the current picture is identified using the NAL unit type information if the current picture is a leading picture of exactly one CRA picture, and if the current picture directly follows in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 3. The method according to claim 1, wherein a picture is identified as being a leading picture of exactly one CRA picture by setting the NAL unit type information of the picture to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 4. The method according to claim 1, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with each picture. 5. A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 1. 6. A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 2. 7. A method of decoding a coded video sequence comprising Network Abstraction Layer (NAL) units, the method comprising: detecting an error for a current picture based on NAL unit type information associated with the current picture and which identifies the current picture as a leading picture of exactly one Clean Random Access (CRA) picture. 8. 
The method according to claim 7, wherein an error is detected if the current picture is a leading picture of more than one CRA picture. 9. The method according to claim 7, wherein an error is detected if the current picture does not directly follow in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 10. The method according to claim 7, wherein the current picture is identified as being a leading picture of exactly one CRA picture if the NAL unit type information of the current picture is set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 11. The method according to claim 7, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with the current picture. 12. A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 7. 13. A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 8. 14. A method of extracting a sub-bitstream from a coded video sequence comprising Network Abstraction Layer (NAL) units, the method comprising: detecting a Clean Random Access (CRA) picture in the coded video sequence, ignoring all NAL units preceding the identified CRA picture in decoding order, ignoring all NAL units until a NAL unit is detected which is not identified, using NAL unit type information, as a leading picture of exactly one CRA picture, and forwarding or decoding the resulting video sequence. 15. 
A method of processing a coded video sequence comprising Network Abstraction Layer (NAL) units, the method comprising: identifying a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture, and detecting an error if the NAL unit type information of the current picture is not set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 16. A video encoder for encoding a video sequence, the video encoder being arranged for: including Network Abstraction Layer (NAL) unit type information with each picture of the video sequence, and identifying, using the NAL unit type information, a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture. 17. The video encoder according to claim 16, wherein the current picture is identified using the NAL unit type information if the current picture is a leading picture of exactly one CRA picture, and if the current picture directly follows in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 18. The video encoder according to claim 16, wherein a picture is identified as being a leading picture of exactly one CRA picture by setting the NAL unit type information of the picture to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 19. The video encoder according to claim 16, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with each picture. 20. A video decoder for decoding a coded video sequence comprising Network Abstraction Layer (NAL) units, the video decoder being arranged for: detecting an error for a current picture based on NAL unit type information associated with the current picture and which identifies the current picture as a leading picture of exactly one Clean Random Access (CRA) picture. 21. 
The video decoder according to claim 20, wherein an error is detected if the current picture is a leading picture of more than one CRA picture. 22. The video decoder according to claim 20, wherein an error is detected if the current picture does not directly follow in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 23. The video decoder according to claim 20, wherein the current picture is identified as being a leading picture of exactly one CRA picture if the NAL unit type information of the current picture is set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 24. The video decoder according to claim 20, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with the current picture. 25. A network element for extracting a sub-bitstream from a coded video sequence comprising Network Abstraction Layer (NAL) units, the network element being arranged for: detecting a Clean Random Access (CRA) picture in the coded video sequence, ignoring all NAL units preceding the identified CRA picture in decoding order, ignoring all NAL units until a NAL unit is detected which is not identified, using NAL unit type information, as a leading picture of exactly one CRA picture, and forwarding or decoding the resulting video sequence. 26. A network element for processing a coded video sequence comprising Network Abstraction Layer (NAL) units, the network element being arranged for: identifying a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture, and detecting an error if the NAL unit type information of the current picture is not set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture.
A method of encoding a video sequence is provided. The method ( 300 ) comprises including ( 301 ) Network Abstraction Layer (NAL) unit type information with each picture of the video sequence, and identifying ( 302 ) a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture. The current picture is identified using the NAL unit type information, e.g., using a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. Further, a method of decoding a coded video sequence, a method of extracting a sub-bitstream from a coded video sequence, a method of processing a coded video sequence, corresponding computer programs and computer program products, a video encoder, a video decoder, and network elements are provided.1. A method of encoding a video sequence, the method comprising: including Network Abstraction Layer (NAL) unit type information with each picture of the video sequence, and identifying, using the NAL unit type information, a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture. 2. The method according to claim 1, wherein the current picture is identified using the NAL unit type information if the current picture is a leading picture of exactly one CRA picture, and if the current picture directly follows in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 3. The method according to claim 1, wherein a picture is identified as being a leading picture of exactly one CRA picture by setting the NAL unit type information of the picture to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 4. The method according to claim 1, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with each picture. 5. 
A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 1. 6. A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 2. 7. A method of decoding a coded video sequence comprising Network Abstraction Layer (NAL) units, the method comprising: detecting an error for a current picture based on NAL unit type information associated with the current picture and which identifies the current picture as a leading picture of exactly one Clean Random Access (CRA) picture. 8. The method according to claim 7, wherein an error is detected if the current picture is a leading picture of more than one CRA picture. 9. The method according to claim 7, wherein an error is detected if the current picture does not directly follow in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 10. The method according to claim 7, wherein the current picture is identified as being a leading picture of exactly one CRA picture if the NAL unit type information of the current picture is set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 11. The method according to claim 7, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with the current picture. 12. A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 7. 13. 
A computer program product comprising a non-transitory computer readable medium storing computer program code, the computer program code being adapted, when executed on a processor, to implement the method according to claim 8. 14. A method of extracting a sub-bitstream from a coded video sequence comprising Network Abstraction Layer (NAL) units, the method comprising: detecting a Clean Random Access (CRA) picture in the coded video sequence, ignoring all NAL units preceding the identified CRA picture in decoding order, ignoring all NAL units until a NAL unit is detected which is not identified, using NAL unit type information, as a leading picture of exactly one CRA picture, and forwarding or decoding the resulting video sequence. 15. A method of processing a coded video sequence comprising Network Abstraction Layer (NAL) units, the method comprising: identifying a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture, and detecting an error if the NAL unit type information of the current picture is not set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 16. A video encoder for encoding a video sequence, the video encoder being arranged for: including Network Abstraction Layer (NAL) unit type information with each picture of the video sequence, and identifying, using the NAL unit type information, a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture. 17. The video encoder according to claim 16, wherein the current picture is identified using the NAL unit type information if the current picture is a leading picture of exactly one CRA picture, and if the current picture directly follows in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 18. 
The video encoder according to claim 16, wherein a picture is identified as being a leading picture of exactly one CRA picture by setting the NAL unit type information of the picture to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 19. The video encoder according to claim 16, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with each picture. 20. A video decoder for decoding a coded video sequence comprising Network Abstraction Layer (NAL) units, the video decoder being arranged for: detecting an error for a current picture based on NAL unit type information associated with the current picture and which identifies the current picture as a leading picture of exactly one Clean Random Access (CRA) picture. 21. The video decoder according to claim 20, wherein an error is detected if the current picture is a leading picture of more than one CRA picture. 22. The video decoder according to claim 20, wherein an error is detected if the current picture does not directly follow in decoding order the exactly one CRA picture or a picture which is identified as a leading picture of the exactly one CRA picture. 23. The video decoder according to claim 20, wherein the current picture is identified as being a leading picture of exactly one CRA picture if the NAL unit type information of the current picture is set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture. 24. The video decoder according to claim 20, wherein the NAL unit type information is included in the NAL unit headers of the NAL units which are associated with the current picture. 25. 
A network element for extracting a sub-bitstream from a coded video sequence comprising Network Abstraction Layer (NAL) units, the network element being arranged for: detecting a Clean Random Access (CRA) picture in the coded video sequence, ignoring all NAL units preceding the identified CRA picture in decoding order, ignoring all NAL units until a NAL unit is detected which is not identified, using NAL unit type information, as a leading picture of exactly one CRA picture, and forwarding or decoding the resulting video sequence. 26. A network element for processing a coded video sequence comprising Network Abstraction Layer (NAL) units, the network element being arranged for: identifying a current picture which is a leading picture of exactly one Clean Random Access (CRA) picture, and detecting an error if the NAL unit type information of the current picture is not set to a specific NAL unit type which is reserved for leading pictures of exactly one CRA picture.
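The sub-bitstream extraction steps in claims 14 and 25 (detect a CRA picture, drop everything before it, drop its leading pictures, forward the rest) can be sketched in Python. The NAL unit type codes below are placeholders chosen for illustration, not the values defined by any video coding specification.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

# Assumed type codes for illustration only; real codes come from the spec.
NAL_CRA = 21          # a Clean Random Access (CRA) picture
NAL_CRA_LEADING = 22  # reserved for leading pictures of exactly one CRA picture

@dataclass
class NALUnit:
    nal_unit_type: int
    payload: bytes = b""

def extract_sub_bitstream(nal_units: Iterable[NALUnit]) -> Iterator[NALUnit]:
    """Extract a sub-bitstream per claims 14/25: ignore all NAL units
    preceding the first CRA picture, then ignore NAL units identified by
    their NAL unit type as leading pictures of that CRA picture, and
    forward everything from the first non-leading NAL unit onward."""
    it = iter(nal_units)
    # Step 1: skip until the first CRA picture, which starts the output.
    for nal in it:
        if nal.nal_unit_type == NAL_CRA:
            yield nal
            break
    else:
        return  # no CRA picture in the sequence
    # Step 2: skip leading pictures of the CRA picture; once a NAL unit
    # that is not a leading picture appears, forward all remaining units.
    forwarding = False
    for nal in it:
        if not forwarding and nal.nal_unit_type == NAL_CRA_LEADING:
            continue
        forwarding = True
        yield nal
```

Note that a leading-picture NAL unit appearing after the first non-leading unit is forwarded unchanged, matching the claim wording "ignoring all NAL units until a NAL unit is detected which is not identified ... as a leading picture".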
2,400
7,483
7,483
14,765,200
2,425
A method of inspecting an object including locating an object on a machine vision apparatus, attaching a light panel to the object to backlight a region of the object, obtaining an image of the region when backlit by the light panel and identifying a geometric property of the object from the image.
1. A method of inspecting an object comprising locating an object on a machine vision apparatus, attaching a light panel to the object to backlight a region of the object, obtaining an image of the region when backlit by the light panel and identifying a geometric property of the object from the image. 2. A method according to claim 1, wherein the light panel, when attached to the object, has a shape that substantially corresponds to a shape of the region backlit by the light panel. 3. A method according to claim 1, wherein the light panel comprises a flexible light panel that can be bent into a desired shape. 4. A method according to claim 3, wherein attaching the light panel to the object bends the light panel into the desired shape. 5. A method according to claim 1, wherein the light panel comprises electroluminescent sheeting. 6. A method according to claim 1, comprising attaching the light panel to the object using a fixture. 7. A method according to claim 6, wherein the fixture comprises a mounting to which the light panel is releasably attached. 8. A method according to claim 7, wherein the fixture comprises a means for supplying power to the light panel attached thereto. 9. A method according to claim 8, wherein the fixture comprises a power supply. 10. A method according to claim 6, comprising repeatably mounting the fixture on the same object or mounting the fixture on each of a set of nominally identical objects using mounting formations, the mounting formations providing for repeatable mounting of the fixture to the or each object at a location defined by the formations, and inspecting the or each object when the fixture bearing the light panel is mounted thereon. 11. A method according to claim 6, comprising locating the or each of a set of nominally identical objects on the machine vision apparatus using the fixture. 12. 
A method according to claim 11, wherein the fixture comprises further mounting formations for mounting the object on the machine vision apparatus in a repeatable manner at a defined location defined by the further mounting formations, the method comprising using the fixture to repeatably mount the same object or mount each of a set of nominally identical objects at the defined location and inspecting the or each object when the object or each object is located at the defined location on the machine vision apparatus. 13. A method according to claim 10, wherein the mounting formations and/or the further mounting formations comprise a kinematic mount that defines the location on the object/machine vision apparatus. 14. A method according to claim 1, wherein the light panel is attached to the object so as to be located within the object. 15. A method according to claim 14, wherein the region that is illuminated by the backlight includes a region that would otherwise fall within a shadow cast by the object if the object was backlit by a light located externally to the object. 16. An image obtained using a method according to claim 1. 17. A lighting unit for illuminating an object in a machine vision apparatus comprising a light panel and a fixture for attaching the light panel to the object such that the light panel backlights the object from a point of view of a camera of the machine vision apparatus. 18. A lighting unit according to claim 17, wherein the fixture comprises a mount to which the light panel can be releasably attached. 19. A lighting unit according to claim 17, wherein the fixture is arranged to locate the light panel within the object to illuminate a region of the object that would otherwise fall within a shadow cast by the object if the object was backlit by a light located externally to the object. 20. A lighting unit according to claim 17, wherein the light panel is a flexible panel that can be bent into a desired shape. 21. 
A lighting unit according to claim 20, wherein the fixture is arranged to hold the flexible light panel bent into the desired shape when attaching the light panel to the object. 22. A lighting unit according to claim 17, wherein the fixture comprises mounting formations for repeatably mounting the fixture to the object at a location defined by the formations. 23. A lighting unit according to claim 17, wherein the fixture comprises further mounting formations for repeatably mounting the object on the machine vision apparatus at a location defined by the further mounting formations. 24. A lighting unit according to claim 23, wherein the machine vision apparatus comprises a coordinate positioning machine and the further mounting formations are arranged for repeatably mounting the object to a bed of the coordinate positioning machine. 25. A lighting unit according to claim 22, wherein the mounting formations and/or the further mounting formations comprise a kinematic mount to define the location on the object/machine vision apparatus. 26. A lighting unit according to claim 17, wherein the fixture comprises a power supply for the light panel. 27. A lighting unit according to claim 17, wherein the fixture comprises connections for connecting the light panel to a power supply. 28. A lighting unit according to claim 17, wherein the light panel is an electroluminescent sheet. 29. A fixture for attaching a light panel to an object located in a machine vision apparatus, the fixture comprising a mount to which a light panel can be attached, the mount arranged such that when the light panel is attached thereto and the fixture attached to the object located in the machine vision apparatus, the light panel backlights the object from a point of view of a camera of the machine vision apparatus. 30. 
A machine vision apparatus in combination with an object to be scanned using the machine vision apparatus, comprising a light panel attached to the object to backlight a region of the object from a point of view of a camera of the machine vision apparatus. 31. A machine vision apparatus according to claim 30, wherein the region is a region that would otherwise fall within a shadow cast by the object if the object was backlit by a light located externally to the object. 32. A machine vision apparatus according to claim 30, wherein the light panel is a flexible light panel that is attached to the object in a bent shape. 33. A method of inspecting an object comprising locating an object in a machine vision apparatus, fixing a flexible light panel in the machine vision apparatus such that the flexible light panel is held bent in a required configuration for backlighting a region of the object, obtaining an image of the region when backlit by the flexible light panel and identifying a geometric property of the object from the image. 34. A method according to claim 33, wherein the flexible light panel is held in a bent shape that substantially corresponds to a shape of the region of the object to be backlit. 35. A method according to claim 33, wherein the flexible light panel is fixed in the bent shape through attachment to the object and/or to a bed of the machine vision apparatus. 36. A lighting unit for illuminating an object in a machine vision apparatus comprising a flexible light panel and a fixture for fixing the flexible light panel in the machine vision apparatus such that the flexible light panel is held bent in a required configuration for backlighting a region of the object from a point of view of a camera of the machine vision apparatus. 37. 
A machine vision apparatus in combination with an object to be scanned using the machine vision apparatus, comprising a flexible light panel fixed in the machine vision apparatus such that the flexible light panel is held bent in a required configuration for backlighting a region of the object from a point of view of a camera of the machine vision apparatus.
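The claims above end at obtaining a backlit image and "identifying a geometric property of the object from the image" without specifying how. One simple way to do this, sketched here as an assumption rather than the patent's own method, is to threshold the image: pixels where the light panel shows through are bright, pixels occluded by the object are dark, and the dark silhouette yields a measurable dimension.

```python
def silhouette_width(image, threshold=0.5):
    """Return the maximum per-row pixel width of the object's silhouette
    in a backlit image, given as rows of brightness values in [0, 1].
    Bright pixels are the light panel showing through; dark pixels are
    the object occluding the backlight."""
    return max(sum(1 for px in row if px < threshold) for row in image)

# Synthetic 10x10 "backlit" image: uniformly bright except for a
# 4-pixel-wide, 4-row-tall dark bar representing the object's silhouette.
image = [[1.0] * 10 for _ in range(10)]
for r in range(3, 7):
    for c in range(2, 6):
        image[r][c] = 0.0
```

With this synthetic image, `silhouette_width(image)` reports a width of 4 pixels; converting pixels to physical units would require the camera calibration of the machine vision apparatus, which is outside this sketch.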
2,400
7,484
7,484
13,333,038
2,449
Methods, systems, non-transitory media comprising computer-readable instructions, and logic for obtaining information related to an application may include receiving, at a terminal device, data exchange information, including data transfer speed information, that is related to a data exchange between the terminal device and a web service. The method further may include storing, at the terminal device, at least a portion of the data exchange information received at the terminal device.
1. A method for obtaining information related to an application, the method comprising: receiving, at a terminal device, data exchange information, comprising data transfer speed information, that is related to a data exchange between the terminal device and a web service; and storing, at the terminal device, at least a portion of the data exchange information received at the terminal device. 2. The method of claim 1, further comprising transmitting at least a portion of the data exchange information stored at the terminal device to a web server. 3. The method of claim 2, further comprising removing the at least a portion of the data exchange information stored at the terminal device after transmitting the at least a portion of the data exchange information stored at the terminal device to the web server. 4. The method of claim 2, further comprising: receiving, at the terminal device, feedback information from the web server after transmitting the at least a portion of the data exchange information stored at the terminal device to the web server; and processing the feedback information received from the web server. 5. The method of claim 1, wherein the storing comprises storing the at least a portion of the data exchange information received at the terminal device on a memory of the terminal device. 6. The method of claim 1, wherein the storing comprises storing the at least a portion of the data exchange information received at the terminal device on a drive of the terminal device. 7. The method of claim 1, further comprising analyzing, at the terminal device, the at least a portion of the data exchange information received at the terminal device. 8. The method of claim 1, wherein the transfer speed information comprises one or more of a data download speed and a data upload speed. 9. 
The method of claim 1, wherein the data exchange information further comprises information corresponding to one or more of: a size of data exchanged between the terminal device and the web service, a type of a particular application to be executed on the terminal device, a type of the terminal device, a location of the terminal device, data regarding a network comprising the terminal device, an actual time, and data regarding errors related to the data exchange between the terminal device and the web service. 10. The method of claim 1, wherein the receiving of the data exchange information occurs for a predetermined period of time. 11. The method of claim 1, wherein the terminal device comprises a mobile telephone. 12. The method of claim 1, wherein the web service comprises a web application programming interface. 13. The method of claim 1, further comprising executing a particular application on the terminal device, wherein the terminal device comprises a personal digital assistant (“PDA”) terminal, and wherein the particular application comprises a native application installed on the PDA terminal. 14. A system for obtaining information related to an application, the system comprising: at least one terminal device, wherein each terminal device of the at least one terminal device comprises: a processor; and a memory, wherein the at least one terminal device is configured to receive data exchange information, comprising data transfer speed information, that is related to a data exchange between the at least one terminal device and at least one web service, and wherein the at least one terminal device is configured to store at least a portion of the received data exchange information. 15. The system of claim 14, further comprising at least one web server configured to receive data exchange information from the at least one terminal device. 16. 
The system of claim 14, wherein the at least one terminal device is configured to transmit at least a portion of the data exchange information stored thereon to at least one web server, and wherein the at least one terminal device is configured to remove the at least a portion of the data exchange information stored thereon after transmitting the at least a portion of the data exchange information stored thereon to the at least one web server. 17. The system of claim 14, wherein the at least one terminal device is configured to transmit at least a portion of the data exchange information stored thereon to at least one web server, wherein the at least one terminal device is configured to receive feedback information from the at least one web server after transmitting the at least a portion of the data exchange information stored thereon to the at least one web server, and wherein the at least one terminal device is configured to process the feedback information from the at least one web server. 18. Logic encoded in one or more non-transitory, computer-readable media, the logic comprising instructions that, when executed by a processor, are operable to: receive, at a terminal device, data exchange information, comprising data transfer speed information, that is related to a data exchange between the terminal device and a web service; and store, at the terminal device, at least a portion of the data exchange information received at the terminal device. 19. The logic of claim 18 further comprising instructions that, when executed by a processor, are operable to transmit at least a portion of the data exchange information stored at the terminal device to a web server. 20. 
The logic of claim 18 further comprising instructions that, when executed by a processor, are operable to: receive, at the terminal device, feedback information from the web server after transmitting the at least a portion of the data exchange information stored at the terminal device to the web server; and process the feedback information received from the web server.
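The flow these claims describe (record data-exchange information such as transfer speeds on the terminal device, transmit the stored records to a web server, remove them locally after transmission, and process any feedback) can be sketched as a small class. The `send_fn` callback stands in for the actual upload to the web server and is an assumption of this sketch, not part of the claimed system.

```python
import time

class ExchangeLogger:
    """Minimal sketch of the claimed flow: store data-exchange records
    on the terminal device, flush them to a web server via `send_fn`
    (a hypothetical upload callback), clear the local copy once
    transmitted, and return the server's feedback for processing."""

    def __init__(self, send_fn):
        self._records = []   # local store on the terminal device
        self._send = send_fn

    def record(self, download_bps, upload_bps, size_bytes):
        """Store one data-exchange record (claims 1, 8, 9)."""
        self._records.append({
            "download_bps": download_bps,  # data download speed
            "upload_bps": upload_bps,      # data upload speed
            "size_bytes": size_bytes,      # size of data exchanged
            "timestamp": time.time(),      # actual time of the exchange
        })

    def flush(self):
        """Transmit the stored records to the web server, then remove
        them from local storage (claims 2-3); the return value is the
        server's feedback information (claim 4)."""
        feedback = self._send(list(self._records))
        self._records.clear()
        return feedback
```

A usage example with a stubbed-out server: `ExchangeLogger(lambda recs: {"ok": True})` records entries, and after `flush()` the local store is empty while the feedback dictionary is available for processing.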
Methods, systems, non-transitory media comprising computer-readable instructions, and logic for obtaining information related to an application may include receiving, at a terminal device, data exchange information, including data transfer speed information, that is related to a data exchange between the terminal device and a web service. The method further may include storing, at the terminal device, at least a portion of the data exchange information received at the terminal device.1. A method for obtaining information related to an application, the method comprising: receiving, at a terminal device, data exchange information, comprising data transfer speed information, that is related to a data exchange between the terminal device and a web service; and storing, at the terminal device, at least a portion of the data exchange information received at the terminal device. 2. The method of claim 1, further comprising transmitting at least a portion of the data exchange information stored at the terminal device to a web server. 3. The method of claim 2, further comprising removing the at least a portion of the data exchange information stored at the terminal device after transmitting the at least a portion of the data exchange information stored at the terminal device to the web server. 4. The method of claim 2, further comprising: receiving, at the terminal device, feedback information from the web server after transmitting the at least a portion of the data exchange information stored at the terminal device to the web server; and processing the feedback information received from the web server. 5. The method of claim 1, wherein the storing comprises storing the at least a portion of the data exchange information received at the terminal device on a memory of the terminal device. 6. The method of claim 1, wherein the storing comprises storing the at least a portion of the data exchange information received at the terminal device on a drive of the terminal device. 7. 
The method of claim 1, further comprising analyzing, at the terminal device, the at least a portion of the data exchange information received at the terminal device. 8. The method of claim 1, wherein the transfer speed information comprises one or more of a data download speed and a data upload speed. 9. The method of claim 1, wherein the data exchange information further comprises information corresponding to one or more of: a size of data exchanged between the terminal device and the web service, a type of a particular application to be executed on the terminal device, a type of the terminal device, a location of the terminal device, data regarding a network comprising the terminal device, an actual time, and data regarding errors related to the data exchange between the terminal device and the web service. 10. The method of claim 1, wherein the receiving of the data exchange information occurs for a predetermined period of time. 11. The method of claim 1, wherein the terminal device comprises a mobile telephone. 12. The method of claim 1, wherein the web service comprises a web application programming interface. 13. The method of claim 1, further comprising executing a particular application on the terminal device, wherein the terminal device comprises a personal digital assistant (“PDA”) terminal, and wherein the particular application comprises a native application installed on the PDA terminal. 14. 
A system for obtaining information related to an application, the system comprising: at least one terminal device, wherein each terminal device of the at least one terminal device comprises: a processor; and a memory, wherein the at least one terminal device is configured to receive data exchange information, comprising data transfer speed information, that is related to a data exchange between the at least one terminal device and at least one web service, and wherein the at least one terminal device is configured to store at least a portion of the received data exchange information. 15. The system of claim 14, further comprising at least one web server configured to receive data exchange information from the at least one terminal device. 16. The system of claim 14, wherein the at least one terminal device is configured to transmit at least a portion of the data exchange information stored thereon to at least one web server, and wherein the at least one terminal device is configured to remove the at least a portion of the data exchange information stored thereon after transmitting the at least a portion of the data exchange information stored thereon to the at least one web server. 17. The system of claim 14, wherein the at least one terminal device is configured to transmit at least a portion of the data exchange information stored thereon to at least one web server, wherein the at least one terminal device is configured to receive feedback information from the at least one web server after transmitting the at least a portion of the data exchange information stored thereon to the at least one web server, and wherein the at least one terminal device is configured to process the feedback information from the at least one web server. 18. 
Logic encoded in one or more non-transitory, computer-readable media, the logic comprising instructions that, when executed by a processor, are operable to: receive, at a terminal device, data exchange information, comprising data transfer speed information, that is related to a data exchange between the terminal device and a web service; and store, at the terminal device, at least a portion of the data exchange information received at the terminal device. 19. The logic of claim 18 further comprising instructions that, when executed by a processor, are operable to transmit at least a portion of the data exchange information stored at the terminal device to a web server. 20. The logic of claim 18 further comprising instructions that, when executed by a processor, are operable to: receive, at the terminal device, feedback information from the web server after transmitting the at least a portion of the data exchange information stored at the terminal device to the web server; and process the feedback information received from the web server.
2,400
7,485
7,485
13,731,155
2,482
A reprogrammable modular imaging device with a control module and at least one input module connected to the control module. Image data is received by the input module for processing and transmission to the control module. A program is received by the control module to reprogram the processor.
1. A reprogrammable modular imaging device comprising: a control module; at least one input module having a processor and connectable to said control module; image data received by said at least one input module for processing and transmission to said control module; and a program received by said control module to reprogram the processor. 2. The device of claim 1 further comprising, the image data is raw image data. 3. The device of claim 1 further comprising the input module transmitting processed image data to the control module in a format readable by the control module. 4. The device of claim 3 wherein said processed image data is in a format readable by said control module. 5. The device of claim 1 further comprising a display connectable to said control module displaying processed and formatted image data. 6. The device of claim 1 further comprising said control module having an upgrade port for receiving the program. 7. The device of claim 1 wherein the input module receives the program from the control module. 8. The device of claim 1 further comprising: a camera connectable to the input module for transmitting image data. 9. The device of claim 8 further comprising: the camera having a processor; a program received by the control module to reprogram the processor of the camera. 10. The device of claim 1 wherein said reprogram reconfigures the processor. 11. The device of claim 1 further comprising: a soft feature enabled by the reprogram. 12. The device of claim 1 further comprising: a soft feature disabled by the reprogram. 13. The device of claim 1 further comprising: a module link connecting the control module to the input module. 14. The device of claim 1 further comprising: a network connection, the program received from said network connection. 15. The device of claim 11 wherein the network connection is wireless. 16. The device of claim 1 further comprising: a network connection, the program retrieved from said network connection. 17. 
The device of claim 1 further comprising: an alternate image source compatible with the processor upon the reprogram. 18. The device of claim 1 further comprising: a reprogram authorization received by the processor. 19. A reprogrammable modular imaging device comprising: a control module having a processor; at least one input module connectable to said control module; image data received by said control module for display formatting by the processor; and a program received by said control module to reprogram the processor. 20. A reprogrammable modular imaging device comprising: a control module having a processor; at least one input module having a processor and connectable to said control module; image data received by said at least one input module for processing and transmission to said control module; processed image data received by said control module for display formatting; and a program received by said control module to reprogram the processor of the control module and the processor of the input module.
A reprogrammable modular imaging device with a control module and at least one input module connected to the control module. Image data is received by the input module for processing and transmission to the control module. A program is received by the control module to reprogram the processor.1. A reprogrammable modular imaging device comprising: a control module; at least one input module having a processor and connectable to said control module; image data received by said at least one input module for processing and transmission to said control module; and a program received by said control module to reprogram the processor. 2. The device of claim 1 further comprising, the image data is raw image data. 3. The device of claim 1 further comprising the input module transmitting processed image data to the control module in a format readable by the control module. 4. The device of claim 3 wherein said processed image data is in a format readable by said control module. 5. The device of claim 1 further comprising a display connectable to said control module displaying processed and formatted image data. 6. The device of claim 1 further comprising said control module having an upgrade port for receiving the program. 7. The device of claim 1 wherein the input module receives the program from the control module. 8. The device of claim 1 further comprising: a camera connectable to the input module for transmitting image data. 9. The device of claim 8 further comprising: the camera having a processor; a program received by the control module to reprogram the processor of the camera. 10. The device of claim 1 wherein said reprogram reconfigures the processor. 11. The device of claim 1 further comprising: a soft feature enabled by the reprogram. 12. The device of claim 1 further comprising: a soft feature disabled by the reprogram. 13. The device of claim 1 further comprising: a module link connecting the control module to the input module. 14. 
The device of claim 1 further comprising: a network connection, the program received from said network connection. 15. The device of claim 11 wherein the network connection is wireless. 16. The device of claim 1 further comprising: a network connection, the program retrieved from said network connection. 17. The device of claim 1 further comprising: an alternate image source compatible with the processor upon the reprogram. 18. The device of claim 1 further comprising: a reprogram authorization received by the processor. 19. A reprogrammable modular imaging device comprising: a control module having a processor; at least one input module connectable to said control module; image data received by said control module for display formatting by the processor; and a program received by said control module to reprogram the processor. 20. A reprogrammable modular imaging device comprising: a control module having a processor; at least one input module having a processor and connectable to said control module; image data received by said at least one input module for processing and transmission to said control module; processed image data received by said control module for display formatting; and a program received by said control module to reprogram the processor of the control module and the processor of the input module.
2,400
7,486
7,486
15,389,343
2,491
Disclosed are various embodiments for malware detection by way of proxy servers. In one embodiment, a proxied request for a network resource from a network site is received from a client device by a proxy server application. The proxied request is analyzed to determine whether the proxied request includes protected information transmitted in an unsecured manner. It is then determined whether the network resource comprises malware based at least in part on an execution of the network resource or whether the proxied request includes the protected information transmitted in the unsecured manner. The proxy server application refrains from sending data generated by the network resource to the client device in response to the proxied request when the network resource is determined to comprise the malware.
1. A system, comprising: at least one computing device; and a proxy server application executable in the at least one computing device, wherein when executed the proxy server application causes the at least one computing device to at least: receive a proxied request from a client device for a network resource from a network site; analyze the proxied request to determine whether the proxied request includes protected information transmitted in an unsecured manner; determine that the client device is affected by malware based at least in part on the proxied request including the protected information transmitted in the unsecured manner; and refrain from sending data generated by the network resource to the client device in response to determining that the client device is affected by the malware. 2. The system of claim 1, wherein when executed the proxy server application further causes the at least one computing device to at least: receive the network resource from the network site; execute the network resource within an execution environment, the execution environment being configured to mimic a configuration of the client device; and determine whether the network resource comprises malware further based at least in part on the execution of the network resource. 3. The system of claim 1, wherein when executed the proxy server application further causes the at least one computing device to at least: determine a source of the malware based at least in part on the proxied request or telemetry data received from the client device by the proxy server application; and implement an action in response to determining the source of the malware. 4. The system of claim 3, wherein the action comprises blocking proxied requests by the client device for network resources of the network site corresponding to the source of the malware. 5. 
The system of claim 3, wherein the action comprises logging an indication of the source of the malware or generating an alert including the indication of the source of the malware. 6. The system of claim 3, wherein the action comprises determining that another client device is affected by the malware based at least in part on another proxied request received from the other client device, the other proxied request being associated with the source of the malware. 7. The system of claim 3, wherein the action comprises configuring a warning to be returned to another client device in response to proxied requests by the other client device for network resources of the network site corresponding to the source of the malware. 8. The system of claim 3, wherein the action comprises configuring processing of a proxied network resource received from the source of the malware by the proxy server application to remove the malware before returning the proxied network resource to the client device. 9. The system of claim 3, wherein the client device is configured to report the telemetry data to the proxy server application in response to receiving a proxied network resource from the proxy server application, and determining that the client device is affected by the malware further comprises: determining that the client device is affected by the malware in response to detecting at least one of: an absence of the telemetry data expected to be received from the client device, or an abnormality in the telemetry data received from the client device. 10. The system of claim 9, wherein the abnormality is detected from at least one of: memory consumption data, data storage usage data, network connection data, system configuration data, or process state data. 11. 
The system of claim 1, wherein the protected information transmitted in the unsecured manner comprises: a credit card number being sent in clear text via the proxied request, a password being sent in clear text via the proxied request, or predefined protected information associated with the client device. 12. The system of claim 1, wherein determining that the client device is affected by the malware further comprises detecting an absence of another proxied request that is expected to be received from the client device. 13. The system of claim 1, wherein determining that the client device is affected by the malware further comprises detecting that the proxied request is for canary data, the canary data being hidden from a user interface of the client device. 14. A method, comprising: receiving, via at least one of one or more computing devices, a proxied request from a client device for a network resource from a network site; determining, via at least one of the one or more computing devices, whether the network resource is correlated with malware based at least in part on a browsing history associated with at least one other client device; executing, within an execution environment of the one or more computing devices, the network resource in response to determining that the network resource is correlated with malware; and determining, via at least one of the one or more computing devices, whether the network resource comprises malware based at least in part on the execution of the network resource. 15. The method of claim 14, further comprising sending, via at least one of the one or more computing devices, data generated by the network resource to the client device in response to the proxied request when the network resource is determined not to comprise the malware. 16. 
The method of claim 14, further comprising sending, via at least one of the one or more computing devices, data encoding a warning to the client device in place of data generated by the network resource to the client device in response to the proxied request when the network resource is determined to comprise the malware. 17. The method of claim 14, further comprising: analyzing, via at least one of the one or more computing devices, the proxied request to determine whether the proxied request includes protected information transmitted in an unsecured manner; and determining, via at least one of the one or more computing devices, whether the network resource comprises malware based at least in part on whether the proxied request includes the protected information transmitted in the unsecured manner. 18. The method of claim 17, wherein the one or more computing devices include programmable hardware configured to perform the analyzing, the programmable hardware including at least one of: a field programmable gate array (FPGA), a field programmable object array (FPOA), or a memristor array. 19. The method of claim 14, further comprising: receiving, via at least one of the one or more computing devices, a user-submitted report identifying the network resource as comprising the malware; and determining, via at least one of the one or more computing devices, whether the network resource comprises malware based at least in part on the user-submitted report. 20. 
A non-transitory computer-readable medium embodying a program executable in at least one computing device, wherein when executed the program causes the at least one computing device to at least: receive a proxied request from a client device for a network resource from a network site; analyze the proxied request to determine whether the proxied request includes protected information transmitted in an unsecured manner; determine whether the network resource comprises malware based at least in part on an execution of the network resource or whether the proxied request includes the protected information transmitted in the unsecured manner; and send data generated by the network resource to the client device in response to the proxied request when the network resource is determined not to comprise the malware.
Disclosed are various embodiments for malware detection by way of proxy servers. In one embodiment, a proxied request for a network resource from a network site is received from a client device by a proxy server application. The proxied request is analyzed to determine whether the proxied request includes protected information transmitted in an unsecured manner. It is then determined whether the network resource comprises malware based at least in part on an execution of the network resource or whether the proxied request includes the protected information transmitted in the unsecured manner. The proxy server application refrains from sending data generated by the network resource to the client device in response to the proxied request when the network resource is determined to comprise the malware.1. A system, comprising: at least one computing device; and a proxy server application executable in the at least one computing device, wherein when executed the proxy server application causes the at least one computing device to at least: receive a proxied request from a client device for a network resource from a network site; analyze the proxied request to determine whether the proxied request includes protected information transmitted in an unsecured manner; determine that the client device is affected by malware based at least in part on the proxied request including the protected information transmitted in the unsecured manner; and refrain from sending data generated by the network resource to the client device in response to determining that the client device is affected by the malware. 2. 
The system of claim 1, wherein when executed the proxy server application further causes the at least one computing device to at least: receive the network resource from the network site; execute the network resource within an execution environment, the execution environment being configured to mimic a configuration of the client device; and determine whether the network resource comprises malware further based at least in part on the execution of the network resource. 3. The system of claim 1, wherein when executed the proxy server application further causes the at least one computing device to at least: determine a source of the malware based at least in part on the proxied request or telemetry data received from the client device by the proxy server application; and implement an action in response to determining the source of the malware. 4. The system of claim 3, wherein the action comprises blocking proxied requests by the client device for network resources of the network site corresponding to the source of the malware. 5. The system of claim 3, wherein the action comprises logging an indication of the source of the malware or generating an alert including the indication of the source of the malware. 6. The system of claim 3, wherein the action comprises determining that another client device is affected by the malware based at least in part on another proxied request received from the other client device, the other proxied request being associated with the source of the malware. 7. The system of claim 3, wherein the action comprises configuring a warning to be returned to another client device in response to proxied requests by the other client device for network resources of the network site corresponding to the source of the malware. 8. 
The system of claim 3, wherein the action comprises configuring processing of a proxied network resource received from the source of the malware by the proxy server application to remove the malware before returning the proxied network resource to the client device. 9. The system of claim 3, wherein the client device is configured to report the telemetry data to the proxy server application in response to receiving a proxied network resource from the proxy server application, and determining that the client device is affected by the malware further comprises: determining that the client device is affected by the malware in response to detecting at least one of: an absence of the telemetry data expected to be received from the client device, or an abnormality in the telemetry data received from the client device. 10. The system of claim 9, wherein the abnormality is detected from at least one of: memory consumption data, data storage usage data, network connection data, system configuration data, or process state data. 11. The system of claim 1, wherein the protected information transmitted in the unsecured manner comprises: a credit card number being sent in clear text via the proxied request, a password being sent in clear text via the proxied request, or predefined protected information associated with the client device. 12. The system of claim 1, wherein determining that the client device is affected by the malware further comprises detecting an absence of another proxied request that is expected to be received from the client device. 13. The system of claim 1, wherein determining that the client device is affected by the malware further comprises detecting that the proxied request is for canary data, the canary data being hidden from a user interface of the client device. 14. 
A method, comprising: receiving, via at least one of one or more computing devices, a proxied request from a client device for a network resource from a network site; determining, via at least one of the one or more computing devices, whether the network resource is correlated with malware based at least in part on a browsing history associated with at least one other client device; executing, within an execution environment of the one or more computing devices, the network resource in response to determining that the network resource is correlated with malware; and determining, via at least one of the one or more computing devices, whether the network resource comprises malware based at least in part on the execution of the network resource. 15. The method of claim 14, further comprising sending, via at least one of the one or more computing devices, data generated by the network resource to the client device in response to the proxied request when the network resource is determined not to comprise the malware. 16. The method of claim 14, further comprising sending, via at least one of the one or more computing devices, data encoding a warning to the client device in place of data generated by the network resource to the client device in response to the proxied request when the network resource is determined to comprise the malware. 17. The method of claim 14, further comprising: analyzing, via at least one of the one or more computing devices, the proxied request to determine whether the proxied request includes protected information transmitted in an unsecured manner; and determining, via at least one of the one or more computing devices, whether the network resource comprises malware based at least in part on whether the proxied request includes the protected information transmitted in the unsecured manner. 18. 
The method of claim 17, wherein the one or more computing devices include programmable hardware configured to perform the analyzing, the programmable hardware including at least one of: a field programmable gate array (FPGA), a field programmable object array (FPOA), or a memristor array. 19. The method of claim 14, further comprising: receiving, via at least one of the one or more computing devices, a user-submitted report identifying the network resource as comprising the malware; and determining, via at least one of the one or more computing devices, whether the network resource comprises malware based at least in part on the user-submitted report. 20. A non-transitory computer-readable medium embodying a program executable in at least one computing device, wherein when executed the program causes the at least one computing device to at least: receive a proxied request from a client device for a network resource from a network site; analyze the proxied request to determine whether the proxied request includes protected information transmitted in an unsecured manner; determine whether the network resource comprises malware based at least in part on an execution of the network resource or whether the proxied request includes the protected information transmitted in the unsecured manner; and send data generated by the network resource to the client device in response to the proxied request when the network resource is determined not to comprise the malware.
2,400
7,487
7,487
10,527,136
2,456
For a portal server system for managing a collection of associated portlets responsive to user requests to access a web application, the invention provides apparatus and methodology including: a portlet application session object for saving parameters from user requests of associated portlets; and, a portlet application communication client linked to said portlet application session means for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application.
1. Apparatus for a portal server system for managing a collection of associated portlets responsive to user requests to access a web application, the apparatus comprising: portlet application session means for saving parameters from user requests of associated portlets; and a portlet application communication client linked to said portlet application session means for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application. 2. The apparatus of claim 1 wherein said portlet application communication client stores user session information. 3. The apparatus of claim 1 wherein said portlet application session means comprises a portlet application session object. 4. The apparatus of claim 1 wherein said associated portlets have portlet request parameter maps for storing data and instructions from user requests to said portlets. 5. The apparatus of claim 1 wherein a portlet application is adapted to operate on said portal server system for managing said collection of associated portlets. 6. The apparatus of claim 1 wherein said portlet application communication client has access to a user session information store for storing user session information. 7. The apparatus of claim 1 wherein said portlet application communication client includes a user session information store for storing user session information. 8. The apparatus of claim 2 wherein said user session information includes user session information for mapping said user session information to a corresponding session of said web application. 9. The apparatus of claim 8 wherein said user session information is selected from the set comprising: user id, user credentials, language preferences, session timeout information, session id, for mapping said user session information to a corresponding session of said web application. 10. 
The apparatus of claim 1 wherein said portlet application communication client has a request buffer for storing requests from said associated portlets to enable said communication client to provide data and instructions for said web application. 11. The apparatus of claim 10 wherein said communication client has a request buffer for storing requests from said portlet request parameter maps of said associated portlets to enable said communication client to provide data and instructions for said web application. 12. A portlet application for managing a collection of associated portlets in a portal, for operating on a server providing access to a web application by a user; said associated portlets having portlet request parameter maps storing data and instructions from user requests to said portlets; a portlet application session object for said user for said associated portlets; a portlet application session data store controlled by said portlet application session object; a portlet application communication client linked to said portlet application data store for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; and said communication client having a request buffer for storing requests from portlet request parameter maps of said associated portlets to enable said communication client to provide data and instructions for said web application. 13. 
A portlet application communication client linkable to said portlet application data store of claim 12 for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; said portlet application communication client having a user session information store for storing user session information including selected information from the set of the following user session information: user id, user credentials, language preferences, session timeout information, session id, etc. for mapping said user session information to a corresponding session of said web application; said session timeout information including session timeout information of said portal server and said web application. 14. The apparatus of claim 13 further including synchronization means for said portlet application communication client for matching session timeouts between said portal server and said web application by reauthenticating said user if said web application times out before said portal server. 15. The apparatus of claim 14 further including synchronization means for said portlet application communication client for matching session timeouts between said portal server and said web application by reauthenticating said user from stored information in said user session information store if said web application times out before said portal server. 16. 
Apparatus for a portal server adapted to operate a web portal to provide access to a web application; having a portlet application operating on said portal server, for managing a collection of associated portlets; wherein said portlet application includes: means to initiate portlets on requests of a user to access said web application; means to manage a portlet application session object for said portlets; and, a portlet application session object data store controlled by said portlet application session object for saving parameters from user requests for associating said portlets with said portlet application session object, the apparatus comprising: a portlet application communication client linked to said portlet application data store for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; and said portlet application communication client having a user session information store for storing user session information including selected information from the set of the following user session information: user id, user credentials, language preferences, session timeout information, session id, etc. for mapping said user session information to a corresponding session of said web application. 17. The apparatus of claim 1, claim 9 or claim 16 wherein said session timeout information includes session timeout information of said portal server and said web application. 18. 
A portlet application, for managing a collection of associated portlets in a portal, for operating on a server providing access to a web application by a user; said associated portlets having portlet request parameter maps storing data and instructions from user requests to said portlets; a portlet application session object for said user for said associated portlets; a portlet application session data store controlled by said portlet application session object; a portlet application communication client linked to said portlet application data store for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; said communication client having a request buffer for storing requests from portlet request parameter maps of said associated portlets to enable said communication client to provide data and instructions for said web application. 19. The apparatus of claim 18 further including synchronization means for said portlet application communication client for matching session timeouts between said portal server and said web application by reauthenticating said user if said web application times out before said portal server. 20. A method for a portal server system for managing a collection of associated portlets responsive to user requests to access a web application, the method comprising: using portlet application session means for saving parameters from user requests of associated portlets; and using a portlet application communication client linked to said portlet application session means for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application. 21. An article comprising: a computer readable signal bearing medium; and computer program code means recorded on said medium adapted to implement the apparatus of any of claims 1-19.
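Claims 10-12 above describe a communication client that buffers portlet request parameter maps before conveying them to the backing web application. The following is a minimal Python sketch of that buffering behavior; the class and method names are hypothetical illustrations, not identifiers from the patent.

```python
class PortletCommunicationClient:
    """Illustrative request-buffering communication client (cf. claims 10-12)."""

    def __init__(self):
        # Request buffer: (portlet id, request parameter map) pairs held
        # until they can be conveyed to the web application.
        self.request_buffer = []

    def buffer_request(self, portlet_id, parameter_map):
        # Copy the portlet's request parameter map into the buffer so the
        # client can later provide its data and instructions downstream.
        self.request_buffer.append((portlet_id, dict(parameter_map)))

    def flush_to_web_application(self, send):
        # Convey every buffered portlet request to the web application
        # through the supplied transport callable, then empty the buffer.
        delivered = [send(pid, params) for pid, params in self.request_buffer]
        self.request_buffer.clear()
        return delivered
```

A portal would call `buffer_request` as each associated portlet receives user input, and `flush_to_web_application` when the aggregated requests are forwarded to the web application.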
For a portal server system for managing a collection of associated portlets responsive to user requests to access a web application, the invention provides apparatus and methodology including: a portlet application session object for saving parameters from user requests of associated portlets; and, a portlet application communication client linked to said portlet application session means for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application.1. Apparatus for a portal server system for managing a collection of associated portlets responsive to user requests to access a web application, the apparatus comprising: portlet application session means for saving parameters from user requests of associated portlets; and a portlet application communication client linked to said portlet application session means for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application. 2. The apparatus of claim 1 wherein said portlet application communication client stores user session information. 3. The apparatus of claim 1 wherein said portlet application session means comprises a portlet application session object. 4. The apparatus of claim 1 wherein said associated portlets have portlet request parameter maps for storing data and instructions from user requests to said portlets. 5. The apparatus of claim 1 wherein a portlet application is adapted to operate on said portal server system for managing said collection of associated portlets. 6. The apparatus of claim 1 wherein said portlet application communication client has access to a user session information store for storing user session information. 7. The apparatus of claim 1 wherein said portlet application communication client includes a user session information store for storing user session information. 8. 
The apparatus of claim 2 wherein said user session information includes user session information for mapping said user session information to a corresponding session of said web application. 9. The apparatus of claim 8 wherein said user session information is selected from the set comprising: user id, user credentials, language preferences, session timeout information, session id, for mapping said user session information to a corresponding session of said web application. 10. The apparatus of claim 1 wherein said portlet application communication client has a request buffer for storing requests from said associated portlets to enable said communication client to provide data and instructions for said web application. 11. The apparatus of claim 10 wherein said communication client has a request buffer for storing requests from said portlet request parameter maps of said associated portlets to enable said communication client to provide data and instructions for said web application. 12. A portlet application for managing a collection of associated portlets in a portal, for operating on a server providing access to a web application by a user; said associated portlets having portlet request parameter maps storing data and instructions from user requests to said portlets; a portlet application session object for said user for said associated portlets; a portlet application session data store controlled by said portlet application session object; a portlet application communication client linked to said portlet application data store for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; and said communication client having a request buffer for storing requests from portlet request parameter maps of said associated portlets to enable said communication client to provide data and instructions for said web application. 13. 
A portlet application communication client linkable to said portlet application data store of claim 12 for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; said portlet application communication client having a user session information store for storing user session information including selected information from the set of the following user session information: user id, user credentials, language preferences, session timeout information, session id, etc. for mapping said user session information to a corresponding session of said web application; said session timeout information including session timeout information of said portal server and said web application. 14. The apparatus of claim 13 further including synchronization means for said portlet application communication client for matching session timeouts between said portal server and said web application by reauthenticating said user if said web application times out before said portal server. 15. The apparatus of claim 14 further including synchronization means for said portlet application communication client for matching session timeouts between said portal server and said web application by reauthenticating said user from stored information in said user session information store if said web application times out before said portal server. 16. 
Apparatus for a portal server adapted to operate a web portal to provide access to a web application; having a portlet application operating on said portal server, for managing a collection of associated portlets; wherein said portlet application includes: means to initiate portlets on requests of a user to access said web application; means to manage a portlet application session object for said portlets; and, a portlet application session object data store controlled by said portlet application session object for saving parameters from user requests for associating said portlets with said portlet application session object, the apparatus comprising: a portlet application communication client linked to said portlet application data store for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; and said portlet application communication client having a user session information store for storing user session information including selected information from the set of the following user session information: user id, user credentials, language preferences, session timeout information, session id, etc. for mapping said user session information to a corresponding session of said web application. 17. The apparatus of claim 1, claim 9 or claim 16 wherein said session timeout information includes session timeout information of said portal server and said web application. 18. 
A portlet application, for managing a collection of associated portlets in a portal, for operating on a server providing access to a web application by a user; said associated portlets having portlet request parameter maps storing data and instructions from user requests to said portlets; a portlet application session object for said user for said associated portlets; a portlet application session data store controlled by said portlet application session object; a portlet application communication client linked to said portlet application data store for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application; said communication client having a request buffer for storing requests from portlet request parameter maps of said associated portlets to enable said communication client to provide data and instructions for said web application. 19. The apparatus of claim 18 further including synchronization means for said portlet application communication client for matching session timeouts between said portal server and said web application by reauthenticating said user if said web application times out before said portal server. 20. A method for a portal server system for managing a collection of associated portlets responsive to user requests to access a web application, the method comprising: using portlet application session means for saving parameters from user requests of associated portlets; and using a portlet application communication client linked to said portlet application session means for communicating between said associated portlets and said web application to convey user requests received from said associated portlets to said web application. 21. An article comprising: a computer readable signal bearing medium; and computer program code means recorded on said medium adapted to implement the apparatus of any of claims 1-19.
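Claims 13-15 cover matching session timeouts between the portal server and the web application, reauthenticating the user from stored session information when the web application session expires first. A hedged sketch of that check follows; the function, class, and parameter names are illustrative assumptions, not from the patent.

```python
def needs_reauthentication(portal_timeout_s, webapp_timeout_s, idle_s):
    """True when the web application session has expired while the portal
    session is still alive, i.e. the communication client should silently
    re-log-in the user from its stored credentials (cf. claim 15)."""
    return webapp_timeout_s <= idle_s < portal_timeout_s


class SessionSynchronizer:
    """Holds the stored user session information used for silent re-login."""

    def __init__(self, credentials, portal_timeout_s, webapp_timeout_s):
        self.credentials = credentials
        self.portal_timeout_s = portal_timeout_s
        self.webapp_timeout_s = webapp_timeout_s

    def on_request(self, idle_s, reauthenticate):
        # Re-login transparently before forwarding the request if the web
        # application side has timed out but the portal session is alive.
        if needs_reauthentication(self.portal_timeout_s,
                                  self.webapp_timeout_s, idle_s):
            reauthenticate(self.credentials)
```

The design point of claims 14-15 is that the mismatch is invisible to the user: the client reauthenticates from its user session information store instead of surfacing a login prompt.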
2,400
7,488
7,488
13,739,986
2,454
Segment generation describing usage patterns is described. In one or more implementations, user interaction with a browser is monitored to navigate through a plurality of web pages using a computing device. Data is extracted from web documents associated with the plurality of web pages automatically and without user intervention by one or more modules of the computing device, the data usable to describe a usage pattern involving the navigation through the plurality of web pages. A segment is generated which describes the usage pattern automatically and with user intervention, the segment configured to identify the usage pattern to target content.
1. A method implemented by one or more modules of a computing device, the method comprising: monitoring user interaction with a browser to navigate through a plurality of web pages using the computing device; extracting data from web documents associated with the plurality of web pages automatically and without user intervention by the one or more modules of the computing device, the data usable to describe a usage pattern involving the navigation through the plurality of web pages; and generating a segment which describes the usage pattern automatically and with user intervention, the segment configured to identify the usage pattern to target content. 2. A method as described in claim 1, further comprising determining whether respective said web pages support analytics tracking and wherein the extracting of the data from the web documents is performed responsive to a determination that the respective said web pages do support analytics tracking. 3. A method as described in claim 1, wherein the one or more modules are embedded as part of the browser. 4. A method as described in claim 1, wherein the extracting of the data is performed automatically and without user intervention from the web documents responsive to user selection of individual said webpages. 5. A method as described in claim 1, wherein the extracting of the data is performed automatically and without user intervention from the web documents responsive to the navigation to the individual said webpages. 6. A method as described in claim 1, wherein the usage pattern includes identification of a geographic location of the computing device. 7. A method as described in claim 1, wherein the usage pattern includes identification of hardware and software resources of the computing device. 8. A method as described in claim 1, wherein the usage pattern includes identification of network resources of a network used by the computing device to navigate through the plurality of webpages. 9. 
A method as described in claim 1, wherein the plurality of webpages is provided via different websites. 10. A method as described in claim 9, wherein at least one said website is a social network website having a link that is selectable to navigate to another said website. 11. A method as described in claim 1, wherein the targeted content includes advertisements, configuration of a website that includes one or more of the webpages, or content recommendations. 12. A method as described in claim 1, wherein the monitoring, the extracting, and the generating are performed locally at the computing device. 13. One or more computer-readable storage media comprising instructions stored thereon that, responsive to execution by a computing device, cause the computing device to perform operations comprising: generating a segment automatically and without user intervention that describes a usage pattern involving navigation through a plurality of web pages using a browser, the segment generated using data extracted from web documents associated with the plurality of web pages automatically and without user intervention by the browser; and exposing the segment by the browser to a web site to receive targeted content. 14. One or more computer-readable storage media as described in claim 13, wherein each said web page is associated with a respective one of the web documents. 15. One or more computer-readable storage media as described in claim 13, wherein the exposing is performed such that the web site configures at least one webpage of the website based on the usage pattern represented by the segment. 16. One or more computer-readable storage media as described in claim 13, wherein the targeted content includes advertisements, configuration of a website that includes one or more of the webpages, or content recommendations. 17. 
A system comprising: at least one module implemented at least partially in hardware and configured to generate a segment automatically and without user intervention that describes a usage pattern involving navigation through a plurality of web pages using a browser, the segment generated using data extracted from web documents associated with the plurality of web pages automatically and without user intervention by the browser; and one or more modules implemented at least partially in hardware and configured to filter analytics data using the segment to aggregate data that describes behavior of visitors having the usage pattern described by the segment. 18. A system as described in claim 17, wherein at least one said web document is associated as part of a respective said web page. 19. A system as described in claim 17, wherein the one or more modules are further configured to output a user interface that includes a display of the aggregate data. 20. A system as described in claim 19, wherein the one or more modules are further configured to output the user interface that is also configured to support specification of a campaign involving targeted content to the usage pattern identified by the segment.
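Claim 1's pipeline, monitoring navigation, extracting data from the visited pages' documents, then generating a segment that describes the usage pattern, can be sketched as below. The record fields (`site`, `category`) are hypothetical placeholders for whatever the extraction step pulls from a web document; nothing here is from the patent itself.

```python
from collections import Counter

def generate_segment(visited_pages):
    """Build a usage-pattern segment from per-page extracted data.

    ``visited_pages`` is a list of dicts produced by the extraction step;
    the resulting segment summarizes the pattern so it can be exposed to
    a web site to receive targeted content (cf. claim 13)."""
    categories = Counter(p.get("category", "unknown") for p in visited_pages)
    top_category, _ = categories.most_common(1)[0]
    return {
        "top_category": top_category,                         # dominant interest
        "pages_visited": len(visited_pages),                  # breadth of the pattern
        "sites": sorted({p["site"] for p in visited_pages}),  # cross-site pattern (cf. claim 9)
    }
```

Because the segment is generated locally (claim 12), only this compact summary, not the raw navigation history, would need to leave the computing device.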
Segment generation describing usage patterns is described. In one or more implementations, user interaction with a browser is monitored to navigate through a plurality of web pages using a computing device. Data is extracted from web documents associated with the plurality of web pages automatically and without user intervention by one or more modules of the computing device, the data usable to describe a usage pattern involving the navigation through the plurality of web pages. A segment is generated which describes the usage pattern automatically and with user intervention, the segment configured to identify the usage pattern to target content.1. A method implemented by one or more modules of a computing device, the method comprising: monitoring user interaction with a browser to navigate through a plurality of web pages using the computing device; extracting data from web documents associated with the plurality of web pages automatically and without user intervention by the one or more modules of the computing device, the data usable to describe a usage pattern involving the navigation through the plurality of web pages; and generating a segment which describes the usage pattern automatically and with user intervention, the segment configured to identify the usage pattern to target content. 2. A method as described in claim 1, further comprising determining whether respective said web pages support analytics tracking and wherein the extracting of the data from the web documents is performed responsive to a determination that the respective said web pages do support analytics tracking. 3. A method as described in claim 1, wherein the one or more modules are embedded as part of the browser. 4. A method as described in claim 1, wherein the extracting of the data is performed automatically and without user intervention from the web documents responsive to user selection of individual said webpages. 5. 
A method as described in claim 1, wherein the extracting of the data is performed automatically and without user intervention from the web documents responsive to the navigation to the individual said webpages. 6. A method as described in claim 1, wherein the usage pattern includes identification of a geographic location of the computing device. 7. A method as described in claim 1, wherein the usage pattern includes identification of hardware and software resources of the computing device. 8. A method as described in claim 1, wherein the usage pattern includes identification of network resources of a network used by the computing device to navigate through the plurality of webpages. 9. A method as described in claim 1, wherein the plurality of webpages is provided via different websites. 10. A method as described in claim 9, wherein at least one said website is a social network website having a link that is selectable to navigate to another said website. 11. A method as described in claim 1, wherein the targeted content includes advertisements, configuration of a website that includes one or more of the webpages, or content recommendations. 12. A method as described in claim 1, wherein the monitoring, the extracting, and the generating are performed locally at the computing device. 13. One or more computer-readable storage media comprising instructions stored thereon that, responsive to execution by a computing device, cause the computing device to perform operations comprising: generating a segment automatically and without user intervention that describes a usage pattern involving navigation through a plurality of web pages using a browser, the segment generated using data extracted from web documents associated with the plurality of web pages automatically and without user intervention by the browser; and exposing the segment by the browser to a web site to receive targeted content. 14. 
One or more computer-readable storage media as described in claim 13, wherein each said web page is associated with a respective one of the web documents. 15. One or more computer-readable storage media as described in claim 13, wherein the exposing is performed such that the web site configures at least one webpage of the website based on the usage pattern represented by the segment. 16. One or more computer-readable storage media as described in claim 13, wherein the targeted content includes advertisements, configuration of a website that includes one or more of the webpages, or content recommendations. 17. A system comprising: at least one module implemented at least partially in hardware and configured to generate a segment automatically and without user intervention that describes a usage pattern involving navigation through a plurality of web pages using a browser, the segment generated using data extracted from web documents associated with the plurality of web pages automatically and without user intervention by the browser; and one or more modules implemented at least partially in hardware and configured to filter analytics data using the segment to aggregate data that describes behavior of visitors having the usage pattern described by the segment. 18. A system as described in claim 17, wherein at least one said web document is associated as part of a respective said web page. 19. A system as described in claim 17, wherein the one or more modules are further configured to output a user interface that includes a display of the aggregate data. 20. A system as described in claim 19, wherein the one or more modules are further configured to output the user interface that is also configured to support specification of a campaign involving targeted content to the usage pattern identified by the segment.
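Claim 17's second module filters analytics data with a segment to aggregate behavior of visitors sharing the usage pattern. A minimal sketch under the same hypothetical record layout (a visitor record carries the pattern key it exhibited; field names are illustrative assumptions):

```python
def aggregate_matching_visitors(analytics_records, segment):
    """Keep only records from visitors exhibiting the segment's usage
    pattern, then aggregate a simple behavioral statistic (cf. claim 17)."""
    matching = [r for r in analytics_records
                if r.get("top_category") == segment["top_category"]]
    # Aggregate data describing the matching visitors' behavior; total
    # time on site stands in for whatever metric the UI would display.
    total_time = sum(r.get("time_on_site_s", 0) for r in matching)
    return {"visitors": len(matching), "total_time_s": total_time}
```

Claims 19-20 then put this aggregate in a user interface from which a targeted-content campaign against the segment can be specified.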
2,400
7,489
7,489
13,808,948
2,487
The embodiments herein provide a variable three-dimensional camera assembly for still photography. The assembly consists of a housing to encase two cameras for capturing and projecting left and right eye views. The telescopically movable arms are fixed to the housing and fixed with two objectives. The arms are moved manually or using a motorized control to enable the objectives to converge on a desired target simultaneously. The images captured by the cameras are passed through the two eyepieces to project a 3-D image. A single common control unit regulates the image processing units provided for simultaneous 3-D viewing of the images of the target object. A horizon parallel indicator system is arranged to hold the cameras horizontally. A multi position primary lens/prism/mirrors complex unit (LMPC4) is provided to adjust the convergence accurately to improve the 3-D effect.
1. A dual-camera assembly to render a variable three-dimensional view of a target-object, comprising: a left and right movable and telescoping arms with objectives having optical axes, operably connected to left and right portions of a housing and disposed horizontally along an axis A-A′, wherein the movable and telescopic arms are operable to move laterally and convergingly, to secure left and right perspectives of the target-object; a pair of stationary convergence lenses with an inter-spatial distance, disposed in the movable and telescoping arms of the left and right camera units and aligned with the optical axes of the objectives; at least a motorized control unit operably connected to said pair of movable and telescoping arms, wherein said control unit is arranged to operate the left and right movable and telescoping arms; a horizon parallel indicator operably connected to the housing, wherein said parallel indicator is disposed to balance the left and right portions of the housing; and left and right eye pieces, disposed in left and right portions of the housing and optically aligned with the objectives, to view simultaneously the three-dimensional view of the target-object, with an enhanced depth perception. 2. The dual-camera assembly according to claim 1, wherein the left and right movable and telescopic arms further comprising: a plurality of image processing units; at least a polarized filter disposed at the distal ends of the movable and telescoping arms; a plurality of barrels and elbows, wherein said elbows and barrels are disposed to vary inter pupillary distance (IPD), optically, and provide a variable degree of left and right convergence angles to the movable and telescoping arms; and a plurality of graticules disposed in the movable and telescoping arms, to synchronise the left and right perspectives of the target object; wherein said image processing units, polarized filter, barrels, elbows and graticules are optically aligned with the optical axes. 3. 
The dual-camera assembly according to claim 1, wherein the left and right movable and telescoping arms have a plurality of joints to move the left arm and the right arm outwardly and inwardly to focus on the target-object. 4. The dual-camera assembly according to claim 1, wherein the movable and telescoping arms are disposed with variable and independent target-object convergence positions. 5. The dual-camera assembly according to claim 4, wherein the variable target-object convergence positions are substantially perpendicular and at variable oblique angles to vertical axis of the housing. 6. The dual-camera assembly according to claim 4, wherein the variable target-object positions of the respective movable and telescoping arms are synchronous or asynchronous. 7. The dual-camera assembly according to claim 1, wherein the objectives alone are adjusted to focus on the target-object while keeping the left arm and the right arm in stationary condition. 8. The dual-camera assembly according to claim 1, wherein a length of an optical path of the left and right movable and telescoping arms is varied based on a distance of the target-object for enhancing the 3-dimensional effect. 9. The dual-camera assembly according to claim 1, wherein the motorized control unit is operated based on an output of the image processing unit to control the movement and bending angle of the left and right movable and telescoping arms. 10. The dual-camera assembly according to claim 1, wherein the movable and telescoping arms are metallic or of fiber optic material. 11. The dual-camera assembly according to claim 2, wherein the image processing units are mobile adjustable and include a lens, prism or mirror or a combination thereof. 12. The dual-camera assembly according to claim 1, wherein the left objective and right objective includes at least one of glasses and lenses arranged as graticules. 13. 
The dual-camera assembly according to claim 1, wherein left and right display screens are arranged on the housing in proximity to the eye pieces. 14. The dual-camera assembly according to claim 13, wherein each of the display screens is a liquid crystal display, light emitting diode display or plasma display. 15. The dual-camera assembly according to claim 1, wherein the left and right movable and telescoping arms do not share a single optical axis for focusing the image on a photographic film or for displaying the image on the left side display screen and the right side display screen or for recording and storing the image on a storage device. 16. The dual-camera assembly according to claim 1, further comprising a master-slave system for outdoor photography. 17. The dual-camera assembly according to claim 1, wherein the master camera controls the slave camera through a wired network, wireless network, an internet, Wide Area Network (WAN) and Local Area Network (LAN). 18. The dual-camera assembly according to claim 1, wherein the master camera includes a control panel for adjusting orientation of the slave camera in line with the target-object. 19. The dual-camera assembly according to claim 1, comprising a single camera with a plane polarized filter or a single charge coupled device (CCD) or an image sensor for the 2-D still photography.
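The convergence behavior in claims 1 and 4, rotating each arm's objective inward until both optical axes meet at the target, reduces to simple geometry: each objective sits half the inter-objective separation off the central axis. A sketch of that calculation follows; the function name, units, and use of the IPD value are assumptions for illustration, not details from the patent.

```python
import math

def convergence_angle_deg(ipd_mm, target_distance_mm):
    """Inward rotation of each arm's optical axis, in degrees, needed so
    both objectives converge on a target straight ahead at the given
    distance: small for far targets, larger for near ones."""
    # Each objective is offset ipd/2 from the central axis; the required
    # convergence angle is atan(offset / target distance).
    return math.degrees(math.atan2(ipd_mm / 2.0, target_distance_mm))
```

A motorized control unit could drive each arm to this angle as the target distance changes, consistent with the variable, independently adjustable convergence positions of claim 4.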
The embodiments herein provide a variable three-dimensional camera assembly for still photography. The assembly consists of a housing to encase two cameras for capturing and projecting left and right eye views. The telescopically movable arms are fixed to the housing and fixed with two objectives. The arms are moved manually or using a motorized control to enable the objectives to converge on a desired target simultaneously. The images captured by the cameras are passed through the two eyepieces to project a 3-D image. A single common control unit regulates the image processing units provided for simultaneous 3-D viewing of the images of the target object. A horizon parallel indicator system is arranged to hold the cameras horizontally. A multi position primary lens/prism/mirrors complex unit (LMPC4) is provided to adjust the convergence accurately to improve the 3-D effect.1. A dual-camera assembly to render a variable three-dimensional view of a target-object, comprising: a left and right movable and telescoping arms with objectives having optical axes, operably connected to left and right portions of a housing and disposed horizontally along an axis A-A′, wherein the movable and telescopic arms are operable to move laterally and convergingly, to secure left and right perspectives of the target-object; a pair of stationary convergence lenses with an inter-spatial distance, disposed in the movable and telescoping arms of the left and right camera units and aligned with the optical axes of the objectives; at least a motorized control unit operably connected to said pair of movable and telescoping arms, wherein said control unit is arranged to operate the left and right movable and telescoping arms; a horizon parallel indicator operably connected to the housing, wherein said parallel indicator is disposed to balance the left and right portions of the housing; and left and right eye pieces, disposed in left and right portions of the housing and optically aligned with the 
objectives, to view simultaneously the three-dimensional view of the target-object, with an enhanced depth perception. 2. The dual-camera assembly according to claim 1, wherein the left and right movable and telescopic arms further comprising: a plurality of image processing units; at least a polarized filter disposed at the distal ends of the movable and telescoping arms; a plurality of barrels and elbows, wherein said elbows and barrels are disposed to vary inter pupillary distance (IPD), optically, and provide a variable degree of left and right convergence angles to the movable and telescoping arms; and a plurality of graticules disposed in the movable and telescoping arms, to synchronise the left and right perspectives of the target object; wherein said image processing units, polarized filter, barrels, elbows and graticules are optically aligned with the optical axes. 3. The dual-camera assembly according to claim 1, wherein the left and right movable and telescoping arms have a plurality of joints to move the left arm and the right arm outwardly and inwardly to focus on the target-object. 4. The dual-camera assembly according to claim 1, wherein the movable and telescoping arms are disposed with variable and independent target-object convergence positions. 5. The dual-camera assembly according to claim 4, wherein the variable target-object convergence positions are substantially perpendicular and at variable oblique angles to vertical axis of the housing. 6. The dual-camera assembly according to claim 4, wherein the variable target-object positions of the respective movable and telescoping arms are synchronous or asynchronous. 7. The dual-camera assembly according to claim 1, wherein the objectives alone are adjusted to focus on the target-object while keeping the left arm and the right arm in stationary condition. 8. 
The dual-camera assembly according to claim 1, wherein a length of an optical path of the left and right movable and telescoping arms is varied based on a distance of the target-object for enhancing the 3-dimensional effect. 9. The dual-camera assembly according to claim 1, wherein the motorized control unit is operated based on an output of the image processing unit to control the movement and bending angle of the left and right movable and telescoping arms. 10. The dual-camera assembly according to claim 1, wherein the movable and telescoping arms are metallic or of fiber optic material. 11. The dual-camera assembly according to claim 2, wherein the image processing units are mobile adjustable and include a lens, prism or mirror or a combination thereof. 12. The dual-camera assembly according to claim 1, wherein the left objective and right objective include at least one of glasses and lenses arranged as graticules. 13. The dual-camera assembly according to claim 1, wherein left and right display screens are arranged on the housing in proximity to the eye pieces. 14. The dual-camera assembly according to claim 13, wherein each display screen is a liquid crystal display, light emitting diode display or plasma display. 15. The dual-camera assembly according to claim 1, wherein the left and right movable and telescoping arms do not share a single optical axis for focusing the image on a photographic film or for displaying the image on the left side display screen and the right side display screen or for recording and storing the image on a storage device. 16. The dual-camera assembly according to claim 1, further comprising a master-slave system for outdoor photography. 17. The dual-camera assembly according to claim 16, wherein the master camera controls the slave camera through a wired network, wireless network, an internet, Wide Area Network (WAN) or Local Area Network (LAN). 18. 
The dual-camera assembly according to claim 16, wherein the master camera includes a control panel for adjusting orientation of the slave camera in line with the target-object. 19. The dual-camera assembly according to claim 1, comprising a single camera with a plane polarized filter or a single charge coupled device (CCD) or an image sensor for 2-D still photography.
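The claims have the left and right arms toe in until both optical axes converge on the target-object. The required convergence angle follows from simple geometry: each axis must rotate from parallel by the arctangent of half the inter-axial separation over the target distance. A minimal sketch of that calculation (the function name and the 65 mm example separation are illustrative, not taken from the claims):

```python
import math

def convergence_angle_deg(inter_axial_mm: float, target_mm: float) -> float:
    """Angle each arm must toe in from parallel so that both optical
    axes meet at the target: atan(half the baseline / target distance)."""
    return math.degrees(math.atan((inter_axial_mm / 2.0) / target_mm))

# Example: 65 mm inter-axial separation, target 1 m away.
angle = convergence_angle_deg(65.0, 1000.0)
```

At these scales the angle grows roughly in inverse proportion to target distance, which is consistent with claim 8 varying the optical path with target distance to preserve the 3-D effect.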
2,400
7,490
7,490
12,491,876
2,424
Various embodiments facilitate voice control of a receiving device, such as a set-top box. In one embodiment, a voice enabled media presentation system (“VEMPS”) includes a receiving device and a remote-control device having an audio input device. The VEMPS is configured to obtain audio data via the audio input device, the audio data received from a user and representing a spoken command to control the receiving device. The VEMPS is further configured to determine the spoken command by performing speech recognition on the obtained audio data, and to control the receiving device based on the determined command. This abstract is provided to comply with rules requiring an abstract, and it is submitted with the intention that it will not be used to interpret or limit the scope or meaning of the claims.
1. A media presentation system, comprising: a remote-control device including multiple keys and an audio input device; and a set-top box wirelessly communicatively coupled to the remote-control device, wherein the media presentation system is configured to: obtain audio data via the audio input device, the audio data received from a user and representing a spoken command to control the set-top box; determine the spoken command by performing speech recognition upon the obtained audio data; control the set-top box in response to the determination of the spoken command; and control the set-top box in response to a user selection of one of the multiple keys of the remote-control device. 2. The media presentation system of claim 1 wherein the remote-control device is configured to transmit the obtained audio data to the set-top box, and wherein the set-top box is configured to perform the speech recognition upon the obtained audio data. 3. The media presentation system of claim 1 wherein the remote-control device is configured to perform at least some of the speech recognition upon the obtained audio data. 4. The media presentation system of claim 1 wherein the spoken command identifies programming, and wherein the set-top box is configured to present the identified programming in response to the determination of the spoken command. 5. A method of controlling a set-top box, comprising: wirelessly receiving audio data from a remote-control device, the audio data representing a spoken command uttered by a user into an audio input device of the remote-control device; determining the spoken command by performing speech recognition upon the received audio data; and controlling the set-top box based on the determined command. 6. The method of claim 5 wherein controlling the set-top box includes selecting programming identified by the spoken command and presenting the selected programming on a presentation device coupled to the set-top box. 7. 
The method of claim 5 wherein the spoken command includes at least one of: an identification of programming to be selected by the set-top box, a command to modify volume of audio output provided by the set-top box, a command to power up/down the set-top box, a request for help, a request to view an electronic program guide, a request to modify a view of an electronic program guide, and a request to view programming identified by an electronic program guide. 8. The method of claim 5, further comprising: receiving an indication that the user is speaking a voice command; and in response to the received indication, reducing audio output volume provided by the set-top box. 9. The method of claim 8 wherein the received indication is an initial portion of the received audio data. 10. The method of claim 8 wherein the received indication is a signal transmitted by the remote-control device, the signal generated in response to a key of the remote-control device being pressed by the user. 11. The method of claim 8 wherein the set-top box includes a digital video recorder, and wherein controlling the set-top box includes controlling operation of the digital video recorder. 12. The method of claim 5, further comprising: disambiguating a plurality of set-top box commands that correspond to the spoken command, by: determining, based on the spoken command, the plurality of set-top box commands; presenting the plurality of set-top box commands to the user; receiving from the user an indication of one of the plurality of set-top box commands; and controlling the set-top box using the one set-top box command. 13. The method of claim 12 wherein receiving the indication of the one set-top box command includes receiving an additional spoken command from the user. 14. 
The method of claim 5, further comprising: determining audio data that represents a voice prompt directing the user to provide a spoken command; and transmitting the determined audio data to an audio output device configured to play the voice prompt. 15. A method in a remote-control device that includes an audio input device and multiple keys, the method comprising: under control of the remote-control device: controlling the set-top box based on a command spoken by a user by: receiving audio data via the audio input device, the audio data representing the spoken command; and initiating speech recognition upon the received audio data to determine the spoken command; and controlling the set-top box in response to a user selection of one of the multiple keys of the remote-control device. 16. The method of claim 15 wherein controlling the set-top box based on the command spoken by the user includes at least one of selecting programming identified by the spoken command, adjusting audio output volume provided by the set-top box, controlling operation of a digital video recorder coupled with the set-top box, obtaining help regarding operation of the set-top box, and powering on/off the set-top box. 17. The method of claim 15 wherein initiating speech recognition includes: transmitting the received audio data to the set-top box; and causing the set-top box to begin speech recognition upon the transmitted audio data. 18. The method of claim 15 wherein initiating speech recognition includes performing the speech recognition upon the received audio data, and wherein controlling the set-top box based on the command spoken by the user includes transmitting a command to control the set-top box, the transmitted command based on the spoken command. 19. The method of claim 15, further comprising: transmitting an indication to the set-top box to reduce audio output volume. 20. 
The method of claim 19 wherein the transmitted indication is a signal generated by the remote-control device in response to a key pressed by the user. 21. The method of claim 19 wherein the transmitted indication is a first portion of the received audio data. 22. The method of claim 15, further comprising: receiving from the set-top box audio data that represents a voice prompt directing the user to provide a spoken command; and playing the voice prompt.
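Claim 12 resolves a recognized utterance to possibly several set-top box commands and asks the user to disambiguate when more than one matches. A minimal sketch of that lookup step, assuming a hypothetical phrase-to-command table (all names and phrases are illustrative, not from the claims):

```python
# Hypothetical command table; phrases and command codes are illustrative.
COMMANDS = {
    "volume up": "VOL_UP",
    "volume down": "VOL_DOWN",
    "guide": "SHOW_EPG",
    "power": "POWER_TOGGLE",
}

def resolve(spoken: str) -> list[str]:
    """Return every set-top box command whose trigger phrase appears in
    the recognized utterance; more than one entry means the system must
    present the candidates to the user for disambiguation."""
    spoken = spoken.lower()
    return [cmd for phrase, cmd in COMMANDS.items() if phrase in spoken]

matches = resolve("turn the volume up please")
```

A single match can be executed directly; an empty or multi-element result corresponds to the voice-prompt and disambiguation flows of claims 12-14.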
2,400
7,491
7,491
15,653,975
2,415
A method in a wireless device for reporting Channel State Information (CSI). The wireless device is comprised in a wireless communications system. The method includes receiving a CSI process configuration and a request for CSI information from a network node. The method further includes reporting CSI for one or more CSI processes. The CSI reflects the state of the channel for a CSI reference resource. According to the method, the CSI reference resource is determined based on the number of configured CSI processes. Related devices are also disclosed.
1. A method, in a wireless device, for reporting Channel State Information (CSI), the wireless device being comprised in a wireless communications system, the method comprising: receiving a CSI process configuration and a request for CSI from a network node; determining a CSI reference resource based on a number of configured CSI processes; and reporting CSI for one or more CSI processes, wherein the CSI reflects a state of a channel for the CSI reference resource. 2. The method of claim 1, wherein the determining the CSI reference resource comprises determining the CSI reference resource further based on a number of configured CSI-RS resources. 3. The method of claim 1, further comprising: prioritizing a first CSI process over a second CSI process; determining a rank indicator and/or a precoding matrix indicator for the first CSI process; and reusing the determined rank indicator and/or precoding matrix indicator for the second CSI process. 4. The method of claim 1, further comprising: performing measurements on reference signal resources corresponding to the configured CSI processes; and determining the CSI based on the measurements. 5. The method of claim 1, wherein the CSI is determined based on measurements performed in and/or prior to the CSI reference resource. 6. The method of claim 1, wherein the request for CSI is a request for a periodic CSI report. 7. The method of claim 1, wherein the request for CSI is a request for an aperiodic CSI report. 8. The method of claim 1, further comprising: performing measurements on interference measurement resources corresponding to the configured CSI processes; and determining the CSI based on the measurements. 9. The method of claim 1, wherein each CSI process corresponds to a reference signal resource and an interference measurement resource. 10. 
A wireless device for reporting Channel State Information (CSI), the wireless device comprising: memory comprising instructions; processing circuitry operatively connected to the memory and configured, when executing the instructions, to cause the wireless device to: receive, from a network node, a CSI process configuration and a request for CSI; determine a CSI reference resource based on a number of configured CSI processes; and report CSI for one or more CSI processes, wherein the CSI reflects a state of a channel for the CSI reference resource. 11. The wireless device of claim 10, wherein the wireless device is a user equipment. 12. The wireless device of claim 10, wherein the processing circuitry, when executing the instructions, is further configured to determine the CSI reference resource further based on a number of configured CSI-RS resources. 13. The wireless device of claim 10, wherein the processing circuitry, when executing the instructions, is further configured to: prioritize a first CSI process over a second CSI process; determine a rank indicator and/or a precoding matrix indicator for the first CSI process; and reuse the determined rank indicator and/or precoding matrix indicator for the second CSI process. 14. The wireless device of claim 10, wherein the processing circuitry, when executing the instructions, is further configured to: perform measurements on reference signal resources corresponding to the configured CSI processes; and determine the CSI based on the measurements. 15. The wireless device of claim 10, wherein the processing circuitry, when executing the instructions, is further configured to determine the CSI based on measurements performed in and/or prior to the CSI reference resource. 16. 
The wireless device of claim 10, wherein the processing circuitry, when executing the instructions, is further configured to: perform measurements on interference measurement resources corresponding to the configured CSI processes; and determine the CSI based on the measurements. 17. A computer program product stored on a non-transitory, computer readable medium and comprising program instructions which, when executed by at least one processor, cause the at least one processor to: receive, from a network node, a CSI process configuration and a request for CSI; determine a CSI reference resource based on a number of configured CSI processes; and report CSI for one or more CSI processes, wherein the CSI reflects a state of a channel for the CSI reference resource.
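The independent claims determine the CSI reference resource from the number of configured CSI processes: intuitively, more processes mean more computation per report, so the reference resource is placed further in the past to give the device time to process. A minimal sketch of such a rule, with illustrative delay values rather than the actual 3GPP tables:

```python
def csi_reference_subframe(n: int, num_csi_processes: int) -> int:
    """Pick the CSI reference resource as subframe n - delta, where the
    minimum processing delay delta grows with the number of configured
    CSI processes. The delta values here are illustrative placeholders,
    not the standardized ones."""
    delta = 4 if num_csi_processes <= 1 else 5
    return n - delta
```

The same shape extends naturally to the dependent claims, e.g. making delta also a function of the number of configured CSI-RS resources (claim 2).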
2,400
7,492
7,492
14,878,226
2,425
Embodiments of the present invention disclose an integrated set-top box for recording a voice communication and/or a voicemail. In one embodiment, the integrated set-top box automatically detects communications associated with a monitored communication line and records and stores the voice communications in a data storage unit of the integrated set-top box. In another embodiment, the integrated set-top box may provide voicemail capabilities in addition to other features.
1. A method of recording a voice communication on a set-top box, the method comprising: decoding a television signal to display a television program on a display unit associated with the set-top box; monitoring, by the set-top box, a communication line to detect a connected phone call; recording, by the set-top box, audio of a live conversation associated with the connected phone call on the communication line being monitored; storing the audio in a data storage unit associated with the set-top box; and generating a first user interface that displays a list of a plurality of recorded voice communications along with information relating to each of the plurality of recorded voice communications, the plurality of recorded voice communications comprising the recorded audio of the live conversation. 2. The method of claim 1, further comprising: capturing caller identification associated with the connected phone call; and storing the caller identification along with the audio in the data storage unit. 3. The method of claim 2, wherein the connected phone call is an outbound phone call, and wherein capturing caller identification information comprises interpreting dial pulses or DTMF signals to determine the caller information. 4. The method of claim 1, further comprising: responsive to receiving a user selected voice communication from the list of recorded voice communications, playing the user selected voice communication using audio output speakers of the display unit. 5. The method of claim 1, further comprising: monitoring the communication line to determine whether an incoming call is answered within a specified time; responsive to a determination that the incoming call is not answered within the specified time, playing a voicemail greeting; and recording and storing a voicemail message in the data storage unit associated with the set-top box. 6. 
The method of claim 1, further comprising: parsing each of the recorded voice communications to enable a user to search for specific terms spoken during the live conversation. 7. The method of claim 1, further comprising: transmitting an audio file of one or more of the recorded voice communications to a user-specified e-mail address. 8. The method of claim 1, wherein monitoring a communication line comprises monitoring a voltage level traveling through the communication line. 9. The method of claim 1, wherein the set-top box comprises an analog telephone adapter, and wherein the method comprises communicating directly with a Voice over Internet Protocol (VoIP) server. 10. The method of claim 1, further comprising detecting, by the set-top box, that the connected phone call is associated with a caller identifier specified by user preferences stored at the set-top box, and wherein the audio of the live conversation is recorded in response to detecting that the connected phone call is associated with the specified caller identifier. 11. 
A set-top box configured to record voice communications, the set-top box comprising: memory for storing computer executable instructions; a data storage unit for storing recorded voice communications and recorded media content files; a line monitoring module for detecting a connected call on a communication line; and a processing unit, wherein the computer executable instructions are executable to: decode a television signal to display a television program on a display unit associated with the set-top box; monitor a communication line to detect a connected phone call; record audio of a live conversation associated with the connected phone call on the communication line being monitored; store the audio in the data storage unit; and generate a first user interface that displays a list of a plurality of recorded voice communications along with information relating to each of the plurality of recorded voice communications, the plurality of recorded voice communications comprising the recorded audio of the live conversation. 12. The set-top box of claim 11, wherein the computer executable instructions are further executable to capture caller identification associated with the connected phone call; and store the caller identification along with the audio in the data storage unit. 13. The set-top box of claim 12, wherein the connected phone call is an outbound phone call, and wherein capturing caller identification information comprises interpreting dial pulses or DTMF signals to determine the caller information. 14. The set-top box of claim 11, wherein the computer executable instructions are further executable to: responsive to receiving a user selected voice communication from the list of recorded voice communications, play the user selected voice communication using audio output speakers of the display unit. 15. 
The set-top box of claim 11, wherein the computer executable instructions are further executable to: monitor the communication line to determine whether an incoming call is answered within a specified time; responsive to a determination that the incoming call is not answered within the specified time, play a voicemail greeting; and record and store a voicemail message in the data storage unit associated with the set-top box. 16. The set-top box of claim 11, wherein the computer executable instructions are further executable to: parse each of the recorded voice communications to enable a user to search for specific terms spoken during the live conversation. 17. The set-top box of claim 11, wherein the computer executable instructions are further executable to: transmit an audio file of one or more of the recorded voice communications to a user-specified e-mail address. 18. The set-top box of claim 11, wherein monitoring a communication line comprises monitoring a voltage level traveling through the communication line. 19. The set-top box of claim 11, wherein the set-top box further comprises an analog telephone adapter, and wherein the computer executable instructions are further executable to communicate directly with a Voice over Internet Protocol (VoIP) server. 20. The set-top box of claim 11, wherein the computer executable instructions are further executable to detect that the connected phone call is associated with a caller identifier specified by user preferences stored at the set-top box, and wherein the audio of the live conversation is recorded in response to detecting that the connected phone call is associated with the specified caller identifier.
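Claims 8 and 18 detect a connected call by monitoring the voltage level on the communication line. On an analog line the DC level drops sharply when a handset goes off-hook (roughly 48 V idle versus well under 15 V during a call), so thresholding a short window of voltage samples is enough for a sketch; the threshold and sample values here are illustrative, not from the claims:

```python
# Illustrative POTS thresholds: about 48 V DC on-hook, under ~15 V off-hook.
OFF_HOOK_MAX_V = 15.0

def classify(samples: list[float]) -> str:
    """Classify a short window of line-voltage samples as 'on_hook'
    (line idle) or 'off_hook' (a call is connected and recording of
    the live conversation may begin)."""
    avg = sum(samples) / len(samples)
    return "off_hook" if avg < OFF_HOOK_MAX_V else "on_hook"

state = classify([7.8, 8.1, 7.9])
```

Averaging over a window rather than acting on a single sample keeps the monitor from reacting to ring voltage or momentary transients on the line.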
Embodiments of the present invention disclose an integrated set-top box for recording a voice communication and/or a voicemail. In one embodiment, the integrated set-top box automatically detects communications associated with a monitored communication line and records and stores the voice communications in a data storage unit of the integrated set-top box. In another embodiment, the integrated set-top box may provide voicemail capabilities in addition to other features. 1. A method of recording a voice communication on a set-top box, the method comprising: decoding a television signal to display a television program on a display unit associated with the set-top box; monitoring, by the set-top box, a communication line to detect a connected phone call; recording, by the set-top box, audio of a live conversation associated with the connected phone call on the communication line being monitored; storing the audio in a data storage unit associated with the set-top box; and generating a first user interface that displays a list of a plurality of recorded voice communications along with information relating to each of the plurality of recorded voice communications, the plurality of recorded voice communications comprising the recorded audio of the live conversation. 2. The method of claim 1, further comprising: capturing caller identification associated with the connected phone call; and storing the caller identification along with the audio in the data storage unit. 3. The method of claim 2, wherein the connected phone call is an outbound phone call, and wherein capturing caller identification information comprises interpreting dial pulses or DTMF signals to determine the caller information. 4. The method of claim 1, further comprising: responsive to receiving a user selected voice communication from the list of recorded voice communications, playing the user selected voice communication using audio output speakers of the display unit. 5. 
The method of claim 1, further comprising: monitoring the communication line to determine whether an incoming call is answered within a specified time; responsive to a determination that the incoming call is not answered within the specified time, playing a voicemail greeting; and recording and storing a voicemail message in the data storage unit associated with the set-top box. 6. The method of claim 1, further comprising: parsing each of the recorded voice communications to enable a user to search for specific terms spoken during the live conversation. 7. The method of claim 1, further comprising: transmitting an audio file of one or more of the recorded voice communications to a user-specified e-mail address. 8. The method of claim 1, wherein monitoring a communication line comprises monitoring a voltage level traveling through the communication line. 9. The method of claim 1, wherein the set-top box comprises an analog telephone adapter, and wherein the method comprises communicating directly with a Voice over Internet Protocol (VoIP) server. 10. The method of claim 1, further comprising detecting, by the set-top box, that the connected phone call is associated with a caller identifier specified by user preferences stored at the set-top box, and wherein the audio of the live conversation is recorded in response to detecting that the connected phone call is associated with the specified caller identifier. 11. 
A set-top box configured to record voice communications, the set-top box comprising: memory for storing computer executable instructions; a data storage unit for storing recorded voice communications and recorded media content files; a line monitoring module for detecting a connected call on a communication line; and a processing unit, wherein the computer executable instructions are executable to: decode a television signal to display a television program on a display unit associated with the set-top box; monitor a communication line to detect a connected phone call; record audio of a live conversation associated with the connected phone call on the communication line being monitored; store the audio in the data storage unit; and generate a first user interface that displays a list of a plurality of recorded voice communications along with information relating to each of the plurality of recorded voice communications, the plurality of recorded voice communications comprising the recorded audio of the live conversation. 12. The set-top box of claim 11, wherein the computer executable instructions are further executable to capture caller identification associated with the connected phone call; and store the caller identification along with the audio in the data storage unit. 13. The set-top box of claim 12, wherein the connected phone call is an outbound phone call, and wherein capturing caller identification information comprises interpreting dial pulses or DTMF signals to determine the caller information. 14. The set-top box of claim 11, wherein the computer executable instructions are further executable to: responsive to receiving a user selected voice communication from the list of recorded voice communications, play the user selected voice communication using audio output speakers of the display unit. 15. 
The set-top box of claim 11, wherein the computer executable instructions are further executable to: monitor the communication line to determine whether an incoming call is answered within a specified time; responsive to a determination that the incoming call is not answered within the specified time, play a voicemail greeting; and record and store a voicemail message in the data storage unit associated with the set-top box. 16. The set-top box of claim 11, wherein the computer executable instructions are further executable to: parse each of the recorded voice communications to enable a user to search for specific terms spoken during the live conversation. 17. The set-top box of claim 11, wherein the computer executable instructions are further executable to: transmit an audio file of one or more of the recorded voice communications to a user-specified e-mail address. 18. The set-top box of claim 11, wherein monitoring a communication line comprises monitoring a voltage level traveling through the communication line. 19. The set-top box of claim 11, wherein the set-top box further comprises an analog telephone adapter, and wherein the computer executable instructions are further executable to communicate directly with a Voice over Internet Protocol (VoIP) server. 20. The set-top box of claim 11, wherein the computer executable instructions are further executable to detect that the connected phone call is associated with a caller identifier specified by user preferences stored at the set-top box, and wherein the audio of the live conversation is recorded in response to detecting that the connected phone call is associated with the specified caller identifier.
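Claim 18 (and method claim 8) recites detecting a connected call by "monitoring a voltage level traveling through the communication line." The sketch below illustrates that idea for an analog (POTS) line; it is not the patent's implementation, and the threshold values, function names, and the three-state classification are all illustrative assumptions (idle analog lines commonly sit near 48 V DC, and the voltage drops sharply when a handset goes off hook).

```python
# Hypothetical sketch of voltage-based line monitoring (not the patented design).
# Thresholds are typical for analog POTS lines but are assumptions here.

ON_HOOK_MIN_V = 40.0   # an idle analog line is commonly ~48 V DC
OFF_HOOK_MAX_V = 15.0  # voltage drops to roughly 5-10 V on a connected call

def line_state(voltage: float) -> str:
    """Map a measured DC line voltage to a coarse call state."""
    if voltage >= ON_HOOK_MIN_V:
        return "on_hook"
    if voltage <= OFF_HOOK_MAX_V:
        return "off_hook"        # connected call; recording could start here
    return "ringing_or_unknown"  # ringing superimposes AC; treat as indeterminate
```

A set-top box polling the line would start recording on a transition into `off_hook` and stop on a return to `on_hook`.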
2,400
7,493
7,493
13,993,841
2,483
An encoded media file or stream may include video analytics data. This data may include information about the objects depicted in the media.
1. A method comprising: storing information about video analytics of media in association with the encoded media. 2. The method of claim 1 including providing a frame to indicate what type of video analytics information is included with the encoded media. 3. The method of claim 2 including providing a plurality of selectable analytics types for encoding. 4. The method of claim 1 including providing a frame to identify objects within the encoded media. 5. The method of claim 4 wherein providing a frame to identify objects includes identifying a frame of encoded media, identifying objects in said encoded media frame, and providing descriptors that give information about identified objects. 6. The method of claim 1 including providing a frame to indicate the movement of objects being tracked in the media. 7. The method of claim 6 including providing a confidence indicator to indicate how certain is an identification of an object in the media. 8. The method of claim 6 wherein providing a frame to indicate movement includes indicating a frame of encoded media, identifying an object by an identifier, indicating a tracked metric and a count of frames in which an object is depicted. 9. The method of claim 1 including providing a frame for metadata about objects depicted in the media. 10. The method of claim 9 wherein providing a frame for metadata includes providing metadata to enable a user to find more information about an object depicted in an encoded frame while viewing the encoded frame. 11. The method of claim 1 including providing a frame with analytics summary information. 12. A non-transitory computer readable medium storing instructions that enable a computer to: store data about video analytics of media in association with the encoded media. 13. The medium of claim 12 further storing instructions to provide a frame to indicate what type of video analytics information is included with the encoded media. 14. 
The medium of claim 13 further storing instructions to provide a plurality of selectable analytics types for encoding. 15. The medium of claim 12 further storing instructions to provide a frame within the analytics information to identify objects within the encoded media. 16. The medium of claim 12 further storing instructions to provide a frame with encoded media to indicate the movement of objects being tracked in the media. 17. The medium of claim 12 further storing instructions to provide a frame in the information about the video analytics for metadata about objects depicted in the media. 18. The medium of claim 12 further storing instructions to provide a summary of the analytics information stored in association with the encoded media. 19. The medium of claim 16 further storing instructions to provide a confidence indicator to indicate how certain is an identification of an object in the media. 20. An encoder comprising: a processor to store encoded media, together with video analytics information for that encoded media; and a memory coupled to said processor. 21. The encoder of claim 20, said processor to provide video analytics information indicating what type of video analytics information is included in the encoded media. 22. The encoder of claim 21, said processor to provide a plurality of selectable analytics types for encoding. 23. The encoder of claim 20, said processor to provide a frame to identify objects within the encoded media. 24. The encoder of claim 20, said processor to provide a frame to indicate the movement of objects being tracked in the media. 25. The encoder of claim 24, said processor to provide a confidence indicator indicating how certain is an identification of an object in the media. 26. The encoder of claim 20, said processor to provide a frame for metadata about objects depicted in the media. 27. The encoder of claim 20, said processor to provide a frame with analytics summary information. 28. 
The encoder of claim 20, said processor to provide a frame indicating what type of video analytics information is included with the encoded media, a frame identifying objects within the encoded media, a frame indicating the movement of objects being tracked in the media, a frame for metadata about objects depicted in the media, and a frame with analytics summary information for each of said analytics frames.
An encoded media file or stream may include video analytics data. This data may include information about the objects depicted in the media. 1. A method comprising: storing information about video analytics of media in association with the encoded media. 2. The method of claim 1 including providing a frame to indicate what type of video analytics information is included with the encoded media. 3. The method of claim 2 including providing a plurality of selectable analytics types for encoding. 4. The method of claim 1 including providing a frame to identify objects within the encoded media. 5. The method of claim 4 wherein providing a frame to identify objects includes identifying a frame of encoded media, identifying objects in said encoded media frame, and providing descriptors that give information about identified objects. 6. The method of claim 1 including providing a frame to indicate the movement of objects being tracked in the media. 7. The method of claim 6 including providing a confidence indicator to indicate how certain is an identification of an object in the media. 8. The method of claim 6 wherein providing a frame to indicate movement includes indicating a frame of encoded media, identifying an object by an identifier, indicating a tracked metric and a count of frames in which an object is depicted. 9. The method of claim 1 including providing a frame for metadata about objects depicted in the media. 10. The method of claim 9 wherein providing a frame for metadata includes providing metadata to enable a user to find more information about an object depicted in an encoded frame while viewing the encoded frame. 11. The method of claim 1 including providing a frame with analytics summary information. 12. A non-transitory computer readable medium storing instructions that enable a computer to: store data about video analytics of media in association with the encoded media. 13. 
The medium of claim 12 further storing instructions to provide a frame to indicate what type of video analytics information is included with the encoded media. 14. The medium of claim 13 further storing instructions to provide a plurality of selectable analytics types for encoding. 15. The medium of claim 12 further storing instructions to provide a frame within the analytics information to identify objects within the encoded media. 16. The medium of claim 12 further storing instructions to provide a frame with encoded media to indicate the movement of objects being tracked in the media. 17. The medium of claim 12 further storing instructions to provide a frame in the information about the video analytics for metadata about objects depicted in the media. 18. The medium of claim 12 further storing instructions to provide a summary of the analytics information stored in association with the encoded media. 19. The medium of claim 16 further storing instructions to provide a confidence indicator to indicate how certain is an identification of an object in the media. 20. An encoder comprising: a processor to store encoded media, together with video analytics information for that encoded media; and a memory coupled to said processor. 21. The encoder of claim 20, said processor to provide video analytics information indicating what type of video analytics information is included in the encoded media. 22. The encoder of claim 21, said processor to provide a plurality of selectable analytics types for encoding. 23. The encoder of claim 20, said processor to provide a frame to identify objects within the encoded media. 24. The encoder of claim 20, said processor to provide a frame to indicate the movement of objects being tracked in the media. 25. The encoder of claim 24, said processor to provide a confidence indicator indicating how certain is an identification of an object in the media. 26. 
The encoder of claim 20, said processor to provide a frame for metadata about objects depicted in the media. 27. The encoder of claim 20, said processor to provide a frame with analytics summary information. 28. The encoder of claim 20, said processor to provide a frame indicating what type of video analytics information is included with the encoded media, a frame identifying objects within the encoded media, a frame indicating the movement of objects being tracked in the media, a frame for metadata about objects depicted in the media, and a frame with analytics summary information for each of said analytics frames.
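The claims above describe several kinds of analytics "frames" stored in association with encoded media: a type frame (claim 2), object frames with descriptors (claims 4-5), movement frames with a tracked metric, frame count, and confidence indicator (claims 6-8), per-object metadata (claims 9-10), and a summary frame (claim 11). The following sketch shows one possible in-memory layout for such a track; the class and field names, and the use of Python dataclasses, are illustrative assumptions, not the patent's encoding format.

```python
# Hypothetical container for analytics frames stored alongside encoded media.
# All names and types are assumptions; the claims only require the frame kinds.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ObjectFrame:
    media_frame: int          # which encoded-media frame the objects appear in
    objects: Dict[int, str]   # object id -> descriptor (claim 5)

@dataclass
class MovementFrame:
    media_frame: int
    object_id: int
    tracked_metric: str       # e.g. "direction" (claim 8)
    frame_count: int          # count of frames in which the object is depicted
    confidence: float         # how certain the identification is (claim 7)

@dataclass
class AnalyticsTrack:
    analytics_types: List[str]                   # type frame (claim 2)
    object_frames: List[ObjectFrame] = field(default_factory=list)
    movement_frames: List[MovementFrame] = field(default_factory=list)
    metadata: Dict[int, str] = field(default_factory=dict)  # claims 9-10

    def summary(self) -> Dict[str, int]:         # summary frame (claim 11)
        return {"objects": len(self.object_frames),
                "movements": len(self.movement_frames)}

track = AnalyticsTrack(analytics_types=["object_id", "tracking"])
track.object_frames.append(ObjectFrame(media_frame=120, objects={1: "red car"}))
track.movement_frames.append(MovementFrame(media_frame=120, object_id=1,
                                           tracked_metric="direction",
                                           frame_count=30, confidence=0.9))
```

A decoder could read the `analytics_types` list first to learn which frame kinds are present before parsing the rest of the track.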
2,400
7,494
7,494
15,012,997
2,461
A method and apparatus provide for low latency transmissions. A higher layer configuration can be received at a device. The higher layer configuration can be higher than a physical layer configuration. The higher layer configuration can indicate configuring the device with a low latency configuration for a low latency transmission mode in addition to a regular latency configuration for a regular latency transmission mode. The low latency transmission mode can have a shorter latency than the regular latency transmission mode. A packet can be received based on one of the low latency configuration and the regular latency transmission mode in a subframe n. A feedback packet can be transmitted in a following subframe n+p, where p<4 when the received packet is based on the low latency configuration. The following subframe n+p can be the p th subframe from the subframe n. A feedback packet can be transmitted in a following subframe n+4 when the received packet is based on the regular latency configuration, where the following subframe n+4 is the fourth subframe from the subframe n.
1. A method comprising: receiving a higher layer configuration at a device, the higher layer configuration being higher than a physical layer configuration, the higher layer configuration indicating configuring the device with a low latency configuration for a low latency transmission mode in addition to a regular latency configuration for a regular latency transmission mode, where the low latency transmission mode has a shorter latency than the regular latency transmission mode; receiving a packet based on one of the low latency configuration and the regular latency transmission mode in subframe n; transmitting a feedback packet in a following subframe n+p, where p<4 when the received packet is based on the low latency configuration, where the following subframe n+p is the pth subframe from the subframe n; and transmitting a feedback packet in a following subframe n+4 when the received packet is based on the regular latency configuration, where the following subframe n+4 is the fourth subframe from the subframe n. 2. The method according to claim 1, further comprising identifying the packet is based on the low latency configuration based on the packet being received on a given transport bearer. 3. The method according to claim 1, further comprising identifying the packet is based on the low latency configuration based on the packet being received from a certain cell. 4. The method according to claim 1, wherein the feedback packet comprises a hybrid automatic repeat request acknowledgement sent in the following subframe that is two subframes n+2 after the first subframe n in response to receiving the packet based on the low latency configuration in the first subframe n. 5. The method according to claim 4, wherein the hybrid automatic repeat request acknowledgement is transmitted in a temporal portion of the subframe including at least two symbols with resource elements assigned to the device for uplink feedback transmission. 6. 
The method according to claim 5, wherein one symbol of the at least two symbols is used for a pilot symbol and another symbol of the at least two symbols is used for the hybrid automatic repeat request acknowledgement. 7. The method according to claim 1, wherein a transport block of the low latency configuration is smaller than a transport block for the regular latency configuration. 8. The method according to claim 7, wherein the packet based on the low latency configuration is received on a dedicated resource on a physical downlink shared channel. 9. The method according to claim 1, wherein a code block size in a subframe for packets based on the low latency configuration is smaller than a code block size for packets based on the regular latency configuration. 10. The method according to claim 1, wherein a maximum timing advance value for the low latency configuration is less than a maximum timing advance value for the regular latency configuration. 11. An apparatus comprising: a controller configured to control operations of the apparatus; and a transceiver coupled to the controller, the transceiver configured to receive a higher layer configuration at the apparatus, the higher layer configuration being higher than a physical layer configuration, the higher layer configuration indicating configuring the apparatus with a low latency configuration for a low latency transmission mode in addition to a regular latency configuration for a regular latency transmission mode, where the low latency transmission mode has a shorter latency than the regular latency transmission mode, receive a packet based on one of the low latency configuration and the regular latency transmission mode in subframe n, transmit a feedback packet in a following subframe n+p, where p<4 when the received packet is based on the low latency configuration, where the following subframe n+p is the pth subframe from the subframe n, and transmit a feedback packet in a following subframe n+4 when the 
received packet is based on the regular latency configuration, where the following subframe n+4 is the fourth subframe from the subframe n. 12. The apparatus according to claim 11, wherein the controller is configured to identify the packet is based on the low latency configuration based on the packet being received on a given transport bearer. 13. The apparatus according to claim 11, wherein the controller is configured to identify the packet is based on the low latency configuration based on the packet being received from a certain cell. 14. The apparatus according to claim 11, wherein the feedback packet comprises a hybrid automatic repeat request acknowledgement sent in the following subframe that is two subframes n+2 after the first subframe n in response to receiving the packet based on the low latency configuration in the first subframe n. 15. The apparatus according to claim 14, wherein the hybrid automatic repeat request acknowledgement is transmitted in a temporal portion of the subframe including at least two symbols with resource elements assigned to the device for uplink feedback transmission. 16. The apparatus according to claim 15, wherein one symbol of the at least two symbols is used for a pilot symbol and another symbol of the at least two symbols is used for the hybrid automatic repeat request acknowledgement. 17. The apparatus according to claim 11, wherein a transport block of the low latency configuration is smaller than a transport block for the regular latency configuration. 18. The apparatus according to claim 17, wherein the packet based on the low latency configuration is received on a dedicated resource on a physical downlink shared channel. 19. The apparatus according to claim 11, wherein a code block size in a subframe for packets based on the low latency configuration is smaller than a code block size for packets based on the regular latency configuration. 20. 
The apparatus according to claim 11, wherein a maximum timing advance value for the low latency configuration is less than a maximum timing advance value for the regular latency configuration.
A method and apparatus provide for low latency transmissions. A higher layer configuration can be received at a device. The higher layer configuration can be higher than a physical layer configuration. The higher layer configuration can indicate configuring the device with a low latency configuration for a low latency transmission mode in addition to a regular latency configuration for a regular latency transmission mode. The low latency transmission mode can have a shorter latency than the regular latency transmission mode. A packet can be received based on one of the low latency configuration and the regular latency transmission mode in a subframe n. A feedback packet can be transmitted in a following subframe n+p, where p<4 when the received packet is based on the low latency configuration. The following subframe n+p can be the pth subframe from the subframe n. A feedback packet can be transmitted in a following subframe n+4 when the received packet is based on the regular latency configuration, where the following subframe n+4 is the fourth subframe from the subframe n. 1. 
A method comprising: receiving a higher layer configuration at a device, the higher layer configuration being higher than a physical layer configuration, the higher layer configuration indicating configuring the device with a low latency configuration for a low latency transmission mode in addition to a regular latency configuration for a regular latency transmission mode, where the low latency transmission mode has a shorter latency than the regular latency transmission mode; receiving a packet based on one of the low latency configuration and the regular latency transmission mode in subframe n; transmitting a feedback packet in a following subframe n+p, where p<4 when the received packet is based on the low latency configuration, where the following subframe n+p is the pth subframe from the subframe n; and transmitting a feedback packet in a following subframe n+4 when the received packet is based on the regular latency configuration, where the following subframe n+4 is the fourth subframe from the subframe n. 2. The method according to claim 1, further comprising identifying the packet is based on the low latency configuration based on the packet being received on a given transport bearer. 3. The method according to claim 1, further comprising identifying the packet is based on the low latency configuration based on the packet being received from a certain cell. 4. The method according to claim 1, wherein the feedback packet comprises a hybrid automatic repeat request acknowledgement sent in the following subframe that is two subframes n+2 after the first subframe n in response to receiving the packet based on the low latency configuration in the first subframe n. 5. The method according to claim 4, wherein the hybrid automatic repeat request acknowledgement is transmitted in a temporal portion of the subframe including at least two symbols with resource elements assigned to the device for uplink feedback transmission. 6. 
The method according to claim 5, wherein one symbol of the at least two symbols is used for a pilot symbol and another symbol of the at least two symbols is used for the hybrid automatic repeat request acknowledgement. 7. The method according to claim 1, wherein a transport block of the low latency configuration is smaller than a transport block for the regular latency configuration. 8. The method according to claim 7, wherein the packet based on the low latency configuration is received on a dedicated resource on a physical downlink shared channel. 9. The method according to claim 1, wherein a code block size in a subframe for packets based on the low latency configuration is smaller than a code block size for packets based on the regular latency configuration. 10. The method according to claim 1, wherein a maximum timing advance value for the low latency configuration is less than a maximum timing advance value for the regular latency configuration. 11. An apparatus comprising: a controller configured to control operations of the apparatus; and a transceiver coupled to the controller, the transceiver configured to receive a higher layer configuration at the apparatus, the higher layer configuration being higher than a physical layer configuration, the higher layer configuration indicating configuring the apparatus with a low latency configuration for a low latency transmission mode in addition to a regular latency configuration for a regular latency transmission mode, where the low latency transmission mode has a shorter latency than the regular latency transmission mode, receive a packet based on one of the low latency configuration and the regular latency transmission mode in subframe n, transmit a feedback packet in a following subframe n+p, where p<4 when the received packet is based on the low latency configuration, where the following subframe n+p is the pth subframe from the subframe n, and transmit a feedback packet in a following subframe n+4 when the 
received packet is based on the regular latency configuration, where the following subframe n+4 is the fourth subframe from the subframe n. 12. The apparatus according to claim 11, wherein the controller is configured to identify the packet is based on the low latency configuration based on the packet being received on a given transport bearer. 13. The apparatus according to claim 11, wherein the controller is configured to identify the packet is based on the low latency configuration based on the packet being received from a certain cell. 14. The apparatus according to claim 11, wherein the feedback packet comprises a hybrid automatic repeat request acknowledgement sent in the following subframe that is two subframes n+2 after the first subframe n in response to receiving the packet based on the low latency configuration in the first subframe n. 15. The apparatus according to claim 14, wherein the hybrid automatic repeat request acknowledgement is transmitted in a temporal portion of the subframe including at least two symbols with resource elements assigned to the device for uplink feedback transmission. 16. The apparatus according to claim 15, wherein one symbol of the at least two symbols is used for a pilot symbol and another symbol of the at least two symbols is used for the hybrid automatic repeat request acknowledgement. 17. The apparatus according to claim 11, wherein a transport block of the low latency configuration is smaller than a transport block for the regular latency configuration. 18. The apparatus according to claim 17, wherein the packet based on the low latency configuration is received on a dedicated resource on a physical downlink shared channel. 19. The apparatus according to claim 11, wherein a code block size in a subframe for packets based on the low latency configuration is smaller than a code block size for packets based on the regular latency configuration. 20. 
The apparatus according to claim 11, wherein a maximum timing advance value for the low latency configuration is less than a maximum timing advance value for the regular latency configuration.
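The timing rule in claim 1 is simple arithmetic: a packet received in subframe n is acknowledged in subframe n+p with p<4 under the low latency configuration (claim 4 gives the p=2 example), and in subframe n+4 under the regular configuration. A small sketch, where the default p=2 is taken from claim 4 and the function name is an illustrative assumption:

```python
# Sketch of the claimed feedback timing (not an implementation of LTE HARQ).
# p defaults to 2 per the claim-4 example; the claims only require p < 4.

def feedback_subframe(n: int, low_latency: bool, p: int = 2) -> int:
    """Return the subframe index in which the feedback packet is transmitted."""
    if low_latency:
        if not 0 < p < 4:
            raise ValueError("low-latency offset must satisfy 0 < p < 4")
        return n + p   # n+p, the pth subframe after subframe n
    return n + 4       # regular latency: fixed 4-subframe offset
```

So a packet received in subframe 10 is acknowledged in subframe 12 under the low latency mode versus subframe 14 under the regular mode, a saving of two subframes of feedback delay.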
2,400
7,495
7,495
13,074,939
2,483
An apparatus to provide white illuminating light for medical or boroscopic applications includes a first light source with a light-emitting diode to emit light with a broad spectrum, a second light source to emit monochromatic light, and a coupling device to couple light of the first light source and light of the second light source into a common beam path in order to generate illuminating light with improved color rendering.
1. An apparatus to provide white illuminating light for medical and boroscopic applications, with: a first light source with a light-emitting diode, to emit light with a broad spectrum; a second light source to emit monochromatic light; a coupling device to couple light of the first light source and light of the second light source into a common beam path in order to generate illuminating light with improved color rendering. 2. The apparatus according to claim 1, wherein at least either the illuminating light has a color rendering index that is at least 10 points higher than the color rendering index of the light of the first light source or the median color distance in illuminating a predetermined set of test colors with the illuminating light is better by at least one-tenth than in illuminating the predetermined set of test colors only with light of the first light source or the sum of the amounts of the distances of the white balance parameters from 1 in illuminating with the illuminating light is lower than in illuminating with light of the first light source. 3. An apparatus according to claim 1, wherein the light generated by the second light source lies within the spectrum of the light generated by the first light source. 4. The apparatus according to claim 1, wherein within the spectral range visible to the human eye, the radiancy coupled into the beam path by means of the coupling device is higher than the radiancy of the light of the first light source that can be coupled into the beam path. 5. The apparatus according to claim 1, wherein the second light source includes at least either a diode laser or another laser. 6. 
The apparatus according to claim 1, in addition with: a third light source, where the coupling device is configured to couple light of the first light source, light of the second light source, and light of the third light source into the common beam path in order to generate the illuminating light, where the spectrum of the illuminating light has better color rendering than the first spectrum and than a mixture of the light of the first light source and light of the second light source that is optimal with respect to color rendering. 7. The apparatus according to claim 6, wherein at least either the illuminating light has a color rendering index at least 15 points higher than the color rendering index of the first spectrum or the median color distance in illuminating a predetermined set of test colors with the illuminating light is at least one-fifth better than in illuminating with light of the first light source or the sum of the amounts of the distances of the white balance parameters from 1 in illuminating with the illuminating light is lower than in illuminating with a mixture of the light of the first light source and light of the second light source that is optimal with respect to color rendering. 8. The apparatus according to claim 1, in addition with: a video camera to record an image of an object illuminated by means of the apparatus. 9. The apparatus according to claim 1, wherein the color rendering in illuminating objects of investigation with the illuminating light and recording them by means of the video camera is better than in illuminating exclusively with light of the first light source. 10. The apparatus according to claim 8, in addition with an endoscope or exoscope or microscope. 11. 
A method to provide white illuminating light for medical and boroscopic applications, with the following steps: generate light with a broad spectrum by means of a first light source, which includes a light-emitting diode; generate monochromatic light by means of a second light source; couple light of the first light source and light of the second light source into a common beam path by means of a coupling device in order to generate illuminating light with improved color rendering. 12. The method according to claim 11, wherein at least either the illuminating light has a color rendering index that is at least 10 points higher than the color rendering index of the light of the first light source, or the median color distance in illuminating a predetermined set of test colors with the illuminating light is at least one-tenth better than in illuminating with light of the first light source, or the sum of the amounts of the distances of the white balance parameters from 1 in illuminating with the illuminating light is lower than in illuminating with light of the first light source.
An apparatus to provide white illuminating light for medical or boroscopic applications includes a first light source with a light-emitting diode to emit light with a broad spectrum, a second light source to emit monochromatic light, and a coupling device to couple light of the first light source and light of the second light source into a common beam path in order to generate illuminating light with improved color rendering.1. An apparatus to provide white illuminating light for medical and boroscopic applications, with: a first light source with a light-emitting diode, to emit light with a broad spectrum; a second light source to emit monochromatic light; a coupling device to couple light of the first light source and light of the second light source into a common beam path in order to generate illuminating light with improved color rendering. 2. The apparatus according to claim 1, wherein at least either the illuminating light has a color rendering index that is at least 10 points higher than the color rendering index of the light of the first light source or the median color distance in illuminating a predetermined set of test colors with the illuminating light is better by at least one-tenth than in illuminating the predetermined set of test colors only with light of the first light source or the sum of the amounts of the distances of the white balance parameters from 1 in illuminating with the illuminating light is lower than in illuminating with light of the first light source. 3. An apparatus according to claim 1, wherein the light generated by the second light source lies within the spectrum of the light generated by the first light source. 4. The apparatus according to claim 1, wherein within the spectral range visible to the human eye, the radiancy coupled into the beam path by means of the coupling device is higher than the radiancy of the light of the first light source that can be coupled into the beam path. 5. 
The apparatus according to claim 1, wherein the second light source includes at least either a diode laser or another laser. 6. The apparatus according to claim 1, in addition with: a third light source, where the coupling device is configured to couple light of the first light source, light of the second light source, and light of the third light source into the common beam path in order to generate the illuminating light, where the spectrum of the illuminating light has better color rendering than the first spectrum and than a mixture of the light of the first light source and light of the second light source that is optimal with respect to color rendering. 7. The apparatus according to claim 6, wherein at least either the illuminating light has a color rendering index at least 15 points higher than the color rendering index of the first spectrum or the median color distance in illuminating a predetermined set of test colors with the illuminating light is at least one-fifth better than in illuminating with light of the first light source or the sum of the amounts of the distances of the white balance parameters from 1 in illuminating with the illuminating light is lower than in illuminating with a mixture of the light of the first light source and light of the second light source that is optimal with respect to color rendering. 8. The apparatus according to claim 1, in addition with: a video camera to record an image of an object illuminated by means of the apparatus. 9. The apparatus according to claim 1, wherein the color rendering in illuminating objects of investigation with the illuminating light and recording them by means of the video camera is better than in illuminating exclusively with light of the first light source. 10. The apparatus according to claim 8, in addition with an endoscope or exoscope or microscope. 11. 
A method to provide white illuminating light for medical and boroscopic applications, with the following steps: generate light with a broad spectrum by means of a first light source, which includes a light-emitting diode; generate monochromatic light by means of a second light source; couple light of the first light source and light of the second light source into a common beam path by means of a coupling device in order to generate illuminating light with improved color rendering. 12. The method according to claim 11, wherein at least either the illuminating light has a color rendering index that is at least 10 points higher than the color rendering index of the light of the first light source, or the median color distance in illuminating a predetermined set of test colors with the illuminating light is at least one-tenth better than in illuminating with light of the first light source, or the sum of the amounts of the distances of the white balance parameters from 1 in illuminating with the illuminating light is lower than in illuminating with light of the first light source.
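Claim 2 (and claim 12) states three alternative criteria under which the mixed illuminating light counts as "improved" over the LED alone, any one of which suffices. A minimal sketch of that predicate, assuming hypothetical metric values; "better by at least one-tenth" is read here as a 10% relative improvement, which is one possible interpretation, and all names and numbers are illustrative:

```python
def meets_improvement_criteria(cri_mix, cri_led,
                               color_dist_mix, color_dist_led,
                               wb_params_mix, wb_params_led):
    """Check the three alternative criteria of claim 2 (any one suffices)."""
    # CRI of the mixed illuminating light at least 10 points above the LED alone.
    cri_ok = cri_mix >= cri_led + 10
    # Median color distance over the predetermined test-color set better
    # (smaller); "by at least one-tenth" read as a 10% relative improvement.
    dist_ok = color_dist_mix <= 0.9 * color_dist_led
    # Sum of the distances of the white balance parameters from 1 is lower.
    wb_ok = (sum(abs(p - 1) for p in wb_params_mix)
             < sum(abs(p - 1) for p in wb_params_led))
    return cri_ok or dist_ok or wb_ok

# Hypothetical values: white LED alone vs. LED + monochromatic laser mix.
print(meets_improvement_criteria(92, 78, 4.1, 5.0, [1.02, 0.97], [1.15, 0.80]))
# True
```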
2,400
7,496
7,496
13,894,869
2,458
A mobile computing device, a method of operating the same, a method of manufacturing, and an external source for dynamic profile settings for mobile computing devices are disclosed. In one embodiment, the mobile computing device includes: (1) a settings reservoir configured to store dynamic sets of profile settings and a static set of profile settings for the computing device and (2) a profile generator configured to generate coalesced sets of profile settings for applications on the computing device based on the dynamic sets of profile settings and the static set of profile settings.
1. A mobile computing device, comprising: a settings reservoir configured to store dynamic sets of profile settings and a static set of profile settings for said computing device; and a profile generator configured to generate coalesced sets of profile settings for applications on said computing device based on said dynamic sets of profile settings and said static set of profile settings. 2. The mobile computing device as recited in claim 1 further comprising a profile deliverer configured to deliver settings from a set of said coalesced profile settings to components of said mobile computing device for said application. 3. The mobile computing device as recited in claim 2 wherein said profile deliverer is configured to deliver said settings in response to requests from components of said mobile computing device when said application is launched on said mobile computing device. 4. The mobile computing device as recited in claim 2 wherein said profile deliverer is further configured to determine said set of said coalesced profile settings by verifying both a name of said application and a hash or checksum of a binary of said application with a database of said coalesced profile settings stored on said settings reservoir. 5. The mobile computing device as recited in claim 1 further comprising a communications interface configured to wirelessly receive said dynamic sets of profile settings. 6. The mobile computing device as recited in claim 2 wherein said profile deliverer is further configured to deliver said settings in response to a query from said components. 7. The mobile computing device as recited in claim 1 wherein said static set of profile settings comprises original equipment manufacturer profiles. 8. The mobile computing device as recited in claim 1 wherein said dynamic sets of profile settings are based on characteristics of said mobile computing device. 9. 
A method of operating a mobile computing device, comprising: receiving dynamic sets of profile settings based on system characteristics of said computing device; determining a coalesced set of profile settings for said computing device based on said dynamic sets of profile settings and sets of static profile settings stored on said computing device; and selecting, for an application on said computing device, settings from a set of said coalesced sets of profile settings for components of said computing device. 10. The method as recited in claim 9 further comprising delivering said settings to said components for execution during operation of said application. 11. The method as recited in claim 9 wherein said determining includes replacing overlapping profile settings in said dynamic sets of profile settings with profile settings from said sets of static profile settings. 12. The method as recited in claim 9 further comprising verifying that said set of coalesced sets of profile settings corresponds to said application based on a name of said application and an additional security check derived from a binary of said application. 13. The method as recited in claim 9 further comprising receiving queries from said components in response to launching said application and selecting said set in response to said queries. 14. The method as recited in claim 9 wherein said dynamic sets of profile settings are based on characteristics of said computing device. 15. The method as recited in claim 9 wherein said sets of static profile settings include original equipment manufacturer settings. 16. 
A method of configuring an apparatus to operate employing profile settings for components thereof that are tailored to particular applications on said apparatus and to characteristics of the components, the method comprising: configuring said apparatus to store a set of application profiles based on system characteristics of said apparatus; configuring said apparatus to generate a coalesced set of application profiles by comparing said set of application profiles with a second set of application profiles from another source; and configuring said apparatus to deliver profile settings to components of said apparatus for an application when said application is launched, wherein said profile settings are from a set of said coalesced sets of profile settings. 17. The method as recited in claim 16 wherein said method occurs during manufacturing of said apparatus. 18. The method as recited in claim 16 wherein said apparatus is a mobile computing device. 19. The method as recited in claim 16 further comprising configuring said apparatus to download said set of application profiles from a server including hierarchically filtered profile settings. 20. The method as recited in claim 16 further comprising configuring said apparatus to generate said coalesced set of application profiles by replacing settings for a component of said apparatus with settings for said component from said second set of application profiles, wherein at least a portion of said second set of application profiles is established during manufacturing of said apparatus. 21. 
An external source for dynamic profile settings for mobile computing devices, comprising: a memory configured to store profile settings for applications that execute on mobile computing devices; and a processor configured to determine a dynamic set of profile settings for executing an application on a specific one of said mobile computing devices by hierarchically filtering said profile settings according to characteristics of said specific one. 22. The external source as recited in claim 21 further comprising a communications interface configured to receive a query from said specific one that includes characteristic data thereof, said processor configured to hierarchically filter said profile settings based on said characteristic data.
A mobile computing device, a method of operating the same, a method of manufacturing, and an external source for dynamic profile settings for mobile computing devices are disclosed. In one embodiment, the mobile computing device includes: (1) a settings reservoir configured to store dynamic sets of profile settings and a static set of profile settings for the computing device and (2) a profile generator configured to generate coalesced sets of profile settings for applications on the computing device based on the dynamic sets of profile settings and the static set of profile settings. 1. A mobile computing device, comprising: a settings reservoir configured to store dynamic sets of profile settings and a static set of profile settings for said computing device; and a profile generator configured to generate coalesced sets of profile settings for applications on said computing device based on said dynamic sets of profile settings and said static set of profile settings. 2. The mobile computing device as recited in claim 1 further comprising a profile deliverer configured to deliver settings from a set of said coalesced profile settings to components of said mobile computing device for said application. 3. The mobile computing device as recited in claim 2 wherein said profile deliverer is configured to deliver said settings in response to requests from components of said mobile computing device when said application is launched on said mobile computing device. 4. The mobile computing device as recited in claim 2 wherein said profile deliverer is further configured to determine said set of said coalesced profile settings by verifying both a name of said application and a hash or checksum of a binary of said application with a database of said coalesced profile settings stored on said settings reservoir. 5. The mobile computing device as recited in claim 1 further comprising a communications interface configured to wirelessly receive said dynamic sets of profile settings. 6. 
The mobile computing device as recited in claim 2 wherein said profile deliverer is further configured to deliver said settings in response to a query from said components. 7. The mobile computing device as recited in claim 1 wherein said static set of profile settings comprises original equipment manufacturer profiles. 8. The mobile computing device as recited in claim 1 wherein said dynamic sets of profile settings are based on characteristics of said mobile computing device. 9. A method of operating a mobile computing device, comprising: receiving dynamic sets of profile settings based on system characteristics of said computing device; determining a coalesced set of profile settings for said computing device based on said dynamic sets of profile settings and sets of static profile settings stored on said computing device; and selecting, for an application on said computing device, settings from a set of said coalesced sets of profile settings for components of said computing device. 10. The method as recited in claim 9 further comprising delivering said settings to said components for execution during operation of said application. 11. The method as recited in claim 9 wherein said determining includes replacing overlapping profile settings in said dynamic sets of profile settings with profile settings from said sets of static profile settings. 12. The method as recited in claim 9 further comprising verifying that said set of coalesced sets of profile settings corresponds to said application based on a name of said application and an additional security check derived from a binary of said application. 13. The method as recited in claim 9 further comprising receiving queries from said components in response to launching said application and selecting said set in response to said queries. 14. The method as recited in claim 9 wherein said dynamic sets of profile settings are based on characteristics of said computing device. 15. 
The method as recited in claim 9 wherein said sets of static profile settings include original equipment manufacturer settings. 16. A method of configuring an apparatus to operate employing profile settings for components thereof that are tailored to particular applications on said apparatus and to characteristics of the components, the method comprising: configuring said apparatus to store a set of application profiles based on system characteristics of said apparatus; configuring said apparatus to generate a coalesced set of application profiles by comparing said set of application profiles with a second set of application profiles from another source; and configuring said apparatus to deliver profile settings to components of said apparatus for an application when said application is launched, wherein said profile settings are from a set of said coalesced sets of profile settings. 17. The method as recited in claim 16 wherein said method occurs during manufacturing of said apparatus. 18. The method as recited in claim 16 wherein said apparatus is a mobile computing device. 19. The method as recited in claim 16 further comprising configuring said apparatus to download said set of application profiles from a server including hierarchically filtered profile settings. 20. The method as recited in claim 16 further comprising configuring said apparatus to generate said coalesced set of application profiles by replacing settings for a component of said apparatus with settings for said component from said second set of application profiles, wherein at least a portion of said second set of application profiles is established during manufacturing of said apparatus. 21. 
An external source for dynamic profile settings for mobile computing devices, comprising: a memory configured to store profile settings for applications that execute on mobile computing devices; and a processor configured to determine a dynamic set of profile settings for executing an application on a specific one of said mobile computing devices by hierarchically filtering said profile settings according to characteristics of said specific one. 22. The external source as recited in claim 21 further comprising a communications interface configured to receive a query from said specific one that includes characteristic data thereof, said processor configured to hierarchically filter said profile settings based on said characteristic data.
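Claims 11 and 4 describe, respectively, coalescing dynamic and static profile settings (with static, e.g. OEM, values replacing overlapping dynamic ones) and selecting a coalesced profile by application name plus a checksum of the application binary. A minimal sketch, assuming settings are flat key-value dictionaries; all keys, values, and the database layout are illustrative:

```python
import hashlib

def coalesce(dynamic_sets, static_settings):
    """Merge dynamic profile settings, then let static settings replace
    any overlapping keys (the claim-11 behavior)."""
    merged = {}
    for settings in dynamic_sets:   # later dynamic sets win among themselves
        merged.update(settings)
    merged.update(static_settings)  # static settings override overlaps
    return merged

def lookup_profile(db, app_name, app_binary):
    """Select a coalesced profile by app name plus a hash of its binary
    (the claim-4 style verification). Returns None if either check fails."""
    digest = hashlib.sha256(app_binary).hexdigest()
    return db.get((app_name, digest))

dynamic = [{"cpu_governor": "performance", "gps": "on"},
           {"gps": "off", "screen_timeout": 30}]
static = {"cpu_governor": "balanced"}   # OEM default replaces the overlap
profile = coalesce(dynamic, static)
print(profile)  # {'cpu_governor': 'balanced', 'gps': 'off', 'screen_timeout': 30}

db = {("maps", hashlib.sha256(b"\x7fELF...").hexdigest()): profile}
print(lookup_profile(db, "maps", b"\x7fELF...") is profile)  # True
```

A tampered binary hashes to a different digest, so the same application name no longer resolves to a stored profile.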
2,400
7,497
7,497
13,997,529
2,447
The present invention provides a segmented network in which each segment comprises one or more routers, one or more communications links providing connectivity between the router(s), and a segment management module. The segment management module uses operational data to predict the future performance of each element. If the predicted performance will breach a threshold value then a data flow may be re-routed. Re-routing between different segments can lead to network management problems, and so the present invention discloses methods by which: segments can expand to acquire a router from another segment; segments can subdivide; and segments can merge together, particularly if a segment comprises too few routers.
1. A communications network, the communications network being partitioned into a plurality of network segments, each of the plurality of the network segments comprising a segment management module, a plurality of network elements and a plurality of communications links, the plurality of network elements being interconnected by the plurality of communications links, the network being configured such that, in operation: i) each of the segment management modules receives operational data from the plurality of network elements in its respective network segment; ii) on the basis of operational data received from the plurality of network elements, each segment management module determines the future performance of the plurality of network elements in the respective network segment; iii) if a segment management module determines that the future performance of one or more of the plurality of network elements in the respective network segment will be less than a threshold value, re-routing one or more data flows to a further segment; and iv) reconfiguring one or more of the segments carrying the one or more data flows. 2. A communications network according to claim 1, wherein one of the network segments expands to acquire a router from a further network segment. 3. A communications network according to claim 1, wherein one of the network segments separates to form a plurality of sub-segments, each of the plurality of sub-segments comprising one or more routers. 4. A communications network according to claim 1, wherein two or more network segments merge to form a new network segment. 5. A communications network according to claim 4, wherein the merger of the two or more network segments is initiated when one of the network segments comprises fewer routers than a predetermined threshold value. 6. 
A method of operating a communications network, the communications network being partitioned into a plurality of network segments; each of the plurality of the network segments comprising: a segment management module; a plurality of network elements; and a plurality of communications links, the plurality of network elements being interconnected by the plurality of communications links, the method comprising the steps of: i) each of the segment management modules receiving operational data from the plurality of network elements in its respective network segment; ii) each of the segment management modules determining the future performance of the plurality of network elements in the respective network segment on the basis of operational data received from the plurality of network elements; iii) if a segment management module determines that the future performance of one or more of the plurality of network elements in the respective network segment will be less than a threshold value, re-routing one or more data flows to a further segment; and iv) reconfiguring one or more of the segments carrying the one or more data flows. 7. A data carrier device comprising computer executable code for performing a method according to claim 6.
The present invention provides a segmented network in which each segment comprises one or more routers, one or more communications links providing connectivity between the router(s), and a segment management module. The segment management module uses operational data to predict the future performance of each element. If the predicted performance will breach a threshold value then a data flow may be re-routed. Re-routing between different segments can lead to network management problems, and so the present invention discloses methods by which: segments can expand to acquire a router from another segment; segments can subdivide; and segments can merge together, particularly if a segment comprises too few routers. 1. A communications network, the communications network being partitioned into a plurality of network segments, each of the plurality of the network segments comprising a segment management module, a plurality of network elements and a plurality of communications links, the plurality of network elements being interconnected by the plurality of communications links, the network being configured such that, in operation: i) each of the segment management modules receives operational data from the plurality of network elements in its respective network segment; ii) on the basis of operational data received from the plurality of network elements, each segment management module determines the future performance of the plurality of network elements in the respective network segment; iii) if a segment management module determines that the future performance of one or more of the plurality of network elements in the respective network segment will be less than a threshold value, re-routing one or more data flows to a further segment; and iv) reconfiguring one or more of the segments carrying the one or more data flows. 2. A communications network according to claim 1, wherein one of the network segments expands to acquire a router from a further network segment. 3. 
A communications network according to claim 1, wherein one of the network segments separates to form a plurality of sub-segments, each of the plurality of sub-segments comprising one or more routers. 4. A communications network according to claim 1, wherein two or more network segments merge to form a new network segment. 5. A communications network according to claim 4, wherein the merger of the two or more network segments is initiated when one of the network segments comprises fewer routers than a predetermined threshold value. 6. A method of operating a communications network, the communications network being partitioned into a plurality of network segments; each of the plurality of the network segments comprising: a segment management module; a plurality of network elements; and a plurality of communications links, the plurality of network elements being interconnected by the plurality of communications links, the method comprising the steps of: i) each of the segment management modules receiving operational data from the plurality of network elements in its respective network segment; ii) each of the segment management modules determining the future performance of the plurality of network elements in the respective network segment on the basis of operational data received from the plurality of network elements; iii) if a segment management module determines that the future performance of one or more of the plurality of network elements in the respective network segment will be less than a threshold value, re-routing one or more data flows to a further segment; and iv) reconfiguring one or more of the segments carrying the one or more data flows. 7. A data carrier device comprising computer executable code for performing a method according to claim 6.
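The segment management module's behavior in steps i) to iii) amounts to a forecast over operational data followed by a threshold test. A toy sketch, assuming linear extrapolation of the last two performance samples per network element; the router names, scores, and forecasting method are hypothetical, not the patent's own model:

```python
def predict_next(samples):
    """Linearly extrapolate the last two performance samples -- a toy
    stand-in for the segment manager's future-performance determination."""
    if len(samples) < 2:
        return samples[-1]
    return samples[-1] + (samples[-1] - samples[-2])

def plan_reroute(segment, threshold):
    """Return the elements whose predicted performance falls below the
    threshold; flows through these would be re-routed to a further segment."""
    return [elem for elem, samples in segment.items()
            if predict_next(samples) < threshold]

# Hypothetical per-router throughput scores reported as operational data.
segment = {"r1": [0.9, 0.85], "r2": [0.7, 0.55], "r3": [0.6, 0.65]}
print(plan_reroute(segment, threshold=0.5))  # ['r2']
```

Only r2 is trending below the threshold (0.55 falling toward 0.40), so only its flows are flagged for re-routing.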
2,400
7,498
7,498
15,147,300
2,487
A vision system of a vehicle includes a camera module disposed at the vehicle windshield and having a camera, a control and a recording device. The system includes a continuous loop recording of image data captured by the camera while the vehicle is operated. The control includes an image processor that processes image data captured by the camera for at least one driver assistance system of the vehicle. Responsive to a user input, the control exits the continuous loop recording and the recording device saves image data captured by the camera in non-volatile memory of the recording device. Responsive to the user input, the recording device saves in the non-volatile memory captured image data saved by the continuous loop recording prior to the user input and saves in the non-volatile memory image data captured by the camera after the user input.
1. A vision system for a vehicle, said vision system comprising: a camera module configured for attachment at an interior surface of a windshield of a vehicle equipped with said vision system, said camera module comprising a camera, a control and a recording device; wherein, when said camera module is disposed at the windshield of the equipped vehicle, said camera views through the windshield and forward of the equipped vehicle, and wherein said camera captures image data; wherein said vision system comprises a continuous loop recording of image data captured by said camera while the vehicle is operated, and wherein said vision system continuously records captured image data and erases previously captured image data such that image data captured by said camera for a time period is temporarily saved by said continuous loop recording; wherein said control comprises an image processor that processes image data captured by said camera for at least one driver assistance system of the vehicle; wherein, responsive to a user input, said control exits said continuous loop recording and said recording device saves image data captured by said camera in non-volatile memory of said recording device; and wherein, responsive to said user input, said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording prior to said user input and saves in the non-volatile memory image data captured by said camera after said user input. 2. The vision system of claim 1, wherein said user input comprises at least one of (i) actuation of a button or switch by an occupant of the vehicle, (ii) actuation of a hazard light of the equipped vehicle by an occupant of the vehicle and (iii) an input from a remote device. 3. The vision system of claim 1, wherein said user input comprises a voice command from an occupant of the vehicle. 4. 
The vision system of claim 3, wherein said camera module includes a microphone for receiving voice commands and messages. 5. The vision system of claim 1, wherein said control controls said recording device to save captured image data responsive to a triggering event, and wherein said triggering event comprises at least one of (i) a forward collision warning event, (ii) a lane departure warning event, (iii) an automatic emergency braking event, (iv) an airbag deployment, (v) a sudden or rapid deceleration, (vi) an antilock braking system event, (vii) a hard driver steering or threshold lateral g level event, (viii) a traction control event, (ix) a stability control event, (x) a wide open throttle event and (xi) a very high speed blind spot/lane change aid signal. 6. The vision system of claim 5, wherein said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording for a period of time prior to said user input or said triggering event, and wherein the period of time prior to said user input or said triggering event varies depending on the type of user input or triggering event. 7. The vision system of claim 1, wherein said recording device stops saving captured image data after a period of time following said user input. 8. The vision system of claim 7, wherein the period of time following said user input varies depending on the type of user input. 9. The vision system of claim 1, wherein said recording device stops saving captured image data responsive to another user input. 10. The vision system of claim 1, wherein said recording device saves in the non-volatile memory the captured image data that is saved by said continuous loop recording for a period of time prior to said user input, and wherein the period of time prior to said user input comprises at least a portion of the time period for a full loop of said continuous loop recording. 11. 
The vision system of claim 10, wherein the period of time prior to said user input varies depending on the type of user input. 12. The vision system of claim 1, wherein the captured image data saved by said continuous loop recording prior to said user input and the image data captured by said camera after said user input are saved as a single data file in the non-volatile memory of said recording device. 13. The vision system of claim 1, wherein said vision system is operable to communicate the saved captured image data to a remote device via a Wi-Fi link or a wireless communication. 14. The vision system of claim 1, wherein, at vehicle start up and at completion of a recording, said control determines available non-volatile memory for recording image data, and wherein, if determined available non-volatile memory is below a threshold level, said control deletes an oldest saved data file from the non-volatile memory of said recording device. 15. A vision system for a vehicle, said vision system comprising: a camera module configured for attachment at an interior surface of a windshield of a vehicle equipped with said vision system, said camera module comprising a camera, a control and a recording device; wherein, when said camera module is disposed at the windshield of the equipped vehicle, said camera views through the windshield and forward of the equipped vehicle, and wherein said camera captures image data; wherein said vision system comprises a continuous loop recording of image data captured by said camera while the vehicle is operated, and wherein said vision system continuously records captured image data and erases previously captured image data such that image data captured by said camera for a time period is temporarily saved by said continuous loop recording; wherein said control comprises an image processor that processes image data captured by said camera for at least one driver assistance system of the vehicle; wherein, responsive to a user input, 
said control exits said continuous loop recording and said recording device saves image data captured by said camera in non-volatile memory of said recording device; wherein said user input comprises at least a voice command from an occupant of the vehicle; wherein said camera module includes a microphone for receiving voice messages, and wherein said recording device saves audio data with captured image data in the non-volatile memory; wherein, responsive to said user input, said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording prior to said user input and saves in the non-volatile memory image data captured by said camera and received audio data after said user input; and wherein at least one of (i) said recording device stops saving captured image data after a period of time following said user input and (ii) said recording device stops saving captured image data responsive to another user input. 16. The vision system of claim 15, wherein said recording device stops saving captured image data after a period of time following said user input, and wherein the period of time following said user input varies depending on the type of user input. 17. The vision system of claim 15, wherein the captured image data saved by said continuous loop recording prior to said user input and the image data captured by said camera after said user input are saved as a single data file in the non-volatile memory of said recording device. 18. 
A vision system for a vehicle, said vision system comprising: a camera module configured for attachment at an interior surface of a windshield of a vehicle equipped with said vision system, said camera module comprising a camera, a control and a recording device; wherein, when said camera module is disposed at the windshield of the equipped vehicle, said camera views through the windshield and forward of the equipped vehicle, and wherein said camera captures image data; wherein said vision system comprises a continuous loop recording of image data captured by said camera while the vehicle is operated, and wherein said vision system continuously records captured image data and erases previously captured image data such that image data captured by said camera for a time period is temporarily saved by said continuous loop recording; wherein said control comprises an image processor that processes image data captured by said camera for at least one driver assistance system of the vehicle; wherein, responsive to one of a user input and a triggering event, said control exits said continuous loop recording and said recording device saves image data captured by said camera in non-volatile memory of said recording device; wherein, responsive to said one of a user input and a triggering event, said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording prior to said one of a user input and a triggering event and saves in the non-volatile memory image data captured by said camera after said one of a user input and a triggering event; and wherein the captured image data saved by said continuous loop recording prior to said one of a user input and a triggering event and the image data captured by said camera after said one of a user input and a triggering event are saved as a single data file in the non-volatile memory of said recording device. 19. 
The vision system of claim 18, wherein said recording device stops saving captured image data after a period of time following said one of a user input and a triggering event, and wherein the period of time following said one of a user input and a triggering event varies depending on the type of user input or triggering event. 20. The vision system of claim 18, wherein said recording device saves in the non-volatile memory the captured image data that is saved by said continuous loop recording for a period of time prior to said one of a user input and a triggering event, and wherein the period of time prior to said one of a user input and a triggering event comprises at least a portion of the time period for a full loop of said continuous loop recording, and wherein the period of time prior to said one of a user input and a triggering event varies depending on the type of user input or triggering event.
A vision system of a vehicle includes a camera module disposed at the vehicle windshield and having a camera, a control and a recording device. The system includes a continuous loop recording of image data captured by the camera while the vehicle is operated. The control includes an image processor that processes image data captured by the camera for at least one driver assistance system of the vehicle. Responsive to a user input, the control exits the continuous loop recording and the recording device saves image data captured by the camera in non-volatile memory of the recording device. Responsive to the user input, the recording device saves in the non-volatile memory captured image data saved by the continuous loop recording prior to the user input and saves in the non-volatile memory image data captured by the camera after the user input.1. A vision system for a vehicle, said vision system comprising: a camera module configured for attachment at an interior surface of a windshield of a vehicle equipped with said vision system, said camera module comprising a camera, a control and a recording device; wherein, when said camera module is disposed at the windshield of the equipped vehicle, said camera views through the windshield and forward of the equipped vehicle, and wherein said camera captures image data; wherein said vision system comprises a continuous loop recording of image data captured by said camera while the vehicle is operated, and wherein said vision system continuously records captured image data and erases previously captured image data such that image data captured by said camera for a time period is temporarily saved by said continuous loop recording; wherein said control comprises an image processor that processes image data captured by said camera for at least one driver assistance system of the vehicle; wherein, responsive to a user input, said control exits said continuous loop recording and said recording device saves image data captured by 
said camera in non-volatile memory of said recording device; and wherein, responsive to said user input, said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording prior to said user input and saves in the non-volatile memory image data captured by said camera after said user input. 2. The vision system of claim 1, wherein said user input comprises at least one of (i) actuation of a button or switch by an occupant of the vehicle, (ii) actuation of a hazard light of the equipped vehicle by an occupant of the vehicle and (iii) an input from a remote device. 3. The vision system of claim 1, wherein said user input comprises a voice command from an occupant of the vehicle. 4. The vision system of claim 3, wherein said camera module includes a microphone for receiving voice commands and messages. 5. The vision system of claim 1, wherein said control controls said recording device to save captured image data responsive to a triggering event, and wherein said triggering event comprises at least one of (i) a forward collision warning event, (ii) a lane departure warning event, (iii) an automatic emergency braking event, (iv) an airbag deployment, (v) a sudden or rapid deceleration, (vi) an antilock braking system event, (vii) a hard driver steering or threshold lateral g level event, (viii) a traction control event, (ix) a stability control event, (x) a wide open throttle event and (xi) a very high speed blind spot/lane change aid signal. 6. The vision system of claim 5, wherein said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording for a period of time prior to said user input or said triggering event, and wherein the period of time prior to said user input or said triggering event varies depending on the type of user input or triggering event. 7. 
The vision system of claim 1, wherein said recording device stops saving captured image data after a period of time following said user input. 8. The vision system of claim 7, wherein the period of time following said user input varies depending on the type of user input. 9. The vision system of claim 1, wherein said recording device stops saving captured image data responsive to another user input. 10. The vision system of claim 1, wherein said recording device saves in the non-volatile memory the captured image data that is saved by said continuous loop recording for a period of time prior to said user input, and wherein the period of time prior to said user input comprises at least a portion of the time period for a full loop of said continuous loop recording. 11. The vision system of claim 10, wherein the period of time prior to said user input varies depending on the type of user input. 12. The vision system of claim 1, wherein the captured image data saved by said continuous loop recording prior to said user input and the image data captured by said camera after said user input are saved as a single data file in the non-volatile memory of said recording device. 13. The vision system of claim 1, wherein said vision system is operable to communicate the saved captured image data to a remote device via a Wi-Fi link or a wireless communication. 14. The vision system of claim 1, wherein, at vehicle start up and at completion of a recording, said control determines available non-volatile memory for recording image data, and wherein, if determined available non-volatile memory is below a threshold level, said control deletes an oldest saved data file from the non-volatile memory of said recording device. 15. 
A vision system for a vehicle, said vision system comprising: a camera module configured for attachment at an interior surface of a windshield of a vehicle equipped with said vision system, said camera module comprising a camera, a control and a recording device; wherein, when said camera module is disposed at the windshield of the equipped vehicle, said camera views through the windshield and forward of the equipped vehicle, and wherein said camera captures image data; wherein said vision system comprises a continuous loop recording of image data captured by said camera while the vehicle is operated, and wherein said vision system continuously records captured image data and erases previously captured image data such that image data captured by said camera for a time period is temporarily saved by said continuous loop recording; wherein said control comprises an image processor that processes image data captured by said camera for at least one driver assistance system of the vehicle; wherein, responsive to a user input, said control exits said continuous loop recording and said recording device saves image data captured by said camera in non-volatile memory of said recording device; wherein said user input comprises at least a voice command from an occupant of the vehicle; wherein said camera module includes a microphone for receiving voice messages, and wherein said recording device saves audio data with captured image data in the non-volatile memory; wherein, responsive to said user input, said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording prior to said user input and saves in the non-volatile memory image data captured by said camera and received audio data after said user input; and wherein at least one of (i) said recording device stops saving captured image data after a period of time following said user input and (ii) said recording device stops saving captured image data responsive to another 
user input. 16. The vision system of claim 15, wherein said recording device stops saving captured image data after a period of time following said user input, and wherein the period of time following said user input varies depending on the type of user input. 17. The vision system of claim 15, wherein the captured image data saved by said continuous loop recording prior to said user input and the image data captured by said camera after said user input are saved as a single data file in the non-volatile memory of said recording device. 18. A vision system for a vehicle, said vision system comprising: a camera module configured for attachment at an interior surface of a windshield of a vehicle equipped with said vision system, said camera module comprising a camera, a control and a recording device; wherein, when said camera module is disposed at the windshield of the equipped vehicle, said camera views through the windshield and forward of the equipped vehicle, and wherein said camera captures image data; wherein said vision system comprises a continuous loop recording of image data captured by said camera while the vehicle is operated, and wherein said vision system continuously records captured image data and erases previously captured image data such that image data captured by said camera for a time period is temporarily saved by said continuous loop recording; wherein said control comprises an image processor that processes image data captured by said camera for at least one driver assistance system of the vehicle; wherein, responsive to one of a user input and a triggering event, said control exits said continuous loop recording and said recording device saves image data captured by said camera in non-volatile memory of said recording device; wherein, responsive to said one of a user input and a triggering event, said recording device saves in the non-volatile memory captured image data saved by said continuous loop recording prior to said one of a user 
input and a triggering event and saves in the non-volatile memory image data captured by said camera after said one of a user input and a triggering event; and wherein the captured image data saved by said continuous loop recording prior to said one of a user input and a triggering event and the image data captured by said camera after said one of a user input and a triggering event are saved as a single data file in the non-volatile memory of said recording device. 19. The vision system of claim 18, wherein said recording device stops saving captured image data after a period of time following said one of a user input and a triggering event, and wherein the period of time following said one of a user input and a triggering event varies depending on the type of user input or triggering event. 20. The vision system of claim 18, wherein said recording device saves in the non-volatile memory the captured image data that is saved by said continuous loop recording for a period of time prior to said one of a user input and a triggering event, and wherein the period of time prior to said one of a user input and a triggering event comprises at least a portion of the time period for a full loop of said continuous loop recording, and wherein the period of time prior to said one of a user input and a triggering event varies depending on the type of user input or triggering event.
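The continuous-loop behavior recited in these claims — a fixed-size loop that erases the oldest captured frames, then, on a user input or triggering event, exits the loop and saves pre-event and post-event frames as a single data file — can be sketched in Python. The class name, frame counts, and the in-memory `saved_file` stand-in for non-volatile memory are illustrative assumptions, not details from the patent.

```python
from collections import deque

class LoopRecorder:
    """Sketch of the claimed continuous-loop recording: a ring buffer
    holds the most recent frames; a user input or triggering event exits
    the loop and saves pre- and post-event frames as one data file."""

    def __init__(self, loop_frames=100, post_event_frames=50):
        self.ring = deque(maxlen=loop_frames)  # oldest frames erased automatically
        self.post_event_frames = post_event_frames
        self.saved_file = None                 # stand-in for non-volatile memory
        self._post_remaining = 0

    def capture(self, frame):
        if self._post_remaining > 0:
            # Post-event: keep appending to the single saved data file.
            self.saved_file.append(frame)
            self._post_remaining -= 1
        else:
            # Normal continuous-loop recording.
            self.ring.append(frame)

    def trigger(self):
        # Exit the loop: pre-event frames from the ring plus upcoming
        # post-event frames become a single saved data file.
        self.saved_file = list(self.ring)
        self._post_remaining = self.post_event_frames
```

The `deque(maxlen=...)` gives the "continuously records and erases previously captured image data" behavior for free; varying `loop_frames` or `post_event_frames` per input type would model claims 6, 8, and 11.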
2,400
7,499
7,499
14,640,439
2,431
Various embodiments that may generally relate to mobile gaming, location determination, mobile devices, authentication, and so on are described. Various methods are described. Various apparatus are described. Further embodiments are described.
1. A method comprising: storing in memory of a gaming service a plurality of hashes generated from operating system files of operating systems approved for use with the gaming service, each hash generated from a portion of an operating system file and a length of the operating system file; receiving, by a processor of the gaming service, from a mobile device a hash generated from a portion of an operating system file of an operating system currently in use by the mobile device and a length of the operating system file; comparing, by a processor of the gaming service, the received hash to the plurality of hashes stored in memory; determining, by a processor of the gaming service, that the received hash is equal to at least one of the hashes stored in memory based on the comparison; and based at least in part on the determination, approving the mobile device to access the gaming service. 2. The method of claim 1, wherein the portion of the operating system file of the mobile device from which the received hash is generated is less than an entirety of the operating system file of the mobile device. 3. The method of claim 2, wherein the portion of the operating system file of the mobile device from which the received hash is generated is at least one of a beginning of the operating system file of the mobile device and an end of the operating system file of the mobile device. 4. The method of claim 3, wherein the beginning of the operating system file of the mobile device is the first 128 bytes of the operating system file. 5. The method of claim 3, wherein the end of the operating system file of the mobile device is the last 128 bytes of the operating system file. 6. The method of claim 3, wherein the received hash is generated from both the beginning and the end of the operating system file of the mobile device. 7. 
An apparatus comprising: at least one processor and memory including instructions that, when executed by the processor, configure the at least one processor to: store in the memory a plurality of hashes generated from operating system files of operating systems approved for use with a gaming service, each hash generated from a portion of an operating system file and a length of the operating system file; receive from a mobile device a hash generated from a portion of an operating system file of an operating system currently in use by the mobile device and a length of the operating system file; compare the received hash to the plurality of hashes stored in memory; determine that the received hash is equal to at least one of the hashes stored in memory based on the comparison; and based at least in part on the determination, approve the mobile device to access the gaming service. 8. The apparatus of claim 7, wherein the portion of the operating system file of the mobile device from which the received hash is generated is less than an entirety of the operating system file of the mobile device. 9. The apparatus of claim 8, wherein the portion of the operating system file of the mobile device from which the received hash is generated is at least one of a beginning of the operating system file of the mobile device and an end of the operating system file of the mobile device. 10. The apparatus of claim 9, wherein the beginning of the operating system file of the mobile device is the first 128 bytes of the operating system file. 11. The apparatus of claim 9, wherein the end of the operating system file of the mobile device is the last 128 bytes of the operating system file. 12. The apparatus of claim 9, wherein the received hash is generated from both the beginning and the end of the operating system file of the mobile device. 13. 
A non-transitory computer readable medium storing instructions that, when executed by at least one processor configure the at least one processor to: store in the memory a plurality of hashes generated from operating system files of operating systems approved for use with a gaming service, each hash generated from a portion of an operating system file and a length of the operating system file; receive from a mobile device a hash generated from a portion of an operating system file of an operating system currently in use by the mobile device and a length of the operating system file; compare the received hash to the plurality of hashes stored in memory; determine that the received hash is equal to at least one of the hashes stored in memory based on the comparison; and based at least in part on the determination, approve the mobile device to access the gaming service. 14. The non-transitory computer readable medium of claim 13, wherein the portion of the operating system file of the mobile device from which the received hash is generated is less than an entirety of the operating system file of the mobile device. 15. The non-transitory computer readable medium of claim 14, wherein the portion of the operating system file of the mobile device from which the received hash is generated is at least one of a beginning of the operating system file of the mobile device and an end of the operating system file of the mobile device. 16. The non-transitory computer readable medium of claim 13, wherein the beginning of the operating system file of the mobile device is the first 128 bytes of the operating system file. 17. The non-transitory computer readable medium of claim 13, wherein the end of the operating system file of the mobile device is the last 128 bytes of the operating system file. 18. The non-transitory computer readable medium of claim 13, wherein the received hash is generated from both the beginning and the end of the operating system file of the mobile device.
Various embodiments that may generally relate to mobile gaming, location determination, mobile devices, authentication, and so on are described. Various methods are described. Various apparatus are described. Further embodiments are described.1. A method comprising: storing in memory of a gaming service a plurality of hashes generated from operating system files of operating systems approved for use with the gaming service, each hash generated from a portion of an operating system file and a length of the operating system file; receiving, by a processor of the gaming service, from a mobile device a hash generated from a portion of an operating system file of an operating system currently in use by the mobile device and a length of the operating system file; comparing, by a processor of the gaming service, the received hash to the plurality of hashes stored in memory; determining, by a processor of the gaming service, that the received hash is equal to at least one of the hashes stored in memory based on the comparison; and based at least in part on the determination, approving the mobile device to access the gaming service. 2. The method of claim 1, wherein the portion of the operating system file of the mobile device from which the received hash is generated is less than an entirety of the operating system file of the mobile device. 3. The method of claim 2, wherein the portion of the operating system file of the mobile device from which the received hash is generated is at least one of a beginning of the operating system file of the mobile device and an end of the operating system file of the mobile device. 4. The method of claim 3, wherein the beginning of the operating system file of the mobile device is the first 128 bytes of the operating system file. 5. The method of claim 3, wherein the end of the operating system file of the mobile device is the last 128 bytes of the operating system file. 6. 
The method of claim 3, wherein the received hash is generated from both the beginning and the end of the operating system file of the mobile device. 7. An apparatus comprising: at least one processor and memory including instructions that, when executed by the processor, configure the at least one processor to: store in the memory a plurality of hashes generated from operating system files of operating systems approved for use with a gaming service, each hash generated from a portion of an operating system file and a length of the operating system file; receive from a mobile device a hash generated from a portion of an operating system file of an operating system currently in use by the mobile device and a length of the operating system file; compare the received hash to the plurality of hashes stored in memory; determine that the received hash is equal to at least one of the hashes stored in memory based on the comparison; and based at least in part on the determination, approve the mobile device to access the gaming service. 8. The apparatus of claim 7, wherein the portion of the operating system file of the mobile device from which the received hash is generated is less than an entirety of the operating system file of the mobile device. 9. The apparatus of claim 8, wherein the portion of the operating system file of the mobile device from which the received hash is generated is at least one of a beginning of the operating system file of the mobile device and an end of the operating system file of the mobile device. 10. The apparatus of claim 9, wherein the beginning of the operating system file of the mobile device is the first 128 bytes of the operating system file. 11. The apparatus of claim 9, wherein the end of the operating system file of the mobile device is the last 128 bytes of the operating system file. 12. The apparatus of claim 9, wherein the received hash is generated from both the beginning and the end of the operating system file of the mobile device. 13. 
A non-transitory computer readable medium storing instructions that, when executed by at least one processor configure the at least one processor to: store in the memory a plurality of hashes generated from operating system files of operating systems approved for use with a gaming service, each hash generated from a portion of an operating system file and a length of the operating system file; receive from a mobile device a hash generated from a portion of an operating system file of an operating system currently in use by the mobile device and a length of the operating system file; compare the received hash to the plurality of hashes stored in memory; determine that the received hash is equal to at least one of the hashes stored in memory based on the comparison; and based at least in part on the determination, approve the mobile device to access the gaming service. 14. The non-transitory computer readable medium of claim 13, wherein the portion of the operating system file of the mobile device from which the received hash is generated is less than an entirety of the operating system file of the mobile device. 15. The non-transitory computer readable medium of claim 14, wherein the portion of the operating system file of the mobile device from which the received hash is generated is at least one of a beginning of the operating system file of the mobile device and an end of the operating system file of the mobile device. 16. The non-transitory computer readable medium of claim 13, wherein the beginning of the operating system file of the mobile device is the first 128 bytes of the operating system file. 17. The non-transitory computer readable medium of claim 13, wherein the end of the operating system file of the mobile device is the last 128 bytes of the operating system file. 18. The non-transitory computer readable medium of claim 13, wherein the received hash is generated from both the beginning and the end of the operating system file of the mobile device.
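The partial-file check recited in these claims — hashing only a portion of an operating system file (e.g. its first and last 128 bytes) together with the file's length, then comparing the received hash against stored approved hashes — can be sketched as follows. The claims do not name a hash algorithm, so SHA-256, the function names, and the length encoding are illustrative assumptions.

```python
import hashlib

def os_file_fingerprint(data: bytes) -> str:
    """Hash the first and last 128 bytes of an OS file plus its length,
    rather than hashing the entire file (per claims 1, 4-6)."""
    portion = data[:128] + data[-128:]
    h = hashlib.sha256()
    h.update(portion)
    h.update(len(data).to_bytes(8, "big"))  # fold the file length into the hash
    return h.hexdigest()

def approve_device(received_hash: str, approved_hashes: set) -> bool:
    # Compare the received hash to the stored approved hashes (claim 1);
    # a match approves the mobile device to access the gaming service.
    return received_hash in approved_hashes
```

Hashing only the file's ends plus its length keeps the check cheap on a mobile device, at the cost of not detecting changes confined to the middle of the file.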
2,400