[Dataset record, condensed from viewer metadata — row index: 7,900; ApplicationNumber: 15,041,747; ArtUnit: 2423; fields: Abstract, Claims, abstract-claims, TechCenter]
A method of operating a receiving device coupled to a display device at a user location is disclosed, comprising programming the receiving device to record a program and recording the program based, at least in part, on at least one segmentation message in a program stream. In one example, the receiving device, which may be a set-top terminal, for example, is coupled to a display device, such as a television, at a user location. Devices are also disclosed.
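The abstract describes recording driven by in-stream segmentation messages rather than by the listed clock time alone. A minimal sketch of the start-time decision in claim 1 follows; the class and function names are assumptions for illustration, as the patent defines no concrete API, and message start times are modeled as offsets (in seconds) from receipt of the message, matching the claims' "units of time with respect to progression of the program signal stream".

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentationMessage:
    """In-stream message giving the program start as an offset (seconds)
    from receipt of the message within the program signal stream."""
    start_offset_s: float

def resolve_record_start(scheduled_start: float,
                         receipt_clock: float,
                         msg: Optional[SegmentationMessage]) -> float:
    """Return the clock time at which recording should begin.

    scheduled_start: first program start clock time (from the program listing)
    receipt_clock:   clock time at which the message was received
    msg:             segmentation message from the same originator, if any
    """
    if msg is None:
        return scheduled_start  # no message: fall back to the listing
    message_start = receipt_clock + msg.start_offset_s
    # Record from the message-defined start if it differs from the listing.
    return message_start if message_start != scheduled_start else scheduled_start
```

For example, a listing start of 100.0 with a message received at 95.0 carrying a 10-second offset yields a recording start of 105.0, overriding the listing.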
1. A method of operating a receiving device comprising: receiving instructions to record a selected one of at least one program to be received in a program signal stream, starting at a first program start clock time; receiving the program signal stream comprising at least the selected program and at least one message defining a second program start time for the selected program, in units of time with respect to progression of the program signal stream, the program signal stream and the at least one message being received from a same originator; identifying the at least one message in the received program stream; comparing the first program start clock time of the selected program with the second program start time, in units of time, defined by the at least one message; and starting to record the selected program based, at least in part, on the second program start time, if the second program start time is different from the first program start clock time. 2. The method of claim 1, wherein the at least one message comprises a single message defining the second program start time and the second program end time, the method comprising: receiving the program signal stream including the single message. 3. The method of claim 1, further comprising: receiving the instructions from a program listing; and setting a recording start clock time for the selected program based on the first program start time defined by the program listing. 4.
The method of claim 3, further comprising: setting a recording end clock time for the selected program based on a first program end clock time defined by the program listing; wherein the at least one message contains a second program end time of the selected program, the second program end time being in units of time with respect to progression of the program signal stream, from receipt of the message to an end of the selected program in the program stream, the method further comprising: comparing the second program end time, in units of time, contained in the at least one message to the recording end time; adjusting the recording end time of the selected program to the second program end time, if the recording end time is different than the second program end time; and ending recording of the selected program at the second program end time, if the recording end time has been adjusted. 5. The method of claim 3, further comprising: after starting to record the selected program, receiving at least one second message in the program signal stream, the at least one second message defining a second program end time by indicating an amount of time until the end of the program, in units of time with respect to progression of the program signal stream, from receipt of the message to an end of the selected program in the program stream; and ending recording of the program at the second program end time, in units of time, contained in the at least one second message. 6.
The method of claim 3, further comprising: after starting to record the selected program, receiving at least one second message defining a start time for unscheduled content of the selected program, wherein the unscheduled content follows the first program end clock time, the at least one second message being received prior to the scheduled end time, in the program signal stream; continuing to record the selected program after the scheduled end time; while recording the selected program, receiving at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content being after the first program end clock time of the selected program, the at least one third message being received prior to the end time of the unscheduled content, in the program signal stream; and ending recording of the selected program at the end time of the unscheduled content defined by at least one of the at least one third message. 7. The method of claim 6, wherein the selected program is a sporting event and the unscheduled content comprises overtime of the sporting event. 8. The method of claim 6, wherein the at least one third message comprises at least one fourth message defining at least one respective expected end time of the unscheduled content and at least one fifth message defining an actual end time of the unscheduled content, the method comprising: receiving the at least one fourth message prior to receiving the at least one fifth message; and ending recording of the selected program at the actual end time defined by the at least one fifth message. 9. The method of claim 1, comprising receiving the program signal stream via a cable television network. 10.
A receiving device to receive a program signal stream comprising: an interface to receive a program signal stream comprising at least one program, the program signal stream comprising at least one message containing a first program start time for a selected one of the at least one program, the first program start time being in units of time with respect to progression of the program signal stream, from receipt of the message to a start time of the selected program in the program stream, wherein the program signal stream including the at least one message is received from a same originator; a processing device configured to: receive instructions from a user to record the selected program starting at a second program start clock time; receive the program signal stream from the interface; identify the at least one message in the received program signal stream; compare the first program start time of the selected program in units of time contained in the at least one message to the second program start clock time; and start to record the selected program starting at the first start time, in units of time contained in the at least one message, if the second start time is different from the first start clock time; the receiving device further comprising memory coupled to the processor to store the selected recorded program. 11. The receiving device of claim 10, wherein the at least one message comprises a single message defining the first program start time and the first program end time, the processing device being programmed to: receive the program signal stream including the single message. 12. The receiving device of claim 10, wherein the processing device is configured to: receive the instructions from a program listing; and set a recording start time for the selected program based on the second program start clock time defined by the program listing. 13. 
The receiving device of claim 12, wherein the processor is programmed to: set a recording end time for the selected program based on a second end clock time defined by the program listing; and the at least one message defines a second program end time of the selected program, the second program end time being in units of time with respect to progression of the program signal stream, from receipt of the message to an end of the selected program in the program stream; the processing device being further programmed to: compare the recording program end time of the selected program to the second program end time in units of time, contained in the at least one message; and adjust the recording end time of the selected program to the second program end time if the second program end time is different than the recording program end time. 14. The receiving device of claim 13, wherein: the program signal stream comprises at least one second message defining a start time for unscheduled content of the selected program and at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content of the selected program being after the scheduled end time of the selected program; and the processing device is further programmed to: receive the at least one second message defining a start time for the unscheduled content of the selected program after starting to record the selected program; continue to record the selected program after the scheduled end time for the selected program; receive the at least one third message defining an end time for the unscheduled content, while recording the selected program; and end recording of the program at the end time of the unscheduled content defined by the at least one third message. 15.
The receiving device of claim 13, wherein: at least one of the at least one second message and at least one of the at least one third message comprises a single message; and the processor is configured to derive the second program end time and the end time of the unscheduled content from the at least one third message. 16. The receiving device of claim 13, wherein the unscheduled content starts at the scheduled end of the selected program. 17. The receiving device of claim 10, wherein the processing device is configured to start to record the selected program based, at least in part, on the second program start time contained in the at least one message, if a difference between the second program start time and the first program start time is greater than a threshold. 18. The receiving device of claim 10, wherein the interface is configured to receive the program signal stream via a cable television network. 19. A method of operating a receiving device comprising: receiving instructions to record a selected one of at least one program to be received in a program signal stream from an originator, starting at a program start clock time and ending at a program end clock time; receiving at least one message in the program signal stream defining an end time for the program, from the same originator; comparing the program end clock time to the end time defined by the message and, if the end time defined by the message is later than the program end clock time, continuing to record the selected program after the program end clock time; and ending recording of the program at the end time defined by the message. 20. The method of claim 19, wherein the at least one message comprises a single message defining the second program start time and the second program end time, the method comprising: receiving the program signal stream including the single message. 21. 
The method of claim 19, comprising: receiving the instructions from a program listing; and setting a recording start clock time for the selected program based on the first program start time defined by the program listing. 22. The method of claim 21, further comprising: setting a recording start clock time for the selected program based on a first program start clock time defined by the program listing; wherein the at least one message contains a second program start time of the selected program, the second program start time being in units of time with respect to progression of the program signal stream, from receipt of the message to a start of the selected program in the program stream, the method further comprising: comparing the second program start time, in units of time, contained in the at least one message to the recording start time; adjusting the recording start time of the selected program to the second program start time, if the recording start time is different than the second program start time; and starting recording of the selected program at the second program start time, if the recording start time has been adjusted. 23. The method of claim 22, further comprising: after starting to record the selected program, receiving at least one second message in the program signal stream, the at least one second message defining a second program end time by indicating an amount of time until the end of the program, in units of time with respect to progression of the program signal stream, from receipt of the message to the second end of the selected program in the program stream; and ending recording of the program at the second program end time, in units of time, contained in the at least one second message. 24.
The method of claim 22, further comprising: after starting to record the selected program, receiving at least one second message defining a start time for unscheduled content of the selected program, wherein the unscheduled content follows the first program end clock time, the at least one second message being received prior to the scheduled end time, in the program signal stream; continuing to record the selected program after the scheduled end time; while recording the selected program, receiving at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content being after the first program end clock time of the selected program, the at least one third message being received prior to the end time of the unscheduled content, in the program signal stream; and ending recording of the selected program at the end time of the unscheduled content defined by at least one of the at least one third message. 25. The method of claim 24, wherein the selected program is a sporting event and the unscheduled content comprises overtime of the sporting event. 26. The method of claim 24, wherein the at least one third message comprises at least one fourth message defining at least one respective expected end time of the unscheduled content and at least one fifth message defining an actual end time of the unscheduled content, the method comprising: receiving the at least one fourth message prior to receiving the at least one fifth message; and ending recording of the selected program at the actual end time defined by the at least one fifth message. 27. The method of claim 19, comprising receiving the program signal stream via a cable television network. 28.
A receiving device to receive a program signal stream comprising: an interface to receive a program signal stream comprising at least one program, the program signal stream comprising at least one message containing a first program end time for a selected one of the at least one program, the first program end time being in units of time with respect to progression of the program signal stream, from receipt of the message to an end time of the selected program in the program stream, wherein the program signal stream including the at least one message is received from a same originator; a processing device configured to: be programmable by a user to record the selected program ending at a second program end clock time; receive the program signal stream from the interface; identify the at least one message in the received program signal stream; compare the first program end time of the selected program in units of time contained in the at least one message to the second program end clock time; and end recording of the selected program at the first end time, in units of time contained in the at least one message, if the second end time is different from the first end clock time; the receiving device further comprising memory coupled to the processor to store the selected recorded program. 29. The receiving device of claim 28, wherein the at least one message comprises a single message defining the first program start time and the first program end time, the processing device being programmed to: receive the program signal stream including the single message. 30. The receiving device of claim 28, wherein the processing device is programmable to record the selected program by selecting the program from a program listing. 31.
The receiving device of claim 30, wherein the program listing defines the second program start time of the selected program and the processing device is programmed to: set a recording start time for the selected program based on the second program start clock time defined by the program listing. 32. The receiving device of claim 31, wherein the processor is programmed to: set a recording start time for the selected program based on a second start clock time defined by the program listing; and the at least one message defines a second program start time of the selected program, the second program start time being in units of time with respect to progression of the program signal stream, from receipt of the message to a start of the selected program in the program stream; the processing device being further programmed to: compare the recording program start time of the selected program to the second program start time in units of time, contained in the at least one message; and adjust the recording start time of the selected program to the second program start time if the second program start time is different than the recording program start time. 33. 
The receiving device of claim 32, wherein: the program signal stream comprises at least one second message defining a start time for unscheduled content of the selected program and at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content of the selected program being after the scheduled end time of the selected program; and the processing device is further programmed to: receive the at least one second message defining a start time for the unscheduled content of the selected program after starting to record the selected program; continue to record the selected program after the scheduled end time for the selected program; receive the at least one third message defining an end time for the unscheduled content, while recording the selected program; and end recording of the program at the end time of the unscheduled content defined by the at least one third message. 34. The receiving device of claim 33, wherein at least one of the at least one second message and at least one of the at least one third message are a single message. 35. The receiving device of claim 33, wherein the unscheduled content starts at the scheduled end of the selected program. 36. The receiving device of claim 28, wherein the processing device is configured to end recording of the selected program based, at least in part, on the second program end time contained in the at least one message, if a difference between the second program end time and the first program end time is greater than a threshold. 37. The receiving device of claim 28, wherein the interface is configured to receive the program signal stream via a cable television network.
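The overtime-handling claims (claims 5-8 and their device counterparts) amount to a simple decision rule: expected-end messages extend the recording end, and an actual-end message fixes it definitively. A hedged sketch follows; the message kinds and tuple layout are assumptions for illustration, since the patent does not define a concrete message format.

```python
from typing import List, Tuple

def final_end_time(scheduled_end: float,
                   messages: List[Tuple[float, str, float]]) -> float:
    """Decide when to stop recording.

    scheduled_end: program end clock time from the program listing.
    messages:      (receipt_clock, kind, offset_s) tuples in arrival order,
                   where kind is "expected_end" (a fourth-message-style
                   estimate) or "actual_end" (a fifth-message-style
                   definitive end), and offset_s counts from receipt of
                   the message, per the claims' "units of time".
    """
    end = scheduled_end
    for receipt_clock, kind, offset_s in messages:
        candidate = receipt_clock + offset_s
        if kind == "expected_end":
            end = max(end, candidate)   # keep recording through overtime
        elif kind == "actual_end":
            return candidate            # definitive end: stop here
    return end
```

With a listing end of 60.0, an expected-end message at 55.0 with a 20-second offset extends recording to 75.0; a later actual-end message then terminates it at the definitive time, matching the fourth/fifth-message ordering in claim 8.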
A method of operating a receiving device coupled to a display device at a user location is disclosed, comprising programming the receiving device to record a program and recording the program based, at least in part, on at least one segmentation message in a program stream. In one example, the receiving device, which may be a set-top terminal, for example, is coupled to a display device, such as a television, at a user location. Devices are disclosed, as well.1. A method of operating a receiving device comprising: receiving instructions to record a selected one of at least one program to be received in a program signal stream, starting at a first program start clock time; receiving the program signal stream comprising at least the selected program and at least one message defining a second program start time for the selected program, in units of time with respect to progression of the program signal stream, the program signal stream and the at least one message being received from a same originator; identifying the at least one message in the received program stream; comparing the first program start clock time of the selected program with the second program start time, in units of time, defined by the at least one message; and starting to record the selected program based, at least in part, on the second program start time, if the second program start time is different from the first program start clock time. 2. The method of claim 1, wherein the at least one message comprises a single message defining the second program start time and the second program end time, the method comprising: receiving the program signal stream including the single message. 3. The method of claim 1, further comprising; receiving the instructions from a program listing; and setting a recording start clock time for the selected program based on the first program start time defined by the program listing. 4. 
The method of claim 3, further comprising; setting a recording end clock time for the selected program based on a first program end clock time defined by the program listing; wherein the at least one message contains a second program end time of the selected program, the second program end time being in units of time with respect to progression of the program signal stream, from receipt of the message to an end of the selected program in the program stream, the method further comprising; comparing the second program end time, in units of time, contained in the at least one message to the recording end time; adjusting the recording end time of the selected program to the second program end time, if the recording end time is different than the second program end time; and ending recording of the selected program at the second program end time, if the recording end time has been adjusted. 5. The method of claim 3, further comprising: after starting to record the selected program, receiving at least one second message in the program signal stream, the at least one second message defining a second program end time by indicating an amount of time until the end of the program, in units of time with respect to progression of the program signal stream, from receipt of the message to an end of the selected program in the program stream; and ending recording of the program at the second program end time, in units of time, contained in the at least one second message. 6. 
The method of claim 3, further comprising: after starting to record the selected program; receiving at least one second message defining a start time for unscheduled content of the selected program, wherein the unscheduled content follows the first program end clock time, the at least one second message being received prior to the scheduled end time, in the program signal stream; continuing to record the selected program after the scheduled end time; while recording the selected program, receiving at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content being after the first program end clock time of the selected program, the at least one third message being received prior to the end time of the unscheduled content, in the program signal stream; and ending recording of the selected program at the end time of the unscheduled content defined by at least one of the at least one third message. 7. The method of claim 6, wherein the selected program is a sporting event and the unscheduled content comprises overtime of the sporting event. 8. The method of claim 6, wherein the at least one third message comprises at least one fourth message defining at least one respective expected end time of the unscheduled content and at least one fifth message defining an actual end time of the unscheduled content, the method comprising: receiving the at least one fourth message prior to receiving the at least one fifth message; and ending recording of the selected program at the actual end time defined by the at least one fifth message. 9. The method of claim 1, comprising receiving the program signal stream via a cable television network. 10. 
A receiving device to receive a program signal stream comprising: an interface to receive a program signal stream comprising at least one program, the program signal stream comprising at least one message containing a first program start time for a selected one of the at least one program, the first program start time being in units of time with respect to progression of the program signal stream, from receipt of the message to a start time of the selected program in the program stream, wherein the program signal stream including the at least one message is received from a same originator; a processing device configured to: receive instructions from a user to record the selected program starting at a second program start clock time; receive the program signal stream from the interface; identify the at least one message in the received program signal stream; compare the first program start time of the selected program in units of time contained in the at least one message to the second program start clock time; and start to record the selected program starting at the first start time, in units of time contained in the at least one message, if the second start time is different from the first start clock time; the receiving device further comprising memory coupled to the processor to store the selected recorded program. 11. The receiving device of claim 10, wherein the at least one message comprises a single message defining the first program start time and the first program end time, the processing device being programmed to: receive the program signal stream including the single message. 12. The receiving device of claim 10, wherein the processing device is configured to: receive the instructions from a program listing; and set a recording start time for the selected program based on the second program start clock time defined by the program listing. 13. 
The receiving device of claim 12, wherein the processor is programmed to: set a recording end time for the selected program based on a second end clock time defined by the program listing; and the at least one message defines a second program end time of the selected program, the second program end time being in units of time with respect to progression of the program signal stream, from receipt of the message to an end of the selected program in the program stream; the processing device being further programmed to: compare the recording program end time of the selected program to the second program end time in units of time, contained in the at least one message; and adjust the recording end time of the selected program to the second program end time if the second program end time is different than the recording program end 14. The receiving device claim 13, wherein: the program signal stream comprises at least one second message defining a start time for unscheduled content of the selected program and at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content of the selected program being after the scheduled end time of the selected program; and the processing device is further programmed to; receive the at least one second message defining a start time for the unscheduled content of the selected program after starting to record the selected program; continue to record the selected program after the scheduled end time for the selected program; receive the at least one third message defining an end time for the unscheduled content, while recording the selected program; and end recording of the program at the end time of the unscheduled content defined by the at least one third message. 15. 
The receiving device of claim 13, wherein: at least one of the at least one second message and at least one of the at least one third message comprises a single message; and the processor is configured to derive the second program end time and the end time of the unscheduled content from the at least one third message. 16. The receiving device of claim 13, wherein the unscheduled content starts at the scheduled end of the selected program. 17. The receiving device of claim 10, wherein the processing device is configured to start to record the selected program based, at least in part, on the second program start time contained in the at least one message, if a difference between the second program start time and the first program start time is greater than a threshold. 18. The receiving device of claim 10, wherein the interface is configured to receive the program signal stream via a cable television network. 19. A method of operating a receiving device comprising: receiving instructions to record a selected one of at least one program to be received in a program signal stream from an originator, starting at a program start clock time and ending at a program end clock time; receiving at least one message in the program signal stream defining an end time for the program, from the same originator; comparing the program end clock time to the end time defined by the message and, if the end time defined by the message is later than the program end clock time, continuing to record the selected program after the program end clock time; and ending recording of the program at the end time defined by the message. 20. The method of claim 19, wherein the at least one message comprises a single message defining the second program start time and the second program end time, the method comprising: receiving the program signal stream including the single message. 21. 
The method of claim 19, comprising; receiving the instructions from a program listing; and setting a recording start clock time for the selected program based on the first program start time defined by the program listing. 22. The method of claim 21, further comprising; setting a recording start clock time for the selected program based on a first program start clock time defined by the program listing; wherein the at least one message contains a second program start time of the selected program, the second program start time being in units of time with respect to progression of the program signal stream, from receipt of the message to a start of the selected program in the program stream, the method further comprising; comparing the second program start time, in units of time, contained in the at least one message to the recording start time; adjusting the recording start time of the selected program to the second program end time, if the recording start time is different than the second program start time; and ending recording of the selected program at the second program start time, if the recording end time has been adjusted. 23. The method of claim 22, further comprising: after starting to record the selected program, receiving at least one second message in the program signal stream, the at least one second message defining a second program end time by indicating an amount of time until the end of the program, in units of time with respect to progression of the program signal stream, from receipt of the message to the second end of the selected program in the program stream; and ending recording of the program at the second program end time, in units of time, contained in the at least one second message. 24. 
The method of claim 22, further comprising: after starting to record the selected program; receiving at least one second message defining a start time for unscheduled content of the selected program, wherein the unscheduled content follows the first program end clock time, the at least one second message being received prior to the scheduled end time, in the program signal stream; continuing to record the selected program after the scheduled end time; while recording the selected program, receiving at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content being after the first program end clock time of the selected program, the at least one third message being received prior to the end time of the unscheduled content, in the program signal stream; and ending recording of the selected program at the end time of the unscheduled content defined by at least one of the at least one third message. 25. The method of claim 24, wherein the selected program is a sporting event and the unscheduled content comprises overtime of the sporting event. 26. The method of claim 24, wherein the at least one third message comprises at least one fourth message defining at least one respective expected end time of the unscheduled content and at least one fifth message defining an actual end time of the unscheduled content, the method comprising: receiving the at least one fourth message prior to receiving the at least one fifth message; and ending recording of the selected program at the actual end time defined by the at least one fifth message. 27. The method of claim 19, comprising receiving the program signal stream via a cable television network. 28. 
A receiving device to receive a program signal stream comprising: an interface to receive a program signal stream comprising at least one program, the program signal stream comprising at least one message containing a first program end time for a selected one of the at least one program, the first program end time being in units of time with respect to progression of the program signal stream, from receipt of the message to an end time of the selected program in the program stream, wherein the program signal stream including the at least one message is received from a same originator; a processing device configured to: be programmable by a user to record the selected program ending at a second program end clock time; receive the program signal stream from the interface; identify the at least one message in the received program signal stream; compare the first program end time of the selected program, in units of time, contained in the at least one message to the second program end clock time; and end recording of the selected program at the first program end time, in units of time contained in the at least one message, if the first program end time is different from the second program end clock time; the receiving device further comprising memory coupled to the processing device to store the selected recorded program. 29. The receiving device of claim 28, wherein the at least one message comprises a single message defining the first program start time and the first program end time, the processing device being programmed to: receive the program signal stream including the single message. 30. The receiving device of claim 28, wherein the processing device is programmable to record the selected program by selecting the program from a program listing. 31. 
The receiving device of claim 30, wherein the program listing defines the second program start clock time of the selected program and the processing device is programmed to: set a recording start time for the selected program based on the second program start clock time defined by the program listing. 32. The receiving device of claim 31, wherein the processing device is programmed to: set a recording start time for the selected program based on the second program start clock time defined by the program listing; and the at least one message defines a second program start time of the selected program, the second program start time being in units of time with respect to progression of the program signal stream, from receipt of the message to a start of the selected program in the program stream; the processing device being further programmed to: compare the recording start time of the selected program to the second program start time, in units of time, contained in the at least one message; and adjust the recording start time of the selected program to the second program start time if the second program start time is different than the recording start time. 33. 
The receiving device of claim 32, wherein: the program signal stream comprises at least one second message defining a start time for unscheduled content of the selected program and at least one third message defining an end time for the unscheduled content of the selected program, the end time for the unscheduled content of the selected program being after the scheduled end time of the selected program; and the processing device is further programmed to: receive the at least one second message defining a start time for the unscheduled content of the selected program after starting to record the selected program; continue to record the selected program after the scheduled end time for the selected program; receive the at least one third message defining an end time for the unscheduled content, while recording the selected program; and end recording of the program at the end time of the unscheduled content defined by the at least one third message. 34. The receiving device of claim 33, wherein at least one of the at least one second message and at least one of the at least one third message are a single message. 35. The receiving device of claim 33, wherein the unscheduled content starts at the scheduled end of the selected program. 36. The receiving device of claim 28, wherein the processing device is configured to end recording of the selected program based, at least in part, on the first program end time contained in the at least one message, if a difference between the first program end time and the second program end clock time is greater than a threshold. 37. The receiving device of claim 28, wherein the interface is configured to receive the program signal stream via a cable television network.
2,400
7,901
7,901
16,032,185
2,497
Methods, systems, and devices for updating access permissions of users in an access control system are described. The access permissions are capable of being updated based on rules and thresholds that include, as at least one variable, presence or contextual information associated with a user. The presence or contextual information associated with a user may be analyzed to trigger a credential update process for that user or other users within the access control system.
1. A method, comprising: receiving contextual information regarding a first user; based on the received contextual information, determining a credential update to perform in connection with at least one device associated with the first user; generating a first message that contains at least one instruction to update at least one credential; and transmitting the first message to the at least one device associated with the first user. 2. The method of claim 1, wherein the at least one device comprises a user device that is capable of exchanging messages via a communication network. 3. The method of claim 2, wherein the first message is transmitted to the at least one device via the communication network in at least one of an SMS message, an email, and an HTTP request. 4. The method of claim 2, wherein the user device comprises an NFC-enabled smart phone. 5. The method of claim 4, wherein the user device further comprises a secure element that stores the at least one credential as sensitive data. 6. The method of claim 5, wherein the secure element corresponds to at least one of a SIM card, microSD card, removable IC, and embedded IC. 7. The method of claim 1, wherein the at least one device comprises a local host that is protecting at least one asset. 8. The method of claim 1, wherein the at least one instruction to update the at least one credential comprises an instruction to at least one of modify, add, remove, activate, and deactivate a logical credential on the at least one device. 9. The method of claim 1, wherein the contextual information regarding the first user comprises presence information. 10. The method of claim 1, wherein the contextual information regarding the first user comprises location information. 11. The method of claim 1, wherein the contextual information regarding the first user comprises information regarding the first user's usage of the at least one device. 12. 
The method of claim 1, further comprising: based on the received contextual information, determining a credential update to perform in connection with at least one device associated with a second user, the second user being different than the first user; generating a second message that contains at least one instruction to update at least one credential for the second user; and transmitting the second message to the at least one device associated with the second user. 13. The method of claim 1, further comprising: generating a second message that contains the at least one instruction to update the at least one credential; and transmitting the second message to a local host within an access control system, the local host protecting at least one asset and conditioning availability of the at least one asset to users that present a physical credential having a valid logical credential stored thereon. 14. A computer-readable medium comprising processor-executable instructions that, when executed by a processor, perform the method of claim 1. 15. An access control system, comprising: a credential control authority configured to receive at least one of presence information and contextual information associated with a first user of the access control system and then determine whether a credential update process is to be performed for at least one device associated with the first user, the credential control authority further configured to invoke the credential update process upon determining that the first user has crossed at least one of a physical and logical threshold based on the received at least one of presence information and contextual information. 16. The system of claim 15, wherein the at least one of a physical and logical threshold corresponds to a predetermined distance away from a predetermined location. 17. 
The system of claim 15, wherein the at least one of a physical and logical threshold corresponds to a predetermined action of the first user detected at the at least one device associated with the first user. 18. The system of claim 15, further comprising one or more local hosts configured to control access to one or more assets. 19. The system of claim 18, wherein the one or more assets include at least one of a physical and logical asset. 20. The system of claim 18, wherein the one or more local hosts correspond to an RFID reader, wherein the at least one device associated with the first user comprises an NFC-enabled communication device, and wherein the credential update process includes transmitting a first credential update message to the one or more local hosts as well as transmitting a second credential update message to the at least one device associated with the first user. 21. A credential control authority configured to communicate with a contextual information source and based on information received from the contextual information source, determine whether a credential update process is to be performed for at least one device associated with a first user, the credential control authority further configured to invoke the credential update process upon determining that the first user has crossed at least one of a physical and logical threshold based on the information received from the contextual information source. 22. The credential control authority of claim 21, wherein the contextual information source is executed on a server that is separate from a server on which the credential control authority is operated. 23. The credential control authority of claim 22, wherein the contextual information source is connected with the credential control authority via a communication network. 24. 
The credential control authority of claim 21, wherein the information received from the contextual information source includes at least one of presence information, communication context information, location context information, group context information, and calendar information.
Methods, systems, and devices for updating access permissions of users in an access control system are described. The access permissions are capable of being updated based on rules and thresholds that include, as at least one variable, presence or contextual information associated with a user. The presence or contextual information associated with a user may be analyzed to trigger a credential update process for that user or other users within the access control system.1. A method, comprising: receiving contextual information regarding a first user; based on the received contextual information, determining a credential update to perform in connection with at least one device associated with the first user; generating a first message that contains at least one instruction to update at least one credential; and transmitting the first message to the at least one device associated with the first user. 2. The method of claim 1, wherein the at least one device comprises a user device that is capable of exchanging messages via a communication network. 3. The method of claim 2, wherein the first message is transmitted to the at least one device via the communication network in at least one of an SMS message, an email, and an HTTP request. 4. The method of claim 2, wherein the user device comprises an NFC-enabled smart phone. 5. The method of claim 4, wherein the user device further comprises a secure element that stores the at least one credential as sensitive data. 6. The method of claim 5, wherein the secure element corresponds to at least one of a SIM card, microSD card, removable IC, and embedded IC. 7. The method of claim 1, wherein the at least one device comprises a local host that is protecting at least one asset. 8. The method of claim 1, wherein the at least one instruction to update the at least one credential comprises an instruction to at least one of modify, add, remove, activate, and deactivate a logical credential on the at least one device. 9. 
The method of claim 1, wherein the contextual information regarding the first user comprises presence information. 10. The method of claim 1, wherein the contextual information regarding the first user comprises location information. 11. The method of claim 1, wherein the contextual information regarding the first user comprises information regarding the first user's usage of the at least one device. 12. The method of claim 1, further comprising: based on the received contextual information, determining a credential update to perform in connection with at least one device associated with a second user, the second user being different than the first user; generating a second message that contains at least one instruction to update at least one credential for the second user; and transmitting the second message to the at least one device associated with the second user. 13. The method of claim 1, further comprising: generating a second message that contains the at least one instruction to update the at least one credential; and transmitting the second message to a local host within an access control system, the local host protecting at least one asset and conditioning availability of the at least one asset to users that present a physical credential having a valid logical credential stored thereon. 14. A computer-readable medium comprising processor-executable instructions that, when executed by a processor, perform the method of claim 1. 15. 
An access control system, comprising: a credential control authority configured to receive at least one of presence information and contextual information associated with a first user of the access control system and then determine whether a credential update process is to be performed for at least one device associated with the first user, the credential control authority further configured to invoke the credential update process upon determining that the first user has crossed at least one of a physical and logical threshold based on the received at least one of presence information and contextual information. 16. The system of claim 15, wherein the at least one of a physical and logical threshold corresponds to a predetermined distance away from a predetermined location. 17. The system of claim 15, wherein the at least one of a physical and logical threshold corresponds to a predetermined action of the first user detected at the at least one device associated with the first user. 18. The system of claim 15, further comprising one or more local hosts configured to control access to one or more assets. 19. The system of claim 18, wherein the one or more assets include at least one of a physical and logical asset. 20. The system of claim 18, wherein the one or more local hosts correspond to an RFID reader, wherein the at least one device associated with the first user comprises an NFC-enabled communication device, and wherein the credential update process includes transmitting a first credential update message to the one or more local hosts as well as transmitting a second credential update message to the at least one device associated with the first user. 21. 
A credential control authority configured to communicate with a contextual information source and based on information received from the contextual information source, determine whether a credential update process is to be performed for at least one device associated with a first user, the credential control authority further configured to invoke the credential update process upon determining that the first user has crossed at least one of a physical and logical threshold based on the information received from the contextual information source. 22. The credential control authority of claim 21, wherein the contextual information source is executed on a server that is separate from a server on which the credential control authority is operated. 23. The credential control authority of claim 22, wherein the contextual information source is connected with the credential control authority via a communication network. 24. The credential control authority of claim 21, wherein the information received from the contextual information source includes at least one of presence information, communication context information, location context information, group context information, and calendar information.
2,400
7,902
7,902
14,472,688
2,448
A service includes registering a plurality of peer devices at a registration server and registering a peer server at the registration server. A first peer device of the plurality of peer devices communicates with a second peer device of the plurality of peer devices via the peer server. The peer server performs peer operations in a peer-to-peer network on behalf of the first peer device, and the peer server identifies itself to other devices as the first peer device.
1-76. (canceled) 77. A service comprising: registering a plurality of peer devices at a registration server; and registering a peer server at the registration server; wherein a first peer device of the plurality of peer devices communicates with a second peer device of the plurality of peer devices via the peer server, and wherein the peer server performs peer operations in a peer-to-peer network on behalf of the first peer device, and the peer server identifies itself to other devices as the first peer device. 78. The service of claim 77, further comprising, providing call progress functionality on behalf of the first peer device. 79. The service of claim 78, further comprising, providing the call progress functionality for at least one of incoming calls and outgoing calls. 80. The service of claim 77, further comprising, with the peer server, providing a ring-back signal to a calling party calling the first peer device. 81. The service of claim 77, further comprising: selectively forwarding information on behalf of the first peer device according to rules, and selectively accumulating information on behalf of the first peer device according to the rules. 82. The service of claim 81, further comprising, receiving updates of the rules. 83. The service of claim 81, further comprising, selectively forwarding the accumulated information to the first peer device. 84. The service of claim 77, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is offline. 85. The service of claim 77, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is in stand-by mode. 86. The service of claim 77, wherein the first peer device is one of a portable network device, a mobile network device and a battery operated network device. 87. 
A computer program product, stored on one or more computer-readable media, comprising instructions operative to cause a programmable processor of a network device to: register a plurality of peer devices at a registration server; and register a peer server at the registration server; wherein a first peer device of the plurality of peer devices communicates with a second peer device of the plurality of peer devices via the peer server, and wherein the peer server performs peer operations in a peer-to-peer network on behalf of the first peer device and the peer server identifies itself to other devices as the first peer device. 88. The computer program product of claim 87, wherein the first peer device communicates with the peer server over a wireless network. 89. The computer program product of claim 88, wherein the wireless network is one of a cellular network, a wireless local area network, a wireless metropolitan area network, a personal area network, a WiFi network, a Worldwide Interoperability for Microwave Access (WiMAX) network, a Bluetooth network, a Zigbee network and an Ultra Wide Band (UWB) network. 90. The computer program product of claim 87, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is offline. 91. The computer program product of claim 87, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is in stand-by mode. 92. The computer program product of claim 87, wherein the first peer device is one of a portable network device, a mobile network device and a battery operated network device. 93. The computer program product of claim 87, wherein the instructions are further operative to cause the programmable processor of the network device to selectively forward information on behalf of the first peer device according to rules. 94. 
The computer program product of claim 93, wherein the instructions are further operative to cause the programmable processor of the network device to selectively accumulate information on behalf of the first peer device according to the rules. 95. The computer program product of claim 93, wherein the instructions are further operative to cause the programmable processor of the network device to receive updates of the rules. 96. A peer-to-peer communication network comprising: at least one peer server; and a plurality of peer devices, wherein at least one of the plurality of peer devices, being a first peer device, communicates with at least another one of the plurality of peer devices, being a second peer device, via at least one of the peer servers, being a first peer server, and wherein the first peer server performs peer operations in the peer-to-peer network on behalf of the first peer device and the first peer server identifies itself to other devices as the first peer device.
A service includes registering a plurality of peer devices at a registration server and registering a peer server at the registration server. A first peer device of the plurality of peer devices communicates with a second peer device of the plurality of peer devices via the peer server. The peer server performs peer operations in a peer-to-peer network on behalf of the first peer device, and the peer server identifies itself to other devices as the first peer device.1-76. (canceled) 77. A service comprising: registering a plurality of peer devices at a registration server; and registering a peer server at the registration server; wherein a first peer device of the plurality of peer devices communicates with a second peer device of the plurality of peer devices via the peer server, and wherein the peer server performs peer operations in a peer-to-peer network on behalf of the first peer device, and the peer server identifies itself to other devices as the first peer device. 78. The service of claim 77, further comprising, providing call progress functionality on behalf of the first peer device. 79. The service of claim 78, further comprising, providing the call progress functionality for at least one of incoming calls and outgoing calls. 80. The service of claim 77, further comprising, with the peer server, providing a ring-back signal to a calling party calling the first peer device. 81. The service of claim 77, further comprising: selectively forwarding information on behalf of the first peer device according to rules, and selectively accumulating information on behalf of the first peer device according to the rules. 82. The service of claim 81, further comprising, receiving updates of the rules. 83. The service of claim 81, further comprising, selectively forwarding the accumulated information to the first peer device. 84. 
The service of claim 77, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is offline. 85. The service of claim 77, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is in stand-by mode. 86. The service of claim 77, wherein the first peer device is one of a portable network device, a mobile network device and a battery operated network device. 87. A computer program product, stored on one or more computer-readable media, comprising instructions operative to cause a programmable processor of a network device to: register a plurality of peer devices at a registration server; and register a peer server at the registration server; wherein a first peer device of the plurality of peer devices communicates with a second peer device of the plurality of peer devices via the peer server, and wherein the peer server performs peer operations in a peer-to-peer network on behalf of the first peer device and the peer server identifies itself to other devices as the first peer device. 88. The computer program product of claim 87, wherein the first peer device communicates with the peer server over a wireless network. 89. The computer program product of claim 88, wherein the wireless network is one of a cellular network, a wireless local area network, a wireless metropolitan area network, a personal area network, a WiFi network, a Worldwide Interoperability for Microwave Access (WiMAX) network, a Bluetooth network, a Zigbee network and an Ultra Wide Band (UWB) network. 90. The computer program product of claim 87, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is offline. 91. The computer program product of claim 87, wherein the peer server performs peer operations on behalf of the first peer device when the first peer device is in stand-by mode. 92. 
The computer program product of claim 87, wherein the first peer device is one of a portable network device, a mobile network device and a battery operated network device. 93. The computer program product of claim 87, wherein the instructions are further operative to cause the programmable processor of the network device to selectively forward information on behalf of the first peer device according to rules. 94. The computer program product of claim 93, wherein the instructions are further operative to cause the programmable processor of the network device to selectively accumulate information on behalf of the first peer device according to the rules. 95. The computer program product of claim 93, wherein the instructions are further operative to cause the programmable processor of the network device to receive updates of the rules. 96. A peer-to-peer communication network comprising: at least one peer server; and a plurality of peer devices, wherein at least one of the plurality of peer devices, being a first peer device, communicates with at least another one of the plurality of peer devices, being a second peer device, via at least one of the peer servers, being a first peer server, and wherein the first peer server performs peer operations in the peer-to-peer network on behalf of the first peer device and the first peer server identifies itself to other devices as the first peer device.
2,400
7,903
7,903
14,514,529
2,414
A vehicle gateway module is configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks. The gateway module has a cellular data link which provides a direct connection from the gateway module to the Internet and a wireless data link which provides a direct connection from the gateway module to an area within the vehicle whereby the cellular data link in conjunction with the wireless data link establish an Internet hotspot for a mobile device within the vehicle.
1. A system for a vehicle comprising: a gateway module configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks, the gateway module having a cellular data link which provides a direct connection from the gateway module to the Internet and a wireless data link which provides a direct connection from the gateway module to an area within the vehicle whereby the cellular data link in conjunction with the wireless data link establish an Internet hotspot for a mobile device within the vehicle. 2. The system of claim 1 wherein: communication between the mobile device and a remote entity connected to the Internet is enabled via the cellular data link and the wireless data link. 3. The system of claim 1 wherein: communication between a vehicle device connected to a vehicle network connected to the gateway module, the mobile device, and a remote entity connected to the Internet is enabled via the gateway module, the cellular data link, and the wireless data link. 4. The system of claim 1 wherein: the cellular data link is one of a 3G data link and a 4G data link. 5. The system of claim 1 wherein: the wireless data link is one of a WiFi™ wireless data link and a Bluetooth™ wireless data link. 6. The system of claim 5 wherein: the gateway module further includes a second wireless data link which provides a direct connection between the gateway module and a mobile device within the vehicle, the second wireless data link being the other one of a WiFi™ wireless data link and a Bluetooth™ wireless data link. 7. The system of claim 1 wherein: the vehicle networks are respectively one of a Controller Area Network (CAN), a Local Interconnect Network (LIN), an Ethernet network, a FlexRay™ network, and a Media Oriented Systems Transport (MOST) network. 8. 
A method for a vehicle comprising: providing a gateway module configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks; and establishing an Internet hotspot within the vehicle for use by a mobile device within the vehicle by providing a cellular data link from the gateway module to the Internet and a wireless data link from the gateway module to an area within the vehicle. 9. The method of claim 8 further comprising: communicating between the mobile device and a remote entity connected to the Internet via the cellular data link and the wireless data link. 10. The method of claim 8 further comprising: communicating between a vehicle device connected to a vehicle network connected to the gateway module, the mobile device, and a remote entity connected to the Internet via the gateway module, the cellular data link, and the wireless data link. 11. The method of claim 8 further comprising: communicating a communication from the mobile device to a remote entity connected to the Internet via the wireless data link and the cellular data link; and communicating a communication from the remote entity to a vehicle device connected to a vehicle network connected to the gateway module via the cellular data link and the gateway module in response to the communication communicated from the mobile device. 12. The method of claim 8 further comprising: remotely accessing, via the cellular data link, by a remote entity connected to the Internet, a vehicle device connected to a vehicle network connected to the gateway module in response to a request communicated from the mobile device to the remote entity via the wireless data link and the cellular data link. 13. The method of claim 8 wherein: the cellular data link is one of a 3G data link and a 4G data link, and the wireless data link is one of a WiFi™ wireless data link and a Bluetooth™ wireless data link. 14. 
The method of claim 8 further comprising: communicating between a vehicle device connected to the gateway module via the wireless data link and a vehicle device connected to the gateway module via one of the vehicle networks. 15. A method for a vehicle comprising: providing a gateway module configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks; and communicating between a remote entity connected to the Internet and a mobile device within the vehicle via a cellular data link of the gateway module and a wireless data link of the gateway module in which the cellular data link provides a direct connection from the gateway module to the Internet and the wireless data link provides a direct connection from the gateway module to an area within the vehicle. 16. The method of claim 15 further comprising: communicating between a vehicle device connected to a vehicle network connected to the gateway module, the mobile device, and the remote entity via the gateway module, the cellular data link, and the wireless data link. 17. The method of claim 15 further comprising: communicating a communication from the remote entity to a vehicle device connected to a vehicle network connected to the gateway module via the cellular data link and the gateway module in response to a communication communicated from the mobile device to the remote entity via the wireless data link and the cellular data link. 18. The method of claim 15 further comprising: remotely accessing, via the cellular data link, by the remote entity, a vehicle device connected to a vehicle network connected to the gateway module in response to a request communicated from the mobile device to the remote entity via the wireless data link and the cellular data link. 19. The method of claim 15 wherein: the cellular data link is one of a 3G data link and a 4G data link. 20. 
The method of claim 15 wherein: the wireless data link is one of a WiFi™ wireless data link and a Bluetooth™ wireless data link.
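The gateway claimed above bridges in-vehicle networks with a cellular uplink and a local wireless hotspot. The following is a minimal sketch of that topology, assuming illustrative class and method names (`GatewayModule`, `attach_device`, `join_hotspot`, and so on) that do not come from the patent:

```python
# Sketch of the claimed vehicle gateway: a module that bridges in-vehicle
# networks (CAN, LIN, ...) with a cellular uplink and an in-vehicle hotspot.
# All names and data structures here are illustrative assumptions.

class GatewayModule:
    def __init__(self):
        self.vehicle_networks = {}    # network name -> {device name: handler}
        self.hotspot_clients = set()  # mobile devices joined via WiFi/Bluetooth

    def attach_device(self, network, device, handler):
        """Register a vehicle device (e.g. an ECU) on a vehicle network."""
        self.vehicle_networks.setdefault(network, {})[device] = handler

    def join_hotspot(self, mobile_device):
        """A mobile device joins the in-vehicle Internet hotspot."""
        self.hotspot_clients.add(mobile_device)

    def forward_to_internet(self, mobile_device, payload):
        """Hotspot client -> cellular uplink (the claim 2 path)."""
        if mobile_device not in self.hotspot_clients:
            raise PermissionError("device not on hotspot")
        return f"cellular-uplink:{payload}"

    def remote_access(self, network, device, command):
        """Remote entity -> vehicle device via cellular link + gateway
        (the claim 12 path)."""
        handler = self.vehicle_networks[network][device]
        return handler(command)

gw = GatewayModule()
gw.attach_device("CAN", "door_lock", lambda cmd: f"door_lock:{cmd}")
gw.join_hotspot("phone-1")
print(gw.forward_to_internet("phone-1", "GET /status"))  # phone traffic out via cellular
print(gw.remote_access("CAN", "door_lock", "unlock"))    # remote entity reaches an ECU
```

The point of the sketch is the dual role the claims describe: the same module terminates both the hotspot (wireless data link) and the uplink (cellular data link), so it can relay traffic in either direction.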
A vehicle gateway module is configured to communicate over vehicle networks connected to the gateway module with vehicle devices connected to the vehicle networks. The gateway module has a cellular data link which provides a direct connection from the gateway module to the Internet and a wireless data link which provides a direct connection from the gateway module to an area within the vehicle whereby the cellular data link in conjunction with the wireless data link establish an Internet hotspot for a mobile device within the vehicle.
2,400
7,904
15,315,392
2,423
A system, apparatus and method for distribution of a signal on a single cable are provided. The present disclosure provides for receiving a signal from a provider, determining whether the signal is to be delivered using a first signal format or a second signal format, providing the signal to a second device over a co-axial cable in the first signal format if it is determined that the signal is to be delivered using the first signal format, and providing the signal over the co-axial cable in the second signal format if it is determined that the signal is to be delivered using the second signal format. The first signal format may include at least one of an analog signal and an RF modulated analog signal. The second signal format may include at least one of a digital signal, an RF modulated IP signal and a MoCA signal.
1. A method comprising: receiving a signal from a media service provider; determining whether the signal is to be delivered using a first signal format or a second signal format through a co-axial cable; providing the signal to a second device over the co-axial cable in the first signal format if it is determined that the signal is to be delivered using the first signal format; and providing the signal to the second device over the co-axial cable in the second signal format if it is determined that the signal is to be delivered using the second signal format. 2. The method of claim 1, wherein the first signal format is at least one of an analog signal format and an RF modulated analog signal. 3. The method of claim 2, wherein the second signal format is at least one of a digital signal, an RF modulated Internet Protocol (IP) signal and a multimedia over cable alliance signal. 4. The method of claim 3, wherein the second signal format uses a signal frequency range that is different than the first signal format. 5. The method of claim 3, wherein the second signal format uses a signal frequency range that is substantially the same as the first signal format. 6. The method of claim 1, wherein the providing the signal in the first format further includes providing the signal to a television display device and wherein the providing the signal in the second format includes providing the signal to a set top box. 7. The method of claim 6, wherein the providing the signal to the set top box in the second signal format further includes providing the signal on at least one of a first network and a second network, the first network being different from the second network. 8. The method of claim 1, wherein the received signal is provided over a first network and the delivered signal is provided over a second network. 9. 
The method of claim 8, wherein the determining further includes: detecting a presence of a second signal in the second signal format; isolating the second network; and if the second signal in the second signal format is detected after isolating the second network, providing the signal in the first signal format. 10. The method of claim 9, wherein if presence of the second signal is not detected after isolating the second network, providing the signal in the second signal format. 11. An apparatus comprising: a signal interface that receives a signal from a media service provider; a controller coupled to the signal interface, the controller determining whether the signal is to be delivered using a first signal format or a second signal format; and a switch coupled to the controller, the switch providing the signal to a second device in one of the first signal format and the second signal format based on the determination by the controller. 12. The apparatus of claim 11, wherein the first signal format is at least one of an analog signal format and an RF modulated analog signal. 13. The apparatus of claim 12, wherein the second signal format is at least one of a digital signal, an RF modulated Internet Protocol (IP) signal and a multimedia over cable alliance signal. 14. The apparatus of claim 13, wherein the second signal format uses a signal frequency range that is different than the first signal format. 15. The apparatus of claim 13, wherein the second signal format uses a signal frequency range that is substantially the same as the first signal format. 16. The apparatus of claim 11, wherein the switch provides the signal in the first signal format to a television display device and provides the signal in the second signal format to a set top box. 17. 
The apparatus of claim 16, wherein the signal provided in the second signal format to the set top box is provided on at least one of a first network and a second network, the first network being different than the second network. 18. The apparatus of claim 11, wherein the signal interface is coupled to a first network and the switch is coupled to a second network. 19. The apparatus of claim 18, wherein the controller further detects a presence of a second signal in the second signal format, isolates the second network via the switch and wherein the switch provides the signal in the first signal format if the presence of the second signal in the second signal format is detected after isolating the second network. 20. The apparatus of claim 19, wherein the switch provides the signal in the second signal format if presence of the second signal is not detected after isolating the second network. 21. An apparatus comprising: means for receiving a signal from a media service provider; means for determining whether the signal is to be delivered using a first signal format or a second signal format; and means for providing the signal to a second device in the first signal format if it is determined that the signal is to be delivered using the first signal format and providing the signal to the second device in the second signal format if it is determined that the signal is to be delivered using the second signal format. 22. The apparatus of claim 21, wherein the first signal format is at least one of an analog signal format and an RF modulated analog signal. 23. The apparatus of claim 22, wherein the second signal format is at least one of a digital signal, an RF modulated Internet Protocol (IP) signal and a multimedia over cable alliance signal. 24. The apparatus of claim 23, wherein the second signal format uses a signal frequency range that is different than the first signal format. 25. 
The apparatus of claim 23, wherein the second signal format uses a signal frequency range that is substantially the same as the first signal format. 26. The apparatus of claim 21, wherein the means for providing further includes means for providing the signal in the first signal format to a television display device and providing the signal in the second signal format to a set top box. 27. The apparatus of claim 26, wherein the means for providing the signal in the second signal format to the set top box further includes means for providing the signal on at least one of a first network and a second network, the first network being different from the second network. 28. The apparatus of claim 21, wherein the means for receiving receives the signal over a first network and the means for providing provides the signal over a second network. 29. The apparatus of claim 28, wherein the means for determining further includes: means for detecting a presence of a second signal in the second signal format; means for isolating the second network; and wherein the means for providing the signal further includes means for providing the signal in the first signal format if the presence of the second signal in the second signal format is detected after isolating the second network. 30. The apparatus of claim 29, wherein the means for providing the signal further includes means for providing the signal in the second signal format if the presence of the second signal in the second signal format is not detected after isolating the second network.
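The determination step in claims 9-10 (isolate the second network, then re-check for a second-format signal) can be sketched as a small decision function. The function name and the format labels below are illustrative assumptions, not the patent's terminology:

```python
# Hedged sketch of the format-selection logic in claims 9-10: after isolating
# the second network, check whether a second-format signal is still present on
# the coax. If it is, another source occupies those frequencies, so deliver in
# the first (analog/RF) format; otherwise deliver in the second (IP/MoCA) format.

FIRST_FORMAT = "RF-modulated analog"
SECOND_FORMAT = "RF-modulated IP (MoCA)"

def choose_format(detect_second_signal, isolate_second_network):
    """detect_second_signal() -> bool reports a second-format carrier;
    isolate_second_network() disconnects the second network before the
    re-check, mirroring the claim 9 ordering."""
    isolate_second_network()
    if detect_second_signal():
        # Second-format signal persists with the second network isolated.
        return FIRST_FORMAT
    return SECOND_FORMAT

# Example: a residual second-format carrier forces the analog path.
print(choose_format(lambda: True, lambda: None))
print(choose_format(lambda: False, lambda: None))
```

The design point the claims encode is that isolation turns detection into a disambiguation test: only a signal that survives isolation indicates a conflict requiring the first format.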
2,400
7,905
14,581,651
2,433
Particular embodiments described herein provide for an electronic device that can be configured to identify a digital certificate associated with data and assign a reputation to the digital certificate, where the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist and the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist.
1. At least one computer-readable medium comprising one or more instructions that when executed by at least one processor, cause the processor to: identify a digital certificate associated with data; and assign a reputation to the digital certificate, wherein the reputation includes an indication if the data is trusted or untrusted. 2. The at least one computer-readable medium of claim 1, wherein the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist. 3. The at least one computer-readable medium of claim 1, wherein the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist. 4. The at least one computer-readable medium of claim 1, further comprising one or more instructions that when executed by the at least one processor, further cause the processor to: determine a certificate authority that created the digital certificate; and assign the reputation to the digital certificate based at least in part on a reputation of the certificate authority. 5. The at least one computer-readable medium of claim 4, wherein the digital certificate is classified as trusted if the certificate authority is included in an entry in a whitelist. 6. The at least one computer-readable medium of claim 4, wherein the digital certificate is classified as untrusted if the certificate authority is included in an entry in a blacklist. 7. The at least one computer-readable medium of claim 4, wherein the reputation of the certificate authority is determined using an entry in a certificate authority reputation database. 8. 
The at least one computer-readable medium of claim 1, further comprising one or more instructions that when executed by the at least one processor, further cause the processor to: identify more than one digital certificate associated with the data; determine a reputation for each of the more than one digital certificates; and assign the reputation to the digital certificate, based on the reputation of each of the more than one digital certificates. 9. An apparatus comprising: a digital certificate reputation module configured to: identify a digital certificate associated with data; and assign a reputation to the digital certificate, wherein the reputation includes an indication if the data is trusted or untrusted. 10. The apparatus of claim 9, wherein the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist. 11. The apparatus of claim 9, wherein the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist. 12. The apparatus of claim 9, wherein the digital certificate reputation module is further configured to: determine a certificate authority that created the digital certificate; and assign the reputation to the digital certificate based at least in part on a reputation of the certificate authority. 13. The apparatus of claim 12, wherein the digital certificate is classified as trusted if the certificate authority is included in an entry in a whitelist. 14. The apparatus of claim 12, wherein the digital certificate is classified as untrusted if the certificate authority is included in an entry in a blacklist. 15. The apparatus of claim 12, wherein the reputation of the certificate authority is determined using an entry in a certificate authority reputation database. 16. 
The apparatus of claim 9, wherein the digital certificate reputation module is further configured to: identify more than one digital certificate associated with the data; determine a reputation for each of the more than one digital certificates; and assign the reputation to the digital certificate, based on the reputation of each of the more than one digital certificates. 17. A method comprising: identifying a digital certificate associated with data; and assigning a reputation to the digital certificate, wherein the reputation includes an indication if the data is trusted or untrusted. 18. The method of claim 17, wherein the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist. 19. The method of claim 17, wherein the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist. 20. The method of claim 17, further comprising: determining a certificate authority that created the digital certificate; and assigning the reputation to the digital certificate based at least in part on a reputation of the certificate authority. 21. The method of claim 20, wherein the digital certificate is classified as trusted if the certificate authority is included in an entry in a whitelist. 22. The method of claim 20, wherein the digital certificate is classified as untrusted if the certificate authority is included in an entry in a blacklist. 23. The method of claim 17, further comprising: identifying more than one digital certificate associated with the data; determining a reputation for each of the more than one digital certificates; and assigning the reputation to the digital certificate, based on the reputation of each of the more than one digital certificates. 24. 
A system for determining the reputation of a digital certificate, the system comprising: a digital certificate reputation module configured for: identifying a digital certificate associated with data; and assigning a reputation to the digital certificate, wherein the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist and the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist. 25. The system of claim 24, wherein the system is configured for: determining a certificate authority that created the digital certificate; and assigning the reputation to the digital certificate based at least in part on a reputation of the certificate authority.
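The whitelist/blacklist classification in claims 17-22 reduces to a lookup over the certificate and its issuing certificate authority. Below is a minimal sketch under assumed names; the "unknown" outcome for certificates in neither list is an assumption added for completeness, not something the claims recite:

```python
# Sketch of the reputation assignment in claims 17-22: both the certificate
# and its certificate authority are checked against a blacklist (untrusted)
# and a whitelist (trusted). Names and structures are illustrative.

def assign_reputation(cert, issuer, whitelist, blacklist):
    """Return 'trusted', 'untrusted', or 'unknown' for a digital certificate."""
    if cert in blacklist or issuer in blacklist:
        return "untrusted"   # claim 19 (certificate) / claim 22 (authority)
    if cert in whitelist or issuer in whitelist:
        return "trusted"     # claim 18 (certificate) / claim 21 (authority)
    return "unknown"         # assumption: neither list has an entry

whitelist = {"GoodCA", "cert-123"}
blacklist = {"EvilCA"}
print(assign_reputation("cert-123", "SomeCA", whitelist, blacklist))
print(assign_reputation("cert-999", "EvilCA", whitelist, blacklist))
```

Checking the blacklist before the whitelist means a bad issuer taints an otherwise whitelisted certificate, which matches the spirit of claim 20's authority-based assignment.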
The apparatus of claim 9, wherein the digital certificate reputation module is further configured to: identify more than one digital certificate associated with the data; determine a reputation for each of the more than one digital certificates; and assign the reputation to the digital certificate, based on the reputation of each of the more than one digital certificates. 17. A method comprising: identifying a digital certificate associated with data; and assigning a reputation to the digital certificate, wherein the reputation includes an indication if the data is trusted or untrusted. 18. The method of claim 17, wherein the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist. 19. The method of claim 17, wherein the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist. 20. The method of claim 17, further comprising: determining a certificate authority that created the digital certificate; and assigning the reputation to the digital certificate based at least in part on a reputation of the certificate authority. 21. The method of claim 20, wherein the digital certificate is classified as trusted if the certificate authority is included in an entry in a whitelist. 22. The method of claim 20, wherein the digital certificate is classified as untrusted if the certificate authority is included in an entry in a blacklist. 23. The method of claim 17, further comprising: identifying more than one digital certificate associated with the data; determining a reputation for each of the more than one digital certificates; and assigning the reputation to the digital certificate, based on the reputation of each of the more than one digital certificates. 24. 
A system for determining the reputation of a digital certificate, the system comprising: a digital certificate reputation module configured for: identifying a digital certificate associated with data; and assigning a reputation to the digital certificate, wherein the digital certificate is classified as trusted if the digital certificate is included in an entry in a whitelist and the digital certificate is classified as untrusted if the digital certificate is included in an entry in a blacklist. 25. The system of claim 24, wherein the system is configured for: determining a certificate authority that created the digital certificate; and assigning the reputation to the digital certificate based at least in part on a reputation of the certificate authority.
2,400
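The whitelist/blacklist classification recited in the digital-certificate claims above, including the fallback to the reputation of the certificate authority that created the certificate (claims 4-7), can be sketched roughly as follows. This is an illustrative sketch only; the function name, the sample lists, and the string labels are assumptions, not taken from the patent text.

```python
# Illustrative sketch of the claimed reputation logic; all names and
# sample data here are assumptions, not from the patent.
CERT_WHITELIST = {"cert-good"}
CERT_BLACKLIST = {"cert-bad"}
# Stand-in for the certificate authority reputation database of claim 7.
CA_REPUTATION = {"ca-trusted": "trusted", "ca-rogue": "untrusted"}

def assign_reputation(cert_id, ca_id=None):
    """Classify a certificate: a whitelist entry wins, then a blacklist
    entry; otherwise fall back to the reputation of the issuing CA."""
    if cert_id in CERT_WHITELIST:
        return "trusted"
    if cert_id in CERT_BLACKLIST:
        return "untrusted"
    # Claims 4-7: derive the reputation from the certificate authority.
    return CA_REPUTATION.get(ca_id, "unknown")
```

For example, a certificate absent from both lists inherits its CA's reputation, so `assign_reputation("cert-x", "ca-rogue")` is classified as untrusted.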
7,906
7,906
14,849,598
2,486
A system that includes a ground unit, an aerial unit and a connecting element that is arranged to connect the ground unit to the aerial unit; wherein the ground unit comprises a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; wherein at least one out of the ground unit and the aerial unit comprises a controller; wherein the controller is configured to determine the manner in which the aerial unit operates and is further configured to assist in a controlling of an aerial monitoring device that differs from the aerial unit.
1. A system, comprising a ground unit, an aerial unit and a connecting element that is arranged to connect the ground unit to the aerial unit; wherein the ground unit comprises a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; wherein at least one out of the ground unit and the aerial unit comprises a controller; and wherein the controller is configured to determine a manner in which the aerial unit operates and is further configured to assist in a controlling of an aerial monitoring device that differs from the aerial unit. 2. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device that differs from the aerial unit in response to instructions fed to the system by a user. 3. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device in an autonomous manner. 4. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device by sending, to the aerial monitoring device, target information about a location of a target to be monitored by the aerial monitoring device. 5. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device by sending, to the aerial monitoring device, control information that is relayed by the aerial unit. 6. The system according to claim 1 wherein the controller is configured to receive monitoring information from the aerial monitoring device and to control a display to a user of the monitoring information. 7. 
The system according to claim 1 comprising a display that is configured to display to a user monitoring information from the aerial monitoring device and monitoring information from the aerial unit. 8. The system according to claim 1 wherein the controller is configured to assist in a controlling of at least one additional aerial monitoring device. 9. The system according to claim 1 wherein the aerial monitoring device is a satellite. 10. The system according to claim 1 wherein the aerial monitoring device is a blimp. 11. A system, comprising a ground unit, an aerial unit and a connecting element that is arranged to connect the ground unit to the aerial unit; wherein the ground unit comprises a first controller and a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; wherein the first controller is configured to determine a manner in which the aerial unit operates; and wherein the aerial unit comprises a second controller that is configured to assist in a controlling of an aerial monitoring device that differs from the aerial unit. 12. The system according to claim 11 wherein the aerial monitoring device differs from the aerial unit by at least one out of resolution and spectrum. 13. 
A method for controlling an aerial unit and an aerial monitoring device, the method comprising: operating the aerial unit and a ground unit to provide an image of an area; wherein the aerial unit and the ground unit are connected to each other by a connecting element; wherein the ground unit comprises a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; selecting a target to be viewed by the aerial monitoring device; sending control information to the aerial monitoring device to enable a monitoring of a target by the aerial monitoring device; receiving monitoring information received from the aerial monitoring device; and displaying the monitoring information. 14. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device in response to instructions fed by a user. 15. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device in an autonomous manner. 16. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device by sending to the aerial monitoring device target information about a location of a target to be monitored by the aerial monitoring device. 17. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device by sending to the aerial monitoring device control information that is relayed by the aerial unit. 18. The method according to claim 13 comprising assisting in a controlling of at least one additional aerial monitoring device that differs from the aerial unit. 19. The method according to claim 13 wherein the aerial monitoring device is a satellite. 20. The method according to claim 13 wherein the aerial monitoring device is a blimp. 21. 
The method according to claim 13 wherein the displaying the monitoring information comprises displaying monitoring information received from the aerial unit at a first window and displaying monitoring information received from the aerial monitoring device at a second window.
A system that includes a ground unit, an aerial unit and a connecting element that is arranged to connect the ground unit to the aerial unit; wherein the ground unit comprises a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; wherein at least one out of the ground unit and the aerial unit comprises a controller; wherein the controller is configured to determine the manner in which the aerial unit operates and is further configured to assist in a controlling of an aerial monitoring device that differs from the aerial unit.1. A system, comprising a ground unit, an aerial unit and a connecting element that is arranged to connect the ground unit to the aerial unit; wherein the ground unit comprises a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; wherein at least one out of the ground unit and the aerial unit comprises a controller; and wherein the controller is configured to determine a manner in which the aerial unit operates and is further configured to assist in a controlling of an aerial monitoring device that differs from the aerial unit. 2. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device that differs from the aerial unit in response to instructions fed to the system by a user. 3. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device in an autonomous manner. 4. 
The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device by sending, to the aerial monitoring device, target information about a location of a target to be monitored by the aerial monitoring device. 5. The system according to claim 1 wherein the controller is configured to assist in the controlling of the aerial monitoring device by sending, to the aerial monitoring device, control information that is relayed by the aerial unit. 6. The system according to claim 1 wherein the controller is configured to receive monitoring information from the aerial monitoring device and to control a display to a user of the monitoring information. 7. The system according to claim 1 comprising a display that is configured to display to a user monitoring information from the aerial monitoring device and monitoring information from the aerial unit. 8. The system according to claim 1 wherein the controller is configured to assist in a controlling of at least one additional aerial monitoring device. 9. The system according to claim 1 wherein the aerial monitoring device is a satellite. 10. The system according to claim 1 wherein the aerial monitoring device is a blimp. 11. A system, comprising a ground unit, an aerial unit and a connecting element that is arranged to connect the ground unit to the aerial unit; wherein the ground unit comprises a first controller and a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; wherein the first controller is configured to determine a manner in which the aerial unit operates; and wherein the aerial unit comprises a second controller that is configured to assist in a controlling of an aerial monitoring device that differs from the aerial unit. 12. 
The system according to claim 11 wherein the aerial monitoring device differs from the aerial unit by at least one out of resolution and spectrum. 13. A method for controlling an aerial unit and an aerial monitoring device, the method comprising: operating the aerial unit and a ground unit to provide an image of an area; wherein the aerial unit and the ground unit are connected to each other by a connecting element; wherein the ground unit comprises a connecting element manipulator that is arranged to alter an effective length of the connecting element; wherein the effective length of the connecting element defines a distance between the ground unit and the aerial unit; selecting a target to be viewed by the aerial monitoring device; sending control information to the aerial monitoring device to enable a monitoring of a target by the aerial monitoring device; receiving monitoring information received from the aerial monitoring device; and displaying the monitoring information. 14. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device in response to instructions fed by a user. 15. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device in an autonomous manner. 16. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device by sending to the aerial monitoring device target information about a location of a target to be monitored by the aerial monitoring device. 17. The method according to claim 13 comprising assisting in a controlling of the aerial monitoring device by sending to the aerial monitoring device control information that is relayed by the aerial unit. 18. The method according to claim 13 comprising assisting in a controlling of at least one additional aerial monitoring device that differs from the aerial unit. 19. The method according to claim 13 wherein the aerial monitoring device is a satellite. 20. 
The method according to claim 13 wherein the aerial monitoring device is a blimp. 21. The method according to claim 13 wherein the displaying the monitoring information comprises displaying monitoring information received from the aerial unit at a first window and displaying monitoring information received from the aerial monitoring device at a second window.
2,400
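The tethered-system claims above turn on two mechanisms: a connecting-element manipulator whose effective tether length defines the ground-to-aerial-unit distance, and a controller that assists a separate aerial monitoring device by sending it target information (claim 4). A minimal sketch, assuming a winch-style manipulator and a plain dict as a stand-in for the control link; all names here are illustrative, not from the patent:

```python
class GroundUnit:
    """Sketch of the claimed ground unit: the connecting-element
    manipulator (a winch, say) alters the tether's effective length,
    which per the claims defines the distance to the aerial unit."""

    def __init__(self, effective_length=0.0):
        self.effective_length = effective_length

    def alter_effective_length(self, delta):
        # Reel the connecting element in (delta < 0) or out (delta > 0);
        # the length cannot go negative.
        self.effective_length = max(0.0, self.effective_length + delta)

    def distance_to_aerial_unit(self):
        # Claimed relationship: effective length defines the distance.
        return self.effective_length

def assist_monitoring(target_location):
    """Claim 4: the controller assists the separate aerial monitoring
    device by sending it target information; a dict stands in for the
    real uplink message."""
    return {"type": "target_info", "location": target_location}
```

The split-controller variant of claim 11 would simply move `assist_monitoring` onto the aerial unit's second controller; the length/distance relationship is unchanged.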
7,907
7,907
14,183,689
2,465
A mobile device is configurable to accommodate multiple personas and associated profiles. Once the mobile device is triggered to configure itself with a selected persona/profile, no more information is required by a user (the provider of the trigger) of the mobile device. Each persona/profile is autonomous from any other persona/profile with which the mobile device can be configured. A persona is indicative of a personality, role, or identity portrayed by the device, such as a phone number, for example. A profile is indicative of functions associated with a persona. The mobile device is easily reconfigured via simple UI operations.
1. A first apparatus comprising: a processor; and memory coupled to the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising: receiving a call; determining an intended recipient of the call; determining that the first apparatus is not configured with a persona of the intended recipient; determining, in accordance with a priority order, that a second apparatus is configured with the persona of the intended recipient; and performing one of: transferring the call to the second apparatus; or answering the call and bridging the second apparatus into the call. 2. The first apparatus of claim 1, wherein transferring the call to the second apparatus is accomplished via at least one of: depressing a button on the first apparatus; selecting a soft key on the first apparatus; providing a voice command to the first apparatus; depressing a button on the second apparatus; selecting a soft key on the second apparatus; or providing a voice command to the second apparatus. 3. The first apparatus of claim 1, wherein bridging the call to the second apparatus is accomplished via at least one of: depressing a button on the first apparatus; selecting a soft key on the first apparatus; providing a voice command to the first apparatus; depressing a button on the second apparatus; selecting a soft key on the second apparatus; or providing a voice command to the second apparatus. 4. The first apparatus of claim 1, wherein: the persona is indicative of a portrayed identity. 5. The first apparatus of claim 1, wherein: the first apparatus is configurable to provide characteristics of the persona via at least one of: depressing a button on the first apparatus; selecting a soft key on the first apparatus; or providing a voice command to the first apparatus. 6. The first apparatus of claim 1, wherein the persona comprises a characteristic indicative of an identity of a user of the second apparatus. 7. 
The first apparatus of claim 1, wherein: a call to a phone number associated with the persona is provided to the first apparatus based on a first priority order; and a call to a phone number associated with the second apparatus is provided to the second apparatus based on a second priority order. 8. An apparatus comprising: a processor; and memory coupled to the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising: receiving a call; determining an intended recipient of the call; determining a persona of the intended recipient, wherein: the persona has an associated phone number; and receipt of the call is a result of a predetermined priority order for calling apparatuses; determining a profile of functions associated with the persona of the intended recipient; configuring the apparatus to provide characteristics of the persona of the intended recipient; and configuring the apparatus to provide functionality of the determined profile. 9. The apparatus of claim 8, wherein: the determined persona is indicative of an identity portrayed by the apparatus. 10. The apparatus of claim 8, wherein: the apparatus is configurable to provide characteristics of the selected persona and provide functionality of the selected profile via at least one of: depressing a button on the apparatus; selecting a soft key on the apparatus; or providing a voice command to the apparatus. 11. The apparatus of claim 8, wherein the apparatus is configurable to provide characteristics of the determined persona and provide functionality of the determined profile via entering a short code on the apparatus. 12. The apparatus of claim 8, wherein the determined persona comprises a characteristic indicative of an identity of at least one of: a user of the apparatus; or a telephone number associated with a user of the apparatus. 13. 
The apparatus of claim 8, wherein the determined persona comprises a characteristic indicative of one of: a work persona of a user of the apparatus, or a personal persona of the user of the apparatus. 14. The apparatus of claim 8, wherein the determined persona is indicative of a service provider for the apparatus. 15. The apparatus of claim 8, wherein: characteristics of the selected persona are stored on the apparatus; and functions of the determined profile are stored on the apparatus. 16. The apparatus of claim 8, wherein: the apparatus obtains characteristics of the determined persona via a network; and the apparatus obtains functions of the determined profile via a network. 17. The apparatus of claim 8, the operations further comprising: configuring the apparatus with a predetermined persona and associated profile in accordance with a location of the apparatus. 18. A computer readable storage medium that is not a propagating signal, the computer readable storage medium comprising executable instructions that when executed by a processor cause the processor to effectuate operations comprising: receiving a call on a first mobile device; determining, via the first mobile device, an intended recipient of the call; determining that the first mobile device is not configured with a persona of the intended recipient; determining, via the first mobile device, that a second mobile device is configured with the persona of the intended recipient and was most recently configured, as compared to a plurality of communications devices, with the persona of the intended recipient; and performing, via the first mobile device, one of: transferring the call to the second mobile device; or answering the call on the first mobile device and bridging the second mobile device into the call. 19. 
The computer readable storage medium of claim 18, wherein transferring the call to the second mobile device is accomplished via at least one of: depressing a button on the first mobile device; selecting a soft key on the first mobile device; providing a voice command to the first mobile device; depressing a button on the second mobile device; selecting a soft key on the second mobile device; or providing a voice command to the second mobile device. 20. The computer readable storage medium of claim 18, wherein bridging the call to the second mobile device is accomplished via at least one of: depressing a button on the first mobile device; selecting a soft key on the first mobile device; providing a voice command to the first mobile device; depressing a button on the second mobile device; selecting a soft key on the second mobile device; or providing a voice command to the second mobile device.
A mobile device is configurable to accommodate multiple personas and associated profiles. Once the mobile device is triggered to configure itself with a selected persona/profile, no more information is required by a user (the provider of the trigger) of the mobile device. Each persona/profile is autonomous from any other persona/profile with which the mobile device can be configured. A persona is indicative of a personality, role, or identity portrayed by the device, such as a phone number, for example. A profile is indicative of functions associated with a persona. The mobile device is easily reconfigured via simple UI operations.1. A first apparatus comprising: a processor; and memory coupled to the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising: receiving a call; determining an intended recipient of the call; determining that the first apparatus is not configured with a persona of the intended recipient; determining, in accordance with a priority order, that a second apparatus is configured with the persona of the intended recipient; and performing one of: transferring the call to the second apparatus; or answering the call and bridging the second apparatus into the call. 2. The first apparatus of claim 1, wherein transferring the call to the second apparatus is accomplished via at least one of: depressing a button on the first apparatus; selecting a soft key on the first apparatus; providing a voice command to the first apparatus; depressing a button on the second apparatus; selecting a soft key on the second apparatus; or providing a voice command to the second apparatus. 3. 
The first apparatus of claim 1, wherein bridging the call to the second apparatus is accomplished via at least one of: depressing a button on the first apparatus; selecting a soft key on the first apparatus; providing a voice command to the first apparatus; depressing a button on the second apparatus; selecting a soft key on the second apparatus; or providing a voice command to the second apparatus. 4. The first apparatus of claim 1, wherein: the persona is indicative of a portrayed identity. 5. The first apparatus of claim 1, wherein: the first apparatus is configurable to provide characteristics of the persona via at least one of: depressing a button on the first apparatus; selecting a soft key on the first apparatus; or providing a voice command to the first apparatus. 6. The first apparatus of claim 1, wherein the persona comprises a characteristic indicative of an identity of a user of the second apparatus. 7. The first apparatus of claim 1, wherein: a call to a phone number associated with the persona is provided to the first apparatus based on a first priority order; and a call to a phone number associated with the second apparatus is provided to the second apparatus based on a second priority order. 8. An apparatus comprising: a processor; and memory coupled to the processor, the memory comprising executable instructions that when executed by the processor cause the processor to effectuate operations comprising: receiving a call; determining an intended recipient of the call; determining a persona of the intended recipient, wherein: the persona has an associated phone number; and receipt of the call is a result of a predetermined priority order for calling apparatuses; determining a profile of functions associated with the persona of the intended recipient; configuring the apparatus to provide characteristics of the persona of the intended recipient; and configuring the apparatus to provide functionality of the determined profile. 9. 
The apparatus of claim 8, wherein: the determined persona is indicative of an identity portrayed by the apparatus. 10. The apparatus of claim 8, wherein: the apparatus is configurable to provide characteristics of the selected persona and provide functionality of the selected profile via at least one of: depressing a button on the apparatus; selecting a soft key on the apparatus; or providing a voice command to the apparatus. 11. The apparatus of claim 8, wherein the apparatus is configurable to provide characteristics of the determined persona and provide functionality of the determined profile via entering a short code on the apparatus. 12. The apparatus of claim 8, wherein the determined persona comprises a characteristic indicative of an identity of at least one of: a user of the apparatus; or a telephone number associated with a user of the apparatus. 13. The apparatus of claim 8, wherein the determined persona comprises a characteristic indicative of one of: a work persona of a user of the apparatus, or a personal persona of the user of the apparatus. 14. The apparatus of claim 8, wherein the determined persona is indicative of a service provider for the apparatus. 15. The apparatus of claim 8, wherein: characteristics of the selected persona are stored on the apparatus; and functions of the determined profile are stored on the apparatus. 16. The apparatus of claim 8, wherein: the apparatus obtains characteristics of the determined persona via a network; and the apparatus obtains functions of the determined profile via a network. 17. The apparatus of claim 8, the operations further comprising: configuring the apparatus with a predetermined persona and associated profile in accordance with a location of the apparatus. 18. 
A computer readable storage medium that is not a propagating signal, the computer readable storage medium comprising executable instructions that when executed by a processor cause the processor to effectuate operations comprising: receiving a call on a first mobile device; determining, via the first mobile device, an intended recipient of the call; determining that the first mobile device is not configured with a persona of the intended recipient; determining, via the first mobile device, that a second mobile device is configured with the persona of the intended recipient and was most recently configured, as compared to a plurality of communications devices, with the persona of the intended recipient; and performing, via the first mobile device, one of: transferring the call to the second mobile device; or answering the call on the first mobile device and bridging the second mobile device into the call. 19. The computer readable storage medium of claim 18, wherein transferring the call to the second mobile device is accomplished via at least one of: depressing a button on the first mobile device; selecting a soft key on the first mobile device; providing a voice command to the first mobile device; depressing a button on the second mobile device; selecting a soft key on the second mobile device; or providing a voice command to the second mobile device. 20. The computer readable storage medium of claim 18, wherein bridging the call to the second mobile device is accomplished via at least one of: depressing a button on the first mobile device; selecting a soft key on the first mobile device; providing a voice command to the first mobile device; depressing a button on the second mobile device; selecting a soft key on the second mobile device; or providing a voice command to the second mobile device.
2,400
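The core of claim 1 in the persona record above is a priority-ordered lookup: when the receiving device is not configured with the intended recipient's persona, find (in priority order) a device that is, then transfer the call or bridge that device in. A minimal sketch, assuming devices are keyed by name and personas are plain strings; all names are illustrative, not from the patent:

```python
def route_call(intended_persona, device_personas, priority_order):
    """Return the first device in priority_order that is configured
    with the intended recipient's persona (claim 1); the caller then
    transfers the call to it or bridges it into the call.

    device_personas: dict mapping device name -> set of configured
    personas; priority_order: list of device names, highest first.
    """
    for device in priority_order:
        if intended_persona in device_personas.get(device, set()):
            return device
    return None  # no device carries the persona; handle per policy
```

For example, with `{"phone_a": {"work"}, "phone_b": {"work", "personal"}}` and priority `["phone_a", "phone_b"]`, a call for the "personal" persona routes to `phone_b`, while a "work" call stays on the higher-priority `phone_a`.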
7,908
7,908
14,706,963
2,434
Integrated techniques for computer bot detection and human user based access include determining if a client device has been identified as a computer bot based upon client information extracted from a service request and a service policy. The service policy is also utilized to determine if the client device is operating under control of a human user or operating autonomously based upon matching a captcha response to an expected captcha response.
1. A method comprising: receiving, by a service gateway, a service request from a client device; extracting, by the service gateway, client information from the received service request; determining, by the service gateway, if the client device has been identified as a computer bot based upon the client information and a service policy; selecting, by the service gateway, a captcha in response to the service request, if the client device is not a known computer bot; generating, by the service gateway, captcha instructions for the selected captcha; generating, by the service gateway, an expected captcha response for the determined captcha; sending, by the service gateway, the captcha instructions to the client device; receiving, by the service gateway, a captcha response from the client device in response to the captcha instructions; comparing, by the service gateway, the captcha response to the expected captcha response to determine based on the service policy if the client device is operating under control of a human user or operating autonomously; and sending, by the service gateway, the service request to an appropriate server device if the client device is operating under control of a human user and the client device is not a known computer bot. 2. The method according to claim 1, farther comprising declining, by the service gateway, the service request if the client device is a known computer bot. 3. The method according to claim 1, further comprising handling, by the service gateway, the service request according to the service policy based upon whether the client device is a known computer bot or not 4. The method according to claim 3, wherein the handling is specified by a web access firewall policy of the service policy. 5. The method according to claim 1, further comprising declining, by the service gateway, the service request if the client device is operating autonomously, 6. 
The method according to claim 1, further comprising handling, by the service gateway, the service request according to the service policy based upon whether the client device is operating under control of a human user or operating autonomously. 7. The method according to claim 6, wherein the handling is specified by a web access firewall policy of the service policy. 8. The method according to claim 1, further comprising generating, by the service gateway, the expected captcha response including expected timing information for the determined captcha; receiving, by the service gateway, the captcha response including timing information from the client device in response to the captcha instructions; and comparing, by the service gateway, the captcha response including timing information to the expected captcha response including timing information to determine, based on the service policy, if the client device is operating under control of a human user or operating autonomously. 9. A computing device for executing computing device executable instructions stored in a computing storage module that when executed by a processor module of the computing device perform a method comprising: receiving, by a service gateway, a service request from a client device; extracting, by the service gateway, client information from the received service request; determining, by the service gateway, if the client device has been identified as a computer bot based upon the client information and a service policy; selecting, by the service gateway, a captcha in response to the service request, if the client device is not a known computer bot; generating, by the service gateway, captcha instructions for the selected captcha; generating, by the service gateway, an expected captcha response for the determined captcha; sending, by the service gateway, the captcha instructions to the client device; receiving, by the service gateway, a captcha response from the client device in response to the 
captcha instructions; comparing, by the service gateway, the captcha response to the expected captcha response to determine, based on the service policy, if the client device is operating under control of a human user or operating autonomously; and sending, by the service gateway, the service request to an appropriate server device if the client device is operating under control of a human user and the client device is not a known computer bot. 10. The method according to claim 9, further comprising declining, by the service gateway, the service request if the client device is a known computer bot. 11. The method according to claim 9, further comprising handling, by the service gateway, the service request according to the service policy based upon whether the client device is a known computer bot or not. 12. The method according to claim 11, wherein the handling is specified by a web access firewall policy of the service policy. 13. The method according to claim 9, further comprising declining, by the service gateway, the service request if the client device is operating autonomously. 14. The method according to claim 9, further comprising handling, by the service gateway, the service request according to the service policy based upon whether the client device is operating under control of a human user or operating autonomously. 15. The method according to claim 14, wherein the handling is specified by a web access firewall policy of the service policy. 16. 
The method according to claim 9, further comprising: generating, by the service gateway, the expected captcha response including expected timing information for the determined captcha; receiving, by the service gateway, the captcha response including timing information from the client device in response to the captcha instructions; and comparing, by the service gateway, the captcha response including timing information to the expected captcha response including timing information to determine, based on the service policy, if the client device is operating under control of a human user or operating autonomously. 17. The method according to claim 9, wherein the information concerning whether the client device has been identified as a computer bot is stored in the computing storage module of the computing device. 18. The method according to claim 9, wherein the service policy is stored in the computing storage module of the computing device. 19. The method according to claim 9, wherein the captcha is selected from a captcha database stored in the computing storage module of the computing device.
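The captcha-gating flow recited in claims 1, 2, and 8 can be sketched in a few lines. This is an illustrative sketch only: the known-bot list, the minimum solve time, and all names (`KNOWN_BOTS`, `handle_request`, etc.) are assumptions for illustration, not anything defined by the claims.

```python
# Hypothetical sketch of a service gateway's captcha check (claims 1, 2, 8).
# The service policy is reduced to two illustrative rules: a set of known
# bots and a minimum time a human would need to solve the captcha.

KNOWN_BOTS = {"203.0.113.7"}   # client information flagged by the policy
MIN_SOLVE_SECONDS = 2.0        # claim 8: expected timing information

CAPTCHAS = {"c1": ("3 + 4 = ?", "7")}

def issue_captcha():
    """Select a captcha and generate its expected response."""
    question, expected = CAPTCHAS["c1"]
    return question, expected

def handle_request(client_ip, respond):
    """respond(question) -> (answer, elapsed_seconds) from the client."""
    if client_ip in KNOWN_BOTS:
        return "declined"                      # claim 2: decline known bots
    question, expected = issue_captcha()
    answer, elapsed = respond(question)
    # claim 8: compare answer AND timing against the expected response
    human = answer == expected and elapsed >= MIN_SOLVE_SECONDS
    return "forwarded" if human else "declined"
```

A request from an unknown client that answers correctly after a plausible delay is forwarded to the server; a correct answer returned too quickly is treated as autonomous and declined.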
2,400
7,909
7,909
15,443,992
2,421
Methods and apparatus for optimizing the distribution and delivery of multimedia or other content within a content-based network. In one embodiment, the network comprises a broadcast switched cable television network, which utilizes a Network Optimization Controller (NOC) that processes subscriber program viewing requests to identify options available to fulfill the request (including, e.g., the creation of one or more “microcasts” specifically targeting one or more users), and evaluate these options to determine one that optimizes network operation. The NOC makes these decisions by considering various parameters including network resource availability, type of CPE, the subscriber's targeted advertisement profile, and business rules programmed by the operator of the network.
1.-42. (canceled) 43. Computer readable apparatus comprising a non-transitory storage medium, the non-transitory medium comprising at least one computer program having a plurality of instructions configured to, when executed on a processing apparatus: based at least on a request for programming content issued via a computerized client device in data communication with a content delivery network, the computerized client device having at least one subscriber associated therewith, identify one or more variables associated with the at least one subscriber; analyze two or more possible options for service of the request for the digitally rendered program, the analysis based at least in part on the identified one or more variables; process at least one of a plurality of digital program streams, the at least one of the plurality of digital program streams configured to carry the programming content, the processing of the at least one of the plurality of digital program streams comprising segmentation of the at least one digital program stream at one or more boundaries; establish a data session with the computerized client device, the session established according to a networking protocol; and insert digitally rendered advertising or promotional content within the at least one digital program stream at the one or more boundaries, the at least one digital program stream being deliverable via the data session and according to one of the two or more possible options; wherein the inserted digitally rendered advertising or promotional content and the one of the two or more possible options are each selected based at least in part on the analysis. 44. 
The apparatus of claim 43, wherein the plurality of instructions are further configured to, when executed: switch the computerized client device onto the at least one digital program stream in order to allow subsequent removal of a first one of the plurality of digital program streams to which the computerized client device is currently tuned, the switch based at least on the analysis; and wherein the analysis comprises an evaluation of at least one metric, the at least one metric determined based at least in part on a correlation of: (i) one or more first values of demographic variables associated with the at least one subscriber associated with the computerized client device, to (ii) one or more second values of the demographic variables associated with the at least one of the plurality of digital program streams. 45. The apparatus of claim 44, wherein the switch is delayed by a prescribed amount of time so as to allow for processing to occur before delivery of the at least one digital program stream, the processing comprising processing of the at least one digital program stream to encode it according to an encoding format different than a current encoding format thereof. 46. The apparatus of claim 45, wherein the processing of the at least one digital program stream to encode it according to an encoding format different than a current encoding format thereof is based at least in part on data relating to the computerized client device relating to its decoding format capabilities. 47. 
The apparatus of claim 43, wherein the two or more possible options for the service of the request for the digitally rendered program comprise: (i) creation of a new digital program stream having content at least partly determined based on a geographic region associated with at least one subscriber, and causation of the computerized client device to tune thereto; and (ii) causation of the computerized client device to tune to a pre-existing digital program stream having predetermined content not particularly selected for users of the geographic region associated with the at least one subscriber. 48. The apparatus of claim 43, wherein the plurality of instructions are further configured to, when executed: collect information relating to historical tuning activity of the computerized client device; generate a profile for the at least one subscriber, the profile comprising: (i) data representative of the identified variables, and (ii) the collected information; and store the profile in a database; wherein information enabling specific identification of the at least one subscriber is removed prior to the storage. 49. The apparatus of claim 43, wherein the segmentation further comprises segmentation of the programming content into one or more portions, wherein the segmentation enables the at least one subscriber to, via at least an application computer program accessible to the subscriber, tag individual ones of the one or more portions, and share individual ones of the one or more portions with other computerized client devices in data communication with the content delivery network. 50. The apparatus of claim 43, wherein the plurality of instructions are further configured to, when executed: provide an option to the at least one subscriber via a display element rendered on a display device associated with the computerized client device, the option enabling the at least one subscriber to bypass playback of the digitally rendered advertising or promotional content. 51. 
Computerized network apparatus configured for use in a content distribution network, the computerized network apparatus comprising: server apparatus comprising: processor apparatus; first network interface apparatus in data communication with the processor apparatus configured to at least receive data representative of a request for delivery of digitally rendered programming content, the request originating from a computerized client device of the content distribution network; and storage apparatus in data communication with the processor apparatus, the storage apparatus comprising at least one computer program configured to, when executed on the processor apparatus: based on the request received via the first network interface apparatus, evaluate at least one metric, the at least one metric determined based at least in part on data specific to a user of the content distribution network, the user associated with the computerized client device, the data relating to one or more demographic variables or psychographic variables; analyze two or more possible options for service of the request for the digitally rendered programming content, the analysis based at least in part on the evaluation of the at least one metric, the two or more possible options comprising: (i) creation of a new digital program stream, and (ii) utilization of an existing digital program stream to which the computerized client device is not currently tuned; based at least on the analysis, select one of the options (i) and (ii), and cause tuning of the computerized client device to the digital program stream associated with the selected option; process the digital program stream associated with the selected option, the processing comprising segmentation of the digital program stream associated with the selected option at one or more boundaries; establish at least one data delivery session with the computerized client device, the session established according to a networking protocol; select 
digitally rendered secondary content appropriate to the user; insert the selected appropriate digitally rendered secondary content within the digital program stream associated with the selected option at the one or more boundaries; and cause delivery, via the at least one data delivery session and the digital program stream associated with the selected option, of the digitally rendered programming content and the appropriate digitally rendered secondary content. 52. The computerized network apparatus of claim 51, wherein the appropriate digitally rendered secondary content is selected based at least on a geographic region associated with the user. 53. The computerized network apparatus of claim 51, wherein the analysis of the two or more possible options for servicing the request for the digitally rendered programming content comprises analysis of at least: (i) the metric, relative to a predetermined criterion, and (ii) whether an existing digital program stream to which the computerized client device is tuned before the causing tuning to the digital program stream associated with the selected option, can be removed from service within the content distribution network in order to conserve bandwidth. 54. The computerized network apparatus of claim 51, wherein the at least one computer program is further configured to, when executed on the processor apparatus: receive indication of an impending insertion opportunity within the digital program stream associated with the selected option, the indication of the impending insertion opportunity comprising an SCTE-35 compliant cue issued by a program network entity that is the source of the digital program stream associated with the selected option. 55. The computerized network apparatus of claim 54, wherein the impending insertion opportunity comprises a change of scene within the digitally rendered programming content. 56. 
The computerized network apparatus of claim 51, wherein the content distribution network comprises a managed network, and the at least one computer program is further configured to, when executed: generate impression data relating to the computerized client device, the impression data based at least on functional activity within the computerized client device during use; and cause transmission of the impression data to a computerized process operated within the managed content distribution network. 57. A computerized method of operating a content delivery network having a plurality of computerized client devices associated therewith, each of the plurality of computerized client devices configured to receive at least a respective one of a plurality of digital program streams being delivered over the content delivery network, the method comprising: analyzing at least one of the plurality of computerized client devices currently tuned to a first one of the plurality of existing digital program streams, the analyzing comprising: evaluating at least one metric determined based at least in part on a correlation of one or more first demographic data associated with the at least one computerized client device to one or more second demographic data associated with the one of the plurality of existing digital program streams other than the first one of the plurality of existing digital program streams; and based at least on the evaluating the at least one metric, determining which of: (a) creating a new digital program stream, and causing the at least one computerized client device to tune thereto; or (b) switching of the at least one computerized client device onto at least the other existing digital program stream in order to allow subsequent removal of at least the first one of the plurality of digital program streams, is optimal. 58. 
The method of claim 57, wherein the determining of which of (a) or (b) is optimal comprises at least: comparing a first score metric calculated for (a) and a second score metric calculated for (b); and selecting either (a) or (b) as optimal based on the highest score metric. 59. The method of claim 57, further comprising segmenting at a plurality of locations either (i) the other existing digital program stream, or (ii) the new digital program stream, as applicable, and wherein the plurality of locations correspond to scene changes of programming content delivered via the applicable other or new digital program stream, the scene changes of the programming content each comprising respective data indicative of one or more opportunities for insertion of digitally rendered advertising content that is not part of the other or new digital program stream as applicable. 60. The method of claim 59, wherein the segmenting occurs based at least on a temporal scheme. 61. The method of claim 57, wherein the correlation comprises anonymously identifying a subscriber associated with the at least one computerized client device, the anonymously identifying comprising using a cryptographic one-way hash and one or more encoded variables representative of the one or more first demographics. 62. 
The method of claim 57, wherein: the one or more first demographic data comprises a first plurality of data descriptive of a subscriber associated with the at least one computerized device; the one or more second demographic data comprises a second plurality of data descriptive of content elements within the one of the plurality of existing digital program streams other than the first one; and the evaluating at least one metric determined based at least in part on a correlation of one or more first demographic data associated with the at least one computerized client device to one or more second demographic data associated with the one of the plurality of existing digital program streams other than the first one comprises utilizing an algorithm for: (i) identifying one or more matches between the first plurality of data and the second plurality of data; and (ii) based at least on the identifying, generating a score representative of at least the degree of correlation. 63. The method of claim 57, wherein: the one or more first demographic data comprises a first plurality of data descriptive of a subscriber associated with the at least one computerized device; the one or more second demographic data comprises a second plurality of data descriptive of content elements within the one of the plurality of existing digital program streams other than the first one; and the evaluating at least one metric determined based at least in part on a correlation of one or more first demographic data associated with the at least one computerized client device to one or more second demographic data associated with the one of the plurality of existing digital program streams other than the first one comprises utilizing an algorithm for: (i) assessing a similarity between individual data values of the first plurality of data and the corresponding data values of the second plurality of data, the individual data values of the first and second pluralities of data corresponding to respective 
demographic attributes; and (ii) based at least on the assessing, generating a score representative of at least a degree of the correlation. 64. The method of claim 57, further comprising, upon switching of the at least one computerized client device onto at least the other existing digital program stream: determining that none of the plurality of computerized client devices remain tuned to the first one of the plurality of digital program streams; and causing removal of the first one of the plurality of program streams from delivery to a broadcast delivery switch within a delivery node of the content delivery network.
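Claims 61–63 describe anonymizing the subscriber with a cryptographic one-way hash and then scoring the correlation between subscriber demographic data and stream demographic data by counting attribute matches. A minimal sketch, in which the field names, the use of SHA-256, and the match-counting score are all illustrative assumptions rather than anything specified by the claims:

```python
import hashlib

def anonymize(subscriber_id: str) -> str:
    # claim 61: cryptographic one-way hash over an encoded subscriber
    # variable; SHA-256 is an assumed choice of hash function
    return hashlib.sha256(subscriber_id.encode()).hexdigest()

def correlation_score(subscriber: dict, stream: dict) -> float:
    # claim 62: identify matches between the two pluralities of data,
    # then generate a score representative of the degree of correlation
    keys = subscriber.keys() & stream.keys()
    if not keys:
        return 0.0
    matches = sum(subscriber[k] == stream[k] for k in keys)
    return matches / len(keys)

# Hypothetical demographic attributes for a subscriber and a stream.
viewer = {"age_band": "25-34", "region": "NE", "interest": "sports"}
stream = {"age_band": "25-34", "region": "SW", "interest": "sports"}
anon_id = anonymize("sub-001")
score = correlation_score(viewer, stream)
```

Here two of the three shared attributes match, so the stream scores 2/3 for this subscriber; the NOC of claim 57 would compare such scores across candidate streams when deciding whether to create a new stream or switch the device to an existing one.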
The computerized network apparatus of claim 51, wherein the content distribution network comprises a managed network, and the at least one computer program is further configured to, when executed: generate impression data relating to the computerized client device, the impression data based at least on functional activity within the computerized client device during use; and cause transmission of the impression data to a computerized process operated within the managed content distribution network. 57. A computerized method of operating a content delivery network having a plurality of computerized client devices associated therewith, each of the plurality of computerized client devices configured to receive at least a respective one of a plurality of digital program streams being delivered over the content delivery network, the method comprising: analyzing at least one of the plurality of computerized client devices currently tuned to a first one of the plurality of existing digital program streams, the analyzing comprising: evaluating at least one metric determined based at least in part on a correlation of one or more first demographic data associated with the at least one computerized client device to one or more second demographic data associated with the one of the plurality of existing digital program streams other than the first one of the plurality of existing digital program streams; and based at least on the evaluating the at least one metric, determining which of: (a) creating a new digital program stream, and causing the at least one computerized client device to tune thereto; or (b) switching of the at least one computerized client device onto at least the other existing digital program stream in order to allow subsequent removal of at least the first one of the plurality of digital program streams, is optimal. 58. 
The method of claim 57, wherein the determining of which of (a) or (b) is optimal comprises at least: comparing a first score metric calculated for (a) and a second score metric calculated for (b); and selecting either (a) or (b) as optimal based on the highest score metric. 59. The method of claim 57, further comprising segmenting at a plurality of locations either (i) the other existing digital program stream, or (ii) the new digital program stream, as applicable, and wherein the plurality of locations correspond to scene changes of programming content delivered via the applicable other or new digital program stream, the scene changes of the programming content each comprising respective data indicative of one or more opportunities for insertion of digitally rendered advertising content that is not part of the other or new digital program stream as applicable. 60. The method of claim 59, wherein the segmenting occurs based at least on a temporal scheme. 61. The method of claim 57, wherein the correlation comprises anonymously identifying a subscriber associated with the at least one computerized client device, the anonymously identifying comprising using a cryptographic one-way hash and one or more encoded variables representative of the one or more first demographics. 62. 
The method of claim 57, wherein: the one or more first demographic data comprises a first plurality of data descriptive of a subscriber associated with the at least one computerized device; the one or more second demographic data comprises a second plurality of data descriptive of content elements within the one of the plurality of existing digital program streams other than the first one; and the evaluating at least one metric determined based at least in part on a correlation of one or more first demographic data associated with the at least one computerized client device to one or more second demographic data associated with the one of the plurality of existing digital program streams other than the first one comprises utilizing an algorithm for: (i) identifying one or more matches between the first plurality of data and the second plurality of data; and (ii) based at least on the identifying, generating a score representative of at least the degree of correlation. 63. The method of claim 57, wherein: the one or more first demographic data comprises a first plurality of data descriptive of a subscriber associated with the at least one computerized device; the one or more second demographic data comprises a second plurality of data descriptive of content elements within the one of the plurality of existing digital program streams other than the first one; and the evaluating at least one metric determined based at least in part on a correlation of one or more first demographic data associated with the at least one computerized client device to one or more second demographic data associated with the one of the plurality of existing digital program streams other than the first one comprises utilizing an algorithm for: (i) assessing a similarity between individual data values of the first plurality of data and the corresponding data values of the second plurality of data, the individual data values of the first and second pluralities of data corresponding to respective 
demographic attributes; and (ii) based at least on the assessing, generating a score representative of at least a degree of the correlation. 64. The method of claim 57, further comprising, upon switching of the at least one computerized client device onto at least the other existing digital program stream: determining that none of the plurality of computerized client devices remain tuned to the first one of the plurality of digital program streams; and causing removal of the first one of the plurality of program streams from delivery to a broadcast delivery switch within a delivery node of the content delivery network.
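The stream-selection logic of claims 57, 58, 62, and 63 — correlate subscriber demographics with a candidate stream's demographics, generate a score, then pick the higher-scoring of creating a new stream versus switching to an existing one — can be sketched as below. This is purely an illustration of the described technique, not the applicant's implementation: the function names, the match-counting scheme, and the normalization are all assumptions.

```python
# Hypothetical sketch of the scoring logic in claims 57-58 and 62-63.
# All names and the scoring formula are illustrative assumptions.

def correlation_score(subscriber_demo: dict, stream_demo: dict) -> float:
    """Claim 62: count attribute matches between the first (subscriber)
    and second (stream) demographic data, normalized to [0, 1]."""
    matches = sum(
        1 for key, value in subscriber_demo.items()
        if stream_demo.get(key) == value
    )
    return matches / max(len(subscriber_demo), 1)

def choose_option(score_new_stream: float, score_switch: float) -> str:
    """Claim 58: compare the two score metrics and select the highest."""
    if score_new_stream > score_switch:
        return "create_new_stream"
    return "switch_existing"
```

In this sketch a tie favors switching to the existing stream, which is one plausible reading of the bandwidth-conservation goal in claim 53; the claims themselves do not specify tie-breaking.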
2,400
7,910
7,910
14,030,093
2,448
A network device, of a cloud computing environment, receives a packet destined for a virtual machine of the cloud computing environment. The packet is received from a user device and via a public network. The network device is associated with a first public Internet protocol (IP) address, and the virtual machine is associated with a second public IP address that is different than the first public IP address. The network device determines, based on the packet, the second public IP address associated with the virtual machine, and provides the packet to the virtual machine based on the second public IP address associated with the virtual machine.
1. A method, comprising: receiving, by a network device of a cloud computing environment, a packet destined for a virtual machine of the cloud computing environment, the packet being received from a user device and via a public network, the network device being associated with a first public Internet protocol (IP) address, and the virtual machine being associated with a second public IP address that is different than the first public IP address; determining, by the network device and based on the packet, the second public IP address associated with the virtual machine; and providing, by the network device, the packet to the virtual machine based on the second public IP address associated with the virtual machine. 2. The method of claim 1, where: the first public IP address is assigned to the network device from a plurality of public IP addresses, and the second public IP address is assigned to the virtual machine from the plurality of public IP addresses. 3. The method of claim 1, where the virtual machine receives the packet and processes the packet. 4. The method of claim 1, further comprising: receiving, by the network device, an additional packet from the virtual machine, the additional packet being destined for another virtual machine of the cloud computing environment, and the other virtual machine being associated with a third public IP address that is different than the first public IP address and the second public IP address. 5. The method of claim 4, where the third public IP address is assigned to the other virtual machine from a plurality of public IP addresses. 6. The method of claim 4, further comprising: determining, by the network device and based on the additional packet, the third public IP address associated with the other virtual machine; and providing, by the network device, the additional packet to the other virtual machine based on the third public IP address associated with the other virtual machine. 7. 
The method of claim 6, where the other virtual machine receives the additional packet and processes the additional packet. 8. A network device of a cloud computing environment, the network device comprising: one or more processors to: receive a packet destined for a virtual machine of the cloud computing environment, the packet being received from a user device and via a public network, the network device being associated with a first public Internet protocol (IP) address, and the virtual machine being associated with a second public IP address that is different than the first public IP address, determine, based on the packet, the second public IP address associated with the virtual machine, and provide the packet to the virtual machine based on the second public IP address associated with the virtual machine. 9. The network device of claim 8, where: the first public IP address is assigned to the network device from a plurality of public IP addresses, and the second public IP address is assigned to the virtual machine from the plurality of public IP addresses. 10. The network device of claim 8, where, when determining the second public IP address associated with the virtual machine, the one or more processors are further to: compare information in the packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the second public IP address associated with the virtual machine based on the comparison. 11. The network device of claim 8, where the one or more processors are further to: receive an additional packet from the virtual machine, the additional packet being destined for another virtual machine of the cloud computing environment, and the other virtual machine being associated with a third public IP address that is different than the first public IP address and the second public IP address. 12. 
The network device of claim 11, where the third public IP address is assigned to the other virtual machine from a plurality of public IP addresses. 13. The network device of claim 11, where the one or more processors are further to: determine, based on the additional packet, the third public IP address associated with the other virtual machine, and provide the additional packet to the other virtual machine based on the third public IP address associated with the other virtual machine. 14. The network device of claim 13, where, when determining the third public IP address associated with the other virtual machine, the one or more processors are further to: compare information in the additional packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the third public IP address associated with the other virtual machine based on the comparison. 15. A non-transitory computer-readable medium for storing instructions, the instructions comprising: one or more instructions that, when executed by a processor of a network device of a cloud computing environment, cause the processor to: receive a packet destined for a virtual machine of the cloud computing environment, the packet being received from a user device and via a public network, the network device being associated with a first public Internet protocol (IP) address, and the virtual machine being associated with a second public IP address that is different than the first public IP address, determine, based on the packet, the second public IP address associated with the virtual machine, and provide the packet to the virtual machine based on the second public IP address associated with the virtual machine. 16. 
The computer-readable medium of claim 15, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: compare information in the packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the second public IP address associated with the virtual machine based on the comparison. 17. The computer-readable medium of claim 15, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: receive an additional packet from the virtual machine, the additional packet being destined for another virtual machine of the cloud computing environment, and the other virtual machine being associated with a third public IP address that is different than the first public IP address and the second public IP address. 18. The computer-readable medium of claim 17, where the third public IP address is assigned to the other virtual machine from a plurality of public IP addresses. 19. The computer-readable medium of claim 17, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: determine, based on the additional packet, the third public IP address associated with the other virtual machine, and provide the additional packet to the other virtual machine based on the third public IP address associated with the other virtual machine. 20. 
The computer-readable medium of claim 19, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: compare information in the additional packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the third public IP address associated with the other virtual machine based on the comparison.
A network device, of a cloud computing environment, receives a packet destined for a virtual machine of the cloud computing environment. The packet is received from a user device and via a public network. The network device is associated with a first public Internet protocol (IP) address, and the virtual machine is associated with a second public IP address that is different than the first public IP address. The network device determines, based on the packet, the second public IP address associated with the virtual machine, and provides the packet to the virtual machine based on the second public IP address associated with the virtual machine.1. A method, comprising: receiving, by a network device of a cloud computing environment, a packet destined for a virtual machine of the cloud computing environment, the packet being received from a user device and via a public network, the network device being associated with a first public Internet protocol (IP) address, and the virtual machine being associated with a second public IP address that is different than the first public IP address; determining, by the network device and based on the packet, the second public IP address associated with the virtual machine; and providing, by the network device, the packet to the virtual machine based on the second public IP address associated with the virtual machine. 2. The method of claim 1, where: the first public IP address is assigned to the network device from a plurality of public IP addresses, and the second public IP address is assigned to the virtual machine from the plurality of public IP addresses. 3. The method of claim 1, where the virtual machine receives the packet and processes the packet. 4. 
The method of claim 1, further comprising: receiving, by the network device, an additional packet from the virtual machine, the additional packet being destined for another virtual machine of the cloud computing environment, and the other virtual machine being associated with a third public IP address that is different than the first public IP address and the second public IP address. 5. The method of claim 4, where the third public IP address is assigned to the other virtual machine from a plurality of public IP addresses. 6. The method of claim 4, further comprising: determining, by the network device and based on the additional packet, the third public IP address associated with the other virtual machine; and providing, by the network device, the additional packet to the other virtual machine based on the third public IP address associated with the other virtual machine. 7. The method of claim 6, where the other virtual machine receives the additional packet and processes the additional packet. 8. A network device of a cloud computing environment, the network device comprising: one or more processors to: receive a packet destined for a virtual machine of the cloud computing environment, the packet being received from a user device and via a public network, the network device being associated with a first public Internet protocol (IP) address, and the virtual machine being associated with a second public IP address that is different than the first public IP address, determine, based on the packet, the second public IP address associated with the virtual machine, and provide the packet to the virtual machine based on the second public IP address associated with the virtual machine. 9. The network device of claim 8, where: the first public IP address is assigned to the network device from a plurality of public IP addresses, and the second public IP address is assigned to the virtual machine from the plurality of public IP addresses. 10. 
The network device of claim 8, where, when determining the second public IP address associated with the virtual machine, the one or more processors are further to: compare information in the packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the second public IP address associated with the virtual machine based on the comparison. 11. The network device of claim 8, where the one or more processors are further to: receive an additional packet from the virtual machine, the additional packet being destined for another virtual machine of the cloud computing environment, and the other virtual machine being associated with a third public IP address that is different than the first public IP address and the second public IP address. 12. The network device of claim 11, where the third public IP address is assigned to the other virtual machine from a plurality of public IP addresses. 13. The network device of claim 11, where the one or more processors are further to: determine, based on the additional packet, the third public IP address associated with the other virtual machine, and provide the additional packet to the other virtual machine based on the third public IP address associated with the other virtual machine. 14. The network device of claim 13, where, when determining the third public IP address associated with the other virtual machine, the one or more processors are further to: compare information in the additional packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the third public IP address associated with the other virtual machine based on the comparison. 15. 
A non-transitory computer-readable medium for storing instructions, the instructions comprising: one or more instructions that, when executed by a processor of a network device of a cloud computing environment, cause the processor to: receive a packet destined for a virtual machine of the cloud computing environment, the packet being received from a user device and via a public network, the network device being associated with a first public Internet protocol (IP) address, and the virtual machine being associated with a second public IP address that is different than the first public IP address, determine, based on the packet, the second public IP address associated with the virtual machine, and provide the packet to the virtual machine based on the second public IP address associated with the virtual machine. 16. The computer-readable medium of claim 15, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: compare information in the packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the second public IP address associated with the virtual machine based on the comparison. 17. The computer-readable medium of claim 15, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: receive an additional packet from the virtual machine, the additional packet being destined for another virtual machine of the cloud computing environment, and the other virtual machine being associated with a third public IP address that is different than the first public IP address and the second public IP address. 18. The computer-readable medium of claim 17, where the third public IP address is assigned to the other virtual machine from a plurality of public IP addresses. 19. 
The computer-readable medium of claim 17, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: determine, based on the additional packet, the third public IP address associated with the other virtual machine, and provide the additional packet to the other virtual machine based on the third public IP address associated with the other virtual machine. 20. The computer-readable medium of claim 19, where the instructions further comprise: one or more instructions that, when executed by the processor, cause the processor to: compare information in the additional packet with a table provided in the network device, the table including public IP addresses assigned to virtual machines of the cloud computing environment, and determine the third public IP address associated with the other virtual machine based on the comparison.
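The table lookup recited in claims 10 and 14 — compare information in the packet against a table of public IP addresses assigned to virtual machines, then forward based on the match — can be sketched as below. This is a minimal illustration under stated assumptions: the class name, the `vm_table` mapping, and the `dest_vm` packet field are invented for the example and do not come from the application.

```python
# Hedged sketch of the claim 10/14 lookup. Field and class names are
# assumptions; a real device would parse headers, not a dict.
from typing import Optional

class NetworkDevice:
    def __init__(self, own_public_ip: str, vm_table: dict):
        self.own_public_ip = own_public_ip  # first public IP (the device's)
        self.vm_table = vm_table            # table: VM identifier -> public IP

    def resolve_vm_ip(self, packet: dict) -> Optional[str]:
        """Compare packet information with the table to determine the
        destination VM's public IP (claim 10)."""
        return self.vm_table.get(packet.get("dest_vm"))

    def forward(self, packet: dict) -> Optional[str]:
        """Provide the packet to the VM at the resolved address; here we
        just return the address the packet would be sent to."""
        return self.resolve_vm_ip(packet)
```

The same lookup serves both directions in the claims: packets arriving from the public network (claim 1) and VM-to-VM packets resolved to a third public IP (claims 6 and 13).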
2,400
7,911
7,911
14,453,242
2,423
A system and method for advertising are disclosed. In an aspect, a method comprises rendering a content to a plurality of users, the content having a time duration, rendering a first selectable element associated with the content to a first one of the plurality of users at a first time during the time duration of the rendered content, and rendering a second selectable element associated with the content to a second one of the plurality of users at a second time during the time duration of the rendered content, wherein the second time is different from the first time and the first selectable element is not rendered to the first one of the plurality of users while the second selectable element is rendered to the second one of the plurality of users.
1. A method comprising: transmitting an advertisement to a plurality of users, wherein the advertisement has a time duration; transmitting a first selectable element associated with the advertisement exclusively to a first portion of the plurality of users at a first time during the time duration of the advertisement; transmitting a second selectable element associated with the advertisement exclusively to a second portion of the plurality of users at a second time during the time duration of the advertisement, wherein the first time and the second time are determined randomly, wherein the first time and the second time are different, and wherein the first selectable element and the second selectable element are associated with a particular merchant; receiving a selection of the first selectable element at a third time; initiating a first communication session in response to the selection of the first selectable element; receiving a selection of the second selectable element at a fourth time; and initiating a second communication session in response to the selection of the second selectable element.
A system and method for advertising are disclosed. In an aspect, a method comprises rendering a content to a plurality of users, the content having a time duration, rendering a first selectable element associated with the content to a first one of the plurality of users at a first time during the time duration of the rendered content, and rendering a second selectable element associated with the content to a second one of the plurality of users at a second time during the time duration of the rendered content, wherein the second time is different from the first time and the first selectable element is not rendered to the first one of the plurality of users while the second selectable element is rendered to the second one of the plurality of users.1. A method comprising: transmitting an advertisement to a plurality of users, wherein the advertisement has a time duration; transmitting a first selectable element associated with the advertisement exclusively to a first portion of the plurality of users at a first time during the time duration of the advertisement; transmitting a second selectable element associated with the advertisement exclusively to a second portion of the plurality of users at a second time during the time duration of the advertisement, wherein the first time and the second time are determined randomly, wherein the first time and the second time are different, and wherein the first selectable element and the second selectable element are associated with a particular merchant; receiving a selection of the first selectable element at a third time; initiating a first communication session in response to the selection of the first selectable element; receiving a selection of the second selectable element at a fourth time; and initiating a second communication session in response to the selection of the second selectable element.
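Claim 1 above requires two randomly determined, mutually different times within the advertisement's duration at which the two selectable elements are transmitted. A hedged sketch of that timing constraint, with all names and the seeding scheme assumed for illustration:

```python
# Illustrative only: the claim specifies random, differing times within the
# ad's duration; everything else here (seeding, uniform draw) is an assumption.
import random

def assign_overlay_times(ad_duration_s: float, seed: int = 0) -> tuple:
    """Pick a first and a second transmission time, both within the
    advertisement's time duration, guaranteed to differ per the claim."""
    rng = random.Random(seed)
    t1 = rng.uniform(0, ad_duration_s)
    t2 = rng.uniform(0, ad_duration_s)
    while t2 == t1:  # the claim requires the two times to be different
        t2 = rng.uniform(0, ad_duration_s)
    return t1, t2
```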
2,400
7,912
7,912
15,075,577
2,435
The present invention provides an approach for granting access and respectively denying access to an instruction set of a device. The technical teaching provides the advantage that unauthorized access can be effectively prevented. Hence, maintenance work can be performed by specialized staff and security sensitive parts of the instruction sets are secured.
1. A device providing secure vendor service access for its maintenance, comprising: a configuration storage providing a device configuration for operating the device; and a security module being arranged to set at least one access right for accessing the configuration storage; wherein the device is only operable if the at least one access right is set. 2. The device according to claim 1, wherein the at least one access right is one of a read access right and a write access right. 3. The device according to claim 1, wherein the device configuration comprises at least one of device parameters, a firmware, device control instructions, an instruction set for operating the device and status information. 4. The device according to claim 1, wherein the at least one access right can be assigned according to at least one of a group of further access parameters, the group comprising a permanent access right, a temporary access right and a period of time for which access is granted. 5. The device according to claim 1, wherein the security module comprises an interface module for setting the at least one access right. 6. The device according to claim 1, being arranged to operate the security module under usage of encryption techniques. 7. The device according to claim 1, being arranged such that the at least one access right is configurable such that access to the configuration storage is enabled, disabled or not set. 8. The device according to claim 1, being arranged such that a transition from the condition of access rights not being set to one of the conditions access enabled and access disabled and vice versa is shiftable. 9. The device according to claim 1, being arranged such that a direct transition from the condition of access rights enabled to access rights disabled and vice versa is prohibited. 10. The device according to claim 1, being arranged such that a status of at least one access right is coded by at least one access bit. 11. 
A method for operating a device providing secure vendor service access for its maintenance, comprising: providing a configuration storage providing a device configuration for operating the device; and providing a security module being arranged to assign at least one access right for accessing the configuration storage; wherein the device is only operable if the at least one access right is set. 12. The method according to claim 11, further comprising provision of a transition model specifying enabled transitions of access right states. 13. The method according to claim 11, further comprising unlocking the device if at least one access right is set. 14. The method according to claim 11, further comprising providing an assignment of access rights to at least a part of the stored device configuration. 15. A computer readable medium having stored thereon instructions executable by a computer processor for operating a device providing secure vendor service access for its maintenance, the instructions which, when executed by the processor, perform a method comprising: providing a configuration storage; providing a device configuration for operating the device; providing a security module arranged to assign at least one access right for accessing the configuration storage, wherein the device is only operable if the at least one access right is set; provision of a transition model specifying enabled transitions of access right states; unlocking the device if at least one access right is set; providing an assignment of access rights to at least a part of the stored device configuration. 16. The device according to claim 2, wherein the device configuration comprises at least one of device parameters, a firmware, device control instructions, an instruction set for operating the device and status information. 17. 
The device according to claim 2, wherein the at least one access right can be assigned according to at least one of a group of further access parameters, the group comprising a permanent access right, a temporary access right and a period of time for which access is granted. 18. The device according to claim 2, wherein the security module comprises an interface module for setting the at least one access right. 19. The device according to claim 2, being arranged to operate the security module under usage of encryption techniques. 20. The device according to claim 2, wherein the at least one access right is configurable such that access to the configuration storage is enabled, disabled or not set.
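The access-right transition model described in claims 7–9 (states enabled, disabled, and not set; direct transitions between enabled and disabled prohibited) can be sketched as a small state machine. This is an illustrative reading of the claims, not the patent's implementation; all class and method names are invented for the example.

```python
from enum import Enum

class AccessState(Enum):
    NOT_SET = 0
    ENABLED = 1
    DISABLED = 2

# Transition model per claims 7-9: any state may move to or from NOT_SET,
# but a direct ENABLED <-> DISABLED transition (either way) is prohibited.
ALLOWED_TRANSITIONS = {
    (AccessState.NOT_SET, AccessState.ENABLED),
    (AccessState.NOT_SET, AccessState.DISABLED),
    (AccessState.ENABLED, AccessState.NOT_SET),
    (AccessState.DISABLED, AccessState.NOT_SET),
}

class SecurityModule:
    """Hypothetical security module tracking one access right's state."""

    def __init__(self):
        self.state = AccessState.NOT_SET

    def set_access(self, new_state):
        # Reject transitions the model does not enable (claim 12's
        # "transition model specifying enabled transitions").
        if (self.state, new_state) not in ALLOWED_TRANSITIONS:
            raise PermissionError(
                f"transition {self.state.name} -> {new_state.name} prohibited"
            )
        self.state = new_state

    def device_operable(self):
        # Claim 1: the device is only operable if an access right is set.
        return self.state is not AccessState.NOT_SET
```

To move from enabled to disabled under this model, the state must pass through the not-set condition first, which matches the claim 8/9 language about shiftable versus prohibited transitions.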
2,400
7,913
7,913
14,913,415
2,425
Methods and apparatuses are provided which utilize the preamble of a multi-carrier modulated digital transmitted signal to indicate whether or not a special message (e.g., Emergency Alert System messages) is available for reception. When in power saving mode the receiver periodically detects the preamble, which requires limited functionality and power, to check for the presence of such special messages. The receiver only completely wakes up additional functionalities if a special message is detected in the preamble. This results in advantageous savings in power consumption, particularly for portable and handheld devices, while having the capability of receiving special messages anytime.
1. An apparatus for transmitting a multi-carrier modulated signal comprising: a source (111, 300) for providing data, said data divided into frames and comprising a wakeup message parameter which identifies whether or not a special message is included in said data; and a multi-carrier modulator (114) that modulates said data by allocating said data to a plurality of carriers in a plurality of modulation symbols, wherein the wakeup message parameter is included in at least one preamble symbol of a frame of data. 2. The apparatus according to claim 1 further comprising: a channel encoder (113) for channel encoding said data prior to the multi-carrier modulator. 3. The apparatus according to claim 1 wherein the wakeup message parameter comprises at least 1 bit. 4. The apparatus according to claim 3 wherein the wakeup message parameter comprises unused bit combinations in the preamble. 5. The apparatus according to claim 3 wherein the wakeup message parameter further identifies the type of message. 6. The apparatus according to claim 1 wherein the multi-carrier modulation is OFDM. 7. An apparatus for receiving a multi-carrier modulated signal in power saving mode, said apparatus comprising: a multi-carrier demodulator (124, 410) that periodically demodulates at least one preamble symbol of said modulated signal to create at least one demodulated preamble symbol, said at least one preamble symbol being at least one of a plurality of modulated symbols in a signal frame; and a signaling data detector (422) that detects preamble data from said at least one demodulated preamble symbol and for recovering a wakeup message parameter from said preamble data, wherein said wakeup message parameter identifies whether or not a special message is included in said modulated signal. 8. 
The apparatus according to claim 7, wherein the signaling data detector further wakes up additional functional blocks if said special message is included, comprising: the multi-carrier demodulator (124, 410) for further demodulating additional modulated symbols of said modulated signal to recover said special message; and a display device to display said special message. 9. The apparatus according to claim 7 further comprising: a channel decoder (123, 420) for channel decoding the output of the multi-carrier demodulator prior to recovering said wakeup message parameter. 10. The apparatus according to claim 8 further comprising: a channel decoder (123, 420) for channel decoding the output of the multi-carrier demodulator prior to recovering said special message. 11. The apparatus according to claim 7 wherein the wakeup message parameter comprises at least 1 bit. 12. The apparatus according to claim 7 wherein the wakeup message parameter comprises unused bit combinations in the preamble data. 13. The apparatus according to claim 7 wherein the wakeup message parameter further identifies the type of message. 14. The apparatus according to claim 7 wherein the multi-carrier modulation is OFDM. 15. A method for transmitting a multi-carrier modulated signal comprising: providing data (510), said data divided into frames and comprising a wakeup message parameter (520) which identifies whether or not a special message is included in said data; and modulating (540) said data by allocating said data to a plurality of carriers in a plurality of modulation symbols, wherein the wakeup message parameter is included in at least one preamble symbol of a frame of data. 16. The method according to claim 15 further comprising: channel encoding (540) said data prior to the step of modulating. 17. The method according to claim 15 wherein the wakeup message parameter comprises at least 1 bit. 18. 
The method according to claim 17 wherein the wakeup message parameter comprises unused bit combinations in the preamble. 19. The method according to claim 17 wherein the wakeup message parameter further identifies the type of message. 20. The method according to claim 15 wherein the multi-carrier modulation is OFDM. 21. A method for receiving a multi-carrier modulated signal in power saving mode, said method comprising: periodically demodulating (610, 620) at least one preamble symbol of said modulated signal to create at least one demodulated preamble symbol, said at least one preamble symbol being at least one of a plurality of modulated symbols in a signal frame; detecting preamble data (620) from said at least one demodulated preamble symbol; and recovering a wakeup message parameter (620) from said preamble data, wherein said wakeup message parameter identifies whether or not a special message (630) is included in said modulated signal. 22. The method according to claim 21, further comprising: waking up additional functionalities (640) if said special message is included, comprising: demodulating additional modulated symbols of said modulated signal to recover said special message (650); and displaying said special message. 23. The method according to claim 21 further comprising: channel decoding (620) after the step of demodulating and prior to recovering said wakeup message parameter. 24. The method according to claim 22 further comprising: channel decoding (620) after the step of demodulating and prior to recovering said special message. 25. The method according to claim 21 wherein the wakeup message parameter comprises at least 1 bit. 26. The method according to claim 21 wherein the wakeup message parameter comprises unused bit combinations in the preamble data. 27. The method according to claim 21 wherein the wakeup message parameter further identifies the type of message. 28. The method according to claim 21 wherein the multi-carrier modulation is OFDM.
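The power-saving receive loop in claims 21–22 (periodically demodulate only the preamble, wake the full receiver only when the wakeup parameter flags a special message) can be sketched as follows. The frame layout and field names here are illustrative assumptions, not the patent's signal format.

```python
def check_preamble_and_wake(frame, wakeup_bit_index=0):
    """Inspect only the (already demodulated) preamble bits of a frame and
    decide whether the rest of the receiver should be woken. The dict key
    'preamble' and the bit position are assumptions for the sketch."""
    preamble_bits = frame["preamble"]
    special_message_flag = preamble_bits[wakeup_bit_index]
    return bool(special_message_flag)

def power_saving_loop(frames):
    """Process a sequence of frames in power-saving mode: fully demodulate
    a frame (here, just record its index) only when its preamble signals a
    special message, e.g. an Emergency Alert System message."""
    woken_frames = []
    for i, frame in enumerate(frames):
        if check_preamble_and_wake(frame):
            # In a real receiver this is where the additional functional
            # blocks (full demodulator, channel decoder, display) power up.
            woken_frames.append(i)
    return woken_frames
```

The power saving comes from the fact that `check_preamble_and_wake` only ever touches the preamble symbol, so the demodulator for the remaining symbols of a frame stays off unless the wakeup parameter is set.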
2,400
7,914
7,914
15,890,004
2,447
An electronic device with a touch-sensitive surface and display can execute a messaging application. The messaging application provides options for sending a message with a large attachment. In one option it allows for sending a message with a large attachment by uploading and storing the attachment on a cloud server, embeds a link to the storage location in the message, and sends the message without the attachment. The messaging application may also include a UI element in the message that includes an indicator about the status of the stored attachment. Furthermore, the messaging application may embed in the message a smaller sized version of the attachment before sending the message. The status indicator may display whether the link to the storage location has expired or whether the attachment has previously been retrieved from the cloud server.
1. An electronic device, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving an email message comprising a link to a location of an attachment associated with the email message on a server and validity information of the link; responsive to an action opening the email message, determining whether the link is valid using the validity information; responsive to an action selecting the link to the location of the attachment and in accordance with a determination that the link is valid, sending a request to the server to retrieve the attachment; retrieving the attachment from the server; and responsive to retrieving the attachment, updating the validity information of the link to indicate that the attachment has been retrieved from the server, and storing the email message with the updated validity information and with the attachment being embedded within the email message. 2. 
A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device, the one or more programs including instructions for: receiving an email message comprising a link to a location of an attachment associated with the email message on a server and validity information of the link; responsive to an action opening the email message, determining whether the link is valid using the validity information; responsive to an action selecting the link to the location of the attachment and in accordance with a determination that the link is valid, sending a request to the server to retrieve the attachment; retrieving the attachment from the server; and responsive to retrieving the attachment, updating the validity information of the link to indicate that the attachment has been retrieved from the server, and storing the email message with the updated validity information and with the attachment being embedded within the email message. 3. A method, comprising: receiving an email message comprising a link to a location of an attachment associated with the email message on a server and validity information of the link; responsive to an action opening the email message, determining whether the link is valid using the validity information; responsive to an action selecting the link to the location of the attachment and in accordance with a determination that the link is valid, sending a request to the server to retrieve the attachment; retrieving the attachment from the server; and responsive to retrieving the attachment, updating the validity information of the link to indicate that the attachment has been retrieved from the server, and storing the email message with the updated validity information and with the attachment being embedded within the email message. 4. 
An electronic device, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: receiving an email message comprising a link to a location of an attachment associated with the email on a server and validity information of the link; responsive to receiving selection of the link to the location of the attachment and the link being valid, sending a request to the server to retrieve the attachment; and in accordance with the determination that the attachment has been retrieved from the server, updating the visible indication of the validity status to indicate that the attachment has been retrieved from the server. 5. A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of an electronic device, the one or more programs including instructions for: receiving an email message comprising a link to a location of an attachment associated with the email on a server and validity information of the link; responsive to receiving selection of the link to the location of the attachment and the link being valid, sending a request to the server to retrieve the attachment; and in accordance with the determination that the attachment has been retrieved from the server, updating the visible indication of the validity status to indicate that the attachment has been retrieved from the server. 6. 
A method, comprising: receiving an email message comprising a link to a location of an attachment associated with the email on a server and validity information of the link; responsive to receiving selection of the link to the location of the attachment and the link being valid, sending a request to the server to retrieve the attachment; and in accordance with the determination that the attachment has been retrieved from the server, updating the visible indication of the validity status to indicate that the attachment has been retrieved from the server.
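The retrieval flow in claims 1 and 3 (check the link's validity information on open, fetch from the server only if valid, then update the validity information to record the retrieval) can be sketched like this. The classes, fields, and the caller-supplied `fetch` callable are invented for illustration and are not the patent's API.

```python
import time
from dataclasses import dataclass

@dataclass
class AttachmentLink:
    """Hypothetical link plus validity information for a stored attachment."""
    url: str
    expires_at: float      # validity information: expiry time (epoch seconds)
    retrieved: bool = False  # validity information: has it been fetched?

    def is_valid(self, now=None):
        now = time.time() if now is None else now
        return now < self.expires_at

def open_attachment(link, fetch, now=None):
    """If the link is valid, retrieve the attachment from the server via the
    caller-supplied `fetch` callable, then update the validity information so
    the message's status indicator can show the attachment as retrieved."""
    if not link.is_valid(now):
        raise ValueError("link expired")
    data = fetch(link.url)
    link.retrieved = True  # status indicator flips to "retrieved"
    return data
```

After `open_attachment` succeeds, the message would be re-stored with the updated validity information and the attachment embedded, which is the final step of the claimed method.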
2,400
7,915
7,915
14,207,998
2,486
A system and method provide an image that adjusts in response to at least one vehicle mounted sensor.
1. A method for adjusting image horizon for a vehicle mounted camera, comprising: providing a camera mounted in a vehicle; providing at least one sensor in said vehicle, the sensor detecting a change in tilt of said vehicle; and adjusting an image horizon in response to said detected change in tilt of said vehicle. 2. A method in accordance with claim 1, wherein said image horizon automatically adjusts in response to said detected change in tilt of said vehicle. 3. A method in accordance with claim 2, wherein said image horizon is adjusted as a digital video effect. 4. A method in accordance with claim 1, wherein said detected change in tilt is provided as vehicle telemetry data. 5. A method in accordance with claim 2, wherein said image horizon is adjusted to match or approximate a skyline horizon during tilting of said vehicle. 6. A method in accordance with claim 2, wherein said image horizon is adjusted with a variation in zoom of the image. 7. A method in accordance with claim 1, further comprising: capturing a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution; selecting a first desired portion of the captured, native first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and displaying said selected first portion at said second, output resolution. 8. A method in accordance with claim 7, wherein said selecting of a desired first portion of the first image or video is provided by a graphical user interface having a selectable extraction window. 9. A method in accordance with claim 8, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation. 10. 
A system for adjusting image horizon for a vehicle mounted camera, comprising: a camera mounted in a vehicle; at least one sensor in said vehicle, the sensor configured to detect a change in tilt of said vehicle; a processor configured to access camera image data and data indicating tilt of said vehicle; and a digital video effects component, the digital video effects component configured to adjust an image horizon in response to said detected change in tilt of said vehicle. 11. A system in accordance with claim 10, wherein said digital video effects component is configured to automatically adjust image horizon in response to said detected change in tilt of said vehicle. 12. A system in accordance with claim 11, wherein said detected change in tilt is provided as vehicle telemetry data. 13. A system in accordance with claim 11, wherein said image horizon is adjusted to match or approximate a skyline horizon during tilting of said vehicle. 14. A system in accordance with claim 10, wherein said image horizon is adjusted with a variation in zoom of the image. 15. A system in accordance with claim 10, wherein said camera is configured to capture a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution, the system further comprising: a processor in communication with a graphical user interface, said interface configured to select a first desired portion of the native, first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and an output mechanism configured to transport said selected first portion to a router, switcher or server at said second, output resolution. 16. A system in accordance with claim 15, wherein said graphical user interface has a selectable extraction window. 17. 
A system in accordance with claim 16, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation. 18. A method for adjusting an image for a vehicle mounted camera, comprising: providing a camera mounted in a vehicle; providing at least one sensor in said vehicle, the sensor detecting data of interest relative to said vehicle; and adjusting an image in response to said detected data of said vehicle. 19. A method in accordance with claim 18, wherein said sensor data includes one or more of: gyro data; vehicle angle; attitude; altitude; speed; acceleration; traction; and navigational data. 20. A method in accordance with claim 18, wherein said sensor data includes environmental conditions for the vehicle, including one or more of: weather; sensed track conditions; wind; and temperature. 21. A method in accordance with claim 18, wherein said image adjustment includes one or more of: adjustment of an image horizon; adjustment of image crop; selection of image portions; tracking of objects of interest in images; rendering selective high definition images from greater than high definition cameras; selective capture of image points of interest; or adjustment of the image responsive to environmental conditions. 22. A method in accordance with claim 21, wherein said image adjustment is provided as a digital video effect. 23. A method in accordance with claim 22, wherein at least a portion of said image adjustment is performed by an on-board vehicle processor. 24. A method in accordance with claim 23, wherein said adjusted image is transmitted via wireless protocol to an external computing device. 25. 
A method in accordance with claim 18, further comprising: capturing a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution; selecting a first desired portion of the captured, native first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and displaying said selected first portion at said second, output resolution. 26. A method in accordance with claim 25, wherein said selecting of a desired first portion of the first image or video is provided by a graphical user interface having a selectable extraction window. 27. A method in accordance with claim 26, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation.
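The horizon-adjustment method claimed above can be sketched as a simple counter-rotation: the digital video effect rotates the frame by the negative of the sensed roll angle so the image horizon stays level. This is an illustrative sketch, not the patent's implementation; the function names and the assumption that tilt arrives as a roll angle in degrees are mine.

```python
import math

def horizon_correction(roll_deg):
    """Rotation to apply to the image, in degrees, to counter sensed vehicle roll."""
    return -roll_deg

def rotate_point(x, y, angle_deg, cx, cy):
    """Rotate pixel coordinate (x, y) about the image center (cx, cy)."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# A 10-degree roll from vehicle telemetry is countered with a -10 degree rotation:
corr = horizon_correction(10.0)
```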
A system and method provides an image that adjusts in response to at least one vehicle mounted sensor.1. A method for adjusting image horizon for a vehicle mounted camera, comprising: providing a camera mounted in a vehicle; providing at least one sensor in said vehicle, the sensor detecting a change in tilt of said vehicle; and adjusting an image horizon in response to said detected change in tilt of said vehicle. 2. A method in accordance with claim 1, wherein said image horizon automatically adjusts in response to said detected change in tilt of said vehicle. 3. A method in accordance with claim 2, wherein said image horizon is adjusted as a digital video effect. 4. A method in accordance with claim 1, wherein said detected change in tilt is provided as vehicle telemetry data. 5. A method in accordance with claim 2, wherein said image horizon is adjusted to match or approximate a skyline horizon during tilting of said vehicle. 6. A method in accordance with claim 2, wherein said image horizon is adjusted with a variation in zoom of the image. 7. A method in accordance with claim 1, further comprising: capturing a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution; selecting a first desired portion of the captured, native first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and displaying said selected first portion at said second, output resolution. 8. A method in accordance with claim 7, wherein said selecting of a desired first portion of the first image or video is provided by a graphical user interface having a selectable extraction window. 9. A method in accordance with claim 8, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation. 10. 
A system for adjusting image horizon for a vehicle mounted camera, comprising: a camera mounted in a vehicle; at least one sensor in said vehicle, the sensor configured to detect a change in tilt of said vehicle; a processor configured to access camera image data and data indicating tilt of said vehicle; and a digital video effects component, the digital video effects component configured to adjust an image horizon in response to said detected change in tilt of said vehicle. 11. A system in accordance with claim 10, wherein said digital video effects component is configured to automatically adjust image horizon in response to said detected change in tilt of said vehicle. 12. A system in accordance with claim 11, wherein said detected change in tilt is provided as vehicle telemetry data. 13. A system in accordance with claim 11, wherein said image horizon is adjusted to match or approximate a skyline horizon during tilting of said vehicle. 14. A system in accordance with claim 10, wherein said image horizon is adjusted with a variation in zoom of the image. 15. A system in accordance with claim 10, wherein said camera is configured to capture a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution, the system further comprising: a processor in communication with a graphical user interface, said interface configured to select a first desired portion of the native, first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and an output mechanism configured to transport said selected first portion to a router, switcher or server at said second, output resolution. 16. A system in accordance with claim 15, wherein said graphical user interface has a selectable extraction window. 17. 
A system in accordance with claim 16, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation. 18. A method for adjusting an image for a vehicle mounted camera, comprising: providing a camera mounted in a vehicle; providing at least one sensor in said vehicle, the sensor detecting data of interest relative to said vehicle; and adjusting an image in response to said detected data of said vehicle. 19. A method in accordance with claim 18, wherein said sensor data includes one or more of: gyro data; vehicle angle; attitude; altitude; speed; acceleration; traction; and navigational data. 20. A method in accordance with claim 18, wherein said sensor data includes environmental conditions for the vehicle, including one or more of: weather; sensed track conditions; wind; and temperature. 21. A method in accordance with claim 18, wherein said image adjustment includes one or more of: adjustment of an image horizon; adjustment of image crop; selection of image portions; tracking of objects of interest in images; rendering selective high definition images from greater than high definition cameras; selective capture of image points of interest; or adjustment of the image responsive to environmental conditions. 22. A method in accordance with claim 21, wherein said image adjustment is provided as a digital video effect. 23. A method in accordance with claim 22, wherein at least a portion of said image adjustment is performed by an on-board vehicle processor. 24. A method in accordance with claim 23, wherein said adjusted image is transmitted via wireless protocol to an external computing device. 25. 
A method in accordance with claim 18, further comprising: capturing a first image or video at a first resolution, which resolution is greater than high definition and higher than a predetermined second, output display resolution; selecting a first desired portion of the captured, native first image or video, wherein said first portion is at a resolution lower than that of the captured first image or video; and displaying said selected first portion at said second, output resolution. 26. A method in accordance with claim 25, wherein said selecting of a desired first portion of the first image or video is provided by a graphical user interface having a selectable extraction window. 27. A method in accordance with claim 26, wherein said extraction window is configured to allow an operator to navigate within said captured image or video and select portions thereof for presentation.
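The extraction-window claims above (capture above output resolution, select a lower-resolution portion via a selectable window, present it at the output resolution) can be sketched as a clamp-and-scale step. Names and the window representation are illustrative assumptions, not the patent's implementation.

```python
def extract_window(frame_w, frame_h, win):
    """Clamp a selectable extraction window (x, y, w, h) to the captured frame."""
    x, y, w, h = win
    x = max(0, min(x, frame_w - w))   # keep the window inside the frame horizontally
    y = max(0, min(y, frame_h - h))   # and vertically
    return (x, y, w, h)

def scale_factor(win_w, out_w):
    """Scaling needed to present the selected portion at the output resolution."""
    return out_w / win_w

# An 8K-wide capture with a 1920-wide window, presented at a 1920-wide output:
win = extract_window(7680, 4320, (7000, 100, 1920, 1080))
k = scale_factor(win[2], 1920)
```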
2,400
7,916
7,916
14,298,670
2,483
A method, system, and apparatus are provided for capturing a video image and speed of a target vehicle. A ranging device detects a distance to a target vehicle. The focal distance or zoom of a video camera is set and adjusted based on the distance. The speed of travel of the vehicle is detected, displayed, and/or stored in association with a video image captured of the vehicle by the video camera. A range of distances within which to capture the video image and speed of the vehicle may be set by detecting distances between a pair of landmarks or using GPS and compass heading data. An inclinometer is provided to aid initiation of a power-conservation mode. A target tracking time may be determined and compared to a minimum tracking time period. A device certification period can be stored and displayed and the device deactivated upon expiration thereof.
1. A method for operating a traffic enforcement system device, the method comprising: determining a distance to a moving target vehicle; focusing a camera on said target vehicle based on said determined distance; zooming said camera in on said target vehicle such that an image of said target vehicle substantially fills a field of view of a display of said camera; periodically redetermining said distance to said moving target vehicle to maintain said image of said target substantially within said field of view of said camera; determining target data for said moving target vehicle; displaying said target data on said camera display; capturing one or more images of said target vehicle; and storing said one or more images and corresponding target data of said target vehicle. 2. The method of claim 1, wherein said determining a distance step comprises transmitting an electromagnetic signal at said target vehicle and receiving a return electromagnetic signal therefrom. 3. The method of claim 2, wherein said electromagnetic signal is a laser signal. 4. The method of claim 2, wherein said electromagnetic signal is a microwave signal. 5. The method of claim 1, wherein said camera is a digital camera. 6. The method of claim 1, wherein said camera is a video camera. 7. The method of claim 1, wherein said periodically redetermining step is continuously. 8. The method of claim 1, wherein said target data includes the speed of said target vehicle. 9. The method of claim 1, wherein said target data includes a compass heading of said target vehicle. 10. The method of claim 1, wherein said target data includes a geographic position of said target vehicle. 11. The method of claim 1, wherein said determining target data is for a period of time. 12. The method of claim 11, wherein said period of time is predetermined. 13. The method of claim 12, wherein said period of time is a minimum period of time. 14. 
The method of claim 13, wherein if said period of time is less than said minimum period of time, skipping said storing step. 15. The method of claim 1, further comprising storing a certification date corresponding to a certification of said traffic enforcement system device. 16. The method of claim 15, further comprising storing an expiration date of said certification. 17. The method of claim 16, further comprising storing a time period before said expiration date, wherein an indication is displayed on said camera display during said time period. 18. The method of claim 16, further comprising storing a time period before said expiration date, wherein an audio indication is output from said traffic enforcement system during said time period. 19. The method of claim 16, wherein an indication is displayed on said camera display after said expiration date is reached. 20. The method of claim 16, wherein said traffic enforcement system is disabled after said expiration date is reached. 21. A traffic enforcement system device comprising: a detection module that determines one or more of a distance to a moving target vehicle and target data for said moving target vehicle, the detection module periodically redetermining one or more of said distance to said moving target vehicle and said target data; a display device; a camera configured to capture one or more images of said target vehicle, the focal distance of said camera being set based at least partially on said determined distance such that an image of said target vehicle substantially fills a field of view of the display device, the focal distance being periodically adjusted based on the redetermined one or more of said distance to said moving target vehicle and said target data to maintain said target vehicle substantially within said field of view of said camera; a control module that displays said target data on said display device and stores said one or more images and corresponding target data of said target vehicle 
in a memory. 22. The traffic enforcement system device of claim 21, wherein said detection module transmits an electromagnetic signal at said target vehicle and receives a return electromagnetic signal therefrom. 23. The traffic enforcement system device of claim 22, wherein said electromagnetic signal is a laser signal. 24. The traffic enforcement system device of claim 22, wherein said electromagnetic signal is a microwave signal. 25. The traffic enforcement system device of claim 21, wherein said one or more of said distance to said moving target vehicle and said target data is redetermined continuously. 26. The traffic enforcement system device of claim 21, wherein said target data includes the speed of said target vehicle. 27. The traffic enforcement system device of claim 21, wherein said target data includes a compass heading of said target vehicle. 28. The traffic enforcement system device of claim 21, wherein said target data includes a geographic position of said target vehicle. 29. A traffic enforcement system device comprising: a detection module that determines one or more of a distance to a moving target vehicle and target data for said moving target vehicle, said detection module determining said one or more of said distance and target data for said moving target vehicle for a period of time and measuring a duration of the period of time; a display device; a control module that displays said target data on said display. 30. The traffic enforcement system device of claim 29, wherein said duration of said period of time is greater than a predetermined minimum period of time, and wherein said control module displays an indicia on said display. 31. The traffic enforcement system device of claim 29, wherein said duration of said period of time is greater than a predetermined minimum period of time, and wherein an audible tone is emitted by the traffic enforcement system. 32. 
The traffic enforcement system device of claim 29, further comprising: a camera configured to capture one or more images of said target vehicle, wherein said duration of said period of time is greater than a predetermined minimum period of time, and said control module stores said one or more images and corresponding target data of said target vehicle in a memory, or wherein said duration of said period of time is less than said predetermined minimum period of time and said control module does not store said one or more images and corresponding target data for said target vehicle in said memory. 33. A traffic enforcement system device comprising: a detection module that determines one or more of a distance to a moving target vehicle and target data for said moving target vehicle; a display device; a control module that displays said target data on said display and stores a certification date corresponding to a certification of said traffic enforcement system device in a memory. 34. The traffic enforcement system device of claim 33, wherein an expiration date of said certification is stored in said memory. 35. The traffic enforcement system device of claim 34, wherein a time period before said expiration date is stored in said memory, and wherein an indicia is displayed on said display device during said time period. 36. The traffic enforcement system device of claim 34, wherein an indication is displayed on said display device after said expiration date is reached. 37. The traffic enforcement system device of claim 34, wherein said traffic enforcement system is disabled after said expiration date is reached.
A method, system, and apparatus are provided for capturing a video image and speed of a target vehicle. A ranging device detects a distance to a target vehicle. The focal distance or zoom of a video camera is set and adjusted based on the distance. The speed of travel of the vehicle is detected, displayed, and/or stored in association with a video image captured of the vehicle by the video camera. A range of distances within which to capture the video image and speed of the vehicle may be set by detecting distances between a pair of landmarks or using GPS and compass heading data. An inclinometer is provided to aid initiation of a power-conservation mode. A target tracking time may be determined and compared to a minimum tracking time period. A device certification period can be stored and displayed and the device deactivated upon expiration thereof.1. A method for operating a traffic enforcement system device, the method comprising: determining a distance to a moving target vehicle; focusing a camera on said target vehicle based on said determined distance; zooming said camera in on said target vehicle such that an image of said target vehicle substantially fills a field of view of a display of said camera; periodically redetermining said distance to said moving target vehicle to maintain said image of said target substantially within said field of view of said camera; determining target data for said moving target vehicle; displaying said target data on said camera display; capturing one or more images of said target vehicle; and storing said one or more images and corresponding target data of said target vehicle. 2. The method of claim 1, wherein said determining a distance step comprises transmitting an electromagnetic signal at said target vehicle and receiving a return electromagnetic signal therefrom. 3. The method of claim 2, wherein said electromagnetic signal is a laser signal. 4. 
The method of claim 2, wherein said electromagnetic signal is a microwave signal. 5. The method of claim 1, wherein said camera is a digital camera. 6. The method of claim 1, wherein said camera is a video camera. 7. The method of claim 1, wherein said periodically redetermining step is continuously. 8. The method of claim 1, wherein said target data includes the speed of said target vehicle. 9. The method of claim 1, wherein said target data includes a compass heading of said target vehicle. 10. The method of claim 1, wherein said target data includes a geographic position of said target vehicle. 11. The method of claim 1, wherein said determining target data is for a period of time. 12. The method of claim 11, wherein said period of time is predetermined. 13. The method of claim 12, wherein said period of time is a minimum period of time. 14. The method of claim 13, wherein if said period of time is less than said minimum period of time, skipping said storing step. 15. The method of claim 1, further comprising storing a certification date corresponding to a certification of said traffic enforcement system device. 16. The method of claim 15, further comprising storing an expiration date of said certification. 17. The method of claim 16, further comprising storing a time period before said expiration date, wherein an indication is displayed on said camera display during said time period. 18. The method of claim 16, further comprising storing a time period before said expiration date, wherein an audio indication is output from said traffic enforcement system during said time period. 19. The method of claim 16, wherein an indication is displayed on said camera display after said expiration date is reached. 20. The method of claim 16, wherein said traffic enforcement system is disabled after said expiration date is reached. 21. 
A traffic enforcement system device comprising: a detection module that determines one or more of a distance to a moving target vehicle and target data for said moving target vehicle, the detection module periodically redetermining one or more of said distance to said moving target vehicle and said target data; a display device; a camera configured to capture one or more images of said target vehicle, the focal distance of said camera being set based at least partially on said determined distance such that an image of said target vehicle substantially fills a field of view of the display device, the focal distance being periodically adjusted based on the redetermined one or more of said distance to said moving target vehicle and said target data to maintain said target vehicle substantially within said field of view of said camera; a control module that displays said target data on said display device and stores said one or more images and corresponding target data of said target vehicle in a memory. 22. The traffic enforcement system device of claim 21, wherein said detection module transmits an electromagnetic signal at said target vehicle and receives a return electromagnetic signal therefrom. 23. The traffic enforcement system device of claim 22, wherein said electromagnetic signal is a laser signal. 24. The traffic enforcement system device of claim 22, wherein said electromagnetic signal is a microwave signal. 25. The traffic enforcement system device of claim 21, wherein said one or more of said distance to said moving target vehicle and said target data is redetermined continuously. 26. The traffic enforcement system device of claim 21, wherein said target data includes the speed of said target vehicle. 27. The traffic enforcement system device of claim 21, wherein said target data includes a compass heading of said target vehicle. 28. 
The traffic enforcement system device of claim 21, wherein said target data includes a geographic position of said target vehicle. 29. A traffic enforcement system device comprising: a detection module that determines one or more of a distance to a moving target vehicle and target data for said moving target vehicle, said detection module determining said one or more of said distance and target data for said moving target vehicle for a period of time and measuring a duration of the period of time; a display device; a control module that displays said target data on said display. 30. The traffic enforcement system device of claim 29, wherein said duration of said period of time is greater than a predetermined minimum period of time, and wherein said control module displays an indicia on said display. 31. The traffic enforcement system device of claim 29, wherein said duration of said period of time is greater than a predetermined minimum period of time, and wherein an audible tone is emitted by the traffic enforcement system. 32. The traffic enforcement system device of claim 29, further comprising: a camera configured to capture one or more images of said target vehicle, wherein said duration of said period of time is greater than a predetermined minimum period of time, and said control module stores said one or more images and corresponding target data of said target vehicle in a memory, or wherein said duration of said period of time is less than said predetermined minimum period of time and said control module does not store said one or more images and corresponding target data for said target vehicle in said memory. 33. 
A traffic enforcement system device comprising: a detection module that determines one or more of a distance to a moving target vehicle and target data for said moving target vehicle; a display device; a control module that displays said target data on said display and stores a certification date corresponding to a certification of said traffic enforcement system device in a memory. 34. The traffic enforcement system device of claim 33, wherein an expiration date of said certification is stored in said memory. 35. The traffic enforcement system device of claim 34, wherein a time period before said expiration date is stored in said memory, and wherein an indicia is displayed on said display device during said time period. 36. The traffic enforcement system device of claim 34, wherein an indication is displayed on said display device after said expiration date is reached. 37. The traffic enforcement system device of claim 34, wherein said traffic enforcement system is disabled after said expiration date is reached.
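Two elements of the traffic-enforcement claims above lend themselves to a short sketch: setting camera zoom from the measured range so the target substantially fills the field of view (a pinhole-model approximation), and skipping the storing step when the tracking time is below the minimum (claim 14). All parameter values and names are illustrative assumptions, not the patent's implementation.

```python
def focal_length_mm(distance_m, target_width_m=2.0, image_width_mm=4.0, fill=0.9):
    """Pinhole-model focal length so a target of the given width spans
    `fill` of the sensor width at the measured distance."""
    return distance_m * 1000.0 * (image_width_mm * fill) / (target_width_m * 1000.0)

def should_store(tracking_time_s, minimum_s):
    """Per claim 14: skip storing images/target data if tracking time
    is less than the minimum tracking time period."""
    return tracking_time_s >= minimum_s

# A 2 m-wide vehicle at 100 m, filling 90% of a 4 mm-wide sensor:
f = focal_length_mm(100.0)
```

As the target closes, the same formula yields a shorter focal length, which matches the claims' periodic readjustment of focus/zoom from the redetermined distance.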
2,400
7,917
7,917
14,456,709
2,416
The present disclosure relates to transport link quality measurement in a distributed antenna system. A link quality indicator associated with the functional performance of a digital transport link in the distributed antenna system can be determined by a component of the distributed antenna system. An indication of a potential fault condition can be determined based on the link quality indicator before a fault condition associated with the potential fault condition occurs. The indication of the potential fault condition can be presented, for example, via a graphical user interface, a table, or an email alert.
1. A method comprising: determining, by a component of a distributed antenna system, a link quality indicator associated with an aspect of performance of a digital transport link in the distributed antenna system; and determining an indication of a potential fault condition based on the link quality indicator before a fault condition associated with the potential fault condition occurs. 2. The method of claim 1, wherein determining the link quality indicator includes monitoring at least one of a signal-to-noise ratio, a resynchronization rate, a bit-error rate, or an interference associated with the digital transport link. 3. The method of claim 1, wherein determining the link quality indicator includes measuring at least one of a direct current resistance, a roundtrip time of a reflected signal, a current flow, an attenuation, an impedance, or a resistance of the digital transport link. 4. The method of claim 1, wherein determining the indication of the potential fault condition includes at least one of: comparing an attenuation of a signal transmitted over a cable to a length of the cable, comparing a transmitted power level to a received power level, or comparing a measured current flow to an expected current flow. 5. The method of claim 1, wherein determining the indication of the potential fault condition includes using two or more different link quality indicators in combination. 6. The method of claim 1, further comprising presenting the indication of the potential fault condition via at least one of a graphical user interface, a table, an email alert, or a text message. 7. 
The method of claim 6, wherein presenting the indication of the potential fault condition is by the graphical user interface, wherein the graphical user interface includes a network diagram depicting the distributed antenna system and one or more selectable data filters for selecting an attribute of the digital transport link in the distributed antenna system, wherein presentation of the digital transport link is based on the attribute. 8. A system comprising: a measurement module configured to determine a link quality indicator associated with an aspect of performance of a digital transport link in a distributed antenna system; and a diagnostic module configured to determine an indication of a potential fault condition based on the link quality indicator before a fault condition associated with the potential fault condition occurs. 9. The system of claim 8, wherein the measurement module and the diagnostic module are components of a head-end unit of the distributed antenna system. 10. The system of claim 8, wherein the measurement module is configured to determine a near-end crosstalk value for one or more frequencies. 11. The system of claim 10, wherein the measurement module is configured to determine an expected near-end crosstalk value for each of the one or more frequencies. 12. The system of claim 11, wherein the diagnostic module is configured to determine the indication of the potential fault condition by determining that a cable type of the digital transport link is different than an intended cable type based on comparing a measured near-end crosstalk value at each of the one or more frequencies to expected near-end crosstalk values at each of the one or more frequencies. 13. The system of claim 8, further comprising a presentation module configured to present the indication of the potential fault condition via at least one of a graphical user interface, a table, an email alert, or a text message. 14. 
The system of claim 13, wherein the presentation module is configured to present one or more selectable data filters for selecting an attribute of the digital transport link, and a network diagram depicting the distributed antenna system where the digital transport link is presented based on the attribute. 15. The system of claim 8, wherein the diagnostic module is configured to determine an indication of a potential fault condition based on a difference between the link quality indicator measured at a first time and the link quality indicator measured at a second time. 16. A system comprising: a measurement module configured to measure a parameter associated with a component of a distributed antenna system; a diagnostic module configured to determine an indication of a potential fault condition before a fault condition associated with the potential fault condition occurs by comparing the parameter with an expected value of the parameter; and a presentation module configured to provide the indication of a potential fault condition via a graphical user interface. 17. The system of claim 16, wherein the presentation module is further configured to present (i) one or more selectable data filters for selecting an attribute of a digital transport link, and (ii) a network diagram depicting the distributed antenna system based on the attribute. 18. 
The system of claim 16, wherein the diagnostic module is configured to: measure a first value of the parameter at a first time; measure a second value of the parameter at a second time; determine a first expected value of the parameter; determine a second expected value of the parameter; compare the first value of the parameter to the first expected value of the parameter to generate a first comparison; compare the second value of the parameter to the second expected value of the parameter to generate a second comparison; determine a rate of change between the first comparison and the second comparison over a period of time between the first time and the second time; and compare the rate of change to a threshold rate of change to determine the indication of the potential fault condition. 19. The system of claim 16, wherein the measurement module is configured to determine a location of an interference source based on one or more interference link quality indicators. 20. The system of claim 16, wherein the diagnostic module is configured to predict a likelihood that the fault condition will occur before the fault condition occurs based on the indication of the potential fault condition.
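Claim 18 above spells out a rate-of-change diagnostic: measure a parameter at two times, compare each measurement against its expected value, and flag a potential fault when the deviation between measurement and expectation grows faster than a threshold. A minimal sketch of that logic follows; the function name, argument order, and threshold handling are illustrative assumptions, not anything specified in the filing.

```python
def rate_of_change_fault(measured_1: float, expected_1: float, time_1: float,
                         measured_2: float, expected_2: float, time_2: float,
                         threshold: float) -> bool:
    """Claim-18-style diagnostic (hypothetical sketch).

    Compare each measured parameter value with its expected value,
    then flag a potential fault if the deviation is changing faster
    than a threshold rate over the interval between the two samples.
    """
    deviation_1 = measured_1 - expected_1   # first comparison
    deviation_2 = measured_2 - expected_2   # second comparison
    rate = (deviation_2 - deviation_1) / (time_2 - time_1)
    return abs(rate) > threshold
```

For example, a deviation that grows from 0 to 6 units over 2 time units (rate 3) trips a threshold of 2, while a growth from 0 to 2 over the same interval does not.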
The present disclosure relates to transport link quality measurement in a distributed antenna system. A link quality indicator associated with the functional performance of a digital transport link in the distributed antenna system can be determined by a component of the distributed antenna system. An indication of a potential fault condition can be determined based on the link quality indicator before a fault condition associated with the potential fault condition occurs. The indication of the potential fault condition can be presented, for example, via a graphical user interface, a table, or an email alert.
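Claim 4 of this record describes deriving an expectation from a known link property and comparing it against a measurement, e.g. checking a cable's measured attenuation against what its length predicts. A hedged sketch of that comparison; the per-100 m loss rate and the fault margin are assumed values chosen only for illustration.

```python
def expected_attenuation_db(cable_length_m: float,
                            db_per_100m: float = 6.0) -> float:
    """Expected attenuation for a cable of known length
    (loss rate is an assumed, illustrative figure)."""
    return cable_length_m / 100.0 * db_per_100m

def potential_fault(measured_db: float, cable_length_m: float,
                    margin_db: float = 3.0) -> bool:
    """Claim-4-style check (hypothetical sketch): flag a potential
    fault when measured attenuation exceeds the length-based
    expectation by more than a margin."""
    return measured_db > expected_attenuation_db(cable_length_m) + margin_db
```

With the assumed 6 dB/100 m rate and 3 dB margin, a 100 m run measuring 12 dB of loss would be flagged, while 7 dB would not.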
2,400
7,918
7,918
14,819,306
2,485
A method and apparatus for video coding with spatial prediction mode for multi-mode video coding is disclosed. In one aspect, the method includes coding a slice of video data, the slice including a plurality of pixels organized into a first line and a plurality of non-first lines. The coding of the slice further includes coding a current pixel of the first line in a spatial prediction mode using a previous pixel of the first line as a predictor and coding another pixel of a non-first line in a coding mode other than the spatial prediction mode.
1. A method for coding video data via a plurality of coding modes in display link video compression, comprising: coding a slice of the video data, the slice comprising a plurality of pixels in a first line of the slice and a plurality of non-first lines of the slice, the coding of the slice comprising: coding a current pixel of the first line of the slice in a spatial prediction mode using a previous pixel of the first line as a predictor; and coding another pixel of a non-first line of the slice in a coding mode other than the spatial prediction mode. 2. The method of claim 1, wherein: the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and the coding of the slice of the video data further comprises coding the first line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 3. The method of claim 1, wherein the slice of the video data is further organized into a plurality of blocks, each block being a two-dimensional (2D) block including at least two rows of pixels, each line of the slice comprising a plurality of blocks, wherein the current pixel and the previous pixel are in the same row, and wherein the coding of the slice of the video data further comprises coding each row of the first line via corresponding first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 4. The method of claim 1, further comprising coding a first pixel of the first line of the slice without prediction. 5. 
The method of claim 4, further comprising coding a second pixel of the first line of the slice without prediction, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel, the coding of the slice of the video data further comprising coding the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 6. The method of claim 4, further comprising coding a second pixel of the first line of the slice in the spatial prediction mode using the first pixel of the first line as a predictor, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel, the coding of the slice of the video data further comprising coding the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 7. The method of claim 4, further comprising: determining whether there is a rate constraint on the coding of the video data; and truncating a bit depth of the first pixel in response to the determination that the rate constraint exists. 8. The method of claim 1, further comprising coding a first pixel of the first line in the spatial prediction mode using a default predictor, wherein the default predictor is dependent upon a bit-depth of the video data. 9. 
The method of claim 1, further comprising coding a current pixel of a current line in a median adaptive prediction (MAP) mode using a previous pixel of the current line and first and second pixels of a previous line as predictors, the current pixel of the current line and the previous pixel of the current line being separated by a first intervening pixel and the first and second pixels of the previous line being separated by a second intervening pixel, the coding of the slice of the video data further comprises coding the current line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 10. A device for coding video data via a plurality of coding modes in display link video compression, comprising: a memory configured to store the video data; and a processor in communication with the memory and configured to: code a slice of the video data, the slice comprising a plurality of pixels in a first line of the slice and a plurality of non-first lines of the slice; code a current pixel of the first line of the slice in a spatial prediction mode using a previous pixel of the first line as a predictor; and code another pixel of a non-first line of the slice in a coding mode other than the spatial prediction mode. 11. The device of claim 10, wherein: the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and the processor is further configured to code the first line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 12. 
The device of claim 10, wherein the slice of the video data is further organized into a plurality of blocks, each block being a two-dimensional (2D) block including at least two rows of pixels, each line of the slice comprising a plurality of blocks, wherein the current pixel and the previous pixel are in the same row, and wherein the processor is further configured to code each row of the first line via corresponding first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 13. The device of claim 10, wherein the processor is further configured to code a first pixel of the first line of the slice without prediction. 14. The device of claim 13, wherein the processor is further configured to code a second pixel of the first line of the slice without prediction, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel, the coding of the slice of the video data further comprising coding the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 15. 
The device of claim 13, wherein the processor is further configured to: code a second pixel of the first line of the slice in the spatial prediction mode using the first pixel of the first line as a predictor, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and code the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 16. The device of claim 13, wherein the processor is further configured to: determine whether there is a rate constraint on the coding of the video data; and truncate a bit depth of the first pixel in response to the determination that the rate constraint exists. 17. The device of claim 10, wherein the processor is further configured to code a first pixel of the first line in the spatial prediction mode using a default predictor, wherein the default predictor is dependent upon a bit-depth of the video data. 18. The device of claim 10, wherein the processor is further configured to code a current pixel of a current line in a median adaptive prediction (MAP) mode using a previous pixel of the current line and first and second pixels of a previous line as predictors, the current pixel of the current line and the previous pixel of the current line being separated by a first intervening pixel and the first and second pixels of the previous line being separated by a second intervening pixel, the coding of the slice of the video data further comprises coding the current line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 19. 
An apparatus, comprising: means for coding a slice of the video data, the slice comprising a plurality of pixels in a first line of the slice and a plurality of non-first lines of the slice; means for coding a current pixel of the first line of the slice in a spatial prediction mode using a previous pixel of the first line as a predictor; and means for coding another pixel of a non-first line of the slice in a coding mode other than the spatial prediction mode. 20. The apparatus of claim 19, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and the apparatus further comprises means for coding the first line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 21. The apparatus of claim 19, wherein the slice of the video data is further organized into a plurality of blocks, each block being a two-dimensional (2D) block including at least two rows of pixels, each line of the slice comprising a plurality of blocks, wherein the current pixel and the previous pixel are in the same row, and wherein the apparatus further comprises means for coding each row of the first line via corresponding first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 22. The apparatus of claim 19, further comprising means for coding a first pixel of the first line of the slice without prediction. 23. 
The apparatus of claim 22, further comprising: means for coding a second pixel of the first line of the slice without prediction, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and means for coding the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 24. The apparatus of claim 22, further comprising: means for coding a second pixel of the first line of the slice in the spatial prediction mode using the first pixel of the first line as a predictor, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and means for coding the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 25. The apparatus of claim 22, further comprising: means for determining whether there is a rate constraint on the coding of the video data; and means for truncating a bit depth of the first pixel in response to the determination that the rate constraint exists. 26. 
A non-transitory computer readable storage medium having stored thereon instructions that, when executed, cause a processor of a device to: code a slice of the video data, the slice comprising a plurality of pixels in a first line of the slice and a plurality of non-first lines of the slice; code a current pixel of the first line of the slice in a spatial prediction mode using a previous pixel of the first line as a predictor; and code another pixel of a non-first line of the slice in a coding mode other than the spatial prediction mode. 27. The non-transitory computer readable storage medium of claim 26, wherein: the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and the non-transitory computer readable storage medium further has stored thereon instructions that, when executed, cause the processor to code the first line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 28. The non-transitory computer readable storage medium of claim 26, wherein the slice of the video data is further organized into a plurality of blocks, each block being a two-dimensional (2D) block including at least two rows of pixels, each line of the slice comprising a plurality of blocks, wherein the current pixel and the previous pixel are in the same row, and wherein the non-transitory computer readable storage medium further has stored thereon instructions that, when executed, cause the processor to code each row of the first line via corresponding first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 29. 
The non-transitory computer readable storage medium of claim 26, further having stored thereon instructions that, when executed, cause the processor to code a first pixel of the first line of the slice without prediction. 30. The non-transitory computer readable storage medium of claim 29, further having stored thereon instructions that, when executed, cause the processor to: code a second pixel of the first line of the slice without prediction, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and code the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path.
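Claims 2 and 5 of this record describe coding the first line of a slice via two interleaved coding paths, where the current pixel and its predictor are separated by an intervening pixel, each path starts with an unpredicted pixel, and neither path depends on pixels outside itself. A minimal sketch of the residual computation that structure implies; the function name and integer-pixel representation are assumptions for illustration, not the patented encoder.

```python
def first_line_residuals(pixels: list[int]) -> list[int]:
    """Hypothetical sketch of the two interleaved coding paths.

    Path A covers even indices, path B odd indices. The first pixel of
    each path is coded without prediction; every later pixel uses the
    previous pixel of the same path (two positions back) as predictor,
    so the paths stay independent of each other.
    """
    residuals = []
    for i, pixel in enumerate(pixels):
        if i < 2:
            residuals.append(pixel)                # path start: no prediction
        else:
            residuals.append(pixel - pixels[i - 2])  # same-path predictor
    return residuals
```

Because each prediction reaches back two positions, the two paths can be decoded in parallel, which is the apparent motivation for the interleaving in the claims.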
The apparatus of claim 22, further comprising: means for coding a second pixel of the first line of the slice in the spatial prediction mode using the first pixel of the first line as a predictor, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and means for coding the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 25. The apparatus of claim 22, further comprising: means for determining whether there is a rate constraint on the coding of the video data; and means for truncating a bit depth of the first pixel in response to the determination that the rate constraint exists. 26. A non-transitory computer readable storage medium having stored thereon instructions that, when executed, cause a processor of a device to: code a slice of the video data, the slice comprising a plurality of pixels in a first line of the slice and a plurality of non-first lines of the slice; code a current pixel of the first line of the slice in a spatial prediction mode using a previous pixel of the first line as a predictor; and code another pixel of a non-first line of the slice in a coding mode other than the spatial prediction mode. 27. 
The non-transitory computer readable storage medium of claim 26, wherein: the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and the non-transitory computer readable storage medium further has stored thereon instructions that, when executed, cause the processor to code the first line via first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 28. The non-transitory computer readable storage medium of claim 26, wherein the slice of the video data is further organized into a plurality of blocks, each block being a two-dimensional (2D) block including at least two rows of pixels, each line of the slice comprising a plurality of blocks, wherein the current pixel and the previous pixel are in the same row, and wherein the non-transitory computer readable storage medium further has stored thereon instructions that, when executed, cause the processor to code each row of the first line via corresponding first and second interleaved coding paths, each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path. 29. The non-transitory computer readable storage medium of claim 26, further having stored thereon instructions that, when executed, cause the processor to code a first pixel of the first line of the slice without prediction. 30. 
The non-transitory computer readable storage medium of claim 29, further having stored thereon instructions that, when executed, cause the processor to: code a second pixel of the first line of the slice without prediction, wherein the current pixel of the first line and the previous pixel of the first line are separated by an intervening pixel; and code the first line via first and second interleaved coding paths, the first interleaved coding path beginning with the first pixel, the second interleaved coding path beginning with the second pixel, and each of the pixels in the first and second interleaved coding paths being coded independently of pixels outside of the corresponding interleaved coding path.
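The interleaved-coding-path scheme recited in these claims can be illustrated with a minimal sketch. The assumptions here are mine, not the patent's: simple residual coding, two paths, and the first pixel of each path coded without prediction, with every later pixel using the previous pixel of its own path (two positions back) as the spatial predictor.

```python
# Hypothetical sketch of first-line coding via two interleaved paths.
# Function names and the residual-coding choice are illustrative only.

def code_first_line(pixels):
    """Return residuals for one line coded via two interleaved paths."""
    residuals = []
    for i, p in enumerate(pixels):
        if i < 2:
            # Each interleaved path starts with a pixel coded without
            # prediction (cf. claims 13-14).
            residuals.append(p)
        else:
            # Predictor is the previous pixel of the same interleaved
            # path, i.e. two positions back, so each path is coded
            # independently of pixels outside it.
            residuals.append(p - pixels[i - 2])
    return residuals

def decode_first_line(residuals):
    """Invert code_first_line()."""
    out = []
    for i, r in enumerate(residuals):
        out.append(r if i < 2 else r + out[i - 2])
    return out

line = [10, 12, 11, 15, 13, 18]
res = code_first_line(line)
assert decode_first_line(res) == line
```

Because each pixel depends only on its own path, the two paths could be coded in parallel, which is the apparent motivation for the interleaving.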
2,400
7,919
7,919
13,921,090
2,466
A method includes constructing a graph characterizing a set of packet headers associated with network traffic. The graph has a unique identifier for each possible combination of packet headers forming a path in the graph. A received packet is associated with a unique identifier in the graph. Characteristics of the received packet are reconstructed based upon the unique identifier.
1. A method, comprising: constructing a graph characterizing a set of packet headers associated with network traffic, wherein the graph has a unique identifier for each possible combination of packet headers forming a path in the graph; associating a received packet with a unique identifier in the graph; and reconstructing characteristics of the received packet based upon the unique identifier. 2. The method of claim 1 wherein the unique identifier is based upon a non-commutative function. 3. The method of claim 2 wherein the non-commutative function is a Cyclic Redundancy Check function. 4. The method of claim 1 wherein the characteristics specify the headers present in a traversed path. 5. The method of claim 1 wherein the characteristics have an associated set of flags. 6. The method of claim 1 wherein the characteristics have an associated set of actions. 7. The method of claim 1 further comprising loading the graph into an associative memory as a path table. 8. The method of claim 1 further comprising operating the associative memory as a multiple simultaneous match parser capable of matching multiple paths in a single lookup. 9. A processor, comprising: an associative memory storing a graph characterizing a set of packet headers associated with network traffic, wherein the graph has a unique identifier for each possible combination of packet headers forming a path in the graph, wherein the associative memory matches attributes of a received packet with a unique identifier; and an index memory to reconstruct characteristics of the received packet based upon the unique identifier. 10. The processor of claim 9 wherein the associative memory is a Ternary Content Addressable Memory. 11. The processor of claim 9 wherein the associative memory operates as a multiple simultaneous match parser capable of matching multiple paths in a single lookup. 12. The processor of claim 9 wherein the unique identifier is based upon a non-commutative function. 13. 
The processor of claim 12 wherein the non-commutative function is a Cyclic Redundancy Check function. 14. The processor of claim 9 wherein the characteristics specify the headers present in a traversed path. 15. The processor of claim 9 wherein the characteristics have an associated set of flags. 16. The processor of claim 9 wherein the characteristics have an associated set of actions. 17. A method, comprising: forming unique assigned values to arcs in a graph; constraining paths in the graph; forming calculated paths through the graph based upon the assigned values; constructing a path table with the calculated paths; and determining whether any of the calculated paths have an identical value, and if so, repeating the forming and constructing operations. 18. The method of claim 17 wherein constraining paths includes limiting the number of transitions through cyclic paths in the graph. 19. The method of claim 17 wherein constraining paths includes selectively eliminating paths in the graph. 20. The method of claim 17 wherein the unique assigned values are based upon a non-commutative function.
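The unique-path-identifier idea in these claims can be sketched with a CRC as the non-commutative accumulation function (per claims 2-3). The toy header graph and header names below are my own illustration, not from the patent.

```python
# Hypothetical sketch: fold an ordered header sequence into one
# CRC-based identifier, then use a path table to reconstruct the
# characteristics of a received packet from that identifier.
import zlib

def path_id(headers):
    """Accumulate a CRC over the ordered header sequence."""
    ident = 0
    for h in headers:
        # Non-commutative step: feeding headers in a different order
        # yields a different identifier, so each path through the
        # graph maps to its own id.
        ident = zlib.crc32(h.encode(), ident)
    return ident

# Path table for every allowed header combination in a toy graph.
paths = [
    ("eth", "ipv4", "tcp"),
    ("eth", "ipv4", "udp"),
    ("eth", "vlan", "ipv4", "tcp"),
]
table = {path_id(p): p for p in paths}
assert len(table) == len(paths)  # identifiers are unique

# Order matters, unlike a commutative function such as XOR:
assert path_id(("eth", "ipv4")) != path_id(("ipv4", "eth"))

# Reconstruct the headers present in the traversed path (claim 4):
received = path_id(("eth", "ipv4", "tcp"))
assert table[received] == ("eth", "ipv4", "tcp")
```

In hardware, the table lookup would be served by the associative memory (e.g. a TCAM) recited in the processor claims, with the index memory holding the per-path characteristics, flags, and actions.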
2,400
7,920
7,920
14,820,022
2,454
A vehicle system includes a processor programmed to output a shared screen for a meeting at a vehicle display configured to output infotainment settings. The processor is in communication with the vehicle display and a nomadic device. The processor is programmed to parse a calendar associated with the nomadic device for a meeting within a predefined or selected time window or period. The processor is further programmed to identify login information for the meeting, establish a communication link for the meeting based on the identified login information, and output a shared screen for the meeting at the display based on the vehicle being in a predefined state.
1. A vehicle system comprising: a display configured to output infotainment settings; and a processor in communication with the display and a nomadic device and programmed to, parse a calendar associated with the nomadic device for a meeting starting within a time window; identify login information for the meeting; and in response to the vehicle being in a predefined state, output at the display a shared screen for the meeting based on the login information. 2. The vehicle system of claim 1, wherein the processor is further programmed to output one or more meeting options based on the login information to the display. 3. The vehicle system of claim 2, wherein the one or more meeting options are at least one of a call-in to meeting selection, a start shared screen conference selection, a delay meeting reminder selection, and a dismiss meeting selection. 4. The vehicle system of claim 3, wherein the processor is further programmed to, in response to received input for the start shared screen conference selection, output the shared screen at the display based on the predefined state of the vehicle. 5. The vehicle system of claim 3, wherein the delay meeting reminder selection removes the one or more meeting options from the display for a predefined amount of time. 6. The vehicle system of claim 5, wherein the predefined state is at least one of a park state for a transmission and a wheel speed value approximately equal to zero. 7. The vehicle system of claim 1, wherein the time window corresponds to a set amount of time. 8. The vehicle system of claim 1, wherein the time window corresponds to a moving window based on a predetermined time from a current time. 9. The vehicle system of claim 1, wherein the display is a human machine interface display for an infotainment system. 10. The vehicle system of claim 1, wherein the login information includes at least one of a phone number, access code, hyperlink, and attendee ID. 11. 
The vehicle system of claim 10, wherein the hyperlink enables a communication link between a remote computer controlled by a meeting attendee and the processor. 12. The vehicle system of claim 11, wherein the shared screen provides content received from the remote computer. 13. The vehicle system of claim 1, wherein the calendar is stored at the nomadic device or a server. 14. A vehicle conference call method comprising: recognizing, via a vehicle system, an occupant based on a nomadic device; parsing calendar data associated with the occupant for a meeting starting within a predefined time window and associated meeting login information; presenting a prompt via a vehicle display for authorization of a communication link for the meeting; and if authorization is received, automatically transmitting the login information using the communication link. 15. The vehicle conference call method of claim 14, further comprising: establishing, via the vehicle system, communication with a remote server associated with the calendar data; and receiving the calendar data associated with the occupant from the remote server. 16. The vehicle conference call method of claim 14, wherein presenting a prompt comprises outputting to the vehicle display one or more meeting options based on the login information. 17. The vehicle conference call method of claim 16, wherein the one or more meeting options are at least one of a call-in to meeting selection, a start shared screen conference selection, a delay meeting reminder selection, and a dismiss meeting selection. 18. The vehicle conference call method of claim 14, wherein the meeting login information includes at least one of a phone number, access code, hyperlink, and attendee ID. 19. 
A computer-program product embodied in a non-transitory computer readable medium having stored instructions for programming a vehicle processor, comprising instructions for: parsing a linked nomadic device calendar for a meeting starting within a selected time period; identifying login information for the meeting; establishing a communication link for the meeting based on the identified login information; and in response to a predefined vehicle state, outputting a shared screen for the meeting at a display. 20. The computer-program product of claim 19, the non-transitory computer readable medium further comprising instructions for: establishing communication with a remote server; and retrieving the calendar from the remote server.
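The calendar-parsing step that runs through these claims can be sketched as follows, assuming the nomadic device's calendar is available as a list of records. The field names (`start`, `login`) and the 15-minute default window are illustrative, not from the patent.

```python
# Hypothetical sketch of parsing a linked calendar for a meeting
# starting within a selected time window and identifying its login
# information.
from datetime import datetime, timedelta

def find_upcoming_meeting(calendar, now, window_minutes=15):
    """Return the first meeting starting within the window, or None."""
    window_end = now + timedelta(minutes=window_minutes)
    for meeting in calendar:
        if now <= meeting["start"] <= window_end:
            return meeting
    return None

now = datetime(2024, 1, 8, 9, 0)
calendar = [
    {"start": datetime(2024, 1, 8, 9, 10),
     "login": {"phone": "+1-555-0100", "access_code": "1234"}},
    {"start": datetime(2024, 1, 8, 14, 0), "login": {}},
]

meeting = find_upcoming_meeting(calendar, now)
assert meeting is not None
# The identified login information then drives the meeting options
# (call in, start shared screen, delay reminder, dismiss).
assert meeting["login"]["access_code"] == "1234"
```

The remaining claimed steps, prompting for authorization at the vehicle display and gating the shared screen on a predefined vehicle state such as park, would sit on top of this lookup.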
2,400
7,921
7,921
13,752,811
2,482
Systems and methods for use in distributing video are provided. A first video distribution node includes a video processor configured to receive a video stream from a second video distribution node, and send the video stream to a third video distribution node, such that the video stream is sent substantially near real-time.
1. A first video distribution node comprising: a video processor configured to: receive a video stream from a second video distribution node; and send the video stream to a third video distribution node, such that the video stream is sent substantially near real-time. 2. A first video distribution node in accordance with claim 1, wherein the video processor is further configured to receive and send the video stream substantially simultaneously. 3. A first video distribution node in accordance with claim 1, further comprising a registration server configured to receive at least one video stream registration and at least one video stream request, wherein said registration server is further configured to associate a first video stream request with a first video stream registration. 4. A first video distribution node in accordance with claim 1, wherein said video processor broadcasts a video distribution request to at least the second video distribution node. 5. A first video distribution node in accordance with claim 1, wherein said video processor is further configured to receive a video stream from at least one of another video distribution node or a video source capturing the video stream. 6. A first video distribution node in accordance with claim 5, wherein the third video distribution node is communicatively coupled to said first video distribution node via at least a fourth video distribution node. 7. A first video distribution node in accordance with claim 1, wherein said video processor is further configured to combine metadata about the video stream with the video stream and separate metadata about the video stream from the video stream. 8. A first video distribution node in accordance with claim 1, wherein said first video distribution node is configured to re-distribute the video stream. 9. A first video distribution node in accordance with claim 1, wherein said first video distribution node is a software application running on a computing device. 10. 
A first video distribution node in accordance with claim 9, wherein said computing device is a mobile device. 11. A first video distribution cloud comprising a plurality of video distribution nodes, each video distribution node comprising: a video processor configured to: receive a video stream; and send the video stream to a second video distribution node, such that the video stream is sent substantially near real-time. 12. A first video distribution cloud in accordance with claim 11, wherein said video processor broadcasts a video distribution request to at least the second video distribution node. 13. A first video distribution cloud in accordance with claim 11, wherein the video processor is further configured to receive and send the video stream substantially simultaneously. 14. A first video distribution cloud in accordance with claim 11, wherein said plurality of video distribution nodes includes a registration server node that comprises a registration server configured to receive at least one video stream registration and at least one video stream request, wherein the registration server is further configured to associate a first video stream request with a first video stream registration. 15. A first video distribution cloud in accordance with claim 11, wherein said plurality of video distribution nodes includes a first bridge node communicatively coupled to a second bridge node in a second video distribution cloud. 16. A first video distribution cloud in accordance with claim 15, wherein the processor is further configured to send the video stream to a second video distribution node in the second video distribution cloud via said first bridge node. 17. A method for distributing video, said method comprising: receiving a video stream at near real-time from a source video distribution node within a video distribution network; and re-distributing the video stream at near real-time to at least one video distribution node within the video distribution network. 
18. A method for distributing video in accordance with claim 17, wherein the receiving the video stream further comprises: sending a first video stream request from a first video distribution node, wherein the first video stream request includes a video stream identifier; sending a first subscription request from the first video distribution node to a second video distribution node, wherein the first subscription request includes the video stream identifier; and sending a command to the second video distribution node, wherein the command instructs the second video distribution node to send a video stream associated with the video stream identifier to the first video distribution node. 19. A method for distributing video in accordance with claim 18, wherein the re-distributing the video stream further comprises: receiving a second video stream request by the first video distribution node from at least a third video distribution node, wherein the second video stream request includes the video stream identifier; receiving a second subscription request by the first video distribution node from at least a third video distribution node, wherein the second subscription request includes the video stream identifier; and distributing, by the first video distribution node, the video stream associated with the video stream identifier to at least the third video distribution node. 20. A method in accordance with claim 18, further comprising determining a route for the second video distribution node to use in sending the video stream to the first video distribution node. 21. A method in accordance with claim 17, wherein re-distributing the video stream occurs substantially at the same time as receiving the video stream.
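The relay behavior these claims describe, receiving and re-distributing a stream "substantially near real-time", amounts to forwarding each chunk to subscribers as it arrives rather than buffering the whole stream. The sketch below is a toy in-process model; class and method names are illustrative, not from the patent.

```python
# Hypothetical sketch of a video distribution node that forwards each
# received chunk to its subscribers immediately (receive and send
# substantially simultaneously), mirroring the second -> first -> third
# node topology of claim 1.

class VideoDistributionNode:
    def __init__(self, name):
        self.name = name
        self.subscribers = []  # downstream nodes
        self.received = []

    def subscribe(self, node):
        self.subscribers.append(node)

    def on_chunk(self, chunk):
        self.received.append(chunk)
        # Re-distribute before the next chunk arrives, so downstream
        # nodes lag by at most one chunk.
        for node in self.subscribers:
            node.on_chunk(chunk)

source = VideoDistributionNode("second")  # upstream source node
relay = VideoDistributionNode("first")    # the claimed first node
sink = VideoDistributionNode("third")     # downstream node
source.subscribe(relay)
relay.subscribe(sink)

for chunk in (b"frame0", b"frame1", b"frame2"):
    source.on_chunk(chunk)

assert sink.received == [b"frame0", b"frame1", b"frame2"]
```

A real deployment would replace the direct method calls with network transport, and the subscription step corresponds to the video stream requests and registrations handled by the claimed registration server.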
A method in accordance with claim 18, further comprising determining a route for the second video distribution node to use in sending the video stream to the first video distribution node. 21. A method in accordance with claim 17, wherein re-distributing the video stream occurs substantially at the same time as receiving the video stream.
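The subscribe-and-relay flow in claims 17–21 (a node requests a stream by identifier, subscribes to an upstream node, and re-distributes frames to its own subscribers as they arrive) can be sketched as follows. This is an illustrative sketch only; all class, method, and variable names are mine, not the patent's.

```python
# Illustrative sketch (identifiers are hypothetical, not from the patent)
# of the subscribe-and-relay flow in claims 17-21.

class VideoDistributionNode:
    def __init__(self, name):
        self.name = name
        self.streams = {}        # stream_id -> list of frames seen at this node
        self.subscribers = {}    # stream_id -> set of downstream nodes

    def subscribe(self, stream_id, downstream):
        # Claim 18: the subscription request carries the video stream identifier.
        self.subscribers.setdefault(stream_id, set()).add(downstream)

    def receive_frame(self, stream_id, frame):
        # Claim 21: re-distribution occurs substantially at the same time as
        # receipt -- each frame is relayed to subscribers as soon as it arrives.
        for node in self.subscribers.get(stream_id, ()):
            node.receive_frame(stream_id, frame)
        self.streams.setdefault(stream_id, []).append(frame)

source = VideoDistributionNode("source")
relay = VideoDistributionNode("relay")
viewer = VideoDistributionNode("viewer")
source.subscribe("cam-1", relay)    # relay subscribes to the source (claim 18)
relay.subscribe("cam-1", viewer)    # viewer subscribes to the relay (claim 19)
source.receive_frame("cam-1", b"frame-0")
```

After the single call on the source, the frame has already propagated through the relay to the viewer, mirroring the near-real-time re-distribution the claims describe.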
2,400
7,922
7,922
12,414,554
2,426
Network content delivery apparatus and methods based on content compiled from various sources and particularly selected for a given user. In one embodiment, the network comprises a cable television network, and the content sources include DVR, broadcast, nPVR, and VOD. The user-targeted content is assembled into a playlist, and displayed as a continuous stream on a virtual channel particular to that user. User interfaces accessible through the virtual channel present various functional options, including the selection or exploration of content having similarity or prescribed relationships to other content, and the ability to order purchasable content. An improved electronic program guide is also disclosed which allows a user to start over, record, view, receive information on, “catch up”, and rate content. Apparatus for remote access and configuration of the playlist and virtual channel functions, as well as a business rules “engine” implementing operational or business goals, are also disclosed.
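The abstract's "virtual channel" (a per-user playlist assembled from several source types and played back as one continuous stream) can be sketched roughly as below. All names are illustrative assumptions, not drawn from the disclosure.

```python
# Hypothetical sketch of the per-user virtual channel from the abstract:
# a playlist drawn from several content sources, played back-to-back.
from collections import deque

SOURCES = {"DVR", "broadcast", "nPVR", "VOD"}  # source types named in the abstract

class VirtualChannel:
    def __init__(self, user):
        self.user = user
        self.playlist = deque()

    def add(self, title, source):
        assert source in SOURCES, f"unknown source: {source}"
        self.playlist.append((title, source))

    def play_next(self):
        # Items play back-to-back, presenting one continuous stream to the user.
        return self.playlist.popleft() if self.playlist else None

channel = VirtualChannel("user-1")
channel.add("Recorded game", "DVR")
channel.add("Movie on demand", "VOD")
print(channel.play_next())  # ('Recorded game', 'DVR')
```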
1. In a content based network, a method of providing targeted content to a user, comprising: receiving information regarding a plurality of content; comparing the information to a set of criteria; selecting individual ones of said plurality of content for provision to said user based at least in part on said act of comparing; and providing said selected individual ones of said plurality of content to said user; wherein said content comprises content obtained from a plurality of different sources. 2. The method of claim 1, wherein said content sources are selected from the group consisting of: (i) an on demand content source; (ii) a broadcast program content source; (iii) a digital video recorder; (iv) a personal media content storage device; and (v) a pay-per-view content source. 3. The method of claim 1, wherein said information comprises metadata transmitted with and relating to individual ones of said plurality of content. 4. The method of claim 1, wherein said set of criteria comprises a user profile having information about said user relating to various aspects of said content, and said act of comparing comprises examining aspects of individual ones of said plurality of content for similarity to said various aspects in said user profile. 5. The method of claim 4, wherein said act of selecting individual ones of said plurality of content for provision to said user based at least in part on said act of comparing comprises storing information relating to individual ones of said plurality of content having a threshold level of similarity to said various aspects in said user profile. 6. 
The method of claim 5, further comprising using said stored information relating to individual ones of said plurality of content to generate a list, said list comprising: content identification information; content location; and content accessing information; and wherein said list is prioritized based at least in part on said act of comparing said information regarding said plurality of content to said set of criteria. 7. The method of claim 6, wherein said act of providing said selected content to said user comprises: displaying a portion of said content identification information to a user display; providing a mechanism for the selection of individual ones of said content; utilizing at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said user display. 8. The method of claim 5, wherein said act of providing said selected content to said user comprises: providing a virtual channel accessible by said user; providing a mechanism to utilize at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said virtual channel; and wherein said content is displayed in order dictated by said list. 9. The method of claim 5, wherein at least one of said content provided to said user comprises purchasable content; and wherein said act of providing said selected content to said user comprises: providing said content identification information; allowing the selection of said purchasable content; providing at least one user confirmation for the purchase of said content; providing a mechanism to utilize at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said user display. 10. The method of claim 4, further comprising modifying said user profile based at least in part on at least one user action. 11. 
The method of claim 10, wherein said at least one user action is weighted depending on a classification thereof. 12. The method of claim 10, wherein said user actions comprise at least two of a group consisting of: (i) viewing said content; (ii) navigating away from said content; (iii) recording said content; (iv) deleting said content; and (v) rejecting recommendations to view said content. 13. The method of claim 4, further comprising modifying said user profile based at least in part on user feedback, said user feedback comprising instructions relating to the user's impression of said provided content. 14. The method of claim 1, wherein said plurality of content comprises programming content and advertising content. 15. For use in a content based network, an apparatus for delivery of targeted content, comprising: a processor, said processor adapted to run at least one software process thereon, said software process adapted to: receive information related to a plurality of available content; compare said information relating to a plurality of available content to a standard; select individual ones of said plurality of available content for provision to a user based at least in part on said act of comparing; and deliver said selected content to said user; a network interface in data communication with said processor; and a storage device in data communication with said processor. 16. The apparatus of claim 15, wherein said apparatus comprises a converged premises device (CPD). 17. The apparatus of claim 15, wherein said information related to said plurality of available content comprises metadata rendered at least partly in a human-readable form. 18. The apparatus of claim 15, wherein said comparison comprises: generating records regarding various aspects of each of said plurality of available content; and utilizing said records to find matches between said various aspects of said available content and various aspects of said standard. 19. 
The apparatus of claim 18, wherein said standard comprises a user-based profile, and wherein said storage device in data communication with said processor is adapted to store at least a portion of said user-based profile. 20. The apparatus of claim 19, wherein said user-based profile is modified based at least in part on user actions, said user actions comprising at least one of: (i) viewing said content; (ii) entering said user's impression of displayed content; (iii) navigating away from said content; (iv) recording said content; (v) deleting said content; and (vi) rejecting recommendations to view said content. 21. The apparatus of claim 18, wherein said selected individual ones of said plurality of available content are compiled into a selected content list, said selected content list having entries prioritized based at least in part on the results of said act of comparing. 22. The apparatus of claim 21, wherein said delivery comprises displaying said selected content list on a display device in data communication with said apparatus; and wherein said software process is further adapted to enable a user to choose one or more of said content in said prioritized list for delivery. 23. The apparatus of claim 21, wherein said delivery comprises displaying content associated with each of said selected content in said selected content list on a virtual channel. 24. The apparatus of claim 22, wherein said display device in data communication with said apparatus comprises an Internet site in data communication with said software process. 25. The apparatus of claim 23, wherein said standard comprises an individual one of said plurality of content. 26. 
In a content-based network comprising a plurality of content sources, a method of generating a subset of content elements having features consistent with a set of criteria, the method comprising: retrieving metadata regarding content elements from a content source associated therewith; determining similarity of said content metadata to said set of criteria; placing content elements having a threshold level of similarity in a list, said list arranged by similarity level; displaying at least one of said content elements in said list; interpreting a user action; updating said set of criteria to reflect said user action; and determining similarity of said content metadata to said updated set of criteria. 27. A method of doing business in a content-based network comprising: receiving an ensemble of content elements from a plurality of content sources; generating a navigable electronic program guide of said ensemble of content elements; comparing at least portions of said ensemble of content elements to a prescribed set of criteria; storing information regarding individual ones of said ensemble of content elements; and providing results of said comparison to a user. 28. The method of claim 27, wherein said act of comparing comprises utilizing metadata transmitted with and relating to said ensemble of content elements to find matches to said prescribed set of criteria, and said stored information regarding individual ones of said ensemble of content elements comprises at least content identification information, content location information, and content accessing information. 29. 
The method of claim 28, wherein said act of providing results of said comparison to said user comprises: providing at least a portion of said content identification information for display on a display device associated with said user, said display device providing a means for the selection of individual ones of said content; utilizing at least said content location and content accessing information to locate and access said selected individual ones of said content; and providing said selected individual ones of said content to said display device. 30. The method of claim 29, wherein said act of providing said selected content to said display device comprises: providing a virtual channel accessible by said user; providing a mechanism to utilize at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said virtual channel; and wherein said content is displayed in order dictated by said list. 31. For use in a content-based network, a premises device adapted to generate an electronic program guide comprising a plurality of available content, said device comprising: apparatus for generating a navigable schedule of said available content; apparatus for navigating said navigable schedule of available content; apparatus for displaying a representative icon for each available content in said schedule; and apparatus for displaying as a background a program stream over which said electronic program guide is displayed; wherein said schedule of available content comprises at least content broadcast within a predetermined period of time, said predetermined period of time including future, present, and past broadcasts. 32. The device of claim 31, wherein said representative icon comprises a recognizable picture related to said content and is further accompanied by at least one of: a textual description of said content; and a video clip representative of said content. 33. 
The device of claim 31, wherein said electronic program guide is further adapted to comprise at least one tool with a function selected from a group consisting of: (i) accessing more information regarding a selected program; (ii) starting a program over from its beginning during the time block a program is set to broadcast; (iii) setting an alert or reminder for at least one program having a broadcast time in the future; (iv) receiving a short program clip regarding a selected content; (v) rating content; and (vi) viewing descriptions of previous episodes of content in a series. 34. The device of claim 31, further comprising a recommendation tool adapted to compare a selected one of said available content to at least one of: said plurality of available content; and a user profile, wherein said recommendation tool is further adapted to display a recommendation based on the results of said comparison. 35. The device of claim 31, wherein said navigable schedule of available content comprises a one-day schedule of content from a single content source. 36. The device of claim 31, wherein said navigable schedule of content comprises content bearing a threshold level of similarity to a user profile. 37. The device of claim 36, further comprising a personal timeline, wherein said user is able to select content from said navigable schedule for placement in said personal timeline. 38. 
Computer readable apparatus comprising media adapted to contain a computer program having a plurality of instructions, said plurality of instructions which, when executed: request a plurality of available content from a plurality of content sources; generate a navigable schedule of content; link each content item in said schedule of content to a plurality of information regarding said content; link each content item in said schedule of content to a plurality of tools operable by a user via a user interface; and display said navigable schedule of content on top of a currently displayed program stream, said display comprising a user interface. 39. The computer readable apparatus of claim 38, wherein said act of generating a navigable schedule of content further comprises utilizing metadata relating to said content to determine similarity to a prescribed set of criteria. 40. The computer readable apparatus of claim 39, wherein said computer program is further adapted to: display a personal timeline, said timeline comprising a plurality of date and time place holders; enable said user to select content from said navigable schedule of content for placement into said various date and time place holders; and display content from said personal timeline at the date and time given by said placeholders. 41. The computer readable apparatus of claim 38, wherein said plurality of information comprises at least one of: (i) an icon representative of at least one of said plurality of available content; (ii) a text description of at least one of said plurality of available content; and (iii) a video clip related to at least one of said plurality of available content. 42. 
The computer readable apparatus of claim 38, wherein said plurality of information comprises at least one of a group consisting of: (i) identification information relating to at least one of said plurality of available content; (ii) information describing a location of at least one of said plurality of available content; and (iii) information useful in accessing at least one of said plurality of available content. 43. The computer readable apparatus of claim 38, wherein at least one of said plurality of tools operable by said user via said user interface comprises at least one function selected from a group consisting of: (i) accessing more information regarding a selected one of said plurality of available content; (ii) starting said selected content over from its beginning during the time block said content is set to broadcast; and (iii) viewing at least one of said plurality of available content by selecting said content during the time block said content is set to broadcast. 44. The computer readable apparatus of claim 38, wherein at least one of said plurality of tools operable by said user via said user interface comprises at least one function selected from a group consisting of: (i) setting an alert or reminder for at least one of said plurality of available content having a broadcast time in the future; (ii) receiving a short program clip regarding said selected content; (iii) rating said selected content; (iv) viewing descriptions of previous episodes of said selected content; and (v) viewing at least a portion of previous episodes of said selected content.
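The selection loop described in claims 4–6 and 26 (score each content element's metadata against a user profile, keep elements above a similarity threshold, and arrange the result by similarity level) can be sketched as follows. This is a minimal sketch under my own assumptions; the function names, the match-fraction scoring, and the 0.5 default threshold are illustrative, not from the claims.

```python
# Illustrative sketch (identifiers and scoring are hypothetical) of the
# metadata-vs-profile selection loop in claims 4-6 and 26.

def similarity(metadata, profile):
    # Fraction of profile aspects (e.g. genre, network) matched by the metadata.
    matches = sum(1 for k, v in profile.items() if metadata.get(k) == v)
    return matches / len(profile)

def build_playlist(catalog, profile, threshold=0.5):
    # Claim 5: keep only content with a threshold level of similarity.
    kept = [(similarity(item["metadata"], profile), item) for item in catalog]
    kept = [(score, item) for score, item in kept if score >= threshold]
    # Claim 26: the list is arranged by similarity level.
    kept.sort(key=lambda pair: pair[0], reverse=True)
    return [item["title"] for _, item in kept]

profile = {"genre": "drama", "network": "HBO"}
catalog = [
    {"title": "A", "metadata": {"genre": "drama", "network": "HBO"}},
    {"title": "B", "metadata": {"genre": "drama", "network": "AMC"}},
    {"title": "C", "metadata": {"genre": "news", "network": "CNN"}},
]
print(build_playlist(catalog, profile))  # ['A', 'B']
```

Raising the threshold narrows the playlist, which is how a stricter user profile would be honored under this sketch.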
Network content delivery apparatus and methods based on content compiled from various sources and particularly selected for a given user. In one embodiment, the network comprises a cable television network, and the content sources include DVR, broadcast, nPVR, and VOD. The user-targeted content is assembled into a playlist, and displayed as a continuous stream on a virtual channel particular to that user. User interfaces accessible through the virtual channel present various functional options, including the selection or exploration of content having similarity or prescribed relationships to other content, and the ability to order purchasable content. An improved electronic program guide is also disclosed which allows a user to start over, record, view, receive information on, “catch up”, and rate content. Apparatus for remote access and configuration of the playlist and virtual channel functions, as well as a business rules “engine” implementing operational or business goals, are also disclosed.1. In a content based network, a method of providing targeted to a user, comprising: receiving information regarding a plurality of content; comparing the information to a set of criteria; selecting individual ones of said plurality of content for provision to said user based at least in part on said act of comparing; and providing said selected individual ones of said plurality of content to said user; wherein said content comprises content obtained from a plurality of different sources. 2. The method of claim 1, wherein said content sources are selected from the group consisting of: (i) an on demand content source; (ii) a broadcast program content source; (iii) a digital video recorder; (iv) a personal media content storage device; and (v) a pay-per view content source. 3. The method of claim 1, wherein said information comprises metadata transmitted with and relating to individual ones of said plurality of content. 4. 
The method of claim 1, wherein said set of criteria comprises a user profile having information about said user relating to various aspects of said content, and said act of comparing comprises examining aspects of individual ones of said plurality of content for similarity to said various aspects in said user profile. 5. The method of claim 4, wherein said act of selecting individual ones of said plurality of content for provision to said user based at least in part on said act of comparing comprises storing information relating to individual ones of said plurality of content having a threshold level of similarity to said various aspects in said user profile. 6. The method of claim 5, further comprising using said stored information relating to individual ones of said plurality of content to generate a list, said list comprising: content identification information; content location; and content accessing information; and wherein said list is prioritized based at least in part on said act of comparing said information regarding said plurality of content to said set of criteria. 7. The method of claim 6, wherein said act of providing said selected content to said user comprises: displaying a portion of said content identification information to a user display; providing a mechanism for the selection of individual ones of said content; utilizing at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said user display. 8. The method of claim 5, wherein said act of providing said selected content to said user comprises: providing a virtual channel accessible by said user; providing a mechanism to utilize at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said virtual channel; and wherein said content is displayed in order dictated by said list. 9. 
The method of claim 5, wherein at least one of said content provided to said user comprises purchasable content; and wherein said act of providing said selected content to said user comprises: providing said content identification information; allowing the selection of said purchasable content; providing at least one user confirmation for the purchase of said content; providing a mechanism to utilize at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said user display. 10. The method of claim 4, further comprising modifying said user profile based at least in part on at least one user action. 11. The method of claim 10, wherein said at least one user action is weighted depending on a classification thereof. 12. The method of claim 10, wherein said user actions comprise at least two of a group consisting of: (i) viewing said content; (ii) navigating away from said content; (iii) recording said content; (iv) deleting said content; and (v) rejecting recommendations to view said content. 13. The method of claim 4, further comprising modifying said user profile based at least in part on user feedback, said user feedback comprising instructions relating to the user's impression of said provided content. 14. The method of claim 1, wherein said plurality of content comprises programming content and advertising content. 15. 
For use in a content based network, an apparatus for delivery of targeted content, comprising: a processor, said processor adapted to run at least one software process thereon, said software process adapted to: receive information related to a plurality of available content; compare said information relating to a plurality of available content to a standard; select individual ones of said plurality of available content for provision to a user based at least in part on said act of comparing; and deliver said selected content to said user; a network interface in data communication with said processor; and a storage device in data communication with said processor. 16. The apparatus of claim 15, wherein said apparatus comprises a converged premises device (CPD). 17. The apparatus of claim 15, wherein said information related to said plurality of available content comprises metadata rendered at least partly in a human-readable form. 18. The apparatus of claim 15, wherein said comparison comprises: generating records regarding various aspects of each of said plurality of available content; and utilizing said records to find matches between said various aspects of said available content and various aspects of said standard. 19. The apparatus of claim 18, wherein said standard comprises a user-based profile, and wherein said storage device in data communication with said processor is adapted to store at least a portion of said user-based profile. 20. The apparatus of claim 19, wherein said user-based profile is modified based at least in part on user actions, said user actions comprising at least one of. (i) viewing said content; (ii) entering said user's impression of displayed content; (iii) navigating away from said content; (iv) recording said content; (v) deleting said content; and (vi) rejecting recommendations to view said content. 21. 
The apparatus of claim 18, wherein said selected individual ones of said plurality of available content are compiled into a selected content list, said selected content list having entries prioritized based at least in part on the results of said act of comparing. 22. The apparatus of claim 21, wherein said delivery comprises displaying said selected content list on a display device in data communication with said apparatus; and wherein said software process is further adapted to enable a user to choose one or more of said content in said prioritized list for delivery. 23. The apparatus of claim 21, wherein said delivery comprises displaying content associated with each of said selected content in said selected content list on a virtual channel. 24. The apparatus of claim 22, wherein said display device in data communication with said apparatus comprises an Internet site in data communication with said software process. 25. The apparatus of claim 23, wherein said standard comprises an individual one of said plurality of content. 26. In a content-based network comprising a plurality of content sources, a method of generating a subset of content elements having features consistent with a set of criteria, the method comprising: retrieving metadata regarding content elements from a content source associated therewith; determining similarity of said content metadata to said set of criteria, placing content elements having a threshold level of similarity in a list, said list arranged by similarity level; displaying at least one of said content elements in said list; interpreting a user action; updating said set of criteria to reflect said user action; and determining similarity of said content metadata to said updated set of criteria. 27. 
A method of doing business in a content-based network comprising: receiving an ensemble of content elements from a plurality of content sources; generating a navigable electronic program guide of said ensemble of content elements; comparing at least portions of said ensemble of content elements to a prescribed set of criteria; storing information regarding individual ones of said ensemble of content elements; and providing results of said comparison to a user. 28. The method of claim 27, wherein said act of comparing comprises utilizing metadata transmitted with and relating to said ensemble of content elements to find matches to said prescribed set of criteria and said stored information regarding individual ones of said ensemble of content elements comprises at least content identification information, content location information, and content accessing information. 29. The method of claim 28, wherein act of providing results of said comparison to said user comprises: providing at least a portion of said content identification information for display on a display device associated with said user, said display device providing a means for the selection of individual ones of said content; utilizing at least said content location and content accessing information to locate and access said selected individual ones of said content; and providing said selected individual ones of said content to said display device. 30. The method of claim 29, wherein said act of providing said selected content to said display device comprises: providing a virtual channel accessible by said user; providing a mechanism to utilize at least said content location and content accessing information to locate and access said content; and displaying said content to said user at said virtual channel; and wherein said content is displayed in order dictated by said list. 31. 
For use in a content-based network, a premises device adapted to generate an electronic program guide comprising a plurality of available content, said device comprising: apparatus for generating a navigable schedule of said available content; apparatus for navigating said navigable schedule of available content; apparatus for displaying a representative icon for each available content in said schedule; and apparatus for displaying as a background a programs stream over which said electronic program guide is displayed; wherein said schedule of available content comprises at least content broadcast within a predetermined period of time, said predetermined period of time including fiture, present, and past broadcasts. 32. The device of claim 31, wherein said representative icon comprises a recognizable picture related to said content and is further accompanied by at least one of: a textual description of said content; and a video clip representative of said content. 33. The device of claim 31, wherein said electronic program guide is further adapted to comprise at least one tool with a function selected from a group consisting of: (i) accessing more information regarding a selected program; (ii) starting a program over from its beginning during the time block a program is set to broadcast; (iii) setting an alert or reminder for at least one program having a broadcast time in the future; (iv) receiving a short program clip regarding a selected content; (v) rating content; and (vi) viewing descriptions of previous episodes of content in a series. 34. The device of claim 31, further comprising a recommendation tool adapted to compare a selected one of said available content to at least one of: said plurality of available content; and a user profile, wherein said recommendation tool is further adapted to display a recommendation based on the results of said comparison. 35. 
The device of claim 31, wherein said navigable schedule of available content comprises a one-day schedule of content from a single content source. 36. The device of claim 31, wherein said navigable schedule of content comprises content bearing a threshold level of similarity to a user profile. 37. The device of claim 36, further comprising a personal timeline, wherein said user is able to select content from said navigable schedule for placement in said personal timeline. 38. Computer readable apparatus comprising media adapted to contain a computer program having a plurality of instructions, said plurality of instructions which, when executed: request a plurality of available content from a plurality of content sources; generate a navigable schedule of content; link each content item in said schedule of content to a plurality of information regarding said content; link each content item in said schedule of content to a plurality of tools operable by a user via a user interface; and display said navigable schedule of content on top of a currently displayed program stream, said display comprising a user interface. 39. The computer readable apparatus of claim 38, wherein said act of generating a navigable schedule of content further comprises utilizing metadata relating to said content to determine similarity to a prescribed set of criteria. 40. The computer readable apparatus of claim 39, wherein said computer program is further adapted to: display a personal timeline, said timeline comprising a plurality of date and time place holders; enable said user to select content from said navigable schedule of content for placement into said various date and time place holders; and display content from said personal timeline at the date and time given by said placeholders. 41. 
The computer readable apparatus of claim 38, wherein said plurality of information comprises at least one of: (i) an icon representative of at least one of said plurality of available content; (ii) a text description of at least one of said plurality of available content; and (iii) a video clip related to at least one of said plurality of available content. 42. The computer readable apparatus of claim 38, wherein said plurality of information comprises at least one of a group consisting of: (i) identification information relating to at least one of said plurality of available content; (ii) information describing a location of at least one of said plurality of available content; and (iii) information useful in accessing at least one of said plurality of available content. 43. The computer readable apparatus of claim 38, wherein at least one of said plurality of tools operable by said user via said user interface comprises at least one function selected from a group consisting of: (i) accessing more information regarding a selected one of said plurality of available content; (ii) starting said selected content over from its beginning during the time block said content is set to broadcast; and (iii) viewing at least one of said plurality of available content by selecting said content during the time block said content is set to broadcast. 44. The computer readable apparatus of claim 38, wherein at least one of said plurality of tools operable by said user via said user interface comprises at least one function selected from a group consisting of: (i) setting an alert or reminder for at least one of said plurality of available content having a broadcast time in the future; (ii) receiving a short program clip regarding said selected content; (iii) rating said selected content; (iv) viewing descriptions of previous episodes of said selected content; and (v) viewing at least a portion of previous episodes of said selected content.
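The metadata-matching step recited in these claims (comparing content elements against a prescribed set of criteria, then storing identification, location, and access information for the matches) can be sketched roughly as follows. All field names and the dict-based layout here are illustrative assumptions, not taken from the patent itself:

```python
def match_content(elements, criteria):
    """Return elements whose metadata satisfies every prescribed criterion.

    `elements` is a list of dicts with 'id', 'location', 'access', and
    'metadata' keys; `criteria` maps metadata keys to required values.
    These names are hypothetical, chosen only for the sketch.
    """
    results = []
    for elem in elements:
        meta = elem.get("metadata", {})
        if all(meta.get(k) == v for k, v in criteria.items()):
            # Keep the identification, location, and access information
            # that later steps would use to locate and retrieve the content.
            results.append({
                "id": elem["id"],
                "location": elem["location"],
                "access": elem["access"],
            })
    return results
```

A guide or recommendation tool could then display the `id` fields and use `location`/`access` to fetch a selected item, mirroring the claim's division between identification and retrieval information.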
2,400
7,923
7,923
14,861,160
2,454
A system and method for creating a document in a messaging environment are described. A communication including a document specification including zero or more formatting commands and content is received from a sender and processed. The system and method determine whether the document specification is in a done condition, and iterate until done. A formatted document is also created and returned to the sender and recipients.
1. A computer-implemented method for creating a document, the method comprising: receiving a first communication from a first sender, the first communication identifying zero or more recipients, the first communication including a document specification including zero or more formatting commands and content; parsing the first communication, the parsing including determining whether the document specification is in a done condition; applying the zero or more formatting commands to the content, wherein the applying comprises producing a document; storing the document; transmitting information for accessing the document to the first sender, and to the zero or more recipients; and iterating until the document specification is in a done condition. 2. The computer-implemented method of claim 1, wherein the receiving the first communication is via email. 3. The computer-implemented method of claim 1, the first communication further comprising one or more address fields, the zero or more recipients identified in the one or more address fields, the one or more address fields corresponding to one or more document access privileges for the zero or more recipients. 4. The computer-implemented method of claim 1, wherein the transmitting information for accessing the document includes transmitting the document itself. 5. The computer-implemented method of claim 1, wherein the zero or more formatting commands comprise plain text. 6. The computer-implemented method of claim 1, further comprising in response to receiving the zero or more formatting commands, transmitting to the first sender, instructions for creating a document specification. 7. The computer-implemented method of claim 6, wherein the instructions for creating a document specification include a template. 8. The computer-implemented method of claim 1, wherein the document specification includes a template. 9. 
The computer-implemented method of claim 1, wherein the iterating includes revising the document specification and transmitting the revised document specification to the first sender. 10. The computer-implemented method of claim 9, further comprising transmitting the revised document specification to the zero or more recipients. 11. The computer-implemented method of claim 9, wherein the revising the document specification further comprises stamping it with a version indicator. 12. The computer-implemented method of claim 9, wherein the zero or more formatting commands is erroneous, and revising the document specification includes identifying errors corresponding to the zero or more formatting commands. 13. The computer-implemented method of claim 1, the method further comprising applying one or more access permissions to the document corresponding to the first sender and the zero or more recipients. 14. The computer-implemented method of claim 13, further comprising receiving a second communication from a second sender, wherein the access permissions prohibit the second sender from modifying the document, and rejecting the second communication. 15. The computer-implemented method of claim 13, further comprising receiving a second communication from a second sender, the second communication including one or more updates to the document specification, wherein the access permissions permit the second sender to modify the document, and tracking and associating the one or more updates to the document specification with the second sender. 16. The computer-implemented method of claim 13, further comprising receiving a third communication from the first sender, the third communication including one or more instructions for modifying the access permissions, and in response to the one or more instructions, modifying the access permissions. 17. 
The computer-implemented method of claim 13, wherein the access permissions prohibit zero or more recipients from modifying the document. 18. The computer-implemented method of claim 13, wherein applying one or more access permissions further comprises creating a password. 19. A system comprising: one or more computers configured to perform operations including: receiving a first communication from a first sender, the first communication identifying zero or more recipients, the first communication including a document specification including zero or more formatting commands and content; parsing the first communication, the parsing including determining whether the document specification is in a done condition; applying the zero or more formatting commands to the content, wherein the applying comprises producing a document; storing the document; transmitting information for accessing the document to the first sender, and to the zero or more recipients; and iterating until the document specification is in a done condition.
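The iterate-until-done loop of claim 1 (apply formatting commands to content, produce a document, repeat until the specification reaches a done condition) can be sketched minimally as below. The plain-text command names (`upper`, `title`) and the tuple layout standing in for successive communications are assumptions for illustration only:

```python
def apply_commands(commands, content):
    """Apply zero or more plain-text formatting commands to the content."""
    doc = content
    for cmd in commands:
        if cmd == "upper":
            doc = doc.upper()
        elif cmd == "title":
            doc = doc.title()
    return doc


def process_specifications(specs):
    """Iterate over successive document specifications until one is done.

    `specs` is a sequence of (commands, content, done) tuples, each tuple
    standing in for one communication from the sender.
    """
    document = ""
    for commands, content, done in specs:
        document = apply_commands(commands, content)  # produce the document
        if done:  # specification reached its done condition
            break
    return document
```

In the claimed system each iteration would also store the document and transmit access information to the sender and recipients; those steps are omitted here to keep the control flow visible.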
2,400
7,924
7,924
13,220,535
2,487
An unmanned aerial vehicle (UAV) includes a fuselage, a gimbal-mounted turret having one or more degrees of freedom relative to the fuselage, a camera disposed in the gimbal-mounted turret for motion therewith in the one or more degrees of freedom, and a central video image processor disposed exteriorly of the gimbal-mounted turret, the central video image processor configured to receive and process image data from the camera.
1. An unmanned aerial vehicle (UAV) comprising: a fuselage; a gimbal-mounted turret having one or more degrees of freedom relative to the fuselage; a camera disposed in the gimbal-mounted turret for motion therewith in the one or more degrees of freedom; and a central video image processor disposed exteriorly of the gimbal-mounted turret, the central video image processor configured to receive and process image data from the camera. 2. The UAV of claim 1, wherein the central video image processor includes one or more of the following modules: a de-mosaicing module, a video conditioning module, individual frame display information module, and template matching module. 3. The UAV of claim 1, further comprising an additional camera mounted to the aircraft and coupled to the central video image processor, the central video image processor configured to receive and process image data from the additional camera. 4. The UAV of claim 1, wherein the additional camera is a landing camera. 5. A surveillance method comprising: capturing image information using a gimbaled camera mounted in a turret exterior to an aircraft fuselage; transmitting the captured image information to a central image processor disposed in the aircraft fuselage; and processing the transmitted captured image information in the central image processor. 6. The method of claim 5, further comprising transmitting information processed by the central image processor to a remote location. 7. The method of claim 5, further comprising capturing image information using an additional camera mounted exteriorly of the fuselage and transmitting the captured image information from the additional camera to a central image processor disposed in the aircraft fuselage. 8. 
A device comprising: means for capturing image information using a gimbaled camera mounted in a turret exterior to an aircraft fuselage; means for transmitting the captured image information to a central image processor disposed in the aircraft fuselage; and means for processing the transmitted captured image information in the central image processor. 9. The device of claim 8, further comprising means for transmitting information processed by the central image processor to a remote location. 10. The device of claim 8, further comprising means for capturing image information using an additional camera mounted exteriorly of the fuselage and transmitting the captured image information from the additional camera to a central image processor disposed in the aircraft fuselage.
2,400
7,925
7,925
15,176,727
2,481
A data processing system for performing motion estimation in a sequence of frames comprising first and second frames each divided into respective sets of blocks of pixels, the system comprising: a vector generator configured to form motion vector candidates representing mappings of pixels between the first and second frames; and a vector processor configured to, for a search block of the first frame, identify a first motion vector candidate ending in a block of the second frame collocated with the search block and form an output vector for the search block which is substantially parallel to the first motion vector candidate and represents a mapping of pixels from the search block to the second frame.
1. A data processing system for performing motion estimation in a sequence of frames comprising first and second frames each divided into respective sets of blocks of pixels, the system comprising: a vector generator configured to form motion vector candidates representing mappings of pixels between the first and second frames; and a vector processor configured to, for a search block of the first frame, identify a first motion vector candidate ending in a block of the second frame collocated with the search block and form an output vector for the search block which is substantially parallel to the first motion vector candidate and represents a mapping of pixels from the search block to the second frame. 2. A data processing system as claimed in claim 1, wherein the vector processor is configured to form the output vector if no motion vector candidates are available for the search block. 3. A data processing system as claimed in claim 1, further comprising a candidate assessor configured to calculate a score for each motion vector candidate, each score being a measure of the similarity of the pixels in the first and second frames at each end of the respective motion vector candidate, and the candidate assessor being configured to cause the vector processor to form the output vector if: the score of each motion vector candidate available for the search block is indicative of a low similarity between the pixels at the endpoints of that motion vector candidate; or no motion vector candidates are available for the search block; the data processing system being configured to use the output vector as a vector describing the mapping of pixels from the search block to the second frame. 4. 
A data processing system as claimed in claim 3, wherein the candidate assessor is configured to, if at least one motion vector candidate is available for the search block having a score indicative of a high similarity between the pixels at its endpoints, provide for use as a vector describing the mapping of pixels from the search block to the second frame the motion vector candidate having a score indicative of greatest similarity between its endpoint pixels. 5. A data processing system as claimed in claim 1, further comprising a candidate assessor configured to calculate a score for each motion vector candidate, each score being a measure of the similarity of the pixels in the first and second frames at each end of the respective motion vector candidate, wherein the vector processor is configured to add the output vector to any motion vector candidates available for the search block and the data processing system being configured to use as a vector describing the mapping of pixels from the search block to the second frame the vector having a score indicative of greatest similarity between its endpoint pixels, the output vector being assigned a predefined score or a score formed in dependence on the score of the first motion vector candidate. 6. A data processing system as claimed in claim 5, wherein the vector processor is configured to process each block of the first frame as a search block so as to, in each case that a motion vector candidate ends in a respective collocated block of the second frame, form an output vector for that block of the first frame. 7. A data processing system as claimed in claim 1, wherein the motion vector candidates include single-ended vectors originating at blocks of the first and/or second frames. 8. A data processing system as claimed in claim 1, wherein the motion vector candidates include double-ended vectors originating at blocks of an interpolated frame between the first and second frames. 9. 
A data processing system as claimed in claim 1, wherein the vector processor is configured to identify the collocated block of the second frame as a block of the second frame which is located at a frame position corresponding to the search block of the first frame. 10. A data processing system as claimed in claim 1, wherein the motion vector candidates include single-ended motion vector candidates and the vector generator is configured to form a single-ended motion vector candidate for a block of the second frame by identifying an area of pixels in the first frame which most closely matches the pixels of the block of the second frame, and/or to form a single-ended motion vector candidate for a block of the first frame by identifying an area of pixels in the second frame which most closely matches the pixels of the block of the first frame. 11. A data processing system as claimed in claim 1, wherein the motion vector candidates include double-ended motion vector candidates and the vector generator is configured to form a double-ended motion vector candidate for a block of an interpolated frame between the first and second frames by identifying matching areas of pixels in the first and second frames, the areas of pixels in the first and second frames having a predefined relationship to the block of the interpolated frame. 12. A data processing system as claimed in claim 1, wherein the vector processor is configured to form the output vector using pixel data from the first and second frames only. 13. A data processing system as claimed in claim 1, wherein the vector processor is configured to form the output vector using motion vector candidates generated in respect of the span between the first and second frames only. 14. 
A data processing system as claimed in claim 1, wherein the vector processor is configured to determine the direction of the output vector further in dependence on the directions of one or more motion vector candidates ending in blocks neighbouring the collocated block of the second frame. 15. A data processing system as claimed in claim 1, wherein the output vector is a single-ended vector originating at the search block of the first frame. 16. A data processing system as claimed in claim 1, wherein the output vector is a double-ended vector originating at a block of an interpolated frame between the first and second frames and having an endpoint at the search block of the first frame. 17. A data processing system as claimed in claim 1, further comprising interpolation logic configured to operate the output vector on pixels of the search block so as to generate a block of an interpolated frame between the first and second frames. 18. A data processing system as claimed in claim 1, further comprising encoding logic configured to use the output vector in the generation of an encoded video stream. 19. A method for performing motion estimation in a sequence of frames, the sequence comprising first and second frames each divided into respective sets of blocks of pixels, the method comprising: forming motion vector candidates representing mappings of pixels between the first and second frames; and for a search block of the first frame: identifying a first motion vector candidate ending in a block of the second frame collocated with the search block of the first frame; and forming an output vector for the search block which is substantially parallel to the first motion vector candidate and represents a mapping of pixels from the search block to the second frame. 20. 
A non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a processor, cause the processor to implement the method of performing motion estimation in a sequence of frames, the sequence comprising first and second frames each divided into respective sets of blocks of pixels, the method comprising: forming motion vector candidates representing mappings of pixels between the first and second frames; and for a search block of the first frame: identifying a first motion vector candidate ending in a block of the second frame collocated with the search block of the first frame; and forming an output vector for the search block which is substantially parallel to the first motion vector candidate and represents a mapping of pixels from the search block to the second frame.
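The core fallback in claim 1 of the motion-estimation system (when forming a vector for a search block, identify a candidate ending in the collocated block of the second frame and emit an output vector for the search block parallel to it) can be sketched as follows. The coordinate-keyed dict layout is an assumption made for the sketch, not the patent's representation:

```python
def output_vector(search_block, candidates):
    """Form a vector for `search_block` parallel to a candidate ending there.

    `search_block` is the (x, y) block position in the first frame.
    `candidates` maps an end-block position (x, y) in the second frame to
    that candidate's (dx, dy) motion. The collocated block of the second
    frame is the block at the same frame position as the search block.
    Returns None when no candidate ends in the collocated block.
    """
    collocated = search_block  # same frame position, second frame
    if collocated in candidates:
        dx, dy = candidates[collocated]
        # The output vector originates at the search block and is
        # substantially parallel to the identified candidate.
        return (dx, dy)
    return None
```

In the full system this output vector would be scored alongside any other candidates (e.g. by pixel similarity at its endpoints, as in claims 3 and 5) rather than used unconditionally.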
A non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a processor, cause the processor to implement the method of performing motion estimation in a sequence of frames, the sequence comprising first and second frames each divided into respective sets of blocks of pixels, the method comprising: forming motion vector candidates representing mappings of pixels between the first and second frames; and for a search block of the first frame: identifying a first motion vector candidate ending in a block of the second frame collocated with the search block of the first frame; and forming an output vector for the search block which is substantially parallel to the first motion vector candidate and represents a mapping of pixels from the search block to the second frame.
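The core step of the motion estimation method claimed above is: for a search block of the first frame, find a motion vector candidate ending in the collocated block of the second frame, then form an output vector for the search block that is substantially parallel to that candidate. A minimal sketch of that step, assuming a simple (origin, vector) tuple representation of candidates (the names and data layout are illustrative, not the patented implementation):

```python
# Illustrative sketch (not the patented method): forming an output vector for
# a search block that is parallel to a candidate ending in the collocated
# block of the second frame.

def find_collocated_candidate(candidates, search_block):
    """Return the vector of the first candidate whose endpoint lands in the
    block of the second frame collocated with the search block, or None."""
    for origin, vector in candidates:
        end = (origin[0] + vector[0], origin[1] + vector[1])
        if end == search_block:
            return vector
    return None

def form_output_vector(candidates, search_block):
    """Form an output vector originating at the search block, parallel to
    (same components as) the identified collocated candidate."""
    candidate = find_collocated_candidate(candidates, search_block)
    if candidate is None:
        return None
    return (search_block, candidate)

# A candidate originating at block (2, 3) of the first frame with motion
# (+1, 0) ends at block (3, 3) of the second frame -- collocated with the
# search block (3, 3) -- so the output vector copies its direction.
candidates = [((2, 3), (1, 0)), ((0, 0), (2, 2))]
print(form_output_vector(candidates, (3, 3)))  # ((3, 3), (1, 0))
```

The intuition is that an object moving with roughly constant velocity that arrives at the collocated block probably passed through (or near) the search block with the same motion, so re-anchoring the candidate at the search block is a reasonable fallback when no good candidate exists for that block.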
2,400
7,926
7,926
14,351,419
2,426
The present invention relates to an apparatus and a method for configuring a control message in a broadcasting system for supporting a multimedia service based on an internet protocol. To this end, control information to be recorded in a payload of a control message to be configured is generated based on a table selected from among a plurality of tables for defining the information related to the generation and consumption of a hybrid content when a message type of the control message to be configured is determined to be a type for providing information related to the generation and consumption of a hybrid content. In addition, the information related to the type of the selected table is recorded in an optional field of the control message to be configured.
1-16. (canceled) 17. An apparatus for providing a hybrid content, the apparatus comprising: a control message generator to generate a signaling message for consuming the hybrid content; a transmitter to transmit the signaling message generated by the control message generator to a hybrid content consuming device; and a controller to control the control message generator and the transmitter so as to generate the signaling message and to transmit the generated signaling message, wherein the control message generator generates the signaling message using a payload including at least one of a plurality of tables defining information required for consumption of the hybrid content, and a header including an extension field that records information related with the at least one table included in the payload, based on a type of the signaling message designated by the controller for the consumption of the hybrid content, and one of the plurality of tables is a Device Capability Information Table (DCIT) that defines information on required device capabilities for consuming a package/asset of the hybrid content. 18. The apparatus as claimed in claim 17, wherein the plurality of tables further includes a table that defines location information on the package of the hybrid content. 19. The apparatus as claimed in claim 18, wherein the plurality of tables further includes a table that defines composition information of the package of the hybrid content. 20. The apparatus as claimed in claim 17, wherein the information required for consumption of the hybrid content is MPEG MEDIA Transport (MMT) package/asset information. 21. 
An apparatus for consuming a hybrid content, the apparatus comprising: a receiver to receive a signaling message from a hybrid content provider; a control message processor to obtain information required for consuming the hybrid content from the signaling message received through the receiver; and a controller to control the receiver and the control message processor so as to receive the signaling message and to obtain the information required for consuming the hybrid content from the received signaling message, wherein the control message processor determines a type of the received signaling message based on header information of the received signaling message, and obtains information required for consuming the hybrid content defined by at least one table included in a payload of the received signaling message based on the determined type of the signaling message and information related with the at least one table recorded in an extension field forming the header information, and one of the plurality of tables, which is capable of being selected to be the at least one table included in the payload of the signaling message, is a Device Capability Information Table (DCIT) defining information on required device capabilities for consuming a package/asset of the hybrid content. 22. The apparatus as claimed in claim 21, wherein the plurality of tables further includes a table defining location information on the package of the hybrid content. 23. The apparatus as claimed in claim 22, wherein the plurality of tables further includes a table defining composition information of the package of the hybrid content. 24. The apparatus as claimed in claim 21, wherein the information required for consumption of the hybrid content is MPEG MEDIA Transport (MMT) package/asset information. 25. 
A method of providing a hybrid content in a content providing apparatus, the method comprising: generating a signaling message for consuming the hybrid content; and transmitting the generated signaling message to a hybrid content consuming device, wherein generating comprises generating the signaling message using a payload including at least one of a plurality of tables defining information required for consuming the hybrid content and a header including an extension field that records information related with the at least one table included in the payload, based on a type of the signaling message designated for the consumption of the hybrid content, and one of the plurality of tables is a Device Capability Information Table (DCIT) defining information on required device capabilities for consuming a package/asset of the hybrid content. 26. The method as claimed in claim 25, wherein the plurality of tables further includes a table defining location information on the package of the hybrid content. 27. The method as claimed in claim 26, wherein the plurality of tables further includes a table defining composition information of the package of the hybrid content. 28. The method as claimed in claim 25, wherein the information required for consumption of the hybrid content is MPEG MEDIA Transport (MMT) package/asset information. 29. 
A method of consuming a hybrid content in a hybrid content consuming apparatus, the method comprising: receiving a signaling message from a hybrid content provider; and obtaining information required for consuming the hybrid content from the received signaling message, wherein obtaining comprises: determining a type of the received signaling message based on header information of the received signaling message; and obtaining the information required for consuming the hybrid content defined by at least one table included in a payload of the received signaling message based on the determined type of the signaling message and information related with the at least one table recorded in an extension field forming the header information, and one of the plurality of tables, which is capable of being selected to be the at least one table included in the payload of the signaling message, is a Device Capability Information Table (DCIT) defining information on required device capabilities for consuming a package/asset of the hybrid content. 30. The method as claimed in claim 29, wherein the plurality of tables further includes a table defining location information on the package of the hybrid content. 31. The method as claimed in claim 30, wherein the plurality of tables further includes a table defining composition information of the package of the hybrid content. 32. The method as claimed in claim 29, wherein the information required for consumption of the hybrid content is MPEG MEDIA Transport (MMT) package/asset information.
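The consuming method claimed above reads the message type from the header, then uses the header's extension field to learn which tables (DCIT, location, composition) sit in the payload. The sketch below is an illustrative consumer only, not the claimed apparatus; the dict layout, table identifiers, and field names are assumptions for demonstration.

```python
# Illustrative sketch (assumed message layout, not the claimed apparatus):
# a header carries the message type plus an extension field listing which
# tables appear in the payload; the consumer reads each listed table.

TABLE_HANDLERS = {
    "DCIT": lambda t: t["required_capabilities"],      # Device Capability Information Table
    "LOCATION": lambda t: t["package_location"],       # package location table
    "COMPOSITION": lambda t: t["package_composition"], # package composition table
}

def consume_signaling_message(message):
    """Determine the message type from the header, then extract the
    information defined by each table the extension field points at."""
    header = message["header"]
    if header["type"] != "HYBRID_CONTENT_INFO":
        return {}
    info = {}
    for table_id in header["extension"]["tables"]:
        table = message["payload"][table_id]
        info[table_id] = TABLE_HANDLERS[table_id](table)
    return info

msg = {
    "header": {"type": "HYBRID_CONTENT_INFO",
               "extension": {"tables": ["DCIT"]}},
    "payload": {"DCIT": {"required_capabilities": ["HEVC", "HE-AAC"]}},
}
print(consume_signaling_message(msg))  # {'DCIT': ['HEVC', 'HE-AAC']}
```

The key design point mirrored here is that the receiver never has to sniff the payload: the header's extension field is authoritative about which tables are present, so unknown message types can be skipped cheaply.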
2,400
7,927
7,927
14,875,558
2,423
The various embodiments of the present disclosure are directed to devices, systems and methods for providing media content. Per one embodiment, a method includes receiving bookmarking data at a client device from a data processing and aggregating server, the bookmarking data referencing a specific point in a media content item that is playable from the client device; providing a first output signal based on the bookmarking data from the client device to a display device, the output signal causing the display device to display at least one bookmark that includes at least a user-selectable portion and a descriptive portion; receiving an input signal at the client device from a user input device, the input signal indicating a user selection of a bookmark displayed by the display device; and providing a second output signal from the client device to the display device responsive to the input signal, the second output signal causing the display device to display a media content item from a specific point specified by the bookmarking data.
1. A method of providing media content, comprising: receiving bookmarking data at a client device from a data processing and aggregating server, the bookmarking data referencing a specific point in a media content item that is playable from the client device; providing a first output signal based on the bookmarking data from the client device to a display device, the first output signal causing the display device to display at least one bookmark that includes at least one user-selectable portion and at least one descriptive portion; receiving an input signal at the client device from a user input device, the input signal indicating a user selection of a bookmark displayed by the display device; and providing a second output signal from the client device to the display device responsive to the input signal, the second output signal causing the display device to display a media content item from a specific point specified by the bookmarking data. 2. The method of claim 1, wherein the first output signal causes the display device to display a list of selectable bookmarks; wherein the list of selectable bookmarks includes a title and an indication of a length for each of the bookmarks. 3. The method of claim 2, wherein the second output signal causes the display device to additionally display the selected bookmark over a current playback position in a progress bar; wherein the selected bookmark includes the title and length for the selected bookmark. 4. The method of claim 1, wherein the first output signal causes the display device to display a list of bookmarks; wherein the list of bookmarks includes a title and start and stop times for each of the bookmarks. 5. The method of claim 4, wherein the second output signal causes the display device to additionally display the selected bookmark over a current playback position in a progress bar; wherein the selected bookmark includes the title and start and stop times for the selected bookmark. 6. 
The method of claim 1, wherein the first output signal causes the display device to display a list of bookmarks; wherein the list includes a title and a category of indicators for each of the bookmarks. 7. The method of claim 6, wherein the second output signal causes the display device to additionally display the selected bookmark over a current playback position in a progress bar; wherein the selected bookmark includes the title and category for the selected bookmark. 8. The method of claim 1, wherein the first output signal causes the display device to display a popup overlay on a grid of DVR items; wherein the popup overlay includes a button that when selected causes a second overlay to be displayed, the second overlay having one or more selectable bookmarks. 9. The method of claim 8, wherein the selectable bookmarks of the second overlay are arranged in a single column. 10. The method of claim 8, wherein the selectable bookmarks of the second overlay are arranged in two adjacent columns. 11. The method of claim 1, wherein the first output signal causes the display device to display a menu of bookmarks alongside a video output; wherein each bookmark in the menu includes a list of descriptors for the bookmark. 12. The method of claim 11, wherein the second output signal causes the display device to additionally display the selected bookmark over a current playback position in a progress bar; wherein the selected bookmark includes the list of descriptors for the bookmark. 13. The method of claim 1, wherein the media content item is a commercial having a beginning and an end and the second output signal causes the media content item to skip to the beginning of the commercial. 14. The method of claim 1, wherein the media content item is a commercial having a beginning and an end and the second output signal causes the media content item to skip to the end of the commercial. 15. 
The method of claim 14, wherein the second output signal is provided responsive to user input. 16. The method of claim 15, wherein the second output signal is automatically provided responsive to a commercial skipping setting. 17. The method of claim 1, wherein the bookmarking data specifies a location of age restricted content within the media content item. 18. A method of processing a request for bookmarking data, comprising: receiving a request for bookmark data from a client device; requesting information from one or more data sources to fulfill the request for bookmark data; receiving one or more replies from the data sources responsive to the request for information; processing the one or more responses received from the one or more sources to generate the bookmark data; and transmitting the bookmark data to the client device responsive to the request. 19. The method of claim 18, further comprising: determining if all data needed for fulfilling the request has been received from the data sources; and repeating the operations of determining and requesting information until all information needed for generating the requested bookmark has been received. 20. The method of claim 19, further comprising: prior to the operation of requesting information from one or more data sources, determining what information is needed to fulfill the request for bookmark data.
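Claims 18-20 above describe an aggregation loop on the server side: determine what information the bookmark request needs, query data sources, and repeat until every required field has arrived. A minimal sketch of that loop, with the source callables and field names invented purely for illustration:

```python
# Illustrative sketch (assumed names, not the patented server): gather the
# fields a bookmark request needs from several data sources, asking each
# source only for what is still missing, until the request can be fulfilled.

def fulfill_bookmark_request(required_fields, data_sources):
    """Query data sources in turn until all required fields are collected,
    then assemble the bookmark data in the requested field order."""
    collected = {}
    for source in data_sources:
        missing = [f for f in required_fields if f not in collected]
        if not missing:
            break                      # everything needed has been received
        collected.update(source(missing))
    missing = [f for f in required_fields if f not in collected]
    if missing:
        raise LookupError(f"could not obtain: {missing}")
    return {f: collected[f] for f in required_fields}

sources = [
    lambda fields: {"title": "Ad break"},            # e.g. an EPG-like source
    lambda fields: {"start": 120.0, "stop": 150.0},  # e.g. a timing source
]
bookmark = fulfill_bookmark_request(["title", "start", "stop"], sources)
print(bookmark)  # {'title': 'Ad break', 'start': 120.0, 'stop': 150.0}
```

The "determine what is needed before requesting" step of claim 20 corresponds to computing `missing` before each source call, so sources are only asked for data the server does not already hold.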
2,400
7,928
7,928
15,610,060
2,484
A device is disclosed. The device includes a plurality of ports to receive a plurality of audio streams, an audio content control unit configured to modify playback length of an audio content of at least one of the plurality of audio streams according to an input time interval, an audio decoder and a memory buffer coupled to the audio decoder and the audio content control unit. The memory buffer is used by the audio content control unit to buffer at least one of the plurality of audio streams.
1. A device, comprising: a plurality of ports to receive a plurality of audio or video (AV) streams; an AV content control unit configured to modify playback length of an AV content of at least one of the plurality of AV streams according to an input time interval; an AV decoder; and a memory buffer coupled to the AV decoder and the AV content control unit, wherein the memory buffer is used by the AV content control unit to buffer at least one of the plurality of AV streams. 2. The device of claim 1, further including a port to receive input from a navigation device. 3. The device of claim 2, wherein the input from the navigation device includes estimated time of arrival at a destination. 4. The device of claim 1, further including an electronic program guide (EPG) decoder to identify program information embedded in at least one of the plurality of AV streams. 5. The device of claim 1, further including a program selector to allow selection of an AV program from a plurality of AV programs based on an output of the EPG decoder. 6. The device of claim 3, wherein the AV content control unit is configured to receive the input time interval from the navigation device based on a pre-inputted travel destination. 7. The device of claim 6, wherein the input time interval is a variable that changes according to changes in time to reach the pre-inputted destination. 8. The device of claim 1, wherein the AV content control unit is configured to receive the input time interval from a user. 9. The device of claim 1, wherein the memory buffer includes separate memory spaces for each of the plurality of AV streams. 10. The device of claim 1, wherein the plurality of AV streams includes analog radio stream, digital radio stream, Internet radio stream, digital video streams and locally stored AV content stream. 11. The device of claim 1, wherein the AV content control unit is configured to modify the playback length of at least one of the AV streams by 5% to 20%. 12. 
A method for time stretching an audio or video (AV) stream, the method comprising: (a) receiving an AV stream; (b) receiving a time interval indicating an estimated time of arrival at a destination; (c) modifying playback rate of the received AV stream to fit entire playback within the time interval; and (d) repeating operations (b) and (c) until the destination is reached. 13. The method of claim 12, further including identifying program information embedded in the AV stream using an electronic program guide (EPG) decoder. 14. The method of claim 12, wherein the modifying includes altering playback length of the AV content by 5% to 20%.
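The method claim above loops on two inputs: the remaining content and a changing estimated time of arrival, rescaling the playback rate so playback ends when the destination is reached. A minimal sketch of that rate calculation, assuming the claimed 5%-20% length modification roughly maps to a rate clamp of [0.8, 1.2] (that mapping, and all names, are illustrative assumptions — the claims do not specify an implementation):

```python
# Hypothetical sketch of the claimed time-stretching loop: pick a
# playback-rate multiplier so the remaining AV content fits within the
# estimated time of arrival, clamped to an illustrative band that stands
# in for the claims' 5%-20% playback-length modification.

def playback_rate(remaining_content_s: float, eta_s: float,
                  min_rate: float = 0.8, max_rate: float = 1.2) -> float:
    """Return a rate multiplier so remaining_content_s of content ends
    roughly when eta_s of travel time elapses."""
    if eta_s <= 0:
        return max_rate  # destination effectively reached; finish fastest
    raw = remaining_content_s / eta_s
    return max(min_rate, min(max_rate, raw))
```

Per operation (d) of the claim, this function would be re-evaluated each time the navigation device reports a new ETA, so the rate tracks traffic-induced changes in time to destination.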
2,400
7,929
7,929
14,515,074
2,416
A controller at an IP (e.g., client) layer in a multi-layer network can request a network topology map from another controller at an optical (e.g., server) layer in the multi-layer network. The controller at the optical layer of the network can use a layer mapping function and common attributes between the formats used to describe the network topology map at the two layers to generate a common layer abstraction model representing the network topology map stored at the controller at the optical layer of the network. A controller-to-controller interface can translate and/or send the common layer abstraction model to the controller at the IP layer for processing data on the network.
1. A system, comprising: a network entity at a first layer of a multilayer network, the entity at the first layer configured to receive a request for a network topology from a network entity at a second layer of a multilayer network; a layer mapping function module operatively coupled to the network entity at the first layer and configured to generate a translation protocol, the translation protocol configured to determine common attributes between a first topology format and a second topology format, the network entity at the first layer of the multilayer network configured to use the translation protocol to convert a topology map in the first topology format into a topology map in a third topology format; and a multilayer network translation interface module configured to use the topology map in the third topology format to generate a topology map in the second topology format; the network entity at the first layer configured to send the topology map in the second topology format to the network entity at the second layer such that the entity at the second layer determines a path between a first network node and a second network node based on the topology map in the second topology format. 2. The system of claim 1, wherein the network entity in the first layer of the multilayer network is a first controller in a first layer domain. 3. The system of claim 1, wherein: the network entity in the first layer of the multilayer network is a first controller in a first layer domain; and the first layer domain is an optical domain on a server layer. 4. The system of claim 1, wherein: the network entity in the first layer of the multilayer network is a first controller in a first layer domain; and the network entity in the second layer of the multilayer network is a second controller in a second layer domain. 5. 
The system of claim 1, wherein: the network entity in the first layer of the multilayer network is a first controller in a first layer domain; the network entity in the second layer of the multilayer network is a second controller in a second layer domain; and the second layer domain is an IP domain on a client layer. 6. The system of claim 1, wherein: the first topology format is an optical network topology format; and the second topology format is an internet protocol (IP) topology format. 7. The system of claim 1, wherein the third topology format is a common link abstraction model format. 8. A system, comprising: a controller at a first layer of a multilayer network including a layer mapping function module, the layer mapping function module configured to translate a network topology map in a first format and at the first layer of the multilayer network into a second format in response to a request for a network topology from a controller at a second layer; the controller at the first layer configured to provide the network topology map in the second format to a controller-to-controller interface module configured to convert the network topology map to a third format and provide the network topology map in the third format to the controller at the second layer. 9. The system of claim 8, wherein: the first layer is a server layer of the multilayer network, and the server layer is configured to use an optical domain protocol. 10. The system of claim 8, wherein: the second layer is a client layer of the multilayer network, and the client layer is configured to use an internet protocol (IP) domain protocol. 11. The system of claim 8, wherein: the first format is an optical network topology format; the second format is a common link abstraction model format; and the third format is an internet protocol (IP) topology format. 12. 
The system of claim 8, wherein: the first format is an optical network topology format, the third format is an internet protocol (IP) topology format, and the layer mapping function module is configured to determine common attributes between the optical network topology format and the IP topology format. 13. The system of claim 8, wherein: the first format is an optical network topology format, the second format is a common link abstraction model format, the third format is an internet protocol (IP) topology format, and the common link abstraction model format is configured to describe the network topology map in the optical network topology format using common attributes between the optical network topology format and the IP topology format. 14. A method, comprising: receiving a signal at a first controller at a first layer of a multilayer network from a second controller at a second layer in the multilayer network, the signal requesting a network topology map in a second layer topology format; in response to the signal, retrieving a network topology map in a first layer topology format at the first controller; translating the network topology map in the first layer topology format into a network topology map in an intermediary topology format using a layer mapping function module at the first controller; translating the topology map in the intermediary topology format into a topology map in the second layer topology format using a controller-to-controller interface module; and sending the topology map in the second layer topology format to the second controller. 15. The method of claim 14, wherein: the first layer is a client layer of the multilayer network, and the client layer uses an internet protocol (IP) domain protocol. 16. The method of claim 14, wherein: the second layer is a server layer of the multilayer network, and the server layer uses an optical domain protocol. 17. 
The method of claim 14, wherein the first layer topology format is an optical network topology format. 18. The method of claim 14, wherein the second layer topology format is an internet protocol (IP) topology format. 19. The method of claim 14, wherein the intermediary topology format is a common link abstraction model format. 20. The method of claim 14, wherein: the second layer topology format is an internet protocol (IP) topology format, the first layer topology format is an optical network topology format, and the layer mapping function module is configured to determine common attributes between the optical network topology format and the IP topology format.
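The method claims above describe a two-step translation: the layer mapping function reduces the optical-layer map to an intermediary abstraction built from the attributes common to both formats, and the controller-to-controller interface re-expresses that abstraction in the IP-layer format. A minimal sketch under assumed field names (the claims specify no schema, so `src`, `dst`, `capacity`, and the adjacency-dict output shape are all illustrative):

```python
# Hypothetical sketch of the claimed layer-mapping flow. An optical-layer
# link list is projected onto the attributes shared with the IP format
# (the "common link abstraction model"), then rendered in an IP-style
# adjacency structure by the controller-to-controller interface.

COMMON_ATTRIBUTES = {"src", "dst", "capacity"}  # assumed shared fields

def to_common_model(optical_links):
    """Layer mapping function: keep only the common attributes,
    dropping optical-only fields such as wavelength."""
    return [{k: link[k] for k in COMMON_ATTRIBUTES} for link in optical_links]

def to_ip_format(common_links):
    """Controller-to-controller interface: re-express the abstraction
    as an adjacency dict keyed by source node."""
    topo = {}
    for link in common_links:
        topo.setdefault(link["src"], []).append(
            {"neighbor": link["dst"], "bandwidth": link["capacity"]})
    return topo
```

The IP-layer controller could then run path computation between two nodes over the returned adjacency structure, matching the path-determination step in claim 1.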
2,400
7,930
7,930
15,630,900
2,451
Methods and systems for premises management are described. A gateway device may be used to manage communication between a security system and an automation device. The gateway device may receive security data from the security system. The security data may indicate an event. An automation rule may be used to cause the automation device to perform an action based on the event.
1. A method comprising: establishing, by a gateway device, communication between the gateway device located at a premises and a security system located at the premises; establishing, by the gateway device, communication between the gateway device and an automation device located at the premises; receiving, by the gateway device and from the security system, security data; determining that the security data is indicative of an event; determining an automation rule associated with the event, wherein the automation rule comprises an action to perform in response to the event; and causing, based on the automation rule and the determining that the security data is indicative of the event, the automation device to perform the action. 2. The method of claim 1, further comprising: receiving, by the gateway device and from the automation device, automation data; and communicating, by the gateway device and based on the automation data, with a component of the security system. 3. The method of claim 1, further comprising outputting, by the gateway device, a plurality of user interfaces, wherein the plurality of user interfaces comprise a security interface and a network interface, wherein the security interface is configured to control the security system and access the security data, wherein the network interface is configured to control the automation device and access automation data from the automation device. 4. The method of claim 1, further comprising: establishing, by the gateway device, communication between the gateway device and a remote server at a location external to the premises; and managing, by the remote server, at least one of the gateway device and the security system. 5. 
The method of claim 1, further comprising: establishing communication between a remote client device located external to the premises and the gateway device; and exchanging, via the gateway device, communication between the remote client device and one or more of the security system and the automation device, wherein the remote client device accesses, via the gateway device, functions of one or more of the security system or the automation device. 6. The method of claim 5, wherein exchanging, via the gateway device, communication between the remote client device and one or more of the security system and the automation device comprises transmitting, by the gateway device and to the remote client device, automation data and the security data. 7. The method of claim 5, wherein exchanging, via the gateway device, communication between the remote client device and one or more of the security system and the automation device comprises receiving, by the gateway device and from the remote client device, control data for control of one or more of the security system and the automation device. 8. A system comprising: a security system located at a premises; an automation device located at the premises; and a gateway device located at the premises and in communication with the security system and the automation device, wherein the gateway device is configured to: receive, from the security system, security data, determine that the security data is indicative of an event, determine an automation rule associated with the event, wherein the automation rule comprises an action to perform in response to an event, and cause, based on the automation rule and the determination that the security data is indicative of the event, the automation device to perform the action. 9. 
The system of claim 8, wherein the gateway device is further configured to: receive, from the automation device, automation data; and communicate, based on the automation data, with a component of the security system. 10. The system of claim 8, wherein the gateway device is further configured to output a plurality of user interfaces, wherein the plurality of user interfaces comprise a security interface and a network interface, wherein the security interface is configured to control the security system and access the security data, wherein the network interface is configured to control the automation device and access automation data from the automation device. 11. The system of claim 8, further comprising: a remote server in communication with the gateway device and located at a location external to the premises, wherein the remote server is configured to manage at least one of the gateway device and the security system. 12. The system of claim 8, wherein the gateway device is in communication with a remote client device located external to the premises, and wherein the gateway device is configured to exchange communication between the remote client device and one or more of the security system and the automation device, and wherein the remote client device is configured to access, via the gateway device, functions of one or more of the security system or the automation device. 13. The system of claim 12, wherein the gateway device being configured to exchange communication between the remote client device and one or more of the security system and the automation device comprises the gateway device being configured to transmit, to the remote client device, automation data and the security data. 14. 
The system of claim 12, wherein the gateway device being configured to exchange communication between the remote client device and one or more of the security system and the automation device comprises the gateway device being configured to receive, from the remote client device, control data for control of one or more of the security system and the automation device. 15. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: receive, from a security system located at a premises, security data, determine that the security data is indicative of an event, determine an automation rule associated with the event, wherein the automation rule comprises an action to perform in response to an event, and cause, based on the automation rule and the determination that the security data is indicative of the event, an automation device located at the premises to perform the action. 16. The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to: receive, from the automation device, automation data; and communicate, based on the automation data, with a component of the security system. 17. The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to output a plurality of user interfaces, wherein the plurality of user interfaces comprise a security interface and a network interface, wherein the security interface is configured to control the security system and access the security data, wherein the network interface is configured to control the automation device and access automation data from the automation device. 18. 
The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to establish communication with a remote server at a location external to the premises, and wherein the remote server is configured to manage at least one of the device and the security system. 19. The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to: establish communication with a remote client device located external to the premises; and exchange communication between the remote client device and one or more of the security system and the automation device, wherein the remote client device accesses functions of one or more of the security system or the automation device. 20. The device of claim 19, wherein the instructions, when executed by the one or more processors, further cause the device to exchange communication between the remote client device and one or more of the security system and the automation device by transmitting, to the remote client device, automation data and the security data.
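The core gateway behavior in the claims above is a three-step dispatch: determine that security data indicates an event, look up the automation rule associated with that event, and cause the automation device to perform the rule's action. A minimal sketch, assuming a flat event-to-action rule table (the rule names, event names, and data shape are illustrative, not from the patent):

```python
# Hypothetical sketch of the claimed gateway dispatch: security data is
# matched against a known event, the automation rule for that event
# supplies an action, and the action is sent to the automation device
# via the supplied callable.

AUTOMATION_RULES = {            # event -> action (illustrative rules)
    "door_opened": "turn_on_entry_light",
    "smoke_detected": "unlock_all_doors",
}

def handle_security_data(security_data: dict, perform):
    """If the security data indicates an event with an associated rule,
    have the automation device perform the rule's action."""
    event = security_data.get("event")
    action = AUTOMATION_RULES.get(event)
    if action is not None:
        perform(action)      # stand-in for sending to the automation device
        return action
    return None              # no rule for this event; nothing performed
```

In the claimed system, `perform` would be the gateway's transmit path to the automation device; here it is a plain callable so the rule lookup can be exercised in isolation.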
Methods and systems for premises management are described. A gateway device may be used to manage communication between a security system and an automation device. The gateway device may receive security data from the security system. The security data may indicate an event. An automation rule may be used to cause the automation device to perform an action based on the event.1. A method comprising: establishing, by a gateway device, communication between the gateway device located at a premises and a security system located at the premises; establishing, by the gateway device, communication between the gateway device and an automation device located at the premises; receiving, by the gateway device and from the security system, security data; determining that the security data is indicative of an event; determining an automation rule associated with the event, wherein the automation rule comprises an action to perform in response to the event; and causing, based on the automation rule and the determining that the security data is indicative of the event, the automation device to perform the action. 2. The method of claim 1, further comprising: receiving, by the gateway device and from the automation device, automation data; and communicating, by the gateway device and based on the automation data, with a component of the security system. 3. The method of claim 1, further comprising outputting, by the gateway device, a plurality of user interfaces, wherein the plurality of user interfaces comprise a security interface and a network interface, wherein the security interface is configured to control the security system and access the security data, wherein the network interface is configured to control the automation device and access automation data from the automation device. 4. 
The method of claim 1, further comprising: establishing, by the gateway device, communication between the gateway device and a remote server at a location external to the premises; and managing, by the remote server, at least one of the gateway device and the security system. 5. The method of claim 1, further comprising: establishing communication between a remote client device located external to the premises and the gateway device; and exchanging, via the gateway device, communication between the remote client device and one or more of the security system and the automation device, wherein the remote client device accesses, via the gateway device, functions of one or more of the security system or the automation device. 6. The method of claim 5, wherein exchanging, via the gateway device, communication between the remote client device and one or more of the security system and the automation device comprises transmitting, by the gateway device and to the remote client device, automation data and the security data. 7. The method of claim 5, wherein exchanging, via the gateway device, communication between the remote client device and one or more of the security system and the automation device comprises receiving, by the gateway device and from the remote client device, control data for control of one or more of the security system and the automation device. 8. 
A system comprising: a security system located at a premises; an automation device located at the premises; and a gateway device located at the premises and in communication with the security system and the automation device, wherein the gateway device is configured to: receive, from the security system, security data, determine that the security data is indicative of an event, determine an automation rule associated with the event, wherein the automation rule comprises an action to perform in response to an event, and cause, based on the automation rule and the determination that the security data is indicative of the event, the automation device to perform the action. 9. The system of claim 8, wherein the gateway device is further configured to: receive, from the automation device, automation data; and communicate, based on the automation data, with a component of the security system. 10. The system of claim 8, wherein the gateway device is further configured to output a plurality of user interfaces, wherein the plurality of user interfaces comprise a security interface and a network interface, wherein the security interface is configured to control the security system and access the security data, wherein the network interface is configured to control the automation device and access automation data from the automation device. 11. The system of claim 8, further comprising: a remote server in communication with the gateway device and located at a location external to the premises, wherein the remote server is configured to manage at least one of the gateway device and the security system. 12. 
The system of claim 8, wherein the gateway device is in communication with a remote client device located external to the premises, and wherein the gateway device is configured to exchange communication between the remote client device and one or more of the security system and the automation device, and wherein the remote client device is configured to access, via the gateway device, functions of one or more of the security system or the automation device. 13. The system of claim 12, wherein the gateway device being configured to exchange communication between the remote client device and one or more of the security system and the automation device comprises the gateway device being configured to transmit, to the remote client device, automation data and the security data. 14. The system of claim 12, wherein the gateway device being configured to exchange communication between the remote client device and one or more of the security system and the automation device comprises the gateway device being configured to receive, from the remote client device, control data for control of one or more of the security system and the automation device. 15. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: receive, from a security system located at a premises, security data, determine that the security data is indicative of an event, determine an automation rule associated with the event, wherein the automation rule comprises an action to perform in response to an event, and cause, based on the automation rule and the determination that the security data is indicative of the event, an automation device located at the premise to perform the action. 16. 
The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to: receive, from the automation device, automation data; and communicate, based on the automation data, with a component of the security system. 17. The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to output a plurality of user interfaces, wherein the plurality of user interfaces comprise a security interface and a network interface, wherein the security interface is configured to control the security system and access the security data, wherein the network interface is configured to control the automation device and access automation data from the automation device. 18. The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to establish communication with a remote server at a location external to the premises, and wherein the remote server is configured to manage at least one of the device and the security system. 19. The device of claim 15, wherein the instructions, when executed by the one or more processors, further cause the device to: establish communication with a remote client device located external to the premises; and exchange communication between the remote client device and one or more of the security system and the automation device, wherein the remote client device accesses functions of one or more of the security system or the automation device. 20. The device of claim 19, wherein the instructions, when executed by the one or more processors, further cause the device to exchange communication between the remote client device and one or more of the security system and the automation device by transmitting to the remote client device, automation data and the security data.
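The event-to-action flow recited in these claims (receive security data, determine it indicates an event, look up an automation rule, cause the automation device to act) can be sketched as below. This is a minimal illustration only: the class names, the dictionary-based security data, and the matching scheme are assumptions for the example, not the claimed implementation.

```python
# Hypothetical sketch of the gateway's rule-driven automation flow.
from dataclasses import dataclass, field


@dataclass
class AutomationRule:
    event_type: str  # event this rule is associated with
    action: str      # action to perform in response to the event


@dataclass
class Gateway:
    rules: list = field(default_factory=list)
    performed: list = field(default_factory=list)

    def classify(self, security_data):
        # Determine whether the security data is indicative of an event.
        return security_data.get("event")

    def on_security_data(self, security_data):
        event = self.classify(security_data)
        if event is None:
            return
        # Determine the automation rule associated with the event and
        # cause the automation device to perform the rule's action.
        for rule in self.rules:
            if rule.event_type == event:
                self.performed.append(rule.action)


gw = Gateway(rules=[AutomationRule("door_open", "turn_on_lights")])
gw.on_security_data({"event": "door_open"})
gw.on_security_data({"status": "ok"})  # no event: nothing performed
print(gw.performed)  # ['turn_on_lights']
```

In this toy version the "automation device" is represented only by the recorded action list; a real gateway would dispatch the action over its premises network.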
2,400
7,931
7,931
15,024,499
2,438
Processing network requests includes receiving a request for a target media element available at a requested location. The request can identify a media repository that stores the target media element. A substitute media element that has content approximately equivalent to content of the target media element can be determined. The substitute media element can be stored on a sub-network connected to the network. A selection page having a link to the location of the substitute media element on the sub-network can be generated. A response to the request for the target media element can include the selection page, so as to offer a user a choice of media source.
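The request-handling flow in the abstract can be sketched roughly as follows: given a request for a target media element, look for an approximately equivalent substitute on a sub-network and respond with a selection page offering both sources. The catalog structure, the title-based matching, and all URLs here are illustrative assumptions, not the claimed matching method.

```python
# Hypothetical sub-network catalog mapping a media title to the
# location of an approximately equivalent substitute element.
SUBSTITUTE_CATALOG = {
    "song-a": "https://substore.example/media/song-a",
}


def handle_request(target_url, title):
    """Respond to a request for a target media element with a selection page."""
    substitute = SUBSTITUTE_CATALOG.get(title)
    if substitute is None:
        # No substitute found: the page offers only the requested location.
        return {"links": [target_url]}
    # Selection page offers the user a choice of media source, with a
    # human-intelligible message about the target element.
    return {
        "message": "The requested file may be of reduced quality.",
        "links": [target_url, substitute],
    }


page = handle_request("https://repo.example/dl/song-a", "song-a")
print(page["links"])
```

The two-entry `links` list stands in for the claimed target link and substitute link on the generated selection page.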
1. A method of processing network requests, the method comprising: receiving a request via a network for a target media element at a requested location, the request identifying a media repository that stores the target media element; processing the request to determine a substitute media element that has content approximately equivalent to content of the target media element, the substitute media element stored on a sub-network connected to the network; generating a selection page having a substitute link to the location of the substitute media element on the sub-network; and responding to the request with the selection page. 2. The method of claim 1, further comprising authenticating a credential of a requesting computer at which the request originated, the credential associated with an account at the sub-network. 3. The method of claim 2, further comprising charging the account after receiving selection of the substitute media element. 4. The method of claim 1, wherein the sub-network comprises a media store at which media elements including the substitute media element are for sale. 5. The method of claim 1, wherein generating the selection page further comprises providing in the selection page a target link to the requested location of the target media element. 6. The method of claim 1, wherein generating the selection page further comprises providing in the selection page another substitute link to a location of another substitute media element that has content approximately equivalent to content of the target media element, the substitute media element stored on another sub-network connected to the network. 7. The method of claim 1, wherein generating the selection page comprises providing a human-intelligible message in the selection page, the human-intelligible message indicating a negative characteristic of the target media element. 8. 
The method of claim 7, wherein the target media element contains unlicensed content, and the substitute media element lacks unlicensed content. 9. The method of claim 7, wherein the target media element is of reduced audio or visual quality, and the substitute media element is of superior audio or visual quality. 10. The method of claim 1, wherein the content of the substitute media element is identical to the content of the target media element. 11. A system comprising: one or more computers configured to: receive a request via a network for a target media element at a requested location, the request identifying a media repository that stores the target media element; process the request to determine a substitute media element that has content approximately equivalent to content of the target media element, the substitute media element stored on a sub-network connected to the network; generate a selection page having a substitute link to the location of the substitute media element on the sub-network; and respond to the request with the selection page. 12. The system of claim 11, wherein the one or more computers is further configured to authenticate a credential of a requesting computer at which the request originated, the credential associated with an account at the sub-network. 13. The system of claim 12, wherein the one or more computers is further configured to charge the account after receiving selection of the substitute media element. 14. The system of claim 11, wherein the sub-network comprises a media store at which media elements including the substitute media element are for sale. 15. The system of claim 11, wherein the one or more computers configured to generate the selection page includes providing in the selection page a target link to the requested location of the target media element. 16. 
The system of claim 11, wherein the one or more computers configured to generate the selection page includes providing in the selection page another substitute link to a location of another substitute media element that has content approximately equivalent to content of the target media element, the substitute media element stored on another sub-network connected to the network. 17. The system of claim 11, wherein the one or more computers configured to generate the selection page includes providing a human-intelligible message in the selection page, the human-intelligible message indicating a negative characteristic of the target media element. 18. The system of claim 17, wherein the target media element contains unlicensed content, and the substitute media element lacks unlicensed content. 19. The system of claim 17, wherein the target media element is of reduced audio or visual quality, and the substitute media element is of superior audio or visual quality. 20. The system of claim 11, wherein the content of the substitute media element is identical to the content of the target media element.
2,400
7,932
7,932
15,013,861
2,424
Techniques are described that allow users to efficiently create high-quality supercuts. A video clip repository may include a number of video clips. The video clip repository may allow users to browse and view video clips in the repository. A supercut creation tool may operate to identify, based on comparison of search criteria received from a user to a set of tags that describe the video clips, video clips, from the set of video clips, that are relevant to the search criteria; determine, based on scores of the video clips, an ordering of the video clips; and generate a supercut of the video clips as a single video corresponding to the video clips and arranged in the determined order.
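The supercut flow described here, including the ordering in the dependent claims (highest-scoring clip at the first position, second-highest at the last), can be sketched as below. The clip records, tag-overlap matching, and identifier-list "supercut" are assumptions for illustration; actual video concatenation is out of scope.

```python
def build_supercut(clips, search_tags):
    # Identify clips relevant to the search criteria via tag overlap.
    relevant = [c for c in clips if search_tags & set(c["tags"])]
    # Rank by score (a measure of quality or popularity).
    ranked = sorted(relevant, key=lambda c: c["score"], reverse=True)
    if len(ranked) >= 2:
        # Highest-ranking clip first, second-highest last, rest between.
        ordered = [ranked[0]] + ranked[2:] + [ranked[1]]
    else:
        ordered = ranked
    # The "supercut" here is just the ordered list of clip identifiers.
    return [c["id"] for c in ordered]


clips = [
    {"id": "c1", "tags": ["chase"], "score": 0.9},
    {"id": "c2", "tags": ["chase"], "score": 0.7},
    {"id": "c3", "tags": ["chase"], "score": 0.5},
    {"id": "c4", "tags": ["dialog"], "score": 0.8},
]
print(build_supercut(clips, {"chase"}))  # ['c1', 'c3', 'c2']
```

Placing the two strongest clips at the ends mirrors the claimed ordering, on the apparent rationale that a supercut should open and close with its best material.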
1. A computing device comprising: a non-transitory memory device storing a set of computer-executable instructions; and a processor configured to execute the set of computer-executable instructions, wherein executing the set of computer-executable instructions causes the processor to: generate a set of tags that describe a set of video clips, each video clip from the set corresponding to a section of a full video and each of the video clips being shorter in length than the corresponding full video, and the video clips from the set being associated with corresponding scores that measure a quality or popularity of the video clips; identify, based on comparison of search criteria received from a user to the set of tags, a plurality of video clips, from the set of video clips, that are relevant to the search criteria; determine, based on the scores of the plurality of video clips, an ordering of the plurality of the video clips; generate a supercut of the plurality of video clips as a single video corresponding to the plurality of the video clips and arranged in the determined order; and output the supercut. 2. The computing device of claim 1, wherein the set of computer-executable instructions, when executed by the processor, is further to cause the processor to: identify, based on the scores, a highest ranking one of the plurality of video clips; and identify, based on the scores, a second highest ranking one of the plurality of video clips, wherein the determination of the order of the plurality of video clips includes locating the highest ranking one of the plurality of video clips and the second highest ranking one of the plurality of video clips at a first and last position of the order. 3. The computing device of claim 2, wherein the highest ranking of the plurality of video clips is located at the first position of the order and the second highest ranking of the plurality of video clips is located at the last position of the order. 4. 
The computing device of claim 1, wherein the full video is a movie or television show. 5. The computing device of claim 1, wherein the set of computer-executable instructions, when executed by the processor, is further to cause the processor to: provide the plurality of video clips to a user device of the user before the determination of the order of the plurality of video clips; and receive a final selection, of the plurality of video clips, from the user device. 6. The computing device of claim 1, wherein the set of computer-executable instructions, when executed by the processor, is further to cause the processor to: receive the search criteria from a user device of the user, the search criteria including selection, by the user, of one or more tags from the set of tags. 7. The computing device of claim 5, wherein the search criteria further includes search terms provided by the user. 8. The computing device of claim 1, wherein the video clips, of the set of video clips, include user-defined video clips. 9. The computing device of claim 1, wherein at least some of the tags, from the set of tags, are derived from user comments relating to the full videos. 10. 
A method, implemented by a server device, comprising: generating a set of tags that describe a set of video clips, each video clip from the set corresponding to a section of a full video and each of the video clips being shorter in length than the corresponding full video, and the video clips from the set being associated with corresponding scores that measure a quality or popularity of the video clips; identifying, based on comparison of search criteria received from a user to the set of tags, a plurality of video clips, from the set of video clips, that are relevant to the search criteria; determining, based on the scores of the plurality of video clips, an ordering of the plurality of the video clips; generating a supercut of the plurality of video clips as a single video corresponding to the plurality of the video clips and arranged in the determined order; and outputting the supercut. 11. The method of claim 10, further comprising: identifying, based on the scores, a highest ranking one of the plurality of video clips; and identifying, based on the scores, a second highest ranking one of the plurality of video clips, wherein the determination of the order of the plurality of video clips includes locating the highest ranking one of the plurality of video clips and the second highest ranking one of the plurality of video clips at a first and last position of the order. 12. The method of claim 11, wherein the highest ranking of the plurality of video clips is located at the first position of the order and the second highest ranking of the plurality of video clips is located at the last position of the order. 13. The method of claim 10, further comprising: providing the plurality of video clips to a user device of the user before the determination of the order of the plurality of video clips; and receiving a final selection, of the plurality of video clips, from the user device. 14. 
The method of claim 10, further comprising: receiving the search criteria from a user device of the user, the search criteria including selection, by the user, of one or more tags from the set of tags. 15. The method of claim 14, wherein the search criteria further includes search terms provided by the user. 16. The method of claim 10, wherein the video clips, of the set of video clips, include user-defined video clips. 17. A non-transitory computer readable medium containing program instructions for causing one or more processors to: generate a set of tags that describe a set of video clips, each video clip from the set corresponding to a section of a full video and each of the video clips being shorter in length than the corresponding full video, and the video clips from the set being associated with corresponding scores that measure a quality or popularity of the video clips; identify, based on comparison of search criteria received from a user to the set of tags, a plurality of video clips, from the set of video clips, that are relevant to the search criteria; determine, based on the scores of the plurality of video clips, an ordering of the plurality of the video clips; generate a supercut of the plurality of video clips as a single video corresponding to the plurality of the video clips and arranged in the determined order; and output the supercut. 18. The non-transitory computer readable medium of claim 17, wherein the program instructions further cause the one or more processors to: identify, based on the scores, a highest ranking one of the plurality of video clips; and identify, based on the scores, a second highest ranking one of the plurality of video clips, wherein the determination of the order of the plurality of video clips includes locating the highest ranking one of the plurality of video clips and the second highest ranking one of the plurality of video clips at a first and last position of the order. 19. 
The non-transitory computer readable medium of claim 18, wherein the highest ranking of the plurality of video clips is located at the first position of the order and the second highest ranking of the plurality of video clips is located at the last position of the order. 20. The non-transitory computer readable medium of claim 17, wherein the video clips, of the set of video clips, include user-defined video clips.
2,400
7,933
7,933
15,202,471
2,484
There is provided a system comprising a label database including a plurality of labels, a non-transitory memory storing an executable code, and a hardware processor executing the executable code to receive a media content including a plurality of segments, each segment including a plurality of frames, extract a first plurality of features from a segment, extract a second plurality of features from each frame of the segment, determine an attention weight for each frame of the segment based on the first plurality of features extracted from the segment and the second plurality of features extracted from each frame of the segment, and determine that the segment depicts one of the plurality of labels in the label database based on the first plurality of features, the second plurality of features, and the attention weight of each frame of the plurality of frames of the segment.
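The attention-weighting step described above (segment-level features and per-frame features jointly determining a weight for each frame) can be sketched as below. The dot-product scoring, the softmax normalization, and the mean-pooled segment features are assumptions chosen for a runnable illustration; the abstract does not specify the weighting function.

```python
import math


def attention_weights(segment_feats, frame_feats):
    # Score each frame's features against the segment-level features...
    scores = [
        sum(s * f for s, f in zip(segment_feats, frame))
        for frame in frame_feats
    ]
    # ...then normalize with a softmax so the weights sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


# Toy per-frame features (second plurality of features).
frames = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
# Segment features (first plurality): here, simply the mean over frames.
segment = [sum(col) / len(frames) for col in zip(*frames)]
weights = attention_weights(segment, frames)
print([round(w, 3) for w in weights])
```

A label decision would then pool the frame features in proportion to these weights before comparison against the label database; frames better aligned with the segment-level representation contribute more.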
1. A system comprising: a label database including a plurality of labels; a non-transitory memory storing an executable code; and a hardware processor executing the executable code to: receive a media content including a plurality of segments, each segment of the plurality of segments including a plurality of frames; extract a first plurality of features from a segment of the plurality of segments; extract a second plurality of features from each frame of the plurality of frames of the segment; determine an attention weight for each frame of the plurality of frames of the segment based on the first plurality of features extracted from the segment and the second plurality of features extracted from each frame of the plurality of frames of the segment; and determine that the segment depicts one of the plurality of labels in the label database based on the first plurality of features, the second plurality of features, and the attention weight of each frame of the plurality of frames of the segment. 2. The system of claim 1, wherein, prior to determining the attention weight for each frame of the plurality of frames of the segment, the hardware processor further executes the executable code to: generate a matching score for each frame of the plurality of frames of the segment based on each preceding frame in the segment. 3. The system of claim 1, wherein the second plurality of features extracted from each frame of the plurality of frames of the segment includes at least scene data, object data, and activity data. 4. The system of claim 1, wherein the attention weight for each frame of the plurality of frames of the segment is based on at least one of a complementary scene data source, a complementary object data source, and a complementary activity data source. 5. The system of claim 1, wherein the first plurality of features includes temporal data and the second plurality of features includes spatial data. 6. 
The system of claim 1, wherein each frame of the plurality of frames in the segment is considered in determining that the one of the plurality of labels in the label database is included in the frame in proportion to the attention weight. 7. The system of claim 1, wherein each frame of the segment has an attention weight and the attention weight of a frame of the plurality of frames of the segment is based on at least one of the attention weight of one or more frames preceding the frame in the segment and the attention weight of one or more frames succeeding the frame in the segment. 8. The system of claim 1, wherein the hardware processor further executes the executable code to: transmit the segment to a display device for display. 9. The system of claim 1, wherein the hardware processor further executes the executable code to: tag the segment of the plurality of segments with an activity label based on the determined label. 10. The system of claim 9, wherein the hardware processor further executes the executable code to: receive a user input from a user device; and perform an act based on the user input and the activity label. 11. 
A method for use with a system including a label database storing a plurality of labels, a non-transitory memory, and a hardware processor, the method comprising: receiving, using the hardware processor, a media content including a plurality of segments, each segment of the plurality of segments including a plurality of frames; extracting, using the hardware processor, a first plurality of features from a segment of the plurality of segments; extracting, using the hardware processor, a second plurality of features from each frame of the plurality of frames of the segment; determining, using the hardware processor, an attention weight for each frame of the plurality of frames of the segment based on the first plurality of features extracted from the segment and the second plurality of features extracted from each frame of the plurality of frames of the segment; and determining, using the hardware processor, that the segment depicts one of the plurality of labels in the label database based on the first plurality of features, the second plurality of features, and the attention weight of each frame of the plurality of frames of the segment. 12. The method of claim 11, wherein, prior to determining the attention weight for each frame of the plurality of frames of the segment, the method further comprises: generating, using the hardware processor, a matching score for each frame of the plurality of frames of the segment based on each preceding frame in the segment. 13. The method of claim 11, wherein the second plurality of features extracted from each frame of the plurality of frames of the segment includes at least scene data, object data, and activity data. 14. The method of claim 11, wherein the attention weight for each frame of the plurality of frames of the segment is based on at least one of a complementary scene data source, a complementary object data source, and a complementary activity data source. 15. 
The method of claim 11, wherein the first plurality of features includes temporal data and the second plurality of features includes spatial data. 16. The method of claim 11, wherein each frame of the plurality of frames in the segment is considered in determining that the one of the plurality of labels in the label database is included in the frame in proportion to the attention weight. 17. The method of claim 11, wherein each frame of the segment has an attention weight and the attention weight of a frame of the plurality of frames of the segment is based on at least one of the attention weight of one or more frames preceding the frame in the segment and the attention weight of one or more frames succeeding the frame in the segment. 18. The method of claim 11, further comprising: transmitting, using the hardware processor, the segment to a display device for display. 19. The method of claim 11, further comprising: tagging, using the hardware processor, the segment of the plurality of segments with an activity label based on the determined label. 20. The method of claim 19, further comprising: receiving, using the hardware processor, a user input from a user device; and performing, using the hardware processor, an act based on the user input and the activity label.
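The attention-weighting step recited in claim 1 can be illustrated with a minimal sketch. This is not the patented implementation: the dot-product scoring, the softmax normalization, and all function and label names here are assumptions introduced purely for illustration.

```python
import math

def softmax(xs):
    # numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weights(segment_features, frame_features):
    # score each frame by its agreement (dot product) with the
    # segment-level features, then normalize to weights summing to 1
    scores = [sum(s * f for s, f in zip(segment_features, frame))
              for frame in frame_features]
    return softmax(scores)

def label_segment(segment_features, frame_features, label_db):
    w = attention_weights(segment_features, frame_features)
    dim = len(segment_features)
    # attention-weighted pooling of the per-frame features
    pooled = [sum(w[i] * frame_features[i][d] for i in range(len(frame_features)))
              for d in range(dim)]
    # combine segment-level and pooled frame-level features,
    # then pick the best-matching label from the label database
    combined = [s + p for s, p in zip(segment_features, pooled)]
    return max(label_db,
               key=lambda name: sum(a * b for a, b in zip(combined, label_db[name])))
```

Frames whose features agree with the segment-level representation receive larger weights, so they dominate the pooled vector used for the final label decision, matching the "in proportion to the attention weight" language of claims 6 and 16.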
2,400
7,934
7,934
15,777,791
2,487
A container crane control system including: a camera configured to be fixedly mounted to a crane to obtain a series of captured images; a video output configured to provide a video signal including a series of cropped images based on the series of captured images; and a control device configured to, for at least part of the captured images and the respective cropped image, receive an input signal indicating a current height of a load of the crane, wherein the control device is configured to control a position of the respective cropped image within the captured image based on the current height of the load.
1. A container crane control system comprising: a camera configured to be fixedly mounted to a crane to obtain a series of captured images; a video output configured to provide a video signal including a series of cropped images respectively based on the series of captured images; and a control device, configured to, for at least part of the captured images and the respective cropped image, receive an input signal indicating a current height of a load of the crane, wherein the control device is configured to control a position of the respective cropped image within the captured image based on the current height of the load. 2. The container crane control system according to claim 1, wherein: the camera includes a control signal input and the video output, wherein control signals provided on the control signal input control the position of each cropped image within the respective captured image; and wherein the control device is connected to the camera to control the position of the cropped image based on the current height of the load by sending a camera control signal on the control signal input. 3. The container crane control system according to claim 1, further including an operator terminal, being configured to receive the video signal for presentation to an operator and being configured to receive user input for controlling the crane, resulting in a crane control signal for provision to the control device; wherein the control device is configured to receive the crane control signal from the operator terminal and to provide corresponding control signals to control crane operation. 4. The container crane control system according to claim 2, wherein the camera further is responsive to a zoom signal on the control signal input, wherein the zoom signal controls a size of the cropped image compared to the captured image. 5. 
The container crane control system according to claim 2, wherein the camera further is responsive to a zoom signal on the control signal input, wherein the zoom signal controls an optical zoom of the camera. 6. The container crane control system according to claim 4, wherein the control device is configured to send a zoom signal to the camera to zoom in when the height of the load decreases, and to send a zoom signal to the camera to zoom out when the height of the load increases. 7. The container crane control system according to claim 1, further including an encoder being configured to receive the video signal and encode the video signal to a compressed digital video stream for provision to the operator terminal, the encoder being distinct from the camera. 8. The container crane control system according to claim 1, wherein the video signal includes a video stream. 9. A container crane including a spreader, a trolley and a container crane control system comprising: a camera configured to be fixedly mounted to a crane to obtain a series of captured images; a video output configured to provide a video signal including a series of cropped images respectively based on the series of captured images; and a control device configured to, for at least part of the captured images and the respective cropped image, receive an input signal indicating a current height of a load of the crane, wherein the control device is configured to control a position of the respective cropped image within the captured image based on the current height of the load. 10. 
A method for controlling video signal output from a fixedly mounted camera of a container crane control system also comprising a control device, the method being performed in the container crane control system and including the steps of: receiving an input signal indicating a current height of a load of the crane; capturing an image in the camera, resulting in a captured image; generating a camera control signal to control a position of a cropped image within a captured image based on the current height of the load; providing the camera control signal to the camera; cropping the captured image based on the input signal, resulting in a cropped image; and providing a video signal having the cropped image of the captured image on the video output. 11. A computer program for controlling video signal output from a fixedly mounted camera of a container crane control system also comprising a control device, the computer program including computer program code which, when run on a container crane control system causes the container crane control system to: receive an input signal indicating a current height of a load of the crane; generate a camera control signal to control a position of a cropped image within a captured image based on the current height of the load; provide the camera control signal to the camera; capture an image in the camera, resulting in a captured image; crop the captured image based on the input signal, resulting in a cropped image; and provide a video signal including the cropped image of the captured image on the video output. 12. A computer program product including a computer program according to claim 11 and a computer readable means on which the computer program is stored. 13. 
The container crane control system according to claim 2, further including an operator terminal, being configured to receive the video signal for presentation to an operator and being configured to receive user input for controlling the crane, resulting in a crane control signal for provision to the control device; wherein the control device is configured to receive the crane control signal from the operator terminal and to provide corresponding control signals to control crane operation. 14. The container crane control system according to claim 3, wherein the camera further is responsive to a zoom signal on the control signal input, wherein the zoom signal controls a size of the cropped image compared to the captured image. 15. The container crane control system according to claim 3, wherein the camera further is responsive to a zoom signal on the control signal input, wherein the zoom signal controls an optical zoom of the camera. 16. The container crane control system according to claim 5, wherein the control device is configured to send a zoom signal to the camera to zoom in when the height of the load decreases, and to send a zoom signal to the camera to zoom out when the height of the load increases. 17. The container crane control system according to claim 2, further including an encoder being configured to receive the video signal and encode the video signal to a compressed digital video stream for provision to the operator terminal, the encoder being distinct from the camera. 18. The container crane control system according to claim 2, wherein the video signal includes a video stream.
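The height-dependent cropping and zooming described in claims 1 and 6 can be sketched as a simple mapping from load height to a crop window. This is a minimal illustration only; the height bounds, the zoom range, and the linear mapping are all assumptions not taken from the claims.

```python
def crop_window(img_w, img_h, load_height,
                min_h=0.0, max_h=30.0, min_zoom=1.0, max_zoom=3.0):
    """Return (x, y, w, h) of the crop within the captured image.

    A lower load yields a higher zoom (smaller crop), per claim 6;
    the crop stays centered horizontally and tracks toward the bottom
    of the frame as the load descends.
    """
    # normalize load height into [0, 1]
    t = max(0.0, min(1.0, (load_height - min_h) / (max_h - min_h)))
    zoom = max_zoom - t * (max_zoom - min_zoom)  # low load => high zoom
    w = int(img_w / zoom)
    h = int(img_h / zoom)
    x = (img_w - w) // 2                 # centered horizontally
    y = int((img_h - h) * (1.0 - t))     # low load => crop near bottom
    return x, y, w, h
```

With the load at maximum height the crop covers the full captured image; as the load descends the window shrinks (zooming in) and slides downward, which is one plausible way a control device could realize the claimed behavior.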
2,400
7,935
7,935
12,071,005
2,467
A method and apparatus for enhanced Internet telephony ensures that communication between a source and destination is not interrupted by common network address translation. According to one aspect of the invention, communication may continue through a router that employs network address translation.
1-7. (canceled) 8. An internet telephony system configured to use Session Initiation Protocol (SIP) signaling to set up a communication of streaming packets, the internet telephony system comprising: a relay configured to relay streaming packets of the communication between a caller and a call destination; a server configured to receive, process and transmit SIP signaling messages to set up the communication between the caller and the call destination and to select the relay to use for the communication, the selection being based at least on the quality of the communication. 9. The internet telephony system of claim 8, wherein the server is configured to select the relay based in part on the geographic location of the caller. 10. The internet telephony system of claim 8, wherein the server is configured to select the relay based in part on the geographic location of the call destination. 11. The internet telephony system of claim 8, wherein the server is configured to select the relay to decrease the latency of the communication. 12. The internet telephony system of claim 8, wherein the server is configured to select the relay to decrease the travel time of the communication. 13. The internet telephony system of claim 8, wherein the server is configured to select the relay to limit the geographical area traveled by the streaming packets of the communication. 14. The internet telephony system of claim 8, wherein the relay is associated with a point-of-presence geographically separated from other points-of-presence. 15. The internet telephony system of claim 8, wherein the server is configured to select the relay based on a SIP Invite message. 16. The internet telephony system of claim 8, wherein said server is a pre-proxy server. 17. The internet telephony system of claim 8, wherein said streaming packet protocol is the Real Time Transport Protocol (RTP). 18. The internet telephony system of claim 8, wherein the relay is a RTP relay. 19. 
The internet telephony system of claim 8, wherein the server is separated from the relay. 20. A method of providing internet service, the method comprising: providing a server configured to receive, process and transmit Session Initiation Protocol (SIP) signaling messages; receiving a signaling message originating from a caller requesting a communication to a call destination; selecting a relay for use during the communication based at least on the quality of the communication; relaying streaming packets of the communication via the selected relay between the caller and the call destination. 21. The method of claim 20, wherein the selecting the relay is based in part on the geographic location of the caller. 22. The method of claim 20, wherein the selecting the relay is based in part on the geographic location of the call destination. 23. The method of claim 20, wherein the selecting the relay is based in part on improving the latency of the communication. 24. The method of claim 20, wherein the selecting the relay is based in part on improving the travel time of the communication. 25. The method of claim 20, wherein the selecting the relay is based in part on limiting the geographical area traveled by the streaming packets of the communication. 26. The method of claim 20, wherein the relay is associated with a point-of-presence geographically separated from other points-of-presence. 27. The method of claim 20, wherein the SIP signaling message originating from the caller is a SIP Invite. 28. The method of claim 20, wherein the server operates as a pre-proxy server. 29. The method of claim 20, wherein the streaming packets are Real Time Transport Protocol (RTP) packets. 30. The method of claim 20, wherein the selected relay is a RTP relay. 31. The method of claim 20, wherein the server is separated from the relay. 32. 
An internet telephony server for setting up a communication of streaming packets, the server configured to receive and process a Session Initiation Protocol (SIP) signaling message originated from a caller requesting a communication of streaming packets to a call destination, wherein the server is configured to process information in the SIP signaling message to select a relay from a plurality of relays available to relay streaming packets of the communication between the caller and the call destination, wherein the server makes the selection based at least on the quality of the communication. 33. The server of claim 32, further configured to select the relay based in part on the geographic location of the caller. 34. The server of claim 32, further configured to select the relay based in part on the geographic location of the call destination. 35. The server of claim 32, further configured to select the relay to decrease the latency of the communication. 36. The server of claim 32, further configured to select the relay to decrease the travel time of the communication. 37. The server of claim 32, further configured to select the relay to limit the geographical area traveled by the streaming packets of the communication. 38. The server of claim 32, wherein each relay is associated with a point-of-presence geographically separated from other points-of-presence. 39. The server of claim 32, wherein the signaling message is a SIP Invite. 40. The server of claim 32, wherein the server operates as a pre-proxy server. 41. The server of claim 32, wherein the streaming packets are Real Time Transport Protocol (RTP) packets. 42. The server of claim 32, wherein said relay comprises a RTP relay. 43. The server of claim 32, wherein the server is separated from the plurality of relays.
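The geography-based relay selection of claims 9, 10 and 33-36 can be sketched as picking the relay that minimizes the total caller-to-relay-to-destination path length, a crude proxy for latency. The relay records, coordinate representation, and Euclidean distance metric below are assumptions for illustration, not the claimed implementation.

```python
def select_relay(relays, caller_loc, dest_loc):
    """Pick the RTP relay minimizing the estimated path length
    caller -> relay -> destination (a simple stand-in for the
    'quality of the communication' criterion in the claims)."""
    def dist(a, b):
        # planar Euclidean distance between two (x, y) points
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    return min(relays,
               key=lambda r: dist(caller_loc, r["loc"]) + dist(r["loc"], dest_loc))
```

A server processing a SIP Invite could apply such a rule per call setup, choosing among points-of-presence so that media packets avoid detours through distant geography.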
A method and apparatus for enhanced Internet telephony ensures that communication between a source and destination is not interrupted by common network address translation. According to one aspect of the invention, communication may continue through a router that employs network address translation.1-7. (canceled) 8. An internet telephony system configured to use Session Initiation Protocol (SIP) signaling to setup a communication of streaming packets, the internet telephony system comprising: a relay configured to relay streaming packets of the communication between a caller and a call destination; a server configured to receive, process and transmit SIP signaling messages to setup the communication between the caller and the call destination and to select the relay to use for the communication, the selection being based at least on the quality of the communication. 9. The internet telephony system of claim 8, wherein the server is configured to select the relay based in part on the geographic location of the caller. 10. The internet telephony system of claim 8, wherein the server is configured to select the relay based in part on the geographic location of the call destination. 11. The internet telephony system of claim 8, wherein the server is configured to select the relay to decrease the latency of the communication. 12. The internet telephony system of claim 8, wherein the server is configured to select the relay to decrease the travel time of the communication. 13. The internet telephony system of claim 8, wherein the server is configured to select the relay to limit the geographical area traveled by the streaming packets of the communication. 14. The internet telephony system of claim 8, wherein the relay is associated with a point-of-presence geographically separated from other points-of-presence. 15. The internet telephony system of claim 8, wherein the server is configured to select the relay based on a SIP Invite message. 16. 
The internet telephony system of claim 8, wherein said server is a pre-proxy server. 17. The internet telephony system of claim 8, wherein said streaming packet protocol is the Real Time Transport Protocol (RTP). 18. The internet telephony system of claim 8, wherein the relay is a RTP relay. 19. The internet telephony system of claim 8, wherein the server is separated from the relay. 20. A method of providing internet service, the method comprising: providing a server configured to receive, process and transmit Session Initiation Protocol (SIP) signaling messages; receiving a signaling message originating from a caller requesting a communication to a call destination; selecting a relay for use during the communication based at least on the quality of the communication; relaying streaming packets of the communication via the selected relay between the caller and the call destination. 21. The method of claim 20, wherein the selecting the relay is based in part on the geographic location of the caller. 22. The method of claim 20, wherein the selecting the relay is based in part on the geographic location of the call destination. 23. The method of claim 20, wherein the selecting the relay is based in part on improving the latency of the communication. 24. The method of claim 20, wherein the selecting the relay is based in part on improving the travel time of the communication. 25. The method of claim 20, wherein the selecting the relay is based in part on limiting the geographical area traveled by the streaming packets of the communication. 26. The method of claim 20, wherein the relay is associated with a point-of-presence geographically separated from other points-of-presence. 27. The method of claim 20, wherein the SIP signaling message originating from the caller is a SIP Invite. 28. The method of claim 20, wherein the server operates as a pre-proxy server. 29. The method of claim 20, wherein the streaming packets are Real Time Transport Protocol (RTP) packets. 30. 
The method of claim 20, wherein the selected relay is a RTP relay. 31. The method of claim 20, wherein the server is separated from the relay. 32. An internet telephony server for setting up a communication of streaming packets, the server configured to receive and process a Session Initiation Protocol (SIP) signaling message originated from a caller requesting a communication of streaming packets to a call destination, wherein the server is configured to process information in the SIP signaling message to select a relay from a plurality of relays available to relay streaming packets of the communication between the caller and the call destination, wherein the server makes the selection based at least on the quality of the communication. 33. The server of claim 32, further configured to select the relay based in part on the geographic location of the caller. 34. The server of claim 32, further configured to select the relay based in part on the geographic location of the call destination. 35. The server of claim 32, further configured to select the relay to decrease the latency of the communication. 36. The server of claim 32, further configured to select the relay to decrease the travel time of the communication. 37. The server of claim 32, further configured to select the relay to limit the geographical area traveled by the streaming packets of the communication. 38. The server of claim 32, wherein each relay is associated with a point-of-presence geographically separated from other points-of-presence. 39. The server of claim 32, wherein the signaling message is a SIP Invite. 40. The server of claim 32, wherein the server operates as a pre-proxy server. 41. The server of claim 32, wherein the streaming packets are Real Time Transport Protocol (RTP) packets. 42. The server of claim 32, wherein said relay comprises a RTP relay. 43. The server of claim 32, wherein the server is separated from the plurality of relays.
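The relay-selection step the claims describe (pick a relay based on communication quality, latency, travel time, or caller/destination geography) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the `Relay` type, region names, and latency figures are all assumptions, and "quality" is reduced here to estimated caller-to-relay-to-destination latency.

```python
# Hypothetical sketch of quality-based relay selection: choose the relay
# that minimizes the estimated caller -> relay -> destination latency.
from dataclasses import dataclass

@dataclass
class Relay:
    name: str
    latency_ms: dict  # estimated one-way latency from this relay to each region

def select_relay(relays, caller_region, destination_region):
    """Return the relay minimizing caller->relay->destination latency."""
    def path_latency(relay):
        return (relay.latency_ms.get(caller_region, float("inf"))
                + relay.latency_ms.get(destination_region, float("inf")))
    return min(relays, key=path_latency)

# Illustrative points-of-presence with assumed latencies.
relays = [
    Relay("us-east", {"us": 20, "eu": 90}),
    Relay("eu-west", {"us": 95, "eu": 30}),
]
best = select_relay(relays, "us", "eu")
print(best.name)
```

A real system would feed this selector from the SIP Invite (caller and destination addresses) before relaying the RTP stream through the chosen point-of-presence.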
2,400
7,936
7,936
15,277,902
2,444
The current document is directed to methods and systems that efficiently distribute virtual-machine images (“VM images”) among servers within large, distributed-computer-system-implemented IAAS platforms to facilitate temporally and computationally efficient instantiation of virtual machines within the servers. In implementations discussed below, VM images are stored in a distributed fashion throughout one or more distributed computing systems, using several different VM-image-distribution models, in order to balance computational-resource usage, temporal constraints, and other factors and considerations related to VM-image distribution and VM instantiation.
1. A distributed VM-image-distribution subsystem of a management system within a distributed computer system having multiple servers, multiple data-storage devices, and one or more internal networks, the distributed VM-image-distribution subsystem implemented as stored computer instructions that, when executed on one or more processors of one or more computer systems, control the computer system to: organize data stores within the distributed computer system that store VM images into groups, each group including an image data store and multiple associated local data stores; classify each VM image as one of an EAGER image and an ON_DEMAND image; when an EAGER image is created and stored in a first image data store, propagate the VM image to the remaining image data stores within the distributed computer system and propagate the VM image from each image data store to the local data stores associated with the image data store; when an ON_DEMAND image is created and stored in a first image data store, propagate the VM image to the remaining image data stores within the distributed computer system; and when the management system processes a request to instantiate an ON_DEMAND image on a server, transfer the ON_DEMAND image from an image data store to a local data store accessible to the server. 2. The distributed VM-image-distribution subsystem of claim 1 further comprising: local distributed-VM-image-distribution instances, each executing within a management server and each associated with an image data store and multiple associated local data stores. 3. The distributed VM-image-distribution subsystem of claim 2 wherein each local distributed-VM-image-distribution instance includes a background subsystem responsible for initiating batches of VM-image copy operations through a management network and an orchestrator subsystem that manages VM-image propagation through the distributed computer system. 4. 
The distributed VM-image-distribution subsystem of claim 2 wherein, when an ON_DEMAND image is created and stored in a first image data store and when the ON_DEMAND image has been propagated to the remaining image data stores within the distributed computer system, the distributed VM-image-distribution subsystem generates a signal to the management system, following reception of which the management system indicates successful completion of creation of the ON_DEMAND image and availability of the ON_DEMAND image for instantiation. 5. The distributed VM-image-distribution subsystem of claim 2 wherein, when, following creation of an EAGER image and storing of the EAGER image in a first image data store, the EAGER image is successfully propagated to the remaining image data stores within the distributed computer system and propagated to a first batch of local data stores associated with each image data store, the distributed VM-image-distribution subsystem generates a signal to the management system, following reception of which the management system indicates successful completion of creation of the EAGER image and availability of the EAGER image for instantiation. 6. The distributed VM-image-distribution subsystem of claim 2 wherein EAGER images are restricted to sizes below a threshold size. 7. The distributed VM-image-distribution subsystem of claim 2 wherein sizes of ON_DEMAND images are not restricted. 8. The distributed VM-image-distribution subsystem of claim 2 wherein the EAGER and ON_DEMAND classifications of VM images are one of: fixed when VM images are created; and initially assigned when VM images are created and later modified, by the management system, when the instantiation pattern of a VM image is incompatible with the classification assigned to the VM image. 9. 
A method carried out in a distributed VM-image-distribution subsystem of a management system within a distributed computer system having multiple servers, multiple data-storage devices, and one or more internal networks, the method comprising: organizing data stores within the distributed computer system that store VM images into groups, each group including an image data store and multiple associated local data stores; classifying each VM image as one of an EAGER image and an ON_DEMAND image; when an EAGER image is created and stored in a first image data store, propagating the VM image to the remaining image data stores within the distributed computer system and propagating the VM image from each image data store to the local data stores associated with the image data store; when an ON_DEMAND image is created and stored in a first image data store, propagating the VM image to the remaining image data stores within the distributed computer system; and when the management system processes a request to instantiate an ON_DEMAND image on a server, transferring the ON_DEMAND image from an image data store to a local data store accessible to the server. 10. The method of claim 9 wherein the distributed-VM-image-distribution subsystem comprises: local distributed-VM-image-distribution instances, each executing within a management server and each associated with an image data store and multiple associated local data stores. 11. The method of claim 10 wherein each local distributed-VM-image-distribution instance includes a background subsystem responsible for initiating batches of VM-image copy operations through a management network and an orchestrator subsystem that manages VM-image propagation through the distributed computer system. 12. 
The method of claim 10 wherein, when an ON_DEMAND image is created and stored in a first image data store and when the ON_DEMAND image has been propagated to the remaining image data stores within the distributed computer system, generating a signal to the management system, following reception of which the management system indicates successful completion of creation of the ON_DEMAND image and availability of the ON_DEMAND image for instantiation. 13. The method of claim 10 wherein, when, following creation of an EAGER image and storing of the EAGER image in a first image data store, the EAGER image is successfully propagated to the remaining image data stores within the distributed computer system and propagated to a first batch of local data stores associated with each image data store, generating a signal to the management system, following reception of which the management system indicates successful completion of creation of the EAGER image and availability of the EAGER image for instantiation. 14. The method of claim 10 wherein EAGER images are restricted to sizes below a threshold size. 15. The method of claim 10 wherein sizes of ON_DEMAND images are not restricted. 16. The method of claim 10 wherein the EAGER and ON_DEMAND classifications of VM images are one of: fixed when VM images are created; and initially assigned when VM images are created and later modified, by the management system, when the instantiation pattern of a VM image is incompatible with the classification assigned to the VM image. 17. 
Computer instructions stored on a physical data-storage device that, when executed by one or more processors in a distributed computer system having multiple servers, multiple data-storage devices, and one or more internal networks, control a distributed VM-image-distribution subsystem of a management system within the distributed computer system to: organize data stores within the distributed computer system that store VM images into groups, each group including an image data store and multiple associated local data stores; classify each VM image as one of an EAGER image and an ON_DEMAND image; when an EAGER image is created and stored in a first image data store, propagate the VM image to the remaining image data stores within the distributed computer system and propagate the VM image from each image data store to the local data stores associated with the image data store; when an ON_DEMAND image is created and stored in a first image data store, propagate the VM image to the remaining image data stores within the distributed computer system; and when the management system processes a request to instantiate an ON_DEMAND image on a server, transfer the ON_DEMAND image from an image data store to a local data store accessible to the server. 18. The stored computer instructions of claim 17 wherein the distributed-VM-image-distribution subsystem comprises local distributed-VM-image-distribution instances, each executing within a management server and each associated with an image data store and multiple associated local data stores; and wherein each local distributed-VM-image-distribution instance includes a background subsystem responsible for initiating batches of VM-image copy operations through a management network and an orchestrator subsystem that manages VM-image propagation through the distributed computer system. 19. 
The stored computer instructions of claim 17 wherein, when an ON_DEMAND image is created and stored in a first image data store and when the ON_DEMAND image has been propagated to the remaining image data stores within the distributed computer system, generating a signal to the management system, following reception of which the management system indicates successful completion of creation of the ON_DEMAND image and availability of the ON_DEMAND image for instantiation. 20. The stored computer instructions of claim 17 wherein, when, following creation of an EAGER image and storing of the EAGER image in a first image data store, the EAGER image is successfully propagated to the remaining image data stores within the distributed computer system and propagated to a first batch of local data stores associated with each image data store, generating a signal to the management system, following reception of which the management system indicates successful completion of creation of the EAGER image and availability of the EAGER image for instantiation.
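The EAGER versus ON_DEMAND propagation policy these claims describe can be sketched as follows. This is a minimal sketch under assumed names (`ImageDistributor`, store names, image filenames are all illustrative, and the real subsystem's batching, orchestration, and signaling are omitted): an EAGER image is pushed to every image data store and on to each store's associated local data stores, while an ON_DEMAND image reaches only the image data stores and is copied to a local data store lazily, when first instantiated there.

```python
# Hypothetical sketch of the two VM-image-distribution models in the claims.
class ImageDistributor:
    def __init__(self, groups):
        # groups maps each image data store to its associated local data stores.
        self.groups = groups
        self.contents = {store: set() for store in groups}
        for local_stores in groups.values():
            for local in local_stores:
                self.contents[local] = set()

    def create(self, image, classification, first_store):
        # Store in the first image data store, then propagate to the rest.
        self.contents[first_store].add(image)
        for store in self.groups:
            self.contents[store].add(image)
            if classification == "EAGER":
                # EAGER images also propagate to every associated local store.
                for local in self.groups[store]:
                    self.contents[local].add(image)

    def instantiate(self, image, local_store):
        # ON_DEMAND path: transfer to the server's local store on first use.
        self.contents[local_store].add(image)

dist = ImageDistributor({"img-A": ["local-1", "local-2"], "img-B": ["local-3"]})
dist.create("eager.vmdk", "EAGER", "img-A")
dist.create("lazy.vmdk", "ON_DEMAND", "img-A")
dist.instantiate("lazy.vmdk", "local-3")
```

The trade-off the abstract alludes to falls out directly: EAGER pre-pays the copy cost for fast instantiation everywhere, while ON_DEMAND defers it until a server actually needs the image.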
2,400
7,937
7,937
15,933,094
2,484
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame. The video frame is analyzed to detect an object within the video frame. Based on the analysis, analysis metadata indicating the analysis performed and an indication of the detected object can be generated and then encapsulated, together with the video frame, in a new message. The new message can be published to a second channel.
1. A method comprising: receiving a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame; analyzing, by a processing device, the video frame to detect an object within the video frame; generating, by the processing device, analysis metadata indicating an analysis performed on the video frame and an indication of the detected object; generating, by the processing device, a second message encapsulating the video frame and the analysis metadata; and publishing the second message to a second channel of the plurality of channels. 2. The method of claim 1, wherein analyzing the video frame to detect the object comprises: applying a computer vision process or a machine learning process to the video frame. 3. The method of claim 1, further comprising: receiving a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyzing the second metadata to determine that analysis to detect the object has been performed; and publishing the third message to the second channel. 4. The method of claim 1, further comprising: receiving a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyzing the second metadata to determine that the second video frame does not have characteristics to enable analyzing to detect the object; generating second analysis metadata indicating that the second video frame does not have characteristics to enable analyzing to detect the object; generating a fourth message encapsulating the second video frame and the second analysis metadata; and publishing the fourth message to the second channel. 5. 
The method of claim 1, wherein the video frame and the metadata describing characteristics of the video frame are each tagged with the same sequence number. 6. The method of claim 1, further comprising: receiving the second message on the second channel of the plurality of channels; analyzing the video frame to detect a second object within the video frame; generating second analysis metadata indicating the analysis performed and an indication of the detected second object; generating a third message encapsulating the video frame and the second analysis metadata; and publishing the third message on a third channel of the plurality of channels. 7. The method of claim 1, further comprising: receiving a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyzing the second metadata to determine that the second video frame does not have the object; generating second analysis metadata indicating the analysis performed and an indication that the object was not identified in the video frame; generating a third message encapsulating the second video frame and the second analysis metadata; and publishing the third message to the second channel. 8. The method of claim 1, further comprising: selecting the second channel from the plurality of channels in response to detecting the object in the video frame. 9. The method of claim 1, further comprising: publishing a copy of the second message in response to detecting the object in the video frame. 10. 
A system, comprising: a computer processing device programmed to perform operations to: receive a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame; analyze the video frame to detect an object within the video frame; generate analysis metadata indicating an analysis performed on the video frame and an indication of the detected object; generate a second message encapsulating the video frame and the analysis metadata; and publish the second message to a second channel of the plurality of channels. 11. The system of claim 10, wherein to analyze the video frame to detect the object the computer processing device is further to: apply a computer vision process or a machine learning process to the video frame. 12. The system of claim 10, wherein the computer processing device is further to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that analysis to detect the object has been performed; and publish the third message to the second channel. 13. 
The system of claim 10, wherein the computer processing device is further to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that the second video frame does not have characteristics to enable analyzing to detect the object; generate second analysis metadata indicating that the second video frame does not have characteristics to enable analyzing to detect the object; generate a fourth message encapsulating the second video frame and the second analysis metadata; and publish the fourth message to the second channel. 14. The system of claim 10, wherein the video frame and the metadata describing characteristics of the video frame are each tagged with a same sequence number. 15. The system of claim 10, wherein the computer processing device is further to: receive the second message on the second channel of the plurality of channels; analyze the video frame to detect a second object within the video frame; generate second analysis metadata indicating the analysis performed and an indication of the detected second object; generate a third message encapsulating the video frame and the second analysis metadata; and publish the third message on a third channel of the plurality of channels. 16. 
The system of claim 10, wherein the computer processing device is further to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that the second video frame does not have the object; generate second analysis metadata indicating the analysis performed and an indication that the object was not identified in the video frame; generate a third message encapsulating the second video frame and the second analysis metadata; and publish the third message to the second channel. 17. The system of claim 10, wherein the computer processing device is further to select the second channel from the plurality of channels in response to detecting the object in the video frame. 18. The system of claim 10, wherein the computer processing device is further to publish a copy of the second message in response to detecting the object in the video frame. 19. A non-transitory computer-readable medium having instructions stored thereon that, when executed by a computer processing device, cause the computer processing device to: receive a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame; analyze, by the computer processing device, the video frame to detect an object within the video frame; generate, by the computer processing device, analysis metadata indicating an analysis performed on the video frame and an indication of the detected object; generate, by the computer processing device, a second message encapsulating the video frame and the analysis metadata; and publish the second message to a second channel of the plurality of channels. 20. 
The non-transitory computer-readable medium of claim 19, wherein the instructions further cause the computer processing device to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that analysis to detect the object has been performed; and publish the third message to the second channel.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame. The video frame is analyzed to detect an object within the video frame. Based on the analysis, the analysis metadata indicating the analysis performed and an indication of the detected object can be generated and then encapsulated in a new message with the video frame. The new message can be published to a second channel. 1. A method comprising: receiving a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame; analyzing, by a processing device, the video frame to detect an object within the video frame; generating, by the processing device, analysis metadata indicating an analysis performed on the video frame and an indication of the detected object; generating, by the processing device, a second message encapsulating the video frame and the analysis metadata; and publishing the second message to a second channel of the plurality of channels. 2. The method of claim 1, wherein analyzing the video frame to detect the object comprises: applying a computer vision process or a machine learning process to the video frame. 3. The method of claim 1, further comprising: receiving a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyzing the second metadata to determine that analysis to detect the object has been performed; and publishing the third message to the second channel. 4. 
The method of claim 1, further comprising: receiving a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyzing the second metadata to determine that the second video frame does not have characteristics to enable analyzing to detect the object; generating second analysis metadata indicating that the second video frame does not have characteristics to enable analyzing to detect the object; generating a fourth message encapsulating the second video frame and the second analysis metadata; and publishing the fourth message to the second channel. 5. The method of claim 1, wherein the video frame and the metadata describing characteristics of the video frame are each tagged with the same sequence number. 6. The method of claim 1, further comprising: receiving the second message on the second channel of the plurality of channels; analyzing the video frame to detect a second object within the video frame; generating second analysis metadata indicating the analysis performed and an indication of the detected second object; generating a third message encapsulating the video frame and the second analysis metadata; and publishing the third message on a third channel of the plurality of channels. 7. The method of claim 1, further comprising: receiving a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyzing the second metadata to determine that the second video frame does not have the object; generating second analysis metadata indicating the analysis performed and an indication that the object was not identified in the video frame; generating a third message encapsulating the second video frame and the second analysis metadata; and publishing the third message to the second channel. 8. 
The method of claim 1, further comprising: selecting the second channel from the plurality of channels in response to detecting the object in the video frame. 9. The method of claim 1, further comprising: publishing a copy of the second message in response to detecting the object in the video frame. 10. A system, comprising: a computer processing device programmed to perform operations to: receive a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame; analyze the video frame to detect an object within the video frame; generate analysis metadata indicating an analysis performed on the video frame and an indication of the detected object; generate a second message encapsulating the video frame and the analysis metadata; and publish the second message to a second channel of the plurality of channels. 11. The system of claim 10, wherein to analyze the video frame to detect the object the computer processing device is further to: apply a computer vision process or a machine learning process to the video frame. 12. The system of claim 10, wherein the computer processing device is further to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that analysis to detect the object has been performed; and publish the third message to the second channel. 13. 
The system of claim 10, wherein the computer processing device is further to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that the second video frame does not have characteristics to enable analyzing to detect the object; generate second analysis metadata indicating that the second video frame does not have characteristics to enable analyzing to detect the object; generate a fourth message encapsulating the second video frame and the second analysis metadata; and publish the fourth message to the second channel. 14. The system of claim 10, wherein the video frame and the metadata describing characteristics of the video frame are each tagged with a same sequence number. 15. The system of claim 10, wherein the computer processing device is further to: receive the second message on the second channel of the plurality of channels; analyze the video frame to detect a second object within the video frame; generate second analysis metadata indicating the analysis performed and an indication of the detected second object; generate a third message encapsulating the video frame and the second analysis metadata; and publish the third message on a third channel of the plurality of channels. 16. 
The system of claim 10, wherein the computer processing device is further to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that the second video frame does not have the object; generate second analysis metadata indicating the analysis performed and an indication that the object was not identified in the video frame; generate a third message encapsulating the second video frame and the second analysis metadata; and publish the third message to the second channel. 17. The system of claim 10, wherein the computer processing device is further to select the second channel from the plurality of channels in response to detecting the object in the video frame. 18. The system of claim 10, wherein the computer processing device is further to publish a copy of the second message in response to detecting the object in the video frame. 19. A non-transitory computer-readable medium having instructions stored thereon that, when executed by a computer processing device, cause the computer processing device to: receive a first message on a first channel of a plurality of channels, wherein the first message comprises a video frame of a plurality of video frames and metadata describing characteristics of the video frame; analyze, by the computer processing device, the video frame to detect an object within the video frame; generate, by the computer processing device, analysis metadata indicating an analysis performed on the video frame and an indication of the detected object; generate, by the computer processing device, a second message encapsulating the video frame and the analysis metadata; and publish the second message to a second channel of the plurality of channels. 20. 
The non-transitory computer-readable medium of claim 19, wherein the instructions further cause the computer processing device to: receive a third message on the first channel of the plurality of channels, wherein the third message comprises a second video frame and second metadata describing characteristics of the second video frame; analyze the second metadata to determine that analysis to detect the object has been performed; and publish the third message to the second channel.
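The claims above describe a pipeline in which a message carrying a video frame and its metadata is received on one channel, the frame is analyzed for an object, analysis metadata is attached, and a new message is published to a second channel. A minimal sketch of that flow, using a hypothetical in-memory publish/subscribe bus (channel names, fields, and the `detect_object` placeholder are all illustrative assumptions, not the patented implementation):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Hypothetical in-memory stand-in for the claimed "plurality of channels";
# a real deployment would use a message broker.
channels: Dict[str, List["Message"]] = {"frames": [], "objects": []}

@dataclass
class Message:
    frame: Dict[str, Any]                 # the video frame payload (illustrative)
    metadata: Dict[str, Any]              # characteristics of the frame
    analysis: Dict[str, Any] = field(default_factory=dict)

def publish(channel: str, msg: Message) -> None:
    channels[channel].append(msg)

def detect_object(frame: Dict[str, Any]) -> bool:
    # Placeholder for the claimed computer-vision / machine-learning step.
    return frame.get("has_person", False)

def analyze_and_republish() -> None:
    # Receive each first message on the first channel ("frames"), analyze
    # the frame, generate analysis metadata, encapsulate frame + analysis
    # in a second message, and publish it to the second channel ("objects").
    for msg in channels["frames"]:
        detected = detect_object(msg.frame)
        analysis = {"analysis": "object-detection", "object_detected": detected}
        publish("objects", Message(frame=msg.frame, metadata=msg.metadata,
                                   analysis=analysis))

publish("frames", Message(frame={"has_person": True}, metadata={"seq": 1}))
analyze_and_republish()
```

Keeping the original metadata (here the `seq` tag) alongside the new analysis metadata is what lets a downstream stage, as in dependent claims 3 and 12, skip frames whose analysis has already been performed.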
2,400
7,938
7,938
14,514,690
2,445
Systems, methods and devices relating to coordinated network communication (e.g. transport-layer communication) of client requests and client request responses between a client and a distributed network service system, the network service nodes of the distributed network service system comprising a storage resource, a network interface, and a computer processor module for sending a coordinated network communication of data request responses to the client upon receipt of (1) network communication of client requests from clients; or (2) communication data channel information from another network service node. There is also provided a network switching device for managing a coordinated network communication of data transactions between clients and a distributed network service system comprising a plurality of network service nodes, the network switching device configured to manage higher-layer data units to coordinate a network communication of data transactions between clients and a distributed network service system.
1. A network service node for use in a distributed network service system comprising a plurality of network service nodes supporting distributed network communications with a client, the network service node comprising: a storage resource for use by at least one client-accessible service; a network interface to the network service system; and a processor configured to process a client request when related to said at least one client-accessible service upon both: receipt of said client request when directed to the node; and indirect receipt of said client request, when directed to another node of the distributed network service system, along with related communication channel state information required for the node to become stateful with the client in directly fulfilling said client request with the client. 2. The network service node of claim 1, wherein said processor is further configured to forward said client request along with said related communication channel state information to another node of the distributed network system upon said client request being unrelated to said at least one client-accessible service using said storage resource. 3. The network service node of claim 1, wherein when the node receives said client request from the client via a stateful connection with the client and said client request is unrelated to said at least one client-accessible service using said storage resource, said processor is further configured to forward said client request along with communication channel state information related to said stateful connection to another node of the distributed network system for processing. 4. 
The network service node of claim 1, wherein the distributed network communications are selected from the following group: physical-layer communications, datalink-layer communications, network-layer communications, transport-layer communications, session-layer communications, presentation-layer communications, and application-layer communications. 5. The network service node of claim 1, wherein the distributed network communications are connection-oriented resulting in a distributed network communication connection between the client and the network service node, and wherein said distributed network communication connection is migrated to another node upon forwarding said communication channel state information thereto. 6. The network service node of claim 5, wherein said distributed network communications are restarted after migration. 7. The network service node of claim 1, wherein the distributed network service system interfaces with the client via a network switching device, and wherein said communication channel state information is received from said network switching device. 8. The network service node of claim 1, wherein, upon said indirect receipt of said client request and said related communication channel state information, said processor is further configured to delay sending a client request response until one of expiry of a predetermined time interval and receipt of a send confirmation from one of the other network service nodes. 9. The network service node of claim 1, wherein the distributed network communications are characterized as one of connection-oriented and connectionless. 10. The network service node of claim 1, wherein the distributed network communications are characterized as one of stream abstracted and datagram abstracted. 11. 
The network service node of claim 1, wherein the network service node is a storage node, wherein the client-accessible service is data, and wherein the distributed network service system is a distributed storage system. 12. A distributed network service system accessible by a client, comprising: a plurality of network service nodes, each node comprising: a storage resource associated therewith for use by at least one client-accessible service; and a processor configured to process a given client request when related to said at least one client-accessible service using said storage resource upon both: receipt of said given client request when directed to said given node; and indirect receipt of said client request, when directed to another node of the distributed network service system, along with related communication channel state information required for said given node to become stateful with the client in directly fulfilling said client request with the client; and a network switching device interfacing between said plurality of network service nodes and the client to direct said given client request to said given node in fulfilling said given client request. 13. The distributed network service system of claim 12, wherein: said network switching device is configured to identify a destination node identified by said given client request and direct said given client request to said destination node irrespective of whether said given client request is related to said at least one client-accessible service using said storage resource of said destination node; and said destination node is configured to reroute said given client request to another node upon identifying that said client request is unrelated to said at least one client-accessible service using said storage resource of said destination node. 14. 
The distributed network service system of claim 12, wherein said network switching device is configured to: direct said given client request to a destination node identified by said given client request upon determining that said client request is related to said at least one client-accessible service using said storage resource of said destination node; and otherwise determine that said given client request is related to said at least one client-accessible service using said storage resource of another node, and reroute said given client request to said other node along with said related communication channel state information. 15. The distributed network service system of claim 12, wherein at least one of said network service nodes is a storage node and the distributed network service system acts as a distributed storage system. 16. A network switching device for interfacing between a client and a plurality of network service nodes in a distributed network service system, wherein each of the network nodes comprises a storage resource associated therewith for use by at least one client-accessible service, and a processor configured to process a given client request when related to the at least one client-accessible service on the storage resource; the switching device comprising: a network interface to receive a given client request from the client and route said given client request to a selected one of the network service nodes for processing; and a processor configured to route said given client request via said network interface to a destination node identified by said given client request upon determining that said client request is related to said at least one client-accessible service using said storage resource of said destination node; and otherwise determine that said given client request is related to said at least one client-accessible service using said storage resource of another node, and reroute said given client request to said other node along with 
related communication channel state information required for said other node to become stateful with the client in directly fulfilling said client request with the client. 17. The network switching device of claim 16, wherein at least one of the network service nodes is a storage node and the distributed network service system acts as a distributed storage system. 18. A computer-readable medium having statements and instructions stored thereon for implementation by a processor to route a client request to a selected network service node in a distributed network service system in fulfilling the client request, wherein each of the network nodes comprises a storage resource associated therewith for use by at least one client-accessible service, and a processor configured to process a given client request when related to the at least one client-accessible service on the storage resource, the statements and instructions for: routing the client request to a destination node identified by the client request upon determining that the client request is related to the at least one client-accessible service using the storage resource of said destination node; and otherwise determining that the client request is related to the at least one client-accessible service using the storage resource of another node, and rerouting the client request to said other node along with related communication channel state information required for said other node to become stateful with the client in directly fulfilling the client request with the client.
Systems, methods and devices relating to coordinated network communication (e.g. transport-layer communication) of client requests and client request responses between a client and a distributed network service system, the network service nodes of the distributed network service system comprising a storage resource, a network interface, and a computer processor module for sending a coordinated network communication of data request responses to the client upon receipt of (1) network communication of client requests from clients; or (2) communication data channel information from another network service node. There is also provided a network switching device for managing a coordinated network communication of data transactions between clients and a distributed network service system comprising a plurality of network service nodes, the network switching device configured to manage higher-layer data units to coordinate a network communication of data transactions between clients and a distributed network service system. 1. A network service node for use in a distributed network service system comprising a plurality of network service nodes supporting distributed network communications with a client, the network service node comprising: a storage resource for use by at least one client-accessible service; a network interface to the network service system; and a processor configured to process a client request when related to said at least one client-accessible service upon both: receipt of said client request when directed to the node; and indirect receipt of said client request, when directed to another node of the distributed network service system, along with related communication channel state information required for the node to become stateful with the client in directly fulfilling said client request with the client. 2. 
The network service node of claim 1, wherein said processor is further configured to forward said client request along with said related communication channel state information to another node of the distributed network system upon said client request being unrelated to said at least one client-accessible service using said storage resource. 3. The network service node of claim 1, wherein when the node receives said client request from the client via a stateful connection with the client and said client request is unrelated to said at least one client-accessible service using said storage resource, said processor is further configured to forward said client request along with communication channel state information related to said stateful connection to another node of the distributed network system for processing. 4. The network service node of claim 1, wherein the distributed network communications are selected from the following group: physical-layer communications, datalink-layer communications, network-layer communications, transport-layer communications, session-layer communications, presentation-layer communications, and application-layer communications. 5. The network service node of claim 1, wherein the distributed network communications are connection-oriented resulting in a distributed network communication connection between the client and the network service node, and wherein said distributed network communication connection is migrated to another node upon forwarding said communication channel state information thereto. 6. The network service node of claim 5, wherein said distributed network communications are restarted after migration. 7. The network service node of claim 1, wherein the distributed network service system interfaces with the client via a network switching device, and wherein said communication channel state information is received from said network switching device. 8. 
The network service node of claim 1, wherein, upon said indirect receipt of said client request and said related communication channel state information, said processor is further configured to delay sending a client request response until one of expiry of a predetermined time interval and receipt of a send confirmation from one of the other network service nodes. 9. The network service node of claim 1, wherein the distributed network communications are characterized as one of connection-oriented and connectionless. 10. The network service node of claim 1, wherein the distributed network communications are characterized as one of stream abstracted and datagram abstracted. 11. The network service node of claim 1, wherein the network service node is a storage node, wherein the client-accessible service is data, and wherein the distributed network service system is a distributed storage system. 12. A distributed network service system accessible by a client, comprising: a plurality of network service nodes, each node comprising: a storage resource associated therewith for use by at least one client-accessible service; and a processor configured to process a given client request when related to said at least one client-accessible service using said storage resource upon both: receipt of said given client request when directed to said given node; and indirect receipt of said client request, when directed to another node of the distributed network service system, along with related communication channel state information required for said given node to become stateful with the client in directly fulfilling said client request with the client; and a network switching device interfacing between said plurality of network service nodes and the client to direct said given client request to said given node in fulfilling said given client request. 13. 
The distributed network service system of claim 12, wherein: said network switching device is configured to identify a destination node identified by said given client request and direct said given client request to said destination node irrespective of whether said given client request is related to said at least one client-accessible service using said storage resource of said destination node; and said destination node is configured to reroute said given client request to another node upon identifying that said client request is unrelated to said at least one client-accessible service using said storage resource of said destination node. 14. The distributed network service system of claim 12, wherein said network switching device is configured to: direct said given client request to a destination node identified by said given client request upon determining that said client request is related to said at least one client-accessible service using said storage resource of said destination node; and otherwise determine that said given client request is related to said at least one client-accessible service using said storage resource of another node, and reroute said given client request to said other node along with said related communication channel state information. 15. The distributed network service system of claim 12, wherein at least one of said network service nodes is a storage node and the distributed network service system acts as a distributed storage system. 16. 
A network switching device for interfacing between a client and a plurality of network service nodes in a distributed network service system, wherein each of the network nodes comprises a storage resource associated therewith for use by at least one client-accessible service, and a processor configured to process a given client request when related to the at least one client-accessible service on the storage resource; the switching device comprising: a network interface to receive a given client request from the client and route said given client request to a selected one of the network service nodes for processing; and a processor configured to route said given client request via said network interface to a destination node identified by said given client request upon determining that said client request is related to said at least one client-accessible service using said storage resource of said destination node; and otherwise determine that said given client request is related to said at least one client-accessible service using said storage resource of another node, and reroute said given client request to said other node along with related communication channel state information required for said other node to become stateful with the client in directly fulfilling said client request with the client. 17. The network switching device of claim 16, wherein at least one of the network service nodes is a storage node and the distributed network service system acts as a distributed storage system. 18. 
A computer-readable medium having statements and instructions stored thereon for implementation by a processor to route a client request to a selected network service node in a distributed network service system in fulfilling the client request, wherein each of the network nodes comprises a storage resource associated therewith for use by at least one client-accessible service, and a processor configured to process a given client request when related to the at least one client-accessible service on the storage resource, the statements and instructions for: routing the client request to a destination node identified by the client request upon determining that the client request is related to the at least one client-accessible service using the storage resource of said destination node; and otherwise determining that the client request is related to the at least one client-accessible service using the storage resource of another node, and rerouting the client request to said other node along with related communication channel state information required for said other node to become stateful with the client in directly fulfilling the client request with the client.
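The claims above turn on one routing rule: a node serves a request directly when it hosts the related service, and otherwise forwards the request together with the communication channel state so the receiving node can become stateful with the client. A minimal sketch of that rule, with hypothetical node names, service names, and a `conn_id` field standing in for the channel state:

```python
class Node:
    """Hypothetical network service node; names and fields are illustrative."""

    def __init__(self, name: str, services: list[str]) -> None:
        self.name = name
        # Services hosted on this node's storage resource.
        self.services = set(services)

    def handle(self, request: dict, channel_state: dict, nodes: list["Node"]) -> str:
        # Direct receipt: process locally when the request relates to a
        # client-accessible service on this node's storage resource.
        if request["service"] in self.services:
            return f"{self.name} served {request['service']} (conn={channel_state['conn_id']})"
        # Otherwise forward the request along with the communication channel
        # state, so the other node can become stateful with the client and
        # fulfil the request directly (the claims' "indirect receipt").
        for other in nodes:
            if other is not self and request["service"] in other.services:
                return other.handle(request, channel_state, nodes)
        raise LookupError("no node hosts the requested service")

nodes = [Node("node-a", ["blob-store"]), Node("node-b", ["queue"])]
# Request arrives at node-a but relates to node-b's service, so it is
# rerouted together with the channel state.
result = nodes[0].handle({"service": "queue"}, {"conn_id": 42}, nodes)
```

Because the channel state travels with the forwarded request, the answering node can respond to the client on the original connection, which is what lets the claims describe the connection itself as "migrated" rather than torn down and re-established.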
2,400
7,939
7,939
12,087,839
2,426
The invention concerns a gateway comprising means for receiving the first frames of a digital video broadcasting service. The gateway comprises: means for determining data representative of a time-slice, means for encapsulating each of the first service frames in a second frame comprising said data representative of a time-slice, and means for transmitting each second frame over a wireless local area network to a digital audio/video terminal.
1. Gateway comprising: a receiver of first frames of a digital video broadcasting service, a module determining data representative of time slicing, an encapsulator encapsulating each of the first service frames in a second frame comprising said data representative of a time-slice, and a transmitter transmitting over a wireless local area network each second frame to a digital audio/video terminal. 2. Gateway according to claim 1, wherein said gateway comprises an inserter inserting data representative of a time-slice in each second frame according to a session description protocol. 3. Gateway according to claim 2, wherein said session description protocol is of the SAP-SDP type. 4. Gateway according to claim 1, wherein the receiver of first frames of a digital video broadcasting service is associated with a wired network. 5. Gateway according to claim 1, wherein the means for receiving the first frames of a digital video broadcasting service are associated with a wireless network. 6. Gateway according to claim 1, wherein said wireless network is of the IEEE 802.11, Hiperlan, IEEE802.15 or IEEE802.16 type. 7. Gateway according to claim 1, wherein said gateway comprises a detector detecting a power saving mode of a terminal receiving said second frames so as to transmit each second frame when the destination terminal is in the listening mode. 8. Gateway according to claim 1, wherein said service is of the DVB-H type. 9. Gateway according to claim 1, wherein each second frame comprises a destination address corresponding to a single terminal. 10. Gateway according to claim 1, wherein each second frame comprises a destination address corresponding to several terminals. 11. Gateway according to claim 1, wherein it comprises a module determining audio/video terminals and a filter filtering the first service frames received, only the service frames intended for one of said determined terminals being encapsulated by said encapsulation means. 12. 
Gateway according to claim 1, wherein the receiver of first frames of a digital video broadcasting service are associated with a long-distance wireless broadcasting network. 13. Digital audio/video terminal, wherein it comprises: a receiver of second frames comprising data representative of a time-slice, the second frames being transmitted over a wireless local area network, and an extractor extracting first service frames from said second frames. 14. Method for broadcasting digital video services comprising a step for receiving the first frames of a digital video broadcasting service, wherein said method comprises: a step for determining data representative of a time-slice, a step for encapsulating each of the first service frames in a second frame comprising said data representative of a time-slice, and a step for transmitting over a wireless local area network each second frame to a digital audio/video terminal. 15. Method for receiving digital audio/video, wherein said method comprises: a step for receiving second frames comprising data representative of a time-slice, the second frames being transmitted over a wireless local area network, and a step for extracting first service frames from said second frames.
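The encapsulation step of claims 1 and 14 amounts to wrapping each incoming DVB service frame in a second frame that carries time-slice data and a WLAN destination address. The sketch below assumes a simple dictionary representation; the field names (`delta_t_ms`, `duration_ms`) and the choice of burst parameters are illustrative only, not taken from the DVB-H specification.

```python
# Minimal sketch of the claimed gateway behaviour: each first (service) frame
# is encapsulated in a second frame carrying data representative of a
# time-slice, addressed to one or several terminals (claims 9-10).
def encapsulate(service_frames, burst_offset_ms, burst_duration_ms, dest_addr):
    second_frames = []
    for i, payload in enumerate(service_frames):
        second_frames.append({
            "dest": dest_addr,                  # unicast or multicast address
            "time_slice": {                     # data representative of the time-slice
                "delta_t_ms": burst_offset_ms,  # time until the next burst
                "duration_ms": burst_duration_ms,
            },
            "seq": i,
            "payload": payload,
        })
    return second_frames


frames = encapsulate([b"\x47abc", b"\x47def"], burst_offset_ms=500,
                     burst_duration_ms=40, dest_addr="terminal-1")
assert len(frames) == 2
assert frames[0]["time_slice"]["delta_t_ms"] == 500
assert frames[1]["payload"] == b"\x47def"
```

A receiving terminal (claims 13 and 15) would simply read `time_slice` to schedule its listening window and extract `payload` as the first service frame.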
2,400
7,940
7,940
14,295,904
2,448
A method is disclosed wherein a URL is associated with a resource, the URL being for use in accessing the resource. A smartphone is associated with a recipient, and the URL is provided to the recipient. When a request for access to the resource relying upon the URL is received, a push notification is transmitted from a server to the smartphone. When the push notification is responded to, access to the resource is allowed via the communications network in dependence upon the response.
1. A method comprising: associating a URL and a resource, the URL for accessing the resource; associating a smartphone with a recipient; providing from a first user to a recipient the URL; receiving a request for access to the resource relying upon the URL, the request received via a communication network; upon receiving the request for access to the resource, transmitting from a server to the smartphone a push notification; receiving a reply based on the push notification transmitted to the smartphone; and in dependence upon the reply, allowing access to the resource via the communications network. 2. A method according to claim 1 wherein providing from a first user to a recipient the URL comprises transmitting from a first user system the URL to the recipient via the communications network. 3. A method according to claim 1 wherein the reply comprises a reply to the push notification received from the smartphone via the communication network. 4. A method according to claim 3 wherein the smartphone is uniquely associated with the recipient. 5. A method according to claim 1 wherein the smartphone comprises an application installed thereon, the application for receiving push notifications. 6. A method according to claim 5 wherein providing a reply comprises responding from within the application, the response transmitted to a server from the smartphone. 7. A method according to claim 6 comprising: in response to receiving a request to access the URL providing a request for a user identification; receiving from a user a user identification; and transmitting the push notification to the smartphone associated with the provided user identification. 8. A method according to claim 1 wherein the URL is uniquely associated with a recipient. 9. A method according to claim 1 wherein the URL is associated with a plurality of recipients and wherein transmitting the push notification is performed for each of the associated recipients when the request for access is received. 10. 
A method according to claim 1 comprising: determining a time of the request and restricting access to the resource at some times and allowing access to the resource at other times. 11. A method according to claim 1 comprising: transmitting a push notification to the smartphone indicating access to the resource has been denied. 12. A method according to claim 1 comprising: providing a first URL for association with a resource; creating the URL, the URL for being directed to the first URL by a URL directing service. 13. A method according to claim 12 wherein the URL directing service comprises a URL shortening service. 14. A method according to claim 12 wherein the URL directing service comprises a URL security service. 15. A method according to claim 12 wherein the URL directing service comprises a cloud based file-sharing service. 16. A method according to claim 1 wherein the resource is at least one of a webpage, a second URL, and data. 17. A method according to claim 1 wherein sending from a user system the URL comprises sending the URL in one of an email, text, and tweet. 18. A method according to claim 1 wherein receiving a reply comprises receiving authentication data for authenticating a source of the reply. 19. A method according to claim 5 comprising: from within the application, receiving authentication data provided by a user; and wherein providing a reply comprises transmitting a response to a server from the smartphone based on the authentication data. 20. A method according to claim 1 wherein transmitting a reply from the smartphone comprises transmitting a certificate between the application and the server.
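Claim 1's flow can be walked through with a small server model: registering a URL against a resource and a recipient, raising a pending approval when the URL is accessed, and releasing the resource only on an approving reply. The class and method names here (`AccessServer`, `request_access`, `reply`) are hypothetical, and the push notification itself is stubbed out as a returned value.

```python
# Hypothetical sketch of the push-notification access-control flow: a request
# against a registered URL creates a pending approval tied to the recipient's
# smartphone; the resource is released only if the reply approves access.
class AccessServer:
    def __init__(self):
        self.url_to_resource = {}    # URL -> protected resource
        self.url_to_recipient = {}   # URL -> recipient (smartphone owner)
        self.pending = {}            # request id -> URL awaiting a reply

    def register(self, url, resource, recipient):
        self.url_to_resource[url] = resource
        self.url_to_recipient[url] = recipient

    def request_access(self, url):
        req_id = len(self.pending)
        self.pending[req_id] = url
        # A real system would transmit a push notification to the smartphone
        # here; this sketch just reports who would be notified.
        return req_id, self.url_to_recipient[url]

    def reply(self, req_id, approved):
        url = self.pending.pop(req_id)
        return self.url_to_resource[url] if approved else None


srv = AccessServer()
srv.register("https://short.example/abc", "secret-report.pdf", "alice-phone")
req_id, phone = srv.request_access("https://short.example/abc")
assert phone == "alice-phone"
assert srv.reply(req_id, approved=True) == "secret-report.pdf"
```

A denying reply returns nothing, which corresponds to the denied-access notification of claim 11.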
2,400
7,941
7,941
15,177,201
2,488
An example method of coding video data includes coding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data.
1. A method of decoding video data, the method comprising: decoding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 2. The method of claim 1, wherein: decoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises decoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context, and decoding the one or more syntax elements related to delta QP and/or chroma QP offsets comprises decoding the one or more syntax elements related to delta QP and/or chroma QP offsets using CABAC with a context. 3. The method of claim 1 wherein the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises a palette_transpose_flag syntax element. 4. 
The method of claim 1, wherein the one or more syntax elements related to delta QP comprise one or both of a syntax element that indicates an absolute value of a difference between a QP of the current block and a predictor of the QP of the current block and a syntax element that indicates a sign of the difference between the QP of the current block and the predictor of the QP of the current block. 5. The method of claim 1, wherein the one or more syntax elements related to chroma QP offsets comprise one or both of a syntax element that indicates whether entries in one or more offset lists are added to a luma QP of the current block to determine chroma QPs for the current block and a syntax element that indicates an index of an entry in each of the one or more offset lists that are added to the luma QP for the current block to determine the chroma QPs for the current block. 6. The method of claim 1, further comprising: decoding, from the coded video bitstream, a group of syntax elements using Bypass mode, wherein the group comprises one or more of: one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette, a syntax element that indicates a number of entries in the current palette that are explicitly signalled, one or more syntax elements that each indicate a value of a component in an entry in the current palette, a syntax element that indicates whether the current block of video data includes at least one escape coded sample, a syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred, and one or more syntax elements that indicate indices in an array of current palette entries. 7. 
The method of claim 6, wherein one or more of: the one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette comprise one or more palette_predictor_run syntax elements, the syntax element that indicates a number of entries in the current palette that are explicitly signalled comprises a num_signalled_palette_entries syntax element, the one or more syntax elements that each indicate a value of a component in an entry in the current palette comprise one or more palette_entry syntax elements, the syntax element that indicates whether the current block of video data includes at least one escape coded sample comprises palette_escape_val_present_flag, the syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred comprise a num_palette_indices_idc syntax element, and the one or more syntax elements that indicate indices in an array of current palette entries comprise one or more palette_index_idc syntax elements. 8. The method of claim 6, wherein decoding the group of syntax elements comprises decoding the group of syntax elements from the coded video bitstream at a position in the coded video bitstream that is before the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data. 9. The method of claim 6, further comprising: decoding, from the coded video bitstream after the group of syntax elements coded using Bypass mode, a syntax element that indicates a last occurrence of a run type flag within the current block of video data. 10. 
The method of claim 9, wherein decoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data comprises decoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context. 11. A method of encoding video data, the method comprising: encoding, in a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; encoding, in the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and encoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 12. The method of claim 11, wherein: encoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises encoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context, and encoding the one or more syntax elements related to delta QP and/or chroma QP offsets comprises encoding the one or more syntax elements related to delta QP and/or chroma QP offsets using CABAC with a context. 13. 
The method of claim 11 wherein the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises a palette_transpose_flag syntax element. 14. The method of claim 11, wherein the one or more syntax elements related to delta QP comprise one or both of a syntax element that indicates an absolute value of a difference between a QP of the current block and a predictor of the QP of the current block and a syntax element that indicates a sign of the difference between the QP of the current block and the predictor of the QP of the current block. 15. The method of claim 11, wherein the one or more syntax elements related to chroma QP offsets comprise one or both of a syntax element that indicates whether entries in one or more offset lists are added to a luma QP of the current block to determine chroma QPs for the current block and a syntax element that indicates an index of an entry in each of the one or more offset lists that are added to the luma QP for the current block to determine the chroma QPs for the current block. 16. 
The method of claim 11, further comprising: encoding, in the coded video bitstream, a group of syntax elements using Bypass mode, wherein the group comprises one or more of: one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette, a syntax element that indicates a number of entries in the current palette that are explicitly signalled, one or more syntax elements that each indicate a value of a component in an entry in the current palette, a syntax element that indicates whether the current block of video data includes at least one escape coded sample, a syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred, and one or more syntax elements that indicate indices in an array of current palette entries. 17. The method of claim 16, wherein one or more of: the one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette comprise one or more palette_predictor_run syntax elements, the syntax element that indicates a number of entries in the current palette that are explicitly signalled comprises a num_signalled_palette_entries syntax element, the one or more syntax elements that each indicate a value of a component in an entry in the current palette comprise one or more palette_entry syntax elements, the syntax element that indicates whether the current block of video data includes at least one escape coded sample comprises palette_escape_val_present_flag, the syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred comprise a num_palette_indices_minus1 syntax element, and the one or more syntax elements that indicate indices in an array of current palette entries comprise one or more 
palette_index_idc syntax elements. 18. The method of claim 16, wherein encoding the group of syntax elements comprises encoding the group of syntax elements in the coded video bitstream at a position in the coded video bitstream that is before the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data. 19. The method of claim 16, further comprising: encoding, in the coded video bitstream after the group of syntax elements coded using Bypass mode, a syntax element that indicates a last occurrence of a run type flag within the current block of video data. 20. The method of claim 19, wherein encoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data comprises encoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context. 21. A device for encoding or decoding video data, the device comprising: a memory configured to store video data; one or more processors configured to: encode or decode, in a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; encode or decode, in the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and encode or decode the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 22. 
The device of claim 21, wherein: to encode or decode the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data, the one or more processors are configured to encode or decode the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context, and to encode or decode the one or more syntax elements related to delta QP and/or chroma QP offsets, the one or more processors are configured to encode or decode the one or more syntax elements related to delta QP and/or chroma QP offsets using CABAC with a context. 23. The device of claim 21 wherein the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises a palette_transpose_flag syntax element. 24. The device of claim 21, wherein the one or more processors are further configured to: encode or decode, in the coded video bitstream, a group of syntax elements using Bypass mode, wherein the group comprises one or more of: one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette, a syntax element that indicates a number of entries in the current palette that are explicitly signalled, one or more syntax elements that each indicate a value of a component in an entry in the current palette, a syntax element that indicates whether the current block of video data includes at least one escape coded sample, a syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred, and one or more syntax elements that indicate indices in an array of current palette entries. 25.
The device of claim 24, wherein one or more of: the one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette comprise one or more palette_predictor_run syntax elements, the syntax element that indicates a number of entries in the current palette that are explicitly signalled comprises a num_signalled_palette_entries syntax element, the one or more syntax elements that each indicate a value of a component in an entry in the current palette comprise one or more palette_entry syntax elements, the syntax element that indicates whether the current block of video data includes at least one escape coded sample comprises palette_escape_val_present_flag, the syntax element that indicates a number of entries in the current palette that are explicitly signalled or inferred comprise a num_palette_indices_minus1 syntax element, and the one or more syntax elements that indicate indices in an array of current palette entries comprise one or more palette_index_idc syntax elements. 26. The device of claim 24, wherein, to encode or decode the group of syntax elements, the one or more processors are configured to encode or decode the group of syntax elements in the coded video bitstream at a position in the coded video bitstream that is before the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data. 27. The device of claim 24, wherein the one or more processors are further configured to: encode or decode, in the coded video bitstream after the group of syntax elements coded using Bypass mode, a syntax element that indicates a last occurrence of a run type flag within the current block of video data. 28. 
The device of claim 27, wherein, to encode or decode the syntax element that indicates the last occurrence of a run type flag within the current block of video data, the one or more processors are configured to encode or decode the syntax element that indicates the last occurrence of a run type flag within the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context. 29. A device for decoding video data, the device comprising: means for decoding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; means for decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and means for decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 30. 
A computer-readable storage medium storing at least a portion of a coded video bitstream that, when processed by a video decoding device, causes one or more processors of the video decoding device to: determine whether a transpose process is applied to palette indices of a palette for a current block of video data; and decode the current block of the video data based on the palette for the current block of video data and a delta quantization parameter (QP) and one or more chroma QP offsets for the current block of video data, wherein one or more syntax elements related to the delta QP and one or more syntax elements related to the one or more chroma QP offsets for the current block of video data are located at a position in the coded video bitstream that is after a syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data.
An example method of coding video data includes coding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data.1. A method of decoding video data, the method comprising: decoding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 2. 
The method of claim 1, wherein: decoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises decoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context, and decoding the one or more syntax elements related to delta QP and/or chroma QP offsets comprises decoding the one or more syntax elements related to delta QP and/or chroma QP offsets using CABAC with a context. 3. The method of claim 1 wherein the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises a palette_transpose_flag syntax element. 4. The method of claim 1, wherein the one or more syntax elements related to delta QP comprise one or both of a syntax element that indicates an absolute value of a difference between a QP of the current block and a predictor of the QP of the current block and a syntax element that indicates a sign of the difference between the QP of the current block and the predictor of the QP of the current block. 5. The method of claim 1, wherein the one or more syntax elements related to chroma QP offsets comprise one or both of a syntax element that indicates whether entries in one or more offset lists are added to a luma QP of the current block to determine chroma QPs for the current block and a syntax element that indicates an index of an entry in each of the one or more offset lists that are added to the luma QP for the current block to determine the chroma QPs for the current block. 6. 
The method of claim 1, further comprising: decoding, from the coded video bitstream, a group of syntax elements using Bypass mode, wherein the group comprises one or more of: one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette, a syntax element that indicates a number of entries in the current palette that are explicitly signalled, one or more syntax elements that each indicate a value of a component in an entry in the current palette, a syntax element that indicates whether the current block of video data includes at least one escape coded sample, a syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred, and one or more syntax elements that indicate indices in an array of current palette entries. 7. The method of claim 6, wherein one or more of: the one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette comprise one or more palette_predictor_run syntax elements, the syntax element that indicates a number of entries in the current palette that are explicitly signalled comprises a num_signalled_palette_entries syntax element, the one or more syntax elements that each indicate a value of a component in an entry in the current palette comprise one or more palette_entry syntax elements, the syntax element that indicates whether the current block of video data includes at least one escape coded sample comprises palette_escape_val_present_flag, the syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred comprises a num_palette_indices_minus1 syntax element, and the one or more syntax elements that indicate indices in an array of current palette entries comprise one or more 
palette_index_idc syntax elements. 8. The method of claim 6, wherein decoding the group of syntax elements comprises decoding the group of syntax elements from the coded video bitstream at a position in the coded video bitstream that is before the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data. 9. The method of claim 6, further comprising: decoding, from the coded video bitstream after the group of syntax elements coded using Bypass mode, a syntax element that indicates a last occurrence of a run type flag within the current block of video data. 10. The method of claim 9, wherein decoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data comprises decoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context. 11. A method of encoding video data, the method comprising: encoding, in a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; encoding, in the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and encoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 12. 
The method of claim 11, wherein: encoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises encoding the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context, and encoding the one or more syntax elements related to delta QP and/or chroma QP offsets comprises encoding the one or more syntax elements related to delta QP and/or chroma QP offsets using CABAC with a context. 13. The method of claim 11 wherein the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises a palette_transpose_flag syntax element. 14. The method of claim 11, wherein the one or more syntax elements related to delta QP comprise one or both of a syntax element that indicates an absolute value of a difference between a QP of the current block and a predictor of the QP of the current block and a syntax element that indicates a sign of the difference between the QP of the current block and the predictor of the QP of the current block. 15. The method of claim 11, wherein the one or more syntax elements related to chroma QP offsets comprise one or both of a syntax element that indicates whether entries in one or more offset lists are added to a luma QP of the current block to determine chroma QPs for the current block and a syntax element that indicates an index of an entry in each of the one or more offset lists that are added to the luma QP for the current block to determine the chroma QPs for the current block. 16. 
The method of claim 11, further comprising: encoding, in the coded video bitstream, a group of syntax elements using Bypass mode, wherein the group comprises one or more of: one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette, a syntax element that indicates a number of entries in the current palette that are explicitly signalled, one or more syntax elements that each indicate a value of a component in an entry in the current palette, a syntax element that indicates whether the current block of video data includes at least one escape coded sample, a syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred, and one or more syntax elements that indicate indices in an array of current palette entries. 17. The method of claim 16, wherein one or more of: the one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette comprise one or more palette_predictor_run syntax elements, the syntax element that indicates a number of entries in the current palette that are explicitly signalled comprises a num_signalled_palette_entries syntax element, the one or more syntax elements that each indicate a value of a component in an entry in the current palette comprise one or more palette_entry syntax elements, the syntax element that indicates whether the current block of video data includes at least one escape coded sample comprises palette_escape_val_present_flag, the syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred comprises a num_palette_indices_minus1 syntax element, and the one or more syntax elements that indicate indices in an array of current palette entries comprise one or more 
palette_index_idc syntax elements. 18. The method of claim 16, wherein encoding the group of syntax elements comprises encoding the group of syntax elements in the coded video bitstream at a position in the coded video bitstream that is before the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data. 19. The method of claim 16, further comprising: encoding, in the coded video bitstream after the group of syntax elements coded using Bypass mode, a syntax element that indicates a last occurrence of a run type flag within the current block of video data. 20. The method of claim 19, wherein encoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data comprises encoding the syntax element that indicates the last occurrence of a run type flag within the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context. 21. A device for encoding or decoding video data, the device comprising: a memory configured to store video data; one or more processors configured to: encode or decode, in a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; encode or decode, in the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and encode or decode the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 22. 
The device of claim 21, wherein: to encode or decode the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data, the one or more processors are configured to encode or decode the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context, and to encode or decode the one or more syntax elements related to delta QP and/or chroma QP offsets, the one or more processors are configured to encode or decode the one or more syntax elements related to delta QP and/or chroma QP offsets using CABAC with a context. 23. The device of claim 21, wherein the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data comprises a palette_transpose_flag syntax element. 24. The device of claim 21, wherein the one or more processors are further configured to: encode or decode, in the coded video bitstream, a group of syntax elements using Bypass mode, wherein the group comprises one or more of: one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette, a syntax element that indicates a number of entries in the current palette that are explicitly signalled, one or more syntax elements that each indicate a value of a component in an entry in the current palette, a syntax element that indicates whether the current block of video data includes at least one escape coded sample, a syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred, and one or more syntax elements that indicate indices in an array of current palette entries. 25. 
The device of claim 24, wherein one or more of: the one or more syntax elements that indicate a number of zeros that precede a non-zero entry in an array that indicates whether entries from a predictor palette are reused in the current palette comprise one or more palette_predictor_run syntax elements, the syntax element that indicates a number of entries in the current palette that are explicitly signalled comprises a num_signalled_palette_entries syntax element, the one or more syntax elements that each indicate a value of a component in an entry in the current palette comprise one or more palette_entry syntax elements, the syntax element that indicates whether the current block of video data includes at least one escape coded sample comprises palette_escape_val_present_flag, the syntax element that indicates a number of indices in the current palette that are explicitly signalled or inferred comprises a num_palette_indices_minus1 syntax element, and the one or more syntax elements that indicate indices in an array of current palette entries comprise one or more palette_index_idc syntax elements. 26. The device of claim 24, wherein, to encode or decode the group of syntax elements, the one or more processors are configured to encode or decode the group of syntax elements in the coded video bitstream at a position in the coded video bitstream that is before the syntax element that indicates whether the transpose process is applied to palette indices of the current block of video data. 27. The device of claim 24, wherein the one or more processors are further configured to: encode or decode, in the coded video bitstream after the group of syntax elements coded using Bypass mode, a syntax element that indicates a last occurrence of a run type flag within the current block of video data. 28. 
The device of claim 27, wherein, to encode or decode the syntax element that indicates the last occurrence of a run type flag within the current block of video data, the one or more processors are configured to encode or decode the syntax element that indicates the last occurrence of a run type flag within the current block of video data using context adaptive binary arithmetic coding (CABAC) with a context. 29. A device for decoding video data, the device comprising: means for decoding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; means for decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and means for decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data. 30. 
A computer-readable storage medium storing at least a portion of a coded video bitstream that, when processed by a video decoding device, causes one or more processors of the video decoding device to: determine whether a transpose process is applied to palette indices of a palette for a current block of video data; and decode the current block of the video data based on the palette for the current block of video data and a delta quantization parameter (QP) and one or more chroma QP offsets for the current block of video data, wherein one or more syntax elements related to the delta QP and one or more syntax elements related to the one or more chroma QP offsets for the current block of video data are located at a position in the coded video bitstream that is after a syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data.
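The claims above repeatedly gate decoding on whether a "transpose process is applied to palette indices" of the current block. The following toy sketch (an illustration, not the normative process from the claims; the function name and row-major layout are assumptions) shows what transposing a block's palette index map amounts to:

```python
def apply_transpose(index_map, transpose_flag):
    """Return the block's palette index map, transposed when the flag is set.

    index_map: list of rows (lists of ints), the per-sample palette indices.
    transpose_flag: models a flag such as palette_transpose_flag.
    """
    if not transpose_flag:
        # Flag clear: indices are used in their signalled scan order.
        return [row[:] for row in index_map]
    # Flag set: swap the horizontal and vertical dimensions of the index map,
    # i.e. traverse the indices in the transposed scan order.
    return [list(col) for col in zip(*index_map)]

# A 2x3 block of palette indices.
block = [[0, 1, 1],
         [2, 2, 0]]
same = apply_transpose(block, transpose_flag=False)
flipped = apply_transpose(block, transpose_flag=True)
```

Whether the flag is set only changes the orientation of the index map; the palette entries the indices point to are unchanged, which is why the delta QP and chroma QP offset syntax elements can be parsed after this flag.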
2,400
7,942
7,942
15,306,937
2,454
A system of at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers respectively having one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the hardware interfaces by a coupling device.
1-10. (canceled) 11. A system, comprising: at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers having respectively one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the respective hardware interfaces by a coupling device. 12. The system as recited in claim 11, wherein the at least two microcontrollers each have the same range of functions. 13. The system as recited in claim 11, wherein one microcontroller of the at least two microcontrollers has a range of functions that is different from that of another microcontroller of the at least two microcontrollers. 14. The system as recited in claim 11, wherein the hardware interfaces are developed as bus interfaces. 15. The system as recited in claim 11, wherein the hardware interfaces are developed as parallel or serial interfaces. 16. The system as recited in claim 11, wherein the coupling device has one or multiple circuit traces. 17. The system as recited in claim 11, wherein the at least two microcontrollers are produced like a single microcontroller. 18. The system as recited in claim 11, wherein the system is designed in such a way that outwardly it behaves like a single microcontroller. 19. A processing unit, comprising: a system including at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers having respectively one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the respective hardware interfaces by a coupling device; wherein the processing unit is designed to control an internal combustion engine. 20. 
A method of producing a system, the method comprising: producing at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers having respectively one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the respective hardware interfaces by a coupling device, the at least two microcontrollers being produced like a single microcontroller.
A system of at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers respectively having one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the hardware interfaces by a coupling device.1-10. (canceled) 11. A system, comprising: at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers having respectively one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the respective hardware interfaces by a coupling device. 12. The system as recited in claim 11, wherein the at least two microcontrollers each have the same range of functions. 13. The system as recited in claim 11, wherein one microcontroller of the at least two microcontrollers has a range of functions that is different from that of another microcontroller of the at least two microcontrollers. 14. The system as recited in claim 11, wherein the hardware interfaces are developed as bus interfaces. 15. The system as recited in claim 11, wherein the hardware interfaces are developed as parallel or serial interfaces. 16. The system as recited in claim 11, wherein the coupling device has one or multiple circuit traces. 17. The system as recited in claim 11, wherein the at least two microcontrollers are produced like a single microcontroller. 18. The system as recited in claim 11, wherein the system is designed in such a way that outwardly it behaves like a single microcontroller. 19. 
A processing unit, comprising: a system including at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers having respectively one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the respective hardware interfaces by a coupling device; wherein the processing unit is designed to control an internal combustion engine. 20. A method of producing a system, the method comprising: producing at least two microcontrollers on a common semiconductor substrate, each of the at least two microcontrollers having respectively one hardware interface, and the at least two microcontrollers being coupled in data-transmitting fashion via the respective hardware interfaces by a coupling device, the at least two microcontrollers being produced like a single microcontroller.
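The microcontroller claims above describe two on-chip microcontrollers, each with one hardware interface, joined by a coupling device so data can pass between them. A minimal behavioral model (an illustration only; class and method names are assumptions, and the coupling device stands in for the claimed circuit traces) is:

```python
class Interface:
    """One hardware interface belonging to a microcontroller."""
    def __init__(self, owner):
        self.owner = owner
        self.peer = None  # set by the coupling device

    def send(self, data):
        assert self.peer is not None, "interface not coupled"
        return self.peer.owner.receive(data)

class Microcontroller:
    def __init__(self, name):
        self.name = name
        self.interface = Interface(self)
        self.inbox = []

    def receive(self, data):
        self.inbox.append(data)
        return True

def couple(mcu_a, mcu_b):
    """The coupling device: joins the two hardware interfaces so the
    microcontrollers are coupled in data-transmitting fashion."""
    mcu_a.interface.peer = mcu_b.interface
    mcu_b.interface.peer = mcu_a.interface

a, b = Microcontroller("mcu0"), Microcontroller("mcu1")
couple(a, b)
a.interface.send(b"sensor-frame")
```

Outwardly the pair can be treated as one unit (claim 18's point): callers only interact with the coupled system, not with either microcontroller's interface directly.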
2,400
7,943
7,943
12,792,184
2,492
A method for protecting software from tampering includes steps for processing, using a computer, first compiled software stored in a computer memory to generate a cryptographic key, the first compiled software configured to perform software protection functions and defined second functions distinct from the software protection functions when executed by a computer processor, the cryptographic key consisting of a first portion of the first compiled software comprising executable code compiled from the software protection functions, encrypting a second portion of the first compiled software using the cryptographic key to produce second compiled software comprising the first portion in unencrypted form and the second portion encrypted with the cryptographic key, wherein the second portion comprises executable code compiled from the defined second functions, and storing the second compiled software in a computer memory for distribution to a client device.
1. A method for protecting software from tampering, comprising: processing, using a computer, first compiled software stored in a computer memory to generate a cryptographic key, the first compiled software configured to perform software protection functions and defined second functions distinct from the software protection functions when executed by a computer processor, and the cryptographic key consisting of a first portion of the first compiled software comprising executable code compiled from the software protection functions; encrypting a second portion of the first compiled software using the cryptographic key, to produce second compiled software comprising the first portion in unencrypted form and the second portion encrypted with the cryptographic key, wherein the second portion comprises executable code compiled from the defined second functions; and storing the second compiled software in a computer memory for distribution to a client device. 2. The method of claim 1, further comprising compiling an algorithm to provide an executable object configured for extracting the cryptographic key from the second compiled software. 3. The method of claim 2, further comprising including the executable object in the second compiled software. 4. The method of claim 2, further comprising storing the executable object at a network node and not in the second compiled object. 5. The method of claim 1, further comprising configuring the second compiled software with code for recognizing the encrypted second portion in the second compiled software. 6. The method of claim 1, further comprising generating a data map identifying location and extent of the encrypted second portion in the second compiled software. 7. The method of claim 6, further comprising serving the data map from a server to a client operating the second compiled software. 8. 
The method of claim 1, further comprising configuring the second compiled software with code for decrypting the second encrypted portion. 9. The method of claim 8, further comprising including the code for decrypting the second encrypted portion in the second compiled software. 10. A method for executing software at a client device, comprising: executing a first portion of executable software using a computer processor, to extract a decryption key from a second portion of the executable software stored in a computer memory; decrypting a third portion of the executable software using the decryption key to provide an executable third portion that is distinct from the first and second portions of the executable software; and executing the executable third portion using the computer processor to perform a processing function. 11. The method of claim 10, further comprising executing the second portion of the executable software to perform a function that protects the executable software from unauthorized use. 12. The method of claim 11, wherein the function that protects the executable software from unauthorized use determines whether the executable software is installed on an authorized client device before decrypting the third portion of the executable software. 13. The method of claim 11, wherein the function that protects the executable software from unauthorized use determines whether the client device is in use by an authorized user before decrypting the third portion of the executable software. 14. The method of claim 10, wherein the decryption key is extracted from non-contiguous data segments of the executable software by the computer processor. 15. The method of claim 14, wherein the first portion of executable software includes an algorithm for locating the non-contiguous data segments. 16. The method of claim 10, wherein the third portion of the executable software is located in non-contiguous data segments of the executable software. 17. 
The method of claim 16, wherein the executable software is configured to execute an algorithm for locating the non-contiguous data segments. 18. A computer-readable medium encoded with instructions configured to cause a computer to: execute a first portion of the instructions to extract a decryption key from a second portion of the instructions; decrypt a third portion of the instructions using the decryption key to provide an executable third portion that is distinct from the first and second portions of the instructions; and execute the executable third portion to perform a processing function. 19. The computer-readable medium of claim 18, wherein the second portion of the instructions is configured to perform a function that protects the instructions from unauthorized use. 20. The computer-readable medium of claim 19, wherein the second portion of the instructions is configured to protect the executable software from unauthorized use by determining whether the instructions are installed on an authorized client device before decrypting the third portion of the instructions.
A method for protecting software from tampering includes steps for processing, using a computer, first compiled software stored in a computer memory to generate a cryptographic key, the first compiled software configured to perform software protection functions and defined second functions distinct from the software protection functions when executed by a computer processor, the cryptographic key consisting of a first portion of the first compiled software comprising executable code compiled from the software protection functions, encrypting a second portion of the first compiled software using the cryptographic key to produce second compiled software comprising the first portion in unencrypted form and the second portion encrypted with the cryptographic key, wherein the second portion comprises executable code compiled from the defined second functions, and storing the second compiled software in a computer memory for distribution to a client device.1. A method for protecting software from tampering, comprising: processing, using a computer, first compiled software stored in a computer memory to generate a cryptographic key, the first compiled software configured to perform software protection functions and defined second functions distinct from the software protection functions when executed by a computer processor, and the cryptographic key consisting of a first portion of the first compiled software comprising executable code compiled from the software protection functions; encrypting a second portion of the first compiled software using the cryptographic key, to produce second compiled software comprising the first portion in unencrypted form and the second portion encrypted with the cryptographic key, wherein the second portion comprises executable code compiled from the defined second functions; and storing the second compiled software in a computer memory for distribution to a client device. 2. 
The method of claim 1, further comprising compiling an algorithm to provide an executable object configured for extracting the cryptographic key from the second compiled software. 3. The method of claim 2, further comprising including the executable object in the second compiled software. 4. The method of claim 2, further comprising storing the executable object at a network node and not in the second compiled object. 5. The method of claim 1, further comprising configuring the second compiled software with code for recognizing the encrypted second portion in the second compiled software. 6. The method of claim 1, further comprising generating a data map identifying location and extent of the encrypted second portion in the second compiled software. 7. The method of claim 6, further comprising serving the data map from a server to a client operating the second compiled software. 8. The method of claim 1, further comprising configuring the second compiled software with code for decrypting the second encrypted portion. 9. The method of claim 8, further comprising including the code for decrypting the second encrypted portion in the second compiled software. 10. A method for executing software at a client device, comprising: executing a first portion of executable software using a computer processor, to extract a decryption key from a second portion of the executable software stored in a computer memory; decrypting a third portion of the executable software using the decryption key to provide an executable third portion that is distinct from the first and second portions of the executable software; and executing the executable third portion using the computer processor to perform a processing function. 11. The method of claim 10, further comprising executing the second portion of the executable software to perform a function that protects the executable software from unauthorized use. 12. 
The method of claim 11, wherein the function that protects the executable software from unauthorized use determines whether the executable software is installed on an authorized client device before decrypting the third portion of the executable software. 13. The method of claim 11, wherein the function that protects the executable software from unauthorized use determines whether the client device is in use by an authorized user before decrypting the third portion of the executable software. 14. The method of claim 10, wherein the decryption key is extracted from non-contiguous data segments of the executable software by the computer processor. 15. The method of claim 14, wherein the first portion of executable software includes an algorithm for locating the non-contiguous data segments. 16. The method of claim 10, wherein the third portion of the executable software is located in non-contiguous data segments of the executable software. 17. The method of claim 16, wherein the executable software is configured to execute an algorithm for locating the non-contiguous data segments. 18. A computer-readable medium encoded with instructions configured to cause a computer to: execute a first portion of the instructions to extract a decryption key from a second portion of the instructions; decrypt a third portion of the instructions using the decryption key to provide an executable third portion that is distinct from the first and second portions of the instructions; and execute the executable third portion to perform a processing function. 19. The computer-readable medium of claim 18, wherein the second portion of the instructions is configured to perform a function that protects the instructions from unauthorized use. 20. 
The computer-readable medium of claim 19, wherein the second portion of the instructions is configured to protect the executable software from unauthorized use by determining whether the instructions are installed on an authorized client device before decrypting the third portion of the instructions.
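The flow of claims 1 and 10 — derive the key from the unencrypted protection portion, encrypt the second portion with it, then re-extract the key at the client to decrypt and run the payload — can be sketched as follows. This is a hedged illustration only: SHA-256 key derivation and a XOR keystream are stand-ins for the unspecified algorithms (the claims literally say the key "consists of" the first portion; hashing it to a fixed length is a simplification), and the portion contents are hypothetical.

```python
import hashlib

def xor(data: bytes, key: bytes) -> bytes:
    # Symmetric XOR keystream; stands in for the unspecified cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_protected(protection_code: bytes, payload_code: bytes) -> bytes:
    # Build-side (claim 1): the key is taken from the compiled protection
    # functions themselves, so tampering with them destroys the key.
    key = hashlib.sha256(protection_code).digest()
    return protection_code + xor(payload_code, key)

def run_protected(blob: bytes, split: int) -> bytes:
    # Client-side (claim 10): read the first portion, re-derive the key
    # from it, and decrypt the third portion before executing it.
    protection_code, encrypted = blob[:split], blob[split:]
    key = hashlib.sha256(protection_code).digest()
    return xor(encrypted, key)

guard = b"CHECK_LICENSE_STUB"   # hypothetical protection portion
payload = b"print('payload')"   # hypothetical protected functions
blob = build_protected(guard, payload)
assert blob[len(guard):] != payload           # payload not stored in the clear
assert run_protected(blob, len(guard)) == payload
```

The design choice worth noting: because decryption only succeeds when the protection code is byte-identical to what was shipped, patching out the license check (claim 20's authorization test) silently corrupts the key and the payload never decrypts.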
2,400
7,944
7,944
14,414,436
2,467
Described is a method performed by a downloadable agent, the method comprising: collecting WAN performance information, wherein the downloadable agent is executable on a computing device coupled to a LAN of a broadband subscriber, wherein the LAN is coupled by another device to a WAN; and transmitting the WAN performance information to a machine; wherein the machine is operable to: store and analyze the performance information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and its service provider. Described is a corresponding system which comprises a database; and a server coupled to the database, the server operable to: receive WAN performance information from a downloadable agent; store the information in the database, analyze the information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and the broadband subscriber's service provider.
1. A method performed by a downloadable agent, the method comprising: collecting WAN performance information, wherein the downloadable agent is executable on a computing device coupled to a LAN of a broadband subscriber, wherein the LAN is coupled by another device to a WAN; and transmitting the WAN performance information to a machine, wherein the machine is operable to: store the WAN performance information in a database associated with the machine, analyze the WAN performance information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and the broadband subscriber's service provider. 2. The method of claim 1, wherein the other device is a router. 3. The method of claim 1, wherein the machine is operable to store the WAN performance information with an associated timestamp. 4. The method of claim 1 wherein the downloadable agent is operable to collect LAN performance data from at least one of the computing device and other device coupled to the LAN. 5. The method of claim 4 further comprises transmitting by the downloadable agent the LAN performance data to the machine. 6. The method of claim 1, wherein the downloadable agent is executable in a virtual machine on the computing device. 7. (canceled) 8. The method of claim 1 further comprises receiving the analysis result, wherein receiving the analysis result comprises at least one of: receiving statistical analysis including throughput; receiving availability of higher bandwidth for operating a DSL service; receiving service product information for improving DSL service performance; or receiving utilization information for optimizing a consumers DSL service cost. 9. 
The method of claim 1, wherein the WAN performance information includes at least one of: topological information, geographical information, throughput, latency, jitter, packet loss, time, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, user's provisioned WAN service, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 10. The method of claim 1 further comprises: sending an on-demand change request associated with at least one of: throughput, or latency. 11. The method of claim 1, wherein the machine is a server that resides in a cloud. 12. The method of claim 1, wherein the computing device is one of: tablet computing device; a personal computer; a gaming console; an access point (AP); a base station; a wireless smartphone device; a wireless LAN device; an access gateway; a router; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; a cable CPE modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless Wi-Fi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet 
connected network bridge; an Ethernet connected network switch; wearable device; and internet enabled cameras. 13. The method of claim 1, wherein the downloadable agent is executable on an Internet browser. 14. The method of claim 1, wherein the downloadable agent is accessible remotely via the Internet. 15. The method of claim 1 further comprises periodically sending collected WAN performance information to the machine. 16. The method of claim 1 further comprises waiting for a predetermined condition or threshold to be satisfied before sending collected WAN performance information to the machine. 17. The method of claim 16, where the predetermined condition or threshold is at least one of: a function of a type of data collected, or limit or threshold on a performance level associated with the collected data. 18. The method of claim 1, wherein the machine is operable to collect WAN performance information by polling or scheduled based system. 19. The method of claim 1 further comprises collecting data from at least one of: National Weather Service; radio station; or operator. 20. (canceled) 21. A system comprising: a database; and a server coupled to the database, the server operable to: receive WAN performance information from a downloadable agent, wherein the downloadable agent is executable on a computing device coupled to a LAN of a broadband subscriber, wherein the LAN is coupled by another device to a WAN; and store the WAN performance information in the database associated with the server, analyze the WAN performance information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and the broadband subscriber's service provider. 22. The system of claim 21, wherein the server resides in a cloud. 23. The system of claim 21, wherein the server is operable to store the WAN performance information with an associated timestamp. 24. 
The system of claim 21, wherein the downloadable agent is operable to collect LAN performance data from at least one of the computing device and other device coupled to the LAN. 25. The system of claim 24, wherein the server is operable to receive from the downloadable agent the LAN performance data. 26. The system of claim 25, wherein the server comprises: a first module for collecting the WAN performance information; a second module for performing statistical analysis using the first WAN performance information; and a third module for generating instruction and commands according to the statistical analysis for at least one of the broadband subscriber, networking equipment at the broadband subscriber's premises, the service provider of the broadband subscriber and the access equipment of the service provider. 27. The system of claim 26, wherein the modules that receive the instruction and command from the third module are accessible by internet. 28. The system of claim 26, wherein the server comprises: a management interface for communicating with the downloadable agent via internet. 29. The system of claim 26, wherein the server comprises: a user interface module for providing access and for displaying information associated with the first, second, third modules. 30. The system of claim 21, wherein the server is operable to compute throughput of DSL connection by collecting current performance metrics associated with DSL service. 31. The system of claim 30, wherein the server is operable to perform throughput computation with reference to a website. 32. The system of claim 31, wherein the throughput computation comprises probing a network. 33. The system of claim 21, wherein the downloadable agent is executable in a virtual machine on the computing device. 34. The system of claim 21, wherein the downloadable agent is dynamically downloaded to the computing device. 35. 
The system of claim 21, wherein the server is operable to report the analysis result by performing at least one of: sending statistical analysis including throughput; sending availability of higher bandwidth for operating a DSL service; sending service product information for improving DSL service performance; or sending utilization information for optimizing a consumers DSL service cost. 36. The system of claim 21, wherein the WAN performance information includes at least one of: topological information, geographical information, time, throughput, latency, jitter, packet loss, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 37. The system of claim 21, wherein the server is operable to receive an on-demand change request associated with at least one of: throughput, or latency. 38. 
The system of claim 21, wherein the computing device is one of: tablet computing device; an access point (AP); a base station; a wireless smartphone device; a wireless LAN device; an access gateway; a router, a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; a cable CPE modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless Wi-Fi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; an Ethernet connected network switch; wearable device; and internet enabled cameras. 39. The system of claim 21, wherein the server is operable to provide a marketplace of ideas for the communication devices for trading bandwidth for media services. 40. The system of claim 21, wherein the server is operable to collect WAN performance information by polling or scheduled based system. 41. A method comprising: receiving first information from a first downloadable agent; receiving second information from a second downloadable agent; storing the first and second information in a database; analyzing the first and second information with reference to data already stored in the database; and reporting the analyzed first and second information to a management entity. 42. 
The method of claim 41, wherein the first and second information are time stamped. 43. The method of claim 41, wherein the first and second agents are executable on multiple computing machines. 44. The method of claim 41, wherein the first downloadable agent is communicatively coupled to a first LAN device. 45. The method of claim 44, wherein the first downloadable agent is operable to collect information from multiple computing entities coupled to the first LAN device. 46. The method of claim 44, wherein the second downloadable agent is communicatively coupled to a second LAN device. 47. The method of claim 46, wherein the second downloadable agent is operable to collect information from multiple computing entities coupled to the second LAN device, the second LAN device being different from the first LAN device. 48. The method of claim 47, wherein the first and second LAN devices comprise at least one of: an access point (AP); a base station; a wireless smartphone device; a wireless LAN device; a router; an access gateway; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; a cable CPE modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless Wi-Fi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an 
Ethernet connected network bridge; an Ethernet connected network switch; wearable device; and internet enabled cameras. 49. The method of claim 41, wherein the first and second downloadable agents execute on devices coupled to the same LAN. 50. The method of claim 41, wherein the first and second downloadable agents execute on devices coupled to distinct LANs. 51. The method of claim 50 further comprises: processing data from the distinct LANs separately to produce analyses and recommendations for each LAN, among the distinct LANs, according to measurements made by corresponding first or second downloadable agents. 52. The method of claim 50 further comprises: processing data from the distinct LANs jointly to produce analyses and recommendations for each LAN, among the distinct LANs, according to data reported from each LAN for which analyses and recommendations are being created and from other LANs different from that LAN. 53. (canceled) 54. The method of claim 41 further comprises: determining control information for a DSL operator, the control information according to the analyzed first and second information; and recommending the DSL operator with the control information to improve performance of a DSL service. 55. The method of claim 54, wherein the control information includes at least one or more of signals or commands related to: power, spectrum control, margin, data rate, latency/delay, or coding. 56. The method of claim 54, wherein the control information relates to on-demand change in performance of the DSL service. 57. The method of claim 56, wherein the on-demand change is associated with at least one of: throughput, latency, packet loss, or jitter. 58. 
The method of claim 41, wherein reporting comprises at least one of: providing statistical analysis including throughput; providing availability of higher bandwidth for operating a DSL service; providing service product information for improving DSL service performance; or providing utilization information for optimizing a consumers DSL service cost. 59. The method of claim 41, wherein receiving the first and second information is via Internet. 60. The method of claim 41, wherein the first and second information includes at least one of: topological information, geographical information, time, throughput, latency, jitter, packet loss, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 61. The method of claim 41, wherein analyzing the first information with reference to the second information comprises at least one of: performing statistical analysis including throughput; determining availability of higher bandwidth for operating a DSL service; determining service product information for improving DSL service performance; determining utilization information for optimizing a consumers DSL service cost; or grouping data in the database according to at least one of geographical location, services type, service provider, or time. 62.-87. (canceled) 88. 
A method performed by a downloadable agent on a processor, the method comprising: collecting first information related to performance of a network device associated with the downloadable agent; sending the first information to a machine, wherein the first information is stored in a database coupled to the machine, and wherein the machine is operable to: receive second information from another downloadable agent; and analyze the first and second information with reference to data already stored in the database; and receiving a report of the analyzed first and second information. 89. The method of claim 88, wherein the first and second information is time stamped. 90.-96. (canceled) 97. The method of claim 88, wherein the first and second information include at least one of: topological information, geographical information, time, throughput, latency, jitter, packet loss, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 98. The method of claim 88 further comprises: sending an on-demand change request associated with at least one of: throughput, or latency. 99. The method of claim 88, wherein receiving the report comprises at least one of: receiving statistical analysis including throughput; receiving availability of higher bandwidth for operating a DSL service; receiving service product information for improving DSL service performance; or receiving utilization information for optimizing a consumers DSL service cost. 100. 
The method of claim 88, wherein the machine is operable to: process data from distinct LANs separately to produce analyses and recommendations for each LAN, among the distinct LANs, according to measurements made by respective downloadable agents coupled to respective distinct LANs. 101. The method of claim 88, wherein the machine is operable to: process data from distinct LANs jointly to produce analyses and recommendations for each LAN, among the distinct LANs, according to data reported from each LAN for which analyses and recommendations are being created and from other LANs different from that LAN. 102. (canceled) 103. The method of claim 88, wherein the downloadable agent is executable on an Internet browser. 104. The method of claim 88, wherein the downloadable agent is accessible remotely via the Internet. 105. The method of claim 88 further comprises periodically sending collected first information to the machine. 106. The method of claim 88 further comprises waiting for a predetermined condition or threshold to be satisfied before sending the first information to the machine. 107. The method of claim 106, where the predetermined condition or threshold is at least one of: a function of a type of data collected, or limit or threshold on a performance level associated with the collected data. 108. The method of claim 88, wherein the machine is operable to collect the first information by polling or scheduled based system. 109. The method of claim 1 further comprises collecting data from at least one of: National Weather Service; radio station; or operator. 110. (canceled)
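The core flow of claims 1, 3 and 21 — an agent on a subscriber's LAN device collects WAN metrics, transmits them to a machine that stores them with a timestamp, analyzes them, and reports the result — can be sketched as below. The metric names and the in-memory list standing in for the claimed database are assumptions for illustration, not the patent's implementation.

```python
import statistics
import time

class Machine:
    """Server side: store, analyze, report (claims 1, 3, 21)."""

    def __init__(self):
        self.db = []  # stands in for the claimed database

    def receive(self, record: dict) -> None:
        # Claim 3: store the WAN performance information with an
        # associated timestamp.
        self.db.append({**record, "timestamp": time.time()})

    def analyze(self) -> dict:
        # Minimal analysis result over all stored records; a real system
        # would also group by location, service type, provider, or time
        # (cf. claim 61).
        return {
            "mean_latency_ms": statistics.mean(r["latency_ms"] for r in self.db),
            "mean_throughput_mbps": statistics.mean(r["throughput_mbps"] for r in self.db),
        }

def agent_collect() -> dict:
    # Downloadable agent on a subscriber's LAN device; these values are
    # hypothetical, where a real agent would probe the WAN link.
    return {"throughput_mbps": 42.0, "latency_ms": 18.5, "packet_loss": 0.0}

machine = Machine()
machine.receive(agent_collect())   # agent transmits to the machine
report = machine.analyze()         # reported to subscriber and/or provider
```

Keeping the database and analysis on the machine rather than the agent matches claim 11's placement of the server "in a cloud": the agent stays a thin, downloadable collector while aggregation across many subscribers happens centrally.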
Described is a method performed by a downloadable agent, the method comprising: collecting WAN performance information, wherein the downloadable agent is executable on a computing device coupled to a LAN of a broadband subscriber, wherein the LAN is coupled by another device to a WAN; and transmitting the WAN performance information to a machine; wherein the machine is operable to: store and analyze the performance information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and its service provider. Described is a corresponding system which comprises a database; and a server coupled to the database, the server operable to: receive WAN performance information from a downloadable agent; store the information in the database, analyze the information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and the broadband subscriber's service provider.1. A method performed by a downloadable agent, the method comprising: collecting WAN performance information, wherein the downloadable agent is executable on a computing device coupled to a LAN of a broadband subscriber, wherein the LAN is coupled by another device to a WAN; and transmitting the WAN performance information to a machine, wherein the machine is operable to: store the WAN performance information in a database associated with the machine, analyze the WAN performance information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and the broadband subscriber's service provider. 2. The method of claim 1, wherein the other device is a router. 3. The method of claim 1, wherein the machine is operable to store the WAN performance information with an associated timestamp. 4. The method of claim 1 wherein the downloadable agent is operable to collect LAN performance data from at least one of the computing device and other device coupled to the LAN. 5. 
The method of claim 4 further comprises transmitting by the downloadable agent the LAN performance data to the machine. 6. The method of claim 1, wherein the downloadable agent is executable in a virtual machine on the computing device. 7. (canceled) 8. The method of claim 1 further comprises receiving the analysis result, wherein receiving the analysis result comprises at least one of: receiving statistical analysis including throughput; receiving availability of higher bandwidth for operating a DSL service; receiving service product information for improving DSL service performance; or receiving utilization information for optimizing a consumers DSL service cost. 9. The method of claim 1, wherein the WAN performance information includes at least one of: topological information, geographical information, throughput, latency, jitter, packet loss, time, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, user's provisioned WAN service, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 10. The method of claim 1 further comprises: sending an on-demand change request associated with at least one of: throughput, or latency. 11. The method of claim 1, wherein the machine is a server that resides in a cloud. 12. 
The method of claim 1, wherein the computing device is one of: tablet computing device; a personal computer; a gaming console; an access point (AP); a base station; a wireless smartphone device; a wireless LAN device; an access gateway; a router; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; a cable CPE modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless Wi-Fi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; an Ethernet connected network switch; wearable device; and internet enabled cameras. 13. The method of claim 1, wherein the downloadable agent is executable on an Internet browser. 14. The method of claim 1, wherein the downloadable agent is accessible remotely via the Internet. 15. The method of claim 1 further comprises periodically sending collected WAN performance information to the machine. 16. The method of claim 1 further comprises waiting for a predetermined condition or threshold to be satisfied before sending collected WAN performance information to the machine. 17. 
The method of claim 16, where the predetermined condition or threshold is at least one of: a function of a type of data collected, or limit or threshold on a performance level associated with the collected data. 18. The method of claim 1, wherein the machine is operable to collect WAN performance information by polling or scheduled based system. 19. The method of claim 1 further comprises collecting data from at least one of: National Weather Service; radio station; or operator. 20. (canceled) 21. A system comprising: a database; and a server coupled to the database, the server operable to: receive WAN performance information from a downloadable agent, wherein the downloadable agent is executable on a computing device coupled to a LAN of a broadband subscriber, wherein the LAN is coupled by another device to a WAN; and store the WAN performance information in the database associated with the server, analyze the WAN performance information to generate an analysis result; and report the analysis result to at least one of the broadband subscriber and the broadband subscriber's service provider. 22. The system of claim 21, wherein the server resides in a cloud. 23. The system of claim 21, wherein the server is operable to store the WAN performance information with an associated timestamp. 24. The system of claim 21, wherein the downloadable agent is operable to collect LAN performance data from at least one of the computing device and other device coupled to the LAN. 25. The system of claim 24, wherein the server is operable to receive from the downloadable agent the LAN performance data. 26. 
The system of claim 25, wherein the server comprises: a first module for collecting the WAN performance information; a second module for performing statistical analysis using the first WAN performance information; and a third module for generating instruction and commands according to the statistical analysis for at least one of the broadband subscriber, networking equipment at the broadband subscriber's premises, the service provider of the broadband subscriber and the access equipment of the service provider. 27. The system of claim 26, wherein the modules that receive the instruction and command from the third module are accessible by internet. 28. The system of claim 26, wherein the server comprises: a management interface for communicating with the downloadable agent via internet. 29. The system of claim 26, wherein the server comprises: a user interface module for providing access and for displaying information associated with the first, second, third modules. 30. The system of claim 21, wherein the server is operable to compute throughput of DSL connection by collecting current performance metrics associated with DSL service. 31. The system of claim 30, wherein the server is operable to perform throughput computation with reference to a website. 32. The system of claim 31, wherein the throughput computation comprises probing a network. 33. The system of claim 21, wherein the downloadable agent is executable in a virtual machine on the computing device. 34. The system of claim 21, wherein the downloadable agent is dynamically downloaded to the computing device. 35. The system of claim 21, wherein the server is operable to report the analysis result by performing at least one of: sending statistical analysis including throughput; sending availability of higher bandwidth for operating a DSL service; sending service product information for improving DSL service performance; or sending utilization information for optimizing a consumers DSL service cost. 36. 
The system of claim 21, wherein the WAN performance information includes at least one of: topological information, geographical information, time, throughput, latency, jitter, packet loss, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 37. The system of claim 21, wherein the server is operable to receive an on-demand change request associated with at least one of: throughput or latency. 38. The system of claim 21, wherein the computing device is one of: a tablet computing device; an access point (AP); a base station; a wireless smartphone device; a wireless LAN device; an access gateway; a router; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; a cable CPE modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless Wi-Fi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; an Ethernet connected network switch; a wearable device; and an internet-enabled camera. 39. 
The system of claim 21, wherein the server is operable to provide a marketplace of ideas for the communication devices for trading bandwidth for media services. 40. The system of claim 21, wherein the server is operable to collect WAN performance information by polling or a schedule-based system. 41. A method comprising: receiving first information from a first downloadable agent; receiving second information from a second downloadable agent; storing the first and second information in a database; analyzing the first and second information with reference to data already stored in the database; and reporting the analyzed first and second information to a management entity. 42. The method of claim 41, wherein the first and second information are time stamped. 43. The method of claim 41, wherein the first and second agents are executable on multiple computing machines. 44. The method of claim 41, wherein the first downloadable agent is communicatively coupled to a first LAN device. 45. The method of claim 44, wherein the first downloadable agent is operable to collect information from multiple computing entities coupled to the first LAN device. 46. The method of claim 44, wherein the second downloadable agent is communicatively coupled to a second LAN device. 47. The method of claim 46, wherein the second downloadable agent is operable to collect information from multiple computing entities coupled to the second LAN device, the second LAN device being different from the first LAN device. 48. 
The method of claim 47, wherein the first and second LAN devices comprise at least one of: an access point (AP); a base station; a wireless smartphone device; a wireless LAN device; a router; an access gateway; a performance enhancement device; a Digital Subscriber Line (DSL) Customer Premises Equipment (CPE) modem; a cable CPE modem; an in-home powerline device; a Home Phoneline Network Alliance (HPNA) based device; an in-home coax distribution device; a G.hn (Global Home Networking Standard) compatible device; an in-home metering communication device; an in-home appliance communicatively interfaced with the LAN; a wireless femtocell base station; a wireless Wi-Fi compatible base station; a wireless mobile device repeater; a wireless mobile device base station; nodes within an ad-hoc/mesh network; a set-top box (STB)/set-top unit (STU) customer electronics device; an Internet Protocol (IP) enabled television; an IP enabled media player; an IP enabled gaming console; an Ethernet gateway; a computing device connected to the LAN; an Ethernet connected computer peripheral device; an Ethernet connected router; an Ethernet connected wireless bridge; an Ethernet connected network bridge; an Ethernet connected network switch; a wearable device; and an internet-enabled camera. 49. The method of claim 41, wherein the first and second downloadable agents execute on devices coupled to the same LAN. 50. The method of claim 41, wherein the first and second downloadable agents execute on devices coupled to distinct LANs. 51. The method of claim 50 further comprises: processing data from the distinct LANs separately to produce analyses and recommendations for each LAN, among the distinct LANs, according to measurements made by corresponding first or second downloadable agents. 52. 
The method of claim 50 further comprises: processing data from the distinct LANs jointly to produce analyses and recommendations for each LAN, among the distinct LANs, according to data reported from each LAN for which analyses and recommendations are being created and from other LANs different from that LAN. 53. (canceled) 54. The method of claim 41 further comprises: determining control information for a DSL operator, the control information according to the analyzed first and second information; and recommending the control information to the DSL operator to improve performance of a DSL service. 55. The method of claim 54, wherein the control information includes at least one or more of signals or commands related to: power, spectrum control, margin, data rate, latency/delay, or coding. 56. The method of claim 54, wherein the control information relates to an on-demand change in performance of the DSL service. 57. The method of claim 56, wherein the on-demand change is associated with at least one of: throughput, latency, packet loss, or jitter. 58. The method of claim 41, wherein reporting comprises at least one of: providing statistical analysis including throughput; providing availability of higher bandwidth for operating a DSL service; providing service product information for improving DSL service performance; or providing utilization information for optimizing a consumer's DSL service cost. 59. The method of claim 41, wherein receiving the first and second information is via the Internet. 60. 
The method of claim 41, wherein the first and second information includes at least one of: topological information, geographical information, time, throughput, latency, jitter, packet loss, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 61. The method of claim 41, wherein analyzing the first information with reference to the second information comprises at least one of: performing statistical analysis including throughput; determining availability of higher bandwidth for operating a DSL service; determining service product information for improving DSL service performance; determining utilization information for optimizing a consumer's DSL service cost; or grouping data in the database according to at least one of geographical location, service type, service provider, or time. 62.-87. (canceled) 88. A method performed by a downloadable agent on a processor, the method comprising: collecting first information related to performance of a network device associated with the downloadable agent; sending the first information to a machine, wherein the first information is stored in a database coupled to the machine, and wherein the machine is operable to: receive second information from another downloadable agent; and analyze the first and second information with reference to data already stored in the database; and receiving a report of the analyzed first and second information. 89. The method of claim 88, wherein the first and second information is time stamped. 90.-96. (canceled) 97. 
The method of claim 88, wherein the first and second information include at least one of: topological information, geographical information, time, throughput, latency, jitter, packet loss, type of communication device, device network identification, manufacturer and model of equipment, equipment characteristics, firmware, user's network usage pattern, RF characteristics including at least one of: signal power, frequency bands and mode of operation, environment statistics, or data on operation of communication devices. 98. The method of claim 88 further comprises: sending an on-demand change request associated with at least one of: throughput or latency. 99. The method of claim 88, wherein receiving the report comprises at least one of: receiving statistical analysis including throughput; receiving availability of higher bandwidth for operating a DSL service; receiving service product information for improving DSL service performance; or receiving utilization information for optimizing a consumer's DSL service cost. 100. The method of claim 88, wherein the machine is operable to: process data from distinct LANs separately to produce analyses and recommendations for each LAN, among the distinct LANs, according to measurements made by respective downloadable agents coupled to respective distinct LANs. 101. The method of claim 88, wherein the machine is operable to: process data from distinct LANs jointly to produce analyses and recommendations for each LAN, among the distinct LANs, according to data reported from each LAN for which analyses and recommendations are being created and from other LANs different from that LAN. 102. (canceled) 103. The method of claim 88, wherein the downloadable agent is executable on an Internet browser. 104. The method of claim 88, wherein the downloadable agent is accessible remotely via the Internet. 105. The method of claim 88 further comprises periodically sending the collected first information to the machine. 106. 
The method of claim 88 further comprises waiting for a predetermined condition or threshold to be satisfied before sending the first information to the machine. 107. The method of claim 106, where the predetermined condition or threshold is at least one of: a function of a type of data collected, or a limit or threshold on a performance level associated with the collected data. 108. The method of claim 88, wherein the machine is operable to collect the first information by polling or a schedule-based system. 109. The method of claim 1 further comprises collecting data from at least one of: National Weather Service; radio station; or operator. 110. (canceled)
2,400
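The agent/server method of claims 41-61 above (receive reports from two downloadable agents, store them with timestamps, analyze the new reports against data already in the database, and report the result to a management entity) can be sketched as follows. This is a minimal illustration under invented names, not the patented implementation; the throughput field, the class name, and the choice of a mean as the "analysis" are all assumptions for the example.

```python
import statistics
import time

class PerformanceServer:
    """Minimal sketch of the claim-41 flow: receive, store (with a
    timestamp, cf. claims 23 and 42), analyze, and report.
    All identifiers here are illustrative, not from the patent."""

    def __init__(self):
        self.database = []  # stands in for the claimed database

    def receive(self, agent_id, throughput_mbps):
        # Store the report together with an associated timestamp.
        self.database.append(
            {"agent": agent_id, "throughput": throughput_mbps, "ts": time.time()}
        )

    def analyze(self):
        # Analyze information "with reference to data already stored
        # in the database" -- here, a simple running mean.
        values = [r["throughput"] for r in self.database]
        return {"mean": statistics.mean(values), "samples": len(values)}

    def report(self, management_entity):
        # Report the analysis result to a management entity (a callable here).
        management_entity(self.analyze())


server = PerformanceServer()
server.receive("agent-1", 42.0)
server.receive("agent-2", 38.0)
server.report(print)
```

A real system would of course poll many agents over the WAN and apply richer statistics (claims 55-61 list throughput, latency, jitter, and spectrum-related metrics), but the store/analyze/report skeleton is the same.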
7,945
7,945
13,728,531
2,456
Disclosed herein are systems, methods, and computer-readable storage media for authorizing third-party profile data sharing. The system receives a request to share profile data held by a first person with a second person, wherein the profile data is of a third person. The system then generates a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to the first person and/or the second person. When the common context value is above a threshold, the system permits the first person to share the profile data of the third person with the second person.
1. A method comprising: receiving a request to share profile data held by a first person with a second person, wherein the profile data is of a third person; generating, via a processor, a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to at least one of the first person and the second person; and when the common context value is above a threshold, permitting the first person to share the profile data of the third person with the second person. 2. The method of claim 1, wherein the profile data comprises at least one of a name, a nickname, a title, a phone number, an address, an e-mail address, a photograph, an avatar, a date of birth, a username, and an identification number. 3. The method of claim 1, wherein the request to share the profile data of the third person originates from one of the first person and the second person. 4. The method of claim 1, wherein the association comprises an interaction history, the interaction history being based on a communication thread. 5. The method of claim 4, wherein the communication thread comprises at least one of an e-mail message, a phone call, a face-to-face conversation, an announcement, a meeting, a conference, a project, a trip, and a social event. 6. The method of claim 1, wherein the association comprises a relational metric, the relational metric being based on at least one of a familial relationship, a social relationship, a professional relationship, a civic relationship, and a communal relationship. 7. The method of claim 1, wherein the threshold is adjusted according to public availability of the profile data to be shared. 8. 
The method of claim 1, the method further comprising: receiving the third person's permission for the first person to share the profile data with the second person; and permitting the first person to share the profile data of the third person with the second person, even when the common context value is not above the threshold. 9. A system comprising: a processor; and a computer-readable storage device storing instructions which, when executed by the processor, cause the processor to perform a method comprising: receiving a request to share profile data held by a first person with a second person, wherein the profile data is of a third person; generating a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to at least one of the first person and the second person; and when the common context value is above a threshold, permitting the first person to share the profile data of the third person with the second person. 10. The system of claim 9, wherein the profile data comprises at least one of a name, a nickname, a title, a phone number, an address, an e-mail address, a photograph, an avatar, a date of birth, a username, and an identification number. 11. The system of claim 9, wherein the association comprises an interaction history, the interaction history being based on at least one of an e-mail message, a phone call, a face-to-face conversation, an announcement, a meeting, a conference, a project, a trip, and a social event. 12. The system of claim 9, wherein the association comprises a relational metric, the relational metric being based on at least one of a familial relationship, a social relationship, a professional relationship, a civic relationship, and a communal relationship. 13. The system of claim 9, wherein the threshold is associated with public availability of the profile data to be shared. 
14. The system of claim 9, wherein the computer-readable storage device stores additional instructions which, when executed by the processor, cause the processor to perform the method further comprising: receiving the third person's permission for the first person to share the profile data with the second person; and permitting the first person to share the profile data of the third person with the second person, even when the common context value is not above the threshold. 15. A computer-readable storage device storing instructions which, when executed by a processor, cause the processor to perform a method comprising: receiving a request to share profile data held by a first person with a second person, wherein the profile data is of a third person; generating a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to at least one of the first person and the second person; and when the common context value is above a threshold, permitting the first person to share the profile data of the third person with the second person. 16. The computer-readable storage device of claim 15, wherein the profile data comprises at least one of a name, a nickname, a title, a phone number, an address, an e-mail address, a photograph, an avatar, a date of birth, a username, and an identification number. 17. The computer-readable storage device of claim 15, wherein the association comprises an interaction history, the interaction history being based on a plurality of communication threads. 18. The computer-readable storage device of claim 17, wherein the plurality of communication threads comprises at least one of an e-mail message, a phone call, a face-to-face conversation, an announcement, a meeting, a conference, a project, a trip, and a social event. 19. 
The computer-readable storage device of claim 15, wherein the association comprises a relational metric, the relational metric being based on at least one of a familial relationship, a social relationship, a professional relationship, a civic relationship, and a communal relationship. 20. The computer-readable storage device of claim 15, wherein the instructions, when executed by the processor, cause the processor to perform the method further comprising: when the common context value is not above the threshold, asking the third person for permission for the first person to share the profile data with the second person; and when the third person grants permission, permitting the first person to share the profile data of the third person with the second person.
Disclosed herein are systems, methods, and computer-readable storage media for authorizing third-party profile data sharing. The system receives a request to share profile data held by a first person with a second person, wherein the profile data is of a third person. The system then generates a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to the first person and/or the second person. When the common context value is above a threshold, the system permits the first person to share the profile data of the third person with the second person. 1. A method comprising: receiving a request to share profile data held by a first person with a second person, wherein the profile data is of a third person; generating, via a processor, a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to at least one of the first person and the second person; and when the common context value is above a threshold, permitting the first person to share the profile data of the third person with the second person. 2. The method of claim 1, wherein the profile data comprises at least one of a name, a nickname, a title, a phone number, an address, an e-mail address, a photograph, an avatar, a date of birth, a username, and an identification number. 3. The method of claim 1, wherein the request to share the profile data of the third person originates from one of the first person and the second person. 4. The method of claim 1, wherein the association comprises an interaction history, the interaction history being based on a communication thread. 5. 
The method of claim 4, wherein the communication thread comprises at least one of an e-mail message, a phone call, a face-to-face conversation, an announcement, a meeting, a conference, a project, a trip, and a social event. 6. The method of claim 1, wherein the association comprises a relational metric, the relational metric being based on at least one of a familial relationship, a social relationship, a professional relationship, a civic relationship, and a communal relationship. 7. The method of claim 1, wherein the threshold is adjusted according to public availability of the profile data to be shared. 8. The method of claim 1, the method further comprising: receiving the third person's permission for the first person to share the profile data with the second person; and permitting the first person to share the profile data of the third person with the second person, even when the common context value is not above the threshold. 9. A system comprising: a processor; and a computer-readable storage device storing instructions which, when executed by the processor, cause the processor to perform a method comprising: receiving a request to share profile data held by a first person with a second person, wherein the profile data is of a third person; generating a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to at least one of the first person and the second person; and when the common context value is above a threshold, permitting the first person to share the profile data of the third person with the second person. 10. The system of claim 9, wherein the profile data comprises at least one of a name, a nickname, a title, a phone number, an address, an e-mail address, a photograph, an avatar, a date of birth, a username, and an identification number. 11. 
The system of claim 9, wherein the association comprises an interaction history, the interaction history being based on at least one of an e-mail message, a phone call, a face-to-face conversation, an announcement, a meeting, a conference, a project, a trip, and a social event. 12. The system of claim 9, wherein the association comprises a relational metric, the relational metric being based on at least one of a familial relationship, a social relationship, a professional relationship, a civic relationship, and a communal relationship. 13. The system of claim 9, wherein the threshold is associated with public availability of the profile data to be shared. 14. The system of claim 9, wherein the computer-readable storage device stores additional instructions which, when executed by the processor, cause the processor to perform the method further comprising: receiving the third person's permission for the first person to share the profile data with the second person; and permitting the first person to share the profile data of the third person with the second person, even when the common context value is not above the threshold. 15. A computer-readable storage device storing instructions which, when executed by a processor, cause the processor to perform a method comprising: receiving a request to share profile data held by a first person with a second person, wherein the profile data is of a third person; generating a common context value based on an association between at least two of the first person, the second person, and the third person, wherein the common context value indicates how strongly the third person is connected to at least one of the first person and the second person; and when the common context value is above a threshold, permitting the first person to share the profile data of the third person with the second person. 16. 
The computer-readable storage device of claim 15, wherein the profile data comprises at least one of a name, a nickname, a title, a phone number, an address, an e-mail address, a photograph, an avatar, a date of birth, a username, and an identification number. 17. The computer-readable storage device of claim 15, wherein the association comprises an interaction history, the interaction history being based on a plurality of communication threads. 18. The computer-readable storage device of claim 17, wherein the plurality of communication threads comprises at least one of an e-mail message, a phone call, a face-to-face conversation, an announcement, a meeting, a conference, a project, a trip, and a social event. 19. The computer-readable storage device of claim 15, wherein the association comprises a relational metric, the relational metric being based on at least one of a familial relationship, a social relationship, a professional relationship, a civic relationship, and a communal relationship. 20. The computer-readable storage device of claim 15, wherein the instructions, when executed by the processor, cause the processor to perform the method further comprising: when the common context value is not above the threshold, asking the third person for permission for the first person to share the profile data with the second person; and when the third person grants permission, permitting the first person to share the profile data of the third person with the second person.
2,400
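The authorization method claimed above turns on a "common context value" compared against a threshold. The claims do not fix a scoring formula, so the sketch below assumes a simple one for illustration: count the connections that link the third person (the profile's subject) to the first or second person. The function names, the connection representation, and the default threshold are all invented for the example.

```python
def common_context_value(connections, first, second, third):
    """Hypothetical scoring: count edges in `connections` (pairs of
    names) that tie the third person to the first or second person.
    The patent leaves the actual metric open (interaction history,
    relational metrics, etc.)."""
    score = 0
    for a, b in connections:
        pair = {a, b}
        if third in pair and (first in pair or second in pair):
            score += 1
    return score

def may_share(connections, first, second, third, threshold=1):
    # Claim 1: permit sharing only when the common context value
    # is above the threshold.
    return common_context_value(connections, first, second, third) > threshold
```

For example, with `connections = [("alice", "carol"), ("bob", "carol")]`, Carol is linked to both Alice and Bob, so `may_share(connections, "alice", "bob", "carol")` permits Alice to share Carol's profile data with Bob. Claim 8's escape hatch (the third person's explicit permission overrides a failing score) would be an extra boolean check before this one.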
7,946
7,946
14,679,755
2,424
Mediacast video content detection systems and methods that analyze the image content data of mediacast source data flows that include a variety of replaceable video content segments and a variety of non-replaceable video content segments to detect one or more characteristics of the video content segments. Detection regions may be utilized to detect visual elements in the video content segments that provide information regarding one or more properties of the video content segments, such as program type, start times, end times, video content provider, title, and the like. Replacement video content segments may replace video content segments determined to be replaceable. A buffering scheme may be employed to inherently adjust asynchronicity between a broadcast or Webcast and a mediacast. Actual insertion of replacement video content segments may occur upstream of a content consumer device or at the content consumer device.
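One concrete reading of the detection-region analysis described in this abstract: sample a defined region of each frame, skip some pixels for speed, and treat a low color intensity (e.g., a fade to black) as a cue that a segment boundary may have been reached. The sketch below is an illustration of that idea only, not the patented method; the frame representation, region coordinates, stride, and intensity threshold are all assumptions.

```python
def region_intensity(frame, region, stride=2):
    """Average intensity over a defined detection region of a frame,
    ignoring some pixels via `stride`. `frame` is assumed to be a 2-D
    list of grayscale values; `region` is (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    samples = [
        frame[y][x]
        for y in range(y0, y1, stride)
        for x in range(x0, x1, stride)
    ]
    return sum(samples) / len(samples)

def looks_like_segment_boundary(frame, region, threshold=16):
    # A low color intensity in the detection region is treated as a
    # possible start or end of a (replaceable) content segment.
    return region_intensity(frame, region) < threshold
```

A production detector would combine several such cues over time (graphics in the lower third, a content-provider logo, silence in the audio track) before deciding a segment is replaceable and swapping in replacement content.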
1. A method of operation in a content insertion system, the content insertion system comprising at least one processor and at least one nontransitory processor-readable medium communicatively coupled to the at least one processor, the method comprising: receiving a mediacast source data flow which comprises a plurality of replaceable content segments interspersed with a plurality of non-replaceable content segments, the replaceable content segments consisting of one or more sections of replaceable content material which includes image content data and the non-replaceable content segments consisting of one or more sections of non-replaceable content material which includes image content data; reviewing at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the defined visual content element; in response to determining the presence of a replaceable content segment, selecting a replacement content segment from a store of replacement content segments of the content insertion system; and modifying the mediacast source data flow by replacing the replaceable content segment with the replacement content segment. 2. The method of claim 1 wherein reviewing at least a portion of the image content data comprises reviewing a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 3. The method of claim 2 wherein the defined region includes a plurality of pixels, and reviewing at least a portion of the image content data comprises ignoring at least some of the plurality of pixels in the defined region. 
4. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a content provider identifier tag in the at least a portion of the image content data. 5. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting at least one of a lower third graphic, a chyron graphic, an information ticker graphic, an edge graphic, a score graphic, or text in the at least a portion of the image content data. 6. The method of claim 1 wherein reviewing at least a portion of the image content data comprises reviewing at least a portion of the image content data to detect a temporally spaced pattern indicative of at least one of a start of at least one of the content segments or an end of at least one of the content segments. 7. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a color intensity of the at least a portion of the image content data. 8. The method of claim 7 wherein the one or more sections of replaceable content material and the one or more sections of non-replaceable content material each include audio content data, the method further comprising: reviewing at least a portion of the audio content data for a presence or absence of at least one defined auditory content element contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the defined auditory content element. 9. The method of claim 8 wherein reviewing at least a portion of the audio content data comprises reviewing at least a portion of the audio content data for a period of silence, and reviewing at least a portion of the image content data comprises detecting a low color intensity. 10. 
The method of claim 1 wherein reviewing at least a portion of the image content data comprises reviewing at least a portion of the image content data for a presence or absence of a plurality of defined visual elements contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the plurality of defined visual content elements, the plurality of defined visual content elements indicative of at least one of a start of at least one of the content segments or an end of at least one of the content segments. 11. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a presence or absence of at least one edge in the at least a portion of the image content data. 12. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a presence or absence of a continuous lower third graphic in the at least a portion of the image content data. 13. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a presence or absence of a content source provider graphic in a lower third portion of the at least a portion of the image content data. 14. The method of claim 1, further comprising: verifying whether an identified presence or absence of the defined visual content element occurs during a defined expected time period; and determining whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on the verification of the identified presence or absence of the defined visual content element. 15. 
The method of claim 1 wherein reviewing the at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element further comprises: determining a start of at least one of the content segments of the mediacast source data flow; and determining an end of at least one of the content segments of the mediacast source data flow. 16. The method of claim 1, further comprising: causing delivery of the modified mediacast source data flow over a network by at least one component of the content insertion system. 17. The method of claim 1 wherein selecting a replacement content segment comprises selecting a replacement content segment based at least in part on at least one of a geographic location of a content consumer, a browsing history of the content consumer, a buying history of the content consumer, or a piece of self-reported information provided by the content consumer. 18. The method of claim 1, further comprising: encoding the replacement content segment as content fragments; and providing the content fragments to a number of content delivery networks for retrieval of the content fragments by content consumers. 19. The method of claim 1, further comprising: reviewing the at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element to determine metadata related to at least one of the replaceable content segments or related to at least one of the non-replaceable content segments. 20. The method of claim 19 wherein reviewing the at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element to determine metadata comprises detecting a title associated with at least one of the replaceable content segments or the non-replaceable content segments. 21. 
The method of claim 1 wherein receiving a mediacast source data flow comprises receiving a mediacast source data flow which comprises a plurality of non-replaceable programming content segments interspersed with a plurality of replaceable advertising content segments. 22. A content delivery system, comprising: at least one communications port communicatively coupleable to receive a mediacast source data flow from a broadcaster or a Webcaster, the mediacast source data flow at least including a plurality of replaceable content segments comprising image content data and a plurality of non-replaceable content segments comprising image content data; at least one nontransitory processor-readable medium which stores a number of processor-executable instructions; and at least one processor communicatively coupled to the at least one communications port and communicatively coupled to the at least one nontransitory processor-readable medium to execute the processor-executable instructions, which execution causes the at least one processor to: receive the mediacast source data flow from the broadcaster or the Webcaster; for each of a number of the content segments of the mediacast source data flow, detect whether the respective content is non-replaceable or replaceable, whereby the at least one processor: reviews at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the defined visual content element; and replace each of at least some of the content segments of the mediacast source data flow identified as being replaceable with at least one replacement content 
segment. 23. The content delivery system of claim 22 wherein the at least one processor: reviews a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 24-42. (canceled) 43. A method of operation in a content type detection system, the content type detection system comprising at least one processor and at least one nontransitory processor-readable medium communicatively coupled to the at least one processor, the method comprising: receiving a broadcast source data flow which comprises a plurality of content segments of a first content type interspersed with a plurality of content segments of a second content type, the content segments of the first content type consisting of one or more sections of content material of the first content type which includes image content data and the content segments of the second content type consisting of one or more sections of content material of the second content type which includes image content data; reviewing at least a portion of the image content data of the received broadcast source data flow to detect at least one content type of the content segments; and storing content type data in the at least one nontransitory processor-readable medium of the content type detection system, the content type data indicative of the detected at least one content type of the content segments. 44. The method of claim 43 wherein reviewing at least a portion of the image content data comprises reviewing a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 45-61. (canceled) 62. 
A content type detection system, comprising: at least one communications port communicatively coupleable to receive a broadcast source data flow from a broadcaster, the broadcast source data flow at least including a plurality of first content type content segments comprising image content data and a plurality of second content type content segments comprising image content data; at least one nontransitory processor-readable medium which stores a number of processor-executable instructions; and at least one processor communicatively coupled to the at least one communications port and communicatively coupled to the at least one nontransitory processor-readable medium to execute the processor-executable instructions, which execution causes the at least one processor to: receive the broadcast source data flow from the broadcaster; for each of a number of the content segments of the broadcast source data flow, detect whether the respective content is of the first content type or the second content type, whereby the at least one processor: reviews at least a portion of the image content data of the received broadcast source data flow to detect at least one content type of the content segments; and stores content type data in the at least one nontransitory processor-readable medium of the content type detection system, the content type data indicative of the detected at least one content type of the content segments. 63. The content type detection system of claim 62 wherein the at least one processor: reviews a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 64-80. (canceled)
Mediacast video content detection systems and methods that analyze the image content data of mediacast source data flows that include a variety of replaceable video content segments and a variety of non-replaceable video content segments to detect one or more characteristics of the video content segments. Detection regions may be utilized to detect visual elements in the video content segments that provide information regarding one or more properties of the video content segments, such as program type, start times, end times, video content provider, title, and the like. Replacement video content segments may replace video content segments determined to be replaceable. A buffering scheme may be employed to inherently adjust asynchronicity between a broadcast or Webcast and a mediacast. Actual insertion of replacement video content segments may occur upstream of a content consumer device or at the content consumer device.1. A method of operation in a content insertion system, the content insertion system comprising at least one processor and at least one nontransitory processor-readable medium communicatively coupled to the at least one processor, the method comprising: receiving a mediacast source data flow which comprises a plurality of replaceable content segments interspersed with a plurality of non-replaceable content segments, the replaceable content segments consisting of one or more sections of replaceable content material which includes image content data and the non-replaceable content segments consisting of one or more sections of non-replaceable content material which includes image content data; reviewing at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to 
a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the defined visual content element; in response to determining the presence of a replaceable content segment, selecting a replacement content segment from a store of replacement content segments of the content insertion system; and modifying the mediacast source data flow by replacing the replaceable content segment with the replacement content segment. 2. The method of claim 1 wherein reviewing at least a portion of the image content data comprises reviewing a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 3. The method of claim 2 wherein the defined region includes a plurality of pixels, and reviewing at least a portion of the image content data comprises ignoring at least some of the plurality of pixels in the defined region. 4. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a content provider identifier tag in the at least a portion of the image content data. 5. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting at least one of a lower third graphic, a chyron graphic, an information ticker graphic, an edge graphic, a score graphic, or text in the at least a portion of the image content data. 6. The method of claim 1 wherein reviewing at least a portion of the image content data comprises reviewing at least a portion of the image content data to detect a temporally spaced pattern indicative of at least one of a start of at least one of the content segments or an end of at least one of the content segments. 7. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a color intensity of the at least a portion of the image content data. 8. 
The method of claim 7 wherein the one or more sections of replaceable content material and the one or more sections of non-replaceable content material each include audio content data, the method further comprising: reviewing at least a portion of the audio content data for a presence or absence of at least one defined auditory content element contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the defined auditory content element. 9. The method of claim 8 wherein reviewing at least a portion of the audio content data comprises reviewing at least a portion of the audio content data for a period of silence, and reviewing at least a portion of the image content data comprises detecting a low color intensity. 10. The method of claim 1 wherein reviewing at least a portion of the image content data comprises reviewing at least a portion of the image content data for a presence or absence of a plurality of defined visual elements contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the plurality of defined visual content elements, the plurality of defined visual content elements indicative of at least one of a start of at least one of the content segments or an end of at least one of the content segments. 11. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a presence or absence of at least one edge in the at least a portion of the image content data. 12. 
The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a presence or absence of a continuous lower third graphic in the at least a portion of the image content data. 13. The method of claim 1 wherein reviewing at least a portion of the image content data comprises detecting a presence or absence of a content source provider graphic in a lower third portion of the at least a portion of the image content data. 14. The method of claim 1, further comprising: verifying whether an identified presence or absence of the defined visual content element occurs during a defined expected time period; and determining whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on the verification of the identified presence or absence of the defined visual content element. 15. The method of claim 1 wherein reviewing the at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element further comprises: determining a start of at least one of the content segments of the mediacast source data flow; and determining an end of at least one of the content segments of the mediacast source data flow. 16. The method of claim 1, further comprising: causing delivery of the modified mediacast source data flow over a network by at least one component of the content insertion system. 17. The method of claim 1 wherein selecting a replacement content segment comprises selecting a replacement content segment based at least in part on at least one of a geographic location of a content consumer, a browsing history of the content consumer, a buying history of the content consumer, or a piece of self-reported information provided by the content consumer. 18. 
The method of claim 1, further comprising: encoding the replacement content segment as content fragments; and providing the content fragments to a number of content delivery networks for retrieval of the content fragments by content consumers. 19. The method of claim 1, further comprising: reviewing the at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element to determine metadata related to at least one of the replaceable content segments or related to at least one of the non-replaceable content segments. 20. The method of claim 19 wherein reviewing the at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element to determine metadata comprises detecting a title associated with at least one of the replaceable content segments or the non-replaceable content segments. 21. The method of claim 1 wherein receiving a mediacast source data flow comprises receiving a mediacast source data flow which comprises a plurality of non-replaceable programming content segments interspersed with a plurality of replaceable advertising content segments. 22. 
A content delivery system, comprising: at least one communications port communicatively coupleable to receive a mediacast source data flow from a broadcaster or a Webcaster, the mediacast source data flow at least including a plurality of replaceable content segments comprising image content data and a plurality of non-replaceable content segments comprising image content data; at least one nontransitory processor-readable medium which stores a number of processor-executable instructions; and at least one processor communicatively coupled to the at least one communications port and communicatively coupled to the at least one nontransitory processor-readable medium to execute the processor-executable instructions, which execution causes the at least one processor to: receive the mediacast source data flow from the broadcaster or the Webcaster; for each of a number of the content segments of the mediacast source data flow, detect whether the respective content is non-replaceable or replaceable, whereby the at least one processor: reviews at least a portion of the image content data of the received mediacast source data flow for a presence or absence of at least one defined visual content element contained within at least one of the replaceable content material or the non-replaceable content material to determine whether a portion of the received mediacast source data flow corresponds to a non-replaceable content segment or a replaceable content segment based on at least one of a presence or an absence of the defined visual content element; and replace each of at least some of the content segments of the mediacast source data flow identified as being replaceable with at least one replacement content segment. 23. The content delivery system of claim 22 wherein the at least one processor: reviews a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 24-42. (canceled) 43. 
A method of operation in a content type detection system, the content type detection system comprising at least one processor and at least one nontransitory processor-readable medium communicatively coupled to the at least one processor, the method comprising: receiving a broadcast source data flow which comprises a plurality of content segments of a first content type interspersed with a plurality of content segments of a second content type, the content segments of the first content type consisting of one or more sections of content material of the first content type which includes image content data and the content segments of the second content type consisting of one or more sections of content material of the second content type which includes image content data; reviewing at least a portion of the image content data of the received broadcast source data flow to detect at least one content type of the content segments; and storing content type data in the at least one nontransitory processor-readable medium of the content type detection system, the content type data indicative of the detected at least one content type of the content segments. 44. The method of claim 43 wherein reviewing at least a portion of the image content data comprises reviewing a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 45-61. (canceled) 62. 
A content type detection system, comprising: at least one communications port communicatively coupleable to receive a broadcast source data flow from a broadcaster, the broadcast source data flow at least including a plurality of first content type content segments comprising image content data and a plurality of second content type content segments comprising image content data; at least one nontransitory processor-readable medium which stores a number of processor-executable instructions; and at least one processor communicatively coupled to the at least one communications port and communicatively coupled to the at least one nontransitory processor-readable medium to execute the processor-executable instructions, which execution causes the at least one processor to: receive the broadcast source data flow from the broadcaster; for each of a number of the content segments of the broadcast source data flow, detect whether the respective content is of the first content type or the second content type, whereby the at least one processor: reviews at least a portion of the image content data of the received broadcast source data flow to detect at least one content type of the content segments; and stores content type data in the at least one nontransitory processor-readable medium of the content type detection system, the content type data indicative of the detected at least one content type of the content segments. 63. The content type detection system of claim 62 wherein the at least one processor: reviews a defined region of a rendering surface of the image content data, the defined region a portion of the rendering surface. 64-80. (canceled)
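Claims 2, 7, and 9 above describe reviewing a defined region of the rendering surface and detecting low color intensity (together with silent audio) as a cue for a segment boundary. A minimal sketch of that idea, assuming grayscale pixel values in 0-255 for the detection region; the function names and threshold are hypothetical illustrations, not the patented implementation:

```python
def mean_intensity(region_pixels):
    # Average grayscale value over the defined detection region,
    # a sub-rectangle of the rendering surface (claim 2).
    return sum(region_pixels) / len(region_pixels)

def is_black_frame(region_pixels, threshold=16):
    # Low color intensity (claim 9) suggests a black frame, which
    # often separates programming from replaceable ad segments.
    return mean_intensity(region_pixels) < threshold
```

In practice a detector would combine this with an audio-silence check and verify that the cue falls within an expected time window (claim 14) before marking a segment start or end.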
2,400
7,947
7,947
15,245,718
2,452
Implementations of the present disclosure are directed to a method, a system, and a computer program storage device for determining and implementing transmission buffer sizes for network connections. A computer-implemented method includes: obtaining a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determining a respective latency for each network connection; calculating a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; setting a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmitting data from the at least one server to the at least one client device using the new transmission buffer sizes.
1. A method, comprising: obtaining a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determining a respective latency for each network connection; calculating, by one or more computer processors, a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; setting a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmitting data from the at least one server to the at least one client device using the new transmission buffer size. 2. The method of claim 1, wherein obtaining the respective bandwidth requirement for each of the plurality of network connections comprises: determining a target data transfer rate for an application running on a client device associated with one of the network connections. 3. The method of claim 1, wherein obtaining the respective bandwidth requirement for each of the plurality of network connections comprises: measuring an amount of data transmitted over at least one network connection during a time period. 4. The method of claim 1, wherein determining the respective latency for each network connection comprises: determining a round-trip time for at least one network connection. 5. The method of claim 4, wherein the respective latency for the at least one network connection comprises the round-trip time divided by two. 6. The method of claim 4, wherein determining the round-trip time comprises: obtaining the round-trip time from the at least one server. 7. The method of claim 1, wherein calculating the desired transmission buffer size for each network connection comprises: determining a product of the respective bandwidth requirement and the respective latency for at least one network connection. 8. 
The method of claim 1, wherein at least one network connection comprises a transport control protocol/internet protocol (TCP/IP) connection. 9. The method of claim 1, wherein at least one network connection is connectionless. 10. The method of claim 1, further comprising: determining a respective latency for at least one network connection at a later time; and calculating a new desired transmission buffer size for the at least one network connection based on the respective latency at the later time. 11. A system, comprising: one or more computer processors to obtain a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determine a respective latency for each network connection; calculate a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; set a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmit data from the at least one server to the at least one client device using the new transmission buffer size. 12. The system of claim 11, wherein to obtain the respective bandwidth requirement for each of the plurality of network connections, the one or more computer processors are to: determine a target data transfer rate for an application running on a client device associated with one of the network connections. 13. The system of claim 11, wherein to obtain the respective bandwidth requirement for each of the plurality of network connections, the one or more computer processors are to: measure an amount of data transmitted over at least one network connection during a time period. 14. The system of claim 11, wherein to determine the respective latency for each network connection, the one or more computer processors are to: determine a round-trip time for at least one network connection. 15. 
The system of claim 14, wherein the respective latency for the at least one network connection comprises the round-trip time divided by two. 16. The system of claim 14, wherein to determine the round-trip time, the one or more computer processors are further to: obtain the round-trip time from the at least one server. 17. The system of claim 11, wherein to calculate the desired transmission buffer size for each network connection, the one or more computer processors are further to: determine a product of the respective bandwidth requirement and the respective latency for at least one network connection. 18. The system of claim 11, wherein at least one network connection comprises a transport control protocol/internet protocol (TCP/IP) connection. 19. The system of claim 11, wherein the one or more computer processors are further to: determine a respective latency for at least one network connection at a later time; and calculate a new desired transmission buffer size for the at least one network connection based on the respective latency at the later time. 20. A non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more computer processors, cause the one or more computer processors to: obtain a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determine a respective latency for each network connection; calculate a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; set a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmit data from the at least one server to the at least one client device using the new transmission buffer size.
Implementations of the present disclosure are directed to a method, a system, and a computer program storage device for determining and implementing transmission buffer sizes for network connections. A computer-implemented method includes: obtaining a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determining a respective latency for each network connection; calculating a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; setting a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmitting data from the at least one server to the at least one client device using the new transmission buffer sizes.1. A method, comprising: obtaining a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determining a respective latency for each network connection; calculating, by one or more computer processors, a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; setting a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmitting data from the at least one server to the at least one client device using the new transmission buffer size. 2. The method of claim 1, wherein obtaining the respective bandwidth requirement for each of the plurality of network connections comprises: determining a target data transfer rate for an application running on a client device associated with one of the network connections. 3. 
The method of claim 1, wherein obtaining the respective bandwidth requirement for each of the plurality of network connections comprises: measuring an amount of data transmitted over at least one network connection during a time period. 4. The method of claim 1, wherein determining the respective latency for each network connection comprises: determining a round-trip time for at least one network connection. 5. The method of claim 4, wherein the respective latency for the at least one network connection comprises the round-trip time divided by two. 6. The method of claim 4, wherein determining the round-trip time comprises: obtaining the round-trip time from the at least one server. 7. The method of claim 1, wherein calculating the desired transmission buffer size for each network connection comprises: determining a product of the respective bandwidth requirement and the respective latency for at least one network connection. 8. The method of claim 1, wherein at least one network connection comprises a transport control protocol/internet protocol (TCP/IP) connection. 9. The method of claim 1, wherein at least one network connection is connectionless. 10. The method of claim 1, further comprising: determining a respective latency for at least one network connection at a later time; and calculating a new desired transmission buffer size for the at least one network connection based on the respective latency at the later time. 11. 
A system, comprising: one or more computer processors to obtain a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determine a respective latency for each network connection; calculate a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; set a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmit data from the at least one server to the at least one client device using the new transmission buffer size. 12. The system of claim 11, wherein to obtain the respective bandwidth requirement for each of the plurality of network connections, the one or more computer processors are to: determine a target data transfer rate for an application running on a client device associated with one of the network connections. 13. The system of claim 11, wherein to obtain the respective bandwidth requirement for each of the plurality of network connections, the one or more computer processors are to: measure an amount of data transmitted over at least one network connection during a time period. 14. The system of claim 11, wherein to determine the respective latency for each network connection, the one or more computer processors are to: determine a round-trip time for at least one network connection. 15. The system of claim 14, wherein the respective latency for the at least one network connection comprises the round-trip time divided by two. 16. The system of claim 14, wherein to determine the round-trip time, the one or more computer processors are further to: obtain the round-trip time from the at least one server. 17. 
The system of claim 11, wherein to calculate the desired transmission buffer size for each network connection, the one or more computer processors are further to: determine a product of the respective bandwidth requirement and the respective latency for at least one network connection. 18. The system of claim 11, wherein at least one network connection comprises a transport control protocol/internet protocol (TCP/IP) connection. 19. The system of claim 11, wherein the one or more computer processors are further to: determine a respective latency for at least one network connection at a later time; and calculate a new desired transmission buffer size for the at least one network connection based on the respective latency at the later time. 20. A non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more computer processors, cause the one or more computer processors to: obtain a respective bandwidth requirement for each of a plurality of network connections between at least one server and at least one client device; determine a respective latency for each network connection; calculate a desired transmission buffer size for each network connection based on the respective bandwidth requirement and the respective latency for the network connection; set a new transmission buffer size for each network connection to the desired transmission buffer size for the network connection; and transmit data from the at least one server to the at least one client device using the new transmission buffer size.
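The claims above all turn on one calculation: the desired transmission buffer size is the product of a connection's bandwidth requirement and its latency, with latency taken as half the round-trip time (claims 5, 7, 15, 17). A minimal sketch of that bandwidth-delay-product sizing, using illustrative figures rather than anything measured in the source:

```python
def desired_buffer_size(bandwidth_bps: float, rtt_s: float) -> int:
    """Bandwidth-delay product sizing as the claims describe it:
    latency = round-trip time / 2, buffer = bandwidth * latency,
    rounded to whole bytes."""
    latency_s = rtt_s / 2.0  # one-way latency per claims 5 and 15
    return int(bandwidth_bps / 8 * latency_s + 0.5)

# One entry per network connection: (required bandwidth in bit/s, measured RTT in s).
# The connection names and numbers are made up for illustration.
connections = {"conn-a": (10_000_000, 0.080), "conn-b": (1_000_000, 0.200)}

new_sizes = {name: desired_buffer_size(bw, rtt)
             for name, (bw, rtt) in connections.items()}
# conn-a: 1,250,000 B/s * 0.040 s = 50,000 bytes
```

On a real TCP connection the computed size would typically be applied with `setsockopt(SOL_SOCKET, SO_SNDBUF, size)`, and claims 10 and 19 simply repeat the calculation whenever latency is re-measured.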
2,400
7,948
7,948
15,802,501
2,492
A system and method are provided for implementing platform security on a consumer electronic device having an open development platform. The device is of the type which includes an abstraction layer operable between device hardware and application software. A secured software agent is provided for embedding within the abstraction layer forming the operating system. The secured software agent is configured to limit access to the abstraction layer by blocking loadable kernel modules from loading, blocking writing to the system call table, or blocking requests to attach debug utilities to certified applications or kernel components.
1. An apparatus for increasing security of a computing device, the apparatus comprising: at least one processor; at least one non-transitory memory device storing instructions thereon which, when executed by the at least one processor, cause the at least one processor to: embed a first secured software agent within an OS kernel of the device, wherein the first secured software agent is one of plural secured software agents generated by diverse code portion combinations to thereby have the same functionality but be structurally and semantically different, and wherein the secured software agent is configured to limit access to the OS kernel to provide protection of applications and resources. 2. The apparatus of claim 1, wherein access to the OS kernel is limited by receiving a request to modify or debug functionality of the OS kernel and preventing access to the OS kernel based at least in part on a determination that the request is not a valid request. 3. The apparatus of claim 1, wherein the instructions further cause the processor to embed a second secured software agent of the plural secured software agents within an OS of at least one of a different instantiation of the device, a device of a different type than the device, a device sold in a different geographic region than the device, or a device on a different operator network than the device. 4. The apparatus of claim 1, wherein the instructions further cause the processor to: detect an attack on the first secured software agent; analyze the attack; and replace the first secured software agent with a second secured software agent that is one of the plural secured software agents, wherein the second secured software agent incorporates a new functionality designed to prevent the attack. 5. 
The apparatus of claim 1, wherein the secured software agent is configured to: insert one or more upcalls at points in the OS kernel where a user-level system call from an application would result in access to an internal OS kernel object; receive, from the OS kernel, via at least one of the one or more upcalls, a request to modify or debug functionality of the application; determine whether the request is a valid request; and limit access to the OS kernel based at least in part on a determination that the request is not a valid request. 6. A secured software agent embedded within an OS kernel of a computing device for increasing security of the computing device, the secured software agent comprising code for causing the computing device to: limit access to the OS kernel to provide protection of applications and resources; wherein the secured software agent is one of multiple other secured software agents created from diverse code portion combinations to thereby have the same functionality but be structurally and semantically different from each other. 7. The secured software agent of claim 6, wherein access to the OS kernel is limited by receiving a request to modify or debug functionality of the OS kernel and preventing access to the OS kernel based at least in part on a determination that the request is not a valid request. 8. The secured software agent of claim 6, wherein at least one of the other secured software agents is embedded in an OS of at least one of a different instantiation of the device, a device of a different type than the device, a device sold in a different geographic region than the device, or a device on a different operator network than the device. 9. 
The secured software agent of claim 6, wherein the code further causes the computing device to: detect an attack on the secured software agent; analyze the attack; and replace the secured software agent with one of the other secured software agents, wherein the one of the other secured software agents incorporates a new functionality designed to prevent the attack. 10. The secured software agent of claim 6, wherein the code further causes the computing device to: insert one or more upcalls at points in the OS kernel where a user-level system call from an application would result in access to an internal OS kernel object; receive, from the OS kernel, via at least one of the one or more upcalls, a request to modify or debug functionality of the application; determine whether the request is a valid request; and limit access to the OS kernel based at least in part on a determination that the request is not a valid request.
A system and method are provided for implementing platform security on a consumer electronic device having an open development platform. The device is of the type which includes an abstraction layer operable between device hardware and application software. A secured software agent is provided for embedding within the abstraction layer forming the operating system. The secured software agent is configured to limit access to the abstraction layer by blocking loadable kernel modules from loading, blocking writing to the system call table, or blocking requests to attach debug utilities to certified applications or kernel components.1. An apparatus for increasing security of a computing device, the apparatus comprising: at least one processor; at least one non-transitory memory device storing instructions thereon which, when executed by the at least one processor, cause the at least one processor to: embed a first secured software agent within an OS kernel of the device, wherein the first secured software agent is one of plural secured software agents generated by diverse code portion combinations to thereby have the same functionality but be structurally and semantically different, and wherein the secured software agent is configured to limit access to the OS kernel to provide protection of applications and resources. 2. The apparatus of claim 1, wherein access to the OS kernel is limited by receiving a request to modify or debug functionality of the OS kernel and preventing access to the OS kernel based at least in part on a determination that the request is not a valid request. 3. 
The apparatus of claim 1, wherein the instructions further cause the processor to embed a second secured software agent of the plural secured software agents within an OS of at least one of a different instantiation of the device, a device of a different type than the device, a device sold in a different geographic region than the device, or a device on a different operator network than the device. 4. The apparatus of claim 1, wherein the instructions further cause the processor to: detect an attack on the first secured software agent; analyze the attack; and replace the first secured software agent with a second secured software agent that is one of the plural secured software agents, wherein the second secured software agent incorporates a new functionality designed to prevent the attack. 5. The apparatus of claim 1, wherein the secured software agent is configured to: insert one or more upcalls at points in the OS kernel where a user-level system call from an application would result in access to an internal OS kernel object; receive, from the OS kernel, via at least one of the one or more upcalls, a request to modify or debug functionality of the application; determine whether the request is a valid request; and limit access to the OS kernel based at least in part on a determination that the request is not a valid request. 6. A secured software agent embedded within an OS kernel of a computing device for increasing security of the computing device, the secured software agent comprising code for causing the computing device to: limit access to the OS kernel to provide protection of applications and resources; wherein the secured software agent is one of multiple other secured software agents created from diverse code portion combinations to thereby have the same functionality but be structurally and semantically different from each other. 7. 
The secured software agent of claim 6, wherein access to the OS kernel is limited by receiving a request to modify or debug functionality of the OS kernel and preventing access to the OS kernel based at least in part on a determination that the request is not a valid request. 8. The secured software agent of claim 6, wherein at least one of the other secured software agents is embedded in an OS of at least one of a different instantiation of the device, a device of a different type than the device, a device sold in a different geographic region than the device, or a device on a different operator network than the device. 9. The secured software agent of claim 6, wherein the code further causes the computing device to: detect an attack on the secured software agent; analyze the attack; and replace the secured software agent with one of the other secured software agents, wherein the one of the other secured software agents incorporates a new functionality designed to prevent the attack. 10. The secured software agent of claim 6, wherein the code further causes the computing device to: insert one or more upcalls at points in the OS kernel where a user-level system call from an application would result in access to an internal OS kernel object; receive, from the OS kernel, via at least one of the one or more upcalls, a request to modify or debug functionality of the application; determine whether the request is a valid request; and limit access to the OS kernel based at least in part on a determination that the request is not a valid request.
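The gatekeeping step in claims 2 and 5 of this record is: intercept modify/debug requests via upcalls, check whether the request is valid, and deny kernel access otherwise. A minimal sketch of that decision logic; the class, the allow-list style of validity check, and all names are hypothetical (a real agent lives inside the OS kernel, not in user-space Python):

```python
# Illustrative only: models the claims' "determine whether the request is a
# valid request; and limit access ... based at least in part on a
# determination that the request is not a valid request".
class SecuredAgent:
    def __init__(self, certified_requesters: set):
        # Hypothetical certification list; the patent does not specify how
        # validity is decided.
        self.certified_requesters = certified_requesters

    def is_valid(self, requester: str) -> bool:
        return requester in self.certified_requesters

    def handle_upcall(self, requester: str, action: str) -> str:
        # Deny modify/debug requests from anything not certified.
        if action in {"modify", "debug"} and not self.is_valid(requester):
            return "denied"
        return "allowed"

agent = SecuredAgent(certified_requesters={"certified-debugger"})
```

Claim 4's attack response would then amount to swapping this agent instance for another of the structurally diverse variants.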
2,400
7,949
7,949
15,197,652
2,433
A method of defining distributed firewall rules in a group of datacenters is provided. Each datacenter includes a group of data compute nodes (DCNs). The method sends a set of security tags from a particular datacenter to other datacenters. The method, at each datacenter, associates a unique identifier of one or more DCNs of the datacenter to each security tag. The method associates one or more security tags to each of a set of security groups at the particular datacenter and defines a set of distributed firewall rules at the particular datacenter based on the security tags. The method sends the set of distributed firewall rules from the particular datacenter to other datacenters. The method, at each datacenter, translates the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN.
1. A method of defining distributed firewall rules in a plurality of datacenters, each datacenter comprising a plurality of data compute nodes (DCNs), the method comprising: sending a set of security tags from a particular datacenter to other datacenters in the plurality of datacenters; at each datacenter, associating a unique identifier of one or more DCNs of the datacenter to each security tag, wherein the unique identifier of each DCN is unique across the plurality of datacenters; associating one or more security tags to each of a set of security groups at the particular datacenter; defining a set of distributed firewall rules at the particular datacenter based on the security tags; sending the set of distributed firewall rules from the particular datacenter to other datacenters in the plurality of datacenters; and at each datacenter, translating the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN. 2. The method of claim 1 further comprising: defining a set of security groups from the security tags, and associating one or more security tags with each security group, wherein defining the set of distributed firewall rules at the particular datacenter based on the security tags comprises defining the set of distributed firewall rules based on the security groups associated with the security tags. 3. The method of claim 1, wherein the unique identifier of each DCN is an instance identification of the DCN that is unique across the plurality of datacenters and does not change when the DCN is moved from one datacenter to another datacenter. 4. The method of claim 1, wherein the unique identifier of each DCN is a universally unique identifier (UUID) of the DCN. 5. The method of claim 1, wherein the static address associated with each DCN comprises one of a layer 2 (L2) network address associated with the DCN and a layer 3 (L3) network address associated with the DCN. 6. 
The method of claim 5, wherein the L2 network address associated with the DCN is a media access control (MAC) address, wherein the L3 network address associated with the DCN is an Internet protocol (IP) address. 7. The method of claim 1, wherein each distributed firewall rule includes (i) a set of n-tuples for comparing with a set of attributes of a packet to determine whether a firewall rule is applicable to the packet, (ii) an action identifier that specifies the action to perform on the packet when the firewall rule is applicable to the packet, and (iii) an enforcement-node tuple, wherein for at least one distributed firewall rule the enforcement-node tuple is a DCN identified by said unique identifier of the DCN that is unique across the plurality of datacenters. 8. A non-transitory machine readable medium storing a program that when executed by at least one processing unit defines distributed firewall rules in a plurality of datacenters, each datacenter comprising a plurality of data compute nodes (DCNs), the program comprising sets of instructions for: sending a set of security tags from a particular datacenter to other datacenters in the plurality of datacenters; associating, at each datacenter, a unique identifier of one or more DCNs of the datacenter to each security tag, wherein the unique identifier of each DCN is unique across the plurality of datacenters; associating one or more security tags to each of a set of security groups at the particular datacenter; defining a set of distributed firewall rules at the particular datacenter based on the security tags; sending the set of distributed firewall rules from the particular datacenter to other datacenters in the plurality of datacenters; and translating, at each datacenter, the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN. 9. 
The non-transitory machine readable medium of claim 8, the program further comprising sets of instructions for: defining a set of security groups from the security tags, and associating one or more security tags with each security group, wherein the set of instructions for defining the set of distributed firewall rules at the particular datacenter based on the security tags comprises a set of instructions for defining the set of distributed firewall rules based on the security groups associated with the security tags. 10. The non-transitory machine readable medium of claim 8, wherein the unique identifier of each DCN is an instance identification of the DCN that is unique across the plurality of datacenters and does not change when the DCN is moved from one datacenter to another datacenter. 11. The non-transitory machine readable medium of claim 8, wherein the unique identifier of each DCN is a universally unique identifier (UUID) of the DCN. 12. The non-transitory machine readable medium of claim 8, wherein the static address associated with each DCN comprises one of a layer 2 (L2) network address associated with the DCN and a layer 3 (L3) network address associated with the DCN. 13. The non-transitory machine readable medium of claim 12, wherein the L2 network address associated with the DCN is a media access control (MAC) address, wherein the L3 network address associated with the DCN is an Internet protocol (IP) address. 14. 
The non-transitory machine readable medium of claim 8, wherein each distributed firewall rule includes (i) a set of n-tuples for comparing with a set of attributes of a packet to determine whether a firewall rule is applicable to the packet, (ii) an action identifier that specifies the action to perform on the packet when the firewall rule is applicable to the packet, and (iii) an enforcement-node tuple, wherein for at least one distributed firewall rule the enforcement-node tuple is a DCN identified by said unique identifier of the DCN that is unique across the plurality of datacenters. 15. A system comprising: a plurality of datacenters, each datacenter comprising a plurality of data compute nodes (DCNs), and a network manager server, each network manager server comprising a non-transitory machine readable medium storing a program that when executed by at least one processing unit defines distributed firewall rules, the program comprising sets of instructions for: sending a set of security tags from a particular datacenter to other datacenters in the plurality of datacenters; associating, at each datacenter, a unique identifier of one or more DCNs of the datacenter to each security tag, wherein the unique identifier of each DCN is unique across the plurality of datacenters; associating one or more security tags to each of a set of security groups at the particular datacenter; defining a set of distributed firewall rules at the particular datacenter based on the security tags; sending the set of distributed firewall rules from the particular datacenter to other datacenters in the plurality of datacenters; and translating, at each datacenter, the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN. 16. 
The system of claim 15, the program further comprising sets of instructions for: defining a set of security groups from the security tags, and associating one or more security tags with each security group, wherein the set of instructions for defining the set of distributed firewall rules at the particular datacenter based on the security tags comprises a set of instructions for defining the set of distributed firewall rules based on the security groups associated with the security tags. 17. The system of claim 15, wherein the unique identifier of each DCN is an instance identification of the DCN that is unique across the plurality of datacenters and does not change when the DCN is moved from one datacenter to another datacenter. 18. The system of claim 15, wherein the unique identifier of each DCN is a universally unique identifier (UUID) of the DCN. 19. The system of claim 15, wherein the static address associated with each DCN comprises one of a layer 2 (L2) network address associated with the DCN and a layer 3 (L3) network address associated with the DCN. 20. The system of claim 15, wherein each distributed firewall rule includes (i) a set of n-tuples for comparing with a set of attributes of a packet to determine whether a firewall rule is applicable to the packet, (ii) an action identifier that specifies the action to perform on the packet when the firewall rule is applicable to the packet, and (iii) an enforcement-node tuple, wherein for at least one distributed firewall rule the enforcement-node tuple is a DCN identified by said unique identifier of the DCN that is unique across the plurality of datacenters.
A method of defining distributed firewall rules in a group of datacenters is provided. Each datacenter includes a group of data compute nodes (DCNs). The method sends a set of security tags from a particular datacenter to other datacenters. The method, at each datacenter, associates a unique identifier of one or more DCNs of the datacenter to each security tag. The method associates one or more security tags to each of a set of security groups at the particular datacenter and defines a set of distributed firewall rules at the particular datacenter based on the security tags. The method sends the set of distributed firewall rules from the particular datacenter to other datacenters. The method, at each datacenter, translates the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN.1. A method of defining distributed firewall rules in a plurality of datacenters, each datacenter comprising a plurality of data compute nodes (DCNs), the method comprising: sending a set of security tags from a particular datacenter to other datacenters in the plurality of datacenters; at each datacenter, associating a unique identifier of one or more DCNs of the datacenter to each security tag, wherein the unique identifier of each DCN is unique across the plurality of datacenters; associating one or more security tags to each of a set of security groups at the particular datacenter; defining a set of distributed firewall rules at the particular datacenter based on the security tags; sending the set of distributed firewall rules from the particular datacenter to other datacenters in the plurality of datacenters; and at each datacenter, translating the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN. 2. 
The method of claim 1 further comprising: defining a set of security groups from the security tags, and associating one or more security tags with each security group, wherein defining the set of distributed firewall rules at the particular datacenter based on the security tags comprises defining the set of distributed firewall rules based on the security groups associated with the security tags. 3. The method of claim 1, wherein the unique identifier of each DCN is an instance identification of the DCN that is unique across the plurality of datacenters and does not change when the DCN is moved from one datacenter to another datacenter. 4. The method of claim 1, wherein the unique identifier of each DCN is a universally unique identifier (UUID) of the DCN. 5. The method of claim 1, wherein the static address associated with each DCN comprises one of a layer 2 (L2) network address associated with the DCN and a layer 3 (L3) network address associated with the DCN. 6. The method of claim 5, wherein the L2 network address associated with the DCN is a media access control (MAC) address, wherein the L3 network address associated with the DCN is an Internet protocol (IP) address. 7. The method of claim 1, wherein each distributed firewall rule includes (i) a set of n-tuples for comparing with a set of attributes of a packet to determine whether a firewall rule is applicable to the packet, (ii) an action identifier that specifies the action to perform on the packet when the firewall rule is applicable to the packet, and (iii) an enforcement-node tuple, wherein for at least one distributed firewall rule the enforcement-node tuple is a DCN identified by said unique identifier of the DCN that is unique across the plurality of datacenters. 8. 
A non-transitory machine readable medium storing a program that when executed by at least one processing unit defines distributed firewall rules in a plurality of datacenters, each datacenter comprising a plurality of data compute nodes (DCNs), the program comprising sets of instructions for: sending a set of security tags from a particular datacenter to other datacenters in the plurality of datacenters; associating, at each datacenter, a unique identifier of one or more DCNs of the datacenter to each security tag, wherein the unique identifier of each DCN is unique across the plurality of datacenters; associating one or more security tags to each of a set of security groups at the particular datacenter; defining a set of distributed firewall rules at the particular datacenter based on the security tags; sending the set of distributed firewall rules from the particular datacenter to other datacenters in the plurality of datacenters; and translating, at each datacenter, the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN. 9. The non-transitory machine readable medium of claim 8, the program further comprising sets of instructions for: defining a set of security groups from the security tags, and associating one or more security tags with each security group, wherein the set of instructions for defining the set of distributed firewall rules at the particular datacenter based on the security tags comprises a set of instructions for defining the set of distributed firewall rules based on the security groups associated with the security tags. 10. The non-transitory machine readable medium of claim 8, wherein the unique identifier of each DCN is an instance identification of the DCN that is unique across the plurality of datacenters and does not change when the DCN is moved from one datacenter to another datacenter. 11. 
The non-transitory machine readable medium of claim 8, wherein the unique identifier of each DCN is a universally unique identifier (UUID) of the DCN. 12. The non-transitory machine readable medium of claim 8, wherein the static address associated with each DCN comprises one of a layer 2 (L2) network address associated with the DCN and a layer 3 (L3) network address associated with the DCN. 13. The non-transitory machine readable medium of claim 12, wherein the L2 network address associated with the DCN is a media access control (MAC) address, wherein the L3 network address associated with the DCN is an Internet protocol (IP) address. 14. The non-transitory machine readable medium of claim 8, wherein each distributed firewall rule includes (i) a set of n-tuples for comparing with a set of attributes of a packet to determine whether a firewall rule is applicable to the packet, (ii) an action identifier that specifies the action to perform on the packet when the firewall rule is applicable to the packet, and (iii) an enforcement-node tuple, wherein for at least one distributed firewall rule the enforcement-node tuple is a DCN identified by said unique identifier of the DCN that is unique across the plurality of datacenters. 15. 
A system comprising: a plurality of datacenters, each datacenter comprising a plurality of data compute nodes (DCNs), and a network manager server, each network manager server comprising a non-transitory machine readable medium storing a program that when executed by at least one processing unit defines distributed firewall rules, the program comprising sets of instructions for: sending a set of security tags from a particular datacenter to other datacenters in the plurality of datacenters; associating, at each datacenter, a unique identifier of one or more DCNs of the datacenter to each security tag, wherein the unique identifier of each DCN is unique across the plurality of datacenters; associating one or more security tags to each of a set of security groups at the particular datacenter; defining a set of distributed firewall rules at the particular datacenter based on the security tags; sending the set of distributed firewall rules from the particular datacenter to other datacenters in the plurality of datacenters; and translating, at each datacenter, the firewall rules by mapping the unique identifier of each DCN in a distributed firewall rule to a corresponding static address associated with the DCN. 16. The system of claim 15, the program further comprising sets of instructions for: defining a set of security groups from the security tags, and associating one or more security tags with each security group, wherein the set of instructions for defining the set of distributed firewall rules at the particular datacenter based on the security tags comprises a set of instructions for defining the set of distributed firewall rules based on the security groups associated with the security tags. 17. The system of claim 15, wherein the unique identifier of each DCN is an instance identification of the DCN that is unique across the plurality of datacenters and does not change when the DCN is moved from one datacenter to another datacenter. 18. 
The system of claim 15, wherein the unique identifier of each DCN is a universally unique identifier (UUID) of the DCN. 19. The system of claim 15, wherein the static address associated with each DCN comprises one of a layer 2 (L2) network address associated with the DCN and a layer 3 (L3) network address associated with the DCN. 20. The system of claim 15, wherein each distributed firewall rule includes (i) a set of n-tuples for comparing with a set of attributes of a packet to determine whether a firewall rule is applicable to the packet, (ii) an action identifier that specifies the action to perform on the packet when the firewall rule is applicable to the packet, and (iii) an enforcement-node tuple, wherein for at least one distributed firewall rule the enforcement-node tuple is a DCN identified by said unique identifier of the DCN that is unique across the plurality of datacenters.
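The central step in this record's claims is the translation at each datacenter: every DCN in a received rule is named by a datacenter-independent unique identifier (claim 4's UUID), and the local datacenter rewrites it to the static address it knows for that DCN (claim 5's MAC or IP). A minimal sketch of that mapping; the rule shape, field names, UUIDs, and addresses are invented for illustration:

```python
# Each datacenter holds its own UUID -> static-address map and rewrites
# incoming distributed firewall rules locally, per claim 1's final step.
def translate_rules(rules, uuid_to_addr):
    """Replace DCN unique identifiers in each rule's endpoints with the
    corresponding static addresses; unknown identifiers pass through."""
    translated = []
    for rule in rules:
        translated.append({
            "src": uuid_to_addr.get(rule["src"], rule["src"]),
            "dst": uuid_to_addr.get(rule["dst"], rule["dst"]),
            "action": rule["action"],
        })
    return translated

# Hypothetical local state at one datacenter.
local_map = {"dcn-uuid-1": "10.0.0.5", "dcn-uuid-2": "10.0.1.7"}
rules = [{"src": "dcn-uuid-1", "dst": "dcn-uuid-2", "action": "allow"}]
local_rules = translate_rules(rules, local_map)
```

Because the UUID survives a DCN's move between datacenters (claim 3), only this local map changes on migration; the distributed rules themselves are never rewritten at the source.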
2,400
7,950
7,950
15,006,072
2,452
A method of defining a virtual network across a plurality of physical hosts is provided. At least two hosts utilize network virtualization software provided by two different vendors. Each host hosts a set of data compute nodes (DCNs) for one or more tenants. The method, at an agent at a host, receives a command from a network controller, the command includes (i) an identification a resource on a tenant logical network and (ii) an action to perform on the identified resource. The method, at the agent, determines the network virtualization software utilized by the host. The method, at the agent, translates the received action into a set of configuration commands compatible with the network virtualization software utilized by the host. The method sends the configuration commands to a network configuration interface on the host to perform the action on the identified resource.
1. A method of defining a virtual network across a plurality of physical hosts, at least two hosts utilizing network virtualization software provided by two different vendors, each host hosting a set of data compute nodes (DCNs) for one or more tenants, the method comprising: at an agent at a host, receiving a command from a network controller, the command comprising (i) an identification of a resource on a tenant logical network and (ii) an action to perform on the identified resource; at the agent, determining a network virtualization software utilized by the host; at the agent, translating the received action into a set of configuration commands compatible with the network virtualization software utilized by the host; sending the configuration commands to a network configuration interface on the host to perform the action on the identified resource. 2. The method of claim 1, wherein the virtual network controller is not compatible with the network virtualization software used by the plurality of the hosts. 3. The method of claim 1, wherein the agent and virtual network controller communicate through an application programming interface (API) that is independent of the network virtualization software used by each of the plurality of the hosts. 4. The method of claim 3, wherein the API is a representational state transfer (REST) API. 5. The method of claim 1, wherein the resource identified by the command is one of a logical device and a logical service associated with the tenant logical network. 6. The method of claim 1, wherein the action specified by the command is one of a create command, a read command, an update command, and a delete command. 7. The method of claim 1, wherein a DCN is one of a virtual machine and a container that runs on top of an operating system of the host. 8. 
The method of claim 1, wherein the host is a first host, wherein the set of configuration commands is a first set of configuration commands, the method further comprising: at an agent operating on a second host, receiving said command from the network controller; and at the agent on the second host, translating the action of said command into a second set of configuration commands compatible with a network virtualization software utilized by the second host; wherein the network virtualization software utilized by the second host is different than the network virtualization software utilized by the first host, wherein the second set of configuration commands is different than the first set of configuration commands. 9. A non-transitory computer readable medium storing a program for defining a virtual network across a plurality of physical hosts, at least two hosts utilizing network virtualization software provided by two different vendors, each host hosting a set of data compute nodes (DCNs) for one or more tenants, the program executable by a processing unit, the program comprising a set of instructions for: receiving, at an agent at a host, a command from a network controller, the command comprising (i) an identification of a resource on a tenant logical network and (ii) an action to perform on the identified resource; determining, at the agent, the network virtualization software utilized by the host; translating, at the agent, the received action into a set of configuration commands compatible with the network virtualization software utilized by the host; sending the configuration commands to a network configuration interface on the host to perform the action on the identified resource. 10. The non-transitory computer readable medium of claim 9, wherein the virtual network controller is not compatible with the network virtualization software used by the plurality of the hosts. 11. 
The non-transitory computer readable medium of claim 9, wherein the agent and virtual network controller communicate through an application programming interface (API) that is independent of the network virtualization software used by each of the plurality of hosts. 12. The non-transitory computer readable medium of claim 11, wherein the API is a representational state transfer (REST) API. 13. The non-transitory computer readable medium of claim 9, wherein the resource identified by the command is one of a logical device and a logical service associated with the tenant logical network. 14. The non-transitory computer readable medium of claim 9, wherein the action specified by the command is one of a create command, a read command, an update command, and a delete command. 15. The non-transitory computer readable medium of claim 9, wherein a DCN is one of a virtual machine and a container that runs on top of an operating system of the host. 16. The non-transitory computer readable medium of claim 9, wherein the host is a first host, wherein the set of configuration commands is a first set of configuration commands, the program further comprising a set of instructions for: receiving, at an agent operating on a second host, said command from the network controller; and translating, at the agent on the second host, the action of said command into a second set of configuration commands compatible with a network virtualization software utilized by the second host; wherein the network virtualization software utilized by the second host is different than the network virtualization software utilized by the first host, wherein the second set of configuration commands is different than the first set of configuration commands. 17. 
A system comprising: a set of processing units; and a non-transitory computer readable medium storing a program for defining a virtual network across a plurality of physical hosts, at least two hosts utilizing network virtualization software provided by two different vendors, each host hosting a set of data compute nodes (DCNs) for one or more tenants, the program executable by a processing unit in the set of processing units, the program comprising a set of instructions for: receiving, at an agent at a host, a command from a network controller, the command comprising (i) an identification of a resource on a tenant logical network and (ii) an action to perform on the identified resource; determining, at the agent, the network virtualization software utilized by the host; translating, at the agent, the received action into a set of configuration commands compatible with the network virtualization software utilized by the host; sending the configuration commands to a network configuration interface on the host to perform the action on the identified resource. 18. The system of claim 17, wherein the virtual network controller is not compatible with the network virtualization software used by the plurality of the hosts. 19. The system of claim 17, wherein the agent and virtual network controller communicate through an application programming interface (API) that is independent of the network virtualization software used by the plurality of the hosts. 20. The system of claim 17, wherein the API is a representational state transfer (REST) API. 21. The system of claim 17, wherein the resource identified by the command is one of a logical device and a logical service associated with the tenant logical network. 22. The system of claim 17, wherein the action specified by the command is one of a create command, a read command, an update command, and a delete command. 23. 
The system of claim 17, wherein a DCN is one of a virtual machine and a container that runs on top of an operating system of the host. 24. The system of claim 17, wherein the host is a first host, wherein the set of configuration commands is a first set of configuration commands, the program further comprising a set of instructions for: receiving, at an agent operating on a second host, said command from the network controller; and translating, at the agent on the second host, the action of said command into a second set of configuration commands compatible with a network virtualization software utilized by the second host; wherein the network virtualization software utilized by the second host is different than the network virtualization software utilized by the first host, wherein the second set of configuration commands is different than the first set of configuration commands.
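The core mechanism in these claims is an agent that receives a vendor-neutral (resource, action) command from the controller and translates it into configuration commands specific to the host's network virtualization software. A minimal sketch, in which the vendor names and command formats are entirely invented for illustration:

```python
# Illustrative sketch (not the patented implementation): the same
# controller command yields different vendor-specific config commands
# depending on the virtualization software detected on the host.

# Each translator maps a generic (resource, action) pair to that
# vendor's command syntax. Both entries here are hypothetical.
TRANSLATORS = {
    "vendorA": lambda resource, action: [f"vendorA-cli {action} {resource}"],
    "vendorB": lambda resource, action: [f"vb-config --{action} --resource={resource}"],
}

def handle_command(host_vendor, resource, action):
    """Agent-side translation: pick the translator for the host's
    virtualization software and produce its config commands."""
    translate = TRANSLATORS[host_vendor]
    return translate(resource, action)

# The same controller command produces different command sets on
# hosts running different vendors' software (claim 8 / 16 / 24).
print(handle_command("vendorA", "logical-switch-7", "create"))
print(handle_command("vendorB", "logical-switch-7", "create"))
```

The controller stays decoupled from any particular vendor; only the per-host agents carry vendor-specific knowledge, matching the claims' REST-style, software-independent controller-to-agent API.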
2,400
7,951
7,951
14,549,909
2,459
A process control system having an external data server that provides process control data to external networks via one or more firewalls implements a cost-effective security mechanism that reduces or eliminates the ability of the external data server to be compromised by viruses or other security attacks. The security mechanism includes a DMZ gateway disposed outside of the process control network that connects to an external data server located within the process control network. A configuration engine is located within the process control network and configures the external data server to publish one or more preset or pre-established data views to the DMZ gateway, which then receives the data/events/alarms as defined by the data views from the control system automatically, without performing read and write requests to the external data server. The DMZ gateway then republishes the data within the data views on an external network to make the process control data within the published data views available to one or more client applications connected to the external network. Because this security mechanism does not support client read, write, or configuration access to the external data server within the control system, this security mechanism limits the opportunity of viruses to use the structure in the DMZ gateway device to access the process control network.
1. A communication system, comprising: a process control network including a plurality of process control devices communicatively connected together; an external data server disposed within the process control network; an external communications network disposed outside of the process control network; a gateway device communicatively coupled between the external data server and the external communications network; and a configuration application stored on a computer memory within a device within the process control network, that executes on a processor within the device within the process control network to configure the external data server to publish data to the external communications network according to one or more data views, wherein each of the one or more data views defines a set of process control data to be published. 2. The communication system of claim 1, wherein the configuration application further executes on the processor within the device within the process control network to configure the external data server to include data view files specifying the data within one or more data views and to publish the data view files to the gateway device connected to the external communications network. 3. The communication system of claim 1, wherein the external data server is unable to respond to read calls from the gateway device. 4. The communication system of claim 1, wherein the external data server is unable to respond to write calls from the gateway device. 5. The communication system of claim 1, wherein the external data server is unable to respond to configuration calls from the gateway device. 6. The communication system of claim 1, wherein one of the one or more data views specifies a set of process control data generated or collected by one or more process controllers within the process control network. 7. 
The communication system of claim 1, wherein one of the one or more data views specifies process control data generated or collected by one or more field devices within the process control network. 8. The communication system of claim 1, wherein one of the one or more data views specifies process control configuration data stored in a memory of a further device within the process control network. 9. The communication system of claim 1, wherein one of the one or more data views specifies maintenance data about one or more process control devices within the process control network. 10. The communication system of claim 1, wherein the configuration application executes to configure the external data server to periodically publish data according to the one or more data views. 11. The communication system of claim 1, wherein the external data server conforms to the OPC protocol. 12. The communication system of claim 1, wherein the external data server is configured to receive and act upon configuration commands only from devices within the process control network. 13. The communication system of claim 1, wherein the configuration application is stored and executed within the external data server. 14. The communication system of claim 1, further including a data or event historian disposed within the process control network and wherein the external data server obtains some of the process control data defined by the one or more data views from the data or event historian. 15. The communication system of claim 1, wherein the gateway device includes a firewall. 16. The communication system of claim 1, wherein the gateway device is configured to republish data according to the one or more data views as received from the external data server to one or more client applications on the external communications network. 17. The communication system of claim 1, wherein the gateway device is unable to execute read or write or configuration calls to the external data server. 18. 
A communication system, comprising: a process control network including a plurality of process control devices communicatively connected together; an external data server disposed within the process control network; an external communications network disposed outside of the process control network; and a gateway device communicatively coupled between the external data server and the external communications network; wherein the external data server stores one or more data view files and executes to publish data to the gateway device according to one or more data view files, wherein each of the one or more data view files defines a set of process control data from within the process control network to be published and wherein the gateway device stores a set of further data view files defining data to be received from the external data server via publications from the external data server and the gateway device is configured to republish data to one or more client applications connected to the external communications network using the set of further data view files. 19. The communication system of claim 18, wherein the external data server periodically publishes data to the gateway device according to the one or more data view files. 20. The communication system of claim 18, further including a configuration application stored within a device within the process control network that executes to configure the external data server to store the one or more data view files. 21. The communication system of claim 20, wherein the configuration application is stored in the external data server. 22. The communication system of claim 18, wherein the gateway device stores the one or more further data view files. 23. The communication system of claim 18, wherein the external data server is configured to be unable to respond to read or write calls from the gateway device. 24. 
The communication system of claim 18, wherein the gateway device includes a firewall disposed between the external data server and the external communications network. 25. The communication system of claim 18, wherein the gateway device is configured to be unable to send read or write calls to the external data server. 26. The communication system of claim 18, wherein the external data server is configured to only respond to configuration commands from a source within the process control network. 27. The communication system of claim 18, wherein the external data server is configured to obtain data defined by the one or more data views via the process control network. 28. A method of securely providing information from a process control network to an external communications network in a system having an external data server coupled within the process control network and that is communicatively connected to a gateway device that is connected to the external communications network, comprising: storing one or more data view files in the external data server, wherein each data view file specifies a set of process control data to be regularly published to the external communications network; configuring the external data server to communicate with the gateway device using data publish signals; causing the external data server to automatically publish process control data specified by the one or more data view files to the gateway device; and preventing the external data server from responding to read, write and configuration commands from the gateway device. 29. The method of claim 28, further including causing the gateway device to republish the process control data sent to the gateway device by the external data server to one or more client applications on the external communications network. 30. 
The method of claim 29, further including storing further data view files at the gateway device defining the process control data to be received from the external data server via data publish signals and to be republished to the one or more client applications. 31. The method of claim 29, further including causing the one or more client applications to subscribe to the process control data republished by the gateway device. 32. The method of claim 28, further including storing a configuration application within a device within the process control network and using the configuration application to configure the external data server to publish the process control data specified by the one or more data view files. 33. The method of claim 32, further including configuring the gateway device to republish the process control data to one or more client applications. 34. The method of claim 32, wherein storing the configuration application includes storing the configuration application in a different device on the process control network than the external data server. 35. The method of claim 28, wherein causing the external data server to automatically publish process control data specified by the one or more data view files to the gateway device includes causing the external data server to obtain the process control data specified by the one or more data view files from the process control network and to periodically send the obtained process control data to the gateway device. 36. The method of claim 28, further including configuring the external data server to be only able to implement configuration commands received from a device within the process control network. 37. The method of claim 28, further including configuring the gateway device to be unable to implement read and write calls to the external data server in response to commands received from one or more client applications on the external communications network.
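The security model in this record is one-way: the external data server inside the process control network pushes preconfigured data views to the DMZ gateway, and the gateway has no read, write, or configuration path back into the server. A minimal sketch of that publish-only flow, with all class names, tags, and values invented for illustration:

```python
# Hypothetical sketch of the publish-only DMZ pattern described above.
# The server pushes data defined by its configured data views; the
# gateway only receives and republishes, and exposes no calls back
# into the control network.

class ExternalDataServer:
    def __init__(self, data_views, plant_data):
        self.data_views = data_views  # view name -> list of data tags
        self.plant_data = plant_data  # tag -> current process value

    def publish(self, gateway):
        """Push each configured data view to the gateway (one-way only)."""
        for view, tags in self.data_views.items():
            payload = {tag: self.plant_data[tag] for tag in tags}
            gateway.receive(view, payload)

class DMZGateway:
    """Receives published views and republishes them to external
    clients; it deliberately has no read/write/config methods that
    reach the external data server."""
    def __init__(self):
        self.republished = {}

    def receive(self, view, payload):
        self.republished[view] = payload

# Illustrative data view: two temperature tags published together.
server = ExternalDataServer({"temps": ["T100", "T101"]},
                            {"T100": 72.5, "T101": 68.0})
gateway = DMZGateway()
server.publish(gateway)
print(gateway.republished)
```

Because clients can only subscribe to what the gateway republishes, a compromised gateway cannot issue requests into the control network, which is the attack surface the claims are eliminating.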
The method of claim 29, further including storing further data view files at the gateway device defining the process control data to be received from the external data server via data publish signals and to be republished to the one or more client applications. 31. The method of claim 29, further including causing the one or more client devices to subscribe to the process control data republished by the gateway device. 32. The method of claim 28, further including storing a configuration application within a device within the process control network and using the configuration application to configure the external data server to publish the process control data specified by the one or more data view files. 33. The method of claim 32, further including configuring the gateway device to republish the process control data to one or more client applications. 34. The method of claim 32, wherein storing the configuration application includes storing the configuration application in a different device on the process control network than the external data server. 35. The method of claim 28, wherein causing the external data server to automatically publish process control data specified by the one or more data view files to the gateway device includes causing the external data server to obtain the process control data specified by the one or more data view files from the process control network and to periodically send the obtained process control data to the gateway device. 36. The method of claim 28, further including configuring the external data server to be only able to implement configuration commands received from a device within the process control network. 37. The method of claim 28, further including configuring the gateway device to be unable to implement read and write calls to the external data server in response to commands received from one or more client applications on the external communications network.
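The publish-only data flow these claims describe can be sketched in a few classes: the external data server collects tags from inside the process control network, pushes data-view payloads outward to the gateway, and refuses inbound read/write/configuration calls. This is a minimal illustration under assumed names (`DataView`, `ExternalDataServer`, `Gateway` are not from any real OPC product; a real deployment would use something like OPC UA PubSub):

```python
# Hedged sketch of the one-way publish model in the claims above.
# All class and method names are illustrative assumptions.

class DataView:
    """A data view file: names the process-control tags to be published."""
    def __init__(self, name, tags):
        self.name = name
        self.tags = tags

class Gateway:
    """Receives publications and republishes them to external clients."""
    def __init__(self):
        self.received = []
        self.subscribers = []     # callables on the external network side

    def on_publish(self, view_name, data):
        self.received.append((view_name, data))
        for client in self.subscribers:
            client(view_name, data)   # republish per further data view files

class ExternalDataServer:
    """Publishes data views outward; refuses inbound read/write/config."""
    def __init__(self, gateway, views):
        self.gateway = gateway
        self.views = views
        self.plant_data = {}      # filled only from inside the control network

    def collect(self, tag, value):
        self.plant_data[tag] = value

    def publish_all(self):        # invoked periodically by the server itself
        for view in self.views:
            payload = {t: self.plant_data.get(t) for t in view.tags}
            self.gateway.on_publish(view.name, payload)

    def handle_external_request(self, kind):
        # Mirrors claims 3-5: the server cannot respond to read, write,
        # or configuration calls arriving from the gateway side.
        raise PermissionError(f"{kind} calls from the gateway are not allowed")
```

The security property falls out of the structure: the only method that crosses the boundary is `publish_all`, initiated from inside, so the gateway (and anything beyond it) has no call path into the plant.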
2,400
7,952
7,952
15,129,135
2,474
Methods and apparatus, including computer program products, are provided for MBSFN measurements. In one aspect there is provided a method. The method may include receiving, by a user equipment, an indication of a monitoring requirement ( 202, 204 ) for at least one of an idle mode of operation at the user equipment or a connected mode of operation at the user equipment; receiving, by the user equipment, information ( 206 ) for one or more transmissions (MBMS) that are multicast or broadcast; and measuring, by the user equipment, the one or more transmissions (MBMS) that are multicast or broadcast, the measuring performed in accordance with the received information ( 206 ) and without regard to the indication of the monitoring requirement ( 202, 204 ) for at least one of the idle mode of operation or the connected mode of operation. Related apparatus, systems, methods, and articles are also described.
1-26. (canceled) 27. A method comprising: receiving, by a user equipment, an indication of a monitoring requirement for at least one of an idle mode of operation at the user equipment or a connected mode of operation at the user equipment; receiving, by the user equipment, information for one or more transmissions that are multicast or broadcast; and measuring, by the user equipment, the one or more transmissions that are multicast or broadcast, the measuring performed in accordance with the received information and without regard to the indication of the monitoring requirement for at least one of the idle mode of operation or the connected mode of operation. 28. The method of claim 27, further comprising: measuring, by the user equipment while in at least one of the idle mode or the connected mode, a radio channel in accordance with the received indication, without regard to the information for the one or more transmissions that are multicast or broadcast. 29. The method of claim 27, wherein the received information includes control channel configuration information. 30. The method of claim 29, wherein the control channel configuration information includes multimedia broadcast multicast service control channel configuration information. 31. The method of claim 29, wherein the control channel configuration information is carried by one or more system information blocks, dedicated signaling, or a combination of both. 32. The method of claim 27, wherein the monitoring requirement is independent of the information. 33. The method of claim 27, wherein the monitoring requirement includes at least one of a radio resource control configuration, discontinuous receive configuration information, or a combination of both. 34. The method of claim 27, wherein the one or more transmissions include a multicast broadcast single-frequency network. 35. The method of claim 27, wherein the measuring the one or more transmissions includes monitoring the one or more transmissions. 36. 
The method of claim 27, wherein the idle mode and the connected mode comprise a discontinuous receive mode. 37. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive, by the apparatus, an indication of a monitoring requirement for at least one of an idle mode of operation at the apparatus or a connected mode of operation at the apparatus; receive, by the apparatus, information for one or more transmissions that are multicast or broadcast; and measure, by the apparatus, the one or more transmissions that are multicast or broadcast, the measuring performed in accordance with the received information and without regard to the indication of the monitoring requirement for at least one of the idle mode of operation or the connected mode of operation. 38. The apparatus of claim 37, wherein the apparatus is further caused to at least measure, while in at least one of the idle mode or the connected mode, a radio channel in accordance with the received indication, without regard to the information for the one or more transmissions that are multicast or broadcast. 39. The apparatus of claim 37, wherein the received information includes control channel configuration information. 40. The apparatus of claim 39, wherein the control channel configuration information includes multimedia broadcast multicast service control channel configuration information. 41. The apparatus of claim 39, wherein the control channel configuration information is carried by one or more system information blocks, dedicated signaling, or a combination of both. 42. The apparatus of claim 37, wherein the monitoring requirement is independent of the information. 43. 
The apparatus of claim 37, wherein the monitoring requirement includes at least one of a radio resource control configuration, discontinuous receive configuration information, or a combination of both. 44. The apparatus of claim 37, wherein the one or more transmissions include a multicast broadcast single-frequency network. 45. The apparatus of claim 37, wherein the measuring the one or more transmissions includes monitoring the one or more transmissions. 46. The apparatus of claim 37, wherein the idle mode and the connected mode comprise a discontinuous receive mode.
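The decoupling in claims 27-28 can be stated as two independent decision rules: MBSFN measurement follows the received MBMS control-channel information only, while ordinary radio-channel measurement follows the idle/connected-mode monitoring requirement (e.g. DRX) only. A minimal sketch, assuming illustrative names (`UeMeasurementPolicy` and its fields are not from any real 3GPP protocol stack):

```python
# Hedged sketch of the measurement rule in claims 27-28 / 37-38.
# Field and class names are illustrative assumptions.

class UeMeasurementPolicy:
    def __init__(self, drx_on_duration, mcch_info):
        self.drx_on_duration = drx_on_duration  # monitoring requirement (DRX)
        self.mcch_info = mcch_info              # MBMS control-channel info

    def measure_mbsfn(self):
        # Claim 27: measure per the received information, without regard
        # to the idle/connected-mode monitoring requirement.
        return self.mcch_info is not None

    def measure_radio_channel(self):
        # Claim 28: measure per the monitoring requirement, without regard
        # to the MBMS information.
        return self.drx_on_duration
```

So a UE in a DRX sleep interval still measures MBSFN transmissions whenever MCCH configuration has been received, and vice versa, each rule ignoring the other's input.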
Methods and apparatus, including computer program products, are provided for MBSFN measurements. In one aspect there is provided a method. The method may include receiving, by a user equipment, an indication of a monitoring requirement ( 202, 204 ) for at least one of an idle mode of operation at the user equipment or a connected mode of operation at the user equipment; receiving, by the user equipment, information ( 206 ) for one or more transmissions (MBMS) that are multicast or broadcast; and measuring, by the user equipment, the one or more transmissions (MBMS) that are multicast or broadcast, the measuring performed in accordance with the received information ( 206 ) and without regard to the indication of the monitoring requirement ( 202, 204 ) for at least one of the idle mode of operation or the connected mode of operation. Related apparatus, systems, methods, and articles are also described.1-26. (canceled) 27. A method comprising: receiving, by a user equipment, an indication of a monitoring requirement for at least one of an idle mode of operation at the user equipment or a connected mode of operation at the user equipment; receiving, by the user equipment, information for one or more transmissions that are multicast or broadcast; and measuring, by the user equipment, the one or more transmissions that are multicast or broadcast, the measuring performed in accordance with the received information and without regard to the indication of the monitoring requirement for at least one of the idle mode of operation or the connected mode of operation. 28. The method of claim 27, further comprising: measuring, by the user equipment while in at least one of the idle mode or the connected mode, a radio channel in accordance with the received indication, without regard to the information for the one or more transmissions that are multicast or broadcast. 29. The method of claim 27, wherein the received information includes control channel configuration information. 
30. The method of claim 29, wherein the control channel configuration information includes multimedia broadcast multicast service control channel configuration information. 31. The method of claim 29, wherein the control channel configuration information is carried by one or more system information blocks, dedicated signaling, or a combination of both. 32. The method of claim 27, wherein the monitoring requirement is independent of the information. 33. The method of claim 27, wherein the monitoring requirement includes at least one of a radio resource control configuration, discontinuous receive configuration information, or a combination of both. 34. The method of claim 27, wherein the one or more transmissions include a multicast broadcast single-frequency network. 35. The method of claim 27, wherein the measuring the one or more transmissions includes monitoring the one or more transmissions. 36. The method of claim 27, wherein the idle mode and the connected mode comprise a discontinuous receive mode. 37. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive, by the apparatus, an indication of a monitoring requirement for at least one of an idle mode of operation at the apparatus or a connected mode of operation at the apparatus; receive, by the apparatus, information for one or more transmissions that are multicast or broadcast; and measure, by the apparatus, the one or more transmissions that are multicast or broadcast, the measuring performed in accordance with the received information and without regard to the indication of the monitoring requirement for at least one of the idle mode of operation or the connected mode of operation. 38. 
The apparatus of claim 37, wherein the apparatus is further caused to at least measure, while in at least one of the idle mode or the connected mode, a radio channel in accordance with the received indication, without regard to the information for the one or more transmissions that are multicast or broadcast. 39. The apparatus of claim 37, wherein the received information includes control channel configuration information. 40. The apparatus of claim 39, wherein the control channel configuration information includes multimedia broadcast multicast service control channel configuration information. 41. The apparatus of claim 39, wherein the control channel configuration information is carried by one or more system information blocks, dedicated signaling, or a combination of both. 42. The apparatus of claim 37, wherein the monitoring requirement is independent of the information. 43. The apparatus of claim 37, wherein the monitoring requirement includes at least one of a radio resource control configuration, discontinuous receive configuration information, or a combination of both. 44. The apparatus of claim 37, wherein the one or more transmissions include a multicast broadcast single-frequency network. 45. The apparatus of claim 37, wherein the measuring the one or more transmissions includes monitoring the one or more transmissions. 46. The apparatus of claim 37, wherein the idle mode and the connected mode comprise a discontinuous receive mode.
2,400
7,953
7,953
13,971,604
2,436
The present invention relates to methods and apparatuses for securing otherwise unsecured computer communications that addresses the above shortcomings among others. According to certain aspects, the invention relates to methods and apparatuses for implementing device snooping, in which some or all traffic passing between a host and a connected device is captured into memory and analyzed in real time by system software. According to other aspects, the invention relates to real time capture of certain types of traffic and communication of the captured traffic to a remote management system. According to still further aspects, the invention relates to detecting security threats in real time. Upon threat detection, possible actions are blocking individual devices or alerting a system administrator. According to certain additional aspects, the security functions performed by methods and apparatuses according to the invention can be logically transparent to the upstream host and to the downstream device.
1. A computer system, comprising: an interface for sending and receiving data; a host that executes an operating system and applications that generate and utilize the data sent and received via the interface; and a secure subsystem interposed between the host and the interface for transparently capturing certain of the data sent and received via the interface. 2. A computer system according to claim 1, wherein the interface comprises one of USB, SAS, SATA, Firewire (IEEE 1394), Thunderbolt, and Ethernet. 3. A computer system according to claim 1, wherein the secure subsystem includes: a snoop logic that determines the certain data; and a bridge logic module in a communication path between the interface and the host that provides the sent and received data to the snoop logic. 4. A computer system according to claim 3, wherein the snoop logic includes a trigger engine that identifies the certain data based on an event associated with the sent and received data. 5. A computer system according to claim 3, wherein the snoop logic includes a filter engine that identifies the certain data by filtering out certain of the sent and received data. 6. A computer system according to claim 4, wherein the secure subsystem includes a controller for configuring how the trigger engine identifies the certain data. 7. A computer system according to claim 5, wherein the secure subsystem includes a controller for configuring how the filter engine identifies the certain data. 8. A computer system according to claim 1, wherein the secure subsystem includes a network interface for sending the certain data to a remote management system via a network. 9. A computer system according to claim 8, further comprising a compression block for compressing the certain data before sending to the remote management system. 10. A computer system according to claim 1, wherein the secure subsystem includes a memory controller for storing the certain data. 11. 
A computer system according to claim 10, further comprising a compression block for compressing the certain data before storing. 12. A method, comprising: sending and receiving data via an interface; generating and utilizing the data sent and received via the interface by an operating system and applications executing on a host processor; and transparently capturing certain of the data sent and received via the interface by a secure subsystem interposed between the host and the interface. 13. A method according to claim 12, wherein the interface comprises one of USB, SAS, SATA, Firewire (IEEE 1394), Thunderbolt, and Ethernet. 14. A method according to claim 12, wherein transparently capturing includes: configuring the interface for snooping; analyzing the sent and received data according to a configuration; and identifying the certain data based on the analyzing. 15. A method according to claim 14, wherein identifying is performed based on an event associated with the sent and received data. 16. A method according to claim 14, wherein identifying is performed by filtering out certain of the sent and received data. 17. A method according to claim 14, further comprising obtaining the configuration from a remote management system via a network. 18. A method according to claim 12, further comprising sending the certain data to a remote management system via a network. 19. A method according to claim 18, further comprising compressing the certain data before sending to the remote management system. 20. A method according to claim 12, further comprising storing the certain data. 21. A method according to claim 20, further comprising compressing the certain data before storing. 22. A method according to claim 18, further comprising analyzing the certain data at the remote management system. 23. A method according to claim 22, wherein analyzing includes cross-correlating the certain data with data obtained from a plurality of different computer systems. 24. 
A method according to claim 22, further comprising taking remedial action based on the analysis, the remedial action including one or more of notifying an administrator, modifying a security policy for one or more computer systems, and disabling one or more devices associated with interfaces of the host processor.
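The snoop path of claims 3-7 decomposes naturally: a bridge logic module in the host-device communication path mirrors every packet to snoop logic, where a configurable trigger engine flags "the certain data" by event and a configurable filter engine discards the rest, all without altering the forwarded traffic. A minimal sketch under assumed names (none of these classes come from a real product API):

```python
# Hedged sketch of the trigger/filter snoop architecture in claims 3-7.
# Class names and packet fields are illustrative assumptions.

class TriggerEngine:
    """Identifies the certain data by an event predicate (claims 4, 6)."""
    def __init__(self, event_predicate):
        self.match = event_predicate      # controller-configurable

class FilterEngine:
    """Filters out uninteresting traffic (claims 5, 7)."""
    def __init__(self, keep_predicate):
        self.keep = keep_predicate        # controller-configurable

class SnoopLogic:
    def __init__(self, trigger, filt):
        self.trigger, self.filter = trigger, filt
        self.captured = []                # could be compressed/stored/sent on

    def inspect(self, packet):
        if self.filter.keep(packet) and self.trigger.match(packet):
            self.captured.append(packet)  # "the certain data"

class BridgeLogic:
    """Sits in the interface-to-host path and mirrors traffic to the snoop."""
    def __init__(self, snoop):
        self.snoop = snoop

    def forward(self, packet):
        self.snoop.inspect(packet)        # capture is logically transparent:
        return packet                     # the packet passes through unchanged
```

Transparency here means `forward` returns the packet unmodified whether or not it was captured, so neither the upstream host nor the downstream device can observe the snooping.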
The present invention relates to methods and apparatuses for securing otherwise unsecured computer communications that addresses the above shortcomings among others. According to certain aspects, the invention relates to methods and apparatuses for implementing device snooping, in which some or all traffic passing between a host and a connected device is captured into memory and analyzed in real time by system software. According to other aspects, the invention relates to real time capture of certain types of traffic and communication of the captured traffic to a remote management system. According to still further aspects, the invention relates to detecting security threats in real time. Upon threat detection, possible actions are blocking individual devices or alerting a system administrator. According to certain additional aspects, the security functions performed by methods and apparatuses according to the invention can be logically transparent to the upstream host and to the downstream device.1. A computer system, comprising: an interface for sending and receiving data; a host that executes an operating system and applications that generate and utilize the data sent and received via the interface; and a secure subsystem interposed between the host and the interface for transparently capturing certain of the data sent and received via the interface. 2. A computer system according to claim 1, wherein the interface comprises one of USB, SAS, SATA, Firewire (IEEE 1394), Thunderbolt, and Ethernet. 3. A computer system according to claim 1, wherein the secure subsystem includes: a snoop logic that determines the certain data; and a bridge logic module in a communication path between the interface and the host that provides the sent and received data to the snoop logic. 4. A computer system according to claim 3, wherein the snoop logic includes a trigger engine that identifies the certain data based on an event associated with the sent and received data. 5. 
A computer system according to claim 3, wherein the snoop logic includes a filter engine that identifies the certain data by filtering out certain of the sent and received data. 6. A computer system according to claim 4, wherein the secure subsystem includes a controller for configuring how the trigger engine identifies the certain data. 7. A computer system according to claim 5, wherein the secure subsystem includes a controller for configuring how the filter engine identifies the certain data. 8. A computer system according to claim 1, wherein the secure subsystem includes a network interface for sending the certain data to a remote management system via a network. 9. A computer system according to claim 8, further comprising a compression block for compressing the certain data before sending to the remote management system. 10. A computer system according to claim 1, wherein the secure subsystem includes a memory controller for storing the certain data. 11. A computer system according to claim 10, further comprising a compression block for compressing the certain data before storing. 12. A method, comprising: sending and receiving data via an interface; generating and utilizing the data sent and received via the interface by an operating system and applications executing on a host processor; and transparently capturing certain of the data sent and received via the interface by a secure subsystem interposed between the host and the interface. 13. A method according to claim 12, wherein the interface comprises one of USB, SAS, SATA, Firewire (IEEE 1394), Thunderbolt, and Ethernet. 14. A method according to claim 12, wherein transparently capturing includes: configuring the interface for snooping; analyzing the sent and received data according to a configuration; and identifying the certain data based on the analyzing. 15. A method according to claim 14, wherein identifying is performed based on an event associated with the sent and received data. 16. 
A method according to claim 14, wherein identifying is performed by filtering out certain of the sent and received data. 17. A method according to claim 14, further comprising obtaining the configuration from a remote management system via a network. 18. A method according to claim 12, further comprising sending the certain data to a remote management system via a network. 19. A method according to claim 18, further comprising compressing the certain data before sending to the remote management system. 20. A method according to claim 12, further comprising storing the certain data. 21. A method according to claim 20, further comprising compressing the certain data before storing. 22. A method according to claim 18, further comprising analyzing the certain data at the remote management system. 23. A method according to claim 22, wherein analyzing includes cross-correlating the certain data with data obtained from a plurality of different computer systems. 24. A method according to claim 22, further comprising taking remedial action based on the analysis, the remedial action including one or more of notifying an administrator, modifying a security policy for one or more computer systems, and disabling one or more devices associated with interfaces of the host processor.
2,400
7,954
7,954
15,179,034
2,493
In some embodiments, a plurality of real-time monitoring node signal inputs receive streams of monitoring node signal values over time that represent a current operation of the industrial asset control system. A threat detection computer platform, coupled to the plurality of real-time monitoring node signal inputs, may receive the streams of monitoring node signal values and, for each stream of monitoring node signal values, generate a current monitoring node feature vector. The threat detection computer platform may then compare each generated current monitoring node feature vector with a corresponding decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node, and localize an origin of a threat to a particular monitoring node. The threat detection computer platform may then automatically transmit a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node.
1. A system to protect an industrial asset control system, comprising: a plurality of real-time monitoring node signal inputs to receive streams of monitoring node signal values over time that represent a current operation of the industrial asset control system; and a threat detection computer platform, coupled to the plurality of real-time monitoring node signal inputs, to: (i) receive the streams of monitoring node signal values and, for each stream of monitoring node signal values, generate a current monitoring node feature vector, (ii) compare each generated current monitoring node feature vector with a corresponding decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node, (iii) localize an origin of a threat to a particular monitoring node; and (iv) automatically transmit a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node. 2. The system of claim 1, wherein at least one of the monitoring nodes is associated with at least one of: sensor data, an auxiliary equipment input signal, a control intermediary parameter, and a control logic value. 3. The system of claim 1, wherein at least one monitoring node is associated with a plurality of decision boundaries and said comparison is performed in connection with each of those boundaries. 4. The system of claim 1, wherein at least one decision boundary was generated in accordance with a feature-based learning algorithm and at least one of: (i) a high fidelity model, and (ii) normal operation of the industrial asset control system. 5. The system of claim 1, wherein the alert notification is performed using a cloud-based system. 6. 
The system of claim 5, wherein said localizing is performed in accordance with a time at which a decision boundary associated with one monitoring node was crossed as compared to a time at which a decision boundary associated with another monitoring node was crossed. 7. The system of claim 1, wherein at least one of the current monitoring node feature vectors is associated with at least one of: principal components, statistical features, deep learning features, frequency domain features, time series analysis features, logical features, geographic or position based locations, and interaction features. 8. The system of claim 1, wherein a threat detection model associated with at least one decision boundary is dynamically adapted based on at least one of: a transient condition, a steady state model of the industrial asset control system, and data sets obtained while operating the system as in self-learning systems from incoming data stream. 9. The system of claim 1, wherein the threat is associated with at least one of: an actuator attack, a controller attack, a monitoring node attack, a plant state attack, spoofing, financial damage, unit availability, a unit trip, a loss of unit life, and asset damage requiring at least one new part. 10. 
The system of claim 1, further comprising: a normal space data source storing, for each of the plurality of monitoring nodes, a series of normal monitoring node values over time that represent normal operation of the industrial asset control system; a threatened space data source storing, for each of the plurality of monitoring nodes, a series of threatened monitoring node values over time that represent a threatened operation of the industrial asset control system; and a threat detection model creation computer, coupled to the normal space data source and the threatened space data source, to: receive the series of normal monitoring node values and generate the set of normal feature vectors, receive the series of threatened monitoring node values and generate the set of threatened feature vectors, and automatically calculate and output at least one decision boundary for a threat detection model based on the set of normal feature vectors and the set of threatened feature vectors. 11. The system of claim 10, wherein at least one of the series of normal monitoring node values and the series of threatened monitoring node values are associated with a high fidelity equipment model. 12. The system of claim 10, wherein at least one decision boundary exists in a multi-dimensional space and is associated with at least one of: a dynamic model, design of experiment data, machine learning techniques, a support vector machine, a full factorial process, Taguchi screening, a central composite methodology, a Box-Behnken methodology, real-world operating conditions, a full-factorial design, a screening design, and a central composite design. 13. The system of claim 10, wherein the threat detection model is associated with decision boundaries and at least one of: feature mapping, and feature parameters. 14. 
The system of claim 10, wherein at least one of the normal and threatened monitoring node values are obtained by running design of experiments on an industrial control system associated with at least one of: a power turbine, a jet engine, a locomotive, and an autonomous vehicle. 15. A computerized method to protect an industrial asset control system, comprising: receiving, by a threat detection computer platform, a plurality of real-time streams of monitoring node signal values over time that represent a current operation of the industrial asset control system; generating, by the threat detection computer platform, a current monitoring node feature vector for each stream of monitoring node signal values; comparing, by the threat detection computer platform, each generated current monitoring node feature vector with a corresponding non-linear, multi-dimensional decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node; localizing an origin of a threat to a particular monitoring node; and automatically transmitting a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node. 16. The method of claim 15, wherein at least one of the monitoring nodes is associated with at least one of: sensor data, an auxiliary equipment input signal, a control intermediary parameter, and a control logic value. 17. The method of claim 15, wherein at least one monitoring node is associated with a plurality of multi-dimensional decision boundaries, said comparison is performed in connection with each of those boundaries, and at least one decision boundary was generated in accordance with a feature-based learning algorithm and at least one of: (i) a high fidelity model, and (ii) normal operation of the industrial asset control system. 18. 
The method of claim 15, wherein said localizing is performed in accordance with a time at which a decision boundary associated with one monitoring node was crossed as compared to a time at which a decision boundary associated with another monitoring node was crossed. 19. A non-transient, computer-readable medium storing instructions to be executed by a processor to perform a method of protecting an asset control system, the method comprising: receiving, by a threat detection computer platform, real-time streams of monitoring node signal values over time that represent a current operation of the asset control system; generating, by the threat detection computer platform, a current monitoring node feature vector for each stream of monitoring node signal values; comparing, by the threat detection computer platform, each generated current monitoring node feature vector with a corresponding non-linear, multi-dimensional decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node; localizing an origin of the threat to a particular monitoring node; and automatically transmitting a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node. 20. The medium of claim 19, wherein at least one of the monitoring nodes is associated with at least one of: sensor data, an auxiliary equipment input signal, a control intermediary parameter, and a control logic value. 21. The medium of claim 19, wherein at least one monitoring node is associated with a plurality of multi-dimensional decision boundaries, said comparison is performed in connection with each of those boundaries, and at least one decision boundary was generated in accordance with a feature-based learning algorithm and at least one of: (i) a high fidelity model, and (ii) normal operation of the asset control system. 22. 
The medium of claim 19, wherein said localizing is performed in accordance with a time at which a decision boundary associated with one monitoring node was crossed as compared to a time at which a decision boundary associated with another monitoring node was crossed.
In some embodiments, a plurality of real-time monitoring node signal inputs receive streams of monitoring node signal values over time that represent a current operation of the industrial asset control system. A threat detection computer platform, coupled to the plurality of real-time monitoring node signal inputs, may receive the streams of monitoring node signal values and, for each stream of monitoring node signal values, generate a current monitoring node feature vector. The threat detection computer platform may then compare each generated current monitoring node feature vector with a corresponding decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node, and localize an origin of a threat to a particular monitoring node. The threat detection computer platform may then automatically transmit a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node.1. A system to protect an industrial asset control system, comprising: a plurality of real-time monitoring node signal inputs to receive streams of monitoring node signal values over time that represent a current operation of the industrial asset control system; and a threat detection computer platform, coupled to the plurality of real-time monitoring node signal inputs, to: (i) receive the streams of monitoring node signal values and, for each stream of monitoring node signal values, generate a current monitoring node feature vector, (ii) compare each generated current monitoring node feature vector with a corresponding decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node, (iii) localize an origin of a threat to a particular monitoring node; and (iv) automatically transmit a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node. 
2. The system of claim 1, wherein at least one of the monitoring nodes is associated with at least one of: sensor data, an auxiliary equipment input signal, a control intermediary parameter, and a control logic value. 3. The system of claim 1, wherein at least one monitoring node is associated with a plurality of decision boundaries and said comparison is performed in connection with each of those boundaries. 4. The system of claim 1, wherein at least one decision boundary was generated in accordance with a feature-based learning algorithm and at least one of: (i) a high fidelity model, and (ii) normal operation of the industrial asset control system. 5. The system of claim 1, wherein the alert notification is performed using a cloud-based system. 6. The system of claim 5, wherein said localizing is performed in accordance with a time at which a decision boundary associated with one monitoring node was crossed as compared to a time at which a decision boundary associated with another monitoring node was crossed. 7. The system of claim 1, wherein at least one of the current monitoring node feature vectors is associated with at least one of: principal components, statistical features, deep learning features, frequency domain features, time series analysis features, logical features, geographic or position based locations, and interaction features. 8. The system of claim 1, wherein a threat detection model associated with at least one decision boundary is dynamically adapted based on at least one of: a transient condition, a steady state model of the industrial asset control system, and data sets obtained while operating the system, as in self-learning systems, from an incoming data stream. 9. 
The system of claim 1, wherein the threat is associated with at least one of: an actuator attack, a controller attack, a monitoring node attack, a plant state attack, spoofing, financial damage, unit availability, a unit trip, a loss of unit life, and asset damage requiring at least one new part. 10. The system of claim 1, further comprising: a normal space data source storing, for each of the plurality of monitoring nodes, a series of normal monitoring node values over time that represent normal operation of the industrial asset control system; a threatened space data source storing, for each of the plurality of monitoring nodes, a series of threatened monitoring node values over time that represent a threatened operation of the industrial asset control system; and a threat detection model creation computer, coupled to the normal space data source and the threatened space data source, to: receive the series of normal monitoring node values and generate the set of normal feature vectors, receive the series of threatened monitoring node values and generate the set of threatened feature vectors, and automatically calculate and output at least one decision boundary for a threat detection model based on the set of normal feature vectors and the set of threatened feature vectors. 11. The system of claim 10, wherein at least one of the series of normal monitoring node values and the series of threatened monitoring node values are associated with a high fidelity equipment model. 12. The system of claim 10, wherein at least one decision boundary exists in a multi-dimensional space and is associated with at least one of: a dynamic model, design of experiment data, machine learning techniques, a support vector machine, a full factorial process, Taguchi screening, a central composite methodology, a Box-Behnken methodology, real-world operating conditions, a full-factorial design, a screening design, and a central composite design. 13. 
The system of claim 10, wherein the threat detection model is associated with decision boundaries and at least one of: feature mapping, and feature parameters. 14. The system of claim 10, wherein at least one of the normal and threatened monitoring node values are obtained by running design of experiments on an industrial control system associated with at least one of: a power turbine, a jet engine, a locomotive, and an autonomous vehicle. 15. A computerized method to protect an industrial asset control system, comprising: receiving, by a threat detection computer platform, a plurality of real-time streams of monitoring node signal values over time that represent a current operation of the industrial asset control system; generating, by the threat detection computer platform, a current monitoring node feature vector for each stream of monitoring node signal values; comparing, by the threat detection computer platform, each generated current monitoring node feature vector with a corresponding non-linear, multi-dimensional decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node; localizing an origin of a threat to a particular monitoring node; and automatically transmitting a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node. 16. The method of claim 15, wherein at least one of the monitoring nodes is associated with at least one of: sensor data, an auxiliary equipment input signal, a control intermediary parameter, and a control logic value. 17. 
The method of claim 15, wherein at least one monitoring node is associated with a plurality of multi-dimensional decision boundaries, said comparison is performed in connection with each of those boundaries, and at least one decision boundary was generated in accordance with a feature-based learning algorithm and at least one of: (i) a high fidelity model, and (ii) normal operation of the industrial asset control system. 18. The method of claim 15, wherein said localizing is performed in accordance with a time at which a decision boundary associated with one monitoring node was crossed as compared to a time at which a decision boundary associated with another monitoring node was crossed. 19. A non-transient, computer-readable medium storing instructions to be executed by a processor to perform a method of protecting an asset control system, the method comprising: receiving, by a threat detection computer platform, real-time streams of monitoring node signal values over time that represent a current operation of the asset control system; generating, by the threat detection computer platform, a current monitoring node feature vector for each stream of monitoring node signal values; comparing, by the threat detection computer platform, each generated current monitoring node feature vector with a corresponding non-linear, multi-dimensional decision boundary for that monitoring node, the decision boundary separating a normal state from an abnormal state for that monitoring node; localizing an origin of the threat to a particular monitoring node; and automatically transmitting a threat alert signal based on results of said comparisons along with an indication of the particular monitoring node. 20. The medium of claim 19, wherein at least one of the monitoring nodes is associated with at least one of: sensor data, an auxiliary equipment input signal, a control intermediary parameter, and a control logic value. 21. 
The medium of claim 19, wherein at least one monitoring node is associated with a plurality of multi-dimensional decision boundaries, said comparison is performed in connection with each of those boundaries, and at least one decision boundary was generated in accordance with a feature-based learning algorithm and at least one of: (i) a high fidelity model, and (ii) normal operation of the asset control system. 22. The medium of claim 19, wherein said localizing is performed in accordance with a time at which a decision boundary associated with one monitoring node was crossed as compared to a time at which a decision boundary associated with another monitoring node was crossed.
2,400
7,955
7,955
15,453,081
2,411
Apparatuses and methods for controlling a manner of delivering content to a content user in a mobile telecommunication network are provided. The content is sent to the content user first using a first transmission rate, when the content user is in a first radio state and uses a first battery power, and then using a second transmission rate that is lower than the first transmission rate, when the content user is in a second radio state and uses a second battery power that is smaller than the first battery power. The sending is performed such that (A), while delivering the content, the amount of the content already received by the content user exceeds the amount of the content used by the content user, and (B) the energy used by the content user during delivery is minimized.
1-20. (canceled) 21. A method for managing reception of content, the method comprising: selectively receiving, by a content user, the content at a first transmission rate and at a second transmission rate that is lower than the first transmission rate, wherein the content user selectively receives the content at the first and second transmission rates based on an amount of the content already received by the content user exceeding an amount of the content used by the content user, wherein the content user uses less power for the second transmission rate than for the first transmission rate. 22. The method of claim 21, further comprising: receiving respective state configuring signals before starting to receive the content using the first transmission rate or using the second transmission rate. 23. The method of claim 21, wherein content is selectively received from a proxy via a base station, which is an eNodeB or a WiFi access point. 24. The method of claim 21, wherein the content is received at the first transmission rate during a first time interval, which is based on a size of the content, a playout rate, the first transmission rate, and the second transmission rate, wherein the first transmission rate is larger than the playout rate. 25. The method of claim 24, further comprising: sending, by the content user, measurements of a delivery rate at which the content user receives the content; and receiving, by the content user, a remaining portion of the content using the second transmission rate if the playout rate is smaller than a value of the delivery rate measured after the first time interval. 26. 
The method of claim 25, further comprising: receiving a remaining portion of the content using the second transmission rate if an estimated delivery time that is necessary for the content user to receive the remaining portion of the content according to the value of the measured delivery rate is less than a remaining time for the content user to entirely use the content. 27. The method of claim 26, wherein the remaining portion of the content is received using the first transmission rate if a first amount of energy exceeds a second amount of energy, or the first transmission rate and the second transmission rate if the first amount of energy does not exceed the second amount of energy, wherein the first amount of energy is an amount of energy used by the content user to receive the remaining portion at the first transmission rate and the second amount of energy is an amount of energy used by the content user to receive the remaining portion at the second transmission rate. 28. A user equipment, comprising: a buffer configured to store received content; and a receiver coupled to the buffer and configured to selectively receive the content at a first transmission rate and at a second transmission rate that is lower than the first transmission rate, wherein the content is selectively received at the first and second transmission rates based on an amount of the content already received by the user equipment exceeding an amount of the content used by the user equipment, wherein the user equipment uses less power for the second transmission rate than for the first transmission rate. 29. The user equipment of claim 28, wherein the receiver is configured to receive respective state configuring signals before starting to receive the content using the first transmission rate or using the second transmission rate. 30. The user equipment of claim 28, wherein content is selectively received from a proxy via a base station, which is an eNodeB or a WiFi access point. 31. 
The user equipment of claim 28, wherein the content is received at the first transmission rate during a first time interval, which is based on a size of the content, a playout rate, the first transmission rate, and the second transmission rate, wherein the first transmission rate is larger than the playout rate. 32. The user equipment of claim 31, further comprising: a transmitter configured to send measurements of a delivery rate at which the user equipment receives the content, wherein the receiver is further configured to receive a remaining portion of the content using the second transmission rate if the playout rate is smaller than a value of the delivery rate measured after the first time interval. 33. The user equipment of claim 32, wherein the receiver is further configured to receive a remaining portion of the content using the second transmission rate if an estimated delivery time that is necessary for the user equipment to receive the remaining portion of the content according to the value of the measured delivery rate is less than a remaining time for the user equipment to entirely use the content. 34. The user equipment of claim 33, wherein the remaining portion of the content is received using the first transmission rate if a first amount of energy exceeds a second amount of energy, or the first transmission rate and the second transmission rate if the first amount of energy does not exceed the second amount of energy, wherein the first amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the first transmission rate and the second amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the second transmission rate. 35. 
A non-transitory computer-readable medium storing executable code, which when executed on a user equipment, causes the user equipment to: selectively receive content at a first transmission rate and at a second transmission rate that is lower than the first transmission rate, wherein the user equipment selectively receives the content at the first and second transmission rates based on an amount of the content already received by the user equipment exceeding an amount of the content used by the user equipment, wherein the user equipment uses less power for the second transmission rate than for the first transmission rate. 36. The non-transitory computer-readable medium of claim 35, further comprising code causing the user equipment to: receive respective state configuring signals before starting to receive the content using the first transmission rate or using the second transmission rate. 37. The non-transitory computer-readable medium of claim 35, wherein content is selectively received from a proxy via a base station, which is an eNodeB or a WiFi access point. 38. The non-transitory computer-readable medium of claim 35, wherein the content is received at the first transmission rate during a first time interval, which is based on a size of the content, a playout rate, the first transmission rate, and the second transmission rate, wherein the first transmission rate is larger than the playout rate. 39. The non-transitory computer-readable medium of claim 38, further comprising code causing the user equipment to: send measurements of a delivery rate at which the user equipment receives the content; and receive a remaining portion of the content using the second transmission rate if the playout rate is smaller than a value of the delivery rate measured after the first time interval. 40. 
The non-transitory computer-readable medium of claim 39, further comprising code causing the user equipment to: receive a remaining portion of the content using the second transmission rate if an estimated delivery time that is necessary for the user equipment to receive the remaining portion of the content according to the value of the measured delivery rate is less than a remaining time for the user equipment to entirely use the content. 41. The non-transitory computer-readable medium of claim 40, wherein the remaining portion of the content is received using the first transmission rate if a first amount of energy exceeds a second amount of energy, or the first transmission rate and the second transmission rate if the first amount of energy does not exceed the second amount of energy, wherein the first amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the first transmission rate and the second amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the second transmission rate.
Apparatuses and methods for controlling a manner of delivering content to a content user in a mobile telecommunication network are provided. The content is sent to the content user first using a first transmission rate, when the content user is in a first radio state and uses a first battery power, and then using a second transmission rate that is lower than the first transmission rate, when the content user is in a second radio state and uses a second battery power that is smaller than the first battery power. The sending is performed such that (A), while delivering the content, the amount of the content already received by the content user exceeds the amount of the content used by the content user, and (B) the energy used by the content user during delivery is minimized.1-20. (canceled) 21. A method for managing reception of content, the method comprising: selectively receiving, by a content user, the content at a first transmission rate and at a second transmission rate that is lower than the first transmission rate, wherein the content user selectively receives the content at the first and second transmission rates based on an amount of the content already received by the content user exceeding an amount of the content used by the content user, wherein the content user uses less power for the second transmission rate than for the first transmission rate. 22. The method of claim 21, further comprising: receiving respective state configuring signals before starting to receive the content using the first transmission rate or using the second transmission rate. 23. The method of claim 21, wherein content is selectively received from a proxy via a base station, which is an eNodeB or a WiFi access point. 24. 
The method of claim 21, wherein the content is received at the first transmission rate during a first time interval, which is based on a size of the content, a playout rate, the first transmission rate, and the second transmission rate, wherein the first transmission rate is larger than the playout rate. 25. The method of claim 24, further comprising: sending, by the content user, measurements of a delivery rate at which the content user receives the content; and receiving, by the content user, a remaining portion of the content using the second transmission rate if the playout rate is smaller than a value of the delivery rate measured after the first time interval. 26. The method of claim 25, further comprising: receiving a remaining portion of the content using the second transmission rate if an estimated delivery time that is necessary for the content user to receive the remaining portion of the content according to the value of the measured delivery rate is less than a remaining time for the content user to entirely use the content. 27. The method of claim 26, wherein the remaining portion of the content is received using the first transmission rate if a first amount of energy exceeds a second amount of energy, or the first transmission rate and the second transmission rate if the first amount of energy does not exceed the second amount of energy, wherein the first amount of energy is an amount of energy used by the content user to receive the remaining portion at the first transmission rate and the second amount of energy is an amount of energy used by the content user to receive the remaining portion at the second transmission rate. 28. 
A user equipment, comprising: a buffer configured to store received content; and a receiver coupled to the buffer and configured to selectively receive the content at a first transmission rate and at a second transmission rate that is lower than the first transmission rate, wherein the content is selectively received at the first and second transmission rates based on an amount of the content already received by the user equipment exceeding an amount of the content used by the user equipment, wherein the user equipment uses less power for the second transmission rate than for the first transmission rate. 29. The user equipment of claim 28, wherein the receiver is configured to receive respective state configuring signals before starting to receive the content using the first transmission rate or using the second transmission rate. 30. The user equipment of claim 28, wherein content is selectively received from a proxy via a base station, which is an eNodeB or a WiFi access point. 31. The user equipment of claim 28, wherein the content is received at the first transmission rate during a first time interval, which is based on a size of the content, a playout rate, the first transmission rate, and the second transmission rate, wherein the first transmission rate is larger than the playout rate. 32. The user equipment of claim 31, further comprising: a transmitter configured to send measurements of a delivery rate at which the user equipment receives the content, wherein the receiver is further configured to receive a remaining portion of the content using the second transmission rate if the playout rate is smaller than a value of the delivery rate measured after the first time interval. 33. 
The user equipment of claim 32, wherein the receiver is further configured to receive a remaining portion of the content using the second transmission rate if an estimated delivery time that is necessary for the user equipment to receive the remaining portion of the content according to the value of the measured delivery rate is less than a remaining time for the user equipment to entirely use the content. 34. The user equipment of claim 33, wherein the remaining portion of the content is received using the first transmission rate if a first amount of energy exceeds a second amount of energy, or the first transmission rate and the second transmission rate if the first amount of energy does not exceed the second amount of energy, wherein the first amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the first transmission rate and the second amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the second transmission rate. 35. A non-transitory computer-readable medium storing executable code, which when executed on a user equipment, causes the user equipment to: selectively receive content at a first transmission rate and at a second transmission rate that is lower than the first transmission rate, wherein the user equipment selectively receives the content at the first and second transmission rates based on an amount of the content already received by the user equipment exceeding an amount of the content used by the user equipment, wherein the user equipment uses less power for the second transmission rate than for the first transmission rate. 36. The non-transitory computer-readable medium of claim 35, further comprising code causing the user equipment to: receive respective state configuring signals before starting to receive the content using the first transmission rate or using the second transmission rate. 37. 
The non-transitory computer-readable medium of claim 35, wherein content is selectively received from a proxy via a base station, which is an eNodeB or a WiFi access point. 38. The non-transitory computer-readable medium of claim 35, wherein the content is received at the first transmission rate during a first time interval, which is based on a size of the content, a playout rate, the first transmission rate, and the second transmission rate, wherein the first transmission rate is larger than the playout rate. 39. The non-transitory computer-readable medium of claim 38, further comprising code causing the user equipment to: send measurements of a delivery rate at which the user equipment receives the content; and receive a remaining portion of the content using the second transmission rate if the playout rate is smaller than a value of the delivery rate measured after the first time interval. 40. The non-transitory computer-readable medium of claim 39, further comprising code causing the user equipment to: receive a remaining portion of the content using the second transmission rate if an estimated delivery time that is necessary for the user equipment to receive the remaining portion of the content according to the value of the measured delivery rate is less than a remaining time for the user equipment to entirely use the content. 41. 
The non-transitory computer-readable medium of claim 40, wherein the remaining portion of the content is received using the first transmission rate if a first amount of energy exceeds a second amount of energy, or the first transmission rate and the second transmission rate if the first amount of energy does not exceed the second amount of energy, wherein the first amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the first transmission rate and the second amount of energy is an amount of energy used by the user equipment to receive the remaining portion at the second transmission rate.
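The "first time interval" in claims 24/31/38 above — fast-rate reception sized so the slow rate can finish delivery without the playout buffer ever running dry — admits a simple closed form. The sketch below derives it from the buffer-ahead condition in the abstract; the function name and the exact formula are this example's assumptions, since the claims only say the interval is "based on" the content size, playout rate, and two transmission rates.

```python
def first_phase_duration(size, playout, r1, r2):
    """Minimum time to receive at the fast rate r1 so that the remainder
    can arrive at the slow rate r2 while playback never stalls.

    Requiring received(t) >= consumed(t) throughout the slow phase gives:
        (r1 - playout) * T1 >= (playout - r2) * (size - r1 * T1) / r2
    which solves to:
        T1 >= size * (playout - r2) / (playout * (r1 - r2))

    Assumes r1 > playout, as the claims require; if r2 >= playout the
    slow rate alone keeps up and no fast phase is needed.
    """
    if r1 <= playout:
        raise ValueError("fast rate must exceed the playout rate")
    if r2 >= playout:
        return 0.0
    return size * (playout - r2) / (playout * (r1 - r2))
```

For a 100 MB file played at 2 MB/s with rates of 4 MB/s and 1 MB/s, the fast phase must last at least 100/6 ≈ 16.7 s before dropping to the low-power slow rate.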
2,400
7,956
7,956
14,267,181
2,457
Systems and methods may include receiving first data regarding first devices in a network. The first data may include an amount of utilization of first resources in the network by each device of the first devices. The first data also may include characteristic data of each device of the first devices. Systems and methods may include determining a predictive model for utilization of each resource of second resources in the network based on the first data. Systems and methods may include predicting an amount of utilization of each resource of the second resources by second devices using the predictive model. Systems and methods may include allocating each resource of the second resources based on the predicted amount of utilization of such resource by the second devices.
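The abstract above describes a four-step pipeline: collect per-device utilization and characteristic data, fit a predictive model, predict utilization by a second set of devices, and allocate resources from those predictions. A minimal sketch of that pipeline follows; the function names, the choice of a one-variable least-squares model, and all numeric data are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the predict-and-allocate pipeline: fit a model on
# observed (characteristic, utilization) pairs, then size each resource to
# the predicted total demand of its devices. The least-squares model and
# all names/values here are assumptions for illustration.

def fit_model(samples):
    """Fit utilization ~ characteristic by ordinary least squares."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    var_x = sum((x - mean_x) ** 2 for x, _ in samples)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / var_x
    intercept = mean_y - slope * mean_x
    return lambda x: intercept + slope * x

def allocate(resources, model, devices_by_resource):
    """Predicted total utilization of each resource by its nearby devices."""
    allocation = {}
    for resource in resources:
        characteristics = devices_by_resource.get(resource, [])
        allocation[resource] = sum(model(c) for c in characteristics)
    return allocation

# "First plurality": observed (characteristic, utilization) pairs.
history = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.0), (4.0, 8.1)]
model = fit_model(history)

# "Second plurality": characteristics of new devices near each resource.
demand = allocate(["cell-A", "cell-B"], model,
                  {"cell-A": [2.5, 3.5], "cell-B": [1.0]})
```

The allocation step here simply reports predicted demand per resource; the claims go further (repurposing, steering devices, or capping admissions), all of which would consume this same per-resource prediction.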
1. A method comprising: receiving first data regarding a first plurality of devices in a network, the first data including: an amount of utilization of a first plurality of resources in the network by each device of the first plurality of devices; and characteristic data of each device of the first plurality of devices; determining a predictive model for utilization of each resource of a second plurality of resources in the network based on the first data; predicting an amount of utilization of each resource of the second plurality of resources by a second plurality of devices using the predictive model; and allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices. 2. The method of claim 1, further comprising: determining a correlation between the characteristic data of each device of the first plurality of devices and the amount of utilization of each resource of the first plurality of resources, wherein determining the predictive model for utilization of each resource of the second plurality of resources in the network includes determining the predictive model based on the determined correlation. 3. 
The method of claim 2, further comprising: receiving second data regarding the second plurality of devices in a network, the second data including: characteristic data of each device of the second plurality of devices; and location data of each device of the second plurality of devices, wherein the predictive model is configured to output an estimated utilization of a resource of the second plurality of resources by a device of the second plurality of devices in response to receiving as an input the characteristic data of such device and data identifying such resource, and wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes, for each resource of the second plurality of resources: identifying devices of the second plurality of devices that are within a particular range of such resource based on the location data of such devices; inputting the data identifying such resource and the characteristic data of the devices identified as being within the particular range of such resource into the predictive model; and determining as output from the predictive model a total estimated utilization of such resource by the devices identified as being within the particular range of such resource, the total estimated utilization corresponding to the predicted amount of utilization of such resource. 4. 
The method of claim 1, wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: predicting an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predicting an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes repurposing the second resource to perform a function similar to the first resource. 5. The method of claim 1, wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: predicting an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predicting an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes causing a portion of the second plurality of devices to utilize the second resource instead of the first resource. 6. 
The method of claim 1, wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: predicting an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predicting an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes allowing only a particular number of devices of the second plurality of devices to utilize the first resource, such that other devices of the second plurality of devices will utilize the second resource instead of the first resource. 7. The method of claim 1, wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes increasing the capacity of a particular resource in response to determining that the predicted amount of utilization of the particular resource will use more than about 80% of the capacity of the particular resource. 8. 
A system comprising: a monitoring device configured to receive first data regarding a first plurality of devices in a network, the first data including: an amount of utilization of a first plurality of resources in the network by each device of the first plurality of devices; and characteristic data of each device of the first plurality of devices; an analysis device configured to: determine a predictive model for utilization of each resource of a second plurality of resources in the network based on the first data; and predict an amount of utilization of each resource of the second plurality of resources by a second plurality of devices using the predictive model; and a resource allocation device configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices. 9. The system according to claim 8, wherein the analysis device is further configured to: determine a correlation between the characteristic data of each device of the first plurality of devices and the amount of utilization of each resource of the first plurality of resources, and determine the predictive model based on the determined correlation. 10. 
The system according to claim 9, wherein the monitoring device is further configured to receive second data regarding the second plurality of devices in a network, the second data including: characteristic data of each device of the second plurality of devices; and location data of each device of the second plurality of devices, wherein the predictive model is configured to output an estimated utilization of a resource of the second plurality of resources by a device of the second plurality of devices in response to receiving as an input the characteristic data of such device and data identifying such resource, and wherein the analysis device is configured to, for each resource of the second plurality of resources: identify devices of the second plurality of devices that are within a particular range of such resource based on the location data of such devices; input the data identifying such resource and the characteristic data of the devices identified as being within the particular range of such resource into the predictive model; and determine as output from the predictive model a total estimated utilization of such resource by the devices identified as being within the particular range of such resource, the total estimated utilization corresponding to the predicted amount of utilization of such resource. 11. The system according to claim 8, wherein the analysis device is configured to: predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the resource allocation device is configured to repurpose the second resource to perform a function similar to the first resource. 12. 
The system according to claim 8, wherein the analysis device is configured to: predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the resource allocation device is configured to cause a portion of the second plurality of devices to utilize the second resource instead of the first resource. 13. The system according to claim 8, wherein the analysis device is configured to: predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the resource allocation device is configured to allow only a particular number of devices of the second plurality of devices to utilize the first resource, such that other devices of the second plurality of devices will utilize the second resource instead of the first resource. 14. 
A computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive first data regarding a first plurality of devices in a network, the first data including: an amount of utilization of a first plurality of resources in the network by each device of the first plurality of devices; and characteristic data of each device of the first plurality of devices; computer readable program code configured to determine a predictive model for utilization of each resource of a second plurality of resources in the network based on the first data; computer readable program code configured to predict an amount of utilization of each resource of the second plurality of resources by a second plurality of devices using the predictive model; and computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices. 15. The computer program product of claim 14, further comprising: computer readable program code configured to determine a correlation between the characteristic data of each device of the first plurality of devices and the amount of utilization of each resource of the first plurality of resources, wherein the computer readable program code configured to determine the predictive model for utilization of each resource of the second plurality of resources in the network based on the first data includes: computer readable program code configured to determine the predictive model based on the determined correlation. 16. 
The computer program product of claim 15, further comprising: computer readable program code configured to receive second data regarding the second plurality of devices in a network, the second data including: characteristic data of each device of the second plurality of devices; and location data of each device of the second plurality of devices, wherein the predictive model is configured to output an estimated utilization of a resource of the second plurality of resources by a device of the second plurality of devices in response to receiving as an input the characteristic data of such device and data identifying such resource, and wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to, for each resource of the second plurality of resources, identify devices of the second plurality of devices that are within a particular range of such resource based on the location data of such devices; computer readable program code configured to, for each resource of the second plurality of resources, input the data identifying such resource and the characteristic data of the devices identified as being within the particular range of such resource into the predictive model; and computer readable program code configured to, for each resource of the second plurality of resources, determine as output from the predictive model a total estimated utilization of such resource by the devices identified as being within the particular range of such resource, the total estimated utilization corresponding to the predicted amount of utilization of such resource. 17. 
The computer program product of claim 14, wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and computer readable program code configured to predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to repurpose the second resource to perform a function similar to the first resource. 18. 
The computer program product of claim 14, wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and computer readable program code configured to predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to cause a portion of the second plurality of devices to utilize the second resource instead of the first resource. 19. 
The computer program product of claim 14, wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and computer readable program code configured to predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to allow only a particular number of devices of the second plurality of devices to utilize the first resource, such that other devices of the second plurality of devices will utilize the second resource instead of the first resource. 20. The computer program product of claim 14, wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to increase the capacity of a particular resource in response to determining that the predicted amount of utilization of the particular resource will use more than about 80% of the capacity of the particular resource.
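Claim 3 localizes the prediction: for each resource, only devices within a particular range of that resource (by location data) contribute to its estimated utilization, and claims 7 and 20 trigger a capacity increase when predicted use exceeds about 80% of capacity. The sketch below illustrates both steps; the distance metric, the range value, the stand-in linear estimator, and the exact threshold constant are all assumptions for illustration.

```python
# Illustrative sketch of the per-resource steps in claims 3 and 7:
# identify in-range devices by location, sum their estimated utilization,
# and flag the resource for expansion above ~80% of capacity. All
# constants and names here are assumptions, not from the patent.

RANGE = 5.0          # the "particular range" around a resource (assumed units)
THRESHOLD = 0.80     # claims 7/20: expand capacity above ~80% predicted use

def within_range(resource_loc, device_loc, limit=RANGE):
    # 1-D distance used as a stand-in for a real location metric.
    return abs(resource_loc - device_loc) <= limit

def predict_total(resource_loc, devices, estimate):
    """Sum estimated utilization over devices within range of the resource."""
    return sum(estimate(d["char"]) for d in devices
               if within_range(resource_loc, d["loc"]))

def needs_more_capacity(predicted, capacity):
    return predicted > THRESHOLD * capacity

devices = [{"loc": 1.0, "char": 2.0},
           {"loc": 3.0, "char": 4.0},
           {"loc": 20.0, "char": 9.0}]   # out of range, excluded
estimate = lambda c: 1.5 * c             # stand-in for the predictive model
total = predict_total(0.0, devices, estimate)
```

Under these assumed numbers the out-of-range device contributes nothing, and the expansion rule then compares the in-range total against 80% of the resource's capacity.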
Systems and methods may include receiving first data regarding first devices in a network. The first data may include an amount of utilization of first resources in the network by each device of the first devices. The first data also may include characteristic data of each device of the first devices. Systems and methods may include determining a predictive model for utilization of each resource of second resources in the network based on the first data. Systems and methods may include predicting an amount of utilization of each resource of the second resources by second devices using the predictive model. Systems and methods may include allocating each resource of the second resources based on the predicted amount of utilization of such resource by the second devices.1. A method comprising: receiving first data regarding a first plurality of devices in a network, the first data including: an amount of utilization of a first plurality of resources in the network by each device of the first plurality of devices; and characteristic data of each device of the first plurality of devices; determining a predictive model for utilization of each resource of a second plurality of resources in the network based on the first data; predicting an amount of utilization of each resource of the second plurality of resources by a second plurality of devices using the predictive model; and allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices. 2. The method of claim 1, further comprising: determining a correlation between the characteristic data of each device of the first plurality of devices and the amount of utilization of each resource of the first plurality of resources, wherein determining the predictive model for utilization of each resource of the second plurality of resources in the network includes determining the predictive model based on the determined correlation. 3. 
The method of claim 2, further comprising: receiving second data regarding the second plurality of devices in a network, the second data including: characteristic data of each device of the second plurality of devices; and location data of each device of the second plurality of devices, wherein the predictive model is configured to output an estimated utilization of a resource of the second plurality of resources by a device of the second plurality of devices in response to receiving as an input the characteristic data of such device and data identifying such resource, and wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes, for each resource of the second plurality of resources: identifying devices of the second plurality of devices that are within a particular range of such resource based on the location data of such devices; inputting the data identifying such resource and the characteristic data of the devices identified as being within the particular range of such resource into the predictive model; and determining as output from the predictive model a total estimated utilization of such resource by the devices identified as being within the particular range of such resource, the total estimated utilization corresponding to the predicted amount of utilization of such resource. 4. 
The method of claim 1, wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: predicting an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predicting an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes repurposing the second resource to perform a function similar to the first resource. 5. The method of claim 1, wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: predicting an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predicting an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes causing a portion of the second plurality of devices to utilize the second resource instead of the first resource. 6. 
The method of claim 1, wherein predicting the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: predicting an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predicting an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes allowing only a particular number of devices of the second plurality of devices to utilize the first resource, such that other devices of the second plurality of devices will utilize the second resource instead of the first resource. 7. The method of claim 1, wherein allocating each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes increasing the capacity of a particular resource in response to determining that the predicted amount of utilization of the particular resource will use more than about 80% of the capacity of the particular resource. 8. 
A system comprising: a monitoring device configured to receive first data regarding a first plurality of devices in a network, the first data including: an amount of utilization of a first plurality of resources in the network by each device of the first plurality of devices; and characteristic data of each device of the first plurality of devices; an analysis device configured to: determine a predictive model for utilization of each resource of a second plurality of resources in the network based on the first data; and predict an amount of utilization of each resource of the second plurality of resources by a second plurality of devices using the predictive model; and a resource allocation device configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices. 9. The system according to claim 8, wherein the analysis device is further configured to: determine a correlation between the characteristic data of each device of the first plurality of devices and the amount of utilization of each resource of the first plurality of resources, and determine the predictive model based on the determined correlation. 10. 
The system according to claim 9, wherein the monitoring device is further configured to receive second data regarding the second plurality of devices in a network, the second data including: characteristic data of each device of the second plurality of devices; and location data of each device of the second plurality of devices, wherein the predictive model is configured to output an estimated utilization of a resource of the second plurality of resources by a device of the second plurality of devices in response to receiving as an input the characteristic data of such device and data identifying such resource, and wherein the analysis device is configured to, for each resource of the second plurality of resources: identify devices of the second plurality of devices that are within a particular range of such resource based on the location data of such devices; input the data identifying such resource and the characteristic data of the devices identified as being within the particular range of such resource into the predictive model; and determine as output from the predictive model a total estimated utilization of such resource by the devices identified as being within the particular range of such resource, the total estimated utilization corresponding to the predicted amount of utilization of such resource. 11. The system according to claim 8, wherein the analysis device is configured to: predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the resource allocation device is configured to repurpose the second resource to perform a function similar to the first resource. 12. 
The system according to claim 8, wherein the analysis device is configured to: predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the resource allocation device is configured to cause a portion of the second plurality of devices to utilize the second resource instead of the first resource. 13. The system according to claim 8, wherein the analysis device is configured to: predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the resource allocation device is configured to allow only a particular number of devices of the second plurality of devices to utilize the first resource, such that other devices of the second plurality of devices will utilize the second resource instead of the first resource. 14. 
A computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive first data regarding a first plurality of devices in a network, the first data including: an amount of utilization of a first plurality of resources in the network by each device of the first plurality of devices; and characteristic data of each device of the first plurality of devices; computer readable program code configured to determine a predictive model for utilization of each resource of a second plurality of resources in the network based on the first data; computer readable program code configured to predict an amount of utilization of each resource of the second plurality of resources by a second plurality of devices using the predictive model; and computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices. 15. The computer program product of claim 14, further comprising: computer readable program code configured to determine a correlation between the characteristic data of each device of the first plurality of devices and the amount of utilization of each resource of the first plurality of resources, wherein the computer readable program code configured to determine the predictive model for utilization of each resource of the second plurality of resources in the network based on the first data includes: computer readable program code configured to determine the predictive model based on the determined correlation. 16. 
The computer program product of claim 15, further comprising: computer readable program code configured to receive second data regarding the second plurality of devices in a network, the second data including: characteristic data of each device of the second plurality of devices; and location data of each device of the second plurality of devices, wherein the predictive model is configured to output an estimated utilization of a resource of the second plurality of resources by a device of the second plurality of devices in response to receiving as an input the characteristic data of such device and data identifying such resource, and wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to, for each resource of the second plurality of resources, identify devices of the second plurality of devices that are within a particular range of such resource based on the location data of such devices; computer readable program code configured to, for each resource of the second plurality of resources, input the data identifying such resource and the characteristic data of the devices identified as being within the particular range of such resource into the predictive model; and computer readable program code configured to, for each resource of the second plurality of resources, determine as output from the predictive model a total estimated utilization of such resource by the devices identified as being within the particular range of such resource, the total estimated utilization corresponding to the predicted amount of utilization of such resource. 17. 
The computer program product of claim 14, wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and computer readable program code configured to predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to repurpose the second resource to perform a function similar to the first resource. 18. 
The computer program product of claim 14, wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and computer readable program code configured to predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to cause a portion of the second plurality of devices to utilize the second resource instead of the first resource. 19. 
The computer program product of claim 14, wherein the computer readable program code configured to predict the amount of utilization of each resource of the second plurality of resources by the second plurality of devices using the predictive model includes: computer readable program code configured to predict an amount of utilization of a first resource, such that an available capacity of the first resource will be reduced from its current level at a time in the future; and computer readable program code configured to predict an amount of utilization of a second resource, such that an available capacity of the second resource will be reduced from its current level at the time in the future, and wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to allow only a particular number of devices of the second plurality of devices to utilize the first resource, such that other devices of the second plurality of devices will utilize the second resource instead of the first resource. 20. The computer program product of claim 14, wherein the computer readable program code configured to allocate each resource of the second plurality of resources based on the predicted amount of utilization of such resource by the second plurality of devices includes: computer readable program code configured to increase the capacity of a particular resource in response to determining that the predicted amount of utilization of the particular resource will use more than about 80% of the capacity of the particular resource.
2,400
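The predict-and-allocate flow in the claims above (fit a model on first-plurality utilization, predict second-plurality demand, and increase capacity past roughly 80% load, per claim 20) can be sketched as follows. This is a minimal illustration, not the patented method: the claims leave the predictive model unspecified, so a per-device-type mean is used here, and all names are hypothetical.

```python
# Hedged sketch of the claimed predict-and-allocate flow.
# Assumption: the "predictive model" is a per-(device type, resource) mean
# utilization; the patent does not specify the model.
from collections import defaultdict

def fit_model(first_data):
    """first_data: list of (device_type, resource, utilization) observations."""
    totals = defaultdict(lambda: [0.0, 0])
    for device_type, resource, used in first_data:
        entry = totals[(device_type, resource)]
        entry[0] += used
        entry[1] += 1
    return {key: s / n for key, (s, n) in totals.items()}

def predict(model, second_devices, resource):
    """Total estimated utilization of `resource` over the second plurality of devices."""
    return sum(model.get((dtype, resource), 0.0) for dtype in second_devices)

def allocate(model, second_devices, capacities, threshold=0.80):
    """Flag any resource whose predicted demand exceeds ~80% of its capacity."""
    plan = {}
    for resource, capacity in capacities.items():
        demand = predict(model, second_devices, resource)
        plan[resource] = "increase" if demand > threshold * capacity else "keep"
    return plan
```

The allocation step could equally implement the other claimed responses (repurposing a second resource or capping the number of devices on the first); the capacity increase of claim 20 is the simplest to show.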
7,957
7,957
14,702,521
2,419
In one general sense, display of content communicated by a sender communication device to a destination communication device may be enabled by receiving, at a destination communication device, content to be displayed by the destination communication device. Characteristics of a display of the received content by the destination communication device may be algorithmically identified in accordance with display configuration settings for the destination communication device. Based on the identified characteristics, at least one change to be made to capture configuration settings at a capturing communication device used to capture the received content may be identified. At least one alternative capture configuration setting may be communicated to the capturing communication device. Content that is captured by the capturing communication device is received at the destination communication device based on the alternative capture configuration setting communicated.
1. A method comprising: monitoring, using at least one processor, a plurality of communication sessions of a user; identifying, based on the monitored communication sessions, one or more actions frequently used by the user; providing the user a prompt associated with the identified one or more actions, wherein the prompt includes an option to automate the identified one or more actions; and if the user selects the option to automate the identified one or more actions, automating the identified one or more actions in association with one or more subsequent communication sessions of the user. 2. The method of claim 1, further comprising presenting, to the user, one or more of the identified one or more frequently used actions. 3. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying an action that occurs at least a threshold number of times. 4. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying an action that occurs at least at a threshold frequency. 5. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying the one or more actions frequently used by the user in monitored communication sessions between the user and another user. 6. The method of claim 1, further comprising receiving a response to the prompt, from the user, to automate the identified one or more frequently used actions. 7. The method of claim 1, further comprising providing an option to the user to edit one or more of the automated identified one or more frequently used actions. 8. The method of claim 7, wherein the option comprises an option to add, edit, or remove an action from the one or more frequently used actions. 9. 
The method of claim 1, wherein prompting the user to automate the identified one or more frequently used actions occurs in response to identifying a threshold number of consecutive communication sessions where the user repeated the one or more frequently used actions. 10. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying the one or more actions frequently used by the user that occur on a particular device. 11. The method of claim 10, wherein the particular device is a mobile device. 12. The method of claim 1, wherein the automated identified one or more frequently used actions are specific to a particular application. 13. The method of claim 1, wherein the automated identified one or more frequently used actions are specific to a particular user or group of users. 14. The method of claim 1, wherein the automated identified one or more frequently used actions are specific to a particular communication type. 15. The method of claim 14, wherein the particular communication type comprises one of email, text chat, instant messaging, voice-over-IP, or video conferencing. 16. The method of claim 1, wherein the identifying the one or more frequently used actions comprises identifying the one or more actions frequently used by the user based on the monitored communication sessions occurring at a time of day. 17. 
A system comprising: at least one processor; and at least one non-transitory computer readable storage medium storing instructions thereon that, when executed by the at least one processor, cause the system to: monitor, by at least one processor, a plurality of communication sessions of a user with a group of users; identify, based on the monitored communication sessions with the group of users, a series of actions frequently used by the user; provide the user a prompt associated with the identified series of actions, wherein the prompt includes an option to automate the identified series of actions; and upon the user selecting the option to automate the identified series of actions, automate the identified series of actions in association with one or more subsequent communication sessions of the user with the group of users. 18. The system of claim 17, wherein the automated identified series of frequently used actions are specific to a particular communication type. 19. A non-transitory computer-readable medium including a set of instructions that, when executed by at least one processor, cause a computer system to perform the steps comprising: monitoring, using at least one processor, a plurality of communication sessions of a user; identifying, based on the monitored communication sessions, one or more actions frequently used by the user; providing the user a prompt associated with the identified one or more actions, wherein the prompt includes an option to automate the identified one or more actions; and if the user selects the option to automate the identified one or more actions, automating the identified one or more actions in association with one or more subsequent communication sessions of the user. 20. 
The computer-readable medium of claim 19, wherein providing the prompt to automate the identified one or more frequently used actions occurs in response to identifying a threshold number of consecutive communication sessions where the user repeated the one or more frequently used actions.
In one general sense, display of content communicated by a sender communication device to a destination communication device may be enabled by receiving, at a destination communication device, content to be displayed by the destination communication device. Characteristics of a display of the received content by the destination communication device may be algorithmically identified in accordance with display configuration settings for the destination communication device. Based on the identified characteristics, at least one change to be made to capture configuration settings at a capturing communication device used to capture the received content may be identified. At least one alternative capture configuration setting may be communicated to the capturing communication device. Content that is captured by the capturing communication device is received at the destination communication device based on the alternative capture configuration setting communicated.1. A method comprising: monitoring, using at least one processor, a plurality of communication sessions of a user; identifying, based on the monitored communication sessions, one or more actions frequently used by the user; providing the user a prompt associated with the identified one or more actions, wherein the prompt includes an option to automate the identified one or more actions; and if the user selects the option to automate the identified one or more actions, automating the identified one or more actions in association with one or more subsequent communication sessions of the user. 2. The method of claim 1, further comprising presenting, to the user, one or more of the identified one or more frequently used actions. 3. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying an action that occurs at least a threshold number of times. 4. 
The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying an action that occurs at least at a threshold frequency. 5. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying the one or more actions frequently used by the user in monitored communication sessions between the user and another user. 6. The method of claim 1, further comprising receiving a response to the prompt, from the user, to automate the identified one or more frequently used actions. 7. The method of claim 1, further comprising providing an option to the user to edit one or more of the automated identified one or more frequently used actions. 8. The method of claim 7, wherein the option comprises an option to add, edit, or remove an action from the one or more frequently used actions. 9. The method of claim 1, wherein prompting the user to automate the identified one or more frequently used actions occurs in response to identifying a threshold number of consecutive communication sessions where the user repeated the one or more frequently used actions. 10. The method of claim 1, wherein identifying the one or more frequently used actions comprises identifying the one or more actions frequently used by the user that occur on a particular device. 11. The method of claim 10, wherein the particular device is a mobile device. 12. The method of claim 1, wherein the automated identified one or more frequently used actions are specific to a particular application. 13. The method of claim 1, wherein the automated identified one or more frequently used actions are specific to a particular user or group of users. 14. The method of claim 1, wherein the automated identified one or more frequently used actions are specific to a particular communication type. 15. The method of claim 14, wherein the particular communication type comprises one of email, text chat, instant messaging, voice-over-IP, or video conferencing. 
16. The method of claim 1, wherein the identifying the one or more frequently used actions comprises identifying the one or more actions frequently used by the user based on the monitored communication sessions occurring at a time of day. 17. A system comprising: at least one processor; and at least one non-transitory computer readable storage medium storing instructions thereon that, when executed by the at least one processor, cause the system to: monitor, by at least one processor, a plurality of communication sessions of a user with a group of users; identify, based on the monitored communication sessions with the group of users, a series of actions frequently used by the user; provide the user a prompt associated with the identified series of actions, wherein the prompt includes an option to automate the identified series of actions; and upon the user selecting the option to automate the identified series of actions, automate the identified series of actions in association with one or more subsequent communication sessions of the user with the group of users. 18. The system of claim 17, wherein the automated identified series of frequently used actions are specific to a particular communication type. 19. A non-transitory computer-readable medium including a set of instructions that, when executed by at least one processor, cause a computer system to perform the steps comprising: monitoring, using at least one processor, a plurality of communication sessions of a user; identifying, based on the monitored communication sessions, one or more actions frequently used by the user; providing the user a prompt associated with the identified one or more actions, wherein the prompt includes an option to automate the identified one or more actions; and if the user selects the option to automate the identified one or more actions, automating the identified one or more actions in association with one or more subsequent communication sessions of the user. 20. 
The computer-readable medium of claim 19, wherein providing the prompt to automate the identified one or more frequently used actions occurs in response to identifying a threshold number of consecutive communication sessions where the user repeated the one or more frequently used actions.
2,400
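The monitor-identify-prompt-automate loop of claims 1 and 3 above can be sketched in a few lines. This is an illustrative reading, not the patented implementation: the threshold test is a simple occurrence count, and the names (`frequent_actions`, `user_accepts`) are hypothetical.

```python
# Minimal sketch of claims 1 and 3: flag actions used at least `threshold`
# times across monitored sessions, then automate those the user consents to.
from collections import Counter

def frequent_actions(sessions, threshold):
    """sessions: list of per-session action-name lists.
    Returns the actions occurring at least `threshold` times overall."""
    counts = Counter(action for session in sessions for action in session)
    return {action for action, n in counts.items() if n >= threshold}

def automate(sessions, threshold, user_accepts):
    """Prompt for each candidate via the `user_accepts` callback (standing in
    for the claimed prompt/option) and return the set of automated actions."""
    return {a for a in frequent_actions(sessions, threshold) if user_accepts(a)}
```

Claim 4's frequency variant would divide the count by the number of sessions before comparing against the threshold; the structure is otherwise the same.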
7,958
7,958
15,005,540
2,463
A method and apparatus for improving beam finding in a wireless communication system. In one embodiment, the method includes the base station detecting a first preamble transmission from a UE on a beam. The method also includes the base station examining extra transmissions to detect whether there are other beams which can be used to communicate with the UE. The method further includes the base station considering a beam set of the UE is complete if a rule is fulfilled, wherein the beam set of the UE includes beam(s) through which the UE could communicate with the base station.
1. A method of a base station, comprising: the base station detects a first preamble transmission from a UE (User Equipment) on a beam; the base station examines extra transmissions to detect whether there are other beams which can be used to communicate with the UE; and the base station considers a beam set of the UE is complete if a rule is fulfilled, wherein the beam set of the UE includes beam(s) through which the UE could communicate with the base station. 2. The method of claim 1, wherein the base station considers that the beam set of the UE is complete when all qualified beams of the UE have been found. 3. The method of claim 1, wherein the rule is based on a total number of extra transmissions. 4. The method of claim 1, wherein the rule is based on a number of extra transmissions after a new qualified beam has been detected. 5. The method of claim 1, wherein the rule is based on a power difference between a transmission power of the first preamble and a transmission power of an extra transmission. 6. The method of claim 1, wherein the rule is based on a quality of a newly detected beam from an extra transmission. 7. The method of claim 1, wherein the rule is based on whether a difference of strength of the newly detected beam and strength of the strongest beam reaches a certain value. 8. The method of claim 1, wherein the rule is based on whether a newly detected beam has a quality above a threshold. 9. A method of a UE (User Equipment), comprising: the UE transmits a preamble during a random access (RA) procedure; the UE receives a RAR (Random Access Response) from a base station after the base station detects the transmitted preamble on a beam; and the UE performs several extra transmissions after receiving the RAR from the base station. 10. The method of claim 9, wherein the number of extra transmissions could be fixed or configurable. 11. 
The method of claim 9, wherein the extra transmissions could be terminated by a signaling from the base station or by another RAR from the base station. 12. A method of a UE (User Equipment), comprising: the UE transmits a first preamble during a random access (RA) procedure; the UE receives a RAR (Random Access Response) from a base station in response to the first preamble transmission after the base station detects the first preamble on a beam; and the UE transmits a second preamble in response to the RAR, wherein the power of the second preamble transmission is the power of the first preamble transmission plus a power offset. 13. The method of claim 12, wherein the power offset is different from a ramping step. 14. A method of a UE (User Equipment), comprising: the UE transmits a first preamble transmission during a random access procedure; the UE receives a RAR (Random Access Response) from a base station in response to the first preamble transmission after the base station detects the first preamble on a beam; and the UE derives a transmission power for a transmission of a signal subsequent to the RAR. 15. The method of claim 14, further comprising: the UE adds a power offset to the derived transmission power if the power offset is configured. 16. The method of claim 15, wherein the power offset is reduced from the derived transmission power if the signal has been transmitted for a certain number of times. 17. The method of claim 15, wherein the power offset is reduced from the derived transmission power if the UE receives an indication from the base station to reduce the derived transmission power. 18. The method of claim 15, wherein the power offset is applied for transmissions for beam finding or for beam tracking, and is not applied for data transmissions. 19. 
The method of claim 14, wherein the UE derives the transmission power based on ramping step information included in the RAR, and the ramping step information includes how many ramping steps should be reduced to derive the transmission power. 20. The method of claim 14, further comprising: the UE is configured with two different TPC (Transmit Power Control) command ranges in RAR; and the UE applies the configured TPC command ranges to derive the transmission power for the transmission of the signal subsequent to the receipt of the RAR. 21. The method of claim 20, wherein the different TPC command ranges are configured for different purposes of performing RA procedure.
A method and apparatus for improving beam finding in a wireless communication system. In one embodiment, the method includes the base station detecting a first preamble transmission from a UE on a beam. The method also includes the base station examining extra transmissions to detect whether there are other beams which can be used to communicate with the UE. The method further includes the base station considering a beam set of the UE is complete if a rule is fulfilled, wherein the beam set of the UE includes beam(s) through which the UE could communicate with the base station.1. A method of a base station, comprising: the base station detects a first preamble transmission from a UE (User Equipment) on a beam; the base station examines extra transmissions to detect whether there are other beams which can be used to communicate with the UE; and the base station considers a beam set of the UE is complete if a rule is fulfilled, wherein the beam set of the UE includes beam(s) through which the UE could communicate with the base station. 2. The method of claim 1, wherein the base station considers that the beam set of the UE is complete when all qualified beams of the UE have been found. 3. The method of claim 1, wherein the rule is based on a total number of extra transmissions. 4. The method of claim 1, wherein the rule is based on a number of extra transmissions after a new qualified beam has been detected. 5. The method of claim 1, wherein the rule is based on a power difference between a transmission power of the first preamble and a transmission power of an extra transmission. 6. The method of claim 1, wherein the rule is based on a quality of a newly detected beam from an extra transmission. 7. The method of claim 1, wherein the rule is based on whether a difference of strength of the newly detected beam and strength of the strongest beam reaches a certain value. 8. 
The method of claim 1, wherein the rule is based on whether a newly detected beam has a quality above a threshold. 9. A method of a UE (User Equipment), comprising: the UE transmits a preamble during a random access (RA) procedure; the UE receives a RAR (Random Access Response) from a base station after the base station detects the transmitted preamble on a beam; and the UE performs several extra transmissions after receiving the RAR from the base station. 10. The method of claim 9, wherein the number of extra transmissions could be fixed or configurable. 11. The method of claim 9, wherein the extra transmissions could be terminated by a signaling from the base station or by another RAR from the base station. 12. A method of a UE (User Equipment), comprising: the UE transmits a first preamble during a random access (RA) procedure; the UE receives a RAR (Random Access Response) from a base station in response to the first preamble transmission after the base station detects the first preamble on a beam; and the UE transmits a second preamble in response to the RAR, wherein the power of the second preamble transmission is the power of the first preamble transmission plus a power offset. 13. The method of claim 12, wherein the power offset is different from a ramping step. 14. A method of a UE (User Equipment), comprising: the UE transmits a first preamble transmission during a random access procedure; the UE receives a RAR (Random Access Response) from a base station in response to the first preamble transmission after the base station detects the first preamble on a beam; and the UE derives a transmission power for a transmission of a signal subsequent to the RAR. 15. The method of claim 14, further comprising: the UE adds a power offset to the derived transmission power if the power offset is configured. 16. 
The method of claim 15, wherein the power offset is reduced from the derived transmission power if the signal has been transmitted for a certain number of times. 17. The method of claim 15, wherein the power offset is reduced from the derived transmission power if the UE receives an indication from the base station to reduce the derived transmission power. 18. The method of claim 15, wherein the power offset is applied for transmissions for beam finding or for beam tracking, and is not applied for data transmissions. 19. The method of claim 14, wherein the UE derives the transmission power based on ramping step information included in the RAR, and the ramping step information includes how many ramping steps should be reduced to derive the transmission power. 20. The method of claim 14, further comprising: the UE is configured with two different TPC (Transmit Power Control) command ranges in RAR; and the UE applies the configured TPC command ranges to derive the transmission power for the transmission of the signal subsequent to the receipt of the RAR. 21. The method of claim 20, wherein the different TPC command ranges are configured for different purposes of performing RA procedure.
2,400
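The base-station side of the beam-finding record above (claims 1, 4, and 8: keep examining extra transmissions, add any newly detected qualified beam, and consider the beam set complete once a rule is fulfilled) can be sketched as follows. This is a hedged illustration under stated assumptions: qualification is a plain quality threshold, and the completion rule is "N extra transmissions since the last new qualified beam" (claim 4's variant); all identifiers are hypothetical.

```python
# Sketch of the claimed beam-set collection. Assumptions: a beam qualifies
# when its quality meets `quality_threshold` (claim 8), and the set is
# considered complete after `max_extra` consecutive extra transmissions
# yield no new qualified beam (claim 4).
def collect_beam_set(first_beam, extra_transmissions, quality_threshold, max_extra):
    """extra_transmissions: iterable of (beam_id, quality) pairs observed
    after the first detected preamble. Returns the completed beam set."""
    beam_set = {first_beam}
    since_new = 0
    for beam_id, quality in extra_transmissions:
        if quality >= quality_threshold and beam_id not in beam_set:
            beam_set.add(beam_id)   # new qualified beam found
            since_new = 0
        else:
            since_new += 1
        if since_new >= max_extra:  # rule fulfilled: beam set complete
            break
    return beam_set
```

The other claimed rules (total number of extra transmissions, power difference from the first preamble, or strength gap to the strongest beam) would replace only the termination condition.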
7,959
7,959
14,901,049
2,447
The invention concerns a method for adapting the downloading behavior of a client terminal configured to receive a multimedia content from at least one server, said multimedia content being defined by at least one representation, wherein it comprises the steps of: requesting (S0) a first part of said multimedia content with a given representation; detecting (S1) if a cache is located along the transmission path between the client terminal and a server, based on the request of said first part; in case (S3) a cache is detected, requesting a second part of said multimedia content with a representation depending on at least one performance criterion.
1-11. (canceled) 12. Method for adapting the downloading behavior of a client terminal configured to receive a multimedia content from at least one server, at least one representation of said multimedia content being available, said method comprising: requesting a first part of said multimedia content with a given representation; detecting if a cache is located along the transmission path between the client terminal and a server, based on the request of said first part; in case a cache is detected, requesting a second part of said multimedia content with a representation depending on at least one performance criterion. 13. Method according to claim 12, further comprising: estimating the bandwidth of the transmission path between the client terminal and the detected cache. 14. Method according to claim 13, wherein, according to said performance criterion, the requested second part of said multimedia content is defined with: either the same representation as the one of the first part stored in said detected cache, whatever the result of the bandwidth estimation; or an alternative representation taking into account the estimated bandwidth, said new representation being different from the representation of the first part. 15. Method according to claim 12, wherein the request of said second part comprises a piece of information understandable by said detected cache, so that, in case said second part is not stored in the detected cache, the client terminal receives a message specifying that said second part is unavailable from said cache. 16. Method according to claim 12, wherein it further comprises: in case the downloading of said multimedia content from the detected cache meets at least one downloading criterion, requesting a further part of said multimedia content with a new representation, which differs from the representation of said first part. 17. 
Method according to claim 12, wherein detecting a cache further comprises: determining the round trip time for a connection establishment request from the client terminal to a server. 18. Method according to claim 17, wherein detecting a cache further comprises: measuring the reception delay between the emission of a request for the first part of the multimedia content and the beginning of the reception of said requested first part. 19. Method according to claim 18, wherein detecting a cache further comprises: comparing the determined round trip time of the connection establishment request and the measured reception delay. 20. Method according to claim 18, wherein detecting a cache further comprises: measuring the response time between the emission of an echo request from the client terminal to a server and the reception of a response to said echo request; comparing the determined round trip time of the connection establishment request with the response time. 21. Terminal configured to adapt its downloading behavior for receiving a multimedia content from at least one server, at least one representation of said multimedia content being available, comprising: a communication module for requesting a first part of said multimedia content with a given representation; a cache detector for detecting if a cache is located along the transmission path between the client terminal and a server, based on the request of said first part; a decision module for requesting, in case a cache is detected, a second part of said multimedia content with a representation depending on at least one performance criterion. 22. Terminal according to claim 21, further comprising a bandwidth estimator for estimating the bandwidth of the transmission path between said terminal and the detected cache. 23. 
Terminal according to claim 21, wherein, according to said performance criterion, the requested second part of said multimedia content is defined with: either the same representation as the one of the first part stored in said detected cache, whatever the result of the bandwidth estimation; or an alternative representation taking into account the estimated bandwidth, said new representation being different from the representation of the first part. 24. Terminal according to claim 21, wherein the request of said second part comprises a piece of information understandable by said detected cache, so that, in case said second part is not stored in the detected cache, the client terminal receives a message specifying that said second part is unavailable from said cache. 25. Terminal according to claim 21, wherein the terminal is further configured to request a further part of said multimedia content with a new representation, which differs from the representation of said first part, in case the downloading of said multimedia content from the detected cache meets at least one downloading criterion.
The invention concerns a method for adapting the downloading behavior of a client terminal configured to receive a multimedia content from at least one server, said multimedia content being defined by at least one representation, wherein it comprises the steps of: requesting (S0) a first part of said multimedia content with a given representation; detecting (S1) if a cache is located along the transmission path between the client terminal and a server, based on the request of said first part; in case (S3) a cache is detected, requesting a second part of said multimedia content with a representation depending on at least one performance criterion.1-11. (canceled) 12. Method for adapting the downloading behavior of a client terminal configured to receive a multimedia content from at least one server, at least one representation of said multimedia content being available, said method comprising: requesting a first part of said multimedia content with a given representation; detecting if a cache is located along the transmission path between the client terminal and a server, based on the request of said first part; in case a cache is detected, requesting a second part of said multimedia content with a representation depending on at least one performance criterion. 13. Method according to claim 12, further comprising: estimating the bandwidth of the transmission path between the client terminal and the detected cache. 14. Method according to claim 13, wherein, according to said performance criterion, the requested second part of said multimedia content is defined with: either the same representation as the one of the first part stored in said detected cache, whatever the result of the bandwidth estimation; or an alternative representation taking into account the estimated bandwidth, said new representation being different from the representation of the first part. 15. 
Method according to claim 12, wherein the request of said second part comprises a piece of information understandable by said detected cache, so that, in case said second part is not stored in the detected cache, the client terminal receives a message specifying that said second part is unavailable from said cache. 16. Method according to claim 12, wherein it further comprises: in case the downloading of said multimedia content from the detected cache meets at least one downloading criterion, requesting a further part of said multimedia content with a new representation, which differs from the representation of said first part. 17. Method according to claim 12, wherein detecting a cache further comprises: determining the round trip time for a connection establishment request from the client terminal to a server. 18. Method according to claim 17, wherein detecting a cache further comprises: measuring the reception delay between the emission of a request for the first part of the multimedia content and the beginning of the reception of said requested first part. 19. Method according to claim 18, wherein detecting a cache further comprises: comparing the determined round trip time of the connection establishment request and the measured reception delay. 20. Method according to claim 18, wherein detecting a cache further comprises: measuring the response time between the emission of an echo request from the client terminal to a server and the reception of a response to said echo request; comparing the determined round trip time of the connection establishment request with the response time. 21. 
Terminal configured to adapt its downloading behavior for receiving a multimedia content from at least one server, at least one representation of said multimedia content being available, comprising: a communication module for requesting a first part of said multimedia content with a given representation; a cache detector for detecting if a cache is located along the transmission path between the client terminal and a server, based on the request of said first part; a decision module for requesting, in case a cache is detected, a second part of said multimedia content with a representation depending on at least one performance criterion. 22. Terminal according to claim 21, further comprising a bandwidth estimator for estimating the bandwidth of the transmission path between said terminal and the detected cache. 23. Terminal according to claim 21, wherein, according to said performance criterion, the requested second part of said multimedia content is defined with: either the same representation as the one of the first part stored in said detected cache, whatever the result of the bandwidth estimation; or an alternative representation taking into account the estimated bandwidth, said new representation being different from the representation of the first part. 24. Terminal according to claim 21, wherein the request of said second part comprises a piece of information understandable by said detected cache, so that, in case said second part is not stored in the detected cache, the client terminal receives a message specifying that said second part is unavailable from said cache. 25. Terminal according to claim 21, wherein the terminal is further configured to request a further part of said multimedia content with a new representation, which differs from the representation of said first part, in case the downloading of said multimedia content from the detected cache meets at least one downloading criterion.
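The timing comparison that claims 17-19 use for cache detection, and the representation choice of claim 14, can be sketched as follows. This is an illustrative approximation only: the `tolerance` factor, the bitrate values, and the function names are assumptions, not taken from the application.

```python
def detect_cache(connect_rtt: float, reception_delay: float,
                 tolerance: float = 1.5) -> bool:
    # Claims 17-19: compare the round-trip time of the connection
    # establishment with the delay between emitting the request for the
    # first part and receiving its first byte. If the first byte arrives
    # in roughly one RTT, the response likely came from an on-path cache
    # rather than from the origin server.
    return reception_delay <= tolerance * connect_rtt


def pick_representation(cache_detected: bool, cached_bitrate: int,
                        estimated_bandwidth: float,
                        available_bitrates: list,
                        prefer_cached: bool = True) -> int:
    # Claim 14 policy: when a cache is detected, either keep the
    # representation of the first part already stored in the cache, or
    # pick an alternative that the estimated bandwidth can sustain.
    if cache_detected and prefer_cached:
        return cached_bitrate
    candidates = [b for b in available_bitrates if b <= estimated_bandwidth]
    return max(candidates) if candidates else min(available_bitrates)


# Example: a 20 ms connection RTT and a 25 ms first-byte delay suggest a cache.
print(detect_cache(0.020, 0.025))   # True
print(pick_representation(True, 2000, 5000.0, [1000, 2000, 4000, 8000]))  # 2000
```

A real client would measure both delays around the same TCP connection (or compare against an echo-request response time, as claim 20 adds); the heuristic degrades when the origin is itself nearby.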
2,400
7,960
7,960
15,122,745
2,461
Methods and apparatus, including computer program products, are provided for adaptation of WLAN selection thresholds. In one aspect there is provided a method, which may include receiving, at a user equipment, information including one or more thresholds for use when evaluating a selection of a wireless local area network access point for offloading; comparing, by the user equipment, a quality of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds to another quality provided by a cellular access point; reporting, by the user equipment, a result of the comparing; and receiving, by the user equipment in response to the reporting, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. Related systems, articles of manufacture, and the like are also disclosed.
1-22. (canceled) 23. A method comprising: receiving, at a user equipment, information including one or more thresholds for use when evaluating a selection of a wireless local area network access point for offloading; comparing, by the user equipment, a quality of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds to another quality provided by a cellular access point; reporting, by the user equipment, a result of the comparing; and receiving, by the user equipment in response to the reporting, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. 24. The method of claim 23, wherein the quality represents a first quality of experience for one or more flows when carried via the wireless local area network access point, and wherein the other quality represents a second quality of experience for the one or more flows when carried by the cellular access point. 25. The method of claim 24, wherein the first quality of experience and the second quality of experience each represent at least one of a packet delay, a jitter, a throughput, or a packet loss. 26. The method of claim 24, wherein the first quality of experience and the second quality of experience are determined on a per flow basis. 27. The method of claim 23, wherein the result indicates the quality is less than an expected quality of experience threshold improvement over the other quality. 28. The method of claim 23, wherein the result includes at least one of the received thresholds causing the quality to be less than an expected quality of experience threshold improvement over the other quality. 29. The method of claim 23, wherein the one or more thresholds are received via an access network discovery and selection function management object. 30. 
An apparatus, comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive information including one or more thresholds for use when evaluating a selection of a wireless local area network access point for offloading; compare a quality of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds to another quality provided by a cellular access point; report a result of the compare; and receive, in response to the report, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. 31. The apparatus of claim 30, wherein the quality represents a first quality of experience for one or more flows when carried via the wireless local area network access point, and wherein the other quality represents a second quality of experience for the one or more flows when carried by the cellular access point. 32. The apparatus of claim 31, wherein the first quality of experience and the second quality of experience each represent at least one of a packet delay, a jitter, a throughput, or a packet loss. 33. The apparatus of claim 31, wherein the first quality of experience and the second quality of experience are determined on a per flow basis. 34. The apparatus of claim 30, wherein the result indicates the quality is less than an expected quality of experience threshold improvement over the other quality. 35. The apparatus of claim 30, wherein the result includes at least one of the received thresholds causing the quality to be less than an expected quality of experience threshold improvement over the other quality. 36. 
The apparatus of claim 30, wherein the one or more thresholds are received via an access network discovery and selection function management object. 37. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: send information including one or more thresholds for use by a user equipment when evaluating a selection of a wireless local area network access point for offloading; receive one or more feedback thresholds, when a quality of experience of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds is less than an expected quality of experience threshold improvement over another quality of experience provided by a cellular access point; and send, in response to the receive, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. 38. The apparatus of claim 37, wherein the quality of experience and the other quality of experience each represent at least one of a packet delay, a jitter, a throughput, or a packet loss. 39. The apparatus of claim 37, wherein the information including one or more thresholds is sent via an access network discovery and selection function management object.
Methods and apparatus, including computer program products, are provided for adaptation of WLAN selection thresholds. In one aspect there is provided a method, which may include receiving, at a user equipment, information including one or more thresholds for use when evaluating a selection of a wireless local area network access point for offloading; comparing, by the user equipment, a quality of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds to another quality provided by a cellular access point; reporting, by the user equipment, a result of the comparing; and receiving, by the user equipment in response to the reporting, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. Related systems, articles of manufacture, and the like are also disclosed.1-22. (canceled) 23. A method comprising: receiving, at a user equipment, information including one or more thresholds for use when evaluating a selection of a wireless local area network access point for offloading; comparing, by the user equipment, a quality of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds to another quality provided by a cellular access point; reporting, by the user equipment, a result of the comparing; and receiving, by the user equipment in response to the reporting, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. 24. The method of claim 23, wherein the quality represents a first quality of experience for one or more flows when carried via the wireless local area network access point, and wherein the other quality represents a second quality of experience for the one or more flows when carried by the cellular access point. 25. 
The method of claim 24, wherein the first quality of experience and the second quality of experience each represent at least one of a packet delay, a jitter, a throughput, or a packet loss. 26. The method of claim 24, wherein the first quality of experience and the second quality of experience are determined on a per flow basis. 27. The method of claim 23, wherein the result indicates the quality is less than an expected quality of experience threshold improvement over the other quality. 28. The method of claim 23, wherein the result includes at least one of the received thresholds causing the quality to be less than an expected quality of experience threshold improvement over the other quality. 29. The method of claim 23, wherein the one or more thresholds are received via an access network discovery and selection function management object. 30. An apparatus, comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive information including one or more thresholds for use when evaluating a selection of a wireless local area network access point for offloading; compare a quality of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds to another quality provided by a cellular access point; report a result of the compare; and receive, in response to the report, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. 31. 
The apparatus of claim 30, wherein the quality represents a first quality of experience for one or more flows when carried via the wireless local area network access point, and wherein the other quality represents a second quality of experience for the one or more flows when carried by the cellular access point. 32. The apparatus of claim 31, wherein the first quality of experience and the second quality of experience each represent at least one of a packet delay, a jitter, a throughput, or a packet loss. 33. The apparatus of claim 31, wherein the first quality of experience and the second quality of experience are determined on a per flow basis. 34. The apparatus of claim 30, wherein the result indicates the quality is less than an expected quality of experience threshold improvement over the other quality. 35. The apparatus of claim 30, wherein the result includes at least one of the received thresholds causing the quality to be less than an expected quality of experience threshold improvement over the other quality. 36. The apparatus of claim 30, wherein the one or more thresholds are received via an access network discovery and selection function management object. 37. 
An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: send information including one or more thresholds for use by a user equipment when evaluating a selection of a wireless local area network access point for offloading; receive one or more feedback thresholds, when a quality of experience of an access provided by the wireless local area network access point selected in accordance with the one or more thresholds is less than an expected quality of experience threshold improvement over another quality of experience provided by a cellular access point; and send, in response to the receive, additional information including one or more adjusted thresholds for use when evaluating wireless local area network access point selection. 38. The apparatus of claim 37, wherein the quality of experience and the other quality of experience each represent at least one of a packet delay, a jitter, a throughput, or a packet loss. 39. The apparatus of claim 37, wherein the information including one or more thresholds is sent via an access network discovery and selection function management object.
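The reporting condition of claims 23 and 27 — the UE reports when the WLAN's quality falls short of the expected quality-of-experience improvement over cellular — can be sketched as a scalar comparison. The `score` weighting of delay, jitter, throughput, and packet loss is an invented illustration; the application lists the measurements (claim 25) but not how they are combined.

```python
from dataclasses import dataclass

@dataclass
class QoE:
    delay_ms: float
    jitter_ms: float
    throughput_mbps: float
    loss_pct: float

def score(q: QoE) -> float:
    # Collapse the per-flow measurements of claim 25 into one scalar
    # (higher is better); these weights are assumptions for illustration.
    return (q.throughput_mbps - 0.1 * q.delay_ms
            - 0.2 * q.jitter_ms - 2.0 * q.loss_pct)

def should_report(wlan: QoE, cellular: QoE,
                  expected_improvement: float) -> bool:
    # Claim 27: report when the WLAN quality is less than the expected
    # quality-of-experience threshold improvement over the cellular quality.
    return score(wlan) - score(cellular) < expected_improvement

wlan = QoE(delay_ms=30, jitter_ms=5, throughput_mbps=20, loss_pct=0.5)
cell = QoE(delay_ms=40, jitter_ms=8, throughput_mbps=15, loss_pct=1.0)
print(should_report(wlan, cell, expected_improvement=10.0))  # True -> feedback sent
```

On a `True` result the network side (claim 37) would respond with adjusted thresholds, closing the adaptation loop.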
2,400
7,961
7,961
14,633,007
2,443
The present invention provides a removable device, adapted to connect a mobile communication device to a head unit of a vehicle and comprising: a first communication module having a first transceiver and configured for bi-directional communication of data with the head unit; a second communication module having a second transceiver and configured for bi-directional communication of data with the mobile communication device; and a control unit configured to provide at least one service to the head unit via the first communication module based on data received via the second communication module.
1. A removable device adapted to connect a mobile communication device to a head unit of a vehicle, the device comprising: a first communication module having a first transceiver and configured for bi-directional communication of data with the head unit; a second communication module having a second transceiver and configured for bi-directional communication of data with the mobile communication device; and a control unit configured to provide at least one service to the head unit via the first communication module based on data received via the second communication module. 2. The removable device according to claim 1, wherein the control unit is further configured to process the data received via the second communication module. 3. The removable device according to claim 1, further comprising: a memory unit storing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, wherein the control unit comprises at least one processing unit adapted to execute the API. 4. The removable device according to claim 2, further comprising: a memory unit storing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, wherein the control unit comprises at least one processing unit adapted to execute the API. 5. The removable device according to claim 3, wherein the control unit is further configured to provide the at least one service by executing the API. 6.
The removable device according to claim 1, further comprising at least one of: a decoding unit configured to decode data received from at least one of the mobile communication device and the head unit; and an encoding unit configured to encode data to be transmitted to at least one of the head unit and the mobile communication device. 7. The removable device according to claim 1, further comprising: an authentication unit configured to perform an authentication process with the head unit via the first communication module. 8. The removable device according to claim 1, wherein the first communication module comprises a first connector adapted to connect the removable device with the head unit, and the second communication module comprises a second connector adapted to connect the removable device with the mobile communication terminal. 9. The removable device according to claim 1, further comprising: a power supply connector configured to receive power supplied by the vehicle. 10. The removable device according to claim 1, wherein the second communication module is further configured for communication with an auxiliary infotainment device that comprises a front-view camera, a rear-view camera, or a head-up display. 11. The removable device according to claim 1, further comprising: a third communication module configured for bi-directional communication of data with a wireless network. 12. The removable device according to claim 1, wherein data stored on the mobile communication device is accessed from the head unit via the removable device. 13. The removable device according to claim 11, wherein control signals are transmitted from the head unit to the mobile communication device via the removable device. 14.
A method for connecting a mobile communication device to a head unit of a vehicle, the method comprising: establishing a first connection between a removable device and the head unit for bi-directional communication of data via a first communication module of the removable device; establishing a second connection between the removable device and the mobile communication device for bi-directional communication of data via a second communication module of the removable device; and providing at least one service to the head unit via the first communication module based on data received via the second communication module using a control unit of the removable device. 15. The method according to claim 14, further comprising: processing the data received via the second communication module. 16. The method according to claim 14, further comprising: executing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, on at least one processing unit of the removable device. 17. The method according to claim 15, further comprising: executing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, on at least one processing unit of the removable device. 18. The method according to claim 16, wherein the at least one service is provided by executing the API. 19. The method according to claim 14, further comprising at least one of: decoding data received from at least one of the mobile communication device and the head unit by the removable device; and encoding data to be transmitted to at least one of the head unit and the mobile communication device by the removable device. 20. 
The method according to claim 14, further comprising: performing an authentication process between the head unit and the removable device via the first communication module. 21. The method according to claim 14, further comprising: establishing a third connection between the removable device and an auxiliary infotainment device that comprises a front-view camera, a rear-view camera, or a head-up display; receiving data from the auxiliary infotainment device at the removable device; and transmitting data from the removable device to the auxiliary infotainment device. 22. The method according to claim 14, further comprising: establishing a connection between the removable device and a wireless network for bi-directional communication of data.
The present invention provides a removable device, adapted to connect a mobile communication device to a head unit of a vehicle and comprising: a first communication module having a first transceiver and configured for bi-directional communication of data with the head unit; a second communication module having a second transceiver and configured for bi-directional communication of data with the mobile communication device; and a control unit configured to provide at least one service to the head unit via the first communication module based on data received via the second communication module. 1. A removable device adapted to connect a mobile communication device to a head unit of a vehicle, the device comprising: a first communication module having a first transceiver and configured for bi-directional communication of data with the head unit; a second communication module having a second transceiver and configured for bi-directional communication of data with the mobile communication device; and a control unit configured to provide at least one service to the head unit via the first communication module based on data received via the second communication module. 2. The removable device according to claim 1, wherein the control unit is further configured to process the data received via the second communication module. 3. The removable device according to claim 1, further comprising: a memory unit storing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, wherein the control unit comprises at least one processing unit adapted to execute the API. 4.
The removable device according to claim 2, further comprising: a memory unit storing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, wherein the control unit comprises at least one processing unit adapted to execute the API. 5. The removable device according to claim 3, wherein the control unit is further configured to provide the at least one service by executing the API. 6. The removable device according to claim 1, further comprising at least one of: a decoding unit configured to decode data received from at least one of the mobile communication device and the head unit; and an encoding unit configured to encode data to be transmitted to at least one of the head unit and the mobile communication device. 7. The removable device according to claim 1, further comprising: an authentication unit configured to perform an authentication process with the head unit via the first communication module. 8. The removable device according to claim 1, wherein the first communication module comprises a first connector adapted to connect the removable device with the head unit, and the second communication module comprises a second connector adapted to connect the removable device with the mobile communication terminal. 9. The removable device according to claim 1, further comprising: a power supply connector configured to receive power supplied by the vehicle. 10. The removable device according to claim 1, wherein the second communication module is further configured for communication with an auxiliary infotainment device that comprises a front-view camera, a rear-view camera, or a head-up display. 11. The removable device according to claim 1, further comprising: a third communication module configured for bi-directional communication of data with a wireless network. 12.
The removable device according to claim 1, wherein data stored on the mobile communication device is accessed from the head unit via the removable device. 13. The removable device according to claim 11, wherein control signals are transmitted from the head unit to the mobile communication device via the removable device. 14. A method for connecting a mobile communication device to a head unit of a vehicle, the method comprising: establishing a first connection between a removable device and the head unit for bi-directional communication of data via a first communication module of the removable device; establishing a second connection between the removable device and the mobile communication device for bi-directional communication of data via a second communication module of the removable device; and providing at least one service to the head unit via the first communication module based on data received via the second communication module using a control unit of the removable device. 15. The method according to claim 14, further comprising: processing the data received via the second communication module. 16. The method according to claim 14, further comprising: executing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, on at least one processing unit of the removable device. 17. The method according to claim 15, further comprising: executing an application programming interface (API) implementing at least a first protocol for the communication with the head unit via the first communication module and a second protocol for the communication with the mobile communication device, on at least one processing unit of the removable device. 18. The method according to claim 16, wherein the at least one service is provided by executing the API. 19. 
The method according to claim 14, further comprising at least one of: decoding data received from at least one of the mobile communication device and the head unit by the removable device; and encoding data to be transmitted to at least one of the head unit and the mobile communication device by the removable device. 20. The method according to claim 14, further comprising: performing an authentication process between the head unit and the removable device via the first communication module. 21. The method according to claim 14, further comprising: establishing a third connection between the removable device and an auxiliary infotainment device that comprises a front-view camera, a rear-view camera, or a head-up display; receiving data from the auxiliary infotainment device at the removable device; and transmitting data from the removable device to the auxiliary infotainment device. 22. The method according to claim 14, further comprising: establishing a connection between the removable device and a wireless network for bi-directional communication of data.
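The bridging behavior at the heart of this record (claims 1-2: a control unit that provides a service to the head unit via the first communication module based on data received from the mobile device via the second) can be sketched minimally. The class name and the two callables standing in for the communication modules are illustrative assumptions, not part of the application.

```python
class RemovableDevice:
    # Minimal sketch of the claimed bridge. The two callables stand in
    # for the first and second communication modules of claim 1.
    def __init__(self, send_to_head_unit, receive_from_phone):
        self._send = send_to_head_unit     # first communication module
        self._recv = receive_from_phone    # second communication module

    def provide_service(self, transform):
        # Control unit (claims 1-2): take data received via the second
        # module, process it, and deliver the result via the first module.
        data = self._recv()
        self._send(transform(data))


head_unit_inbox = []
device = RemovableDevice(head_unit_inbox.append, lambda: "now playing: track 1")
device.provide_service(str.upper)
print(head_unit_inbox)  # ['NOW PLAYING: TRACK 1']
```

In the claimed device the `transform` step would be performed by the API stored in the memory unit (claims 3-5), translating between the head-unit protocol and the mobile-device protocol.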
2,400
7,962
7,962
15,195,728
2,458
The current document is directed to methods and subsystems within computing systems, including distributed computing systems, that collect, store, process, and analyze population metrics for types and classes of system components, including components of distributed applications executing within containers, virtual machines, and other execution environments. In a described implementation, a graph-like representation of the configuration and state of a computer system includes aggregation nodes that collect metric data for a set of multiple object nodes and that collect metric data that represents the members of the set over a monitoring time interval. Population metrics are monitored, in certain implementations, to detect outlier members of an aggregation.
1. A state-information-storage subsystem within a computer system that includes one or more processors, one or more memories, and one or more data-storage devices, the state-information-storage subsystem comprising: current state information, including object entities associated with metrics and aggregation entities associated with population metrics, that is maintained within a combination of one or more memories and one or more data-storage devices; and a state-information-storage subsystem control component that maintains the current state information and that adds data points to population metrics associated with aggregation entities. 2. The state-information-storage subsystem of claim 1 wherein each metric entity stores a time-ordered sequence of data points, each data point comprising a time-associated numeric data value. 3. The state-information-storage subsystem of claim 1 wherein an object entity represents a component of the computer system. 4. The state-information-storage subsystem of claim 2 wherein an aggregation entity aggregates two or more object entities so that data-point-generating events with respect to computer-system components represented by the two or more object entities that produce data points for a population metric associated with the aggregation entity result in storage of the data points by a population metric associated with the aggregation entity. 5. The state-information-storage subsystem of claim 4 wherein an aggregation entity is associated with a special metric that includes entries that represent time-associated addition and deletion events in which object entities are added to and deleted from the aggregation. 6. The state-information-storage subsystem of claim 4 wherein each aggregated object entity includes a reference to a metric table, entries of which indicate an aggregation entity associated with a population metric for metrics that have been aggregated. 7. 
The state-information-storage subsystem of claim 1 wherein the state-information-storage subsystem control component generates object entities to represent components of a distributed application that each run within one of a virtual machine, container, and another execution environment. 8. The state-information-storage subsystem of claim 7 wherein the state-information-storage subsystem control component generates an aggregation entity associated with a type of distributed-application component, the aggregation entity associated with a population metric that stores data points representing data generated, with respect to the metric, by distributed-application components of the type that are aggregated by the aggregation entity. 9. The state-information-storage subsystem of claim 8 wherein the aggregation entity is associated with a special metric that includes entries that represent time-associated addition and deletion events in which distributed-application components of the type are added to and deleted from the aggregation. 10. The state-information-storage subsystem of claim 8 wherein each object entity representing a distributed-application component aggregated by the aggregation entity includes one of a reference to a metric table and a metric table, entries of the metric table each indicating an aggregation entity associated with a population metric associated with the object entity. 11. 
The state-information-storage subsystem of claim 1 wherein the state-information-storage subsystem control component monitors the distribution of population-metric values, for a population metric associated with an aggregation entity that aggregates aggregation entities as an aggregation, to: detect candidate aggregated-entity outliers, the distribution of population-metric values generated by a candidate aggregated-entity outlier falling outside a normal population-metric-value distribution for the aggregation of aggregated entities; evaluate the candidate aggregated-entity outliers with respect to the population metrics through which they are aggregated; and trigger an alarm or exception when a candidate aggregated-entity outlier is determined to be an outlier with respect to the population metrics through which it is aggregated by the aggregation entity. 12. A method that stores and maintains state information with respect to a computer system, within the computer system, the method carried out within the computer system that includes one or more processors, one or more memories, and one or more data-storage devices, the method comprising: representing, as object entities, components of the computer system with respect to which metric-data-point-generating events are associated; representing an aggregation of two or more object entities as an aggregation entity; associating a population metric with the aggregation entity; storing the object entities and aggregation entity as state information in one or more memories and/or data-storage devices; and when a metric-data-point-generating event occurs with respect to an object of the aggregation, when the metric for which the metric-data-point-generating event generated a data point is the population metric associated with the aggregation entity, adding the data point generated by the data-point-generating event to the population metric. 13. 
The method of claim 12 wherein each metric is associated with a stored time-ordered sequence of data points, each data point comprising a time-associated numeric data value. 14. The method of claim 12 wherein multiple object entities within the stored state information represent multiple components of a distributed application, each executing within one of a virtual machine, container, and another execution environment that executes within the computer system. 15. The method of claim 14 wherein a distributed-application-representing aggregation entity aggregates two or more object entities that represent components of the distributed application through a population metric associated with the aggregation object. 16. The method of claim 15 wherein the distributed-application-representing aggregation entity is associated with a special metric that includes entries that represent time-associated addition and deletion events in which distributed-application components represented by the aggregated object entities are added to and deleted from the aggregation. 17. The method of claim 14 wherein each distributed-application-component-representing object entity includes a reference to a metric table, entries of which indicate that the distributed-application-representing aggregation entity receives data points generated with respect to the population metric associated with the distributed-application-representing aggregation entity. 18. 
The method of claim 11 further comprising monitoring a distribution of population-metric values, for a population metric associated with an aggregation entity that aggregates aggregation entities as an aggregation, to: detect candidate aggregated-entity outliers, the distribution of population-metric values generated by a candidate aggregated-entity outlier falling outside a normal population-metric-value distribution for the aggregation of aggregated entities; evaluate the candidate aggregated-entity outliers with respect to the population metrics through which they are aggregated; and trigger an alarm or exception when a candidate aggregated-entity outlier is determined to be an outlier with respect to the population metrics through which it is aggregated by the aggregation entity. 19. Computer instructions, stored within a physical data-storage device, that, when executed by one or more processors of a computer system that includes the one or more processors, one or more memories, and one or more data-storage devices, control the computer system to store and maintain state information that describes the state of the computer system, by: representing, as object entities, components of the computer system with respect to which metric-data-point-generating events are associated; representing an aggregation of two or more object entities as an aggregation entity; associating a population metric with the aggregation entity; storing the object entities and aggregation entity as state information in one or more memories and/or data-storage devices; and when a metric-data-point-generating event occurs with respect to an object of the aggregation, when the metric for which the metric-data-point-generating event generated a data point is the population metric associated with the aggregation entity, adding the data point generated by the data-point-generating event to the population metric. 20. 
The computer instructions of claim 19 wherein each metric is associated with a stored time-ordered sequence of data points, each data point comprising a time-associated numeric data value; wherein a distributed-application-representing aggregation entity aggregates two or more object entities that represent components of the distributed application through a population metric associated with the aggregation object; wherein the distributed-application-representing aggregation entity is associated with a special metric that includes entries that represent time-associated addition and deletion events in which distributed-application components represented by the aggregated object entities are added to and deleted from the aggregation; and wherein each distributed-application-component-representing object entity includes a reference to a metric table, entries of which indicate that the distributed-application-representing aggregation entity receives data points generated with respect to the population metric associated with the distributed-application-representing aggregation entity.
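The structure recited in these claims — object entities feeding data points into a population metric held by an aggregation entity, a special metric logging time-associated add/delete membership events, and outlier detection over the population distribution — can be sketched in Python. All class, method, and member names below are illustrative, not taken from the application; the z-score outlier test is one plausible reading of "falling outside a normal population-metric-value distribution":

```python
import statistics
from collections import defaultdict

class AggregationEntity:
    """Aggregates object entities; their data points land in one population metric."""

    def __init__(self):
        self.population_metric = []   # data points: (timestamp, member_id, value)
        self.membership_events = []   # special metric: (timestamp, member_id, "add"/"delete")
        self.members = set()

    def add_member(self, timestamp, member_id):
        self.members.add(member_id)
        self.membership_events.append((timestamp, member_id, "add"))

    def delete_member(self, timestamp, member_id):
        self.members.discard(member_id)
        self.membership_events.append((timestamp, member_id, "delete"))

    def add_data_point(self, timestamp, member_id, value):
        # Only currently aggregated members contribute to the population metric.
        if member_id in self.members:
            self.population_metric.append((timestamp, member_id, value))

    def candidate_outliers(self, z_threshold=2.0):
        """Flag members whose mean value lies more than z_threshold population
        standard deviations from the population mean."""
        all_values = [v for _, _, v in self.population_metric]
        if not all_values:
            return []
        by_member = defaultdict(list)
        for _, member_id, value in self.population_metric:
            by_member[member_id].append(value)
        mu = statistics.mean(all_values)
        sigma = statistics.pstdev(all_values)
        if sigma == 0:
            return []
        return [m for m, vals in by_member.items()
                if abs(statistics.mean(vals) - mu) / sigma > z_threshold]
```

A member whose readings sit far from the rest of the population is returned as a candidate outlier; a real implementation would then re-evaluate the candidate against its other population metrics before raising an alarm, as claim 11 describes.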
2,400
7,963
7,963
13,335,279
2,451
An integrated security system is described comprising a gateway located at a first location. The gateway includes a takeover component that establishes a coupling with a first controller of a security system installed at the first location. The security system includes security system components coupled to the first controller. The takeover component automatically extracts security data of the security system from the first controller. The gateway automatically transfers the security data extracted from the first controller to a second controller. The second controller is coupled to the security system components and replaces the first controller.
1. A method comprising: establishing a coupling between a security system and a gateway coupled to a takeover component, the gateway and the security system located in a first location; automatically establishing a wireless coupling between the takeover component and a first controller of the security system, the security system including security system components coupled to the first controller; automatically extracting security data of the security system from the first controller; and automatically transferring the security data extracted from the first controller to a second controller, wherein the second controller is coupled to the security system components and replaces the first controller. 2-73. (canceled)
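The claimed takeover sequence — couple to the old panel, extract its security data, transfer that data to a replacement controller that keeps the installed sensors — can be sketched as follows. The classes and the dictionary representation of "security data" are illustrative assumptions, not details from the application:

```python
class Controller:
    """Stands in for a security-system panel holding zone/sensor configuration."""

    def __init__(self, security_data=None):
        self.security_data = dict(security_data or {})


class TakeoverGateway:
    """Sketch of the claimed takeover component on the gateway."""

    def __init__(self):
        self.extracted = None

    def extract(self, first_controller):
        # Claim step: automatically extract security data from the first controller.
        self.extracted = dict(first_controller.security_data)
        return self.extracted

    def transfer(self, second_controller):
        # Claim step: transfer the extracted data to the second controller,
        # which replaces the first while the sensors stay in place.
        second_controller.security_data.update(self.extracted)
        return second_controller
```

In this sketch the first controller is read-only during takeover; the replacement controller receives a copy of the configuration rather than a shared reference.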
2,400
7,964
7,964
14,882,490
2,421
A device and method of automatically pausing media content executing on an electronic device are provided. Sound within an ambient environment in which the electronic device resides is monitored, and the sound in the ambient environment is compared to a prescribed sound threshold. Upon the sound in the ambient environment exceeding the prescribed sound threshold, execution of media content on the electronic device is automatically paused.
1. An electronic device, comprising: an electronic processor; a memory operatively coupled to the processor; and a media player module for executing media content on the electronic device, the media player module stored in the memory and executable by the processor, wherein when executed by the processor the media player module causes the electronic device to i) monitor sound within an ambient environment in which the electronic device resides; ii) compare the monitored sound in the ambient environment to a prescribed sound threshold; and iii) pause execution of the media content upon the sound in the ambient environment exceeding the prescribed sound threshold. 2. The electronic device according to claim 1, further comprising a microphone configured to monitor the sound in the ambient environment and provide data corresponding to the monitored sound to the media player module. 3. The electronic device according to claim 1, further comprising a wireless transceiver for receiving at least one of data corresponding to sound in the ambient environment or media content for execution by the media player. 4. The electronic device according to claim 1, wherein the media player module further causes the electronic device to compare a frequency of the sound in the ambient environment with a frequency of sound output by the electronic device, and pause the media content when the frequency of the ambient sound is substantially different from the frequency of the sound output by the electronic device. 5. The electronic device according to claim 1, wherein the media player module further causes the electronic device to buffer streamed media content while the media content is paused. 6. The electronic device according to claim 1, wherein the media player module further causes the electronic device to resume execution of the media content upon detecting a user-initiated command to resume execution. 7. 
The electronic device according to claim 1, wherein the media player further causes the electronic device to resume execution of the media content upon the sound in the ambient environment falling below the prescribed sound threshold for a prescribed time period. 8. The electronic device according to claim 1, wherein the electronic device comprises at least one of a mobile phone, a tablet computer, a television, or a home entertainment system. 9. A method of controlling flow of media content on an electronic device, comprising: executing media content on the electronic device; monitoring sound within an ambient environment in which the electronic device resides; comparing the sound in the ambient environment to a prescribed sound threshold; and upon the sound in the ambient environment exceeding the prescribed sound threshold, automatically pausing execution of media content on the electronic device. 10. The method according to claim 9, further comprising adjusting the prescribed sound threshold based on a volume setting of the electronic device. 11. The method according to claim 9, wherein monitoring includes obtaining data corresponding to the sound in the ambient environment from a microphone of the electronic device. 12. The method according to claim 9, wherein monitoring comprises using another electronic device to collect sound data, the another electronic device separate from the electronic device. 13. (canceled) 14. The method according to claim 9, wherein comparing includes comparing a frequency of the sound in the ambient environment with a frequency of sound output by the electronic device, and pausing includes pausing when the frequency of the ambient sound is substantially different from the frequency of the sound output by the electronic device. 15. The method according to claim 9, wherein executing media content includes executing media content stored on a device remote from the electronic device. 16. 
The method according to claim 9, wherein executing media content includes streaming media content to the electronic device for execution by the electronic device. 17. The method according to claim 16, further comprising buffering streamed media content while the media content is paused. 18. The method according to claim 9, wherein executing media content includes executing media content stored on the electronic device. 19. (canceled) 20. The method according to claim 9, further comprising resuming execution of the media content upon detecting a user-initiated command to resume execution. 21. The method according to claim 9, further comprising resuming execution of the media content upon the sound in the ambient environment falling below the prescribed sound threshold for a prescribed time period. 22. A non-transitory computer readable medium comprising computer executable instructions adapted to control a flow of media content on an electronic device, wherein when executed by a processor the computer executable instructions cause the processor to: execute media content on the electronic device; monitor sound within an ambient environment in which the electronic device resides; compare the sound in the ambient environment to a prescribed sound threshold; and automatically pause execution of media content on the electronic device upon the sound in the ambient environment exceeding the prescribed sound threshold.
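The pause/resume behavior of claims 9 and 21 — pause when ambient sound exceeds a threshold, resume once the environment stays quiet for a prescribed period — can be sketched with a small state machine. The class name, the decibel units, and the sample-count hold-off are illustrative assumptions:

```python
class MediaPlayer:
    """Sketch of the claimed auto-pause: pause on loud ambient sound, resume
    after the sound stays below the threshold for a hold-off period."""

    def __init__(self, threshold_db=60.0, resume_after=3):
        self.threshold_db = threshold_db
        self.resume_after = resume_after  # consecutive quiet samples before resuming
        self.playing = True
        self._quiet_samples = 0

    def on_ambient_sample(self, level_db):
        if level_db > self.threshold_db:
            self.playing = False          # claim 9: automatically pause
            self._quiet_samples = 0
        elif not self.playing:
            self._quiet_samples += 1
            if self._quiet_samples >= self.resume_after:
                self.playing = True       # claim 21: resume after quiet period
```

A fuller implementation would also buffer streamed content while paused (claim 17) and honor a user-initiated resume command (claim 20); both fit naturally as extra branches in `on_ambient_sample` or separate methods.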
2,400
7,965
7,965
15,338,341
2,437
Instead of specifying actual transport layer IP addresses as a basis for a secure tunnel's security association, an approach described herein specifies virtual addresses. Then suitable network appliances intercept and modify packets in order to map between the virtual addresses and actual addresses. The virtual addresses satisfy IPsec or another authentication procedure that checks packets using the security association. The actual addresses are used by transport layer protocols. This overlay approach permits a session to fail over from one network connection to another without requiring restoration of the session in a newly created secure tunnel when one of the network interfaces becomes unavailable, an event that would otherwise obsolete a security association based in part on the IP address of the now-unavailable interface. The approach also allows the use of parallel paths and of one-to-many or many-to-one path topologies, which would otherwise not be permitted.
1. A secure networking process comprising: establishing a virtual private network (VPN) tunnel which has a security association which is specified with at least a source virtual IP address that is not an actual wide area network (WAN) interface address and which is also specified with at least a destination virtual IP address that is not an actual WAN interface address; intercepting an outgoing packet that is directed from a source endpoint of the VPN tunnel toward a destination endpoint of the VPN tunnel; modifying the outgoing packet by replacing an instance of the source virtual IP address in the outgoing packet with an actual address that is the IP address of an outgoing WAN interface at a local site, and modifying the outgoing packet by replacing an instance of the destination virtual IP address in the outgoing packet with an actual address that is the IP address of an incoming WAN interface at a remote site; and then transmitting the modified outgoing packet through the outgoing WAN interface at the local site toward the incoming WAN interface at the remote site. 2. The process of claim 1, further comprising: intercepting the modified outgoing packet after it has reached the incoming WAN interface at the remote site; modifying the intercepted packet by replacing the actual address of the incoming WAN interface with the destination virtual IP address and by replacing the actual address of the outgoing WAN interface with the source virtual IP address; and then submitting the modified intercepted packet for an authentication that is based on the security association. 3. The process of claim 2, wherein the submitting step submits the modified intercepted packet for an IPsec authentication. 4. The process of claim 2, wherein the process maps the security association to pairs of actual WAN interface addresses that define at least two parallel paths between the local site and the remote site. 5. 
The process of claim 2, wherein the process maps the security association to actual WAN interface addresses that define at least two paths between the local site and the remote site that share the same WAN interface at one site and do not share any WAN interface at the other site. 6. The process of claim 1, further comprising a remote appliance at the remote site and a local appliance at the local site authenticating to one another before the local appliance performs the packet intercepting, packet modifying, and packet transmitting steps. 7. The process of claim 1, further comprising a remote appliance at the remote site failing to authenticate itself to a local appliance at the local site, and then the local appliance terminating the VPN tunnel in response to the authentication failure. 8. The process of claim 1, further comprising a local appliance at the local site performing at least one of the following to get the actual address that is the IP address of the outgoing WAN interface at the local site: load balancing or failing over. 9. The process of claim 1, further comprising a local appliance at the local site participating in a multi-phase auto-configuration by executing at least a portion of an auto-configuration program. 10. 
A secure networking process comprising: establishing a virtual private network (VPN) tunnel which has a security association which is specified with at least a source virtual IP address that is not an actual wide area network (WAN) interface address and which is also specified with at least a destination virtual IP address that is not an actual WAN interface address; intercepting an outgoing packet that is directed from a source endpoint of the VPN tunnel toward a destination endpoint of the VPN tunnel; performing at least one of the following to get an actual address that is an IP address of an outgoing WAN interface at a local site: load balancing, failing over, or another routing optimization; modifying the outgoing packet by replacing an instance of the source virtual IP address in the outgoing packet with the actual address that is the IP address of the outgoing WAN interface at the local site, and modifying the outgoing packet by replacing an instance of the destination virtual IP address in the outgoing packet with an actual address that is the IP address of an incoming WAN interface at a remote site; transmitting the modified outgoing packet through the outgoing WAN interface at the local site toward the incoming WAN interface at the remote site; intercepting the modified outgoing packet after it has reached the incoming WAN interface at the remote site; modifying the intercepted packet by replacing the actual address of the incoming WAN interface with the destination virtual IP address and by replacing the actual address of the outgoing WAN interface with the source virtual IP address; and submitting the modified intercepted packet for an IPsec authentication that is based on the security association. 11. The process of claim 10, wherein the process maps the security association to pairs of actual WAN interface addresses that define at least two parallel paths between the local site and the remote site. 12. 
The process of claim 10, wherein the process maps the security association to actual WAN interface addresses that define at least two paths between the local site and the remote site that share the same WAN interface at one site and do not share any WAN interface at the other site. 13. The process of claim 10, further comprising a remote appliance at the remote site and a local appliance at the local site authenticating to one another before the local appliance performs the packet intercepting, packet modifying, and packet transmitting steps. 14. The process of claim 10, further comprising a remote appliance at the remote site failing to authenticate itself to a local appliance at the local site, and then the local appliance terminating the VPN tunnel in response to the authentication failure. 15. The process of claim 10, further comprising a local appliance at the local site participating in a multi-phase auto-configuration by executing at least a portion of an auto-configuration program. 16. A network appliance comprising: at least one wide area network (WAN) interface having an actual IP address, namely, an IP address which has been statically or dynamically assigned and has been or will be advertised across a network connection; at least one local area network (LAN) interface; a processor; a memory in operable communication with the processor; an overlay code residing in the memory which upon execution by the processor performs a secure networking process which intercepts a packet received at the LAN interface, maps two security associated addresses in the packet to actual addresses, one of the actual addresses being the WAN interface actual IP address, and modifies the packet to include the WAN interface actual IP address in place of a source address which is one of the security associated addresses in the packet; and a transmitter code which transmits the modified packet out the WAN interface. 17. 
The network appliance of claim 16, denoted here as a first network appliance, in combination with a second network appliance, the second network appliance comprising: at least one WAN interface having an actual IP address; at least one LAN interface; a virtual private network (VPN) authentication module; a processor; a memory in operable communication with the processor; an overlay code residing in the memory which upon execution by the processor performs a secure networking process which intercepts the modified packet after it is received at the second network appliance's WAN interface, maps actual addresses in the packet back to the security associated addresses using a table in the memory, thereby restoring the security associated addresses in the packet, and submits the packet with restored security associated addresses to the VPN authentication module for authentication prior to transmittal of the packet to the LAN interface. 18. The network appliance of claim 17, wherein the table maps the security associated addresses to network connections between the first network appliance and the second network appliance, and the connections are parallel and do not share any WAN interface with one another. 19. The network appliance of claim 17, wherein the table maps the security associated addresses to network connections between the first network appliance and the second network appliance, and the connections are not parallel and share at least one WAN interface with one another. 20. The network appliance of claim 16, further comprising at least one of the following: code which authenticates the network appliance to another network appliance; code which terminates a tunnel connecting to the network appliance in response to another network appliance failing to authenticate to the network appliance; or code which auto-configures the network appliance.
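The overlay rewrite in claims 1-3 can be sketched as two mapping steps: the local appliance replaces the virtual source/destination pair with an actual WAN address pair before transmission, and the remote appliance restores the virtual pair before authentication. Everything here is illustrative: the addresses, the path table, and the `authenticate` stand-in (a real deployment would use IPsec, and packets would be intercepted on the wire rather than passed as dictionaries).

```python
# Virtual endpoints named in the security association (claim 1).
SECURITY_ASSOC = {"src": "10.99.0.1", "dst": "10.99.0.2"}

# Table mapping the virtual pair to two parallel actual WAN-interface
# address pairs, as in claim 4. Addresses are documentation examples.
PATHS = [("203.0.113.10", "198.51.100.20"),
         ("203.0.113.11", "198.51.100.21")]

def outbound_rewrite(packet, path_index=0):
    """Local appliance: swap virtual addresses for actual WAN addresses.
    Load balancing or failover (claim 8) would choose path_index."""
    out_wan, in_wan = PATHS[path_index]
    return {**packet, "src": out_wan, "dst": in_wan}

def inbound_restore(packet):
    """Remote appliance: map actual addresses back to the virtual pair
    using the path table (claim 2)."""
    for out_wan, in_wan in PATHS:
        if packet["src"] == out_wan and packet["dst"] == in_wan:
            return {**packet, "src": SECURITY_ASSOC["src"],
                              "dst": SECURITY_ASSOC["dst"]}
    raise ValueError("no overlay mapping for this address pair")

def authenticate(packet):
    """Stand-in for IPsec authentication: the security association only
    ever sees the virtual addresses, so any mapped path passes (claim 3)."""
    return (packet["src"] == SECURITY_ASSOC["src"]
            and packet["dst"] == SECURITY_ASSOC["dst"])

pkt = {"src": SECURITY_ASSOC["src"], "dst": SECURITY_ASSOC["dst"], "data": b"hi"}
wire = outbound_rewrite(pkt, path_index=1)   # e.g. failover to the second path
restored = inbound_restore(wire)
print(wire["src"], authenticate(restored))   # → 203.0.113.11 True
```

Because authentication checks only the restored virtual pair, switching `path_index` (failover, load balancing, or parallel paths) never invalidates the security association, which is the point of the overlay.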
2,400
7,966
7,966
15,265,877
2,453
An optoelectronic device includes a semiconductor substrate and a monolithic array of light-emitting elements, including first and second sets of the light-emitting elements arranged on the substrate in respective first and second two-dimensional patterns, which are interleaved on the substrate. First and second conductors are respectively connected to separately drive the first and second sets of the light-emitting elements so that the device selectably emits light in either or both of the first and second patterns.
1. An optoelectronic device, comprising: a semiconductor substrate; and a monolithic array of light-emitting elements, comprising first and second sets of the light-emitting elements arranged on the substrate in respective first and second two-dimensional patterns, which are interleaved on the substrate; and first and second conductors, which are respectively connected to separately drive the first and second sets of the light-emitting elements so that the device selectably emits light in either or both of the first and second patterns. 2. The device according to claim 1, wherein the first and second conductors are disposed in different, first and second metal layers formed over the semiconductor substrate. 3. The device according to claim 1, wherein the first and second conductors are both disposed within a single metal layer formed over the semiconductor substrate. 4. The device according to claim 1, wherein the light-emitting elements comprise vertical-cavity surface-emitting laser (VCSEL) diodes. 5. The device according to claim 1, wherein at least one of the two-dimensional patterns is not a regular lattice. 6. The device according to claim 5, wherein the at least one of the two-dimensional patterns is an uncorrelated pattern. 7. The device according to claim 1, and comprising: projection optics, which are configured to project the light emitted by the light emitting elements onto an object; and an imaging device, which is configured to capture images of the object in a low-resolution mode while only the first set of the light-emitting elements is driven to emit the light, thereby projecting a low-resolution pattern onto the object, and in a high-resolution mode while both of the first and second sets of the light-emitting elements are driven to emit the light, thereby projecting a high-resolution pattern onto the object. 8. 
The device according to claim 1, and comprising a projection lens, which is mounted on the semiconductor substrate and is configured to collect and focus light emitted by the light-emitting elements so as to project an optical beam containing a light pattern corresponding to the two-dimensional pattern of the light-emitting elements on the substrate. 9. The device according to claim 8, and comprising a diffractive optical element (DOE), which is mounted on the substrate and is configured to expand the projected optical beam by producing multiple, mutually-adjacent replicas of the pattern. 10. The device according to claim 9, wherein the projection lens and the DOE are formed on opposing sides of a single optical substrate. 11. The device according to claim 1, and comprising a single diffractive optical element (DOE), which is mounted on the semiconductor substrate and is configured to collect and focus light emitted by the light-emitting elements so as to project an optical beam containing a light pattern corresponding to the two-dimensional pattern of the light-emitting elements on the substrate while expanding the projected optical beam by producing multiple, mutually-adjacent replicas of the pattern. 12. A method for producing an optoelectronic device, the method comprising: providing a semiconductor substrate; and forming on the substrate a monolithic array of light-emitting elements, comprising first and second sets of the light-emitting elements arranged on the substrate in respective first and second two-dimensional patterns, which are interleaved on the substrate; and connecting first and second conductors to separately drive the first and second sets of the light-emitting elements respectively, so that the device selectably emits light in either or both of the first and second patterns. 13. 
The method according to claim 12, wherein the first and second conductors are disposed in different, first and second metal layers formed over the semiconductor substrate. 14. The method according to claim 12, wherein the first and second conductors are both disposed within a single metal layer formed over the semiconductor substrate. 15. The method according to claim 12, wherein the light-emitting elements comprise vertical-cavity surface-emitting laser (VCSEL) diodes. 16. The method according to claim 12, wherein at least one of the two-dimensional patterns is not a regular lattice. 17. The method according to claim 16, wherein the at least one of the two-dimensional patterns is an uncorrelated pattern. 18. The method according to claim 12, and comprising: projecting the light emitted by the light emitting elements onto an object; capturing first images of the object in a low-resolution mode while only the first set of the light-emitting elements is driven to emit the light, thereby projecting a low-resolution pattern onto the object; and capturing second images in a high-resolution mode while both of the first and second sets of the light-emitting elements are driven to emit the light, thereby projecting a high-resolution pattern onto the object. 19. The method according to claim 18, and comprising mounting a projection lens on the semiconductor substrate so as to collect and focus light emitted by the light-emitting elements, thereby projecting an optical beam containing a light pattern corresponding to the two-dimensional pattern of the light-emitting elements on the substrate. 20. The method according to claim 19, and comprising mounting a diffractive optical element (DOE) on the substrate so as to expand the projected optical beam by producing multiple, mutually-adjacent replicas of the pattern.
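The selectable low/high-resolution projection in claims 1 and 7 comes from driving the two interleaved emitter sets separately. A minimal sketch, assuming a 4x4 grid and a checkerboard interleave (both are illustrative choices; the claims cover irregular and uncorrelated patterns too):

```python
GRID = 4  # assumed emitter grid size, for illustration only

def emitter_set(x, y):
    """Assign each emitter to set 1 or set 2 (checkerboard interleave)."""
    return 1 if (x + y) % 2 == 0 else 2

def projected_pattern(drive_set1, drive_set2):
    """Coordinates of lit emitters for the selected drive conductors
    (the first and second conductors of claim 1)."""
    lit = set()
    for y in range(GRID):
        for x in range(GRID):
            s = emitter_set(x, y)
            if (s == 1 and drive_set1) or (s == 2 and drive_set2):
                lit.add((x, y))
    return lit

low = projected_pattern(drive_set1=True, drive_set2=False)  # low-resolution mode
high = projected_pattern(drive_set1=True, drive_set2=True)  # high-resolution mode
print(len(low), len(high), low < high)  # → 8 16 True
```

Because the sets are interleaved on one substrate, the low-resolution pattern is a strict subset of the high-resolution one, which is what lets the imaging device of claim 7 switch modes without any mechanical change.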
2,400
7,967
7,967
16,292,374
2,439
A method for fetching content from a web server to a client device is disclosed, using tunnel devices serving as intermediate devices. The client device accesses an acceleration server to receive a list of available tunnel devices. The requested content is partitioned into slices, and the client device sends a request for the slices to the available tunnel devices. The tunnel devices in turn fetch the slices from the data server, and send the slices to the client device, where the content is reconstructed from the received slices. A client device may also serve as a tunnel device, serving as an intermediate device to other client devices. Similarly, a tunnel device may also serve as a client device for fetching content from a data server. The selection of tunnel devices to be used by a client device may be made in the acceleration server, in the client device, or in both. The partitioning into slices may be overlapping or non-overlapping, and the same slice (or the whole content) may be fetched via multiple tunnel devices.
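The slice-and-reassemble scheme described in the abstract can be sketched minimally: the content is partitioned into slices, each slice is requested through a different tunnel device, and the client reconstructs the content from the returned slices. Tunnels are simulated here as plain functions; the acceleration server, real tunnel devices, and all network I/O are omitted, and the slice size is arbitrary.

```python
def partition(content: bytes, slice_size: int) -> list:
    """Split content into non-overlapping slices of at most slice_size bytes."""
    return [content[i:i + slice_size] for i in range(0, len(content), slice_size)]

def fetch_via_tunnels(slices, tunnels):
    """Round-robin each slice to a tunnel; each tunnel 'fetches' its slice."""
    results = {}
    for index, piece in enumerate(slices):
        tunnel = tunnels[index % len(tunnels)]
        results[index] = tunnel(piece)  # a real tunnel would fetch from the data server
    # Reconstruct in slice order, as the client device would.
    return b"".join(results[i] for i in sorted(results))

def identity_tunnel(piece):
    """Stand-in for a tunnel device: returns the requested slice unchanged."""
    return piece

content = b"example web content to be fetched in slices"
reassembled = fetch_via_tunnels(partition(content, 8), [identity_tunnel] * 3)
```

With overlapping slices or the same slice requested via several tunnels (both allowed by the abstract), the reconstruction step would additionally deduplicate before joining.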
1. A method for fetching over the Internet, by a first device identified in the Internet by a first identifier, a first content identified by a first content identifier and stored in a web server, using a first server that stores a group of IP addresses, each IP address in the group is in IPv4 or IPv6 form and is associated with a physical geographical location, the method by the first server comprising: receiving the first content identifier from the first device; selecting an IP address from the group, based on, or using, the respective physical geographical location, in response to the receiving of the first content identifier; sending the first content identifier to the web server using the selected IP address; receiving the first content; and sending the first content to the first device. 2. The method according to claim 1, for use with a group of devices, wherein each of the IP addresses in the group is an identifier of a respective device from the group of devices. 3. The method according to claim 2, wherein the sending of the first content identifier comprises sending the first content identifier to the device from the group of devices that is identified by the selected IP address. 4. The method according to claim 2, wherein the sending of the first content identifier comprises receiving the first content from the device from the group of devices that is identified by the selected IP address. 5. The method according to claim 2, further comprising communicating with each one of the devices from the group of devices. 6. The method according to claim 5, further comprising storing the respective IP address of each of the devices from the group of devices in response to the communicating. 7. The method according to claim 2, wherein the sending of the first content is performed by the device from the group of devices that is identified by the selected IP address. 8. 
The method according to claim 2, further comprising establishing a connection with each of the devices of the group of devices in response to the communicating, and wherein the communicating with each of the devices of the group of devices is over the established connection. 9. The method according to claim 8, wherein each of the devices of the group is communicating using TCP, and wherein the connection is established by performing ‘Active OPEN’ or ‘Passive OPEN’. 10. The method according to claim 1, wherein the sending of the first content identifier to the web server using the selected IP address comprises using the selected IP address as a source address. 11. The method according to claim 1, wherein the selecting further comprises randomly selecting an IP address. 12. The method according to claim 11, wherein the randomly selecting uses one or more random numbers generated by a random number generator that is based on executing an algorithm for generating pseudo-random numbers. 13. The method according to claim 1, wherein the physical geographical location associated with each of the devices is based on, uses, or is responsive to, the actual physical geographical location of a device. 14. The method according to claim 1, wherein the physical geographical location includes at least one out of a continent, a country, a state or province, a city, a street, a ZIP code, or longitude and latitude. 15. The method according to claim 1, wherein the physical geographical location is based on a geolocation. 16. The method according to claim 15, wherein the geolocation is based on the W3C Geolocation API. 17. The method according to claim 15, for use with a database associating IP addresses to physical geographical locations, wherein the physical geographical location of each of the devices of the group is based on using the database to associate the respective IP address to the physical geographical locations. 18. 
The method according to claim 17, wherein the database is stored in the first server. 19. The method according to claim 17, wherein the database is stored in a geolocation server accessible via the Internet, and the method further comprising sending each of the IP addresses in the group to the geolocation server, and in response receiving the corresponding physical geographical location. 20. The method according to claim 1, wherein the selecting is based on past activities or is based on a timing of an event. 21. The method according to claim 1, wherein the web server uses HyperText Transfer Protocol (HTTP) that responds to HTTP requests via the Internet. 22. The method according to claim 21, wherein the communication with the web server is based on, or using, an HTTP persistent connection. 23. The method according to claim 1, wherein the communication with the first device or with the first server, is based on, or according to, the TCP/IP protocol or connection. 24. The method according to claim 1, wherein the first content includes, consists of, or comprises, a part or whole of files, text, numbers, audio, voice, multimedia, video, images, music, or computer program, or wherein the first content includes, consists of, or comprises, a part of, or a whole of, a web-site page. 25. The method according to claim 1, wherein the first server is storing, operating, or using, a server operating system that consists of, comprises, or is based on, one out of Microsoft Windows Server®, Linux, or UNIX. 26. The method according to claim 25, wherein the server operating system consists of, comprises, or is based on, one out of Microsoft Windows Server® 2003 R2, 2008, 2008 R2, 2012, or 2012 R2 variant, Linux™ or GNU/Linux based Debian GNU/Linux, Debian GNU/kFreeBSD, Debian GNU/Hurd, Fedora™, Gentoo™, Linspire™, Mandriva, Red Hat® Linux, SuSE, and Ubuntu®, UNIX® variant Solaris™, AIX®, Mac™ OS X, FreeBSD®, OpenBSD, and NetBSD®. 27. 
The method according to claim 1, wherein at least part of, or all of, the steps are performed by the first server, and wherein the steps are sequentially executed. 28. The method according to claim 1, further comprising sending, by the first server, at least part of, or all of, the IP addresses in the group to the first device.
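The selection step of claim 1 (combined with the random selection of claims 11 and 12) can be sketched as follows: the first server keeps a group of IP addresses, each associated with a physical geographical location, and picks one address whose location matches, choosing randomly among the candidates. The addresses, location fields, and country codes below are made up for illustration.

```python
import random

# Group of IP addresses held by the first server, each tagged with a
# physical geographical location (illustrative values only).
IP_GROUP = {
    "198.51.100.7":  {"country": "US", "city": "Denver"},
    "198.51.100.12": {"country": "US", "city": "Austin"},
    "203.0.113.5":   {"country": "DE", "city": "Berlin"},
}

def select_ip(group: dict, wanted_country: str, rng=random) -> str:
    """Randomly select an IP address whose associated location matches."""
    candidates = [ip for ip, loc in group.items()
                  if loc["country"] == wanted_country]
    if not candidates:
        raise LookupError("no IP address for that location")
    return rng.choice(candidates)

chosen = select_ip(IP_GROUP, "DE")  # only one German address in this group
```

In the claimed method, the first server would then send the content identifier to the web server using the chosen address as the source address, so the request appears to originate from that location.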
2,400
7,968
7,968
14,011,300
2,482
A method and device for digitally measuring and ordering a custom orthopedic device include an interactive method intended to assist clinicians in selecting, measuring, and submitting precise specifications for patients requiring custom orthopedic devices. The method includes a plurality of menus permitting the clinician to input specifications and submit orders electronically, with the specifications and other data packaged together. The method and device include visualization indications to ensure appropriate image capture of a limb from various angles, including posterior, anterior, lateral, and medial angles.
1. A method of ordering a custom orthopedic device for a joint, comprising the steps of: aligning a viewfinder image displayed on a screen and generated by an image sensor of a portable device with at least one predetermined portion of a limb including a joint; capturing and storing at least one image of the portion of the limb using the image sensor of the portable device based on at least one guideline; associating the at least one captured image with measurements of the limb, and patient information entered into the portable device; and transmitting an order containing the at least one captured image, the measurements of the limb, and the patient information from the portable device to a provider; wherein the at least one guideline is at least one of a depth of field guideline, a horizontal orientation guideline, a vertical orientation guideline, a tilt guideline, or a limb alignment guideline. 2. The method according to claim 1, wherein the at least one image of the limb satisfies the depth of field guideline, the horizontal or the vertical orientation guideline, the tilt angle guideline, and the limb alignment guideline. 3. The method according to claim 1, wherein the limb alignment guideline is a depth of field guideline overlaid on the viewfinder image, the depth of field guideline being a reference frame for a first distance above a joint, a second distance below a joint, and a centering of the limb and joint in the captured image. 4. The method according to claim 3, wherein the first and second distances are the same and referenced from a knee axis line. 5. The method according to claim 3, further comprising the step of: aligning the distances above and below the joint with the depth of field guideline shown in the viewfinder image before capturing the image. 6. 
The method according to claim 1, wherein once the orientation of the portable device relative to the limb satisfies the horizontal angle guideline or vertical angle guideline and the tilt angle guideline, the portable device enables image capture. 7. The method according to claim 1, further comprising the step of: calibrating the image sensor of the portable device. 8. The method according to claim 1, further comprising the step of: executing an ordering application; determining whether the ordering application has been previously executed; upon the determination that the ordering application has not been previously executed, calibrating the image sensor of the portable device; and upon the determination that the ordering application has been previously executed, enabling capture of the image of the limb. 9. The method according to claim 1, further comprising the steps of: reviewing the captured image of the limb; and selecting a custom orthopedic device configuration. 10. The method according to claim 9, wherein the step of reviewing the captured image of the limb, comprises the steps of: viewing the captured image with an overlaid depth of field guideline to confirm the captured portion of the limb satisfies the overlaid depth of field guideline; entering basic patient information into the portable device, wherein the basic patient information includes measurements of the limb at various locations on the limb; overlaying the captured image with the basic patient information; and storing the overlaid captured image in the portable device. 11. The method according to claim 1, further comprising the steps of: configuring the custom orthopedic device; reviewing the order; and storing the order in a memory of the portable device. 12. The method according to claim 11, wherein at least one previous order is stored in the memory of the portable device. 13. 
The method according to claim 1, wherein the order is transmitted as an e-mail containing the patient information and the saved, captured image of the limb. 14. The method according to claim 1, wherein the joint is a knee joint and the orthopedic device is a knee brace. 15. A device for ordering a custom orthopedic device, comprising: an image sensor, the image sensor configured to capture an image; a display; a gyroscope and/or accelerometer; a communication interface; a processor; and a memory; wherein the processor is configured to enable capturing an image of a portion of a limb including a joint using the image sensor based on at least one guideline, wherein the image of the limb satisfies the at least one guideline; wherein the gyroscope and/or accelerometer is configured to provide orientation data to the processor; wherein the communication interface is configured to transmit an order containing the captured image and patient information from the apparatus over a network to a provider; and wherein the at least one guideline is at least one of a depth of field guideline, a horizontal orientation guideline, a vertical orientation guideline, a tilt guideline, or a limb alignment guideline. 16. The device according to claim 15, wherein the limb alignment guideline is a depth of field guideline overlaid on a viewfinder image, the depth of field guideline being a reference frame for a first distance above a joint, a second distance below a joint, and a centering of the limb and joint in the captured image. 17. The device according to claim 16, wherein the processor is configured to provide an indication of the orientation of the device on the display. 18. The device according to claim 15, wherein the processor is configured to calibrate the image sensor by setting the image sensor to a first resolution and a first zoom level. 19. The method according to claim 1, wherein at least one image of markings or reference points on the limb is captured. 20. 
The method according to claim 19, further comprising: generating a three-dimensional model of the limb from the markings.
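The guideline gate in claims 1 and 6 can be sketched as a simple check: the portable device reads its orientation from the gyroscope and/or accelerometer and enables image capture only once the tilt and horizontal/vertical orientation guidelines are satisfied. The tolerance values below are invented for illustration; the claims do not specify numeric thresholds.

```python
# Illustrative tolerances (not from the patent).
TILT_TOLERANCE_DEG = 5.0         # allowed deviation from the tilt guideline
ORIENTATION_TOLERANCE_DEG = 5.0  # allowed horizontal/vertical misalignment

def capture_enabled(tilt_deg: float, orientation_deg: float) -> bool:
    """Return True when both guidelines are satisfied and capture may proceed."""
    return (abs(tilt_deg) <= TILT_TOLERANCE_DEG
            and abs(orientation_deg) <= ORIENTATION_TOLERANCE_DEG)

ready = capture_enabled(2.0, 1.5)      # within both tolerances
blocked = capture_enabled(12.0, 1.5)   # tilt guideline violated
```

The depth-of-field and limb-alignment guidelines of claim 3 would be checked against the viewfinder overlay rather than sensor data, so they are not modeled here.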
2,400
7,969
7,969
14,599,345
2,425
An advanced wireless IP STB is provided with multiple built-in antennas capable of capturing plural downstream transmissions simultaneously on dedicated receivers using different modem technologies without the use of wires to the home. The proposed solution facilitates the advanced wireless IP STB being able to receive multiply sourced data traffic, including, for example, IPTV, digital TV, web TV, radio web, internet chat (written, voice, and video), GPS tracking locator signals, media player web support, web-based video gaming, YouTube and similar video streaming, TV surveillance, video intercom surveillance, and much more. The advanced wireless IP STB is configured to be able to establish a broadband (internet) session through previously assigned or negotiated channel assignments between one or more modems and plural remote wireless infrastructures widely deployed in a municipality, such as WiMAX, LTE, WCDMA, CDMA 1×, TD-SCDMA, GSM, GPRS, EDGE, 5G, or the like.
1. An apparatus comprising: a plurality of modem modules, wherein each modem module is configured to: receive radio frequency (RF) signals; and process the received RF signals according to a different respective wireless communication standard; and a processor communicatively coupled to the modem modules, wherein the processor is configured to: establish, using at least two of the modem modules, a plurality of wireless network links between the apparatus and at least one wireless data network; obtain, using each of the at least two modem modules, a respective set of data transmitted through one of the wireless network links; determine an RF signal strength associated with each set of data; identify, from among the sets of data, a set of data corresponding to a strongest RF signal strength; determine internet protocol television (IPTV) signals based on the set of data corresponding to the strongest RF signal strength; and transmit output signals corresponding to the IPTV signals to a device communicatively coupled to the apparatus; wherein the apparatus is a set top box. 2. The apparatus of claim 1, wherein the apparatus further comprises a wireless transmitter communicatively coupled to the processor; and wherein the processor is configured to transmit, using the wireless transmitter, the output signals to the device. 3. The apparatus of claim 1, wherein at least some of the sets of data are transmitted by an IPTV provider communicatively coupled to at least one of the wireless data networks. 4. The apparatus of claim 1, wherein the wireless data links are established over a plurality of cellular radio channels. 5. The apparatus of claim 1, wherein the wireless communication standards include at least one of LTE, WiMAX, CDMA 1×, TD-SCDMA, GSM, GPRS, and EDGE. 6. The apparatus of claim 1, wherein the wireless communication standards include at least one of 2G, 2.5G, 3G, 3.5G, 4G, and 5G. 7. 
The apparatus of claim 1, wherein the wireless data links include at least one of a broadcast link, a unicast link, and a multicast link. 8. The apparatus of claim 1, wherein the processor is configured to establish the plurality of wireless data links between the apparatus and the wireless data networks based on a quality of service requirement. 9. The apparatus of claim 1, wherein the processor is configured to establish the plurality of wireless data links between the apparatus and the wireless data networks based on data traffic. 10. The apparatus of claim 1, wherein the processor is configured to establish a plurality of wireless data links between the apparatus and at least two wireless data networks, wherein the wireless data networks are operated by at least two different network carriers. 11. The apparatus of claim 1, wherein the device is a computer or a television. 12. A method of using a set top box having a plurality of modem modules, wherein each modem module is configured to receive radio frequency (RF) signals and process the received RF signals according to a different respective wireless communication standard, the method comprising: establishing, using at least two of the modem modules, a plurality of wireless network links between the set top box and at least one wireless data network; obtaining, using each of the at least two modem modules, a respective set of data transmitted through one of the wireless network links; determining an RF signal strength associated with each set of data; identifying, from among the sets of data, a set of data corresponding to a strongest RF signal strength; determining internet protocol television (IPTV) signals based on the set of data corresponding to the strongest RF signal strength; and transmitting output signals corresponding to the IPTV signals to a device communicatively coupled to the set top box. 13. 
The method of claim 12, wherein transmitting output signals to the device comprises transmitting the output signals to the device using a wireless transmitter. 14. The method of claim 12, wherein at least some of the sets of data are transmitted by an IPTV provider communicatively coupled to at least one of the wireless data networks. 15. The method of claim 12, wherein the wireless data links are established over a plurality of cellular radio channels. 16. The method of claim 12, wherein the wireless communication standards include at least one of LTE, WiMAX, CDMA 1×, TD-SCDMA, GSM, GPRS, and EDGE. 17. The method of claim 12, wherein the wireless communication standards include at least one of 2G, 2.5G, 3G, 3.5G, 4G, and 5G. 18. The method of claim 12, wherein the wireless data links include at least one of a broadcast link, a unicast link, and a multicast link. 19. The method of claim 12, wherein establishing the plurality of wireless network links between the set top box and the wireless data networks comprises establishing the plurality of wireless data links between the set top box and the wireless data networks based on a quality of service requirement. 20. The method of claim 12, wherein establishing the plurality of wireless network links between the set top box and the wireless data networks comprises establishing the plurality of wireless data links between the set top box and the wireless data networks based on data traffic. 21. The method of claim 12, further comprising: establishing a plurality of wireless data links between the set top box and at least two wireless data networks, wherein the wireless data networks are operated by at least two different network carriers. 22. The method of claim 12, wherein the device is a computer or a television.
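The selection logic of claims 1 and 12 reduces to a maximum over signal strengths: each modem module yields a set of data with an associated RF signal strength, and the set top box decodes IPTV from the set with the strongest signal. The field names and dBm values in this sketch are illustrative only; real modem firmware would report strength in its own units.

```python
def strongest_data_set(sets):
    """Pick the data set with the highest RF signal strength (dBm, less
    negative means stronger)."""
    return max(sets, key=lambda s: s["rf_dbm"])

# Two sets of data obtained through two modem modules (illustrative values).
received = [
    {"modem": "LTE",   "rf_dbm": -71, "data": b"lte-stream"},
    {"modem": "WiMAX", "rf_dbm": -85, "data": b"wimax-stream"},
]

best = strongest_data_set(received)  # the LTE set, at -71 dBm
```

The IPTV decode and output steps that follow in the claims would then operate only on `best["data"]`, which is what lets the box keep several wireless links open while presenting a single program stream.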
An advanced wireless IP STB is provided with multiple built-in antennas capable of capturing plural downstream transmissions simultaneously on dedicated receivers using different modem technologies without the use of wires to the home. The proposed solution facilitates the advanced wireless IP STB being able to receive multiply sourced data traffic, including, for example, IPTV, digital TV, web TV, radio web, internet chat; written, voice and video, GPS tracking locator signals, media player web support, web based video gaming, YouTube and the like video streaming, TV surveillance, video intercom surveillance, and much more. The advanced wireless IP STB is configured to be able to establish a broadband (internet) session through previously assigned or negotiated channel assignments between one or more modems and plural remote wireless infrastructures widely deployed in a municipality, such as WiMAX, LTE, WCDMA, CDMA 1×, TD-SCDMA, GSM, GPRS, EDGE, 5G or the like.1. An apparatus comprising: a plurality of modem modules, wherein each modem module is configured to: receive radio frequency (RF) signals; and process the received RF signals according to a different respective wireless communication standard; and a processor communicatively coupled to the modem modules, wherein the processor is configured to: establish, using at least two of the modem modules, a plurality of wireless network links between the apparatus and at least one wireless data network; obtain, using each of the at least two modem modules, a respective set of data transmitted through one of the wireless network links; determine an RF signal strength associated with each set of data; identify, from among the sets of data, a set of data corresponding to a strongest RF signal strength; determine internet protocol television (IPTV) signals based on the set of data corresponding to the strongest RF signal strength; and transmit output signals corresponding to the IPTV signals to a device communicatively 
coupled to the apparatus; wherein the apparatus is a set top box. 2. The apparatus of claim 1, wherein the apparatus further comprises a wireless transmitter communicatively coupled to the processor; and wherein the processor is configured to transmit, using the wireless transmitter, the output signals to the device. 3. The apparatus of claim 1, wherein at least some of the sets of data are transmitted by an IPTV provider communicatively coupled to at least one of the wireless data networks. 4. The apparatus of claim 1, wherein the wireless data links are established over a plurality of cellular radio channels. 5. The apparatus of claim 1, wherein the wireless communication standards include at least one of LTE, WiMAX, CDMA 1×, TD-SCDMA, GSM, GPRS, and EDGE. 6. The apparatus of claim 1, wherein the wireless communication standards include at least one of 2G, 2.5G, 3G, 3.5G, 4G, and 5G. 7. The apparatus of claim 1, wherein the wireless data links include at least one of a broadcast link, a unicast link, and a multicast link. 8. The apparatus of claim 1, wherein the processor is configured to establish the plurality of wireless data links between the apparatus and the wireless data networks based on a quality of service requirement. 9. The apparatus of claim 1, wherein the processor is configured to establish the plurality of wireless data links between the apparatus and the wireless data networks based on data traffic. 10. The apparatus of claim 1, wherein the processor is configured to establish a plurality of wireless data links between the apparatus and at least two wireless data networks, wherein the wireless data networks are operated by at least two different network carriers. 11. The apparatus of claim 1, wherein the device is a computer or a television. 12. 
A method of using a set top box having a plurality of modem modules, wherein each modem module is configured to receive radio frequency (RF) signals and process the received RF signals according to a different respective wireless communication standard, the method comprising: establishing, using at least two of the modem modules, a plurality of wireless network links between the set top box and at least one wireless data network; obtaining, using each of the at least two modem modules, a respective set of data transmitted through one of the wireless network links; determining an RF signal strength associated with each set of data; identifying, from among the sets of data, a set of data corresponding to a strongest RF signal strength; determining internet protocol television (IPTV) signals based on the set of data corresponding to the strongest RF signal strength; and transmitting output signals corresponding to the IPTV signals to a device communicatively coupled to the set top box. 13. The method of claim 12, wherein transmitting output signals to the device comprises transmitting the output signals to the device using a wireless transmitter. 14. The method of claim 12, wherein at least some of the sets of data are transmitted by an IPTV provider communicatively coupled to at least one of the wireless data networks. 15. The method of claim 12, wherein the wireless data links are established over a plurality of cellular radio channels. 16. The method of claim 12, wherein the wireless communication standards include at least one of LTE, WiMAX, CDMA 1×, TD-SCDMA, GSM, GPRS, and EDGE. 17. The method of claim 12, wherein the wireless communication standards include at least one of 2G, 2.5G, 3G, 3.5G, 4G, and 5G. 18. The method of claim 12, wherein the wireless data links include at least one of a broadcast link, a unicast link, and a multicast link. 19. 
The method of claim 12, wherein establishing the plurality of wireless network links between the set top box and the wireless data networks comprises establishing the plurality of wireless data links between the set top box and the wireless data networks based on a quality of service requirement. 20. The method of claim 12, wherein establishing the plurality of wireless network links between the set top box and the wireless data networks comprises establishing the plurality of wireless data links between the set top box and the wireless data networks based on data traffic. 21. The method of claim 12, further comprising: establishing a plurality of wireless data links between the set top box and at least two wireless data networks, wherein the wireless data networks are operated by at least two different network carriers. 22. The method of claim 12, wherein the device is a computer or a television.
2,400
7,970
7,970
15,063,944
2,426
Systems and methods are operable to present a sporting event on a display based on a determined level of viewer engagement and a determined team preference of the viewer. An exemplary embodiment presents a neutral viewpoint video content segment on the display during the first period of game play when the viewer has a neutral team preference, alternatively presents a first team alternative video content segment on the display during the first period of game play when the viewer has a preference for the first team, or alternatively presents a second team alternative video content segment on the display during the first period of game play when the viewer has a preference for the second team.
1. A media content presentation method for presenting a sporting event on a display, the method comprising: receiving, at a media device, a sporting event production comprising: a series of neutral viewpoint video content segments that are serially presentable on the display to a viewer during the presentation of the sporting event, and wherein each of the neutral viewpoint video content segments are associated with a duration corresponding to a period of game play of the sporting event; a first team alternative video content segment corresponding to a first period of game play that occurs within the period of game play of the series of neutral viewpoint video content segments, wherein the first team alternative video content segment includes a first identifier associated with a first team that is playing in the sporting event; and a second team alternative video content segment corresponding to the first period of game play, wherein the second team alternative video content segment includes a second identifier associated with a second team that is playing in the sporting event; presenting a first neutral viewpoint video content segment on the display to a user of the media device, wherein the first neutral viewpoint video content segment includes one of the first identifier associated with the first team or the second identifier associated with the second team, wherein a team preference of the user has not yet been determined by the media device, and wherein the first neutral viewpoint video content segment is presented prior to the first period of game play; detecting a response of the user during presentation of the first neutral viewpoint video content segment; determining, at the media device, a degree of viewer engagement of the user and a characteristic of the user response based on the detected user response; determining one of a user's first team preference that 
indicates that the user favors the second team, and a user's neutral team preference that indicates that the user favors neither the first team nor the second team, wherein the user's first team preference is determined based on the degree of viewer engagement exceeding a threshold and based on the characteristic of the user response being associated with a favoritism for the first team or a disfavor for the second team, wherein the user's second team preference is determined based on the degree of viewer engagement exceeding the threshold and based on the characteristic of the user response being associated with a favoritism for the second team or a disfavor for the first team, and wherein the user's neutral team preference is determined based on the degree of viewer engagement not exceeding the threshold; presenting a following one of the series of neutral viewpoint video content segments on the display during the first period of game play when the user's neutral team preference is determined; alternatively presenting the first team alternative video content segment on the display during the first period of game play when the user's first team preference is determined, wherein the following one of the series of neutral viewpoint video content segments is not presented on the display while the first team alternative video content segment is being presented; and alternatively presenting the second team alternative video content segment on the display during the first period of game play when the user's second team preference is determined, wherein the following one of the series of neutral viewpoint video content segments is not presented on the display while the second team alternative video content segment is being presented. 2. (canceled) 3. 
The method of claim 1, wherein after presentation of one of the neutral viewpoint video content segment, the first team alternative video content segment, or the second team alternative video content segment to the viewer during the first period of game play, the method further comprising: receiving a next neutral viewpoint video content segment that follows the neutral viewpoint video content segment during a next period of game play that follows the first period of game play; and presenting only the next neutral viewpoint video content segment when there is no associated next first team alternative video content segment or next second team alternative video content segment in the received sporting event production. 4. (canceled) 5. The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing a video image of the viewer using a camera when the first neutral viewpoint video content segment is presented; identifying at least one facial expression, posture or gesture made by the viewer in the captured video image; comparing the identified at least one facial expression, posture or gesture with a stored first plurality of facial expressions, postures or gestures that indicate favoritism and a stored second plurality of facial expressions, postures or gestures that indicate disfavor; determining that the viewer has the preference for the first team when the identified at least one facial expression, posture or gesture corresponds to one of the stored first plurality of facial expressions, postures or gestures; and determining that the viewer has the preference for the second team when the identified at least one facial expression, posture or gesture corresponds to one of the stored second plurality of facial expressions, postures or gestures. 6. 
The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing an audio clip of sounds made by the viewer using a microphone when the first neutral viewpoint video content segment is presented; identifying at least one audio content characteristic from the audio clip; comparing the identified at least one audio content characteristic with a stored first plurality of audio content characteristics that indicate favoritism and a stored second plurality of audio content characteristics that indicate disfavor; determining that the viewer has the preference for the first team when the identified at least one audio content characteristic corresponds to one of the stored first plurality of audio content characteristics; and determining that the viewer has the preference for the second team when the identified at least one audio content characteristic corresponds to one of the stored second plurality of audio content characteristics. 7. The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing an audio clip of sounds made by the viewer using a microphone when the first neutral viewpoint video content segment is presented; identifying a volume level from the audio clip; comparing the identified volume level with a stored volume level threshold; and determining that the viewer has the preference for the first team when the identified volume level is at least equal to the volume level threshold. 8. 
The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing an audio clip of sounds made by the viewer using a microphone when the first neutral viewpoint video content segment is presented; identifying at least one key word from the audio clip; comparing the identified at least one key word with a stored first plurality of key words that indicate favoritism and a stored second plurality of key words that indicate disfavor; determining that the viewer has the preference for the first team when the identified at least one key word corresponds to one of the stored first plurality of key words; and determining that the viewer has the preference for the second team when the identified at least one audio key word corresponds to one of the stored second plurality of key words. 9. The method of claim 1, wherein prior to presenting one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment, the method further comprising: presenting an electronic program guide (EPG) on the display to the user, wherein the EPG lists the sporting event and a plurality of other programs; receiving a first selection from the viewer via the presented EPG, wherein the first selection is of the sporting event; and receiving a second selection from the viewer via the presented EPG, wherein the second selection is one of a neutral viewpoint only mode of presentation of the sporting event or an alternative viewpoint mode of presentation, wherein one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment are presented only when the second selection is for the alternative viewpoint presentation, and wherein only the neutral viewpoint video content segment is presented when the second selection is for 
the neutral viewpoint only mode of presentation. 10. The method of claim 1, wherein prior to presenting one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment, the method further comprising: receiving a wireless signal from a remote control, wherein the wireless signal is generated in response to the viewer actuating a predefined one of a plurality of controllers of the remote control, wherein one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment are presented only in response to receiving the wireless signal from the remote control. 11. A media device configured to present a sporting event on a display, comprising: a media content stream interface configured to receive a sporting event production comprising: a series of neutral viewpoint video content segments that are serially presentable on the display to a viewer during the presentation of the sporting event, and wherein each of the neutral viewpoint video content segments are associated with a duration corresponding to a period of game play of the sporting event; a first team alternative video content segment corresponding to a first period of game play that occurs within the period of game play of the series of neutral viewpoint video content segments, wherein the first team alternative video content segment includes a first identifier associated with a first team that is playing in the sporting event; and a second team alternative video content segment corresponding to the first period of game play, wherein the second team alternative video content segment includes a second identifier associated with a second team that is playing in the sporting event; and a processor system that, while the sporting event production is being received at the media content stream interface: presents a first neutral viewpoint video 
content segment on the display to a user of the media device, wherein the first neutral viewpoint video content segment includes one of the first identifier associated with the first team or the second identifier associated with the second team, wherein a team preference of the user has not yet been determined by the media device; processes information corresponding to a detected response of the user during presentation of the first neutral viewpoint video content segment; determines a degree of viewer engagement of the user and a characteristic of the user response based on the detected user response; determines one of a user's first team preference that indicates that the user favors the first team, a user's second team preference that indicates that the user favors the second team, and a user's neutral team preference that indicates that the user favors neither the first team nor the second team, wherein the user's first team preference is determined based on the degree of viewer engagement exceeding a threshold and based on the characteristic of the user response being associated with a favoritism for the first team or a disfavor for the second team, wherein the user's second team preference is determined based on the degree of viewer engagement exceeding the threshold and based on the characteristic of the user response being associated with a favoritism for the second team or a disfavor for the first team, and wherein the user's neutral team preference is determined based on the degree of viewer engagement not exceeding the threshold; presents a following one of the series of neutral viewpoint video content segments on the display during the first period of game play when the neutral team preference is determined; alternatively presents the first team alternative video content segment on the display during the first period of game play when the first team preference is determined; and alternatively presents the second team alternative video content segment 
on the display during the first period of game play when the second team preference is determined. 12. (canceled) 13. The media device of claim 11, further comprising: a camera configured to capture a video image of the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify at least one facial expression, posture or gesture made by the viewer in the captured video image; compare the identified at least one facial expression, posture or gesture with a stored first plurality of facial expressions, postures or gestures that indicate favoritism and a stored second plurality of facial expressions, postures or gestures that indicate disfavor; determine that the viewer has the preference for the first team when the identified at least one facial expression, posture or gesture corresponds to one of the stored first plurality of facial expressions, postures or gestures; and determine that the viewer has the preference for the second team when the identified at least one facial expression, posture or gesture corresponds to one of the stored second plurality of facial expressions, postures or gestures. 14. 
The media device of claim 11, further comprising: a microphone configured to capture an audio clip of sounds made by the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify at least one audio content characteristic from the audio clip; compare the identified at least one audio content characteristic with a stored first plurality of audio content characteristics that indicate favoritism and a stored second plurality of audio content characteristics that indicate disfavor; determine that the viewer has the preference for the first team when the identified at least one audio content characteristic corresponds to one of the stored first plurality of audio content characteristics; and determine that the viewer has the preference for the second team when the identified at least one audio content characteristic corresponds to one of the stored second plurality of audio content characteristics. 15. The media device of claim 11, further comprising: a microphone configured to capture an audio clip of sounds made by the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify a volume level from the audio clip; compare the identified volume level with a stored volume level threshold; and determine that the viewer has the preference for the first team when the identified volume level is at least equal to the volume level threshold. 16. 
The media device of claim 11, further comprising: a microphone configured to capture an audio clip of sounds made by the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify at least one key word from the audio clip; compare the identified at least one key word with a stored first plurality of key words that indicate favoritism and a stored second plurality of key words that indicate disfavor; determine that the viewer has the preference for the first team when the identified at least one key word corresponds to one of the stored first plurality of key words; and determine that the viewer has the preference for the second team when the identified at least one key word corresponds to one of the stored second plurality of key words. 17. The media device of claim 11, further comprising: a digital video recorder (DVR) configured to store the received sporting event production, wherein the processor system is further configured to retrieve the stored sporting event production from the DVR, and wherein at least one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment are presented to the viewer as the sporting event production is being retrieved from the DVR. 18. The media device of claim 11, further comprising: a remote control interface that is configured to receive a wireless signal from a remote control, wherein the wireless signal is generated in response to the viewer actuating a predefined one of a plurality of controllers of the remote control, wherein the processor system presents one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment only in response to receiving the wireless signal from the remote control. 19.-20. (canceled) 21. 
The method of claim 1, wherein the following one of the series of neutral viewpoint video content segments, the first team alternative video content segment, and the second team alternative video content segment occur during the same portion of the first period of game play. 22. The method of claim 1, wherein the sporting event production is a live broadcast of the sporting event, and further comprising: storing, at the media device, the following one of the series of neutral viewpoint video content segments while the first team alternative video content segment or the second team alternative video content segment is being presented on the display; and presenting the following one of the series of neutral viewpoint video content segments after presentation of the first team alternative video content segment or the second team alternative video content segment has concluded. 23. The method of claim 22, further comprising: determining a total cumulative duration of a presentation time of the presented first team alternative video content segment or the presented second team alternative video content segment and presentation times of any subsequently presented first team alternative video content segments or subsequently presented second team alternative video content segments; skipping presentation of a subsequent portion of the broadcasting sporting event production when the total cumulative duration exceeds a threshold, wherein a duration of the skipped subsequent portion of the sporting event production is substantially the same as the total cumulative duration; and resuming presentation of the live broadcast of the sporting event production after skipping presentation of the subsequent portion of the sporting event production. 24. The method of claim 23, wherein the skipped presentation of the subsequent portion of the broadcasting sporting event production corresponds to a commercial that has been incorporated into the sporting event production. 25. 
The method of claim 23, wherein the received sporting event production comprises a sacrificial segment, wherein the sacrificial segment is the skipped presentation of the subsequent portion of the broadcasting sporting event production.
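The preference-determination rule recited in claim 1 is a two-stage test: the viewer is neutral when the degree of engagement does not exceed a threshold; otherwise the response characteristic (favoritism for one team or disfavor for the other) decides which team's alternative segment is presented. A minimal sketch of that rule, assuming a numeric engagement score and illustrative string labels for the response characteristic and segment names:

```python
def determine_preference(engagement: float, threshold: float, characteristic: str) -> str:
    """Claim 1's rule: neutral unless engagement exceeds the threshold;
    otherwise map the response characteristic to the favored team."""
    if engagement <= threshold:
        return "neutral"
    if characteristic in ("favor_team1", "disfavor_team2"):
        return "team1"
    if characteristic in ("favor_team2", "disfavor_team1"):
        return "team2"
    return "neutral"  # unrecognized characteristic: fall back to neutral


# Illustrative segment names; the claimed production carries the actual video.
SEGMENTS = {
    "neutral": "neutral_viewpoint_segment",
    "team1": "first_team_alternative_segment",
    "team2": "second_team_alternative_segment",
}


def segment_to_present(engagement: float, threshold: float, characteristic: str) -> str:
    """Select which segment to present during the first period of game play."""
    return SEGMENTS[determine_preference(engagement, threshold, characteristic)]


print(segment_to_present(0.9, 0.5, "disfavor_team2"))  # first_team_alternative_segment
print(segment_to_present(0.2, 0.5, "favor_team1"))     # neutral_viewpoint_segment
```

Note how disfavor for one team maps to the other team's segment, mirroring the "favoritism for the first team or a disfavor for the second team" wording of the claim.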
Systems and methods are operable to present a sporting event on a display based on a determined level of viewer engagement and a determined team preference of the viewer. An exemplary embodiment presents a neutral viewpoint video content segment on the display during the first period of game play when the viewer has a neutral team preference, alternatively presents a first team alternative video content segment on the display during the first period of game play when the viewer has a preference for the first team, or alternatively presents a second team alternative video content segment on the display during the first period of game play when the viewer has a preference for the second team.1. A media content presentation method for presenting a sporting event on a display, the method comprising: receiving, at a media device, a sporting event production comprising: a series of neutral viewpoint video content segments that are serially presentable on the display to a viewer during the presentation of the sporting event, and wherein each of the neutral viewpoint video content segments are associated with a duration corresponding to a period of game play of the sporting event; a first team alternative video content segment corresponding to a first period of game play that occurs within the period of game play of the series of neutral viewpoint video content segments, wherein the first team alternative video content segment includes a first identifier associated with a first team that is playing in the sporting event; and a second team alternative video content segment corresponding to the first period of game play, wherein the second team alternative video content segment includes a second identifier associated with a second team that is playing in the sporting event; presenting a first neutral viewpoint video content segment on the display to a user of the media device, wherein the first neutral viewpoint video content segment includes one of the first identifier 
associated with the first team or the second identifier associated with the second team, wherein a team preference of the user has not yet been determined by the media device, and wherein the first neutral viewpoint video content segment is presented prior to the first period of game play; detecting a response of the user during presentation of the first neutral viewpoint video content segment; determining, at the media device, a degree of viewer engagement of the user and a characteristic of the user response based on the detected user response; determining one of a user's first team preference that indicates that the user favors the first team, a user's second team preference that indicates that the user favors the second team, and a user's neutral team preference that indicates that the user favors neither the first team nor the second team, wherein the user's first team preference is determined based on the degree of viewer engagement exceeding a threshold and based on the characteristic of the user response being associated with a favoritism for the first team or a disfavor for the second team, wherein the user's second team preference is determined based on the degree of viewer engagement exceeding the threshold and based on the characteristic of the user response being associated with a favoritism for the second team or a disfavor for the first team, and wherein the user's neutral team preference is determined based on the degree of viewer engagement not exceeding the threshold; presenting a following one of the series of neutral viewpoint video content segments on the display during the first period of game play when the user's neutral team preference is determined; alternatively presenting the first team alternative video content segment on the display during the first period of game play when the user's first team preference is determined, wherein the following one of the series of neutral viewpoint video content segments is not presented on the display while the first team alternative video content segment is being presented; and 
alternatively presenting the second team alternative video content segment on the display during the first period of game play when the user's second team preference is determined, wherein the following one of the series of neutral viewpoint video content segments is not presented on the display while the second team alternative video content segment is being presented. 2. (canceled) 3. The method of claim 1, wherein after presentation of one of the neutral viewpoint video content segment, the first team alternative video content segment, or the second team alternative video content segment to the viewer during the first period of game play, the method further comprising: receiving a next neutral viewpoint video content segment that follows the neutral viewpoint video content segment during a next period of game play that follows the first period of game play; and presenting only the next neutral viewpoint video content segment when there is no associated next first team alternative video content segment or next second team alternative video content segment in the received sporting event production. 4. (canceled) 5. 
The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing a video image of the viewer using a camera when the first neutral viewpoint video content segment is presented; identifying at least one facial expression, posture or gesture made by the viewer in the captured video image; comparing the identified at least one facial expression, posture or gesture with a stored first plurality of facial expressions, postures or gestures that indicate favoritism and a stored second plurality of facial expressions, postures or gestures that indicate disfavor; determining that the viewer has the preference for the first team when the identified at least one facial expression, posture or gesture corresponds to one of the stored first plurality of facial expressions, postures or gestures; and determining that the viewer has the preference for the second team when the identified at least one facial expression, posture or gesture corresponds to one of the stored second plurality of facial expressions, postures or gestures. 6. 
The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing an audio clip of sounds made by the viewer using a microphone when the first neutral viewpoint video content segment is presented; identifying at least one audio content characteristic from the audio clip; comparing the identified at least one audio content characteristic with a stored first plurality of audio content characteristics that indicate favoritism and a stored second plurality of audio content characteristics that indicate disfavor; determining that the viewer has the preference for the first team when the identified at least one audio content characteristic corresponds to one of the stored first plurality of audio content characteristics; and determining that the viewer has the preference for the second team when the identified at least one audio content characteristic corresponds to one of the stored second plurality of audio content characteristics. 7. The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing an audio clip of sounds made by the viewer using a microphone when the first neutral viewpoint video content segment is presented; identifying a volume level from the audio clip; comparing the identified volume level with a stored volume level threshold; and determining that the viewer has the preference for the first team when the identified volume level is at least equal to the volume level threshold. 8. 
The method of claim 1, wherein determining whether the viewer prefers the first team or the second team based on the determined level of viewer engagement comprises: capturing an audio clip of sounds made by the viewer using a microphone when the first neutral viewpoint video content segment is presented; identifying at least one key word from the audio clip; comparing the identified at least one key word with a stored first plurality of key words that indicate favoritism and a stored second plurality of key words that indicate disfavor; determining that the viewer has the preference for the first team when the identified at least one key word corresponds to one of the stored first plurality of key words; and determining that the viewer has the preference for the second team when the identified at least one key word corresponds to one of the stored second plurality of key words. 9. The method of claim 1, wherein prior to presenting one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment, the method further comprising: presenting an electronic program guide (EPG) on the display to the user, wherein the EPG lists the sporting event and a plurality of other programs; receiving a first selection from the viewer via the presented EPG, wherein the first selection is of the sporting event; and receiving a second selection from the viewer via the presented EPG, wherein the second selection is one of a neutral viewpoint only mode of presentation of the sporting event or an alternative viewpoint mode of presentation, wherein one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment are presented only when the second selection is for the alternative viewpoint presentation, and wherein only the neutral viewpoint video content segment is presented when the second selection is for 
the neutral viewpoint only mode of presentation. 10. The method of claim 1, wherein prior to presenting one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment, the method further comprising: receiving a wireless signal from a remote control, wherein the wireless signal is generated in response to the viewer actuating a predefined one of a plurality of controllers of the remote control, wherein one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment are presented only in response to receiving the wireless signal from the remote control. 11. A media device configured to present a sporting event on a display, comprising: a media content stream interface configured to receive a sporting event production comprising: a series of neutral viewpoint video content segments that are serially presentable on the display to a viewer during the presentation of the sporting event, and wherein each of the neutral viewpoint video content segments are associated with a duration corresponding to a period of game play of the sporting event; a first team alternative video content segment corresponding to a first period of game play that occurs within the period of game play of the series of neutral viewpoint video content segments, wherein the first team alternative video content segment includes a first identifier associated with a first team that is playing in the sporting event; and a second team alternative video content segment corresponding to the first period of game play, wherein the second team alternative video content segment includes a second identifier associated with a second team that is playing in the sporting event; and a processor system that, while the sporting event production is being received at the media content stream interface: presents a first neutral viewpoint video 
content segment on the display to a user of the media device, wherein the first neutral viewpoint video content segment includes one of the first identifier associated with the first team or the second identifier associated with the second team, wherein a team preference of the user has not yet been determined by the media device; processes information corresponding to a detected response of the user during presentation of the first neutral viewpoint video content segment; determines a degree of viewer engagement of the user and a characteristic of the user response based on the detected user response; determines one of a user's first team preference that indicates that the user favors the first team, a user's second team preference that indicates that the user favors the second team, and a user's neutral team preference that indicates that the user favors neither the first team nor the second team, wherein the user's first team preference is determined based on the degree of viewer engagement exceeding a threshold and based on the characteristic of the user response being associated with a favoritism for the first team or a disfavor for the second team, wherein the user's second team preference is determined based on the degree of viewer engagement exceeding the threshold and based on the characteristic of the user response being associated with a favoritism for the second team or a disfavor for the first team, and wherein the user's neutral team preference is determined based on the degree of viewer engagement not exceeding the threshold; presents a following one of the series of neutral viewpoint video content segments on the display during the first period of game play when the neutral team preference is determined; alternatively presents the first team alternative video content segment on the display during the first period of game play when the first team preference is determined; and alternatively presents the second team alternative video content segment 
on the display during the first period of game play when the second team preference is determined. 12. (canceled) 13. The media device of claim 11, further comprising: a camera configured to capture a video image of the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify at least one facial expression, posture or gesture made by the viewer in the captured video image; compare the identified at least one facial expression, posture or gesture with a stored first plurality of facial expressions, postures or gestures that indicate favoritism and a stored second plurality of facial expressions, postures or gestures that indicate disfavor; determine that the viewer has the preference for the first team when the identified at least one facial expression, posture or gesture corresponds to one of the stored first plurality of facial expressions, postures or gestures; and determine that the viewer has the preference for the second team when the identified at least one facial expression, posture or gesture corresponds to one of the stored second plurality of facial expressions, postures or gestures. 14. 
The media device of claim 11, further comprising: a microphone configured to capture an audio clip of sounds made by the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify at least one audio content characteristic from the audio clip; compare the identified at least one audio content characteristic with a stored first plurality of audio content characteristics that indicate favoritism and a stored second plurality of audio content characteristics that indicate disfavor; determine that the viewer has the preference for the first team when the identified at least one audio content characteristic corresponds to one of the stored first plurality of audio content characteristics; and determine that the viewer has the preference for the second team when the identified at least one audio content characteristic corresponds to one of the stored second plurality of audio content characteristics. 15. The media device of claim 11, further comprising: a microphone configured to capture an audio clip of sounds made by the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify a volume level from the audio clip; compare the identified volume level with a stored volume level threshold; and determine that the viewer has the preference for the first team when the identified volume level is at least equal to the volume level threshold. 16. 
The media device of claim 11, further comprising: a microphone configured to capture an audio clip of sounds made by the viewer when the first neutral viewpoint video content segment is presented, wherein the processor system is further configured to: identify at least one key word from the audio clip; compare the identified at least one key word with a stored first plurality of key words that indicate favoritism and a stored second plurality of key words that indicate disfavor; determine that the viewer has the preference for the first team when the identified at least one key word corresponds to one of the stored first plurality of key words; and determine that the viewer has the preference for the second team when the identified at least one key word corresponds to one of the stored second plurality of key words. 17. The media device of claim 11, further comprising: a digital video recorder (DVR) configured to store the received sporting event production, wherein the processor system is further configured to retrieve the stored sporting event production from the DVR, and wherein at least one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment are presented to the viewer as the sporting event production is being retrieved from the DVR. 18. The media device of claim 11, further comprising: a remote control interface that is configured to receive a wireless signal from a remote control, wherein the wireless signal is generated in response to the viewer actuating a predefined one of a plurality of controllers of the remote control, wherein the processor system presents one of the neutral viewpoint video content segment, the first team alternative video content segment or the second team alternative video content segment only in response to receiving the wireless signal from the remote control. 19.-20. (canceled) 21. 
The method of claim 1, wherein the following one of the series of neutral viewpoint video content segments, the first team alternative video content segment, and the second team alternative video content segment occur during the same portion of the first period of game play. 22. The method of claim 1, wherein the sporting event production is a live broadcast of the sporting event, and further comprising: storing, at the media device, the following one of the series of neutral viewpoint video content segments while the first team alternative video content segment or the second team alternative video content segment is being presented on the display; and presenting the following one of the series of neutral viewpoint video content segments after presentation of the first team alternative video content segment or the second team alternative video content segment has concluded. 23. The method of claim 22, further comprising: determining a total cumulative duration of a presentation time of the presented first team alternative video content segment or the presented second team alternative video content segment and presentation times of any subsequently presented first team alternative video content segments or subsequently presented second team alternative video content segments; skipping presentation of a subsequent portion of the broadcasting sporting event production when the total cumulative duration exceeds a threshold, wherein a duration of the skipped subsequent portion of the sporting event production is substantially the same as the total cumulative duration; and resuming presentation of the live broadcast of the sporting event production after skipping presentation of the subsequent portion of the sporting event production. 24. The method of claim 23, wherein the skipped presentation of the subsequent portion of the broadcasting sporting event production corresponds to a commercial that has been incorporated into the sporting event production. 25. 
The method of claim 23, wherein the received sporting event production comprises a sacrificial segment, wherein the sacrificial segment is the skipped presentation of the subsequent portion of the broadcasting sporting event production.
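The claimed team-preference logic above can be summarized in a short sketch: if viewer engagement does not exceed a threshold the preference is neutral, otherwise the characteristic of the response decides between the first and second team, and the preference selects which content segment is presented. This is an illustrative assumption-laden sketch, not the application's implementation; the function names, cue-word sets, and threshold value are all hypothetical.

```python
# Hypothetical sketch of the claimed preference determination. The cue sets
# and threshold are invented for illustration; the claims do not specify them.
FIRST_TEAM_CUES = {"go eagles", "eagles"}   # assumed cues indicating favoritism for team 1
SECOND_TEAM_CUES = {"go hawks", "hawks"}    # assumed cues indicating favoritism for team 2
ENGAGEMENT_THRESHOLD = 0.5                  # assumed engagement threshold

def determine_team_preference(engagement: float, response_cue: str) -> str:
    """Return 'first', 'second', or 'neutral' following the claim logic."""
    if engagement <= ENGAGEMENT_THRESHOLD:
        return "neutral"                    # engagement did not exceed the threshold
    if response_cue in FIRST_TEAM_CUES:
        return "first"                      # response associated with team-1 favoritism
    if response_cue in SECOND_TEAM_CUES:
        return "second"                     # response associated with team-2 favoritism
    return "neutral"

def select_segment(preference: str, neutral_seg, first_alt, second_alt):
    # Present the alternative segment in place of the neutral one when a
    # preference was determined; otherwise keep the neutral viewpoint feed.
    return {"first": first_alt, "second": second_alt}.get(preference, neutral_seg)
```

In this reading, the neutral segment is the default and an alternative segment simply replaces it for the matching period of game play, mirroring the "alternatively presenting" limitations.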
2,400
7,971
7,971
14,324,635
2,472
A method for receiving streaming service data in a user equipment (UE) in a mobile communication network is provided. The method includes receiving streaming service data from a first access router (AR) through a first transmission control protocol (TCP) session in a state that a UE establishes the first TCP session with a corresponding node (CN) using a first IP address; being allocated a second IP address which is different from the first IP address from a second AR which is different from the first AR if the UE hands over to the second AR while receiving the streaming service data; establishing a second TCP session which is different from the first TCP session with the CN using the second IP address; switching from the first TCP session to the second TCP session; and receiving the streaming service data from the CN through the second TCP session.
1. A method for receiving streaming service data in a user equipment (UE) in a mobile communication network, the method comprising: receiving streaming service data from a first access router (AR) through a first transmission control protocol (TCP) session in a state that a UE establishes the first TCP session with a corresponding node (CN) using a first internet protocol (IP) address through the first AR; being allocated a second IP address which is different from the first IP address from a second AR which is different from the first AR if the UE hands over to the second AR while receiving the streaming service data; establishing a second TCP session which is different from the first TCP session with the CN using the second IP address through the second AR; switching from the first TCP session to the second TCP session; and receiving the streaming service data from the CN through the second TCP session after switching from the first TCP session to the second TCP session. 2. The method of claim 1, further comprising: determining whether there is a TCP session which is established using the first IP address after switching from the first TCP session to the second TCP session; and returning the first IP address to the first AR if there is no TCP session which is established using the first IP address. 3. The method of claim 1, wherein the switching from the first TCP session to the second TCP session comprises: determining whether reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is completed; awaiting completion of the reception of the data corresponding to the chunk if the reception of the data corresponding to the chunk is not completed; and switching from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 4. 
A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 1. 5. A method for transmitting streaming service data to a user equipment (UE) in a first access router (AR) in a mobile communication network, the method comprising: transmitting streaming service data to a UE through a first transmission control protocol (TCP) session in a state that the first TCP session is established between a corresponding node (CN) and the UE using a first internet protocol (IP) address; performing a handover operation for the UE with a second AR which is different from the first AR upon detecting that the UE will hand over to the second AR while providing the streaming service data to the UE; and releasing the first TCP session with the UE upon detecting that the UE switches from the first TCP session to a second TCP session which is different from the first TCP session, wherein the second TCP session is a TCP session which the UE establishes with the CN using a second IP address which the second AR allocates to the UE and which is different from the first IP address. 6. The method of claim 5, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits the completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 7. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 5. 8. 
A method for transmitting streaming service data to a user equipment (UE) in a second access router (AR) in a mobile communication network, the method comprising: receiving information indicating that a UE will hand over from a first AR to a second AR from the first AR; detecting that the UE will hand over from the first AR to the second AR based on the information; and allocating a second internet protocol (IP) address to the UE, wherein the second IP address is different from a first IP address which is used in a first transmission control protocol (TCP) session which the first AR establishes with the UE and a corresponding node (CN). 9. The method of claim 8, further comprising: releasing a tunnel which is established with the first AR if there is no TCP session which is established using the first IP address after the UE switches from the first TCP session to the second TCP session, wherein the second TCP session is a TCP session which the UE establishes with the CN using the second IP address. 10. The method of claim 9, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 11. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 8. 12. 
A method for transmitting streaming service data to a user equipment (UE) in a corresponding node (CN) in a mobile communication network, the method comprising: transmitting streaming service data through a first transmission control protocol (TCP) session in a state that a CN establishes the first TCP session with a UE through a first access router (AR) using a first internet protocol (IP) address; establishing a second TCP session which is different from the first TCP session with the UE through a second AR which is different from the first AR using a second IP address which the second AR allocates to the UE and is different from the first IP address upon detecting that the UE will hand over to the second AR while transmitting the streaming service data; and transmitting the streaming service data to the UE through the second TCP session. 13. The method of claim 12, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 14. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 12. 15. 
A user equipment (UE) in a mobile communication network, the UE comprising: a transmitter; and a receiver, wherein the receiver is configured to receive streaming service data from a first access router (AR) through a first transmission control protocol (TCP) session in a state that a UE establishes the first TCP session with a corresponding node (CN) using a first internet protocol (IP) address through the first AR, and be allocated a second IP address which is different from the first IP address from a second AR which is different from the first AR if the UE hands over to the second AR while receiving the streaming service data, wherein the transmitter and the receiver are configured to establish a second TCP session which is different from the first TCP session with the CN using the second IP address through the second AR, and switch from the first TCP session to the second TCP session, and wherein the receiver is configured to receive the streaming service data from the CN through the second TCP session. 16. The UE of claim 15, further comprising: a controller, wherein the controller is configured to determine whether there is a TCP session which is established using the first IP address after the transmitter and the receiver switch from the first TCP session to the second TCP session, and control the transmitter to return the first IP address to the first AR if there is no TCP session which is established using the first IP address. 17. 
The UE of claim 15, wherein the controller is configured to determine whether reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is completed, await completion of the reception of the data corresponding to the chunk if the reception of the data corresponding to the chunk is not completed, and control the transmitter and the receiver to switch from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 18. A first access router (AR) in a mobile communication network, the first AR comprising: a transmitter; and a receiver, wherein the transmitter is configured to transmit streaming service data through a first transmission control protocol (TCP) session to a user equipment (UE) in a state that the first TCP session is established between a corresponding node (CN) and the UE using a first internet protocol (IP) address, wherein the transmitter and the receiver are configured to perform a handover operation for the UE with a second AR which is different from the first AR upon detecting that the UE will hand over to the second AR while providing the streaming service data to the UE, and release the first TCP session with the UE upon detecting that the UE switches from the first TCP session to a second TCP session which is different from the first TCP session, and wherein the second TCP session is a TCP session which the UE establishes with the CN using a second IP address which the second AR allocates to the UE and which is different from the first IP address. 19. 
The first AR of claim 18, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 20. A second access router (AR) in a mobile communication network, the second AR comprising: a receiver configured to receive information indicating that a user equipment (UE) will hand over from a first AR to a second AR from the first AR; and a controller configured to detect that the UE will hand over from the first AR to the second AR based on the information, and allocate a second internet protocol (IP) address to the UE, wherein the second IP address is different from a first IP address which is used in a first transmission control protocol (TCP) session which the first AR establishes with the UE and a corresponding node (CN). 21. The second AR of claim 20, further comprising: a transmitter, wherein the transmitter and the receiver are configured to release a tunnel which is established with the first AR if there is no TCP session which is established using the first IP address after the UE switches from the first TCP session to the second TCP session, and wherein the second TCP session is a TCP session which the UE establishes with the CN using the second IP address. 22. The second AR of claim 21, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 23. 
A corresponding node (CN) in a mobile communication network, the CN comprising: a transmitter; and a receiver, wherein the transmitter is configured to transmit streaming service data through a first transmission control protocol (TCP) session in a state that a CN establishes the first TCP session with a user equipment (UE) through a first access router (AR) using a first internet protocol (IP) address, wherein the transmitter and the receiver are configured to establish a second TCP session which is different from the first TCP session with the UE through a second AR which is different from the first AR using a second IP address which the second AR allocates to the UE and is different from the first IP address upon detecting that the UE will hand over to the second AR while the transmitter transmits the streaming service data, and wherein the transmitter is configured to transmit the streaming service data to the UE through the second TCP session. 24. The CN of claim 23, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk.
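The handover behaviour recited in these claims, in which the UE finishes receiving the in-flight chunk over the first TCP session before switching to the second session established via the new access router, can be sketched as follows. This is a minimal illustrative model, not the application's implementation; the class name, method names, and the stand-in receive loop are all assumptions.

```python
# Hypothetical model of the claimed chunk-aware TCP session switch at the UE.
class StreamingUE:
    def __init__(self, first_session):
        self.session = first_session           # first TCP session via the first AR
        self.chunk_bytes_expected = 0          # size of the chunk in flight
        self.chunk_bytes_received = 0          # bytes of that chunk received so far

    def chunk_complete(self) -> bool:
        # Reception of the current chunk is complete when all bytes arrived.
        return self.chunk_bytes_received >= self.chunk_bytes_expected

    def handover(self, second_ip, open_session):
        """Establish the second session with the new IP address, await
        completion of the in-flight chunk, then switch sessions. Returns the
        old session so the caller can release it (and return the first IP)."""
        new_session = open_session(second_ip)  # second TCP session via the second AR
        while not self.chunk_complete():       # await completion of the chunk
            self.chunk_bytes_received += 1     # stand-in for receiving remaining data
        old, self.session = self.session, new_session
        return old
```

A caller would, per the claims, release the first session and return the first IP address to the first AR only once no session still uses it.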
A method for receiving streaming service data in a user equipment (UE) in a mobile communication network is provided. The method includes receiving streaming service data from a first access router (AR) through a first transmission control protocol (TCP) session in a state that a UE establishes the first TCP session with a corresponding node (CN); being allocated a second IP address which is different from the first IP address from a second AR which is different from the first AR if the UE hands over to the second AR while receiving the streaming service data; establishing a second TCP session which is different from the first TCP session with the CN using the second IP address; switching from the first TCP session to the second TCP session; and receiving the streaming service data from the CN through the second TCP session.1. A method for receiving streaming service data in a user equipment (UE) in a mobile communication network, the method comprising: receiving streaming service data from a first access router (AR) through a first transmission control protocol (TCP) session in a state that a UE establishes the first TCP session with a corresponding node (CN) using a first interne protocol (IP) address through the first AR; being allocated a second IP address which is different from the first IP address from a second AR which is different from the first AR if the UE hands over to the second AR while receiving the streaming service data; establishing a second TCP session which is different from the first TCP session with the CN using the second IP address through the second AR; switching from the first TCP session to the second TCP session; and receiving the streaming service data from the CN through the second TCP session after switching from the first TCP session to the second TCP session. 2. 
The method of claim 1, further comprising: determining whether there is a TCP session which is established using the first IP address after switching from the first TCP session to the second TCP session; and returning the first IP address to the first AR if there is no TCP session which is established using the first IP address. 3. The method of claim 1, wherein the switching from the first TCP session to the second TCP session comprises: determining whether reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR; awaiting completion of the reception of the data corresponding to the chunk if the reception of the data corresponding to the chunk is not completed; and switching from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 4. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 1. 5. 
A method for transmitting streaming service data to a user equipment (UE) in a first access router (AR) in a mobile communication network, the method comprising: transmitting streaming service data to a UE through a first transmission control protocol (TCP) session in a state that the first TCP session is established between a corresponding node (CN) and the UE using a first internet protocol (IP) address; performing a handover operation for the UE with a second AR which is different from the first AR upon detecting that the UE will hand over to the second AR while providing the streaming service data to the UE; and releasing the first TCP session with the UE upon detecting that the UE switches from the first TCP session to a second TCP session which is different from the first TCP session, wherein the second TCP session is a TCP session which the UE establishes with the CN using a second IP address which is different from the first IP address which the second AR allocates to the UE. 6. The method of claim 5, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits the completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 7. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 5. 8. 
A method for transmitting streaming service data to a user equipment (UE) in a second access router (AR) in a mobile communication network, the method comprising: receiving information indicating that a UE will hand over from a first AR to a second AR from the first AR; detecting that the UE will hand over from the first AR to the second AR based on the information; and allocating a second internet protocol (IP) address to the UE, wherein the second IP address is different from a first IP address which is used in a first transmission control protocol (TCP) session which the first AR establishes with the UE and a corresponding node (CN). 9. The method of claim 8, further comprising: releasing a tunnel which is established with the first AR if there is no TCP session which is established using the first IP address after the UE switches from the first TCP session to the second TCP session, wherein the second TCP session is a TCP session which the UE establishes with the CN using the second IP address. 10. The method of claim 9, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 11. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 8. 12. 
A method for transmitting streaming service data to a user equipment (UE) in a corresponding node (CN) in a mobile communication network, the method comprising: transmitting streaming service data through a first transmission control protocol (TCP) session in a state that a CN establishes the first TCP session with a UE through a first access router (AR) using a first internet protocol (IP) address; establishing a second TCP session which is different from the first TCP session with the UE through a second AR which is different from the first AR using a second IP address which the second AR allocates to the UE and is different from the first IP address upon detecting that the UE will hand over to the second AR while transmitting the streaming service data; and transmitting the streaming service data to the UE through the second TCP session. 13. The method of claim 12, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 14. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one processor to perform the method of claim 12. 15. 
A user equipment (UE) in a mobile communication network, the UE comprising: a transmitter; and a receiver, wherein the receiver is configured to receive streaming service data from a first access router (AR) through a first transmission control protocol (TCP) session in a state that a UE establishes the first TCP session with a corresponding node (CN) using a first internet protocol (IP) address through the first AR, and be allocated a second IP address which is different from the first IP address from a second AR which is different from the first AR if the UE hands over to the second AR while receiving the streaming service data, wherein the transmitter and the receiver are configured to establish a second TCP session which is different from the first TCP session with the CN using the second IP address through the second AR, and switch from the first TCP session to the second TCP session, and wherein the receiver is configured to receive the streaming service data from the CN through the second TCP session. 16. The UE of claim 15, further comprising: a controller, wherein the controller is configured to determine whether there is a TCP session which is established using the first IP address after the transmitter and the receiver switch from the first TCP session to the second TCP session, and control the transmitter to return the first IP address to the first AR if there is no TCP session which is established using the first IP address. 17. 
The UE of claim 16, wherein the controller is configured to determine whether reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is completed, await completion of the reception of the data corresponding to the chunk if the reception of the data corresponding to the chunk is not completed, and control the transmitter and the receiver to switch from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 18. A first access router (AR) in a mobile communication network, the first AR comprising: a transmitter; and a receiver, wherein the transmitter is configured to transmit streaming service data through a first transmission control protocol (TCP) session to a user equipment (UE) in a state that the first TCP session is established between a corresponding node (CN) and the UE using a first internet protocol (IP) address, wherein the transmitter and the receiver are configured to perform a handover operation for the UE with a second AR which is different from the first AR upon detecting that the UE will hand over to the second AR while providing the streaming service data to the UE, and release the first TCP session with the UE upon detecting that the UE switches from the first TCP session to a second TCP session which is different from the first TCP session, and wherein the second TCP session is a TCP session which the UE establishes with the CN using a second IP address which is different from the first IP address which the second AR allocates to the UE. 19. 
The first AR of claim 18, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 20. A second access router (AR) in a mobile communication network, the second AR comprising: a receiver configured to receive information indicating that a user equipment (UE) will hand over from a first AR to a second AR from the first AR; and a controller configured to detect that the UE will hand over from the first AR to the second AR based on the information, and allocate a second internet protocol (IP) address to the UE, wherein the second IP address is different from a first IP address which is used in a first transmission control protocol (TCP) session which the first AR establishes with the UE and a corresponding node (CN). 21. The second AR of claim 20, further comprising: a transmitter, wherein the transmitter and the receiver are configured to release a tunnel which is established with the first AR if there is no TCP session which is established using the first IP address after the UE switches from the first TCP session to the second TCP session, and wherein the second TCP session is a TCP session which the UE establishes with the CN using the second IP address. 22. The second AR of claim 21, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk. 23. 
A corresponding node (CN) in a mobile communication network, the CN comprising: a transmitter; and a receiver, wherein the transmitter is configured to transmit streaming service data through a first transmission control protocol (TCP) session in a state that a CN establishes the first TCP session with a user equipment (UE) through a first access router (AR) using a first internet protocol (IP) address, wherein the transmitter and the receiver are configured to establish a second TCP session which is different from the first TCP session with the UE through a second AR which is different from the first AR using a second IP address which the second AR allocates to the UE and is different from the first IP address upon detecting that the UE will hand over to the second AR while the transmitter transmits the streaming service data, and wherein the transmitter is configured to transmit the streaming service data to the UE through the second TCP session. 24. The CN of claim 23, wherein, if reception of data corresponding to a chunk for the streaming service data which the UE has received before the handover from the first AR to the second AR is not completed, the UE awaits completion of the reception of the data corresponding to the chunk, and switches from the first TCP session to the second TCP session upon detecting the completion of the reception of the data corresponding to the chunk.
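The chunk-aware session switch recited in claims 1-3 (establish the second TCP session via the second AR, wait for the chunk in flight on the first session to finish, then switch and release the first IP address) can be sketched as a small decision function. This is an illustrative reading only, not the patented implementation; all function and field names are assumptions.

```python
def should_switch_sessions(second_session_ready: bool,
                           chunk_bytes_received: int,
                           chunk_size: int) -> bool:
    """Gate for the make-before-break switch of claim 3: switch from the
    first TCP session to the second only once the second session is
    established AND the chunk in flight on the first session has been
    fully received."""
    chunk_complete = chunk_bytes_received >= chunk_size
    return second_session_ready and chunk_complete


def handover_step(state: dict, bytes_received: int) -> dict:
    """Advance a toy UE state machine one step. `state` tracks which
    session is active and whether the second session is up; the dict
    layout is hypothetical, for illustration only."""
    if should_switch_sessions(state["second_ready"],
                              bytes_received,
                              state["chunk_size"]):
        state["active"] = "second"          # switch to the second TCP session
        state["first_ip_released"] = True   # claim 2: return the first IP to the first AR
    return state
```

For example, with a 1 MB chunk the UE keeps receiving on the first session until the full chunk has arrived, even if the second session is already established.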
2,400
7,972
7,972
14,277,172
2,483
Methods of encoding and decoding video are described. The encoder and decoder include a buffer storing at least two context model states, each being the context model state after context-adaptive encoding/decoding of a respective previously-encoded/decoded picture. One of the at least two stored context model states is selected from the buffer and used to initialize the context model for context-adaptively encoding/decoding a current picture. The current picture is then context-adaptively entropy encoded/decoded.
1. A method of encoding video using a video encoder, the video encoder employing context-adaptive entropy encoding using a context model, the context model having a context model state defining the probability associated with each context defined in the context model, the video encoder storing a pre-defined context model state for initialization of the context model, and the video encoder including a buffer storing at least two context model states each being the context model state after context-adaptive encoding of a respective previously-encoded picture in the video encoder, the method comprising: for encoding a current picture of the video, selecting one of the at least two stored context model states from the buffer; initializing the context model for context-adaptively encoding the current picture using the selected one of the at least two stored context model states; and context-adaptively entropy encoding the current picture to produce a bitstream of encoded data. 2. The method claimed in claim 1, further comprising storing, in the buffer, an updated context model state associated with the current picture after the context-adaptively entropy encoding. 3. The method claimed in claim 1, further comprising, for each of the respective previously-encoded pictures, context-adaptively entropy encoding that picture, wherein the context-adaptive entropy encoding includes progressively updating a context model state as bins are coded, and after context-adaptive entropy encoding of that picture, storing the updated context model state in the buffer as one of the at least two context model states. 4. The method claimed in claim 1, wherein said selecting is based on similarity in QP values used for the current picture and for the previously-encoded picture. 5. 
The method claimed in claim 4, wherein selecting includes selecting a context model state from the two or more context model states on the basis that its respective previously-encoded picture used a QP value closer to the QP value used in the current picture than any of the other respective previously-encoded pictures for which context model states are stored in the buffer. 6. The method claimed in claim 1, wherein said selecting is based on the current picture and the previously-encoded picture having the same picture type. 7. The method claimed in claim 1, wherein said selecting is based on the previously-encoded picture being a reference picture for the current picture. 8. The method claimed in claim 1, wherein said selecting is based on the current picture and the previously-encoded picture being on the same layer of a hierarchical layer structure defined for the video. 9. The method claimed in claim 1, wherein said selecting includes selecting a context model state from the two or more context model states on the basis that its respective previously-encoded picture is closer temporally to the current picture than any of the other respective previously-encoded pictures for which context model states are stored in the buffer. 10. The method claimed in claim 1, further comprising, prior to selecting, determining that the context model is not to be initialized using the pre-defined context model state for the current picture. 11. The method claimed in claim 1, wherein said selecting, initializing and context-adaptively entropy encoding is applied on a slice-by-slice basis within the current picture. 12. 
A method of decoding video from a bitstream of encoded video using a video decoder, the encoded video having been encoded using context-adaptive entropy encoding using a context model, the context model having a context model state defining the probability associated with each context defined in the context model, the video decoder storing a pre-defined context model state for initialization of the context model, and the video decoder including a buffer storing at least two context model states each being the context model state after context-adaptive decoding of a respective previously-decoded picture in the video decoder, the method comprising: for decoding a current picture of the video, selecting one of the at least two stored context model states from the buffer; initializing the context model for context-adaptively decoding the current picture using the selected one of the at least two stored context model states; and context-adaptively entropy decoding the bitstream to reconstruct the current picture. 13. The method claimed in claim 12, further comprising storing, in the buffer, an updated context model state associated with the current picture after the context-adaptively entropy decoding. 14. The method claimed in claim 12, further comprising, for each of the respective previously-decoded pictures, context-adaptively entropy decoding that picture, wherein the context-adaptive entropy decoding includes progressively updating a context model state as bins are coded, and after context-adaptive entropy decoding of that picture, storing the updated context model state in the buffer as one of the at least two context model states. 15. The method claimed in claim 12, wherein said selecting is based on similarity in QP values used for the current picture and for the previously-decoded picture. 16. 
The method claimed in claim 15, wherein selecting includes selecting a context model state from the two or more context model states on the basis that its respective previously-decoded picture used a QP value closer to the QP value used in the current picture than any of the other respective previously-decoded pictures for which context model states are stored in the buffer. 17. The method claimed in claim 12, wherein said selecting is based on the current picture and the previously-decoded picture having the same picture type. 18. The method claimed in claim 12, wherein said selecting is based on the previously-decoded picture being a reference picture for the current picture. 19. The method claimed in claim 12, wherein said selecting is based on the current picture and the previously-decoded picture being on the same layer of a hierarchical layer structure defined for the video. 20. The method claimed in claim 12, wherein said selecting includes selecting a context model state from the two or more context model states on the basis that its respective previously-decoded picture is closer temporally to the current picture than any of the other respective previously-decoded pictures for which context model states are stored in the buffer. 21. The method claimed in claim 12, further comprising, prior to selecting, determining that the context model is not to be initialized using the pre-defined context model state for the current picture. 22. The method claimed in claim 12, wherein said selecting, initializing and context-adaptively entropy decoding is applied on a slice-by-slice basis within the current picture. 23. 
A decoder for decoding a bitstream of encoded video, the encoded video having been encoded using context-adaptive entropy encoding using a context model, the context model having a context model state defining the probability associated with each context defined in the context model, the decoder comprising: a processor; a memory storing a pre-defined context model state for initialization of the context model; a buffer storing at least two context model states each being the context model state after context-adaptive decoding of a respective previously-decoded picture in the video decoder; and a decoding application stored in memory and containing instructions executable by the processor to perform the method claimed in claim 12. 24. A non-transitory processor-readable medium storing processor-executable instructions which, when executed, cause one or more processors to perform the method claimed in claim 12. 25. An encoder for encoding video, the encoder employing context-adaptive entropy encoding using a context model, the context model having a context model state defining the probability associated with each context defined in the context model, the encoder comprising: a processor; a memory storing a pre-defined context model state for initialization of the context model; a buffer storing at least two context model states each being the context model state after context-adaptive encoding of a respective previously-encoded picture in the video encoder; and an encoding application stored in memory and containing instructions executable by the processor to perform the method claimed in claim 1. 26. A non-transitory processor-readable medium storing processor-executable instructions which, when executed, cause one or more processors to perform the method claimed in claim 1. 27. 
A method of encoding video using a video encoder, the video encoder employing context-adaptive entropy encoding using a context model, the context model having a context model state defining the probability associated with each context defined in the context model, the video encoder storing a pre-defined context model state for initialization of the context model, and the video including a series of pictures, the method comprising: for a subset of the pictures in the series, initializing the context model for context-adaptively entropy encoding a picture in the subset using the pre-defined context model state, context-adaptively entropy encoding that picture to produce a bitstream of encoded data, wherein the context-adaptively entropy encoding includes updating the context model state during encoding, and storing the updated context model state in a buffer; and then, for each of the remaining pictures in the series, initializing the context model for context-adaptively entropy encoding that picture using one of the stored context model states from the buffer, and context-adaptively entropy encoding that picture.
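Claims 4-9 select a buffered context model state by similarity to the current picture; one plausible combined ordering is QP closeness first (claims 4-5) with temporal proximity as the tie-breaker (claim 9). A minimal sketch under that assumption; the lexicographic combination and all names are hypothetical, not the claimed method:

```python
def select_context_state(stored, current_qp, current_poc):
    """Pick the buffered context model state whose source picture used a
    QP closest to the current picture's QP; ties are broken by temporal
    proximity (picture order count). `stored` is a list of
    (qp, picture_order_count, state) tuples."""
    qp, poc, state = min(
        stored,
        key=lambda entry: (abs(entry[0] - current_qp),
                           abs(entry[1] - current_poc)))
    return state
```

Initialization then copies the selected state into the entropy coder before the first bin of the current picture (or slice, per claim 11) is coded, instead of using the pre-defined default state.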
2,400
7,973
7,973
14,602,786
2,473
A microwave backhaul transceiver comprises a plurality of antenna arrays, positioning circuitry, and signal processing circuitry. The microwave backhaul transceiver may determine, via the positioning circuitry, a location of the microwave backhaul transceiver. The microwave backhaul transceiver may generate, via the signal processing circuitry, a beacon signal that uniquely indicates the location. The microwave backhaul transceiver may transmit the beacon signal via at least one of the antenna arrays. The beacon signal may be generated using a spreading code generated from a unique identifier of the location. The unique identifier of the location may comprise global positioning system coordinates and/or a street address. During the transmitting, a directionality at which the beacon radiates from a particular one of the antenna arrays may be varied such that the beacon is transmitted in multiple directions from the particular one of the antenna arrays.
1. A method comprising: in a microwave backhaul transceiver comprising a plurality of antenna arrays, positioning circuitry, and signal processing circuitry: determining, via said positioning circuitry, a location of said microwave backhaul transceiver; generating, via said signal processing circuitry, a beacon signal that uniquely indicates said location; and transmitting said beacon signal via at least one of said antenna arrays. 2. The method of claim 1, comprising generating said beacon signal using a spreading code generated from a unique identifier of said location. 3. The method of claim 2, wherein said unique identifier of said location comprises global positioning system coordinates. 4. The method of claim 2, wherein said unique identifier of said location comprises a street address. 5. The method of claim 1, wherein said microwave backhaul transceiver comprises a plurality of front-end circuits, the method comprising controlling, by each of said front-end circuits, a directionality at which said beacon radiates from a respective one of said antenna arrays. 6. The method of claim 1, comprising, during said transmitting, varying a directionality at which said beacon radiates from a particular one of said antenna arrays such that said beacon is transmitted in multiple directions from said particular one of said antenna arrays. 7. The method of claim 1, comprising determining, by said signal processing circuitry, a plurality of spreading codes of possible link partners based on said location, wherein said plurality of spreading codes are a subset of all possible spreading codes. 8. The method of claim 7, comprising: receiving a signal via at least one of said antenna arrays; processing said received signal with said plurality of spreading codes to generate a plurality of despread signals; and determining a source of said received signal based on said plurality of despread signals. 9. 
A method comprising: in a microwave backhaul transceiver comprising an antenna array, positioning circuitry, and signal processing circuitry: determining a first unique location identifier for a location of said microwave backhaul transceiver; determining a plurality of second unique location identifiers that are within communication range of said first unique location identifier; generating a plurality of first spreading codes based on said plurality of second unique location identifiers; and using said plurality of first spreading codes for processing signals received via said antenna array. 10. The method of claim 9, wherein each of said second unique identifiers comprises global positioning system coordinates. 11. The method of claim 9, wherein each of said second unique identifiers comprises a street address. 12. The method of claim 9, comprising: generating a second spreading code based on said first unique location identifier; generating a beacon using said second spreading code; and transmitting said beacon via said antenna array. 13. A system comprising: a microwave backhaul transceiver comprising a plurality of antenna arrays, positioning circuitry, and signal processing circuitry, wherein: said positioning circuitry is operable to determine a location of said microwave backhaul transceiver; and said signal processing circuitry is operable to generate, for transmission via at least one of said antenna arrays, a beacon signal that uniquely indicates said location. 14. The system of claim 13, wherein said signal processing circuitry is operable to generate said beacon signal using a spreading code generated from a unique identifier of said location. 15. The system of claim 14, wherein said unique identifier of said location comprises global positioning system coordinates. 16. The system of claim 14, wherein said unique identifier of said location comprises a street address. 17. 
The system of claim 13, wherein: said microwave backhaul transceiver comprises a plurality of front-end circuits; and each of said front-end circuits is operable to control a directionality at which said beacon radiates from a respective one of said antenna arrays. 18. The system of claim 13, wherein: said microwave backhaul transceiver comprises a plurality of front-end circuits; and each of said front-end circuits is operable to vary a directionality at which said beacon radiates from a particular one of said antenna arrays such that said beacon is transmitted in multiple directions from said particular one of said antenna arrays. 19. The system of claim 13, wherein said signal processing circuitry is operable to determine a plurality of spreading codes of possible link partners based on said location, wherein said plurality of spreading codes are a subset of all possible spreading codes. 20. The system of claim 19, wherein said signal processing circuitry is operable to: process a received signal with said plurality of spreading codes to generate a plurality of despread signals; and determine a source of said received signal based on said plurality of despread signals.
A microwave backhaul transceiver comprises a plurality of antenna arrays, positioning circuitry, and signal processing circuitry. The microwave backhaul transceiver may determine, via the positioning circuitry, a location of the microwave backhaul transceiver. The microwave backhaul transceiver may generate, via the signal processing circuitry, a beacon signal that uniquely indicates the location. The microwave backhaul transceiver may transmit the beacon signal via at least one of the antenna arrays. The beacon signal may be generated using a spreading code generated from a unique identifier of the location. The unique identifier of the location may comprise global positioning system coordinates and/or a street address. During the transmitting, a directionality at which the beacon radiates from a particular one of the antenna arrays may be varied such that the beacon is transmitted in multiple directions from the particular one of the antenna arrays.1. A method comprising: in a microwave backhaul transceiver comprising a plurality of antenna arrays, positioning circuitry, and signal processing circuitry: determining, via said positioning circuitry, a location of said microwave backhaul transceiver; generating, via said signal processing circuitry, a beacon signal that uniquely indicates said location; and transmitting said beacon signal via at least one of said antenna arrays. 2. The method of claim 1, comprising generating said beacon signal using a spreading code generated from a unique identifier of said location. 3. The method of claim 2, wherein said unique identifier of said location comprises global positioning system coordinates. 4. The method of claim 2, wherein said unique identifier of said location comprises a street address. 5. 
The method of claim 1, wherein said microwave backhaul transceiver comprises a plurality of front-end circuits, the method comprising controlling, by each of said front-end circuits, a directionality at which said beacon radiates from a respective one of said antenna arrays. 6. The method of claim 1, comprising, during said transmitting, varying a directionality at which said beacon radiates from a particular one of said antenna arrays such that said beacon is transmitted in multiple directions from said particular one of said antenna arrays. 7. The method of claim 1, comprising determining, by said signal processing circuitry, a plurality of spreading codes of possible link partners based on said location, wherein said plurality of spreading codes are a subset of all possible spreading codes. 8. The method of claim 7, comprising: receiving a signal via at least one of said antenna arrays; processing said received signal with said plurality of spreading codes to generate a plurality of despread signals; and determining a source of said received signal based on said plurality of despread signals. 9. A method comprising: in a microwave backhaul transceiver comprising an antenna array, positioning circuitry, and signal processing circuitry: determining a first unique location identifier for a location of said microwave backhaul transceiver; determining a plurality of second unique location identifiers that are within communication range of said first unique location identifier; generating a plurality of first spreading codes based on said plurality of second unique location identifiers; and using said plurality of first spreading codes for processing signals received via said antenna array. 10. The method of claim 9, wherein each of said second unique identifiers comprises global positioning system coordinates. 11. The method of claim 9, wherein each of said second unique identifiers comprises a street address. 12. 
The method of claim 9, comprising: generating a second spreading code based on said first unique location identifier; generating a beacon using said second spreading code; and transmitting said beacon via said antenna array. 13. A system comprising: a microwave backhaul transceiver comprising a plurality of antenna arrays, positioning circuitry, and signal processing circuitry, wherein: said positioning circuitry is operable to determine a location of said microwave backhaul transceiver; and said signal processing circuitry is operable to generate, for transmission via at least one of said antenna arrays, a beacon signal that uniquely indicates said location. 14. The system of claim 13, wherein said signal processing circuitry is operable to generate said beacon signal using a spreading code generated from a unique identifier of said location. 15. The system of claim 14, wherein said unique identifier of said location comprises global positioning system coordinates. 16. The system of claim 14, wherein said unique identifier of said location comprises a street address. 17. The system of claim 13, wherein: said microwave backhaul transceiver comprises a plurality of front-end circuits; and each of said front-end circuits is operable to control a directionality at which said beacon radiates from a respective one of said antenna arrays. 18. The system of claim 13, wherein: said microwave backhaul transceiver comprises a plurality of front-end circuits; and each of said front-end circuits is operable to vary a directionality at which said beacon radiates from a particular one of said antenna arrays such that said beacon is transmitted in multiple directions from said particular one of said antenna arrays. 19. The system of claim 13, wherein said signal processing circuitry is operable to determine a plurality of spreading codes of possible link partners based on said location, wherein said plurality of spreading codes are a subset of all possible spreading codes. 20. 
The system of claim 19, wherein said signal processing circuitry is operable to: process a received signal with said plurality of spreading codes to generate a plurality of despread signals; and determine a source of said received signal based on said plurality of despread signals.
2,400
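The backhaul record above describes deriving a spreading code from a unique location identifier, transmitting a beacon spread with it, and despreading received signals with only the codes of plausible link partners (a subset of all possible codes). Below is a minimal sketch of that idea under stated assumptions: the code derivation (SHA-256 of the identifier seeding a ±1 chip sequence), the 64-chip length, and all function names are hypothetical, chosen only to illustrate the claimed flow.

```python
import hashlib
import random

def spreading_code(location_id, length=64):
    # Derive a deterministic +/-1 chip sequence from a unique location
    # identifier (e.g. GPS coordinates or a street address).
    seed = int.from_bytes(hashlib.sha256(location_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(length)]

def despread(signal, code):
    # Normalized correlation; equals 1.0 when the code matches the beacon.
    return sum(s * c for s, c in zip(signal, code)) / len(code)

def identify_source(signal, candidate_ids):
    # Try only the spreading codes of plausible link partners (a subset
    # of all possible codes, chosen by proximity) and pick the best match.
    scores = {loc: despread(signal, spreading_code(loc)) for loc in candidate_ids}
    return max(scores, key=scores.get)

beacon = spreading_code("40.7128N,74.0060W")
neighbors = ["40.7128N,74.0060W", "40.7130N,74.0050W", "40.7100N,74.0100W"]
source = identify_source(beacon, neighbors)  # the transmitting location wins
```

Because both ends can derive the same code from a public location identifier, a receiver only needs the identifiers of transceivers within communication range to narrow its despreading search, which is the efficiency the claims are after.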
7,974
7,974
15,246,635
2,439
A method and system for managing a protective distribution system is disclosed. In some embodiments, a physical information transmission line may be monitored. A disturbance on the physical information transmission line may be detected. The detected disturbance may not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances. Responsive to the detection, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold may be determined. A determination of whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold may be effectuated. The second preset threshold may correspond to a preset number of allowable disturbances within the preset time period. An alert of the first alert type may be triggered responsive to a determination that the count exceeds the second preset threshold.
1. A method for managing a protective distribution system, comprising: monitoring a physical information transmission line; detecting, via one or more sensors, a disturbance on the physical information transmission line, wherein the detected disturbance does not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances; determining, responsive to the detection via the one or more sensors, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold; determining whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold, wherein the second preset threshold corresponds to a preset number of allowable disturbances within the preset time period; and triggering an alert of the first alert type responsive to a determination that the count exceeds the second preset threshold. 2. The method of claim 1, wherein (1) the detected disturbance is of a first disturbance type, (2) the first preset threshold is a threshold for triggering alerts of the first alert type based on detected disturbances of the first disturbance type, and (3) the second preset threshold corresponds to a preset number of allowable disturbances of the first disturbance type within the preset time period. 3. The method of claim 2, wherein the detected disturbance of the first disturbance type comprises at least one of a vibration, a frequency change, an acoustic change, and a change in distance based on reflectometer reading, and a disturbance of a second disturbance type comprises at least a different one of a vibration, a frequency change, an acoustic change, and a change in distance based on reflectometer reading. 4. 
The method of claim 1, wherein the detected disturbance comprises a vibration-related disturbance, and wherein the first preset threshold is a threshold for triggering alerts based on detected vibration-related disturbances. 5. The method of claim 1, wherein the detected disturbance comprises a frequency-change-related disturbance, and wherein the first preset threshold is a threshold for triggering alerts based on detected frequency-change-related disturbances. 6. The method of claim 1, wherein the detected disturbance comprises an acoustic-change-related disturbance, and wherein the first preset threshold is a threshold for triggering alerts based on detected acoustic-change-related disturbances. 7. The method of claim 1, wherein the detected disturbance comprises a disturbance related to a change in signal propagation distance, and wherein the first preset threshold is a threshold for triggering alerts based on detected disturbances related to a change in signal propagation distance. 8. The method of claim 1, further comprising: initiating a response to the detected disturbance responsive to the triggering of the alert of the first alert type, wherein the response comprises at least one of opening a case, dispatching an investigator to investigate the detected disturbance, and documenting the investigation in the case. 9. The method of claim 1, further comprising: initiating a response to the detected disturbance responsive to the triggering of the alert of the first alert type, wherein the response comprises at least one of disabling a data collection, adjusting a physical security device, rerouting a data collection, and performing network analysis. 10. The method of claim 1, further comprising: causing the detected disturbance to be presented in comparison to the first preset threshold in real-time responsive to the detection of the disturbance. 11. 
The method of claim 1, wherein the count is for the number of disturbances within the preset time period that do not exceed the first preset threshold, but exceeds a third preset threshold. 12. A system for managing a protective distribution system, comprising: a computer system comprising one or more processors programmed to execute computer program instructions which, when executed, cause the computer system to: monitor a physical information transmission line; detect, via one or more sensors, a disturbance on the physical information transmission line, wherein the detected disturbance does not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances; determine, responsive to the detection via the one or more sensors, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold; determine whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold, wherein the second preset threshold corresponds to a preset number of allowable disturbances within the preset time period; and trigger an alert of the first alert type responsive to a determination that the count exceeds the second preset threshold. 13. The system of claim 12, further comprising: an intrusion detector coupled to the computer system, wherein the disturbance is detected by the computer system via the intrusion detector. 14. The system of claim 13, further comprising: an optical line terminal or network switch; an optical circuit switch; an optical test access point device; a network analytic tool; and a video camera. 15. 
The system of claim 12, wherein (1) the detected disturbance is of a first disturbance type, (2) the first preset threshold is a threshold for triggering alerts of the first alert type based on detected disturbances of the first disturbance type, and (3) the second preset threshold corresponds to a preset number of allowable disturbances of the first disturbance type within the preset time period. 16. The system of claim 15, wherein the detected disturbance of the first disturbance type comprises at least one of a vibration, a frequency change, an acoustic change, and a change in signal propagation distance, and a disturbance of a second disturbance type comprises at least a different one of a vibration, a frequency change, an acoustic change, and a change in signal propagation distance. 17. The system of claim 12, wherein the computer system is further caused to: initiate a response to the detected disturbance responsive to the triggering of the alert of the first alert type, wherein the response comprises at least one of opening a case, dispatching an investigator to investigate the detected disturbance, documenting the investigation in the case, disabling a data collection, adjusting a physical security device, rerouting a data collection, and performing network analysis. 18. The system of claim 12, wherein the computer system is further caused to: cause the detected disturbance to be presented in comparison to the first preset threshold in real-time responsive to the detection of the disturbance. 19. The system of claim 12, wherein the count is for the number of disturbances within the preset time period that do not exceed the first preset threshold, but exceeds a third preset threshold. 20. 
A non-transitory computer-readable medium for storing computer instructions therein, the computer-readable medium comprising a set of instructions which when executed causes a processor to perform a method for managing a protective distribution system, the method comprising: monitoring a physical information transmission line; detecting, via one or more sensors, a disturbance on the physical information transmission line, wherein the detected disturbance does not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances; determining, responsive to the detection via the one or more sensors, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold; determining whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold, wherein the second preset threshold corresponds to a preset number of allowable disturbances within the preset time period; and triggering an alert of the first alert type responsive to a determination that the count exceeds the second preset threshold.
A method and system for managing a protective distribution system is disclosed. In some embodiments, a physical information transmission line may be monitored. A disturbance on the physical information transmission line may be detected. The detected disturbance may not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances. Responsive to the detection, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold may be determined. A determination of whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold may be effectuated. The second preset threshold may correspond to a preset number of allowable disturbances within the preset time period. An alert of the first alert type may be triggered responsive to a determination that the count exceeds the second preset threshold.1. A method for managing a protective distribution system, comprising: monitoring a physical information transmission line; detecting, via one or more sensors, a disturbance on the physical information transmission line, wherein the detected disturbance does not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances; determining, responsive to the detection via the one or more sensors, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold; determining whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold, wherein the second preset threshold corresponds to a preset number of allowable disturbances within the preset time period; and triggering an alert of the first alert type responsive to a determination that the count exceeds the second preset threshold. 2. 
The method of claim 1, wherein (1) the detected disturbance is of a first disturbance type, (2) the first preset threshold is a threshold for triggering alerts of the first alert type based on detected disturbances of the first disturbance type, and (3) the second preset threshold corresponds to a preset number of allowable disturbances of the first disturbance type within the preset time period. 3. The method of claim 2, wherein the detected disturbance of the first disturbance type comprises at least one of a vibration, a frequency change, an acoustic change, and a change in distance based on reflectometer reading, and a disturbance of a second disturbance type comprises at least a different one of a vibration, a frequency change, an acoustic change, and a change in distance based on reflectometer reading. 4. The method of claim 1, wherein the detected disturbance comprises a vibration-related disturbance, and wherein the first preset threshold is a threshold for triggering alerts based on detected vibration-related disturbances. 5. The method of claim 1, wherein the detected disturbance comprises a frequency-change-related disturbance, and wherein the first preset threshold is a threshold for triggering alerts based on detected frequency-change-related disturbances. 6. The method of claim 1, wherein the detected disturbance comprises an acoustic-change-related disturbance, and wherein the first preset threshold is a threshold for triggering alerts based on detected acoustic-change-related disturbances. 7. The method of claim 1, wherein the detected disturbance comprises a disturbance related to a change in signal propagation distance, and wherein the first preset threshold is a threshold for triggering alerts based on detected disturbances related to a change in signal propagation distance. 8. 
The method of claim 1, further comprising: initiating a response to the detected disturbance responsive to the triggering of the alert of the first alert type, wherein the response comprises at least one of opening a case, dispatching an investigator to investigate the detected disturbance, and documenting the investigation in the case. 9. The method of claim 1, further comprising: initiating a response to the detected disturbance responsive to the triggering of the alert of the first alert type, wherein the response comprises at least one of disabling a data collection, adjusting a physical security device, rerouting a data collection, and performing network analysis. 10. The method of claim 1, further comprising: causing the detected disturbance to be presented in comparison to the first preset threshold in real-time responsive to the detection of the disturbance. 11. The method of claim 1, wherein the count is for the number of disturbances within the preset time period that do not exceed the first preset threshold, but exceeds a third preset threshold. 12. 
A system for managing a protective distribution system, comprising: a computer system comprising one or more processors programmed to execute computer program instructions which, when executed, cause the computer system to: monitor a physical information transmission line; detect, via one or more sensors, a disturbance on the physical information transmission line, wherein the detected disturbance does not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances; determine, responsive to the detection via the one or more sensors, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold; determine whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold, wherein the second preset threshold corresponds to a preset number of allowable disturbances within the preset time period; and trigger an alert of the first alert type responsive to a determination that the count exceeds the second preset threshold. 13. The system of claim 12, further comprising: an intrusion detector coupled to the computer system, wherein the disturbance is detected by the computer system via the intrusion detector. 14. The system of claim 13, further comprising: an optical line terminal or network switch; an optical circuit switch; an optical test access point device; a network analytic tool; and a video camera. 15. The system of claim 12, wherein (1) the detected disturbance is of a first disturbance type, (2) the first preset threshold is a threshold for triggering alerts of the first alert type based on detected disturbances of the first disturbance type, and (3) the second preset threshold corresponds to a preset number of allowable disturbances of the first disturbance type within the preset time period. 16. 
The system of claim 15, wherein the detected disturbance of the first disturbance type comprises at least one of a vibration, a frequency change, an acoustic change, and a change in signal propagation distance, and a disturbance of a second disturbance type comprises at least a different one of a vibration, a frequency change, an acoustic change, and a change in signal propagation distance. 17. The system of claim 12, wherein the computer system is further caused to: initiate a response to the detected disturbance responsive to the triggering of the alert of the first alert type, wherein the response comprises at least one of opening a case, dispatching an investigator to investigate the detected disturbance, documenting the investigation in the case, disabling a data collection, adjusting a physical security device, rerouting a data collection, and performing network analysis. 18. The system of claim 12, wherein the computer system is further caused to: cause the detected disturbance to be presented in comparison to the first preset threshold in real-time responsive to the detection of the disturbance. 19. The system of claim 12, wherein the count is for the number of disturbances within the preset time period that do not exceed the first preset threshold, but exceeds a third preset threshold. 20. 
A non-transitory computer-readable medium for storing computer instructions therein, the computer-readable medium comprising a set of instructions which when executed causes a processor to perform a method for managing a protective distribution system, the method comprising: monitoring a physical information transmission line; detecting, via one or more sensors, a disturbance on the physical information transmission line, wherein the detected disturbance does not exceed a first preset threshold for triggering alerts of a first alert type based on detected disturbances; determining, responsive to the detection via the one or more sensors, a count for the number of disturbances within a preset time period that do not exceed the first preset threshold; determining whether the count, for the number of disturbances that do not exceed the first preset threshold, exceeds a second preset threshold, wherein the second preset threshold corresponds to a preset number of allowable disturbances within the preset time period; and triggering an alert of the first alert type responsive to a determination that the count exceeds the second preset threshold.
2,400
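The protective-distribution-system record above turns on a two-threshold rule: a single disturbance exceeding a magnitude threshold alerts immediately, while sub-threshold disturbances are counted over a preset time window and alert only when their count exceeds a second threshold. A minimal sketch of that logic follows; the class name, alert labels, and the specific threshold/window values are illustrative, not taken from the source.

```python
from collections import deque

class DisturbanceMonitor:
    """Counts sub-threshold disturbances in a sliding time window and
    alerts when too many accumulate (a minimal sketch; thresholds and
    window are illustrative)."""

    def __init__(self, first_threshold, count_threshold, window_seconds):
        self.first_threshold = first_threshold  # magnitude that alone triggers an alert
        self.count_threshold = count_threshold  # allowable sub-threshold events per window
        self.window = window_seconds
        self.events = deque()                   # timestamps of sub-threshold events

    def observe(self, timestamp, magnitude):
        if magnitude > self.first_threshold:
            return "alert:magnitude"            # single large disturbance
        # Record the sub-threshold disturbance, dropping events
        # that have aged out of the sliding window.
        self.events.append(timestamp)
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        if len(self.events) > self.count_threshold:
            return "alert:count"                # too many small disturbances
        return None

mon = DisturbanceMonitor(first_threshold=10.0, count_threshold=3, window_seconds=60)
results = [mon.observe(t, 2.0) for t in (0, 10, 20, 30)]
# First three small disturbances pass quietly; the fourth trips the count alert.
```

The design choice worth noting is that the count rule catches slow, deliberate tampering (many small taps) that would never cross the magnitude threshold on its own.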
7,975
7,975
14,585,475
2,458
An apparatus and method expedite configuration and deployment of a scalable cloud computing environment. An environment configuration mechanism (ECM) in a cloud manager provides a number of pre-configured virtual servers as embedded cloud environments. The embedded clouds can be quickly utilized by a system administrator with minimal or no configuration to deploy cloud workloads. The embedded clouds use similarly embedded controllers and hosts. As these embedded clouds begin to use additional resources, the ECM dynamically relocates embedded cloud elements from the embedded cloud to a more permanent location on dedicated hardware as attached controllers and hosts.
1-9. (canceled) 10. A method for expediting configuration and deployment of a cloud computing environment comprising: predeploying a plurality of embedded clouds with at least one embedded cloud element on a physical server; preconfiguring the plurality of embedded clouds with a minimal set of cloud resources, where the minimal set of cloud resources includes central processing unit resources and memory resources such that multiple embedded clouds can be predeployed on the physical server; allowing a user to use a predeployed embedded cloud of the plurality of predeployed embedded clouds to provision workloads; and relocating the embedded cloud element to permanent physical hardware when a resource loading of the embedded cloud element exceeds a threshold. 11. The method of claim 10 wherein the step of predeploying the plurality of embedded clouds further comprises: identifying an available server; deploying a hypervisor on the available server; deploying an embedded controller as a virtual machine; deploying an embedded host and registering the embedded host to the embedded controller; and providing the cloud controller to a user to start deploying workloads. 12. The method of claim 10 wherein the step of relocating the embedded cloud to permanent physical hardware further comprises: determining the embedded cloud element is a host; deploying a hypervisor on a new physical server; adding a new host into the cloud controller; and relocating workloads into the new host. 13. The method of claim 10 wherein the step of relocating the embedded cloud to permanent physical hardware further comprises: determining the embedded cloud element is a controller; deploying a hypervisor and a new cloud controller on a new physical server; and dedicating physical resources of the new physical server to the new cloud controller. 14. The method of claim 10 further comprising monitoring resource loading of the embedded cloud element on the physical server. 15. 
The method of claim 14 wherein the resource loading of the embedded cloud element comprises CPU utilization over a period of time. 16. The method of claim 14 wherein the resource loading of the embedded cloud element on the physical machine comprises disk utilization and network utilization. 17. The method of claim 10 further comprising a user setting the threshold. 18. A method for expediting configuration and deployment of a cloud computing environment comprising: predeploying a plurality of embedded clouds with at least one embedded cloud element on a physical server comprising the steps of: identifying an available server; deploying a hypervisor on the available server; deploying an embedded controller as a virtual machine; deploying an embedded host and registering the embedded host to the embedded controller; preconfiguring the plurality of embedded clouds with a minimal set of cloud resources, where the minimal set of cloud resources includes central processing unit resources and memory resources such that multiple embedded clouds can be predeployed on the physical server; allowing a user to use a predeployed embedded cloud of the plurality of predeployed embedded clouds to provision workloads, monitoring resource loading of the embedded cloud element on the physical server; and relocating the embedded cloud element to permanent physical hardware when a resource loading of the embedded cloud element exceeds a threshold, wherein the relocating comprises: determining the embedded cloud element is a host; deploying a hypervisor on a new physical server; adding a new host into the cloud controller; and relocating workloads into the new host. 19. 
The method of claim 18 wherein relocating the embedded cloud to permanent physical hardware further comprises: determining the embedded cloud element is a controller; deploying a hypervisor and a new cloud controller on a new physical server; and dedicating physical resources of the new physical server to the new cloud controller. 20. The method of claim 18 wherein the resource loading of the embedded cloud element comprises CPU utilization over a period of time.
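The relocation trigger in claims 10 and 14-17 (monitor an embedded cloud element's resource loading and relocate it to dedicated hardware once a user-set threshold is exceeded) can be sketched as follows. This is a minimal illustration, not the patented implementation; all class, function, and variable names are assumptions.

```python
# Hypothetical sketch of the threshold-based relocation decision: monitor an
# embedded cloud element's CPU utilization over a period of time (claim 15)
# and flag it for relocation when the average exceeds a user-set threshold
# (claim 17). Names are illustrative, not from the patent.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class EmbeddedCloudElement:
    name: str                                         # e.g. embedded controller or host
    cpu_samples: list = field(default_factory=list)   # CPU utilization samples over time

    def record_cpu(self, utilization: float) -> None:
        self.cpu_samples.append(utilization)

def needs_relocation(element: EmbeddedCloudElement, threshold: float) -> bool:
    """True when average CPU utilization over the sampled period exceeds the threshold."""
    if not element.cpu_samples:
        return False
    return mean(element.cpu_samples) > threshold

host = EmbeddedCloudElement("embedded-host")
for u in (0.2, 0.9, 0.95, 0.92):
    host.record_cpu(u)
print(needs_relocation(host, threshold=0.7))  # mean ≈ 0.74 → True
```

Claim 16's disk and network utilization could be folded in the same way, as additional sample lists checked against their own thresholds.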
2,400
7,976
7,976
14,586,078
2,458
An apparatus and method expedite configuration and deployment of a scalable cloud computing environment. An environment configuration mechanism (ECM) in a cloud manager provides a number of pre-configured virtual servers as embedded cloud environments. The embedded clouds can be quickly utilized by a system administrator with minimal or no configuration to deploy cloud workloads. The embedded clouds use similarly embedded controllers and hosts. As these embedded clouds begin to use additional resources, the ECM dynamically relocates embedded cloud elements from the embedded cloud to a more permanent location on dedicated hardware as attached controllers and hosts.
1. An apparatus comprising: at least one processor; a memory coupled to the at least one processor; an environment configuration mechanism residing in the memory and executed by the at least one processor, wherein the environment configuration mechanism predeploys and preconfigures a plurality of embedded clouds with a plurality of embedded cloud elements on a physical server, and wherein the environment configuration mechanism allows a user to use an embedded cloud of the plurality of embedded clouds to provision workloads, and relocates an embedded cloud element of the plurality of embedded cloud elements to permanent physical hardware when a resource loading of the embedded cloud element exceeds a threshold. 2. The apparatus of claim 1 wherein the environment configuration mechanism monitors resource loading of the embedded cloud element on the physical server. 3. The apparatus of claim 2 wherein the resource loading of the embedded cloud element comprises CPU utilization over a period of time. 4. The apparatus of claim 2 wherein the resource loading of the embedded cloud element on the physical machine comprises disk utilization and network utilization. 5. The apparatus of claim 1 wherein a user sets the threshold. 6. The apparatus of claim 1 wherein the environment configuration mechanism preconfigures the embedded cloud with a minimal set of cloud resources, where the minimal set of resources includes central processing unit resources and memory resources such that multiple embedded clouds can be predeployed on the physical server. 7. The apparatus of claim 1 wherein the environment configuration mechanism predeploys the plurality of embedded clouds by identifying an available server, deploying a hypervisor on the available server, deploying an embedded controller as a virtual machine, deploying an embedded host and registering the embedded host to the embedded controller. 8. 
The apparatus of claim 1 wherein the environment configuration mechanism determines the embedded cloud element is a host and relocates the host cloud element by deploying a hypervisor on a new physical server, adding a new host into the cloud controller and relocating workloads into the new host. 9. The apparatus of claim 1 wherein the environment configuration mechanism determines the embedded cloud element is a controller and relocates the controller cloud element by deploying a hypervisor and a new cloud controller on a new physical server and dedicating physical resources of the new physical server to the new cloud controller.
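The predeployment sequence in apparatus claim 7 (identify an available server, deploy a hypervisor, deploy an embedded controller as a virtual machine, then deploy and register an embedded host) can be sketched as a simple ordered flow. Every function and field name here is a hypothetical stand-in; the patent does not specify an API.

```python
# Illustrative sketch of the claim-7 predeployment sequence. The server
# records and the returned step descriptions are assumptions for demonstration.
def predeploy_embedded_cloud(servers: list) -> list:
    """Run the claim-7 steps in order and return a description of each."""
    server = next(s for s in servers if s["available"])    # identify an available server
    steps = [f"hypervisor deployed on {server['name']}"]   # deploy a hypervisor
    steps.append("embedded controller deployed as a VM")   # controller as a virtual machine
    steps.append("embedded host deployed and registered to controller")
    return steps

print(predeploy_embedded_cloud([{"name": "s1", "available": False},
                                {"name": "s2", "available": True}]))
```

The returned list mirrors the claim's ordering; a real ECM would presumably drive hypervisor and VM provisioning APIs at each step rather than return strings.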
2,400
7,977
7,977
13,992,974
2,448
A device for making multiple connections to a computer unit, the device comprising at least one main connection provided with a connector for connection to the computer unit and connected via a switch module to a plurality of secondary connections, each provided with an external connector and together presenting an overall data rate equal to a maximum data rate of the main connection. An assembly comprising a computer unit and pieces of equipment connected thereto by a multiple connection device.
1. A device for making multiple connections to a computer unit, the device comprising at least one main connection provided with a connector for connection to the computer unit and connected via a switch module to a plurality of secondary connections, each provided with an external connector and together presenting an overall data rate equal to a maximum data rate of the main connection. 2. The device according to claim 1, wherein the connections are of the Ethernet type. 3. The device according to claim 1, wherein each secondary connection includes a frame management module. 4. The device according to claim 3, wherein the frame management module is arranged to order frames. 5. The device according to claim 4, wherein the ordering is of the first-in, first-out type. 6. The device according to claim 4, wherein the frame management module is arranged to reject frames as a function of at least one predetermined criterion. 7. The device according to claim 3, wherein the frame management module has a configuration input connected via the main connection to the connector for connection to the computer unit. 8. The device according to claim 3, wherein the frame management module has a configuration input connected via an additional connection to a connector for connection to the computer unit. 9. The device according to claim 8, wherein the additional connection is of the Ethernet type. 10. The device according to claim 1, wherein the switch module is arranged to route transmitted frames to the secondary connections as a function of at least one of the following parameters: a source physical address appearing in each frame; and a tag contained in each frame. 11. 
An assembly comprising a computer unit and pieces of equipment connected to the computer unit via a device according to any preceding claim and comprising a main connection having a connector for connection to the computer unit and connected by a switch module to a plurality of secondary connections, each provided with a connector for connection to a respective piece of equipment, the computer unit and the pieces of equipment all being programmed to avoid transmitting frames over the secondary connections that are greater in size and number than predetermined thresholds so that the secondary connections convey an overall data rate that is substantially equal to a maximum data rate of the main connection.
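Claim 1's central constraint is that the secondary connections together present an overall data rate equal to the main connection's maximum rate. A minimal sketch of one policy satisfying that constraint (an even split, which the patent does not mandate) follows; the function name and units are assumptions.

```python
# Hypothetical sketch of the claim-1 rate-sharing constraint: split the main
# connection's maximum data rate across the secondary connections so that
# their overall rate equals the main rate. The even-split policy is illustrative.
def allocate_secondary_rates(main_rate_mbps: float, n_secondary: int) -> list:
    """Evenly divide the main link's maximum rate among secondary connections."""
    if n_secondary <= 0:
        raise ValueError("need at least one secondary connection")
    share = main_rate_mbps / n_secondary
    return [share] * n_secondary

rates = allocate_secondary_rates(1000.0, 4)  # e.g. a 1 Gbit/s main link, 4 ports
print(rates, sum(rates))
```

The claim-3 frame management modules would then enforce each port's share, e.g. by FIFO ordering (claim 5) and rejecting frames against predetermined criteria (claim 6).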
2,400
7,978
7,978
15,128,501
2,439
Information stored in a Hypertext Transfer Protocol (HTTP) session is monitored. Based on the monitoring, authentication information in the information stored in the HTTP session is identified.
1. A method comprising: determining, by a system including a processor, whether a request from an entity is an authentication request; in response to determining that the request is an authentication request, monitoring, by the system, information stored in a Hypertext Transfer Protocol (HTTP) session; and identifying, by the system based on the monitoring, authentication information in the information stored in the HTTP session. 2. The method of claim 1, wherein the determining comprises determining whether the request contains a string from among a collection of specified strings. 3. The method of claim 2, wherein determining whether the request contains a string from among the collection of specified strings comprises determining whether a uniform resource locator of the request contains a string from among the collection of specified strings. 4. The method of claim 2, wherein the collection of specified strings includes strings relating to credentials for authenticating a client. 5. The method of claim 1, wherein identifying the authentication information comprises identifying at least one credential. 6. The method of claim 5, wherein the at least one credential includes at least a username. 7. The method of claim 1, wherein monitoring the information comprises monitoring an application programming interface associated with the HTTP session, the application programming interface used for storing authentication information into a storage of the HTTP session. 8. The method of claim 1, further comprising: indicating a successful authentication attempt in response to identifying the authentication information in the information stored in the HTTP session. 9. The method of claim 1, further comprising: detecting, by the system, an unsuccessful authentication attempt in response to determining that a subsequent authentication request does not result in storage of authentication information in a storage of the HTTP session. 10. 
The method of claim 1, further comprising: detecting, by the system, a logoff event. 11. An article comprising at least one non-transitory machine-readable storage medium storing instructions that upon execution cause a system to: monitor information stored in a Hypertext Transfer Protocol (HTTP) session to identify authentication information including at least one credential; indicate a successful authentication attempt in response to identifying presence of the authentication information in the information stored in the HTTP session; and indicate an unsuccessful authentication attempt in response to failure to identify presence of the authentication information in the information stored in the HTTP session. 12. The article of claim 11, wherein the instructions upon execution cause the system to further: send, to an analysis computer, a log including events relating to successful authentication attempts and unsuccessful authentication attempts. 13. The article of claim 11, wherein monitoring the information comprises monitoring the information associated with a custom programming interface to a web application. 14. A system comprising: at least one processor to: determine whether a request from a client computer is an authentication request by determining whether a uniform resource locator of the request contains a string from among a collection of specified strings; in response to determining that the request is an authentication request, monitor information stored for a Hypertext Transfer Protocol (HTTP) session established between the client computer and a web application; identify, based on the monitoring, authentication information in the information stored for the HTTP session; and send, to an analysis computer, a log including events relating to successful authentication attempts and unsuccessful authentication attempts. 15. 
The system of claim 14, wherein the log sent to the analysis computer includes one or a combination of time information of an event; an Internet Protocol (IP) address; a uniform resource locator of the request; an HTTP request parameter; version information of a web browser at the client computer; an identifier of an HTTP session; and a result of an authentication attempt.
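The detection step in claims 2-4 and 14 (treat a request as an authentication request when its uniform resource locator contains a string from a collection of specified, credential-related strings) reduces to a substring check. The particular string collection below is an assumption; the patent only says the strings relate to credentials.

```python
# Minimal sketch of claims 2-4: classify a request as an authentication
# request when its URL contains any string from a specified collection.
# AUTH_STRINGS is a hypothetical example collection, not from the patent.
AUTH_STRINGS = ("login", "signin", "auth", "password")

def is_authentication_request(url: str) -> bool:
    """True if the request URL contains any of the specified strings."""
    lowered = url.lower()
    return any(s in lowered for s in AUTH_STRINGS)

print(is_authentication_request("https://example.com/app/login?user=a"))  # True
print(is_authentication_request("https://example.com/app/profile"))       # False
```

Only when this returns True would the system begin monitoring the HTTP session storage for authentication information, per claim 1.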
2,400
7,979
7,979
15,232,564
2,425
A method and apparatus for serving targeted advertising to a user via a user device are described, including receiving content provided by a content provider, rendering the provided content, viewing a commercial, wherein the commercial was spliced into the content based on a default commercial selection algorithm at a commercial break, determining if the user wants to rate the commercial, accepting the user's rating of the commercial if the user wants to rate the commercial, and communicating the user's rating of the commercial to the content provider.
1. A method, said method comprising: presenting first content including a first commercial, said first commercial having been selected based on a commercial selection algorithm; receiving a user rating of said presented first commercial; and presenting second content, said second content including a second commercial, said second commercial having been selected based on a modified commercial selection algorithm, said modified commercial selection algorithm having been modified based on said received user rating of said first commercial. 2. The method according to claim 1, further comprising: receiving said first content from a server; forwarding said rating to said server; and receiving said second content from said server. 3. The method according to claim 1, further comprising: receiving said first content from a server; receiving a plurality of commercials from an ad server; and receiving said second content from said server. 4. The method according to claim 2, wherein said server is operated by a content provider, wherein said content provider is a multiple system operator, an online provider or a broadcast provider. 5. The method according to claim 2, wherein said commercial selection algorithm is based on demographics or content or demographics and content. 6. The method according to claim 3, wherein a user's rating of said first commercial is provided to said ad server. 7. A method for a content provider to provide targeted advertising, said method comprising: providing first content including a first commercial, said first commercial having been selected based on a commercial selection algorithm; receiving a user rating of said first commercial; providing second content, said second content including a second commercial, said second commercial having been selected based on a modified commercial selection algorithm, said modified commercial selection algorithm having been modified based on said received user rating of said first commercial. 8. 
The method according to claim 7, wherein said first commercial is selected based on a user's profile as well as said commercial selection algorithm. 9. The method according to claim 7, wherein said content provider is a multiple system operator, an online provider or a broadcast provider. 10. The method according to claim 7, wherein said commercial selection algorithm is based on demographics or content or demographics and content. 11. The method according to claim 7, wherein a user's rating of said first commercial is provided to an ad server. 12. An apparatus, comprising: a display device for presenting first content including a first commercial having been selected based on a commercial selection algorithm, said first content including said first commercial having been received by an input signal receiver; a user interface for receiving a user rating of said presented first commercial; and said display device for presenting second content, said second content including a second commercial having been received by said input signal receiver, said second commercial having been selected based on a modified commercial selection algorithm, said modified commercial selection algorithm having been modified based on said received user rating of said first commercial. 13. The apparatus according to claim 12, further comprising: said input signal receiver, receiving said first content from a server; said input signal receiver, forwarding said rating to said server; and said input signal receiver, receiving said second content from said server. 14. The apparatus according to claim 12, further comprising: said input signal receiver, receiving said first content from a server; said input signal receiver, receiving a plurality of commercials from an ad server; and said input signal receiver, receiving said second content from said server. 15. 
The apparatus according to claim 13, wherein said server is operated by a content provider, wherein said content provider is a multiple system operator, an online provider or a broadcast provider. 16. The apparatus according to claim 12, wherein said commercial selection algorithm is based on demographics or content or demographics and content. 17. The apparatus according to claim 14, wherein a user's rating of said first commercial is provided to said ad server. 18. An apparatus for providing content, comprising: means for providing first content including a first commercial, said first commercial having been selected based on a commercial selection algorithm; means for receiving a user rating of said first commercial; means for providing second content, said second content including a second commercial, said second commercial having been selected based on a modified commercial selection algorithm, said modified commercial selection algorithm having been modified based on said received user rating of said first commercial. 19. The content provider according to claim 18, wherein said first commercial is selected based on a user's profile as well as said commercial selection algorithm. 20. The content provider according to claim 18, wherein said content provider is a multiple system operator, an online provider or a broadcast provider. 21. The content provider according to claim 18, wherein said commercial selection algorithm is based on demographics or content or demographics and content. 22. The content provider according to claim 18, wherein a user's rating of said first commercial is provided to an ad server.
A method and apparatus for serving targeted advertising to a user via a user device are described including receiving content provided by a content provider, rendering the provided content, viewing a commercial, wherein the commercial was spliced into the content based on a default commercial selection algorithm at a commercial break, determining if the user wants to rate the commercial, accepting the user's rating of the commercial if the user wants to rate the commercial and communicating the user's rating of the commercial to the content provider.
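The claims above describe a feedback loop: a commercial is chosen by a selection algorithm, the user rates it, and the algorithm is modified before the next selection. A minimal Python sketch of one such loop follows; `select_commercial`, `apply_rating`, and the multiplicative weight update are illustrative assumptions, not the patented algorithm.

```python
import random

def select_commercial(weights):
    """Pick a commercial id with probability proportional to its weight
    (a stand-in for the claimed 'commercial selection algorithm')."""
    ids = list(weights)
    total = sum(weights[i] for i in ids)
    return random.choices(ids, weights=[weights[i] / total for i in ids])[0]

def apply_rating(weights, commercial_id, rating, learning_rate=0.2):
    """Modify the selection algorithm based on a user rating (1-5):
    ratings above neutral (3) raise the commercial's weight, below lower it."""
    weights[commercial_id] = max(
        weights[commercial_id] * (1 + learning_rate * (rating - 3)), 0.01
    )
    return weights

# One pass through the claimed flow:
weights = {"ad_a": 1.0, "ad_b": 1.0}       # default algorithm state
first = select_commercial(weights)         # first commercial presented
weights = apply_rating(weights, first, 5)  # user rates it 5 of 5
second = select_commercial(weights)        # chosen by the modified algorithm
```

Claims 2-3 and 6 would wrap this loop in client-server traffic: the rating is forwarded to the server (or ad server), which holds and updates the weights.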
2,400
7,980
7,980
14,014,315
2,432
Methods, systems, and computer readable media for utilizing predetermined encryption keys in a test simulation environment are disclosed. In one embodiment, a method includes generating, prior to an initiation of an Internet protocol security (IPsec) test session, a private key and a public key at a traffic emulation device and storing the private key and the public key in a local storage associated with the traffic emulation device. The method further includes retrieving, from the local storage, the private key and the public key upon the initiation of the IPsec test session between the traffic emulation device and a device under test (DUT) and generating a shared secret key utilizing the retrieved private key and a DUT public key received from the DUT.
1. A method for utilizing predetermined key exchange data in a test simulation environment, the method comprising: generating, prior to an initiation of an Internet protocol security (IPsec) test session, a private key and a public key at a traffic emulation device; storing the private key and the public key in a local storage associated with the traffic emulation device; retrieving, from the local storage, the private key and the public key upon the initiation of the IPsec test session between the traffic emulation device and a device under test (DUT); and generating a shared secret key utilizing the retrieved private key and a DUT public key received from the DUT. 2. The method of claim 1 comprising determining, prior to generating the public key, at least one key exchange number. 3. The method of claim 2 wherein generating the public key includes deriving the public key using the at least one key exchange number. 4. The method of claim 3 comprising sending the public key and the at least one key exchange number to the DUT. 5. The method of claim 1 comprising receiving a DUT public key from the DUT upon the initiation of the IPsec test session. 6. The method of claim 1 wherein the IPsec test session is conducted between the traffic emulation device and the DUT at a network layer. 7. The method of claim 1 wherein the traffic emulation device functions as either a client entity or a server entity. 8. The method of claim 1 wherein the public key is generated using the private key and the at least one key exchange number. 9. The method of claim 1 wherein each of the private key, the public key, and the shared secret key is generated utilizing a Diffie-Hellman method. 10. The method of claim 1 wherein the DUT includes at least one of: a firewall device, a router device, a serving gateway (SGW), and a packet data network gateway (PGW). 11. 
The method of claim 1 comprising retrieving, at the traffic emulation device, the private key and the public key from the local storage upon the initiation of a second IPsec test session between the traffic emulation device and the DUT and generating a second shared secret key utilizing the retrieved private key and a second DUT public key received from the DUT. 12. A system for utilizing predetermined encryption key data in a test simulation environment, the system comprising: a device under test (DUT) configured to generate a DUT public key and to be subjected to an Internet protocol security (IPsec) test session; and a traffic emulation device configured to generate, prior to the initiation of the IPsec test session with the DUT, a private key and a public key, to store the private key and the public key in a local storage, to retrieve the private key and the public key from the local storage upon the initiation of the IPsec test session, and to generate a shared secret key utilizing the retrieved private key and a DUT public key received from the DUT. 13. The system of claim 12 wherein the traffic emulation device is further configured to determine, prior to generating the public key, at least one key exchange number. 14. The system of claim 13 wherein the traffic emulation device is further configured to derive the public key using the at least one key exchange number. 15. The system of claim 14 wherein the traffic emulation device is further configured to send the public key and the at least one key exchange number to the DUT. 16. The system of claim 12 wherein the traffic emulation device is further configured to receive a DUT public key from the DUT upon the initiation of the IPsec test session. 17. The system of claim 12 wherein the IPsec test session is conducted between the traffic emulation device and the DUT at a network layer. 18. The system of claim 12 wherein the traffic emulation device functions as either a client entity or a server entity. 19. 
The system of claim 12 wherein the public key is generated using the private key and the at least one key exchange number. 20. The system of claim 12 wherein each of the private key, the public key, and the shared secret key is generated utilizing a Diffie-Hellman method. 21. The system of claim 12 wherein the DUT includes at least one of: a firewall device, a router device, a serving gateway (SGW), and a packet data network gateway (PGW). 22. The system of claim 12 wherein the traffic emulation device is further configured to retrieve the private key and the public key from the local storage upon the initiation of a second IPsec test session between the traffic emulation device and the DUT and to generate a second shared secret key utilizing the private key and a second DUT public key received from the DUT. 23. A non-transitory computer readable medium having stored thereon executable instructions that when executed by the processor of a computer control the computer to perform steps comprising: generating, prior to an initiation of an Internet protocol security (IPsec) test session, a private key and a public key at a traffic emulation device; storing the private key and the public key in a local storage associated with the traffic emulation device; retrieving, from the local storage, the private key and the public key upon the initiation of the IPsec test session between the traffic emulation device and a device under test (DUT); and generating a shared secret key utilizing the retrieved private key and a DUT public key received from the DUT.
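The pre-generation and reuse flow of claims 1 and 11 can be illustrated with a toy finite-field Diffie-Hellman exchange in Python: the emulator derives its key pair once before any test session, caches it, and only computes the shared secret at session start. The parameters, storage layout, and function names below are assumptions for illustration; a real IPsec/IKE exchange would negotiate a standardized group (e.g. an RFC 3526 MODP group), not this toy prime.

```python
import secrets

# Toy parameters for illustration only -- NOT cryptographically safe.
P = 0xFFFFFFFB  # largest 32-bit prime
G = 5

local_storage = {}  # stands in for the claimed local storage

def pregenerate_keys():
    """Before any test session: generate and cache the DH key pair."""
    private = secrets.randbelow(P - 3) + 2
    local_storage["private"] = private
    local_storage["public"] = pow(G, private, P)

def start_ipsec_session(dut_public_key):
    """At session initiation: retrieve the cached private key and derive
    the shared secret from the DUT's public key (claims 1 and 11)."""
    return pow(dut_public_key, local_storage["private"], P)

pregenerate_keys()
dut_private = secrets.randbelow(P - 3) + 2  # emulate the DUT's side
dut_public = pow(G, dut_private, P)
shared = start_ipsec_session(dut_public)
# Both ends derive the same secret:
assert shared == pow(local_storage["public"], dut_private, P)
```

Because the cached pair is reused, a second test session (claim 11) calls `start_ipsec_session` again with a new DUT public key without regenerating keys, which is the time saving the claims target.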
2,400
7,981
7,981
14,802,281
2,483
Disclosed are various embodiments for a method, system, and apparatus for taking three-dimensional images of produce. The three-dimensional image may be used to estimate the volume and other dimensions of the imaged produce.
1. A system, comprising: a computing device comprising a processor and a memory; and an application executed in the computing device, the application comprising a set of instructions stored in the memory of the computing device that, when executed by the processor of the computing device, cause the computing device to at least: convert a depth image of a produce item into a point cloud image; and estimate a diameter of the produce item based at least in part on the point cloud image. 2. The system of claim 1, wherein the application further comprises instructions stored in the memory of the computing device that, when executed by the processor of the computing device, cause the computing device to at least calculate a volume of the produce item based at least in part on the estimated diameter of the produce item. 3. The system of claim 1, further comprising a Red-Green-Blue-Depth (RGB-D) sensor configured to: generate the depth image of the produce item; and send the depth image of the produce item to the computing device. 4. The system of claim 3, further comprising a conveyor belt positioned to move the produce item through a field of view of the RGB-D sensor. 5. The system of claim 3, further comprising a light source positioned to illuminate the produce item when the produce item is positioned within a field of view of the RGB-D sensor. 6. The system of claim 3, wherein the RGB-D sensor is positioned above the produce item. 7. The system of claim 3, wherein the RGB-D sensor is positioned below the produce item. 8. The system of claim 1, further comprising a weighing device configured to: measure a weight of the produce item; and send the weight of the produce item to the computing device. 9. 
The system of claim 8, wherein the application further comprises instructions stored in the memory of the computing device that, when executed by the processor of the computing device, cause the computing device to at least calculate a density of the produce item based at least in part on the weight of the produce item. 10. The system of claim 8, wherein the weighing device comprises a scale. 11. The system of claim 1, wherein the produce item comprises an onion. 12. A non-transitory computer-readable medium comprising a program that, when executed by a processor of a computing device, causes the computing device to at least: convert a depth image of a produce item into a point cloud image; estimate a volume of the produce item based at least in part on the point cloud image; and estimate a diameter of the produce item based at least in part on the point cloud image. 13. The non-transitory computer-readable medium of claim 12, wherein the program, when executed by the processor, further causes the computing device to at least compute a weight of the produce item based at least in part on a measurement provided by a weighing device. 14. The non-transitory computer-readable medium of claim 13, wherein the program, when executed by the processor, further causes the computing device to at least estimate a density of the produce item based at least in part on the estimated volume and the computed weight of the produce item. 15. The non-transitory computer-readable medium of claim 12, wherein the depth image is received from a Red-Green-Blue-Depth (RGB-D) sensor configured to: generate the depth image of the produce item; and send the depth image of the produce item to the computing device. 16. 
A computer-implemented method, comprising: converting a depth image of a produce item into a point cloud image; estimating a diameter of the produce item based at least in part on the point cloud image; and estimating a volume of the produce item based at least in part on the estimated diameter. 17. The computer-implemented method of claim 16, further comprising receiving the depth image of the produce item from a Red-Green-Blue-Depth (RGB-D) sensor, wherein the RGB-D sensor generates the depth image. 18. The computer-implemented method of claim 16, further comprising receiving a weight of the produce item from a weighing device. 19. The computer-implemented method of claim 18, further comprising estimating a density of the produce item based at least in part on the volume of the produce item and the weight of the produce item. 20. The computer-implemented method of claim 16, wherein converting the depth image of the produce item into a point cloud image further comprises removing a pixel from the depth image.
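The pipeline in the claims (depth image to point cloud, then diameter, volume, and density estimates) can be sketched in plain Python. The pinhole-camera intrinsics, the zero-depth convention for invalid pixels, and the spherical-volume approximation are assumptions for illustration, not details from the patent.

```python
import math

FX = FY = 525.0    # assumed focal lengths, in pixels
CX, CY = 2.0, 2.0  # assumed principal point (tiny demo image)

def depth_to_point_cloud(depth):
    """Back-project each valid depth pixel (meters) into a 3-D point,
    dropping invalid zero-depth pixels (cf. claim 20)."""
    return [
        ((u - CX) * z / FX, (v - CY) * z / FY, z)
        for v, row in enumerate(depth)
        for u, z in enumerate(row)
        if z > 0
    ]

def estimate_diameter(points):
    """Largest pairwise distance across the point cloud."""
    return max(
        math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]
    )

def estimate_volume(diameter):
    """Sphere approximation: V = pi * d^3 / 6."""
    return math.pi * diameter ** 3 / 6

def estimate_density(volume, weight):
    """Density (claims 9, 14, 19) from a scale reading and the volume."""
    return weight / volume
```

The sphere approximation suits roughly round produce such as the onion of claim 11; an irregular item would need a convex-hull or voxel volume instead.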
2,400
7,982
7,982
14,568,967
2,426
Methods and systems for providing multi-track video content include receiving a user request for multi-track video, the multi-track video including a plurality of videos, each of the plurality of videos corresponding to a track, each of the plurality of videos being divided into time-based video segments; requesting a first video of the plurality of videos, the first video corresponding to a first track of the multi-track video; receiving first video segments corresponding to the first video; sequentially transferring the first video segments to a player for displaying the sequentially transferred first video segments; receiving second video segments of a second video of the plurality of videos instead of receiving the first video segments, the second video segments corresponding to a second track of the multi-track video; and sequentially transferring the second video segments to the player for displaying the sequentially transferred second video segments.
1. A multi-track video content service method performed by a terminal, the method comprising: receiving, by the terminal, a user request for multi-track video, the multi-track video including a plurality of videos, each of the plurality of videos corresponding to a track, each of the plurality of videos being divided into time-based video segments; requesting, by the terminal, a first video of the plurality of videos, the first video corresponding to a first track of the multi-track video; receiving, by the terminal, first video segments corresponding to the first video; sequentially transferring, by the terminal, the first video segments to a player for displaying the sequentially transferred first video segments; receiving, by the terminal, second video segments of a second video of the plurality of videos instead of receiving the first video segments, the second video segments corresponding to a second track of the multi-track video; and sequentially transferring, by the terminal, the second video segments to the player for displaying the sequentially transferred second video segments. 2. The method of claim 1, wherein each of the plurality of videos includes a start point and an end point and the video segments of each of the plurality of videos are synchronized according to a play order, and in response to a selection of the second track at the player, the sequentially transferred second video segments of the second track are continuously played at a start point of the second video, the continuously playing including transitioning from an end point of a currently played segment of the first video segments when the currently played segment is completed to the start point of the second video. 3. 
The method of claim 1, wherein history information about a transition between the first track and the second track is stored in a metadata format, and the method further comprises: generating a link for accessing at least the first track and the second track, the link including the history information; and transferring the generated link to another terminal such that the other terminal performs the transition using the history information. 4. The method of claim 1, further comprising: storing history information about a transition between the first track and the second track in a metadata format; generating a link comprising the history information; and transmitting the link to another terminal such that the other terminal accesses the first track and the second track through the link and displays the first track and the second track according to the history information. 5. The method of claim 1, further comprising: providing, in response to a termination in a play time of the first video, information indicating a termination in playing the first video; and determining one of (i) re-play the first video from a start of the first video, (ii) play the second video, and (iii) play the first video from a desired play location of the first video. 6. The method of claim 1, further comprising: seeking a desired video segment corresponding to a desired play location of a desired one of the plurality of videos in response to a request from the player for the desired play location; receiving the desired video segment and a plurality of video segments that follow the desired video segment; and sequentially transferring the received video segments to the player. 7. 
The method of claim 1, wherein each of the plurality of videos includes a track-by-track advertisement segment, and the method further comprises: providing a first track-by-track advertisement segment corresponding to the first video when at least one segment of the first video segments is played; and providing a second track-by-track advertisement segment corresponding to the second video when at least one segment of the second video segments is played. 8. The method of claim 1, wherein each of the plurality of videos corresponds to one of a plurality of thumbnails and a set of the plurality of thumbnails is displayed while one of the plurality of videos is played. 9. The method of claim 1, wherein the first video segments are received through a single data flow and the second video segments are received through the single data flow. 10. A non-transitory computer-readable medium storing program code, which when executed by a processor, configures the processor to: receive a request for multi-track video, the multi-track video including a plurality of videos, each of the plurality of videos corresponding to a track, each of the plurality of videos being divided into time-based video segments; request a first video of the plurality of videos, the first video corresponding to a first track of the multi-track video; receive first video segments corresponding to the first video; sequentially transfer the first video segments to a player for displaying the sequentially transferred first video segments; receive second video segments of a second video of the plurality of videos instead of receiving the first video segments, the second video segments corresponding to a second track of the multi-track video; and sequentially transfer the second video segments to the player for displaying the sequentially transferred second video segments. 11. 
A terminal for playing multi-track video content comprising: a processor including a request receiver configured to receive a request for multi-track video content from a player, the multi-track video including a plurality of videos, each of the plurality of videos corresponding to a track, each of the plurality of videos being divided into time-based video segments; the processor including a requester configured to request a first video of the plurality of videos based on the received request, the first video corresponding to a first track of the multi-track video; and the processor including a video transferor configured to, receive first video segments corresponding to the first video, sequentially transfer the first video segments to the player for displaying the sequentially transferred first video segments, receive second video segments of a second video of the plurality of videos instead of receiving the first video segments, the second video segments corresponding to a second track of the multi-track video, and sequentially transfer the second video segments to the player for displaying the sequentially transferred second video segments. 12. The terminal of claim 11, wherein each of the plurality of videos includes a start point and an end point and each of the plurality of videos are synchronized according to a play order, and in response to a selection of the second track at the player, the sequentially transferred second video segments of the second track are continuously played at a start point of the second video, the continuously playing including transitioning from an end point of a currently played segment of the first video segments when the currently played segment is completed to the start point of the second video. 13. 
The terminal of claim 11, wherein history information about a transition between the first track and the second track is stored in a metadata format, and the processor includes a link transferor configured to: generate a link for accessing at least the first track and the second track, the link including the history information; and transfer the generated link to a terminal such that the terminal performs the transition using the history information. 14. The terminal of claim 11, wherein the processor comprises: a history information repository configured to store, in a non-transitory computer readable medium, history information about a transition between the first track and the second track in a metadata format; and a link transmitter configured to, generate a link comprising the history information, and transmit the link to a terminal such that the terminal accesses the first track and the second track through the link and displays the first track and the second track according to the history information. 15. The terminal of claim 11, wherein the video transferor is further configured to: provide, in response to a termination in a play time of the first video, information indicating a termination in playing the first video, and determine one of (i) re-play the first video from a start of the first video, (ii) play the second video, and (iii) play the first video from a desired play location of the first video. 16. The terminal of claim 11, wherein the processor includes a segment seeker configured to seek a desired video segment corresponding to a desired location of a desired one of the plurality of videos in response to a request from the player for the desired location, and the video transferor is configured to, receive the desired video segment and video segments that follow the desired video segment, and sequentially transfer the received video segments to the player. 17. 
The terminal of claim 11, wherein each of the plurality of videos includes a track-by-track advertisement segment, and the video transferor is further configured to: transfer a first track-by-track advertisement segment corresponding to the first video when at least one segment of the first video segments is played; and transfer a second track-by-track advertisement segment corresponding to the second video when at least one segment of the second video segments is played. 18. The terminal of claim 11, wherein the first video segments are received through a single data flow and the second video segments are received through the single data flow. 19. A file distribution system, comprising: a processor including, an installation file manager configured to store and manage an installation file for installing an application, and an installation file transmitter configured to transmit the installation file to a terminal in response to a request of the terminal; and the application configuring the terminal to, transmit a request for a multi-track video to a proxy server, the multi-track video including a plurality of videos and each of the plurality of videos being divided into time-based video segments and being stored in a content server, receive, from the proxy server, first video segments corresponding to a first video of the plurality of videos, the first video corresponding to a first track of the multi-track video, play the first video of the first track as the first video segments are received from the proxy server, the proxy server receiving the first video segments from the content server, request a second video of the plurality of videos in response to a user selection of the second video, the second video corresponding to a second track of the multi-track video, receive, from the proxy server, second video segments of the second video, and play the second video of the second track as the second video segments are received from the proxy server, the proxy 
server receiving the second video segments from the content server. 20. The file distribution system of claim 19, wherein each of the plurality of videos includes a start point and an end point, the video segments of each of the plurality of videos are synchronized according to a play order, and the application further configures the terminal to: play the second video segments based on a start point of the second video and an end point of a segment of the first video segments, the playing including transitioning from the end point of the segment of the first video segments to the start point of the second video in response to a selection of the second track.
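Claims 3-4 and 13-14 describe storing track-transition history as metadata and sharing it through a link so another terminal can replay the same transitions. A hedged sketch of one way such a link could be encoded, assuming a JSON metadata format carried in a query parameter (the URL scheme and field names are invented for illustration):

```python
import json
import urllib.parse

def make_history_link(base_url, transitions):
    """Embed transition-history metadata (a list of dicts such as
    {"from": track, "to": track, "at": segment_index}) in a link that
    another terminal could use to access the tracks and replay the
    transitions."""
    meta = json.dumps(transitions, separators=(",", ":"))
    return base_url + "?history=" + urllib.parse.quote(meta)

def read_history_link(link):
    """Recover the transition-history metadata from a generated link."""
    query = urllib.parse.urlparse(link).query
    meta = urllib.parse.parse_qs(query)["history"][0]
    return json.loads(meta)

link = make_history_link(
    "https://example.com/multitrack/42",
    [{"from": "A", "to": "B", "at": 2}],
)
assert read_history_link(link) == [{"from": "A", "to": "B", "at": 2}]
```

The round trip shows the receiving terminal gets back exactly the stored history, which is what lets it perform the same transition between the first and second tracks.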
2,400
7,983
7,983
14,742,816
2,485
A method for projection includes projecting a pattern of structured light with a given average intensity onto a scene. A sequence of images is captured of the scene while projecting the pattern. At least one captured image in the sequence is processed in order to extract a depth map of the scene. A condition is identified in the depth map indicative of a fault in projection of the pattern. Responsively to the identified condition, the average intensity of the projection of the pattern is reduced.
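The fault check described above can be sketched in a few lines: if too few pixels in the extracted depth map carry valid depth values, treat that as a condition indicative of a projection fault and reduce the average projection intensity. The validity threshold and reduction factor below are illustrative assumptions, not values from the patent:

```python
def check_and_adjust(depth_map, intensity,
                     valid_fraction_limit=0.5, reduction=0.1):
    """depth_map: 2-D list of depth values, with None marking an
    invalid pixel. Returns (fault_detected, new_intensity)."""
    pixels = [v for row in depth_map for v in row]
    valid = sum(1 for v in pixels if v is not None)
    if valid / len(pixels) < valid_fraction_limit:
        # Condition indicative of a fault (e.g. a failed DOE letting
        # the raw beam through): cut the average projection intensity.
        return True, intensity * reduction
    return False, intensity

# A mostly-invalid depth map triggers the fault path.
fault, new_intensity = check_and_adjust(
    [[None, None], [1.2, None]], intensity=100.0)
```

Counting valid pixels against a predefined limit is the specific condition of claim 3; claims 4-5 substitute distribution tests on the depth values or their confidence scores in the same control loop.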
1. A method for projection, comprising: projecting a pattern of structured light with a given average intensity onto a scene; capturing a sequence of images of the scene while projecting the pattern; processing at least one captured image in the sequence in order to extract a depth map of the scene, the depth map comprising an array of pixels with respective depth values; identifying in the depth map a condition indicative of a fault in projection of the pattern; and responsively to the identified condition, reducing the average intensity of the projection of the pattern. 2. The method according to claim 1, wherein projecting the pattern comprises generating the pattern by directing a laser beam to impinge on a diffractive optical element (DOE), and wherein identifying the condition comprises detecting a failure of the DOE. 3. The method according to claim 1, wherein identifying the condition comprises detecting that a number of the pixels having valid depth values is below a predefined limit. 4. The method according to claim 1, wherein identifying the condition comprises detecting that a distribution of the depth values does not satisfy a predefined validity criterion. 5. The method according to claim 1, wherein processing the at least one captured image comprises computing confidence scores with respect to the depth values, and wherein identifying the condition comprises detecting that a distribution of the confidence scores does not satisfy a predefined validity criterion. 6. 
The method according to claim 1, wherein the at least one image captured while projecting the pattern with the given average intensity is a first image, and the depth map extracted therefrom is a first depth map, and wherein the method comprises, after identifying the condition indicative of the fault: capturing at least a second image while projecting the pattern at the reduced average intensity; processing at least the second image in order to extract a second depth map; making a determination, based on the second depth map, that the condition indicative of the fault has been resolved; and responsively to the determination, increasing the average intensity of the projection of the pattern. 7. The method according to claim 6, wherein reducing the average intensity comprises reducing a duty cycle of the projection of the pattern, and wherein increasing the intensity comprises increasing the duty cycle. 8. The method according to claim 6, wherein processing at least the second image comprises extracting multiple depth maps from successive images captured while projecting the pattern at the reduced average intensity, and wherein making the determination comprises deciding that the condition indicative of the fault has been resolved only after finding the condition to have been resolved in a predefined number of the extracted depth maps. 9. 
Projection apparatus, comprising: a projection assembly, which is configured to project a pattern of structured light with a given average intensity onto a scene; an image capture assembly, which is configured to capture a sequence of images of the scene while the pattern is projected onto the scene; and a processor, which is configured to process at least one captured image in the sequence in order to extract a depth map of the scene, the depth map comprising an array of pixels with respective depth values, to identify in the depth map a condition indicative of a fault in projection of the pattern, and responsively to the identified condition, to cause the projection assembly to reduce the average intensity of the projection of the pattern. 10. The apparatus according to claim 9, wherein the projection assembly comprises a diffractive optical element (DOE) and a laser, which is configured to direct a laser beam to impinge on the DOE, and wherein the identified condition is indicative of a failure of the DOE. 11. The apparatus according to claim 9, wherein the processor is configured to identify the condition by detecting that a number of the pixels having valid depth values is below a predefined limit. 12. The apparatus according to claim 9, wherein the processor is configured to identify the condition by detecting that a distribution of the depth values does not satisfy a predefined validity criterion. 13. The apparatus according to claim 9, wherein the processor is configured to compute confidence scores with respect to the depth values, and to identify the condition by detecting that a distribution of the confidence scores does not satisfy a predefined validity criterion. 14. 
The apparatus according to claim 9, wherein the at least one image captured while projecting the pattern with the given average intensity is a first image, and the depth map extracted therefrom is a first depth map, and wherein the processor is configured, after identifying the condition indicative of the fault, to process at least a second image captured by the image capture assembly while the projection assembly projects the pattern at the reduced average intensity, to process at least the second image in order to extract a second depth map, to make a determination, based on the second depth map, that the condition indicative of the fault has been resolved, and responsively to the determination, to cause the projection assembly to increase the average intensity of the projection of the pattern. 15. The apparatus according to claim 14, wherein the average intensity is reduced by reducing a duty cycle of the projection of the pattern, and the intensity is increased by increasing the duty cycle. 16. The apparatus according to claim 14, wherein the processor is configured to extract multiple depth maps from successive images captured while projecting the pattern at the reduced average intensity, and to decide that the condition indicative of the fault has been resolved only after finding the condition to have been resolved in a predefined number of the extracted depth maps. 17. 
A computer software product, comprising a non-transitory, computer-readable medium in which program instructions are stored, which instructions, when read by a programmable processor, cause the processor to receive a sequence of images of a scene while a pattern of structured light is projected onto the scene with a given average intensity, to process at least one captured image in the sequence in order to extract a depth map of the scene, the depth map comprising an array of pixels with respective depth values, to identify in the depth map a condition indicative of a fault in projection of the pattern, and responsively to the identified condition, to reduce the average intensity of the projection of the pattern. 18. The product according to claim 17, wherein the pattern is generated by directing a laser beam to impinge on a diffractive optical element (DOE), and wherein the identified condition is indicative of a failure of the DOE. 19. The product according to claim 17, wherein the instructions cause the processor to identify the condition by detecting that a number of the pixels having valid depth values is below a predefined limit. 20. The product according to claim 17, wherein the instructions cause the processor to identify the condition by detecting that a distribution of the depth values does not satisfy a predefined validity criterion.
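Claims 6-8 add a recovery path: after cutting the duty cycle, full intensity is restored only once the fault condition is found resolved in a predefined number of depth maps. A minimal sketch of that hysteresis, with assumed duty-cycle values and a required streak of three clean maps:

```python
class ProjectionController:
    """Tracks the projection duty cycle across successive depth maps."""

    def __init__(self, normal_duty=1.0, reduced_duty=0.1, required_ok=3):
        self.normal_duty = normal_duty
        self.reduced_duty = reduced_duty
        self.required_ok = required_ok
        self.duty = normal_duty
        self.ok_streak = 0

    def on_depth_map(self, fault_detected):
        """Update the duty cycle given the fault status of one depth map."""
        if fault_detected:
            self.duty = self.reduced_duty   # reduce average intensity
            self.ok_streak = 0
        elif self.duty == self.reduced_duty:
            self.ok_streak += 1             # fault appears resolved
            if self.ok_streak >= self.required_ok:
                self.duty = self.normal_duty  # restore full duty cycle
        return self.duty

ctl = ProjectionController()
duties = [ctl.on_depth_map(f) for f in [True, False, False, False]]
# duty drops on the fault and returns to normal only after the
# predefined number of consecutive clean depth maps
```

Requiring a streak of clean maps before restoring intensity matches the condition in claims 8 and 16 that the fault be found resolved "in a predefined number of the extracted depth maps" rather than in a single one.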
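The fault-handling loop in the projection claims (detect too few valid depth pixels, reduce the projector's duty cycle, and restore intensity only after the condition clears in several consecutive depth maps) can be sketched roughly as below. All names, thresholds, and the duty-cycle mechanism are illustrative assumptions, not the patented design.

```python
# Illustrative sketch of claims 1, 3 and 6-8: reduce average intensity on a
# suspected fault (e.g. a failed DOE), restore it only after a streak of
# healthy depth maps. Thresholds and callback names are invented here.

VALID_PIXEL_LIMIT = 1000      # minimum count of pixels with valid depth
RECOVERY_MAPS_REQUIRED = 3    # consecutive healthy maps before restoring power
NORMAL_DUTY, REDUCED_DUTY = 1.0, 0.1

def depth_map_is_valid(depth_map):
    """Claim 3: fault if too few pixels carry a valid (non-None) depth value."""
    valid = sum(1 for row in depth_map for d in row if d is not None)
    return valid >= VALID_PIXEL_LIMIT

def run_projection_loop(capture_depth_map, set_duty_cycle, frames):
    duty = NORMAL_DUTY
    healthy_streak = 0
    for _ in range(frames):
        set_duty_cycle(duty)
        depth_map = capture_depth_map()
        if depth_map_is_valid(depth_map):
            if duty < NORMAL_DUTY:
                healthy_streak += 1
                if healthy_streak >= RECOVERY_MAPS_REQUIRED:  # claim 8 hysteresis
                    duty = NORMAL_DUTY                        # claim 6: restore
                    healthy_streak = 0
        else:
            duty = REDUCED_DUTY   # claims 1 and 7: cut the duty cycle on a fault
            healthy_streak = 0
    return duty
```

The streak counter reflects claim 8's requirement that the fault be found resolved in a predefined number of extracted depth maps before intensity is increased again.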
2,400
7,984
7,984
14,301,881
2,448
A dynamic workflow-based composite web service system and method for the creation and definition of a web service, its properties, methods, and functions through the combination of an event trigger which defines the web service endpoint, actor classes which define the service's properties and metadata, and a workflow which defines its methods and functions. The dynamic workflow-based composite web service system and method generally includes one or more web service endpoints (the URL where the service can be accessed by a client application), an event trigger defined for each endpoint, actors which define the properties and metadata of the service, and a workflow which receives input from the endpoint, returns the result of the process, and defines the web service functions and methods.
1. A method for providing dynamic workflow-based composite web services comprising the steps of: providing a management system for receiving an input data request from a remote client application, the request including a request URL; providing a plurality of web service endpoints, each of the endpoints including a trigger with an associated endpoint URL; providing a plurality of workflows, each workflow associated with at least one of the endpoints and including activities; providing a plurality of actors, each actor associated with at least one of the workflows; triggering one of the triggers having the endpoint URL corresponding to the request URL to load the associated workflow; executing the loaded associated workflow using the associated actor to generate and send to the one trigger an output data representing serialized actor properties; and sending the output data from the one trigger to the client application. 2. The method according to claim 1 wherein the request includes parameter data, the one trigger gathers the parameter data and inputs the parameter data to the loaded associated workflow for processing during the execution. 3. The method according to claim 2 wherein the parameter data includes a customer identification, or any other actor identification or filter criteria. 4. The method according to claim 1 including generating the output as either JSON, Atom or any other HTML data format. 5. The method according to claim 1 wherein the workflow activities are step-by-step instructions performed during the execution of the loaded associated workflow. 6. The method according to claim 5 wherein the loaded associated workflow calls at least another of the workflows during the execution. 7. The method according to claim 1 including receiving the request from the remote client application at the management system over the Internet. 8. 
A computer program product comprising at least one computer program means for performing the method according to claim 1 for providing dynamic workflow-based composite web services wherein at least one step of the method is performed when the computer program means is loaded into at least one processor of the management system. 9. A non-transitory computer-readable data storage device comprising the computer program product according to claim 8. 10. A management system for providing dynamic workflow-based composite web services comprises: an input for receiving an input data request from a remote client application, the request including a request URL; a webservice manager connected to the input and providing a plurality of web service endpoints, each of the endpoints including a trigger with an associated endpoint URL, the webservice manager providing a plurality of workflows, each of the workflows associated with at least one of the endpoints and including activities, the webservice manager providing a plurality of actors, each of the actors associated with at least one of the workflows, the webservice manager triggering a one of the triggers having the endpoint URL corresponding to the request URL to load the associated workflow, the webservice manager executing the loaded associated workflow using the associated actor to generate and send to the one trigger an output data representing serialized actor properties; and an output connected to the webservice manager for sending the output data from the one trigger to the client application. 11. The management system according to claim 10 wherein the request includes parameter data, the one trigger gathers the parameter data and inputs the parameter data to the loaded associated workflow for processing during the execution. 12. The management system according to claim 11 wherein the parameter data includes a customer identification or any other actor identification or filter criteria. 13. 
The management system according to claim 12 including generating the output as JSON, Atom or any other HTML data format. 14. The management system according to claim 10 wherein the workflow activities are step-by-step instructions performed during the execution of the loaded associated workflow. 15. The management system according to claim 14 wherein the loaded associated workflow calls at least another of the workflows during the execution. 16. The management system according to claim 10 including receiving the input data request from the remote client application at the input over the Internet.
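The dispatch path in claims 1-2 (match the request URL to an endpoint trigger, load the associated workflow, run its activities against the associated actor, and return the actor's serialized properties) can be sketched as a toy example. The class and method names here are invented for illustration; the patent does not prescribe this API.

```python
# Toy sketch of claims 1-2, 4 and 5: endpoint trigger -> workflow activities
# -> serialized actor properties as the response. Names are hypothetical.
import json

class Endpoint:
    def __init__(self, url, workflow, actor):
        self.url, self.workflow, self.actor = url, workflow, actor

    def trigger(self, params):
        # Claim 2: the trigger gathers request parameters and feeds the workflow.
        for activity in self.workflow:        # claim 5: step-by-step activities
            activity(self.actor, params)
        return json.dumps(self.actor)         # claim 4: JSON output format

def handle_request(endpoints, request_url, params):
    for ep in endpoints:
        if ep.url == request_url:             # trigger whose URL matches the request
            return ep.trigger(params)
    raise LookupError("no endpoint registered for " + request_url)
```

For example, an endpoint registered at a hypothetical `/api/customer` URL with one activity that copies a customer identification (claim 3's parameter data) into the actor would return that actor serialized as JSON.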
2,400
7,985
7,985
14,624,560
2,459
A data storage system, and a method of operation thereof, includes: an interface module for creating a login credential for storing on a removable storage device; a backup module, coupled to the interface module, for transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and a close module, coupled to the backup module, for disconnecting an application from a cloud backup service for closing a connection between a computing device and the remote backup system with the data from the automatic backup.
1. A method of operation of a data storage system comprising: creating a login credential for storing on a removable storage device; transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and disconnecting an application from a cloud backup service for closing a connection between a computing device and the remote backup system with the data from the automatic backup. 2. The method as claimed in claim 1 wherein transferring the data includes transferring the data for the automatic backup based on the status indicating a user data file has been moved to the removable storage device. 3. The method as claimed in claim 1 further comprising installing a monitoring service for executing on the computing device. 4. The method as claimed in claim 1 further comprising connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device. 5. The method as claimed in claim 1 further comprising encrypting the login credential for storing on the removable storage device. 6. The method as claimed in claim 1 further comprising scanning user data for malware and viruses before storing the user data on the removable storage device or backing up the user data to the remote backup system. 7. The method as claimed in claim 1 further comprising encrypting user data before storing the user data on the removable storage device or backing up the user data to the remote backup system. 8. The method as claimed in claim 1 further comprising licensing the application based on a unique hardware identification of the removable storage device, the unique hardware identification includes only one of a Unique Device Identifier (UDI), a Product Identification (PID), a Vendor Identification (VID), a version number, a serial number, and a combination thereof. 9. 
The method as claimed in claim 1 further comprising: copying a data file to the removable storage device; disconnecting the removable storage device from a first computing device; and connecting the removable storage device to a second computing device; and wherein: transferring the data includes automatically backing up the data file from the removable storage device to the remote backup system based on the status detected by a monitoring service. 10. A method of operation of a data storage system comprising: creating a login credential for storing on a removable storage device connected to a computing device; transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and disconnecting an application from a cloud backup service for closing a connection between the computing device and the remote backup system with the data from the automatic backup. 11. The method as claimed in claim 10 wherein transferring the data includes transferring the data for the automatic backup based on the status indicating a user data file has been modified on the removable storage device. 12. The method as claimed in claim 10 further comprising installing a monitoring service without user interaction for executing on the computing device. 13. The method as claimed in claim 10 further comprising: connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device; and launching the application stored on the removable storage device, wherein the application is launched without a monitoring service installed on the computing device. 14. 
The method as claimed in claim 10 further comprising: copying a data file to the removable storage device; disconnecting the removable storage device from a first computing device without a monitoring service installed thereon; and connecting the removable storage device to a second computing device with the monitoring service installed thereon; and wherein: transferring the data includes automatically backing up the data file to the remote backup system based on the status detected by the monitoring service installed on the second computing device. 15. A data storage system comprising: an interface module for creating a login credential for storing on a removable storage device; a backup module, coupled to the interface module, for transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and a close module, coupled to the backup module, for disconnecting an application from a cloud backup service for closing a connection between a computing device and the remote backup system with the data from the automatic backup. 16. The system as claimed in claim 15 wherein the backup module is for transferring the data for the automatic backup based on the status indicating a user data file has been moved to the removable storage device. 17. The system as claimed in claim 15 further comprising a device connection module, coupled to the interface module, for installing a monitoring service for executing on the computing device. 18. The system as claimed in claim 15 further comprising a device connection module, coupled to the interface module, for connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device. 19. The system as claimed in claim 15 wherein the interface module is for encrypting the login credential for storing on the removable storage device. 20. 
The system as claimed in claim 15 further comprising an application connection module, coupled to the interface module, for scanning user data for malware and viruses before storing the user data on the removable storage device or backing up the user data to the remote backup system. 21. The system as claimed in claim 15 further comprising an application connection module, coupled to the interface module, for encrypting user data before storing the user data on the removable storage device or backing up the user data to the remote backup system. 22. The system as claimed in claim 15 wherein the interface module is for licensing the application based on a unique hardware identification of the removable storage device, the unique hardware identification includes only one of a Unique Device Identifier (UDI), a Product Identification (PID), a Vendor Identification (VID), a version number, a serial number, and a combination thereof. 23. The system as claimed in claim 15 further comprising: a copy module for copying a data file to the removable storage device; a disconnection module, coupled to the copy module, for disconnecting the removable storage device from a first computing device; and a device connection module, coupled to the disconnection module, for connecting the removable storage device to a second computing device; and wherein: the backup module is for automatically backing up the data file from the removable storage device to the remote backup system based on the status detected by a monitoring service. 24. The system as claimed in claim 15 wherein the interface module is for creating the login credential for storing on the removable storage device connected to the computing device. 25. The system as claimed in claim 24 wherein the backup module is for transferring the data for the automatic backup based on the status indicating a user data file has been modified on the removable storage device. 26. 
The system as claimed in claim 24 further comprising a device connection module, coupled to the interface module, for installing a monitoring service without user interaction for executing on the computing device. 27. The system as claimed in claim 24 further comprising: a device connection module, coupled to the interface module, for connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device; and a launch module, coupled to the device connection module, for launching the application stored on the removable storage device, wherein the application is launched without a monitoring service installed on the computing device. 28. The system as claimed in claim 24 further comprising: a copy module for copying a data file to the removable storage device; a disconnection module, coupled to the copy module, for disconnecting the removable storage device from a first computing device without a monitoring service installed thereon; and a device connection module, coupled to the disconnection module, for connecting the removable storage device to a second computing device with the monitoring service installed thereon; and wherein: the backup module is for automatically backing up the data file to the remote backup system based on the status detected by the monitoring service installed on the second computing device.
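The backup flow in claims 1-2 (a monitoring service notices a file moved onto the removable device, connects to the remote backup system using the credential stored on the device, transfers the new data, then closes the connection) can be sketched minimally as below. The `RemoteBackup` class and its methods are illustrative assumptions, not the patent's architecture.

```python
# Rough sketch of claims 1-2: status-driven automatic backup of files newly
# moved onto a removable device, followed by disconnecting from the cloud
# service. All class and method names here are hypothetical.

class RemoteBackup:
    def __init__(self):
        self.connected = False
        self.stored = {}

    def connect(self, credential):
        # In the patent, the login credential is created on and stored on
        # the removable device; here a fixed string stands in for it.
        self.connected = (credential == "device-credential")
        return self.connected

    def upload(self, name, data):
        assert self.connected, "must authenticate before backup"
        self.stored[name] = data

    def disconnect(self):
        self.connected = False

def auto_backup(device_files, known_files, credential, remote):
    """Back up files newly moved onto the removable device, then close."""
    # Claim 2: the triggering status is a file that was moved to the device.
    new_files = {n: d for n, d in device_files.items() if n not in known_files}
    if new_files and remote.connect(credential):
        for name, data in new_files.items():
            remote.upload(name, data)
    remote.disconnect()   # close the connection after the automatic backup
    return set(new_files)
```

The final `disconnect()` mirrors the claimed step of closing the connection between the computing device and the remote backup system once the automatic backup's data has been transferred.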
A data storage system, and a method of operation thereof, includes: an interface module for creating a login credential for storing on a removable storage device; a backup module, coupled to the interface module, for transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and a close module, coupled to the backup module, for disconnecting an application from a cloud backup service for closing a connection between a computing device and the remote backup system with the data from the automatic backup.1. A method of operation of a data storage system comprising: creating a login credential for storing on a removable storage device; transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and disconnecting an application from a cloud backup service for closing a connection between a computing device and the remote backup system with the data from the automatic backup. 2. The method as claimed in claim 1 wherein transferring the data includes transferring the data for the automatic backup based on the status indicating a user data file has been moved to the removable storage device. 3. The method as claimed in claim 1 further comprising installing a monitoring service for executing on the computing device. 4. The method as claimed in claim 1 further comprising connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device. 5. The method as claimed in claim 1 further comprising encrypting the login credential for storing on the removable storage device. 6. The method as claimed in claim 1 further comprising scanning user data for malware and viruses before storing the user data on the removable storage device or backing up the user data to the remote backup system. 7. 
The method as claimed in claim 1 further comprising encrypting user data before storing the user data on the removable storage device or backing up the user data to the remote backup system. 8. The method as claimed in claim 1 further comprising licensing the application based on a unique hardware identification of the removable storage device, the unique hardware identification includes only one of a Unique Device Identifier (UDI), a Product Identification (PID), a Vendor Identification (VID), a version number, a serial number, and a combination thereof. 9. The method as claimed in claim 1 further comprising: copying a data file to the removable storage device; disconnecting the removable storage device from a first computing device; and connecting the removable storage device to a second computing device; and wherein: transferring the data includes automatically backing up the data file from the removable storage device to the remote backup system based on the status detected by a monitoring service. 10. A method of operation of a data storage system comprising: creating a login credential for storing on a removable storage device connected to a computing device; transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and disconnecting an application from a cloud backup service for closing a connection between the computing device and the remote backup system with the data from the automatic backup. 11. The method as claimed in claim 10 wherein transferring the data includes transferring the data for the automatic backup based on the status indicating a user data file has been modified on the removable storage device. 12. The method as claimed in claim 10 further comprising installing a monitoring service without user interaction for executing on the computing device. 13. 
The method as claimed in claim 10 further comprising: connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device; and launching the application stored on the removable storage device, wherein the application is launched without a monitoring service installed on the computing device. 14. The method as claimed in claim 10 further comprising: copying a data file to the removable storage device; disconnecting the removable storage device from a first computing device without a monitoring service installed thereon; and connecting the removable storage device to a second computing device with the monitoring service installed thereon; and wherein: transferring the data includes automatically backing up the data file to the remote backup system based on the status detected by the monitoring service installed on the second computing device. 15. A data storage system comprising: an interface module for creating a login credential for storing on a removable storage device; a backup module, coupled to the interface module, for transferring data for an automatic backup of the removable storage device to a remote backup system based on the login credential and a status of the removable storage device; and a close module, coupled to the backup module, for disconnecting an application from a cloud backup service for closing a connection between a computing device and the remote backup system with the data from the automatic backup. 16. The system as claimed in claim 15 wherein the backup module is for transferring the data for the automatic backup based on the status indicating a user data file has been moved to the removable storage device. 17. The system as claimed in claim 15 further comprising a device connection module, coupled to the interface module, for installing a monitoring service for executing on the computing device. 18. 
The system as claimed in claim 15 further comprising a device connection module, coupled to the interface module, for connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device. 19. The system as claimed in claim 15 wherein the interface module is for encrypting the login credential for storing on the removable storage device. 20. The system as claimed in claim 15 further comprising an application connection module, coupled to the interface module, for scanning user data for malware and viruses before storing the user data on the removable storage device or backing up the user data to the remote backup system. 21. The system as claimed in claim 15 further comprising an application connection module, coupled to the interface module, for encrypting user data before storing the user data on the removable storage device or backing up the user data to the remote backup system. 22. The system as claimed in claim 15 wherein the interface module is for licensing the application based on a unique hardware identification of the removable storage device, the unique hardware identification includes only one of a Unique Device Identifier (UDI), a Product Identification (PID), a Vendor Identification (VID), a version number, a serial number, and a combination thereof. 23. The system as claimed in claim 15 further comprising: a copy module for copying a data file to the removable storage device; a disconnection module, coupled to the copy module, for disconnecting the removable storage device from a first computing device; and a device connection module, coupled to the disconnection module, for connecting the removable storage device to a second computing device; and wherein: the backup module is for automatically backing up the data file from the removable storage device to the remote backup system based on the status detected by a monitoring service. 24. 
The system as claimed in claim 15 wherein the interface module is for creating the login credential for storing on the removable storage device connected to the computing device. 25. The system as claimed in claim 24 wherein the backup module is for transferring the data for the automatic backup based on the status indicating a user data file has been modified on the removable storage device. 26. The system as claimed in claim 24 further comprising a device connection module, coupled to the interface module, for installing a monitoring service without user interaction for executing on the computing device. 27. The system as claimed in claim 24 further comprising: a device connection module, coupled to the interface module, for connecting the removable storage device to the computing device, wherein the computing device is an untrusted computing device; and a launch module, coupled to the device connection module, for launching the application stored on the removable storage device, wherein the application is launched without a monitoring service installed on the computing device. 28. The system as claimed in claim 24 further comprising: a copy module for copying a data file to the removable storage device; a disconnection module, coupled to the copy module, for disconnecting the removable storage device from a first computing device without a monitoring service installed thereon; and a device connection module, coupled to the disconnection module, for connecting the removable storage device to a second computing device with the monitoring service installed thereon; and wherein: the backup module is for automatically backing up the data file to the remote backup system based on the status detected by the monitoring service installed on the second computing device.
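Claims 10-14 above describe a flow in which a login credential is stored on the removable device, a monitoring service detects that files were added or modified, and only those files are transferred to the remote backup system. A toy sketch of that flow follows; all class and function names are hypothetical (not from the patent), and hashing the credential merely stands in for the claimed encryption:

```python
import hashlib

class RemovableDrive:
    """Toy stand-in for a removable storage device (hypothetical)."""
    def __init__(self, serial):
        self.serial = serial      # unique hardware identification (e.g. VID/PID/serial)
        self.files = {}           # filename -> content
        self.credential = None

def create_login_credential(drive, user, password):
    # Store only a hash of the credential on the device, standing in for
    # the encrypted credential of the claims.
    drive.credential = hashlib.sha256(f"{user}:{password}".encode()).hexdigest()
    return drive.credential

def detect_status(drive, last_snapshot):
    """Monitoring service: report files added or modified since the last snapshot."""
    return {name: data for name, data in drive.files.items()
            if last_snapshot.get(name) != data}

def auto_backup(drive, remote, last_snapshot):
    """Transfer changed files to the remote backup based on credential + status."""
    if drive.credential is None:
        raise PermissionError("no login credential on device")
    remote.update(detect_status(drive, last_snapshot))
    return dict(drive.files)  # new snapshot for the next status check

# Usage: modify a file on the device, then let the monitoring service back it up.
drive = RemovableDrive(serial="VID:PID:1234")
create_login_credential(drive, "alice", "s3cret")
remote, snapshot = {}, {}
drive.files["report.txt"] = "v1"
snapshot = auto_backup(drive, remote, snapshot)
drive.files["report.txt"] = "v2"   # modified on the device
snapshot = auto_backup(drive, remote, snapshot)
```

The snapshot-diff here is one plausible way a monitoring service could derive the claimed "status" of the device; the claims themselves do not prescribe a mechanism.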
2,400
7,986
7,986
15,728,501
2,448
A method can include evaluating each of a plurality of collaborative systems, using a processor, for suitability hosting an artifact according to at least one attribute of the artifact. A first collaborative system can be selected from the plurality of collaborative systems according to the evaluation. The artifact can be stored in the first collaborative system.
1-25. (canceled) 26. A computer-implemented method within a computer hardware system, comprising: evaluating each of a plurality of collaborative systems for suitability hosting an artifact according to at least one attribute of the artifact; selecting a first collaborative system from the plurality of collaborative systems according to the evaluation; storing the artifact in the first collaborative system; and automatically creating a link within a second collaborative system different from the first collaborative system pointing to the artifact in the first collaborative system. 27. The method of claim 26, wherein the evaluation includes: identifying a computing resource needed to host the artifact; and comparing the computing resource needed to host the artifact with computing resources available in each of the plurality of collaborative systems, wherein the first collaborative system is selected according to availability of the computing resource needed to host the artifact. 28. The method of claim 26, wherein the plurality of collaborative systems from which the first collaborative system is selected is limited to collaborative systems to which an owner of the artifact has write access. 29. The method of claim 28, wherein a user input selecting at least two of the collaborative systems to which the owner has write access is received; and the plurality of collaborative systems is further limited to the at least two of the collaborative systems selected by the user input. 30. 
The method of claim 26, further comprising: responsive to a change to the at least one attribute of the artifact: evaluating each of the plurality of collaborative systems for suitability hosting the artifact according to the change to the at least one attribute of the artifact; selecting a second collaboration system to host the artifact; removing the artifact from the first collaborative system; and uploading the artifact to the second collaborative system.
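Claims 26-30 above describe evaluating each collaborative system against an artifact's attributes (resource needs, write access), selecting a host, storing the artifact there, and linking to it from the other systems. A minimal sketch under assumed dictionary-based system records (all field names are invented for illustration, not from the patent):

```python
def evaluate(system, artifact):
    """Score a collaborative system for hosting an artifact, or None if unsuitable.
    Suitability here means write access plus enough free storage (claims 27-28)."""
    if not system["write_access"]:
        return None
    if system["free_storage"] < artifact["size"]:
        return None
    return system["free_storage"] - artifact["size"]  # headroom as the score

def select_host(systems, artifact):
    """Pick the suitable system with the best score (claim 26's selection step)."""
    candidates = [(evaluate(s, artifact), s) for s in systems]
    candidates = [(score, s) for score, s in candidates if score is not None]
    if not candidates:
        raise RuntimeError("no suitable collaborative system")
    return max(candidates, key=lambda t: t[0])[1]

def store_with_link(systems, artifact):
    """Store the artifact on the chosen host and create links in the other systems."""
    host = select_host(systems, artifact)
    host["artifacts"].append(artifact["name"])
    for s in systems:
        if s is not host:
            s["links"].append((artifact["name"], host["name"]))
    return host

# Usage: "forum" lacks write access, "wiki" lacks space, so "repo" is chosen.
systems = [
    {"name": "wiki",  "write_access": True,  "free_storage": 10,  "artifacts": [], "links": []},
    {"name": "repo",  "write_access": True,  "free_storage": 100, "artifacts": [], "links": []},
    {"name": "forum", "write_access": False, "free_storage": 500, "artifacts": [], "links": []},
]
host = store_with_link(systems, {"name": "design.doc", "size": 50})
```

The headroom score is only one plausible evaluation; the claims leave the comparison of needed versus available resources open-ended.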
A method can include evaluating each of a plurality of collaborative systems, using a processor, for suitability hosting an artifact according to at least one attribute of the artifact. A first collaborative system can be selected from the plurality of collaborative systems according to the evaluation. The artifact can be stored in the first collaborative system.1-25. (canceled) 26. A computer-implemented method within a computer hardware system, comprising: evaluating each of a plurality of collaborative systems for suitability hosting an artifact according to at least one attribute of the artifact; selecting a first collaborative system from the plurality of collaborative systems according to the evaluation; storing the artifact in the first collaborative system; and automatically creating a link within a second collaborative system different from the first collaborative system pointing to the artifact in the first collaborative system. 27. The method of claim 26, wherein the evaluation includes: identifying a computing resource needed to host the artifact; and comparing the computing resource needed to host the artifact with computing resources available in each of the plurality of collaborative systems, wherein the first collaborative system is selected according to availability of the computing resource needed to host the artifact. 28. The method of claim 26, wherein the plurality of collaborative systems from which the first collaborative system is selected is limited to collaborative systems to which an owner of the artifact has write access. 29. The method of claim 28, wherein a user input selecting at least two of the collaborative systems to which the owner has write access is received; and the plurality of collaborative systems is further limited to the at least two of the collaborative systems selected by the user input. 30. 
The method of claim 26, further comprising: responsive to a change to the at least one attribute of the artifact: evaluating each of the plurality of collaborative systems for suitability hosting the artifact according to the change to the at least one attribute of the artifact; selecting a second collaboration system to host the artifact; removing the artifact from the first collaborative system; and uploading the artifact to the second collaborative system.
2,400
7,987
7,987
13,463,837
2,425
A further coding efficiency increase is achieved by, in hybrid video coding, additionally predicting the residual signal of a current frame by motion-compensated prediction using a reference residual signal of a previous frame. In other words, in order to further reduce the energy of the final residual signal, i.e. the one finally transmitted, and thus increase the coding efficiency, it is proposed to additionally predict the residual signal by motion-compensated prediction using the reconstructed residual signals of previously coded frames.
1. Hybrid video decoder configured to additionally predict a residual signal of a currently decoded frame by motion-compensated prediction using a reference residual signal of a previously decoded frame. 2. Hybrid video decoder according to claim 1, further configured to predict the currently decoded frame from previously decoded video portions to acquire a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, entropy decode a final residual signal of the currently decoded frame, and reconstruct the currently decoded frame by composing the prediction signal of the currently decoded frame, a residual prediction signal of the currently decoded frame, acquired by the hybrid video decoder in predicting the residual signal of the currently decoded frame, and the final residual signal of the currently decoded frame. 3. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame and build the reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame. 4. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame, predict a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame to acquire a residual prediction signal of the previously decoded frame, and build the reference residual signal of the previously decoded frame by a sum of the final residual signal of the previously decoded frame and the residual prediction signal of the previously decoded frame. 5. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame and to select, on a frame basis, to build the reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame, or a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame acquired by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame. 6. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame, build a first candidate reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame and insert same in a decoded picture buffer of the hybrid video decoder, build a second candidate reference residual signal of the previously decoded frame by a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame acquired by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame, and insert same into the decoded picture buffer, and use the first or second candidate reference residual signal as the reference residual signal of the previously decoded frame depending on a signalization within a bitstream. 7. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode information on residual prediction motion parameters, and use the residual prediction motion parameters in predicting the residual signal of the currently decoded frame. 8. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode information on video prediction motion parameters, and predict the currently decoded frame by motion-compensated prediction using the video prediction motion parameters to acquire a prediction signal of the currently decoded frame, a prediction error of which the residual signal of the currently decoded frame relates to. 9. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to decode one or more syntax elements for the currently decoded frame and apply the prediction of the residual signal of the currently decoded frame to a predetermined set of first sets of samples of the currently decoded frame, the predetermined set being defined by the one or more syntax elements. 10. Hybrid video decoder according to claim 9, wherein the hybrid video decoder is configured to apply a prediction of the currently decoded frame resulting in a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, to second sets of samples of the currently decoded frame, decode one or more syntax elements for each of the second sets of samples, and use the one or more syntax elements for each of the second sets of samples to identify the predetermined set of the first sets of samples out of the second sets of samples or out of subsets of the second sets of samples. 11. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to decode one or more syntax elements for the currently decoded frame and apply the prediction of the residual signal of the currently decoded frame to a predetermined set of first sets of samples of the currently decoded frame, the predetermined set being defined by the one or more syntax elements, and apply an intra prediction of the currently decoded frame partially forming a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, to a predetermined first set of second sets of samples of the currently decoded frame, and a motion-compensated prediction of the currently decoded frame partially forming the prediction signal of the currently decoded frame, to a predetermined second set of the second sets of samples, so that the first sets of samples are independent from the first and second sets of the second sets of samples. 12. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to extract a residual reference frame index indexing the previously decoded frame, from a bitstream. 13. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to infer a residual reference frame index indexing the previously decoded frame such that the latter is the one based on which the hybrid video decoder is configured to determine a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to. 14. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to predict residual prediction motion parameters used in predicting the residual signal of the currently decoded frame for a predetermined set of samples of the currently decoded frame, using residual prediction motion parameters previously used by the hybrid video decoder in predicting the residual signal of the currently decoded frame for another set of samples of the currently decoded frame, or a residual signal of a previously decoded frame. 15. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to predict residual prediction motion parameters used in predicting the residual signal of the currently decoded frame for a predetermined set of samples of the currently decoded frame using motion parameters previously used by the hybrid video decoder in determining a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, for another set or the same set of samples of the currently decoded frame, or previously used by the hybrid video decoder in determining a prediction signal of a previously decoded frame. 16. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to use multi-hypothesis prediction to predict the residual signal of the currently decoded frame and to differently decode a number of hypotheses used in predicting the residual signal of the currently decoded frame as a difference from a number of video hypotheses used by the hybrid video decoder in determining a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to. 17. 
Hybrid video decoder according to claim 1, further configured to predict a currently decoded frame from previously decoded video portions to obtain a prediction signal of the currently decoded frame, a prediction error of which the residual signal of the currently decoded frame relates to, entropy decode a final residual signal of the currently decoded frame, and reconstruct the currently decoded frame by summing the prediction signal of the currently decoded frame, a residual prediction signal of the currently decoded frame, obtained by the hybrid video decoder in predicting the residual signal of the currently decoded frame, and the final residual signal of the currently decoded frame, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame and to select, on a frame basis, to build the reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame, or a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame obtained by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame, or wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame, build a first candidate reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame and insert same in a decoded picture buffer of the hybrid video decoder, build a second candidate reference residual signal of the previously decoded frame by a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame obtained by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal 
of an even more previously decoded frame, and insert same into the decoded picture buffer, and use the first or second candidate reference residual signal as the reference residual signal of the previously decoded frame depending on a signalization within a bitstream. 18. Hybrid video decoder according to claim 17, wherein the hybrid video decoder is configured to decode one or more syntax elements for the currently decoded frame and apply the prediction of the residual signal of the currently decoded frame to a predetermined set of first sets of samples of the currently decoded frame, and apply the prediction of the currently decoded frame to second sets of samples of the currently decoded frame, decode one or more syntax elements for each of the second sets of samples, and use the one or more syntax elements for each of the second sets of samples to identify the predetermined set of the first sets of samples out of the second sets of samples. 19. Hybrid video encoder configured to additionally predict a residual signal of a currently encoded frame by motion-compensated prediction using a reference residual signal of a previously encoded frame. 20. 
Hybrid video encoder according to claim 19, further configured to predict a currently encoded frame from previously encoded video portions to obtain a prediction signal of the currently encoded frame, a prediction error of which the residual signal of the currently encoded frame relates to, entropy encode a final residual signal of the currently encoded frame so that the currently encoded frame is reconstructed by summing the prediction signal of the currently encoded frame, a residual prediction signal of the currently encoded frame, obtained by the hybrid video encoder in predicting the residual signal of the currently encoded frame, and the final residual signal of the currently encoded frame, wherein the hybrid video encoder is configured to entropy encode a final residual signal of the previously encoded frame and to select, on a frame basis, to build the reference residual signal of the previously encoded frame by the final residual signal of the previously encoded frame, or a sum of the final residual signal of the previously encoded frame and a residual prediction signal of the previously encoded frame obtained by predicting a residual signal of the previously encoded frame by motion-compensated prediction using a reference residual signal of an even more previously encoded frame, or wherein the hybrid video encoder is configured to entropy encode a final residual signal of the previously encoded frame, build a first candidate reference residual signal of the previously encoded frame by the final residual signal of the previously encoded frame and insert same in a decoded picture buffer of the hybrid video encoder, build a second candidate reference residual signal of the previously encoded frame by a sum of the final residual signal of the previously encoded frame and a residual prediction signal of the previously encoded frame obtained by predicting a residual signal of the previously encoded frame by motion-compensated prediction using a reference 
residual signal of an even more previously encoded frame, and insert same into the decoded picture buffer, and use the first or second candidate reference residual signal as the reference residual signal of the previously encoded frame with signalizing the use of the first or second candidate reference residual signal within a bitstream. 21. Hybrid video decoding method comprising additionally predicting a residual signal of a currently decoded frame by motion-compensated prediction using a reference residual signal of a previously decoded frame. 22. Hybrid video encoding method comprising additionally predicting a residual signal of a currently encoded frame by motion-compensated prediction using a reference residual signal of a previously encoded frame. 23. Hybrid video coded bitstream comprising information on residual prediction motion parameters prescribing a motion-compensated prediction of a prediction error of a residual signal of the predetermined frame by motion-compensated prediction using a reference residual signal of a previously coded frame. 24. A non-transitory computer readable medium including a computer program comprising a program code for performing, when running on a computer, a hybrid video decoding method comprising additionally predicting a residual signal of a currently decoded frame by motion-compensated prediction using a reference residual signal of a previously decoded frame. 25. A non-transitory computer readable medium including a computer program comprising a program code for performing, when running on a computer, a hybrid video encoding method comprising additionally predicting a residual signal of a currently encoded frame by motion-compensated prediction using a reference residual signal of a previously encoded frame.
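Claims 2, 6, and 17 above describe composing a frame from three parts (the prediction signal, a motion-compensated residual prediction signal, and the entropy-decoded final residual signal) and keeping two candidate reference residuals in the decoded picture buffer for a bitstream flag to choose between. A minimal numeric sketch of that composition, with plain Python lists standing in for sample arrays and all names invented for illustration:

```python
def add_signals(a, b):
    """Sample-wise sum of two equally sized signals."""
    return [x + y for x, y in zip(a, b)]

def reconstruct_frame(prediction, residual_prediction, final_residual):
    # Claim 2: the frame is rebuilt by composing the prediction signal,
    # the motion-compensated residual prediction, and the final residual.
    return add_signals(add_signals(prediction, residual_prediction), final_residual)

def candidate_reference_residuals(final_residual, residual_prediction):
    # Claim 6: two candidate reference residuals go into the decoded picture
    # buffer; signalization in the bitstream later selects which one serves
    # as the reference residual for subsequent frames.
    first = final_residual                                   # final residual alone
    second = add_signals(final_residual, residual_prediction)  # with residual prediction
    return first, second

# Toy two-sample "frame": prediction + residual prediction + final residual.
frame = reconstruct_frame([1.0, 2.0], [0.5, -0.5], [0.25, 0.25])
first, second = candidate_reference_residuals([1.0, 1.0], [0.5, 0.0])
```

Motion compensation itself is elided here; the sketch only shows the signal composition the claims recite, not how the residual prediction signal is derived.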
A further coding efficiency increase is achieved by, in hybrid video coding, additionally predicting the residual signal of a current frame by motion-compensated prediction using a reference residual signal of a previous frame. In other words, in order to further reduce the energy of the final residual signal, i.e. the one finally transmitted, and thus increase the coding efficiency, it is proposed to additionally predict the residual signal by motion-compensated prediction using the reconstructed residual signals of previously coded frames.1. Hybrid video decoder configured to additionally predict a residual signal of a currently decoded frame by motion-compensated prediction using a reference residual signal of a previously decoded frame. 2. Hybrid video decoder according to claim 1, further configured to predict the currently decoded frame from previously decoded video portions to acquire a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, entropy decode a final residual signal of the currently decoded frame, and reconstruct the currently decoded frame by composing the prediction signal of the currently decoded frame, a residual prediction signal of the currently decoded frame, acquired by the hybrid video decoder in predicting the residual signal of the currently decoded frame, and the final residual signal of the currently decoded frame. 3. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame and build the reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame. 4. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame, predict a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame to acquire a residual prediction signal of the previously decoded frame, and build the reference residual signal of the previously decoded frame by a sum of the final residual signal of the previously decoded frame and the residual prediction signal of the previously decoded frame. 5. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame and to select, on a frame basis, to build the reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame, or a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame acquired by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame. 6. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame, build a first candidate reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame and insert same in a decoded picture buffer of the hybrid video decoder, build a second candidate reference residual signal of the previously decoded frame by a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame acquired by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame, and insert same into the decoded picture buffer, and use the first or second candidate reference residual signal as the reference residual signal of the previously decoded frame depending on a signalization within a bitstream. 7. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode information on residual prediction motion parameters, and use the residual prediction motion parameters in predicting the residual signal of the currently decoded frame. 8. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to entropy decode information on video prediction motion parameters, and predict the currently decoded frame by motion-compensated prediction using the video prediction motion parameters to acquire a prediction signal of the currently decoded frame, a prediction error of which the residual signal of the currently decoded frame relates to. 9. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to decode one or more syntax elements for the currently decoded frame and apply the prediction of the residual signal of the currently decoded frame to a predetermined set of first sets of samples of the currently decoded frame, the predetermined set being defined by the one or more syntax elements. 10. Hybrid video decoder according to claim 9, wherein the hybrid video decoder is configured to apply a prediction of the currently decoded frame resulting in a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, to second sets of samples of the currently decoded frame, decode one or more syntax elements for each of the second sets of samples, and use the one or more syntax elements for each of the second sets of samples to identify the predetermined set of the first sets of samples out of the second sets of samples or out of subsets of the second sets of samples. 11. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to decode one or more syntax elements for the currently decoded frame and apply the prediction of the residual signal of the currently decoded frame to a predetermined set of first sets of samples of the currently decoded frame, the predetermined set being defined by the one or more syntax elements, and apply an intra prediction of the currently decoded frame partially forming a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, to a predetermined first set of second sets of samples of the currently decoded frame, and a motion-compensated prediction of the currently decoded frame partially forming the prediction signal of the currently decoded frame, to a predetermined second set of the second sets of samples, so that the first sets of samples are independent from the first and second sets of the second sets of samples. 12. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to extract a residual reference frame index indexing the previously decoded frame, from a bitstream. 13. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to infer a residual reference frame index indexing the previously decoded frame such that the latter is the one based on which the hybrid video decoder is configured to determine a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to. 14. 
Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to predict residual prediction motion parameters used in predicting the residual signal of the currently decoded frame for a predetermined set of samples of the currently decoded frame, using residual prediction motion parameters previously used by the hybrid video decoder in predicting the residual signal of the currently decoded frame for another set of samples of the currently decoded frame, or a residual signal of a previously decoded frame. 15. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to predict residual prediction motion parameters used in predicting the residual signal of the currently decoded frame for a predetermined set of samples of the currently decoded frame using motion parameters previously used by the hybrid video decoder in determining a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to, for another set or the same set of samples of the currently decoded frame, or previously used by the hybrid video decoder in determining a prediction signal of a previously decoded frame. 16. Hybrid video decoder according to claim 1, wherein the hybrid video decoder is configured to use multi-hypothesis prediction to predict the residual signal of the currently decoded frame and to differently decode a number of hypotheses used in predicting the residual signal of the currently decoded frame as a difference from a number of video hypotheses used by the hybrid video decoder in determining a prediction signal of the currently decoded frame, a prediction error of which the residual signal relates to. 17. 
Hybrid video decoder according to claim 1, further configured to predict a currently decoded frame from previously decoded video portions to obtain a prediction signal of the currently decoded frame, a prediction error of which the residual signal of the currently decoded frame relates to, entropy decode a final residual signal of the currently decoded frame, and reconstruct the currently decoded frame by summing the prediction signal of the currently decoded frame, a residual prediction signal of the currently decoded frame, obtained by the hybrid video decoder in predicting the residual signal of the currently decoded frame, and the final residual signal of the currently decoded frame, wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame and to select, on a frame basis, to build the reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame, or a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame obtained by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal of an even more previously decoded frame, or wherein the hybrid video decoder is configured to entropy decode a final residual signal of the previously decoded frame, build a first candidate reference residual signal of the previously decoded frame by the final residual signal of the previously decoded frame and insert same in a decoded picture buffer of the hybrid video decoder, build a second candidate reference residual signal of the previously decoded frame by a sum of the final residual signal of the previously decoded frame and a residual prediction signal of the previously decoded frame obtained by predicting a residual signal of the previously decoded frame by motion-compensated prediction using a reference residual signal 
of an even more previously decoded frame, and insert same into the decoded picture buffer, and use the first or second candidate reference residual signal as the reference residual signal of the previously decoded frame depending on a signalization within a bitstream. 18. Hybrid video decoder according to claim 17, wherein the hybrid video decoder is configured to decode one or more syntax elements for the currently decoded frame and apply the prediction of the residual signal of the currently decoded frame to a predetermined set of first sets of samples of the currently decoded frame, and apply the prediction of the currently decoded frame to second sets of samples of the currently decoded frame, decode one or more syntax elements for each of the second sets of samples, and use the one or more syntax elements for each of the second sets of samples to identify the predetermined set of the first sets of samples out of the second sets of samples. 19. Hybrid video encoder configured to additionally predict a residual signal of a currently encoded frame by motion-compensated prediction using a reference residual signal of a previously encoded frame. 20. 
Hybrid video encoder according to claim 19, further configured to predict a currently encoded frame from previously encoded video portions to obtain a prediction signal of the currently encoded frame, a prediction error of which the residual signal of the currently encoded frame relates to, entropy encode a final residual signal of the currently encoded frame so that the currently encoded frame is reconstructed by summing the prediction signal of the currently encoded frame, a residual prediction signal of the currently encoded frame, obtained by the hybrid video encoder in predicting the residual signal of the currently encoded frame, and the final residual signal of the currently encoded frame, wherein the hybrid video encoder is configured to entropy encode a final residual signal of the previously encoded frame and to select, on a frame basis, to build the reference residual signal of the previously encoded frame by the final residual signal of the previously encoded frame, or a sum of the final residual signal of the previously encoded frame and a residual prediction signal of the previously encoded frame obtained by predicting a residual signal of the previously encoded frame by motion-compensated prediction using a reference residual signal of an even more previously encoded frame, or wherein the hybrid video encoder is configured to entropy encode a final residual signal of the previously encoded frame, build a first candidate reference residual signal of the previously encoded frame by the final residual signal of the previously encoded frame and insert same in a decoded picture buffer of the hybrid video encoder, build a second candidate reference residual signal of the previously encoded frame by a sum of the final residual signal of the previously encoded frame and a residual prediction signal of the previously encoded frame obtained by predicting a residual signal of the previously encoded frame by motion-compensated prediction using a reference 
residual signal of an even more previously encoded frame, and insert same into the decoded picture buffer, and use the first or second candidate reference residual signal as the reference residual signal of the previously encoded frame with signalizing the use of the first or second candidate reference residual signal within a bitstream. 21. Hybrid video decoding method comprising additionally predicting a residual signal of a currently decoded frame by motion-compensated prediction using a reference residual signal of a previously decoded frame. 22. Hybrid video encoding method comprising additionally predicting a residual signal of a currently encoded frame by motion-compensated prediction using a reference residual signal of a previously encoded frame. 23. Hybrid video coded bitstream comprising information on residual prediction motion parameters prescribing a motion-compensated prediction of a prediction error of a residual signal of a predetermined frame by motion-compensated prediction using a reference residual signal of a previously coded frame. 24. A non-transitory computer readable medium including a computer program comprising a program code for performing, when running on a computer, a hybrid video decoding method comprising additionally predicting a residual signal of a currently decoded frame by motion-compensated prediction using a reference residual signal of a previously decoded frame. 25. A non-transitory computer readable medium including a computer program comprising a program code for performing, when running on a computer, a hybrid video encoding method comprising additionally predicting a residual signal of a currently encoded frame by motion-compensated prediction using a reference residual signal of a previously encoded frame.
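Claims 17 and 20 both describe reconstructing a frame as a sample-wise sum of three signals: the (intra or motion-compensated) prediction signal, the motion-compensated residual prediction signal, and the entropy-coded final residual. A minimal sketch of that summation, with hypothetical NumPy arrays standing in for the three signals (the claims do not prescribe any particular data layout):

```python
import numpy as np

def reconstruct_frame(prediction, residual_prediction, final_residual):
    """Sum the three signals named in the claims, sample by sample.
    All argument names here are illustrative, not from the claims."""
    return prediction + residual_prediction + final_residual

# Hypothetical 2x2 "frames" for illustration.
pred = np.array([[100, 102], [98, 101]])
res_pred = np.array([[3, -1], [0, 2]])
final_res = np.array([[1, 0], [-2, 1]])
print(reconstruct_frame(pred, res_pred, final_res))
# → [[104 101]
#    [ 96 104]]
```

The residual prediction term is what distinguishes this scheme from a conventional hybrid decoder, where reconstruction would sum only the prediction signal and the decoded residual.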
2,400
7,988
7,988
14,662,715
2,419
A coding distortion removing method is provided for removing coding distortion in two adjacent blocks located on both sides of a block boundary between the two adjacent blocks in a reconstructed image. The method includes deriving a difference of pixel values between a pixel in a first block of the reconstructed image and a pixel in a second block of the reconstructed image adjacent to the first block, the first block having a first quantization parameter and the second block having a second quantization parameter, deriving an average value of the first quantization parameter and the second quantization parameter, and setting a threshold value to the average value of the first quantization parameter and the second quantization parameter. The method also includes comparing the difference of pixel values with the threshold value, and removing a coding distortion in an area disposed on both sides of the block boundary between the first block and the second block, by applying a filter for coding distortion removal.
1. A coding distortion removing method for removing a coding distortion in two adjacent blocks located on both sides of a block boundary between the two adjacent blocks in a reconstructed image, the method comprising: a difference value deriving step for deriving a difference of pixel values between a pixel in a first block of the reconstructed image and a pixel in a second block of the reconstructed image adjacent to the first block, the first block having a first quantization parameter and the second block having a second quantization parameter; an average deriving step for deriving an average value of the first quantization parameter and the second quantization parameter; a threshold value setting step for setting a threshold value in accordance with the average value of the first quantization parameter and the second quantization parameter; a comparing step for comparing the difference of pixel values derived in said difference value deriving step with the threshold value set in the threshold value setting step; and a removing step for removing a coding distortion in an area disposed on both sides of the block boundary between the first block and the second block, by applying a filter for coding distortion removal based on a result of said comparing step, wherein the coding distortion removal is not conducted when the difference is greater than the threshold value, and the coding distortion removal is conducted by applying the filter when the difference is smaller than the threshold value.
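The claimed filtering decision can be sketched compactly: the threshold is the average of the two blocks' quantization parameters, and the deblocking filter is applied only when the boundary pixel difference is smaller than that threshold (a large difference is treated as a real edge, not a coding artifact). The simple averaging filter below is an illustrative stand-in; the claim does not specify the filter taps, and the treatment of the equal-to-threshold case is an assumption.

```python
def remove_block_distortion(p, q, qp1, qp2):
    """Decide whether to deblock across a boundary, per the claimed method.
    p, q: boundary pixels from the first and second blocks.
    qp1, qp2: the blocks' quantization parameters."""
    threshold = (qp1 + qp2) / 2          # average of the two QPs
    if abs(p - q) >= threshold:
        return p, q                      # likely a true edge: left untouched
    avg = (p + q) // 2                   # pull both samples toward the mean
    return (p + avg) // 2, (q + avg) // 2

print(remove_block_distortion(100, 104, 28, 32))  # small diff → filtered: (101, 103)
print(remove_block_distortion(100, 160, 28, 32))  # large diff → untouched: (100, 160)
```

Tying the threshold to the QPs is the key design point: coarser quantization produces larger blocking artifacts, so a higher QP average widens the range of differences still attributed to coding distortion.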
2,400
7,989
7,989
14,662,737
2,419
An image decoding apparatus is provided that decodes a coded image, the coded image being generated by coding an image segmented into a plurality of blocks on a block basis. The image decoding apparatus includes a quantization parameter obtaining unit that obtains a quantization parameter for each block of the plurality of blocks, a decoding unit that decodes the coded image to obtain a reconstructed image, and a pixel difference value obtaining unit that obtains a pixel difference value. The image decoding apparatus also includes a comparing unit that compares the pixel difference value with a threshold value, and a removing unit that removes a coding distortion in an area disposed on both sides of a block boundary between the first block and the second block, by applying a filter for coding distortion removal.
1. An image decoding apparatus which decodes a coded image, the coded image being generated by coding an image segmented into a plurality of blocks on a block basis, the image decoding apparatus comprising: a quantization parameter obtaining unit which obtains a quantization parameter for each block of the plurality of blocks; a decoding unit which decodes the coded image to obtain a reconstructed image; a pixel difference value obtaining unit which obtains a pixel difference value which is a difference between a pixel value of one pixel from a first block of the reconstructed image and a pixel value of one pixel from a second block of the reconstructed image adjacent to the first block; a comparing unit which compares the pixel difference value with a threshold value determined by the quantization parameters of the first block and the second block; and a removing unit which removes a coding distortion in an area disposed on both sides of a block boundary between the first block and the second block, by applying a filter for coding distortion removal based on the result of said comparing unit, wherein the coding distortion removal is not conducted when the pixel difference value is greater than the threshold value, and the coding distortion removal is conducted by applying the filter when the pixel difference value is smaller than the threshold value. 2. The image decoding apparatus according to claim 1, wherein the threshold value is determined by an average value of a quantization parameter for the first block and a quantization parameter for the second block.
2,400
7,990
7,990
14,800,453
2,453
Systems and methods are disclosed for putting a plurality of endpoints in communication with a media host server and a real time communications session manager, wherein a client application running on an endpoint recognizes commands sent to a media host server by a media player running on the endpoint, compares those commands to a pre-programmed set of commands, and sends an indication of the commands to the communications session manager.
1. A computing device, the computing device being associated with a first user in a real time communication session over a network, the computing device comprising: a memory containing machine readable medium comprising machine executable code having stored thereon instructions for performing a method of providing electronic media playback; a processor coupled to the memory, the processor configured to execute the machine executable code to: send and receive voice data with a second user at another computing device as part of the real time communication session; during the real time communication session, detect media playback control signals sent by a media streaming application at the computing device; and in response to detecting the media playback control signals, sending an indication of the media playback control signals to a session management server associated with the real time communication session. 2. The system of claim 1, wherein the media streaming application displays a video stream on the computing device. 3. The system of claim 1, wherein the media playback control signals represent media playback commands including at least one of: play, pause, fast forward, reverse, mute, and scrub. 4. The system of claim 1, wherein the media playback control signals represent a media playback command to begin streaming a media file. 5. The system of claim 1, wherein the computing device includes a personal computer, a tablet computer, or a smart phone. 6. The system of claim 1, wherein the media streaming application runs within a client application that runs on the computing device. 7. The system of claim 1, wherein a client application that runs on the computing device performs the detecting the media playback control signals and the sending an indication of the media playback control signals to the session management server. 8. 
The system of claim 1, wherein the indication of the media playback control signals includes control signals corresponding to an application programming interface (API) associated with the media streaming application. 9. The system of claim 1, wherein the session management server includes a Web RTC server, and the real time communication session includes a Web RTC session. 10. The system of claim 1, wherein the processor is further configured to execute the machine executable code to: receive an indication of media playback control signals from the session management server associated with the real time communication session; and send the indicated media playback control signals to a media host via the media streaming application at the computing device. 11. A method performed by a session management server in a network, the session management server facilitating a real-time communication session between a first endpoint device and a second endpoint device, the method comprising: monitoring the first endpoint device in the network for control signals sent from a media player on the first endpoint device to a media host corresponding to the media player; recognizing at least one playback command by comparing the control signals to a predefined set of control signals; and sending a message to the second endpoint device informing the second endpoint device of the at least one playback command. 12. The method of claim 11, wherein the control signals include an address for a streaming media file. 13. The method of claim 11, wherein the at least one playback command comprises at least one of: pause, play, fast forward, reverse, mute, and scrub. 14. The method of claim 11, wherein the session management server includes a Web RTC server, and the real-time communication session includes a Web RTC session. 15. 
The method of claim 11, wherein the message includes control signals corresponding to an application programming interface (API) associated with the media host and media player. 16. A computer program product having a computer readable medium tangibly recording computer program logic for synchronizing media playback at a first network device, the computer program product comprising: code to engage in a real time communication session by sending and receiving at least voice data with a second network device; code to monitor a media streaming player at the first network device for control signals communicated between the media streaming player and a media host server; and code to send a message indicative of the control signals to a network session manager that is separate from the media host server. 17. The computer program of claim 16, further comprising: code to compare the control signals against a set of predefined playback commands for the media host server; and code to identify a first one of the playback commands from the set of predefined playback commands based on the comparing, wherein the message indicative of the control signals is indicative of the first one of the playback commands. 18. The computer program product of claim 16, wherein the network session manager includes a Web RTC server, and wherein the real time communication session includes a Web RTC session. 19. The method of claim 16, wherein the playback commands include at least one of: pause, play, fast forward, reverse, mute, and scrub. 20. The method of claim 16, wherein the first network device and the second network device include at least one of a tablet computer, a laptop computer, and a smart phone.
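The method of claims 11-15 boils down to monitoring player-to-host control signals, recognizing playback commands by comparison against a predefined set, and forwarding an indication of each recognized command toward the other endpoint. A minimal sketch under those assumptions; the command strings and the callback interface are hypothetical, since the claims name the commands but no concrete API:

```python
# Predefined set of playback commands named in the claims.
KNOWN_COMMANDS = {"play", "pause", "fast_forward", "reverse", "mute", "scrub"}

def relay_playback_commands(control_signals, notify_session_manager):
    """Compare intercepted control signals against the predefined command
    set and forward an indication of each recognized command via the
    session-manager callback. Unrecognized signals are ignored."""
    recognized = []
    for signal in control_signals:
        if signal in KNOWN_COMMANDS:
            recognized.append(signal)
            notify_session_manager({"event": "playback", "command": signal})
    return recognized

sent = []
relay_playback_commands(["pause", "volume_up", "scrub"], sent.append)
print([msg["command"] for msg in sent])  # → ['pause', 'scrub']
```

Because the session manager (per the claims, potentially a Web RTC server) is separate from the media host, only the indication of the command travels to it; the media stream itself still flows between player and host.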
2,400
7,991
7,991
15,199,687
2,425
Systems and methods that graphically present a virtual environment are disclosed. An exemplary embodiment receives a request from an electronic device of an owner or a guest to present the owner's virtual environment that includes a virtual bookshelf case with a personal virtual digital video disc (DVD) collection of the owner that includes a plurality of virtual DVDs located on the virtual bookshelf case; communicates data corresponding to the owner's virtual environment that includes first data used to render an image of the virtual bookshelf case of the owner, second data used to render images of the plurality of individual virtual DVDs that are included in the owner's personal virtual DVD collection, and third data corresponding to a virtual object; and presents a graphical representation of the virtual bookshelf case, the plurality of virtual DVDs on at least one shelf of the virtual bookshelf case, and the virtual object.
1. A method for graphically presenting objects in a virtual environment, the method comprising: receiving a request, at a video community system, from an electronic device of one of an owner of a virtual environment or a guest visiting the owner's virtual environment, wherein the owner's virtual environment includes a virtual bookshelf case with a personal virtual digital video disc (DVD) collection of the owner that includes a plurality of virtual DVDs located on the virtual bookshelf case, and wherein the request is received from the electronic device of the owner or the guest to present the owner's virtual environment on a display; communicating data corresponding to an image of the owner's virtual environment from the video community system to the electronic device, wherein the communicated data comprises: first data that is used to render a first image portion that includes the virtual bookshelf case of the owner; second data that is used to render a second image portion that includes each of the plurality of virtual DVDs that are included in the owner's personal virtual DVD collection; and third data that is used to render a third image portion that includes a virtual object; and presenting the image on the display to the requesting owner or guest, wherein the presented image comprises: the first image portion corresponding to a graphical representation of the virtual bookshelf case; the second image portion that includes a plurality of second images each corresponding to a graphical representation of the plurality of virtual DVDs, wherein the plurality of virtual DVDs are presented on at least one shelf of the virtual bookshelf case; and the third image portion corresponding to the virtual object. 2. 
The method of claim 1, wherein the electronic device is a head mounted display (HMD) that is worn on a head of the owner or the guest, and wherein presenting the image comprises: presenting a three-dimensional virtual graphical image representation that includes the virtual bookshelf case, the plurality of virtual DVDs located on the virtual bookshelf case, and the virtual object. 3. The method of claim 1, wherein the virtual object is presented on the at least one shelf of the virtual bookshelf case. 4. The method of claim 1, wherein the virtual bookshelf case is presented at a first location of the owner's virtual environment, wherein the virtual object is presented at a second location of the owner's virtual environment, and wherein the first location and the second location are at different locations in the owner's virtual environment. 5. The method of claim 1, wherein the virtual object is presented as a static image. 6. The method of claim 1, wherein the virtual object is presented as a video that shows a moving object that has a changing location over some duration, and wherein the moving object represents an image of a virtual person moving about the owner's virtual environment. 7. The method of claim 6, wherein the image of the virtual person further includes a fourth image portion corresponding to another virtual object that is being held by the virtual person. 8. The method of claim 6, wherein the virtual person moving about the owner's virtual environment includes an audio portion that is presented as sound, wherein the sound is perceived by the owner or guest as dialogue being “spoken” by the virtual person. 9. 
The method of claim 1, wherein the third image portion corresponding to the virtual object is not initially visible to the viewing owner or guest, and wherein the owner or guest must virtually search the owner's virtual environment to locate the virtual object, the method further comprising: providing an incentive to the owner or guest in response to the owner or guest locating the virtual object. 10. The method of claim 1, wherein prior to receiving the request at the video community system from the electronic device of one of the owner or the guest, the method further comprising: receiving a request to add the virtual object to the virtual environment of at least one specified owner, wherein the request to add is received at the video community system from a requesting party; verifying that the requesting party is authorized to add the virtual object in the specified owner's virtual environment; if the requesting party is authorized, storing electronic data associated with the virtual object into a bookshelf case storage medium for the specified owner; and if the requesting party is not authorized, preventing the storing of the electronic data associated with the virtual object into the bookshelf case storage medium for the specified owner. 11. The method of claim 10, further comprising: providing an incentive to the requesting party in response to the storing of the electronic data associated with the virtual object into the bookshelf case storage medium for the specified owner. 12. The method of claim 10, wherein storing electronic data associated with the virtual object into the bookshelf case storage medium for the specified owner further comprises: storing fingerprint information that associates the virtual object with the requesting party, wherein the fingerprint includes at least information that identifies the requesting party. 13. 
The method of claim 12, wherein after the requesting party has left the virtual object in the owner's virtual environment, the method further comprising: receiving, at the video community system, a request from the viewing owner to inspect the virtual object; accessing the fingerprint information that is associated with the virtual object; and communicating the fingerprint information from the video community system to the electronic device of the viewing owner, wherein the requesting party is identified to the owner. 14. The method of claim 1, wherein the owner or the guest is currently viewing a presentation of the owner's virtual environment on the display, the method further comprising: receiving, at the video community system, a request from the viewing owner or guest to inspect the virtual object, wherein the request to inspect corresponds to a request for supplemental information pertaining to the virtual object; accessing the supplemental information that is associated with the virtual object; communicating the supplemental information from the video community system to the electronic device of the viewing owner or guest; and presenting the supplemental information on the display, wherein the presented supplemental information includes at least one graphic object that is an enlarged sized image corresponding to the selected virtual object. 15. The method of claim 1, wherein the virtual object is associated with a display duration, and wherein presenting the third image corresponding to the virtual object further comprises: presenting the third image corresponding to the virtual object initially when the owner or guest initially views the virtual environment; and ending presentation of the third image corresponding to the virtual object upon expiration of the display duration. 16. 
A video community system, comprising: an owner's bookshelf case storage medium that stores electronic data for each one of a plurality of owners who are community members of the video community system and who are each an owner of at least one media device, wherein the electronic data for each owner comprises: first data that is used to render an image of a virtual bookshelf case associated with the owner in an owner's virtual environment; second data that is used to render images of the plurality of individual virtual DVDs that are included in an owner's personal virtual DVD collection and that are presented on at least one shelf of the owner's virtual bookshelf case; and third data corresponding to at least one virtual object, wherein the at least one virtual object is presented in the owner's virtual environment; an interface communicatively coupled to the media device of the owner, and configured to receive a request from the owner or a visiting guest to receive the first data, the second data and the third data that is used to render and present an image of the owner's virtual environment, wherein the request includes information that identifies the owner; and a processor system communicatively coupled to the owner's bookshelf case storage medium and the interface, and configured to: communicate the first data, the second data and the third data to the owner's media device in response to receiving the request from the owner or the visiting guest. 17. The video community system of claim 16, wherein the third data includes presentation location information that defines a location in the image of the owner's virtual environment that the virtual object is presented at. 18. The video community system of claim 16, wherein the third data includes presentation location information that defines a location in the image of the at least one shelf of the owner's virtual bookshelf case that the virtual object is presented at. 19. 
The video community system of claim 16, wherein the owner's bookshelf case storage medium that stores the electronic data for each one of the plurality of owners further comprises, for at least one owner: fourth data that identifies a plurality of authorized requesting parties that are authorized to add a new virtual object into the owner's bookshelf case storage medium for that owner, wherein the processor system is further configured to: receive a request to add the new virtual object to the virtual environment of at least one specified owner, wherein the request to add is received at the video community system from the requesting party; verify that the requesting party is authorized to add the new virtual object in the specified owner's virtual environment based on an identity of the requesting party and the fourth data that identifies the plurality of authorized requesting parties; if the requesting party is authorized, store the electronic data associated with the new virtual object into the bookshelf case storage medium for the specified owner; and if the requesting party is not authorized, prevent the storing of the electronic data associated with the new virtual object into the bookshelf case storage medium for the specified owner. 20. 
The video community system of claim 19, further comprising: a virtual object catalogue that stores electronic data for a plurality of different virtual objects, wherein the electronic data for each of the plurality of different virtual objects includes at least one keyword that describes the associated virtual object, wherein the request to add the new virtual object to the virtual environment of the specified owner further includes at least one descriptive keyword that is associated with the new virtual object that the requesting party intends to add to the owner's virtual environment, and wherein the processor system is further configured to: compare the at least one descriptive keyword that is associated with the new virtual object with the keywords for each of the different virtual objects to identify a plurality of candidate virtual objects that correspond to the new virtual object; communicate information corresponding to the plurality of candidate virtual objects to an electronic device of the requesting party, wherein the plurality of candidate virtual objects are visually presented to the requesting party; receive a selection of one of the plurality of candidate virtual objects from the electronic device of the requesting party; and store the electronic data corresponding to the selected one of the plurality of candidate virtual objects into the bookshelf case storage medium for the specified owner.
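The add-object authorization flow recited in claims 10, 12, and 19 above — verify the requesting party against the owner's authorized list, then either store the object (with fingerprint information identifying who left it) or prevent the storing — can be sketched as a simple membership check. This is an illustrative sketch only; the dictionary-based storage and identity representation are assumptions, not taken from the application.

```python
# Sketch of the claim-19 authorization gate: store a new virtual object in
# an owner's bookshelf storage only if the requester is on that owner's
# authorized list. The data structures here are illustrative assumptions.
def add_virtual_object(owner_id, requester_id, obj, authorized, storage):
    """Return True and store obj if requester is authorized for owner."""
    if requester_id in authorized.get(owner_id, set()):
        # Claim 12: record fingerprint information identifying the requester.
        entry = dict(obj, fingerprint=requester_id)
        storage.setdefault(owner_id, []).append(entry)
        return True
    return False  # not authorized: storing is prevented

authorized = {"owner1": {"alice"}}
storage = {}
print(add_virtual_object("owner1", "alice", {"name": "gift"}, authorized, storage))  # True
print(add_virtual_object("owner1", "bob", {"name": "spam"}, authorized, storage))    # False
```

The stored fingerprint is what claim 13's inspect request would later return to the viewing owner.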
Systems and methods that graphically present a virtual environment are disclosed. An exemplary embodiment receives a request from an electronic device of an owner or a guest to present the owner's virtual environment that includes a virtual bookshelf case with a personal virtual digital video disc (DVD) collection of the owner that includes a plurality of virtual DVDs located on a virtual bookshelf case; communicates data corresponding to the owner's virtual environment that includes first data used to render an image of the virtual bookshelf case of the owner, second data used to render images of the plurality of individual virtual DVDs that are included in the owner's personal virtual DVD collection, and third data corresponding to a virtual object; and presents a graphical representation of the virtual bookshelf case, the plurality of virtual DVDs on at least one shelf of the virtual bookshelf case, and the virtual object. 1. A method for graphically presenting objects in a virtual environment, the method comprising: receiving a request, at a video community system, from an electronic device of one of an owner of a virtual environment or a guest visiting the owner's virtual environment, wherein the owner's virtual environment includes a virtual bookshelf case with a personal virtual digital video disc (DVD) collection of the owner that includes a plurality of virtual DVDs located on the virtual bookshelf case, and wherein the request is received from the electronic device of the owner or the guest to present the owner's virtual environment on a display; communicating data corresponding to an image of the owner's virtual environment from the video community system to the electronic device, wherein the communicated data comprises: first data that is used to render a first image portion that includes the virtual bookshelf case of the owner; second data that is used to render a second image portion that includes each of the plurality of virtual DVDs that are included in 
the owner's personal virtual DVD collection; and third data that is used to render a third image portion that includes a virtual object; and presenting the image on the display to the requesting owner or guest, wherein the presented image comprises: the first image portion corresponding to a graphical representation of the virtual bookshelf case; the second image portion that includes a plurality of second images each corresponding to a graphical representation of the plurality of virtual DVDs, wherein the plurality of virtual DVDs are presented on at least one shelf of the virtual bookshelf case; and the third image portion corresponding to the virtual object. 2. The method of claim 1, wherein the electronic device is a head mounted display (HMD) that is worn on a head of the owner or the guest, and wherein presenting the image comprises: presenting a three-dimensional virtual graphical image representation that includes the virtual bookshelf case, the plurality of virtual DVDs located on the virtual bookshelf case, and the virtual object. 3. The method of claim 1, wherein the virtual object is presented on the at least one shelf of the virtual bookshelf case. 4. The method of claim 1, wherein the virtual bookshelf case is presented at a first location of the owner's virtual environment, wherein the virtual object is presented at a second location of the owner's virtual environment, and wherein the first location and the second location are at different locations in the owner's virtual environment. 5. The method of claim 1, wherein the virtual object is presented as a static image. 6. The method of claim 1, wherein the virtual object is presented as a video that shows a moving object that has a changing location over some duration, and wherein the moving object represents an image of a virtual person moving about the owner's virtual environment. 7. 
The method of claim 6, wherein the image of the virtual person further includes a fourth image portion corresponding to another virtual object that is being held by the virtual person. 8. The method of claim 6, wherein the virtual person moving about the owner's virtual environment includes an audio portion that is presented as sound, wherein the sound is perceived by the owner or guest as dialogue being “spoken” by the virtual person. 9. The method of claim 1, wherein the third image portion corresponding to the virtual object is not initially visible to the viewing owner or guest, and wherein the owner or guest must virtually search the owner's virtual environment to locate the virtual object, the method further comprising: providing an incentive to the owner or guest in response to the owner or guest locating the virtual object. 10. The method of claim 1, wherein prior to receiving the request at the video community system from the electronic device of one of the owner or the guest, the method further comprising: receiving a request to add the virtual object to the virtual environment of at least one specified owner, wherein the request to add is received at the video community system from a requesting party; verifying that the requesting party is authorized to add the virtual object in the specified owner's virtual environment; if the requesting party is authorized, storing electronic data associated with the virtual object into a bookshelf case storage medium for the specified owner; and if the requesting party is not authorized, preventing the storing of the electronic data associated with the virtual object into the bookshelf case storage medium for the specified owner. 11. The method of claim 10, further comprising: providing an incentive to the requesting party in response to the storing of the electronic data associated with the virtual object into the bookshelf case storage medium for the specified owner. 12. 
The method of claim 10, wherein storing electronic data associated with the virtual object into the bookshelf case storage medium for the specified owner further comprises: storing fingerprint information that associates the virtual object with the requesting party, wherein the fingerprint includes at least information that identifies the requesting party. 13. The method of claim 12, wherein after the requesting party has left the virtual object in the owner's virtual environment, the method further comprising: receiving, at the video community system, a request from the viewing owner to inspect the virtual object; accessing the fingerprint information that is associated with the virtual object; and communicating the fingerprint information from the video community system to the electronic device of the viewing owner, wherein the requesting party is identified to the owner. 14. The method of claim 1, wherein the owner or the guest is currently viewing a presentation of the owner's virtual environment on the display, the method further comprising: receiving, at the video community system, a request from the viewing owner or guest to inspect the virtual object, wherein the request to inspect corresponds to a request for supplemental information pertaining to the virtual object; accessing the supplemental information that is associated with the virtual object; communicating the supplemental information from the video community system to the electronic device of the viewing owner or guest; and presenting the supplemental information on the display, wherein the presented supplemental information includes at least one graphic object that is an enlarged sized image corresponding to the selected virtual object. 15. 
The method of claim 1, wherein the virtual object is associated with a display duration, and wherein presenting the third image corresponding to the virtual object further comprises: presenting the third image corresponding to the virtual object initially when the owner or guest initially views the virtual environment; and ending presentation of the third image corresponding to the virtual object upon expiration of the display duration. 16. A video community system, comprising: an owner's bookshelf case storage medium that stores electronic data for each one of a plurality of owners who are community members of the video community system and who are each an owner of at least one media device, wherein the electronic data for each owner comprises: first data that is used to render an image of a virtual bookshelf case associated with the owner in an owner's virtual environment; second data that is used to render images of the plurality of individual virtual DVDs that are included in an owner's personal virtual DVD collection and that are presented on at least one shelf of the owner's virtual bookshelf case; and third data corresponding to at least one virtual object, wherein the at least one virtual object is presented in the owner's virtual environment; an interface communicatively coupled to the media device of the owner, and configured to receive a request from the owner or a visiting guest to receive the first data, the second data and the third data that is used to render and present an image of the owner's virtual environment, wherein the request includes information that identifies the owner; and a processor system communicatively coupled to the owner's bookshelf case storage medium and the interface, and configured to: communicate the first data, the second data and the third data to the owner's media device in response to receiving the request from the owner or the visiting guest. 17. 
The video community system of claim 16, wherein the third data includes presentation location information that defines a location in the image of the owner's virtual environment that the virtual object is presented at. 18. The video community system of claim 16, wherein the third data includes presentation location information that defines a location in the image of the at least one shelf of the owner's virtual bookshelf case that the virtual object is presented at. 19. The video community system of claim 16, wherein the owner's bookshelf case storage medium that stores the electronic data for each one of the plurality of owners further comprises, for at least one owner: fourth data that identifies a plurality of authorized requesting parties that are authorized to add a new virtual object into the owner's bookshelf case storage medium for that owner, wherein the processor system is further configured to: receive a request to add the new virtual object to the virtual environment of at least one specified owner, wherein the request to add is received at the video community system from the requesting party; verify that the requesting party is authorized to add the new virtual object in the specified owner's virtual environment based on an identity of the requesting party and the fourth data that identifies the plurality of authorized requesting parties; if the requesting party is authorized, store the electronic data associated with the new virtual object into the bookshelf case storage medium for the specified owner; and if the requesting party is not authorized, prevent the storing of the electronic data associated with the new virtual object into the bookshelf case storage medium for the specified owner. 20. 
The video community system of claim 19, further comprising: a virtual object catalogue that stores electronic data for a plurality of different virtual objects, wherein the electronic data for each of the plurality of different virtual objects includes at least one keyword that describes the associated virtual object, wherein the request to add the new virtual object to the virtual environment of the specified owner further includes at least one descriptive keyword that is associated with the new virtual object that the requesting party intends to add to the owner's virtual environment, and wherein the processor system is further configured to: compare the at least one descriptive keyword that is associated with the new virtual object with the keywords for each of the different virtual objects to identify a plurality of candidate virtual objects that correspond to the new virtual object; communicate information corresponding to the plurality of candidate virtual objects to an electronic device of the requesting party, wherein the plurality of candidate virtual objects are visually presented to the requesting party; receive a selection of one of the plurality of candidate virtual objects from the electronic device of the requesting party; and store the electronic data corresponding to the selected one of the plurality of candidate virtual objects into the bookshelf case storage medium for the specified owner.
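The keyword-matching step in claim 20 above — comparing the requesting party's descriptive keywords against the catalogue's per-object keywords to identify candidate virtual objects — can be sketched as a set-overlap search. This is an illustrative sketch only; the catalogue layout, field names, and overlap-count ranking are assumptions, not taken from the claims.

```python
# Illustrative sketch of the claim-20 keyword comparison: rank catalogue
# entries by how many keywords they share with the request. The catalogue
# structure and scoring rule are assumptions for demonstration only.
def find_candidate_objects(request_keywords, catalogue):
    """Return catalogue entries sharing at least one keyword, best match first."""
    request_set = {kw.lower() for kw in request_keywords}
    scored = []
    for obj in catalogue:
        overlap = request_set & {kw.lower() for kw in obj["keywords"]}
        if overlap:
            scored.append((len(overlap), obj))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [obj for _, obj in scored]

catalogue = [
    {"name": "trophy", "keywords": ["award", "gold", "prize"]},
    {"name": "poster", "keywords": ["movie", "art"]},
    {"name": "medal",  "keywords": ["award", "prize"]},
]
print([o["name"] for o in find_candidate_objects(["prize", "gold"], catalogue)])
# ['trophy', 'medal']
```

The returned candidates would then be presented to the requesting party for selection, per the remainder of the claim.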
2,400
7,992
7,992
15,393,516
2,494
In one aspect, an example method includes (i) accessing a first set of ordered content items and a second set of active/inactive status attributes; (ii) identifying a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; (iii) using the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; (iv) determining that a particular content item of the first set satisfies a condition, wherein the particular content item corresponds to a particular active/inactive status attribute of the second set; (v) based on the determination, modifying the particular active/inactive status attribute; and (vi) after modifying the particular active/inactive status attribute, repeating the identifying and using acts, thereby causing modification of the generated video content.
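The six steps in the abstract above can be sketched as a small loop over parallel lists of content items and active/inactive flags: filtering by flag generates the output, and flipping a flag before repeating the filter modifies the generated content. A minimal illustration only; the data shapes and the trigger condition are assumptions, not from the application.

```python
# Minimal sketch of the abstract's steps: ordered content items paired with
# active/inactive flags; flipping a flag and regenerating changes the output.
# The list-based shapes and the triggering event are illustrative assumptions.
def generate_video_content(items, active):
    """Steps (ii)-(iii): keep only items whose flag is active, in order."""
    return [item for item, flag in zip(items, active) if flag]

items = ["Race A results", "Race B results", "Race C results"]
active = [True, False, True]

first_pass = generate_video_content(items, active)
print(first_pass)   # ['Race A results', 'Race C results']

# Steps (iv)-(v): some condition is satisfied for item 1, so flip its flag.
active[1] = True

# Step (vi): repeat the identifying and using acts; item 1 now appears.
second_pass = generate_video_content(items, active)
print(second_pass)  # ['Race A results', 'Race B results', 'Race C results']
```

Note that the ordering of the first set is preserved in the output, as the claims require.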
1. A method comprising: accessing, by a computing system, a first set of ordered content items and a second set of active/inactive status attributes, wherein each content item of the first set comprises respective data associated with an election and corresponds to a respective active/inactive status attribute of the second set; identifying, by the computing system, a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; using, by the computing system, the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; making, by the computing system, a determination that particular data associated with the election satisfies each condition in a condition set, wherein the particular data associated with the election corresponds to a particular content item of the first set, and wherein the particular content item corresponds to a particular active/inactive status attribute of the second set; based, at least in part, on the determination that the particular data associated with the election satisfies each condition in the condition set, modifying, by the computing system, the particular active/inactive status attribute; and after modifying the particular active/inactive status attribute, repeating, by the computing system, the identifying and using acts, thereby causing modification of the generated video content. 2. (canceled) 3. 
The method of claim 1, wherein the condition set comprises at least one condition from the group consisting of: a first condition that less than a threshold amount of votes cast in connection with a vote count have been tabulated; a second condition that a difference between an amount of votes tabulated for a first candidate of the election and an amount of votes tabulated for a second candidate of the election is at least a threshold amount; a third condition that a first projected outcome of the election and a second projected outcome of the election differ by at least a threshold extent, wherein the first projected outcome of the election is determined before the second projected outcome of the election is determined; a fourth condition that a candidate associated with the particular content item is no longer associated with a race of the election; a fifth condition that voter turnout associated with the vote count is less than a threshold amount; a sixth condition that the vote count is associated with a particular race of the election; a seventh condition that the vote count is associated with a particular candidate of the election; and an eighth condition that the vote count is associated with a particular location. 4. The method of claim 1, wherein a character generator uses the content items of the identified subset to generate the video content. 5. The method of claim 1, wherein the generated video content presents the content items of the identified subset in a scrolling or rotating fashion. 6. (canceled) 7. The method of claim 1, further comprising: transmitting, by the computing system, the generated video content to an end-user device for presentation of the video content to an end-user. 8. 
A non-transitory computer-readable medium having stored thereon, program instructions that when executed by a processor, cause performance of a set of acts comprising: accessing, by a computing system, a first set of ordered content items and a second set of active/inactive status attributes, wherein each content item of the first set comprises respective data associated with an election and corresponds to a respective active/inactive status attribute of the second set; identifying, by the computing system, a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; using, by the computing system, the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; making, by the computing system, a determination that particular data associated with the election satisfies each condition in a condition set, wherein the particular data associated with the election corresponds to a particular content item of the first set, and wherein the particular content item corresponds to a particular active/inactive status attribute of the second set; based, at least in part, on the determination that the particular data associated with the election satisfies each condition in the condition set, modifying, by the computing system, the particular active/inactive status attribute; and after modifying the particular active/inactive status attribute, repeating, by the computing system, the identifying and using acts, thereby causing modification of the generated video content. 9. (canceled) 10. 
The computer-readable medium of claim 8, wherein the condition set comprises at least one condition from the group consisting of: a first condition that less than a threshold amount of votes cast in connection with a vote count have been tabulated; a second condition that a difference between an amount of votes tabulated for a first candidate of the election and an amount of votes tabulated for a second candidate of the election is at least a threshold amount; a third condition that a first projected outcome of the election and a second projected outcome of the election differ by at least a threshold extent, wherein the first projected outcome of the election is determined before the second projected outcome of the election is determined; a fourth condition that a candidate associated with the particular content item is no longer associated with a race of the election; a fifth condition that voter turnout associated with the vote count is less than a threshold amount; a sixth condition that the vote count is associated with a particular race of the election; a seventh condition that the vote count is associated with a particular candidate of the election; and an eighth condition that the vote count is associated with a particular location. 11. The computer-readable medium of claim 8, wherein a character generator uses the content items of the identified subset to generate the video content. 12. The computer-readable medium of claim 8, wherein the generated video content presents the content items of the identified subset in a scrolling or rotating fashion. 13. (canceled) 14. The computer-readable medium of claim 8, wherein the set of acts further comprises: transmitting the generated video content to an end-user device for presentation of the video content to an end-user. 15. 
A computing system configured to perform a set of acts comprising: accessing, by the computing system, a first set of ordered content items and a second set of active/inactive status attributes, wherein each content item of the first set comprises respective data associated with an election and corresponds to a respective active/inactive status attribute of the second set; identifying, by the computing system, a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; using, by the computing system, the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; making, by the computing system, a determination that particular data associated with the election satisfies each condition in a condition set, wherein the particular data associated with the election corresponds to a particular content item of the first set, and wherein the particular content item corresponds to a particular active/inactive status attribute of the second set; based, at least in part, on the determination that the particular data associated with the election satisfies each condition in the condition set, modifying, by the computing system, the particular active/inactive status attribute; and after modifying the particular active/inactive status attribute, repeating, by the computing system, the identifying and using acts, thereby causing modification of the generated video content. 16. (canceled) 17. 
The computing system of claim 15, wherein the condition set comprises at least one condition from the group consisting of: a first condition that less than a threshold amount of votes cast in connection with a vote count have been tabulated; a second condition that a difference between an amount of votes tabulated for a first candidate of the election and an amount of votes tabulated for a second candidate of the election is at least a threshold amount; a third condition that a first projected outcome of the election and a second projected outcome of the election differ by at least a threshold extent, wherein the first projected outcome of the election is determined before the second projected outcome of the election is determined; a fourth condition that a candidate associated with the particular content item is no longer associated with a race of the election; a fifth condition that voter turnout associated with the vote count is less than a threshold amount; a sixth condition that the vote count is associated with a particular race of the election; a seventh condition that the vote count is associated with a particular candidate of the election; and an eighth condition that the vote count is associated with a particular location. 18. The computing system of claim 15, wherein the system comprises a character generator, and wherein the character generator uses the content items of the identified subset to generate the video content. 19. The computing system of claim 15, wherein the generated video content presents the content items of the identified subset in a scrolling or rotating fashion. 20. The computing system of claim 15, the set of acts further comprising: transmitting the generated video content to an end-user device for presentation of the video content to an end-user. 21. 
The method of claim 1, wherein modifying the particular active/inactive status attribute comprises transitioning the particular active/inactive status attribute from an inactive status to an active status, and wherein repeating the identifying and using acts causes the generated video content to include the particular content item based on the particular active/inactive status attribute transitioning from an inactive status to an active status. 22. The method of claim 21, further comprising: making, by the computing system, a determination that second particular data associated with the election satisfies each condition in a second condition set, wherein the second particular data associated with the election corresponds to a second particular content item of the first set, and wherein the second particular content item corresponds to a second particular active/inactive status attribute of the second set; and based, at least in part, on the determination that the second particular data associated with the election satisfies each condition in the second condition set, modifying, by the computing system, the second particular active/inactive status attribute by transitioning the second particular active/inactive status attribute from an active status to an inactive status, wherein repeating the identifying and using acts causes the generated video content to exclude the second particular content item based on the second particular active/inactive status attribute transitioning from an active status to an inactive status. 23. The method of claim 1, wherein modifying the particular active/inactive status attribute comprises modifying the particular active/inactive status attribute while the video content is being generated. 24. The computer-readable medium of claim 12, wherein modifying the particular active/inactive status attribute comprises modifying the particular active/inactive status attribute while the video content is being generated. 25. 
The computing system of claim 19, wherein modifying the particular active/inactive status attribute comprises modifying the particular active/inactive status attribute while the video content is being generated.
In one aspect, an example method includes (i) accessing a first set of ordered content items and a second set of active/inactive status attributes; (ii) identifying a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; (iii) using the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; (iv) determining that a particular content item of the first set satisfies a condition, wherein the particular content item corresponds to a particular active/inactive status attribute of the second set; (v) based on the determination, modifying the particular active/inactive status attribute; and (vi) after modifying the particular active/inactive status attribute, repeating the identifying and using acts, thereby causing modification of the generated video content. 1. A method comprising: accessing, by a computing system, a first set of ordered content items and a second set of active/inactive status attributes, wherein each content item of the first set comprises respective data associated with an election and corresponds to a respective active/inactive status attribute of the second set; identifying, by the computing system, a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; using, by the computing system, the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; making, by the computing system, a determination that particular data associated with the election satisfies each condition in a condition set, wherein the particular data associated with the election corresponds to a particular content item of the first set, and wherein the particular content item corresponds to a particular active/inactive status attribute of the second 
set; based, at least in part, on the determination that the particular data associated with the election satisfies each condition in the condition set, modifying, by the computing system, the particular active/inactive status attribute; and after modifying the particular active/inactive status attribute, repeating, by the computing system, the identifying and using acts, thereby causing modification of the generated video content. 2. (canceled) 3. The method of claim 1, wherein the condition set comprises at least one condition from the group consisting of: a first condition that less than a threshold amount of votes cast in connection with a vote count have been tabulated; a second condition that a difference between an amount of votes tabulated for a first candidate of the election and an amount of votes tabulated for a second candidate of the election is at least a threshold amount; a third condition that a first projected outcome of the election and a second projected outcome of the election differ by at least a threshold extent, wherein the first projected outcome of the election is determined before the second projected outcome of the election is determined; a fourth condition that a candidate associated with the particular content item is no longer associated with a race of the election; a fifth condition that voter turnout associated with the vote count is less than a threshold amount; a sixth condition that the vote count is associated with a particular race of the election; a seventh condition that the vote count is associated with a particular candidate of the election; and an eighth condition that the vote count is associated with a particular location. 4. The method of claim 1, wherein a character generator uses the content items of the identified subset to generate the video content. 5. The method of claim 1, wherein the generated video content presents the content items of the identified subset in a scrolling or rotating fashion. 6. (canceled) 7. 
The method of claim 1, further comprising: transmitting, by the computing system, the generated video content to an end-user device for presentation of the video content to an end-user. 8. A non-transitory computer-readable medium having stored thereon program instructions that, when executed by a processor, cause performance of a set of acts comprising: accessing, by a computing system, a first set of ordered content items and a second set of active/inactive status attributes, wherein each content item of the first set comprises respective data associated with an election and corresponds to a respective active/inactive status attribute of the second set; identifying, by the computing system, a subset of the first set based on each content item of the subset corresponding to an active status attribute in the second set; using, by the computing system, the content items of the identified subset to generate video content that includes the content items of the identified subset, as ordered in the first set; making, by the computing system, a determination that particular data associated with the election satisfies each condition in a condition set, wherein the particular data associated with the election corresponds to a particular content item of the first set, and wherein the particular content item corresponds to a particular active/inactive status attribute of the second set; based, at least in part, on the determination that the particular data associated with the election satisfies each condition in the condition set, modifying, by the computing system, the particular active/inactive status attribute; and after modifying the particular active/inactive status attribute, repeating, by the computing system, the identifying and using acts, thereby causing modification of the generated video content. 9. (canceled) 10. 
The computer-readable medium of claim 8, wherein the condition set comprises at least one condition from the group consisting of: a first condition that less than a threshold amount of votes cast in connection with a vote count have been tabulated; a second condition that a difference between an amount of votes tabulated for a first candidate of the election and an amount of votes tabulated for a second candidate of the election is at least a threshold amount; a third condition that a first projected outcome of the election and a second projected outcome of the election differ by at least a threshold extent, wherein the first projected outcome of the election is determined before the second projected outcome of the election is determined; a fourth condition that a candidate associated with the particular content item is no longer associated with a race of the election; a fifth condition that voter turnout associated with the vote count is less than a threshold amount; a sixth condition that the vote count is associated with a particular race of the election; a seventh condition that the vote count is associated with a particular candidate of the election; and an eighth condition that the vote count is associated with a particular location. 11. The computer-readable medium of claim 8, wherein a character generator uses the content items of the identified subset to generate the video content. 12. The computer-readable medium of claim 8, wherein the generated video content presents the content items of the identified subset in a scrolling or rotating fashion. 13. (canceled) 14. The computer-readable medium of claim 8, wherein the set of acts further comprises: transmitting the generated video content to an end-user device for presentation of the video content to an end-user. 15. 
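The claimed loop (filter the active items, generate the ticker, flip a flag when a condition set is satisfied, regenerate) can be sketched in a few lines. The following Python is an illustrative model only: the sample data, the condition, and all function names are assumptions, and a real character generator would render video frames rather than join strings.

```python
# Minimal sketch of the claimed ticker-generation loop. Content items carry
# active/inactive flags; only active items, in their original order, feed the
# generated video content. All names and data are illustrative.

def active_subset(items, status):
    """Return the content items whose status attribute is active, in order."""
    return [item for item, active in zip(items, status) if active]

def generate_video_content(subset):
    """Stand-in for a character generator: join items into a scrolling ticker."""
    return " | ".join(subset)

def update_status(items, status, conditions, data):
    """Flip an item's flag when its election data satisfies every condition."""
    new_status = list(status)
    for i, item in enumerate(items):
        if all(cond(data[item]) for cond in conditions):
            new_status[i] = not new_status[i]
    return new_status

# Ordered items, their flags, and per-item election data (illustrative).
items = ["Race A: Smith 52%", "Race B: Jones 48%", "Race C: Lee 67%"]
status = [True, True, False]
data = {items[0]: {"tabulated": 0.9},
        items[1]: {"tabulated": 0.1},
        items[2]: {"tabulated": 0.8}}

# Example condition set: less than 30% of the votes cast have been tabulated
# (the "first condition" in the claims, with an assumed threshold).
conditions = [lambda d: d["tabulated"] < 0.3]

ticker = generate_video_content(active_subset(items, status))
status = update_status(items, status, conditions, data)  # Race B goes inactive
ticker = generate_video_content(active_subset(items, status))
print(ticker)  # → Race A: Smith 52%
```

Repeating the identifying and using acts after the flag flips is what causes the regenerated ticker to drop (or, for an inactive-to-active transition, pick up) an item, matching claims 21-23.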
2,400
7,993
7,993
15,109,048
2,463
According to various embodiments, a mobile radio communication device may be provided. The mobile radio communication device may include: a receiver configured to receive data; an access point identification circuit configured to determine whether the received data is received from or sent to an access point corresponding to the mobile radio communication device; and a response indication deferral setting circuit configured to set a response indication deferral parameter based on the determination of the access point identification circuit.
1. A mobile radio communication device comprising: a receiver configured to receive data; an access point identification circuit configured to determine whether the received data is received from or sent to an access point corresponding to the mobile radio communication device; and a response indication deferral setting circuit configured to set a response indication deferral parameter based on the determination of the access point identification circuit. 2. The mobile radio communication device of claim 1, wherein the response indication deferral setting circuit is configured to reset the response indication deferral parameter based on the determination of the access point identification circuit. 3. The mobile radio communication device of claim 1, wherein the received data comprises a physical protocol data unit. 4. The mobile radio communication device of claim 1, wherein the received data comprises at least one frame within a physical layer service data unit. 5. The mobile radio communication device of claim 1, wherein the response indication deferral setting circuit is configured to reset the response indication deferral parameter to zero if the access point identification circuit determines that the received data is received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore the mobile radio communication device obtains both RXVECTOR parameter RESPONSE_INDICATION and the Duration field from the received data. 6. The mobile radio communication device of claim 1, wherein the response indication deferral setting circuit is configured to replace the current response indication deferral parameter with a new response indication deferral value if the access point identification circuit determines that the received data is received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore there is no duration field in the received data. 7. 
The mobile radio communication device of claim 1, wherein the access point identification circuit is configured to determine whether the received data is received from or sent to an access point corresponding to the mobile radio communication device based on whether a received PPDU is an NDP MAC frame. 8. The mobile radio communication device of claim 1, wherein the response indication deferral setting circuit is configured to update the response indication deferral parameter if the access point identification circuit determines that the received data is not received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore the new response indication deferral parameter for the RXVECTOR parameter RESPONSE_INDICATION is larger than the current response indication deferral parameter. 9. The mobile radio communication device of claim 8, wherein the response indication deferral setting circuit is configured to reset the response indication deferral parameter if furthermore there is a valid Duration field for channel access reservation in the MPDU. 10. The mobile radio communication device of claim 1, wherein the response indication deferral setting circuit is configured to leave the response indication deferral parameter un-updated if the access point identification circuit determines that the received data is not received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore the new response indication deferral parameter for the RXVECTOR parameter RESPONSE_INDICATION is smaller than or equal to the current response indication deferral parameter. 11. 
The mobile radio communication device of claim 10, wherein the response indication deferral setting circuit is configured to reset the response indication deferral parameter if furthermore there is a valid Duration field for channel access reservation in the MPDU that has a value larger than the response indication deferral parameter. 12. The mobile radio communication device of claim 1, wherein the response indication deferral parameter comprises a response indication deferral count. 13. The mobile radio communication device of claim 1, wherein the mobile radio communication device comprises a station according to IEEE 802.11ah. 14. A method for controlling a mobile radio communication device, the method comprising: receiving data; determining whether the received data is received from or sent to an access point corresponding to the mobile radio communication device; and setting a response indication deferral parameter based on the determining. 15. The method of claim 14, further comprising: resetting the response indication deferral parameter based on the determining whether the received data is received from or sent to an access point corresponding to the mobile radio communication device. 16. The method of claim 14, wherein the received data comprises a physical protocol data unit. 17. The method of claim 14, wherein the received data comprises at least one frame within a physical layer service data unit. 18. The method of claim 14, further comprising: resetting the response indication deferral parameter to zero if it is determined that the received data is received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore the mobile radio communication device obtains both RXVECTOR parameter RESPONSE_INDICATION and the Duration field from the received data. 19. 
The method of claim 14, further comprising: replacing the current response indication deferral parameter with a new response indication deferral value if it is determined that the received data is received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore there is no duration field in the received data. 20. The method of claim 14, further comprising: updating the response indication deferral parameter if it is determined that the received data is not received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore the new response indication deferral parameter for the RXVECTOR parameter RESPONSE_INDICATION is larger than the current response indication deferral parameter. 21. The method of claim 14, further comprising: leaving the response indication deferral parameter un-updated if it is determined that the received data is not received from or sent to the access point corresponding to the mobile radio communication device, and if furthermore the new response indication deferral parameter for the RXVECTOR parameter RESPONSE_INDICATION is smaller than or equal to the current response indication deferral parameter. 22. The method of claim 14, wherein the response indication deferral parameter comprises a response indication deferral count. 23. The method of claim 14, wherein the mobile radio communication device comprises a station according to IEEE 802.11ah.
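The deferral rules recited in the claims amount to a small update function over the station's response indication deferral (RID) value. The sketch below is an interpretation, not the IEEE 802.11ah text: the function signature and field names are assumptions, and the choice to adopt the Duration value when it exceeds the current RID (claims 9 and 11) is likewise an assumed reading.

```python
# Illustrative model of the response-indication-deferral (RID) update rules an
# 802.11ah station applies on receiving a PPDU. All parameter names are
# assumptions; the standard's own procedure is more detailed.

def update_rid(current_rid, new_rid, from_own_ap, has_duration, duration=None):
    """Return the station's updated RID after receiving data."""
    if from_own_ap:
        if has_duration:
            # Own AP, and both RXVECTOR RESPONSE_INDICATION and the Duration
            # field were obtained: the Duration governs, so reset the RID to 0.
            return 0
        # Own AP but no Duration field: adopt the new RID value outright.
        return new_rid
    # Third-party traffic: only lengthen the deferral, never shorten it.
    if new_rid > current_rid:
        current_rid = new_rid
    if has_duration and duration is not None and duration > current_rid:
        # Assumed reading of claims 9/11: a valid Duration field larger than
        # the RID overrides it.
        current_rid = duration
    return current_rid

print(update_rid(5, 3, from_own_ap=True, has_duration=True))    # → 0
print(update_rid(5, 3, from_own_ap=True, has_duration=False))   # → 3
print(update_rid(5, 8, from_own_ap=False, has_duration=False))  # → 8
print(update_rid(5, 3, from_own_ap=False, has_duration=False))  # → 5
```

The asymmetry is the point of the claims: frames exchanged with the station's own access point may shorten or clear the deferral, while overheard third-party frames may only extend it.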
2,400
7,994
7,994
14,762,236
2,441
A mobile system (S1) comprises i) a first operating system (OS1), capable of exchanging data with a CPE (E1), ii) a second operating system (OS2) with a tunnel layer, coupled to devices (D1-D3) having respective IP prefixes and producing data to be accessed from a central application, via a client gateway (CG), iii) a first means (M1) for obtaining a first IP address for the second operating system (OS2) from the CPE (E1) and through the first operating system (OS1), and a second means (M2) for triggering transmission of this first IP address and the device prefixes by the second operating system (OS2) to the client gateway (CG), through the first operating system (OS1) and the CPE (E1), for requesting the establishment of a tunnel between the second operating system (OS2) and the client gateway (CG) to allow the central application to access data generated by the devices.
1. Method for controlling access from at least one central application, via a client gateway connected to a mobile communication network, to data originating from at least two devices having respective IP prefixes and coupled to a mobile system comprising a first operating system being capable of exchanging data with a customer premises equipment coupled with the mobile system, and a second operating system comprising a tunnel layer and allowing coupling of the at least two devices to the mobile system, said method comprising obtaining by said second operating system a first IP address from said customer premises equipment, through said first operating system, and transmitting by said second operating system said first IP address and said IP prefixes to said client gateway, through said first operating system and said customer premises equipment, to request the establishment of a tunnel between said second operating system and said client gateway, thereby allowing said central application to access, via said client gateway, data generated by said devices. 2. Method according to claim 1, wherein said first IP address is that of said customer premises equipment. 3. Method according to claim 2, wherein a second IP address of said client gateway is either statically configured into said second operating system, or computed from an address of a core network gateway of said mobile communication network, which is dynamically learnt by said first operating system or learnt from a DHCP-like server to which said client gateway is coupled. 4. Method according to claim 1, wherein said first and second operating systems are instantiated into said mobile system respectively as first and second virtual machines that are connected via virtual network interfaces making them a private network that is not seen out of said mobile system. 5. 
Method according to claim 4, wherein said first virtual machine shares its radio connection with said second virtual machine over a virtual network interface it comprises. 6. Method according to claim 4, wherein a second IP address of said client gateway is either statically configured into said second virtual machine, or computed from an address of a core network gateway of said mobile communication network, which is dynamically learnt by said first virtual machine or learnt from a DHCP-like server to which said client gateway is coupled. 7. Method according to claim 1, wherein said first operating system is Windows® and said second operating system is Linux. 8. Method according to claim 1, wherein said established tunnel has a type chosen from a group comprising at least a GRE type and an IPSec type. 9. Mobile system comprising a first operating system capable of exchanging data with a customer premises equipment coupled with the mobile system, and a second operating system comprising a tunnel layer and allowing coupling of at least two devices to the mobile system, the at least two devices having respective IP prefixes and producing data to be accessed from at least one central application, via a client gateway connected to a mobile communication network, wherein the mobile system further comprises a first means arranged for obtaining a first IP address for said second operating system from said customer premises equipment and through said first operating system, and a second means arranged for triggering transmission of said first IP address and said IP prefixes by said second operating system to said client gateway, through said first operating system and said customer premises equipment, for requesting the establishment of a tunnel between said second operating system and said client gateway, thereby allowing said central application to access, via said client gateway, data generated by said devices. 10. 
Mobile system according to claim 9, wherein it comprises a first equipment comprising said first operating system and to which said customer premises equipment is connected, and a second equipment coupled to said first equipment, comprising said second operating system and to which said devices are coupled. 11. Mobile system according to claim 10, wherein said second equipment comprises said first and second means. 12. Mobile system according to claim 9, wherein said first and second operating systems are instantiated respectively as first and second virtual machines that are connected via a private network which is not routed over a virtual switch of said mobile system. 13. Mobile system according to claim 9, wherein said first operating system is Windows® and said second operating system is Linux.
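The setup flow recited in claims 1 and 9 (obtain an address from the CPE through the first OS, then ask the client gateway for a tunnel carrying the device prefixes) can be sketched as follows. This is an illustrative sketch only; the class and function names (`CPE`, `ClientGateway`, `establish_access`) and the addresses are assumptions, not from the patent.

```python
# Hypothetical sketch of the tunnel-setup flow of claims 1 and 9.
# All names and addresses here are illustrative, not from the patent.

class CPE:
    """Customer premises equipment that hands out the first IP address."""
    def assign_address(self) -> str:
        return "10.0.0.42"  # per claim 2, this may be the CPE's own address

class ClientGateway:
    """Client gateway that accepts tunnel-establishment requests."""
    def __init__(self):
        self.tunnels = []

    def request_tunnel(self, first_ip: str, prefixes: list, kind: str = "GRE") -> bool:
        # claim 8: tunnel type chosen from a group comprising GRE and IPSec
        assert kind in ("GRE", "IPSec")
        self.tunnels.append({"peer": first_ip, "prefixes": prefixes, "type": kind})
        return True

def establish_access(cpe: CPE, gateway: ClientGateway, device_prefixes: list) -> bool:
    # Step 1 (first means, M1): the second OS obtains a first IP address
    # from the CPE, relayed through the first OS.
    first_ip = cpe.assign_address()
    # Step 2 (second means, M2): the second OS transmits the address and the
    # device IP prefixes to the client gateway to request the tunnel.
    return gateway.request_tunnel(first_ip, device_prefixes)

gw = ClientGateway()
ok = establish_access(CPE(), gw, ["192.168.1.0/24", "192.168.2.0/24"])
```

Once such a request succeeds, the central application would reach the device prefixes through the established tunnel.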
TechCenter: 2,400
Record index: 7,995 (Unnamed: 0 / level_0)
ApplicationNumber: 15,849,947
ArtUnit: 2,422
The present technology relates to an image processing apparatus and a method capable of performing calibration of a correction amount in brightness correction more easily. The image processing apparatus according to the present technology performs a blending calculation for correcting the brightness of an image in accordance with the distance from a projection unit, configured to project the image, to the projection surface onto which the image is projected, and in accordance with a characteristic of the projection unit. The present technology can be applied, for example, to a projector, a camera, or an electronic apparatus including the functions of both a projector and a camera, to a computer that controls these, and to a system in which apparatuses having a projector and a camera operate in cooperation.
1. An image processing apparatus, comprising: at least one processor configured to: correct brightness of each image of a plurality of images projected by a plurality of projection units based on mixing ratio information related to a mixing ratio of each image of the plurality of images in a region in which a first image of the plurality of images partially overlaps with a second image of the plurality of images. 2. The image processing apparatus according to claim 1, wherein the mixing ratio information is map information representing the mixing ratio for each pixel of each image of the plurality of images. 3. The image processing apparatus according to claim 1, wherein the at least one processor is further configured to: generate correction information to correct the mixing ratio information based on a characteristic of each projection unit of the plurality of projection units; and correct the mixing ratio information based on the generated correction information. 4. The image processing apparatus according to claim 1, wherein the at least one processor is further configured to correct luminance information related to the brightness of each image of the plurality of images for each projection unit of the plurality of projection units, based on a third image of the plurality of images for which the brightness is corrected. 5. The image processing apparatus according to claim 4, wherein the luminance information is related to the brightness of each image of the plurality of images determined based on a distance calculated from each projection unit of the plurality of projection units to a projection surface. 6. The image processing apparatus according to claim 5, wherein the luminance information is map information representing brightness of each image of the plurality of images for each pixel of a plurality of pixels. 7. 
The image processing apparatus according to claim 4, wherein the at least one processor is further configured to correct the brightness of each image of the plurality of images based on the corrected luminance information. 8. The image processing apparatus according to claim 7, wherein the at least one processor is further configured to correct the brightness of each image of the plurality of images in a uniform perceptual color space. 9. The image processing apparatus according to claim 7, wherein the at least one processor is further configured to control the projection of each image of the plurality of images for which the brightness is corrected. 10. A method, comprising: correcting brightness of each image of a plurality of images projected by a plurality of projection units based on mixing ratio information related to a mixing ratio of each image of the plurality of images in a region in which a first image of the plurality of images partially overlaps with a second image of the plurality of images. 11. The method according to claim 10, wherein the mixing ratio information is map information representing the mixing ratio for each pixel of each image of the plurality of images. 12. The method according to claim 10, further comprising, generating correction information to correct the mixing ratio information based on a characteristic of each projection unit of the plurality of projection units; and correcting the mixing ratio information based on the generated correction information. 13. The method according to claim 10, further comprising, correcting luminance information related to the brightness of each image of the plurality of images for each projection unit of the plurality of projection units, based on a third image of the plurality of images for which the brightness is corrected. 14. 
The method according to claim 13, wherein the luminance information is related to the brightness of each image of the plurality of images determined based on a distance calculated from each projection unit of the plurality of projection units to a projection surface. 15. The method according to claim 14, wherein the luminance information is map information representing brightness of each image of the plurality of images for each pixel of a plurality of pixels. 16. The method according to claim 13, further comprising, correcting the brightness of each image of the plurality of images based on the corrected luminance information. 17. The method according to claim 16, further comprising, correcting the brightness of each image of the plurality of images in a uniform perceptual color space. 18. The method according to claim 16, further comprising, projecting each image of the plurality of images for which the brightness is corrected.
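The per-pixel mixing-ratio correction of claims 1, 2, 10, and 11 can be illustrated with a minimal sketch: in the region where two projections overlap, each output pixel is a ratio-weighted mix of the two images, with the two ratios summing to one. The linear cross-fade and the function names below are assumptions for illustration, not the patent's actual algorithm.

```python
# Illustrative sketch (not the patent's algorithm): per-pixel mixing-ratio
# map for two overlapping projections, reduced to a single 1-D pixel row.

def mixing_map(width, overlap_start, overlap_end):
    """Mixing ratio of the first projection unit per pixel column:
    1.0 before the overlap, a linear ramp down to 0.0 across it."""
    span = overlap_end - overlap_start
    ratios = []
    for x in range(width):
        if x < overlap_start:
            ratios.append(1.0)
        elif x >= overlap_end:
            ratios.append(0.0)
        else:
            ratios.append(1.0 - (x - overlap_start) / span)
    return ratios

def blend_row(row_a, row_b, ratios):
    """Output brightness per pixel: ratio-weighted mix of the two images;
    the two units' ratios sum to 1 at every pixel."""
    return [a * r + b * (1.0 - r) for a, b, r in zip(row_a, row_b, ratios)]

ratios = mixing_map(width=8, overlap_start=2, overlap_end=6)
out = blend_row([100.0] * 8, [100.0] * 8, ratios)
# With equal input brightness, the blended row stays uniform across the seam.
```

A full implementation would carry such a map per pixel of a 2-D image (claim 2) and correct it per projection unit's characteristic (claim 3).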
TechCenter: 2,400
Record index: 7,996 (Unnamed: 0 / level_0)
ApplicationNumber: 14,683,873
ArtUnit: 2,498
One embodiment provides a method including: receiving, on a display device, a request to display data; detecting, using a processor, a factor indicating a need for privacy; activating, based on the detecting, a privacy filter of the display device; and displaying, on the display device, the data. Other aspects are described and claimed.
1. A method, comprising: receiving, on a display device, a request to display data; detecting, using a processor, a factor indicating a need for privacy; activating, based on the detecting, a privacy filter of the display device; and displaying, on the display device, the data. 2. The method of claim 1, wherein the factor comprises the display device location. 3. The method of claim 2, wherein the detecting comprises determining, based on the display device location, if the device is in a predetermined area. 4. The method of claim 2, wherein the detecting comprises determining that the device is moving above a threshold speed. 5. The method of claim 1, wherein the activating comprises modifying a hardware aspect of the display device. 6. The method of claim 1, wherein the factor is selected from the group consisting of metadata corresponding to the data to be displayed and the data to be displayed. 7. The method of claim 6, wherein the metadata comprises security data. 8. The method of claim 7, wherein the security data comprises location data. 9. The method of claim 7, wherein the security data is determined during creation of the data to be displayed; and wherein the security data is modifiable by a user. 10. The method of claim 7, wherein the security data is determined during creation of the data to be displayed; and wherein the security data is not modifiable by a user. 11. An information handling device, comprising: a processor; a display device; a memory device that stores instructions executable by the processor to: receive a request to display data; detect a factor indicating a need for privacy; activate, based on the detecting, a privacy filter of the display device; and display, on the display device, the data. 12. The information handling device of claim 11, wherein the factor comprises the display device location. 13. 
The information handling device of claim 12, wherein the detecting comprises determining, based on the display device location, if the device is in a predetermined area. 14. The information handling device of claim 12, wherein the detecting comprises determining, based on the device location, if the device is in a predetermined area. 15. The information handling device of claim 12, wherein the detecting comprises determining that the device is moving above a threshold speed. 16. The information handling device of claim 11, wherein the activating comprises modifying a hardware aspect of the display device. 17. The information handling device of claim 11, wherein the factor is selected from the group consisting of metadata corresponding to the data to be displayed and the data to be displayed. 18. The information handling device of claim 17, wherein the metadata comprises security data; and wherein the security data comprises location data. 19. The information handling device of claim 17, wherein the metadata comprises security data determined during creation of the data to be displayed; and wherein the security data has a characteristic selected from the group consisting of: being modifiable by a user and not being modifiable by a user. 20. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that receives, at an input device, a request to display data; code that detects a factor indicating a need for privacy; code that activates, based on the detecting, a privacy filter of the display device; and code that displays, on the display device, the data.
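The detect-then-activate logic of claims 1 through 4 (a factor such as location in a predetermined area, or movement above a threshold speed, triggers the privacy filter before the data is shown) can be sketched as follows. The area names, the threshold value, and the function names are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of claims 1-4: activate a privacy filter when a
# detected factor indicates a need for privacy, then display the data.

PREDETERMINED_AREAS = {"airport", "train", "cafe"}  # illustrative areas (claim 3)
SPEED_THRESHOLD_KMH = 20.0                          # illustrative threshold (claim 4)

def needs_privacy(location, speed_kmh):
    """Detect a factor indicating a need for privacy."""
    in_restricted_area = location in PREDETERMINED_AREAS  # device in a predetermined area
    moving_fast = speed_kmh > SPEED_THRESHOLD_KMH         # device moving above threshold speed
    return in_restricted_area or moving_fast

def display(data, location, speed_kmh):
    """Claim 1: detect the factor, activate the filter if needed, display."""
    filter_on = needs_privacy(location, speed_kmh)
    return {"privacy_filter": filter_on, "shown": data}

state = display("quarterly report", location="cafe", speed_kmh=0.0)
```

On real hardware, activating the filter would modify a hardware aspect of the display (claim 5, e.g. a switchable viewing-angle layer) rather than set a flag.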
TechCenter: 2,400
Record index: 7,997 (Unnamed: 0 / level_0)
ApplicationNumber: 15,221,152
ArtUnit: 2,422
Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
1. A machine vision system comprising: one or more interfaces configured to provide communication with a motion rendering device, a first image sensor, and a second image sensor, wherein: the motion rendering device is configured to provide at least one of a translational movement and an in-plane rotational movement, and is associated with a first coordinate system; the motion rendering device is configured to directly or indirectly carry a first calibration plate and a second calibration plate, and the first calibration plate and the second calibration plate comprise a first plurality of features with known physical positions relative to the first calibration plate and a second plurality of features with known physical positions relative to the second calibration plate, respectively; and the first image sensor and the second image sensor are configured to capture an image of the first calibration plate and the second calibration plate, respectively, and the first image sensor and the second image sensor are associated with a second coordinate system and a third coordinate system, respectively; and a processor configured to run a computer program stored in memory that is configured to: send, via the one or more interfaces to the motion rendering device, first data configured to cause the motion rendering device to move to a requested first pose; receive, via the one or more interfaces from the motion rendering device, a reported first pose; receive, via the one or more interfaces from the first image sensor, a first image of the first calibration plate for the reported first pose; receive, via the one or more interfaces from the second image sensor, a second image of the second calibration plate for the reported first pose; determine a first plurality of correspondences between the first plurality of features on the first calibration plate and first positions of the first plurality of features in the first image; determine a second plurality of correspondences 
between the second plurality of features on the second calibration plate and second positions of the second plurality of features in the second image; determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor; and determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor. 2. The machine vision system of claim 1, wherein the computer program is operable to cause the processor to determine a motion correction transform that compensates for a systematic motion error associated with the motion rendering device. 3. The machine vision system of claim 1, wherein the computer program is operable to cause the processor to re-calibrate the machine vision system after a first period of time, comprising re-determining: the first plurality of correspondences; the second plurality of correspondences; the first transformation; and the second transformation. 4. The machine vision system of claim 3, wherein re-calibrating the machine vision system comprises adjusting one or more pre-calibrated parameters. 5. 
A machine vision system comprising: one or more interfaces configured to provide communication with a motion rendering device, a first image sensor, and a second image sensor, wherein: the motion rendering device is configured to provide at least one of a translational movement and an in-plane rotational movement, and is associated with a first coordinate system; the motion rendering device is further configured to directly or indirectly carry the first image sensor and the second image sensor; the first image sensor and the second image sensor are configured to capture an image of the first calibration plate and the second calibration plate, respectively; the first image sensor and the second image sensor are associated with a second coordinate system and a third coordinate system, respectively; and the first calibration plate and the second calibration plate comprise a first plurality of features with known physical positions relative to the first calibration plate and a second plurality of features with known physical positions relative to the second calibration plate, respectively; and a processor configured to run a computer program stored in memory configured to: send, via the one or more interfaces to the motion rendering device, first data configured to cause the motion rendering device to move to a requested first pose; receive, via the one or more interfaces from the motion rendering device, a reported first pose; receive, via the one or more interfaces from the first image sensor, a first image of the first calibration plate for the reported first pose; receive, via the one or more interfaces from the second image sensor, a second image of the second calibration plate for the reported first pose; determine a first plurality of correspondences between the first plurality of features on the first calibration plate and first positions of the first plurality of features in the first image; determine a second plurality of correspondences between the second 
plurality of features on the second calibration plate and second positions of the second plurality of features in the second image; determine a first transform between the first coordinate system and the second coordinate system based, at least in part, on the first plurality of correspondences and the reported first pose; and determine a second transform between the first coordinate system and the third coordinate system based, at least in part, on the second plurality of correspondences and the reported first pose. 6. The machine vision system of claim 5, wherein the computer program is operable to cause the processor to determine a motion correction transform that compensates for a systematic motion error associated with the motion rendering device. 7. The machine vision system of claim 5, wherein the computer program is operable to cause the processor to re-calibrate the machine vision system after a first period of time, comprising re-determining: the first plurality of correspondences; the second plurality of correspondences; the first transform; and the second transform. 8. The machine vision system of claim 7, wherein re-calibrating the machine vision system comprises adjusting one or more pre-calibrated parameters. 9. 
A machine vision system comprising: one or more interfaces configured to provide communication with a motion rendering device, a first image sensor, and a second image sensor, wherein: the motion rendering device is configured to provide at least one of a translational movement and an in-plane rotational movement, and is associated with a first coordinate system; the motion rendering device is further configured to directly or indirectly carry a target object comprising a plurality of features with unknown physical positions; and the first image sensor and the second image sensor are configured to capture an image of a first subset and a second subset of the plurality of features in the target object, respectively, and the first image sensor and the second image sensor are associated with a second coordinate system and a third coordinate system, respectively; and a processor configured to run a computer program stored in memory configured to: send, via the one or more interfaces to the motion rendering device, first data configured to cause the motion rendering device to move to a requested first pose; receive, via the one or more interfaces from the motion rendering device, a reported first pose; receive, via the one or more interfaces from the first image sensor, a first image of the first subset of the plurality of features for the reported first pose; receive, via the one or more interfaces from the second image sensor, a second image of the second subset of the plurality of features for the reported first pose; determine the first subset of features on the target object in the first image; determine the second subset of features on the target object in the second image; determine a first transform between the first coordinate system and the second coordinate system based, at least in part, on the first subset of the plurality of features and the reported first pose; and determine a second transform between the first coordinate system and the third coordinate 
system based, at least in part, on the second subset of the plurality of features and the reported first pose. 10. The machine vision system of claim 9, wherein the computer program is operable to cause the processor to determine a motion correction transform that compensates for a systematic motion error associated with the motion rendering device. 11. The machine vision system of claim 9, wherein the computer program is operable to cause the processor to re-calibrate the machine vision system after a first period of time, comprising re-determining: the first subset of the plurality of features; the second subset of the plurality of features; the first transform; and the second transform. 12. The machine vision system of claim 11, wherein re-calibrating the machine vision system comprises adjusting one or more pre-calibrated parameters. 13. A machine vision system comprising: one or more interfaces configured to provide communication with a motion rendering device, a first image sensor, and a second image sensor, wherein: the motion rendering device is configured to provide at least one of a translational movement and an in-plane rotational movement, and is associated with a first coordinate system; the motion rendering device is also configured to directly or indirectly carry the first image sensor and the second image sensor; and the first image sensor and the second image sensor are configured to capture an image of a target object comprising a plurality of features with unknown physical positions, and the first image sensor and the second image sensor are associated with a second coordinate system and a third coordinate system, respectively; and a processor configured to run a computer program stored in memory configured to: send, via the one or more interfaces to the motion rendering device, first data configured to cause the motion rendering device to move to a requested first pose; receive, via the one or more interfaces from the motion rendering 
device, a reported first pose; receive, via the one or more interfaces from the first image sensor, a first image of a first subset of the plurality of features for the reported first pose; receive, via the one or more interfaces from the second image sensor, a second image of a second subset of the plurality of features for the reported first pose; determine the first subset of features on the target object in the first image; determine the second subset of features on the target object in the second image; determine a first transform between the first coordinate system and the second coordinate system based, at least in part, on the first subset of the plurality of features and the reported first pose; and determine a second transform between the first coordinate system and the third coordinate system based, at least in part, on the second subset of the plurality of features and the reported first pose. 14. The machine vision system of claim 13, wherein the computer program is operable to cause the processor to determine a motion correction transform that compensates for a systematic motion error associated with the motion rendering device. 15. The machine vision system of claim 13, wherein the computer program is operable to cause the processor to re-calibrate the machine vision system after a first period of time, comprising re-determining: the first subset of the plurality of features; the second subset of the plurality of features; the first transform; and the second transform. 16. The machine vision system of claim 15, wherein re-calibrating the machine vision system comprises adjusting one or more pre-calibrated parameters.
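The per-pose estimation step in claims 5 and 13 — recovering a transform between coordinate systems from matched feature positions — reduces to a least-squares rigid fit. Below is a minimal sketch assuming 2D points and a pure rotation-plus-translation model; the function name and the use of NumPy and a Kabsch-style solution are illustrative assumptions, not details taken from the claims:

```python
import numpy as np

def estimate_rigid_transform(plate_pts, image_pts):
    """Least-squares rigid transform (2D Kabsch) mapping plate_pts onto
    image_pts, i.e. image ~= R @ plate + t for each corresponding pair."""
    P = np.asarray(plate_pts, dtype=float)
    Q = np.asarray(image_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 2x2 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T      # proper rotation, det(R) = +1
    t = cq - R @ cp
    return R, t
```

Given the correspondences determined from one captured image, the same fit would be run once per sensor to obtain the two transforms the claims recite.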
Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
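Once the first and second transformations are determined, a point expressed in the motion rendering device's coordinate system can be mapped into either image sensor's coordinate system, and a camera-to-camera mapping follows by composition. A hedged sketch using 3x3 homogeneous matrices; the packing scheme and the names `T1`/`T2` are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def make_transform(R, t):
    """Pack a 2D rotation R (2x2) and translation t (length 2) into a
    3x3 homogeneous matrix, so transforms compose by matrix product."""
    T = np.eye(3)
    T[:2, :2] = R
    T[:2, 2] = t
    return T

def map_point(T, p):
    """Apply a homogeneous transform to a 2D point."""
    x, y = p
    ph = T @ np.array([x, y, 1.0])
    return ph[:2]

# If T1 maps device coordinates to camera-1 coordinates and T2 maps device
# coordinates to camera-2 coordinates, then the camera-1-to-camera-2 mapping
# is their composition (hypothetical names, per the assumption above):
#     T_cam2_from_cam1 = T2 @ np.linalg.inv(T1)
```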
2,400
7,998
7,998
14,279,435
2,483
A method of controlling a plurality of cameras in a communication network is provided. The method includes: controlling a camera to receive and analyze information about an idle time of each of at least one other camera; according to the analyzing, controlling the camera to transmit at least one task and/or information about the at least one task to the at least one other camera, wherein the idle time of each of the at least one other camera is set to a time remaining before each of the at least one other camera is configured to execute a task among one or more tasks or a sum of time durations at which no tasks are allocated to each of the at least one other camera.
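The idle-time bookkeeping this method relies on can be made concrete. In the fixed-interval variant described in the claims, a camera's idle time is the reporting interval minus the summed durations of all tasks scheduled to execute within it. A minimal sketch; clamping at zero for a fully booked camera is my assumption:

```python
def idle_time(interval_s, scheduled_task_durations_s):
    """Idle time within a fixed reporting interval: the interval length
    minus the total duration of tasks scheduled to run within it,
    clamped at zero so a fully booked camera reports no idle time."""
    busy = sum(scheduled_task_durations_s)
    return max(interval_s - busy, 0.0)
```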
1. A method of controlling a camera which is connected to at least one other camera and at least one client terminal through a communication network, the method comprising: controlling the camera to receive and analyze information about an idle time of each of the at least one other camera; according to the analyzing, controlling the camera to transmit at least one task of the camera and/or information about the at least one task of the camera to the at least one other camera, wherein the idle time of each of the at least one other camera is set to a time remaining before each of the at least one other camera is configured to execute a task among one or more tasks of each of the at least one other camera or a sum of time durations at which no tasks are allocated to each of the at least one other camera. 2. The method of claim 1, wherein the camera is controlled to receive the information about the idle time of each of the at least one other camera at a predetermined time interval. 3. The method of claim 1, further comprising: controlling the camera to transmit information about an idle time of the camera to the at least one other camera; and if at least one task of the at least one other camera and/or information about the at least one task of the at least one other camera are received, controlling the camera to execute the received at least one task at the camera, wherein the idle time of the camera is set to a time remaining before the camera is configured to execute a task among one or more tasks of the camera or a sum of time durations at which no tasks are allocated to the camera. 4. The method of claim 3, wherein the camera is controlled to receive the information about the idle time of each of the at least one other camera at an interval of a predetermined time and/or transmit the information about the idle time of the camera at the interval of the predetermined time. 5. 
The method of claim 4, wherein the idle time of each of the at least one other camera is calculated by adding time durations for executing all tasks set to be executed within the predetermined time and subtracting a result of the addition from the predetermined time. 6. The method of claim 3, further comprising: controlling the camera to transmit a result of the execution of the received at least one task to the at least one other camera which transmitted the at least one task. 7. The method of claim 1, further comprising: controlling the at least one other camera to execute the at least one task of the camera transmitted to the at least one other camera; and controlling the at least one other camera to transmit a result of the execution of the at least one task of the camera to the camera. 8. The method of claim 1, further comprising controlling the camera to determine if an idle time of the camera exists, wherein if it is determined that the idle time of the camera does not exist, the camera is controlled to transmit the at least one task of the camera to the at least one other camera. 9. 
The method of claim 8, wherein if it is determined that the idle time of the camera does not exist, the camera is further controlled to: obtain an estimate of an execution time required for executing the at least one task of the camera to be transmitted; transmit information on the estimate of the execution time and an execution request message to a camera having a longest idle time among the at least one other camera; if an execution-possible message from the camera having the longest idle time is received, transmit the at least one task of the camera to be transmitted to the camera having the longest idle time; if an execution-impossible message from the camera having the longest idle time is received, transmit the information on the estimate of the execution time and the execution request message to a camera having a second longest idle time; and if an execution-possible message from the camera having the second longest idle time is received, transmit the at least one task of the camera to be transmitted to the camera having the second longest idle time. 10. The method of claim 3, further comprising: if information on an estimate of an execution time required for executing the at least one task of the at least one other camera and an execution request message for executing the at least one task of the at least one other camera are received, controlling the camera to transmit an execution-possible message or an execution-impossible message to the at least one other camera according to a result of comparing an idle time of the camera and the estimate of the execution time, before the at least one task of the at least one other camera and/or the information about the at least one task of the at least one other camera are received at the camera; and transmitting a result of the execution of the received at least one task to the at least one other camera. 11. 
The method of claim 10, wherein the camera is controlled to transmit the execution-possible message or the execution-impossible message to the at least one other camera based on priorities of the at least one other camera. 12. The method of claim 11, wherein a higher priority is set to a camera having a shorter idle time among the at least one other camera. 13. The method of claim 11, wherein the priority of the at least one other camera is set by a user. 14. The method of claim 1, wherein the at least one task of the camera comprises transmitting a result of processing image data captured by the camera to the at least one client terminal, wherein the information about the at least one task of the camera comprises an address of the at least one client terminal, wherein the at least one other camera is controlled to transmit the result of processing the image data captured by the camera to the address of the at least one client terminal. 15. The method of claim 3, wherein the at least one task of the at least one other camera comprises transmitting a result of processing image data captured by the at least one other camera to the at least one client terminal, wherein the information about the at least one task of the at least one other camera comprises an address of the at least one client terminal, wherein the camera is controlled to transmit the result of processing the image data captured by the at least one other camera to the address of the at least one client terminal. 16. 
A camera configured to be connected to at least one other camera and at least one client terminal through a communication network, the camera comprising: an optical system configured to capture image data; a communication port configured to receive information about an idle time of each of the at least one other camera; and a controller configured to analyze the information about the idle time of each of the at least one other camera, and transmit at least one task of the camera and/or information about the at least one task of the camera to the at least one other camera through the communication port, wherein the idle time of each of the at least one other camera is set to a time remaining before each of the at least one other camera is configured to execute a task among one or more tasks of each of the at least one other camera or a sum of time durations at which no tasks are allocated to each of the at least one other camera. 17. The camera of claim 16, wherein the camera is further configured to receive the information about the idle time of each of the at least one other camera at a predetermined time interval. 18. The camera of claim 16, wherein the controller is further configured to transmit information about an idle time of the camera to the at least one other camera, and, if at least one task of the at least one other camera and/or information about the at least one task of the at least one other camera are received, execute the received at least one task of the at least one other camera at the camera. 19. The camera of claim 18, wherein the controller is further configured to receive the information about the idle time of each of the at least one other camera at an interval of a predetermined time and/or transmit the information about the idle time of the camera at the interval of the predetermined time. 20. 
A surveillance system comprising: a plurality of cameras connected to one another; and a plurality of client terminals connected to the plurality of cameras through a communication network, wherein each of the plurality of cameras is configured to transmit an idle time of the camera to the other cameras, wherein each of the plurality of cameras is configured to receive and analyze information about an idle time of each of the other cameras, wherein, according to the analyzing, each of the plurality of cameras is configured to transmit at least one task of the camera to at least one of the other cameras, and wherein each of the plurality of cameras is configured to, if at least one task of at least one of the other cameras is received, execute the received at least one task of the at least one of the other cameras.
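The offloading negotiation of claims 8-10 — estimate the task's execution time, then query peers from longest to shortest idle time until one answers execution-possible — can be sketched as a local selection if a peer's acceptance is modeled as its idle time covering the estimate. That acceptance rule and all names are illustrative assumptions; in the claims the peer itself performs the comparison and replies over the network:

```python
def choose_peer(execution_estimate_s, peer_idle_times):
    """peer_idle_times: mapping of camera id -> reported idle time (s).
    Try peers from longest to shortest idle time; a peer is modeled as
    answering 'execution-possible' when its idle time covers the
    estimated execution time. Returns the chosen camera id, or None
    when every peer would answer 'execution-impossible'."""
    for cam_id, idle in sorted(peer_idle_times.items(),
                               key=lambda kv: kv[1], reverse=True):
        if idle >= execution_estimate_s:   # execution-possible
            return cam_id                  # transmit the task to this camera
        # execution-impossible: fall through to the next-longest idle time
    return None
```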
A method of controlling a plurality of cameras in a communication network is provided. The method includes: controlling a camera to receive and analyze information about an idle time of each of at least one other camera; according to the analyzing, controlling the camera to transmit at least one task and/or information about the at least one task to the at least one other camera, wherein the idle time of each of the at least one other camera is set to a time remaining before each of the at least one other camera is configured to execute a task among one or more tasks or a sum of time durations at which no tasks are allocated to each of the at least one other camera.1. A method of controlling a camera which is connected to at least one other camera and at least one client terminal through a communication network, the method comprising: controlling the camera to receive and analyze information about an idle time of each of the at least one other camera; according to the analyzing, controlling the camera to transmit at least one task of the camera and/or information about the at least one task of the camera to the at least one other camera, wherein the idle time of each of the at least one other camera is set to a time remaining before each of the at least one other camera is configured to execute a task among one or more tasks of each of the at least one other camera or a sum of time durations at which no tasks are allocated to each of the at least one other camera. 2. The method of claim 1, wherein the camera is controlled to receive the information about the idle time of each of the at least one camera at a predetermined time interval. 3. 
The method of claim 1, further comprising: controlling the camera to transmit information about an idle time of the camera to the at least other camera; and if at least one task of the at least one other camera and/or information about the at least one task of the at least one other camera are received, controlling the camera to execute the received at least one task at the camera, wherein the idle time of the camera is set to a time remaining before the camera is configured to execute a task among one or more tasks of the camera or a sum of time durations at which no tasks are allocated to the camera. 4. The method of claim 3, wherein the camera is controlled to receive the information about the idle time of each of the at least one camera at an interval of a predetermined time and/or transmit the information about the idle time of the camera at the interval of the predetermined time. 5. The method of claim 4, wherein the idle time of each of the at least one camera is calculated by adding time durations for executing all tasks set to be executed within the predetermined time and subtracting a result of the addition from the predetermined time. 6. The method of claim 3, further comprising: controlling the camera to transmit a result of the execution of the received at least one task to the at least one of other camera which transmitted the at least one task. 7. The method of claim 1, further comprising: controlling the at least one other camera to execute the at least one task of the camera transmitted to the at least one other camera; and controlling the at least one other camera to transmit a result of the execution of the at least one task of the camera to the camera. 8. 
The method of claim 1, further comprising controlling the camera to determine if an idle time of the camera exists, wherein if it is determined that the idle time of the camera does not exist, the camera is controlled to transmit the at least one task of the camera to the at least one other camera. 9. The method of claim 8, wherein if it is determined that the idle time of the camera does not exist, the camera is further controlled to: obtain an estimate of an execution time required for executing the at least one task of the camera to be transmitted; transmit information on the estimate of the execution time and an execution request message to a camera having a longest idle time among the at least one other camera; if an execution-possible message from the camera having the longest idle time is received, transmit the at least one task of the camera to be transmitted to the camera having the longest idle time; if an execution-impossible message from the camera having the longest idle time is received, transmit the information on the estimate of the execution time and the execution request message to a camera having a second longest idle time; and if an execution-possible message from the camera having the second longest idle time is received, transmit the at least one task of the camera to be transmitted to the camera having the second longest idle time. 10. 
The method of claim 3, further comprising: if information on an estimate of an execution time required for executing the at least one task of the at least one other camera and an execution request message for executing the at least one task of the at least one other camera are received, controlling the camera to transmit an execution-possible message or an execution-impossible message to the at least one other camera according to a result of comparing an idle time of the camera and the estimate of the execution time, before the at least one task of the at least one other camera and/or the information about the at least one task of the at least one other camera are received at the camera; and transmitting a result of the execution of the received at least one task to the at least one other camera. 11. The method of claim 10, wherein the camera is controlled to transmit the execution-possible message or the execution-impossible message to the at least one other camera based on priorities of the at least one other camera. 12. The method of claim 11, wherein a higher priority is set to a camera having a shorter idle time among the at least one other camera. 13. The method of claim 11, wherein the priority of the at least one other camera is set by a user. 14. The method of claim 1, wherein the at least one task of the camera comprises transmitting a result of processing image data captured by the camera to the at least one client terminal, wherein the information about the at least one task of the camera comprises an address of the at least one client terminal, wherein the at least one other camera is controlled to transmit the result of processing the image data captured by the camera to the address of the at least one client terminal. 15. 
The method of claim 3, wherein the at least one task of the at least one other camera comprises transmitting a result of processing image data captured by the at least one other camera to the at least one client terminal, wherein the information about the at least one task of the at least one other camera comprises an address of the at least one client terminal, wherein the camera is controlled to transmit the result of processing the image data captured by the at least one other camera to the address of the at least one client terminal. 16. A camera configured to be connected to at least one other camera and at least one client terminal through a communication network, the camera comprising: an optical system configured to capture image data; a communication port configured to receive information about an idle time of each of the at least one other camera; and a controller configured to analyze the information about the idle time of each of the at least one other camera, and transmit at least one task of the camera and/or information about the at least one task of the camera to the at least one other camera through the communication port, wherein the idle time of each of the at least one other camera is set to a time remaining before each of the at least one other camera is configured to execute a task among one or more tasks of each of the at least one other camera or a sum of time durations at which no tasks are allocated to each of the at least one other camera. 17. The camera of claim 16, wherein the camera is further configured to receive the information about the idle time of each of the at least one other camera at a predetermined time interval. 18. 
The camera of claim 16, wherein the controller is further configured to transmit information about an idle time of the camera to the at least one other camera, and, if at least one task of the at least one other camera and/or information about the at least one task of the at least one other camera are received, execute the received at least one task of the at least one other camera at the camera. 19. The camera of claim 18, wherein the controller is further configured to receive the information about the idle time of each of the at least one other camera at an interval of a predetermined time and/or transmit the information about the idle time of the camera at the interval of the predetermined time. 20. A surveillance system comprising: a plurality of cameras connected to one another; and a plurality of client terminals connected to the plurality of cameras through a communication network, wherein each of the plurality of cameras is configured to transmit an idle time of the camera to the other cameras, wherein each of the plurality of cameras is configured to receive and analyze information about an idle time of each of the other cameras, wherein, according to the analyzing, each of the plurality of cameras is configured to transmit at least one task of the camera to at least one of the other cameras, and wherein each of the plurality of cameras is configured to, if at least one task of at least one of the other cameras is received, execute the received at least one task of the at least one of the other cameras.
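The idle-time bookkeeping (claim 5) and the longest-idle-first offloading handshake (claims 8-9) described in the claims above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and function names, the 60-second window, and the task durations are assumptions introduced for the example.

```python
# Illustrative sketch of idle-time-based task offloading among cameras.
# All names and numbers are hypothetical; only the logic follows the claims.

class Camera:
    def __init__(self, name, interval, scheduled_task_durations):
        self.name = name
        self.interval = interval          # the "predetermined time" window
        self.scheduled = scheduled_task_durations

    def idle_time(self):
        # Claim 5: sum the durations of all tasks set to execute within the
        # predetermined time, then subtract that sum from the window.
        return self.interval - sum(self.scheduled)

    def can_execute(self, estimate):
        # Reply "execution-possible" only if the estimated execution time
        # fits within this camera's idle time.
        return self.idle_time() >= estimate


def offload_task(sender_idle, peers, estimate):
    """Claims 8-9: if the sender has no idle time, offer the task to peers
    in order of longest idle time, falling back on each refusal."""
    if sender_idle > 0:
        return None  # claim 8: only offload when no idle time exists
    for peer in sorted(peers, key=lambda c: c.idle_time(), reverse=True):
        if peer.can_execute(estimate):   # "execution-possible message"
            return peer.name
    return None  # every peer replied with an "execution-impossible message"


peers = [
    Camera("cam_b", interval=60, scheduled_task_durations=[50, 8]),  # idle 2
    Camera("cam_c", interval=60, scheduled_task_durations=[30]),     # idle 30
]
print(offload_task(sender_idle=0, peers=peers, estimate=10))  # -> cam_c
```

With a 10-unit estimate, the longest-idle peer (cam_c, 30 units idle) accepts; an estimate exceeding every peer's idle time would exhaust the fallback chain and return None.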
2,400
7,999
7,999
15,468,771
2,492
Mechanisms are provided for facilitating recertification of a user access entitlement. These mechanisms collect, from a system resource of a data processing system, access information representative of accesses of the system resource by a user access entitlement. These mechanisms determine that recertification of the user access entitlement, with regard to the system resource, is to be performed and a pattern of access is determined based on the access information for the user access entitlement. A recertification request graphical user interface is output to a user based on the pattern of access. The graphical user interface includes the pattern of access and one or more graphical user interface elements for receiving a user input specifying acceptance or denial of the recertification of the user access entitlement.
1. A method, in a data processing system having a processor implemented in hardware, for recertification of a user access entitlement, comprising: collecting, from a system resource of the data processing system, access information representative of accesses of the system resource by a user access entitlement; determining, by the processor, that recertification of the user access entitlement, with regard to the system resource, is to be performed; determining, by the processor, at least one first access metric associated with the user access entitlement based on the access information for the user access entitlement; and outputting, by the processor, a recertification request graphical user interface to a user based on the at least one first access metric, wherein the recertification request graphical user interface comprises: a representation of a comparison of the at least one first access metric, associated with the user access entitlement, with one or more second access metrics associated with one or more other user access entitlements, and one or more graphical user interface elements for receiving a user input specifying acceptance or denial of the recertification of the user access entitlement. 2. The method of claim 1, wherein collecting access information from the system resource comprises receiving the access information from an agent component executing on an end system associated with the system resource. 3. The method of claim 1, wherein the system resource of the data processing system comprises at least one of a hardware resource or a software resource. 4. The method of claim 1, wherein determining that recertification of the user access entitlement with regard to the system resource is to be performed comprises determining that a triggering condition specified in a recertification policy of the data processing system has occurred with regard to the user access entitlement. 5. 
The method of claim 1, wherein determining the at least one first access metric associated with the user access entitlement comprises: determining, by the processor, a context to use as a basis for determining a pattern of access, wherein the context comprises at least one of an identity sub-context, resource sub-context, or transaction sub-context; and determining, by the processor, the pattern of access by analyzing a portion of the access information corresponding to one or more of the identity sub-context, resource sub-context, or transaction sub-context. 6. The method of claim 5, wherein determining the context to use as a basis for determining the pattern of access comprises: sending, by the processor, a recertification notification to a user computing device indicating that recertification of the user access entitlement is necessary; performing, by the processor, a login operation of the user computing device; and in response to performing the login operation successfully, providing, by the processor, to the user computing device, a graphical user interface for inputting characteristics of the context. 7. (canceled) 8. The method of claim 1, wherein the one or more other user access entitlements are user access entitlements having a group association corresponding to a same group identifier as the user access entitlement for which recertification is sought. 9. The method of claim 8, wherein the graphical user interface comprises a pattern of access in the form of at least one graph of the at least one first access metric relative to the one or more second access metrics and one or more threshold values. 10. 
The method of claim 1, further comprising: determining, by the processor, a recommendation as to whether to accept or deny the recertification of the user access entitlement based on results of the determination of the at least one first access metric and a comparison of the at least one first access metric to the one or more second access metrics, wherein the recertification graphical user interface further comprises the recommendation. 11. A computer program product comprising a non-transitory computer readable medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device of a data processing system, causes the computing device to: collect, from a system resource of the data processing system, access information representative of accesses of the system resource by a user access entitlement; determine that recertification of the user access entitlement, with regard to the system resource, is to be performed; determine at least one first access metric associated with the user access entitlement based on the access information for the user access entitlement; and output a recertification request graphical user interface to a user based on the at least one first access metric, wherein the recertification request graphical user interface comprises: a representation of a comparison of the at least one first access metric, associated with the user access entitlement, with one or more second access metrics associated with one or more other user access entitlements, and one or more graphical user interface elements for receiving a user input specifying acceptance or denial of the recertification of the user access entitlement. 12. The computer program product of claim 11, wherein the computer readable program causes the computing device to collect access information from the system resource by receiving the access information from an agent component executing on an end system associated with the system resource. 13. 
The computer program product of claim 11, wherein the system resource of the data processing system comprises at least one of a hardware resource or a software resource. 14. The computer program product of claim 11, wherein the computer readable program causes the computing device to determine that recertification of the user access entitlement with regard to the system resource is to be performed by determining that a triggering condition specified in a recertification policy of the data processing system has occurred with regard to the user access entitlement. 15. The computer program product of claim 11, wherein the computer readable program causes the computing device to determine the at least one first access metric associated with the user access entitlement at least by: determining a context to use as a basis for determining a pattern of access, wherein the context comprises at least one of an identity sub-context, resource sub-context, or transaction sub-context; and determining the pattern of access by analyzing a portion of the access information corresponding to one or more of the identity sub-context, resource sub-context, or transaction sub-context. 16. The computer program product of claim 15, wherein the computer readable program causes the computing device to determine the context to use as a basis for determining the pattern of access by: sending a recertification notification to a user computing device indicating that recertification of the user access entitlement is necessary; performing a login operation of the user computing device; and in response to performing the login operation successfully, providing to the user computing device, a graphical user interface for inputting characteristics of the context. 17. (canceled) 18. 
The computer program product of claim 11, wherein the one or more other user access entitlements are user access entitlements having a group association corresponding to a same group identifier as the user access entitlement for which recertification is sought. 19. The computer program product of claim 18, wherein the graphical user interface comprises a pattern of access in the form of at least one graph of the at least one first access metric relative to the one or more second access metrics and one or more threshold values. 20. The computer program product of claim 11, wherein the computer readable program further causes the computing device to: determine a recommendation as to whether to accept or deny the recertification of the user access entitlement based on results of the determination of the at least one first access metric and a comparison of the at least one first access metric to the one or more second access metrics, wherein the recertification graphical user interface further comprises the recommendation. 21. 
An apparatus, comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: collect, from a system resource of the data processing system, access information representative of accesses of the system resource by a user access entitlement; determine that recertification of the user access entitlement, with regard to the system resource, is to be performed; determine at least one first access metric associated with the user access entitlement based on the access information for the user access entitlement; and output a recertification request graphical user interface to a user based on the at least one first access metric, wherein the recertification request graphical user interface comprises: a representation of a comparison of the at least one first access metric, associated with the user access entitlement, with one or more second access metrics associated with one or more other user access entitlements, and one or more graphical user interface elements for receiving a user input specifying acceptance or denial of the recertification of the user access entitlement.
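The comparison-and-recommendation step in the claims above (comparing an entitlement's access metric against the metrics of peer entitlements in the same group, then recommending acceptance or denial) can be sketched roughly as follows. The function name, the 10% threshold rule, and the sample numbers are illustrative assumptions, not details from the source.

```python
# Hypothetical sketch: compare one entitlement's access metric against its
# peer group and derive an accept/deny recertification recommendation.
from statistics import mean

def recertification_recommendation(metric, peer_metrics, threshold_ratio=0.1):
    """Recommend 'accept' when the entitlement's access metric is at least
    threshold_ratio of the peer-group mean, else 'deny'. The threshold rule
    is an assumption; the claims only require a comparison plus a
    recommendation shown in the recertification GUI."""
    group_mean = mean(peer_metrics)
    comparison = {
        "entitlement_metric": metric,
        "group_mean": group_mean,
        "threshold": group_mean * threshold_ratio,
    }
    recommendation = "accept" if metric >= comparison["threshold"] else "deny"
    return comparison, recommendation

# A near-dormant entitlement (2 accesses) in a group averaging 100 accesses
# falls below the 10% threshold, so denial is recommended.
comparison, rec = recertification_recommendation(2, [80, 100, 120])
print(rec)  # -> deny
```

The returned comparison dictionary corresponds to the "representation of a comparison" that the claimed GUI displays alongside the accept/deny controls.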
2,400