hexsha stringlengths 40 40 | size int64 5 1.05M | ext stringclasses 588 values | lang stringclasses 305 values | max_stars_repo_path stringlengths 3 363 | max_stars_repo_name stringlengths 5 118 | max_stars_repo_head_hexsha stringlengths 40 40 | max_stars_repo_licenses listlengths 1 10 | max_stars_count float64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringdate 2015-01-01 00:00:35 2022-03-31 23:43:49 ⌀ | max_stars_repo_stars_event_max_datetime stringdate 2015-01-01 12:37:38 2022-03-31 23:59:52 ⌀ | max_issues_repo_path stringlengths 3 363 | max_issues_repo_name stringlengths 5 118 | max_issues_repo_head_hexsha stringlengths 40 40 | max_issues_repo_licenses listlengths 1 10 | max_issues_count float64 1 134k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 363 | max_forks_repo_name stringlengths 5 135 | max_forks_repo_head_hexsha stringlengths 40 40 | max_forks_repo_licenses listlengths 1 10 | max_forks_count float64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringdate 2015-01-01 00:01:02 2022-03-31 23:27:27 ⌀ | max_forks_repo_forks_event_max_datetime stringdate 2015-01-03 08:55:07 2022-03-31 23:59:24 ⌀ | content stringlengths 5 1.05M | avg_line_length float64 1.13 1.04M | max_line_length int64 1 1.05M | alphanum_fraction float64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c1eaa1c4e93bcc8918a5f4a2ee5541b8fe9793b1 | 130 | nc | nesC | data/ctd/wod_009958422O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-08T03:04:18.000Z | 2020-02-08T00:25:12.000Z | data/ctd/wod_009958422O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 1 | 2020-08-18T03:10:27.000Z | 2020-09-14T21:30:35.000Z | data/ctd/wod_009958422O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-17T23:34:20.000Z | 2019-08-14T23:01:09.000Z | version https://git-lfs.github.com/spec/v1
oid sha256:42724924ee3d2d355a19fda61b17bf7c60f84b2df9fbc168aad86f89c91519d0
size 21852
| 32.5 | 75 | 0.884615 |
dc70058fc90e8323c625771ead3082e6f0d735a4 | 4,225 | nc | nesC | tos/platforms/micaz/sim/SimLocalTimeAlarmMicro32P.nc | johannesehala/tos-platforms | 52153f67e226bfd0e8cbc04c75b2e1dfcadc393b | [
"MIT"
] | 2 | 2018-07-19T08:34:48.000Z | 2018-08-20T11:04:44.000Z | tos/platforms/micaz/sim/SimLocalTimeAlarmMicro32P.nc | johannesehala/tos-platforms | 52153f67e226bfd0e8cbc04c75b2e1dfcadc393b | [
"MIT"
] | null | null | null | tos/platforms/micaz/sim/SimLocalTimeAlarmMicro32P.nc | johannesehala/tos-platforms | 52153f67e226bfd0e8cbc04c75b2e1dfcadc393b | [
"MIT"
] | 2 | 2017-01-25T14:20:15.000Z | 2018-11-15T13:58:34.000Z | /*
* Copyright (c) 2014, Defendec.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* - Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* - Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the
* distribution.
* - Neither the name of the copyright holder nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*
* @author Andres Vahter <andres.vahter@defendec.com>
*/
module SimLocalTimeAlarmMicro32P {
provides {
interface Alarm<TMicro, uint32_t>;
interface LocalTime<TMicro>;
}
}
implementation {
sim_event_t* m_timer_event;
bool m_running = FALSE;
uint32_t m_alarm_fire_time = 0;
/* m_last_zero keeps track of the phase of the clock. It denotes the sim
* time at which the underlying clock started, which is needed to
* calculate when compares will occur.
*/
sim_time_t m_last_zero = 0;
sim_time_t last_zero() {
if (m_last_zero == 0) {
m_last_zero = sim_mote_start_time(sim_node());
}
return m_last_zero;
}
sim_time_t notify_clock_ticks_per_sec() {
return 1000000UL;
}
sim_time_t clock_to_sim(sim_time_t t) {
t *= sim_ticks_per_sec();
t /= notify_clock_ticks_per_sec();
return t;
}
sim_time_t sim_to_clock(sim_time_t t) {
t *= notify_clock_ticks_per_sec();
t /= sim_ticks_per_sec();
return t;
}
void timer_event_handle(sim_event_t* evt) {
if (evt->cancelled) {
return;
}
else {
m_running = FALSE;
signal Alarm.fired();
}
}
sim_event_t* allocate_timer_event() {
sim_event_t* new_event = sim_queue_allocate_event();
new_event->handle = timer_event_handle;
new_event->cleanup = sim_queue_cleanup_none;
return new_event;
}
async command void Alarm.start(uint32_t dt) {
m_timer_event = allocate_timer_event();
m_timer_event->time = sim_time() + clock_to_sim(dt);
// Record the scheduled fire time so that Alarm.getAlarm() reports it.
m_alarm_fire_time = sim_to_clock(sim_time() - last_zero()) + dt;
m_running = TRUE;
sim_queue_insert(m_timer_event);
}
async command void Alarm.stop() {
m_timer_event->cancelled = TRUE;
m_running = FALSE;
}
async command bool Alarm.isRunning() {
return m_running;
}
async command void Alarm.startAt(uint32_t t0, uint32_t dt) {
m_timer_event = allocate_timer_event();
m_timer_event->time = sim_time() + clock_to_sim(t0) + clock_to_sim(dt);
// Record the scheduled fire time so that Alarm.getAlarm() reports it.
m_alarm_fire_time = t0 + dt;
m_running = TRUE;
sim_queue_insert(m_timer_event);
}
async command uint32_t Alarm.getNow() {
sim_time_t elapsed = sim_time() - last_zero();
elapsed = sim_to_clock(elapsed);
return elapsed;
}
async command uint32_t Alarm.getAlarm() {
return m_alarm_fire_time;
}
async command uint32_t LocalTime.get() {
sim_time_t elapsed = sim_time() - last_zero();
elapsed = sim_to_clock(elapsed);
return elapsed;
}
}
| 30.615942 | 79 | 0.672189 |
09d6193fc6adc6e1111a11e8cee7454e27748300 | 1,344 | nc | nesC | nesc/whip6/lib/diagnostic/StateLoggerToUSBPub.nc | wojtex/whip6-pub | 7aca863e45199f4f1354f24b1c88afd8cb34c2ba | [
"Apache-2.0",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-Clear",
"Intel",
"BSD-3-Clause"
] | 1 | 2017-02-21T16:44:56.000Z | 2017-02-21T16:44:56.000Z | nesc/whip6/lib/diagnostic/StateLoggerToUSBPub.nc | wojtex/whip6-pub | 7aca863e45199f4f1354f24b1c88afd8cb34c2ba | [
"Apache-2.0",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-Clear",
"Intel",
"BSD-3-Clause"
] | 9 | 2017-02-21T16:43:31.000Z | 2021-06-10T19:28:41.000Z | nesc/whip6/lib/diagnostic/StateLoggerToUSBPub.nc | wojtex/whip6-pub | 7aca863e45199f4f1354f24b1c88afd8cb34c2ba | [
"Apache-2.0",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-Clear",
"Intel",
"BSD-3-Clause"
] | 12 | 2016-12-19T12:04:17.000Z | 2020-09-17T14:44:39.000Z | /*
* whip6: Warsaw High-performance IPv6.
*
* Copyright (c) 2012-2017 Przemyslaw Horban
* All rights reserved.
*
* This file is distributed under the terms in the attached LICENSE
* files.
*/
/**
* Logs recorded states to USB in binary form.
*
* @author Przemyslaw Horban
*/
#include "StateLogger.h"
generic module StateLoggerToUSBPub(uint8_t bufferSize) {
provides interface StateLogger;
}
implementation {
uint8_t buffer[bufferSize];
uint16_t pos = 0;
void checkPos() {
if (pos + 4 >= bufferSize) {
usb_putchar(STATE_LOG_BUFFER_OVERFLOW);
usb_putchar(STATE_LOG_ENTRY_SEPARATOR);
call StateLogger.writeEntry();
}
}
command void StateLogger.log8(uint8_t data) {
buffer[pos] = data;
pos += 1;
checkPos();
}
command void StateLogger.log16(uint16_t data) {
*((uint16_t_xdata*)&buffer[pos]) = data;
pos += 2;
checkPos();
}
command void StateLogger.log32(uint32_t data) {
*((uint32_t_xdata*)&buffer[pos]) = data;
pos += 4;
checkPos();
}
command void StateLogger.writeEntry() {
uint16_t i;
for (i = 0; i < pos; i++) {
usb_putchar(buffer[i]);
}
usb_putchar(STATE_LOG_ENTRY_SEPARATOR);
pos = 0;
}
}
| 22.032787 | 72 | 0.590774 |
7d43e7c7a55ae9e5225b90ecea7c32fd3d2f1c76 | 131 | nc | nesC | testinput_tier_1/inputs/gfs_c12/mem009/20180415.000000.fv_tracer.res.tile5.nc | JCSDA-internal/fv3-jedi-data | 2598b683f060bf773eb9f65b11adc654578e823e | [
"Apache-2.0"
] | null | null | null | testinput_tier_1/inputs/gfs_c12/mem009/20180415.000000.fv_tracer.res.tile5.nc | JCSDA-internal/fv3-jedi-data | 2598b683f060bf773eb9f65b11adc654578e823e | [
"Apache-2.0"
] | 16 | 2021-06-10T12:54:16.000Z | 2022-03-31T16:46:03.000Z | testinput_tier_1/inputs/gfs_c12/mem009/20180415.000000.fv_tracer.res.tile5.nc | JCSDA-internal/fv3-jedi-data | 2598b683f060bf773eb9f65b11adc654578e823e | [
"Apache-2.0"
] | null | null | null | version https://git-lfs.github.com/spec/v1
oid sha256:2603b0c8d2f90bf774ab0ef898ec3c053a395ff208ed348dab3cab72941bafbb
size 313432
| 32.75 | 75 | 0.885496 |
3d2366056a1bb439121ccc03d25526d45f6ac6fb | 316 | nc | nesC | apps/breakfast/toast/testUart/TestAppC.nc | tp-freeforall/breakfast | 0399619cdb7a81b3c3cc4c5a1b7d69f5c32b8c65 | [
"BSD-3-Clause"
] | 1 | 2020-05-15T18:08:48.000Z | 2020-05-15T18:08:48.000Z | apps/breakfast/toast/testUart/TestAppC.nc | tp-freeforall/breakfast | 0399619cdb7a81b3c3cc4c5a1b7d69f5c32b8c65 | [
"BSD-3-Clause"
] | null | null | null | apps/breakfast/toast/testUart/TestAppC.nc | tp-freeforall/breakfast | 0399619cdb7a81b3c3cc4c5a1b7d69f5c32b8c65 | [
"BSD-3-Clause"
] | null | null | null | configuration TestAppC{
} implementation {
components MainC;
components PlatformSerialC;
components TestP;
components new TimerMilliC();
TestP.Boot -> MainC;
TestP.UartStream -> PlatformSerialC;
TestP.UartByte -> PlatformSerialC;
TestP.StdControl -> PlatformSerialC;
TestP.Timer -> TimerMilliC;
}
| 22.571429 | 38 | 0.75 |
3b6870aeb5f6cbd0549881b0a93526d127582657 | 1,986 | nc | nesC | apps/breakfast/util/i2cPersistentStorage/I2CPersistentStorageP.nc | tp-freeforall/breakfast | 0399619cdb7a81b3c3cc4c5a1b7d69f5c32b8c65 | [
"BSD-3-Clause"
] | 1 | 2020-05-15T18:08:48.000Z | 2020-05-15T18:08:48.000Z | apps/breakfast/util/i2cPersistentStorage/I2CPersistentStorageP.nc | tp-freeforall/breakfast | 0399619cdb7a81b3c3cc4c5a1b7d69f5c32b8c65 | [
"BSD-3-Clause"
] | null | null | null | apps/breakfast/util/i2cPersistentStorage/I2CPersistentStorageP.nc | tp-freeforall/breakfast | 0399619cdb7a81b3c3cc4c5a1b7d69f5c32b8c65 | [
"BSD-3-Clause"
] | null | null | null | #include "InternalFlash.h"
#include "I2CPersistentStorage.h"
#include "decodeError.h"
#include "I2CCom.h"
module I2CPersistentStorageP{
uses interface I2CComSlave;
uses interface InternalFlash;
} implementation {
i2c_message_t msg_internal;
norace i2c_message_t* msg = &msg_internal;
i2c_message_t* swap(i2c_message_t* msg_){
i2c_message_t* swp = msg;
msg = msg_;
return swp;
}
task void readTask(){
i2c_persistent_storage_t* payload =
(i2c_persistent_storage_t*) call I2CComSlave.getPayload(msg);
payload->cmd = I2C_STORAGE_RESPONSE_CMD;
//last byte is version, so read one byte less than the full segment
if (call InternalFlash.read(0, payload->data, IFLASH_SEGMENT_SIZE - 1) != SUCCESS){
printf("flash read failed\n\r");
}
call I2CComSlave.unpause();
}
//we don't unpause the lower layer until the data is filled into
// msg, so it's fine to just return that and do the swap.
async event i2c_message_t* I2CComSlave.slaveTXStart(i2c_message_t* msg_){
return swap(msg_);
}
task void writeTask(){
i2c_persistent_storage_t* payload =
(i2c_persistent_storage_t*) call I2CComSlave.getPayload(msg);
if (call InternalFlash.write(0, payload->data, IFLASH_SEGMENT_SIZE - 1) != SUCCESS){
printf("flash write failed\n\r");
}
call I2CComSlave.unpause();
}
async event i2c_message_t* I2CComSlave.received(i2c_message_t* msg_){
i2c_persistent_storage_t* payload =
(i2c_persistent_storage_t*) call I2CComSlave.getPayload(msg_);
switch(payload->cmd){
case I2C_STORAGE_WRITE_CMD:
//do not let any other commands come in until we're done with
// this one.
call I2CComSlave.pause();
post writeTask();
//we need to hang onto this buffer, since we're going to
// persist the data to flash. do a swap.
return swap(msg_);
case I2C_STORAGE_READ_CMD:
call I2CComSlave.pause();
post readTask();
return msg_;
default:
printf("unknown command\n\r");
return msg_;
}
}
}
| 30.090909 | 80 | 0.686304 |
2bee5b237a57ea55ab358d729d7d91fe159021e5 | 14,439 | nc | nesC | Auto Pick and Place Machine/Chip Feeder/base clamp/4 - 4mm Mill Pockets.nc | briandorey/DIY-Pick-and-Place-Hardware | 536ad730118c6afe94413fa462fbe210404b31c7 | [
"MIT"
] | 47 | 2015-01-20T10:07:07.000Z | 2021-07-19T06:59:22.000Z | Auto Pick and Place Machine/Chip Feeder/base clamp/4 - 4mm Mill Pockets.nc | briandorey/DIY-Pick-and-Place-Hardware | 536ad730118c6afe94413fa462fbe210404b31c7 | [
"MIT"
] | null | null | null | Auto Pick and Place Machine/Chip Feeder/base clamp/4 - 4mm Mill Pockets.nc | briandorey/DIY-Pick-and-Place-Hardware | 536ad730118c6afe94413fa462fbe210404b31c7 | [
"MIT"
] | 16 | 2015-02-25T06:07:08.000Z | 2021-05-25T05:50:49.000Z | ( Made using CamBam - http://www.cambam.co.uk )
( base clamp 10/27/2013 9:53:33 AM )
( T4 : 4.0 )
G21 G90 G91.1 G64 G40
G0 Z3.0
( T4 : 4.0 )
T4 M6
( 4mm Mill Pocket )
G17
M3 S1000
G0 X4.7088 Y23.2322
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X4.5087 Y23.4323 I-0.2004 J-0.0003
G3 X4.3086 Y23.2322 I0.0003 J-0.2004
G3 X4.5087 Y23.0322 I0.2003 J0.0003
G3 X4.7088 Y23.2322 I-0.0003 J0.2004
G1 F25.0 X5.5088 Y23.2335
G3 F150.0 X4.5086 Y24.2323 I-1.0004 J-0.0016
G3 X3.5086 Y23.2322 I0.0003 J-1.0004
G3 X4.5086 Y22.2322 I1.0003 J0.0003
G3 X5.5088 Y23.2322 I-0.0003 J1.0004
G3 Y23.2335 I-1.0004 J-0.0003
G1 F25.0 X4.7088 Y23.2322
G1 Z-2.0
G3 F150.0 X4.5087 Y23.4323 I-0.2004 J-0.0003
G3 X4.3086 Y23.2322 I0.0003 J-0.2004
G3 X4.5087 Y23.0322 I0.2003 J0.0003
G3 X4.7088 Y23.2322 I-0.0003 J0.2004
G1 F25.0 X5.5088 Y23.2335
G3 F150.0 X4.5086 Y24.2323 I-1.0004 J-0.0016
G3 X3.5086 Y23.2322 I0.0003 J-1.0004
G3 X4.5086 Y22.2322 I1.0003 J0.0003
G3 X5.5088 Y23.2322 I-0.0003 J1.0004
G3 Y23.2335 I-1.0004 J-0.0003
G0 Z3.0
G0 X4.5194 Y5.266
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X4.5087 Y5.2663 I-0.011 J-0.2001
G3 X4.3086 Y5.0664 I0.0003 J-0.2004
G3 X4.5087 Y4.866 I0.2007 J0.0004
G3 X4.7088 Y5.0663 I-0.0006 J0.2007
G3 X4.5194 Y5.266 I-0.2004 J-0.0004
G1 F25.0 X4.5633 Y6.0648
G3 F150.0 X4.5086 Y6.0663 I-0.055 J-0.9989
G3 X3.5086 Y5.0663 I0.0003 J-1.0004
G3 X4.5087 Y4.066 I1.0007 J0.0005
G3 X5.5088 Y5.0663 I-0.0006 J1.0007
G3 X4.5633 Y6.0648 I-1.0004 J-0.0004
G1 F25.0 X4.5194 Y5.266
G1 Z-2.0
G3 F150.0 X4.5087 Y5.2663 I-0.011 J-0.2001
G3 X4.3086 Y5.0664 I0.0003 J-0.2004
G3 X4.5087 Y4.866 I0.2007 J0.0004
G3 X4.7088 Y5.0663 I-0.0006 J0.2007
G3 X4.5194 Y5.266 I-0.2004 J-0.0004
G1 F25.0 X4.5633 Y6.0648
G3 F150.0 X4.5086 Y6.0663 I-0.055 J-0.9989
G3 X3.5086 Y5.0663 I0.0003 J-1.0004
G3 X4.5087 Y4.066 I1.0007 J0.0005
G3 X5.5088 Y5.0663 I-0.0006 J1.0007
G3 X4.5633 Y6.0648 I-1.0004 J-0.0004
G0 Z3.0
G0 X32.3108 Y5.0734
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X32.3107 Y5.0662 I0.2001 J-0.0072
G1 Y5.066
G3 X32.5107 Y4.8657 I0.2003 J0.0
G3 X32.7107 Y5.0659 I-0.0003 J0.2002
G1 Y5.066
G3 X32.5106 Y5.2665 I-0.2005 J0.0
G3 X32.3108 Y5.0734 I0.0004 J-0.2003
G1 F25.0 X31.5113 Y5.102
G3 F150.0 X31.5107 Y5.0662 I0.9996 J-0.0357
G1 Y5.0661
G3 X32.5107 Y4.0657 I1.0003 J-0.0001
G3 X33.5107 Y5.066 I-0.0002 J1.0002
G1 Y5.0661
G3 X32.5105 Y6.0665 I-1.0005 J-0.0001
G3 X31.5113 Y5.102 I0.0004 J-1.0003
G1 F25.0 X32.3108 Y5.0734
G1 Z-2.0
G3 F150.0 X32.3107 Y5.0662 I0.2001 J-0.0072
G1 Y5.066
G3 X32.5107 Y4.8657 I0.2003 J0.0
G3 X32.7107 Y5.0659 I-0.0003 J0.2002
G1 Y5.066
G3 X32.5106 Y5.2665 I-0.2005 J0.0
G3 X32.3108 Y5.0734 I0.0004 J-0.2003
G1 F25.0 X31.5113 Y5.102
G3 F150.0 X31.5107 Y5.0662 I0.9996 J-0.0357
G1 Y5.0661
G3 X32.5107 Y4.0657 I1.0003 J-0.0001
G3 X33.5107 Y5.066 I-0.0002 J1.0002
G1 Y5.0661
G3 X32.5105 Y6.0665 I-1.0005 J-0.0001
G3 X31.5113 Y5.102 I0.0004 J-1.0003
G0 Z3.0
G0 X32.4995 Y23.0327
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X32.5105 Y23.0324 I0.011 J0.1995
G1 X32.5109
G3 X32.7107 Y23.2321 I0.0 J0.1998
G3 X32.5107 Y23.4323 I-0.2002 J0.0
G3 X32.3107 Y23.2321 I0.0003 J-0.2003
G1 Y23.2322
G1 Y23.2321
G3 X32.4995 Y23.0327 I0.1998 J0.0001
G1 F25.0 X32.4555 Y22.2339
G3 F150.0 X32.5105 Y22.2324 I0.055 J0.9983
G1 X32.5108
G3 X33.5107 Y23.2321 I0.0001 J0.9998
G1 Y23.2322
G3 X32.5106 Y24.2323 I-1.0002 J0.0
G3 X31.5107 Y23.2321 I0.0003 J-1.0003
G1 Y23.2322
G1 Y23.2321
G3 X32.4555 Y22.2339 I0.9998 J0.0
G1 F25.0 X32.4995 Y23.0327
G1 Z-2.0
G3 F150.0 X32.5105 Y23.0324 I0.011 J0.1995
G1 X32.5109
G3 X32.7107 Y23.2321 I0.0 J0.1998
G3 X32.5107 Y23.4323 I-0.2002 J0.0
G3 X32.3107 Y23.2321 I0.0003 J-0.2003
G1 Y23.2322
G1 Y23.2321
G3 X32.4995 Y23.0327 I0.1998 J0.0001
G1 F25.0 X32.4555 Y22.2339
G3 F150.0 X32.5105 Y22.2324 I0.055 J0.9983
G1 X32.5108
G3 X33.5107 Y23.2321 I0.0001 J0.9998
G1 Y23.2322
G3 X32.5106 Y24.2323 I-1.0002 J0.0
G3 X31.5107 Y23.2321 I0.0003 J-1.0003
G1 Y23.2322
G1 Y23.2321
G3 X32.4555 Y22.2339 I0.9998 J0.0
G0 Z3.0
G0 X60.3126 Y23.225
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X60.512 Y23.0325 I0.1994 J0.0071
G2 X60.5128 I0.0004 J-2.8
G3 X60.7126 Y23.2322 I0.0 J0.1998
G1 Y23.2323
G1 Y23.2321
G1 Y23.2322
G3 X60.5125 Y23.4323 I-0.2002 J-0.0001
G1 X60.5124
G1 X60.5126
G1 X60.5125
G3 X60.3124 Y23.2322 I0.0001 J-0.2002
G1 Y23.2321
G3 X60.3126 Y23.225 I0.1995 J0.0
G1 F25.0 X59.5131 Y23.1965
G3 F150.0 X60.5121 Y22.2325 I0.9989 J0.0355
G2 X60.5127 I0.0003 J-2.0
G3 X61.5126 Y23.2322 I0.0001 J0.9998
G3 X60.5125 Y24.2323 I-1.0002 J-0.0001
G1 X60.5124
G1 X60.5125
G3 X59.5124 Y23.2322 I0.0001 J-1.0002
G1 Y23.2321
G3 X59.5131 Y23.1965 I0.9995 J0.0
G1 F25.0 X60.3126 Y23.225
G1 Z-2.0
G3 F150.0 X60.512 Y23.0325 I0.1994 J0.0071
G2 X60.5128 I0.0004 J-2.8
G3 X60.7126 Y23.2322 I0.0 J0.1998
G1 Y23.2323
G1 Y23.2321
G1 Y23.2322
G3 X60.5125 Y23.4323 I-0.2002 J-0.0001
G1 X60.5124
G1 X60.5126
G1 X60.5125
G3 X60.3124 Y23.2322 I0.0001 J-0.2002
G1 Y23.2321
G3 X60.3126 Y23.225 I0.1995 J0.0
G1 F25.0 X59.5131 Y23.1965
G3 F150.0 X60.5121 Y22.2325 I0.9989 J0.0355
G2 X60.5127 I0.0003 J-2.0
G3 X61.5126 Y23.2322 I0.0001 J0.9998
G3 X60.5125 Y24.2323 I-1.0002 J-0.0001
G1 X60.5124
G1 X60.5125
G3 X59.5124 Y23.2322 I0.0001 J-1.0002
G1 Y23.2321
G3 X59.5131 Y23.1965 I0.9995 J0.0
G0 Z3.0
G0 X60.502 Y5.266
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X60.3124 Y5.0662 I0.011 J-0.2003
G3 X60.5125 Y4.866 I0.2007 J0.0005
G3 X60.7126 Y5.0663 I-0.0006 J0.2007
G3 X60.5126 Y5.2663 I-0.2004 J-0.0004
G3 X60.502 Y5.266 I0.0005 J-0.2006
G1 F25.0 X60.4579 Y6.0648
G3 F150.0 X59.5124 Y5.0662 I0.0551 J-0.9991
G3 X60.5125 Y4.066 I1.0007 J0.0005
G3 X61.5126 Y5.0663 I-0.0006 J1.0007
G3 X60.5125 Y6.0663 I-1.0004 J-0.0004
G3 X60.4579 Y6.0648 I0.0005 J-1.0006
G1 F25.0 X60.502 Y5.266
G1 Z-2.0
G3 F150.0 X60.3124 Y5.0662 I0.011 J-0.2003
G3 X60.5125 Y4.866 I0.2007 J0.0005
G3 X60.7126 Y5.0663 I-0.0006 J0.2007
G3 X60.5126 Y5.2663 I-0.2004 J-0.0004
G3 X60.502 Y5.266 I0.0005 J-0.2006
G1 F25.0 X60.4579 Y6.0648
G3 F150.0 X59.5124 Y5.0662 I0.0551 J-0.9991
G3 X60.5125 Y4.066 I1.0007 J0.0005
G3 X61.5126 Y5.0663 I-0.0006 J1.0007
G3 X60.5125 Y6.0663 I-1.0004 J-0.0004
G3 X60.4579 Y6.0648 I0.0005 J-1.0006
G0 Z3.0
G0 X88.3145 Y5.073
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X88.3143 Y5.0665 I0.2001 J-0.0071
G3 X88.5145 Y4.866 I0.2011 J0.0006
G3 X88.7145 Y5.0662 I-0.001 J0.201
G3 X88.5142 Y5.2662 I-0.2008 J-0.0008
G3 X88.3145 Y5.073 I0.0004 J-0.2003
G1 F25.0 X87.515 Y5.1015
G3 F150.0 X87.5143 Y5.0664 I0.9996 J-0.0356
G3 X88.5145 Y4.066 I1.0011 J0.0007
G3 X89.5145 Y5.0662 I-0.001 J1.001
G3 X88.5143 Y6.0662 I-1.0008 J-0.0008
G3 X87.515 Y5.1015 I0.0003 J-1.0003
G1 F25.0 X88.3145 Y5.073
G1 Z-2.0
G3 F150.0 X88.3143 Y5.0665 I0.2001 J-0.0071
G3 X88.5145 Y4.866 I0.2011 J0.0006
G3 X88.7145 Y5.0662 I-0.001 J0.201
G3 X88.5142 Y5.2662 I-0.2008 J-0.0008
G3 X88.3145 Y5.073 I0.0004 J-0.2003
G1 F25.0 X87.515 Y5.1015
G3 F150.0 X87.5143 Y5.0664 I0.9996 J-0.0356
G3 X88.5145 Y4.066 I1.0011 J0.0007
G3 X89.5145 Y5.0662 I-0.001 J1.001
G3 X88.5143 Y6.0662 I-1.0008 J-0.0008
G3 X87.515 Y5.1015 I0.0003 J-1.0003
G0 Z3.0
G0 X88.5034 Y23.0327
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X88.5141 Y23.0324 I0.011 J0.1997
G3 X88.7145 Y23.2322 I-0.0003 J0.2007
G3 X88.5142 Y23.4322 I-0.2008 J-0.0008
G3 X88.3143 Y23.2322 I0.0004 J-0.2003
G3 X88.5034 Y23.0327 I0.2 J0.0002
G1 F25.0 X88.4593 Y22.2339
G3 F150.0 X88.5142 Y22.2324 I0.055 J0.9985
G3 X89.5145 Y23.2322 I-0.0004 J1.0007
G3 X88.5143 Y24.2322 I-1.0008 J-0.0008
G3 X87.5143 Y23.2322 I0.0003 J-1.0003
G3 X88.4593 Y22.2339 I1.0 J0.0002
G1 F25.0 X88.5034 Y23.0327
G1 Z-2.0
G3 F150.0 X88.5141 Y23.0324 I0.011 J0.1997
G3 X88.7145 Y23.2322 I-0.0003 J0.2007
G3 X88.5142 Y23.4322 I-0.2008 J-0.0008
G3 X88.3143 Y23.2322 I0.0004 J-0.2003
G3 X88.5034 Y23.0327 I0.2 J0.0002
G1 F25.0 X88.4593 Y22.2339
G3 F150.0 X88.5142 Y22.2324 I0.055 J0.9985
G3 X89.5145 Y23.2322 I-0.0004 J1.0007
G3 X88.5143 Y24.2322 I-1.0008 J-0.0008
G3 X87.5143 Y23.2322 I0.0003 J-1.0003
G3 X88.4593 Y22.2339 I1.0 J0.0002
G0 Z3.0
G0 X116.3167 Y23.2253
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X116.5164 Y23.0325 I0.1998 J0.0071
G3 X116.5165 I0.0001 J0.1999
G1 X116.5162
G3 X116.5164 I0.0 J0.2003
G3 X116.7166 Y23.2322 I-0.0001 J0.2003
G3 X116.5165 Y23.4322 I-0.2006 J-0.0006
G3 X116.3166 Y23.2321 I0.0006 J-0.2005
G3 X116.3167 Y23.2253 I0.1999 J0.0004
G1 F25.0 X115.5172 Y23.1969
G3 F150.0 X116.5163 Y22.2325 I0.9993 J0.0356
G1 X116.5164
G1 X116.5162
G1 X116.5163
G3 X117.5166 Y23.2322 I-0.0001 J1.0003
G3 X116.5164 Y24.2322 I-1.0006 J-0.0006
G3 X115.5166 Y23.2321 I0.0006 J-1.0005
G3 X115.5172 Y23.1969 I0.9999 J0.0003
G1 F25.0 X116.3167 Y23.2253
G1 Z-2.0
G3 F150.0 X116.5164 Y23.0325 I0.1998 J0.0071
G3 X116.5165 I0.0001 J0.1999
G1 X116.5162
G3 X116.5164 I0.0 J0.2003
G3 X116.7166 Y23.2322 I-0.0001 J0.2003
G3 X116.5165 Y23.4322 I-0.2006 J-0.0006
G3 X116.3166 Y23.2321 I0.0006 J-0.2005
G3 X116.3167 Y23.2253 I0.1999 J0.0004
G1 F25.0 X115.5172 Y23.1969
G3 F150.0 X116.5163 Y22.2325 I0.9993 J0.0356
G1 X116.5164
G1 X116.5162
G1 X116.5163
G3 X117.5166 Y23.2322 I-0.0001 J1.0003
G3 X116.5164 Y24.2322 I-1.0006 J-0.0006
G3 X115.5166 Y23.2321 I0.0006 J-1.0005
G3 X115.5172 Y23.1969 I0.9999 J0.0003
G0 Z3.0
G0 X116.5057 Y5.2659
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X116.3166 Y5.0664 I0.011 J-0.1998
G3 X116.5163 Y4.866 I0.2006 J0.0003
G3 X116.7166 Y5.0663 I-0.0009 J0.2011
G3 X116.5163 Y5.2662 I-0.2007 J-0.0008
G3 X116.5057 Y5.2659 I0.0004 J-0.2001
G1 F25.0 X116.4616 Y6.0647
G3 F150.0 X115.5166 Y5.0663 I0.055 J-0.9986
G3 X116.5163 Y4.066 I1.0006 J0.0004
G3 X117.5166 Y5.0663 I-0.0009 J1.0011
G3 X116.5163 Y6.0662 I-1.0007 J-0.0008
G3 X116.4616 Y6.0647 I0.0004 J-1.0001
G1 F25.0 X116.5057 Y5.2659
G1 Z-2.0
G3 F150.0 X116.3166 Y5.0664 I0.011 J-0.1998
G3 X116.5163 Y4.866 I0.2006 J0.0003
G3 X116.7166 Y5.0663 I-0.0009 J0.2011
G3 X116.5163 Y5.2662 I-0.2007 J-0.0008
G3 X116.5057 Y5.2659 I0.0004 J-0.2001
G1 F25.0 X116.4616 Y6.0647
G3 F150.0 X115.5166 Y5.0663 I0.055 J-0.9986
G3 X116.5163 Y4.066 I1.0006 J0.0004
G3 X117.5166 Y5.0663 I-0.0009 J1.0011
G3 X116.5163 Y6.0662 I-1.0007 J-0.0008
G3 X116.4616 Y6.0647 I0.0004 J-1.0001
G0 Z3.0
G0 X144.3184 Y5.0739
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X144.3183 Y5.0668 I0.1998 J-0.0071
G2 Y5.0658 I-2.8 J-0.0006
G3 X144.5182 Y4.8657 I0.2001 J0.0
G3 X144.7183 Y5.0661 I-0.0002 J0.2004
G1 Y5.0665
G3 X144.5182 Y5.2667 I-0.2001 J0.0
G3 X144.3184 Y5.0739 I0.0 J-0.1999
G1 F25.0 X143.519 Y5.1024
G3 F150.0 X143.5183 Y5.0666 I0.9992 J-0.0355
G2 Y5.0659 I-2.0 J-0.0004
G3 X144.5181 Y4.0657 I1.0001 J-0.0001
G3 X145.5183 Y5.0661 I-0.0002 J1.0004
G1 Y5.0664
G3 X144.5181 Y6.0667 I-1.0001 J0.0001
G3 X143.519 Y5.1024 I0.0001 J-0.9999
G1 F25.0 X144.3184 Y5.0739
G1 Z-2.0
G3 F150.0 X144.3183 Y5.0668 I0.1998 J-0.0071
G2 Y5.0658 I-2.8 J-0.0006
G3 X144.5182 Y4.8657 I0.2001 J0.0
G3 X144.7183 Y5.0661 I-0.0002 J0.2004
G1 Y5.0665
G3 X144.5182 Y5.2667 I-0.2001 J0.0
G3 X144.3184 Y5.0739 I0.0 J-0.1999
G1 F25.0 X143.519 Y5.1024
G3 F150.0 X143.5183 Y5.0666 I0.9992 J-0.0355
G2 Y5.0659 I-2.0 J-0.0004
G3 X144.5181 Y4.0657 I1.0001 J-0.0001
G3 X145.5183 Y5.0661 I-0.0002 J1.0004
G1 Y5.0664
G3 X144.5181 Y6.0667 I-1.0001 J0.0001
G3 X143.519 Y5.1024 I0.0001 J-0.9999
G0 Z3.0
G0 X144.5072 Y23.0323
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X144.5182 Y23.032 I0.011 J0.1996
G1 X144.5183
G3 X144.7183 Y23.232 I0.0 J0.2
G2 Y23.2325 I2.8 J0.0002
G3 X144.5184 Y23.4325 I-0.1999 J0.0
G1 X144.5183
G1 X144.5185
G1 X144.5184
G3 X144.3183 Y23.2323 I0.0001 J-0.2002
G1 Y23.2319
G3 X144.5072 Y23.0323 I0.1999 J0.0
G1 F25.0 X144.4632 Y22.2335
G3 F150.0 X144.5181 Y22.232 I0.055 J0.9984
G1 X144.5182
G3 X145.5183 Y23.232 I0.0001 J1.0
G2 Y23.2324 I2.0 J0.0001
G3 X144.5183 Y24.2325 I-0.9999 J0.0001
G1 X144.5182
G1 X144.5183
G3 X143.5183 Y23.2322 I0.0002 J-1.0002
G1 Y23.232
G3 X144.4632 Y22.2335 I0.9999 J-0.0001
G1 F25.0 X144.5072 Y23.0323
G1 Z-2.0
G3 F150.0 X144.5182 Y23.032 I0.011 J0.1996
G1 X144.5183
G3 X144.7183 Y23.232 I0.0 J0.2
G2 Y23.2325 I2.8 J0.0002
G3 X144.5184 Y23.4325 I-0.1999 J0.0
G1 X144.5183
G1 X144.5185
G1 X144.5184
G3 X144.3183 Y23.2323 I0.0001 J-0.2002
G1 Y23.2319
G3 X144.5072 Y23.0323 I0.1999 J0.0
G1 F25.0 X144.4632 Y22.2335
G3 F150.0 X144.5181 Y22.232 I0.055 J0.9984
G1 X144.5182
G3 X145.5183 Y23.232 I0.0001 J1.0
G2 Y23.2324 I2.0 J0.0001
G3 X144.5183 Y24.2325 I-0.9999 J0.0001
G1 X144.5182
G1 X144.5183
G3 X143.5183 Y23.2322 I0.0002 J-1.0002
G1 Y23.232
G3 X144.4632 Y22.2335 I0.9999 J-0.0001
G0 Z3.0
G0 X172.3202 Y23.2252
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X172.5202 Y23.0322 I0.2001 J0.0071
G1 X172.5204
G3 X172.72 Y23.2318 I0.0 J0.1996
G2 Y23.2325 I2.8 J0.0004
G3 X172.5202 Y23.4323 I-0.1998 J0.0
G1 X172.5201
G3 X172.3201 Y23.2323 I0.0 J-0.2001
G1 Y23.2324
G1 Y23.2323
G3 X172.3202 Y23.2252 I0.2002 J0.0001
G1 F25.0 X171.5207 Y23.1968
G3 F150.0 X172.5202 Y22.2322 I0.9996 J0.0356
G1 X172.5203
G3 X173.52 Y23.2319 I0.0001 J0.9996
G2 Y23.2324 I2.0 J0.0003
G3 X172.5202 Y24.2323 I-0.9998 J0.0001
G1 X172.5201
G3 X171.5201 Y23.2323 I0.0 J-1.0001
G1 Y23.2322
G1 Y23.2323
G3 X171.5207 Y23.1968 I1.0002 J0.0001
G1 F25.0 X172.3202 Y23.2252
G1 Z-2.0
G3 F150.0 X172.5202 Y23.0322 I0.2001 J0.0071
G1 X172.5204
G3 X172.72 Y23.2318 I0.0 J0.1996
G2 Y23.2325 I2.8 J0.0004
G3 X172.5202 Y23.4323 I-0.1998 J0.0
G1 X172.5201
G3 X172.3201 Y23.2323 I0.0 J-0.2001
G1 Y23.2324
G1 Y23.2323
G3 X172.3202 Y23.2252 I0.2002 J0.0001
G1 F25.0 X171.5207 Y23.1968
G3 F150.0 X172.5202 Y22.2322 I0.9996 J0.0356
G1 X172.5203
G3 X173.52 Y23.2319 I0.0001 J0.9996
G2 Y23.2324 I2.0 J0.0003
G3 X172.5202 Y24.2323 I-0.9998 J0.0001
G1 X172.5201
G3 X171.5201 Y23.2323 I0.0 J-1.0001
G1 Y23.2322
G1 Y23.2323
G3 X171.5207 Y23.1968 I1.0002 J0.0001
G0 Z3.0
G0 X172.5091 Y5.266
G0 Z1.0
G1 F25.0 Z-1.0
G3 F150.0 X172.3201 Y5.0663 I0.011 J-0.1998
G1 Y5.0662
G3 X172.5201 Y4.8657 I0.2005 J0.0
G3 X172.72 Y5.0661 I-0.0005 J0.2004
G1 Y5.0665
G3 X172.5202 Y5.2663 I-0.1998 J0.0
G1 X172.5201
G3 X172.5091 Y5.266 I0.0 J-0.2001
G1 F25.0 X172.4651 Y6.0648
G3 F150.0 X171.5201 Y5.0663 I0.055 J-0.9985
G1 Y5.0662
G3 X172.5201 Y4.0657 I1.0005 J0.0
G3 X173.52 Y5.0661 I-0.0004 J1.0004
G1 Y5.0664
G3 X172.5202 Y6.0663 I-0.9998 J0.0001
G1 X172.5201
G3 X172.4651 Y6.0648 I0.0 J-1.0001
G1 F25.0 X172.5091 Y5.266
G1 Z-2.0
G3 F150.0 X172.3201 Y5.0663 I0.011 J-0.1998
G1 Y5.0662
G3 X172.5201 Y4.8657 I0.2005 J0.0
G3 X172.72 Y5.0661 I-0.0005 J0.2004
G1 Y5.0665
G3 X172.5202 Y5.2663 I-0.1998 J0.0
G1 X172.5201
G3 X172.5091 Y5.266 I0.0 J-0.2001
G1 F25.0 X172.4651 Y6.0648
G3 F150.0 X171.5201 Y5.0663 I0.055 J-0.9985
G1 Y5.0662
G3 X172.5201 Y4.0657 I1.0005 J0.0
G3 X173.52 Y5.0661 I-0.0004 J1.0004
G1 Y5.0664
G3 X172.5202 Y6.0663 I-0.9998 J0.0001
G1 X172.5201
G3 X172.4651 Y6.0648 I0.0 J-1.0001
G0 Z3.0
M5
M30
| 27.608031 | 47 | 0.726574 |
d5865ac353b1932442ff1c7a876ba123c71b8b7d | 130 | nc | nesC | data/ctd/wod_011281173O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-08T03:04:18.000Z | 2020-02-08T00:25:12.000Z | data/ctd/wod_011281173O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 1 | 2020-08-18T03:10:27.000Z | 2020-09-14T21:30:35.000Z | data/ctd/wod_011281173O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-17T23:34:20.000Z | 2019-08-14T23:01:09.000Z | version https://git-lfs.github.com/spec/v1
oid sha256:378d511dc7163c420d7340fb5d728957141f634786ab43acd853f601f64f5e0f
size 27040
| 32.5 | 75 | 0.884615 |
71d8730beb6a9aa6ecec40708db1df7d36be24b4 | 131 | nc | nesC | testref/bump_hdiag_write_grids/test_1-1_local_diag_cor.nc | JCSDA-internal/saber-data | 37028bf8490c90d7637112d16f2a646f8721a753 | [
"Apache-2.0"
] | null | null | null | testref/bump_hdiag_write_grids/test_1-1_local_diag_cor.nc | JCSDA-internal/saber-data | 37028bf8490c90d7637112d16f2a646f8721a753 | [
"Apache-2.0"
] | 3 | 2021-12-24T09:30:00.000Z | 2022-02-15T18:35:59.000Z | testref/bump_hdiag_write_grids/test_1-1_local_diag_cor.nc | JCSDA-internal/saber-data | 37028bf8490c90d7637112d16f2a646f8721a753 | [
"Apache-2.0"
] | null | null | null | version https://git-lfs.github.com/spec/v1
oid sha256:a3390b1aa05579d6fd9b02eb5cc35351be5cbcad079cdeb4802a3350267c0f30
size 274849
| 32.75 | 75 | 0.885496 |
bcbf9e0aae83075ffe808cd6136f3bf5334cb732 | 1,967 | nc | nesC | dataStructures/modules/ListC.nc | aalhag24/Networking-Group-Chat-Application | 11ebda971cdf9c749696d2e2a493be7f546584e4 | [
"MIT"
] | null | null | null | dataStructures/modules/ListC.nc | aalhag24/Networking-Group-Chat-Application | 11ebda971cdf9c749696d2e2a493be7f546584e4 | [
"MIT"
] | null | null | null | dataStructures/modules/ListC.nc | aalhag24/Networking-Group-Chat-Application | 11ebda971cdf9c749696d2e2a493be7f546584e4 | [
"MIT"
] | null | null | null | /**
* ANDES Lab - University of California, Merced
* This class provides a simple list.
*
* @author UCM ANDES Lab
* @author Alex Beltran
* @date 2013/09/03
*
*/
generic module ListC(typedef t, int n){
provides interface List<t>;
}
implementation{
uint16_t MAX_SIZE = n;
t container[n];
uint16_t size = 0;
command void List.pushback(t input){
// Check to see if we have room for the input.
if(size < MAX_SIZE){
// Put it in.
container[size] = input;
size++;
}
}
command void List.pushfront(t input){
// Check to see if we have room for the input.
if(size < MAX_SIZE){
int32_t i;
// Shift everything to the right.
for(i = size-1; i>=0; i--){
container[i+1] = container[i];
}
container[0] = input;
size++;
}
}
command t List.popback(){
t returnVal;
// The last element lives at index size-1, so decrement the size first;
// container[size] is one past the end. The value itself need not be
// erased, only the size adjusted.
if(size > 0)size--;
returnVal = container[size];
return returnVal;
}
command t List.popfront(){
t returnVal;
uint16_t i;
returnVal = container[0];
if(size>0){
// Move everything to the left.
for(i = 0; i<size-1; i++){
container[i] = container[i+1];
}
size--;
}
return returnVal;
}
command error_t List.popAt(uint16_t pos){
uint16_t i;
// Guard against positions past the end as well as an empty list.
if(size > 0 && pos < size){
for(i=pos; i<size-1; i++){
container[i] = container[i+1];
}
size = size-1;
return SUCCESS;
}
else {
return FAIL;
}
}
command void List.swap(uint16_t i, uint16_t j){
t tmp = container[i];
container[i] = container[j];
container[j] = tmp;
}
// This is similar to peek head.
command t List.front(){
return container[0];
}
// Peek tail
command t List.back(){
// The last element lives at index size-1; container[size] is one past the end.
return container[size-1];
}
command bool List.isEmpty(){
return (size == 0);
}
command uint16_t List.size(){
return size;
}
command t List.get(uint16_t position){
return container[position];
}
}
| 17.104348 | 74 | 0.621251 |
1e1f08cccd002dceb121c9b61c28958ba5e4b9a5 | 130 | nc | nesC | data/ctd/wod_011204806O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-08T03:04:18.000Z | 2020-02-08T00:25:12.000Z | data/ctd/wod_011204806O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 1 | 2020-08-18T03:10:27.000Z | 2020-09-14T21:30:35.000Z | data/ctd/wod_011204806O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-17T23:34:20.000Z | 2019-08-14T23:01:09.000Z | version https://git-lfs.github.com/spec/v1
oid sha256:948731bf265517b8f2d7fe767350180980da70296a9f8b386d2186b6ef732905
size 46860
| 32.5 | 75 | 0.884615 |
75135f3c830f713d047ee57a3c19bbebe8cef656 | 797 | nc | nesC | Auto Pick and Place Machine/component feeder/old feeder/head stepper mount/4 - 4mm bearing Profile 2.nc | briandorey/DIY-Pick-and-Place-Hardware | 536ad730118c6afe94413fa462fbe210404b31c7 | [
"MIT"
] | 47 | 2015-01-20T10:07:07.000Z | 2021-07-19T06:59:22.000Z | Auto Pick and Place Machine/component feeder/old feeder/head stepper mount/4 - 4mm bearing Profile 2.nc | briandorey/DIY-Pick-and-Place-Hardware | 536ad730118c6afe94413fa462fbe210404b31c7 | [
"MIT"
] | null | null | null | Auto Pick and Place Machine/component feeder/old feeder/head stepper mount/4 - 4mm bearing Profile 2.nc | briandorey/DIY-Pick-and-Place-Hardware | 536ad730118c6afe94413fa462fbe210404b31c7 | [
"MIT"
] | 16 | 2015-02-25T06:07:08.000Z | 2021-05-25T05:50:49.000Z | ( Made using CamBam - http://www.cambam.co.uk )
( head stepper mount 9/28/2013 11:01:13 AM )
( T4 : 3.9 )
G21 G90 G64 G40
G0 Z3.0
( T4 : 3.9 )
T4 M6
( Profile1 )
G17
M3 S1000
G0 X27.0498 Y21.0015
G0 Z1.0
G1 F50.0 Z-5.0
G3 F120.0 X25.2795 Y25.2752 I-6.0255 J0.0076
G2 X25.2758 Y25.2789 I1.3771 J1.3806
G3 X21.0025 Y27.0491 I-4.2654 J-4.2536
G2 X20.9976 I-0.0026 J1.95
G3 X16.7236 Y25.2787 I-0.0073 J-6.0264
G2 X16.7201 Y25.2752 I-1.3806 J1.3772
G3 X14.9497 Y21.0015 I4.2547 J-4.2661
G2 Y20.9965 I-1.95 J-0.0025
G3 X16.72 Y16.723 I6.0242 J-0.0079
G2 X16.7239 Y16.7192 I-1.377 J-1.3808
G3 X20.9973 Y14.949 I4.2651 J4.2533
G2 X21.0024 I0.0027 J-1.95
G3 X25.2759 Y16.7193 I0.0076 J6.025
G2 X25.2796 Y16.723 I1.3807 J-1.377
G3 X27.0498 Y20.9965 I-4.2539 J4.2654
G2 Y21.0015 I1.95 J0.0026
G0 Z3.0
M5
M30
| 24.151515 | 47 | 0.70138 |
a4e2e03a5d2ec220b6e06be5fc5eb43e07792638 | 451 | nc | nesC | nxtmote/tos/chips/at91/gpio/GeneralIOC.nc | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | 1 | 2020-02-28T20:35:09.000Z | 2020-02-28T20:35:09.000Z | nxtmote/tos/chips/at91/gpio/GeneralIOC.nc | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | nxtmote/tos/chips/at91/gpio/GeneralIOC.nc | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | configuration GeneralIOC
{
provides {
interface GeneralIO[uint8_t pin];
interface GpioInterrupt[uint8_t pin]; //TODO: Implement
interface HalAT91GpioInterrupt[uint8_t pin];
}
}
implementation
{
components HalAT91_GeneralIOM;
components HplAT91_GPIOC;
GeneralIO = HalAT91_GeneralIOM;
HalAT91GpioInterrupt = HalAT91_GeneralIOM;
GpioInterrupt = HalAT91_GeneralIOM;
HalAT91_GeneralIOM.HplAT91_GPIOPin -> HplAT91_GPIOC;
}
| 20.5 | 59 | 0.776053 |
08363e27db42bec3facfeb5f4a045ece769a5aa1 | 130 | nc | nesC | data/ctd/wod_015581143O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-08T03:04:18.000Z | 2020-02-08T00:25:12.000Z | data/ctd/wod_015581143O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 1 | 2020-08-18T03:10:27.000Z | 2020-09-14T21:30:35.000Z | data/ctd/wod_015581143O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-17T23:34:20.000Z | 2019-08-14T23:01:09.000Z | version https://git-lfs.github.com/spec/v1
oid sha256:6f002ac4b3d5b4374d8dd40412321d8fe5f9c64e598c3bd0eaf5aa91baf7e44a
size 19028
| 32.5 | 75 | 0.884615 |
8f8b71f3ab3d994c6671302c0d394b9766e3f58b | 522 | nc | nesC | apps/wavelet/CreateMsg.nc | jryans/compass-dsr-tinyos | 78035851f771fb68e7c455f767ec26a5b4a22bbd | [
"BSD-3-Clause"
] | 1 | 2020-10-14T07:54:13.000Z | 2020-10-14T07:54:13.000Z | apps/wavelet/CreateMsg.nc | jryans/compass-dsr-tinyos | 78035851f771fb68e7c455f767ec26a5b4a22bbd | [
"BSD-3-Clause"
] | null | null | null | apps/wavelet/CreateMsg.nc | jryans/compass-dsr-tinyos | 78035851f771fb68e7c455f767ec26a5b4a22bbd | [
"BSD-3-Clause"
] | null | null | null | /**
* Creates a TOSMsg for use by other components. Since our
* networking stack is supported by Transceiver which stores
* these centrally, we can't just create TOSMsgs wherever
* we choose.
* @author Ryan Stinnett
*/
includes AM;
interface CreateMsg
{
/**
* Allocates a message for a specific AM type.
*/
command TOS_MsgPtr create();
/**
* Allocates a message for a specific AM type, and copies
* data from a source message into it.
*/
command TOS_MsgPtr createCopy(TOS_MsgPtr src);
}
| 21.75 | 60 | 0.697318 |
cb0c0aff3113ffb667062cd1dc3c993d95228b16 | 130 | nc | nesC | data/ctd/wod_015269677O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-08T03:04:18.000Z | 2020-02-08T00:25:12.000Z | data/ctd/wod_015269677O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 1 | 2020-08-18T03:10:27.000Z | 2020-09-14T21:30:35.000Z | data/ctd/wod_015269677O.nc | UoA-eResearch/argo | 161d1389e6e81c70b65482dbcf213c9ead32ee4c | [
"MIT"
] | 2 | 2019-07-17T23:34:20.000Z | 2019-08-14T23:01:09.000Z | version https://git-lfs.github.com/spec/v1
oid sha256:96bbd2c8d8873a19b9a38e29874bf0c93e9b0b7dee612b785a608339a12ff843
size 28652
| 32.5 | 75 | 0.884615 |
02dd1d7abcf167ecaabd51682333e80f76bb1815 | 2,618 | nc | nesC | nesc/whip6/lib/netstack/private/GenericRawUDPSocketManagerPrv.nc | wojtex/whip6-pub | 7aca863e45199f4f1354f24b1c88afd8cb34c2ba | [
"Apache-2.0",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-Clear",
"Intel",
"BSD-3-Clause"
] | 1 | 2017-02-21T16:44:56.000Z | 2017-02-21T16:44:56.000Z | nesc/whip6/lib/netstack/private/GenericRawUDPSocketManagerPrv.nc | wojtex/whip6-pub | 7aca863e45199f4f1354f24b1c88afd8cb34c2ba | [
"Apache-2.0",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-Clear",
"Intel",
"BSD-3-Clause"
] | 9 | 2017-02-21T16:43:31.000Z | 2021-06-10T19:28:41.000Z | nesc/whip6/lib/netstack/private/GenericRawUDPSocketManagerPrv.nc | wojtex/whip6-pub | 7aca863e45199f4f1354f24b1c88afd8cb34c2ba | [
"Apache-2.0",
"BSD-3-Clause-No-Nuclear-License-2014",
"BSD-3-Clause-Clear",
"Intel",
"BSD-3-Clause"
] | 12 | 2016-12-19T12:04:17.000Z | 2020-09-17T14:44:39.000Z | /*
* whip6: Warsaw High-performance IPv6.
*
* Copyright (c) 2012-2017 Konrad Iwanicki
* All rights reserved.
*
* This file is distributed under the terms in the attached LICENSE
* files.
*/
#include "BaseCompileTimeConfig.h"
#include "NetStackCompileTimeConfig.h"
/**
* The generic manager for raw UDP sockets.
*
* @author Konrad Iwanicki
*/
configuration GenericRawUDPSocketManagerPrv
{
provides
{
interface UDPSocketController[udp_socket_id_t sockId];
interface UDPRawReceiver[udp_socket_id_t sockId];
interface UDPRawSender[udp_socket_id_t sockId];
}
}
implementation
{
enum
{
NUM_SOCKETS = uniqueCount("GenericRawUDPSocketManagerPrv::Socket"),
};
enum
{
MAX_BYTES_PROCESSED_PER_TASK = WHIP6_BASE_MAX_BYTES_PROCESSED_PER_TASK,
};
enum
{
IPV6_STACK_CLIENT_ID = unique("IPv6Stack::Client"),
};
components BoardStartupPub as GlobalMainPrv;
components IPv6InterfaceStateProviderPub as IfaceStateProviderPrv;
components InternalIPv6StackPub as IPv6StackPrv;
components PlatformRandomPub as RandomPrv;
components new UDPRawSocketManagerMainPrv(
NUM_SOCKETS,
MAX_BYTES_PROCESSED_PER_TASK
) as ImplPrv;
components new QueuePub(
whip6_ipv6_out_packet_processing_state_t *,
uint8_t,
NUM_SOCKETS
) as PacketSourceAddressSelectorQueueForStackPrv;
components new QueuePub(
whip6_ipv6_out_packet_processing_state_t *,
uint8_t,
NUM_SOCKETS
) as PacketSenderQueueForStackPrv;
UDPSocketController = ImplPrv.UDPSocketController;
UDPRawReceiver = ImplPrv.UDPRawReceiver;
UDPRawSender = ImplPrv.UDPRawSender;
GlobalMainPrv.InitSequence[1] -> ImplPrv;
ImplPrv.IPv6PacketReceiver -> IPv6StackPrv.PacketReceiver[WHIP6_IANA_IPV6_UDP];
ImplPrv.IPv6PacketSender -> IPv6StackPrv.PacketSender[IPV6_STACK_CLIENT_ID];
ImplPrv.IPv6PacketSourceAddressSelector -> IPv6StackPrv.PacketSourceAddressSelector[IPV6_STACK_CLIENT_ID];
ImplPrv.IPv6InterfaceStateProvider -> IfaceStateProviderPrv.Interfaces;
ImplPrv.Random -> RandomPrv;
IPv6StackPrv.HopByHopRoutingLoopIn[WHIP6_IANA_IPV6_UDP] ->
IPv6StackPrv.HopByHopRoutingLoopOut[WHIP6_IANA_IPV6_UDP];
IPv6StackPrv.OptionalOutgoingPacketSourceAddressSelectorQueueForProtocol[IPV6_STACK_CLIENT_ID] ->
PacketSourceAddressSelectorQueueForStackPrv;
IPv6StackPrv.OptionalOutgoingPacketSenderQueueForProtocol[IPV6_STACK_CLIENT_ID] ->
PacketSenderQueueForStackPrv;
}
| 30.8 | 110 | 0.741024 |
a67470543e7cf9e0c77387031765e9447f3e8eba | 4,169 | nc | nesC | tos/chips/m16c60/uart/M16c60SpiP.nc | mtaghiza/tinyos-main-1 | cac075f7eae46c6a37409e66137a78b9bc3a64b1 | [
"BSD-3-Clause"
] | 1,053 | 2015-01-02T02:34:54.000Z | 2022-03-30T20:03:22.000Z | tos/chips/m16c60/uart/M16c60SpiP.nc | mtaghiza/tinyos-main-1 | cac075f7eae46c6a37409e66137a78b9bc3a64b1 | [
"BSD-3-Clause"
] | 101 | 2015-01-04T16:17:48.000Z | 2021-04-24T20:44:35.000Z | tos/chips/m16c60/uart/M16c60SpiP.nc | mtaghiza/tinyos-main-1 | cac075f7eae46c6a37409e66137a78b9bc3a64b1 | [
"BSD-3-Clause"
] | 455 | 2015-01-10T21:41:10.000Z | 2022-03-18T21:54:05.000Z | /*
* Copyright (c) 2011 Lulea University of Technology
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* - Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* - Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the
* distribution.
* - Neither the name of the copyright holders nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
* INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
* SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/**
* Spi interface implementations for a M16c/60 Uart.
*
* @author Henrik Makitaavola <henrik.makitaavola@gmail.com>
*/
generic module M16c60SpiP()
{
provides
{
interface SpiPacket;
interface SpiByte;
interface FastSpiByte;
}
uses interface HplM16c60Uart as HplUart;
}
implementation
{
enum
{
S_IDLE,
S_PACKET,
};
uint8_t m_state = S_IDLE;
uint8_t* m_tx_buf;
uint8_t* m_rx_buf; // matches the uint8_t* rxBuf passed to send()
uint16_t m_length; // len is uint16_t; uint8_t would silently truncate
uint16_t m_pos;
async command error_t SpiPacket.send(uint8_t* txBuf,
uint8_t* rxBuf,
uint16_t len )
{
atomic
{
if (m_state != S_IDLE)
{
return EBUSY;
}
m_state = S_PACKET;
}
if (len == 0)
{
// Release the driver again, otherwise it stays busy forever.
atomic m_state = S_IDLE;
return EINVAL;
}
atomic
{
m_rx_buf = rxBuf;
m_tx_buf = txBuf;
m_length = len;
m_pos = 0;
}
call HplUart.enableRxInterrupt();
atomic call HplUart.tx(m_tx_buf[m_pos]);
return SUCCESS;
}
async event void HplUart.rxDone()
{
atomic
{
if (m_state != S_PACKET)
{
return;
}
m_rx_buf[m_pos++] = call HplUart.rx();
if (m_pos == m_length)
{
// Done sending and receiving.
call HplUart.disableRxInterrupt();
m_state = S_IDLE;
signal SpiPacket.sendDone(m_tx_buf, m_rx_buf, m_length, SUCCESS);
}
else
{
call HplUart.tx(m_tx_buf[m_pos]);
}
}
}
async event void HplUart.txDone() {}
default async event void SpiPacket.sendDone(uint8_t* txBuf,
uint8_t* rxBuf,
uint16_t len,
error_t error ) {}
async command uint8_t SpiByte.write( uint8_t tx )
{
uint8_t tmp = call HplUart.rx(); // Empty rx buf
call HplUart.tx(tx);
while(call HplUart.isRxEmpty());
return call HplUart.rx();
}
async command void FastSpiByte.splitWrite(uint8_t data)
{
uint8_t tmp = call HplUart.rx(); // Empty rx buf
call HplUart.tx(data);
}
async command uint8_t FastSpiByte.splitReadWrite(uint8_t data)
{
uint8_t tmp;
while(call HplUart.isRxEmpty());
tmp = call HplUart.rx();
call HplUart.tx(data);
return tmp;
}
async command uint8_t FastSpiByte.splitRead()
{
while(call HplUart.isRxEmpty());
return call HplUart.rx();
}
async command uint8_t FastSpiByte.write(uint8_t data)
{
return call SpiByte.write(data);
}
}
| 26.386076 | 73 | 0.640921 |
62b00f5fa812b7c2e2bfe43a05b9c068145fcaa9 | 1,229 | nc | nesC | yuceg/tos/interfaces/msleep.nc | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | 1 | 2020-02-28T20:35:09.000Z | 2020-02-28T20:35:09.000Z | yuceg/tos/interfaces/msleep.nc | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | yuceg/tos/interfaces/msleep.nc | tinyos-io/tinyos-3.x-contrib | 3aaf036722a2afc0c0aad588459a5c3e00bd3c01 | [
"BSD-3-Clause",
"MIT"
] | null | null | null | /* Copyright (c) 2008, Computer Engineering Group, Yazd University , Iran .
* All rights reserved.
*
* Permission to use, copy, modify, and distribute this software and its
* documentation for any purpose, without fee, and without written
* agreement is hereby granted, provided that the above copyright
* notice, the (updated) modification history and the author appear in
* all copies of this source code.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS `AS IS'
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS
* BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, LOSS OF USE, DATA,
* OR PROFITS) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
* THE POSSIBILITY OF SUCH DAMAGE.
*/
interface msleep {
command void sleep(uint8_t id,uint16_t ms);
}
| 47.269231 | 79 | 0.748576 |
d00ec3e79645b4feaf3942aaba1474a800fc1710 | 2,528 | ooc | ooc | sdk/os/native/PipeUnix.ooc | fredrikbryntesson/launchtest | 92ceb31995e660b278225e3061fb19603158dac4 | [
"MIT"
] | 152 | 2016-02-03T11:16:59.000Z | 2022-02-28T12:57:14.000Z | sdk/os/native/PipeUnix.ooc | fredrikbryntesson/launchtest | 92ceb31995e660b278225e3061fb19603158dac4 | [
"MIT"
] | 157 | 2015-01-09T13:39:47.000Z | 2016-02-01T12:28:29.000Z | sdk/os/native/PipeUnix.ooc | fredrikbryntesson/launchtest | 92ceb31995e660b278225e3061fb19603158dac4 | [
"MIT"
] | 17 | 2016-04-06T16:37:17.000Z | 2021-01-26T14:32:26.000Z | import ../[unistd, FileDescriptor, Pipe]
version(unix || apple) {
/**
* Unix implementation of pipes.
*/
PipeUnix: class extends Pipe {
readFD, writeFD: FileDescriptor
init: func ~withFDs (=readFD, =writeFD) {
if(readFD == -1 && writeFD == -1) {
init()
return
}
if(readFD == -1) {
// pipe() fills two descriptors and must only be called once;
// a second call would create an extra pipe and leak its fds.
fds := [-1, -1] as Int*
if (pipe(fds) < 0) {
// TODO: add detailed error message
Exception new(This, "Couldn't create pipe") throw()
}
this readFD = fds[0]
}
if(writeFD == -1) {
fds := [-1, -1] as Int*
if (pipe(fds) < 0) {
// TODO: add detailed error message
Exception new(This, "Couldn't create pipe") throw()
}
// fds[1] is the write end; fds[0] is the read end.
this writeFD = fds[1]
}
}
init: func ~twos {
fds := [-1, -1] as Int*
/* Try to open a new pipe */
if (pipe(fds) < 0) {
// TODO: add detailed error message
Exception new(This, "Couldn't create pipes") throw()
}
readFD = fds[0]
writeFD = fds[1]
}
read: func ~cstring (buf: CString, len: Int) -> Int {
howmuch := readFD read(buf, len)
if (howmuch <= 0) {
if (errno == EAGAIN) {
return 0
}
eof = true
return -1
}
howmuch
}
/** write 'len' bytes of 'data' to the pipe */
write: func(data: Pointer, len: Int) -> Int {
return writeFD write(data, len)
}
/**
* close the pipe, either in reading or writing
* @param arg 'r' = close in reading, 'w' = close in writing
*/
close: func (end: Char) -> Int {
fd := _getFD(end)
if (fd == 0) return 0
fd close()
}
close: func ~both {
readFD close()
writeFD close()
}
setNonBlocking: func (end: Char) {
fd := _getFD(end)
if (fd == 0) return
fd setNonBlocking()
}
setBlocking: func (end: Char) {
fd := _getFD(end)
if (fd == 0) return
fd setBlocking()
}
// utility functions
_getFD: func (end: Char) -> FileDescriptor {
match end {
case 'r' => readFD
case 'w' => writeFD
case => 0 as FileDescriptor
}
}
}
/* C interface */
include fcntl
include sys/stat
include sys/types
EAGAIN: extern Int
}
| 21.606838 | 67 | 0.465585 |
aeb78dc0a9476b951332c21de429d6570e62027e | 26,027 | ooc | ooc | source/rock/frontend/CommandLine.ooc | fredrikbryntesson/launchtest | 92ceb31995e660b278225e3061fb19603158dac4 | [
"MIT"
] | null | null | null | source/rock/frontend/CommandLine.ooc | fredrikbryntesson/launchtest | 92ceb31995e660b278225e3061fb19603158dac4 | [
"MIT"
] | null | null | null | source/rock/frontend/CommandLine.ooc | fredrikbryntesson/launchtest | 92ceb31995e660b278225e3061fb19603158dac4 | [
"MIT"
] | null | null | null |
// sdk stuff
import io/File, os/[Terminal, Process, Pipe, Time]
import structs/[ArrayList, List, Stack]
import text/StringTokenizer
// our stuff
import Help, Token, BuildParams, AstBuilder, PathList, Target
import rock/frontend/drivers/[Driver, SequenceDriver, MakeDriver, DummyDriver, CCompiler, AndroidDriver]
import rock/backend/json/JSONGenerator
import rock/backend/lua/LuaGenerator
import rock/middle/[Module, Import, UseDef]
import rock/middle/tinker/Tinkerer
import rock/middle/algo/ImportClassifier
import rock/RockVersion
system: extern func (command: CString)
/**
* Handles command-line arguments parsing, launches the appropritae
* driver.
*/
CommandLine: class {
params: BuildParams
mainUseDef: UseDef
init: func(args : ArrayList<String>) {
params = BuildParams new(args[0])
modulePaths := ArrayList<String> new()
isFirst := true
alreadyDidSomething := false
targetModule: Module
for (arg in args) {
if(isFirst) {
isFirst = false
continue
}
longOption := false
if (arg startsWith?("-")) {
option := arg substring(1)
if (option startsWith?("-")) {
longOption = true
option = option substring(1)
}
if (option startsWith?("sourcepath=")) {
"[ERROR] Specifying sourcepath by hand is deprecated.\nInstead, create a .use file and specify the sourcepath from there." println()
hardDeprecation("sourcepath")
} else if (option startsWith?("outpath=")) {
if(!longOption) warnUseLong("outpath")
params outPath = File new(arg substring(arg indexOf('=') + 1))
params clean = false
} else if (option startsWith?("staticlib")) {
hardDeprecation("staticlib")
} else if (option startsWith?("dynamiclib")) {
hardDeprecation("dynamiclib")
} else if (option startsWith?("packagefilter=")) {
hardDeprecation("packagefilter")
} else if (option startsWith?("libfolder=")) {
hardDeprecation("libfolder")
} else if(option startsWith?("backend")) {
if(!longOption) warnUseLong("backend")
params backend = arg substring(arg indexOf('=') + 1)
if(params backend != "c" && params backend != "json" && params backend != "luaffi") {
"Unknown backend: %s." format(params backend) println()
params backend = "c"
}
} else if (option startsWith?("incpath=")) {
if(!longOption) warnUseLong("incpath")
params incPath add(arg substring(arg indexOf('=') + 1))
} else if (option startsWith?("D")) {
params defineSymbol(arg substring(2))
} else if (option startsWith?("I")) {
params incPath add(arg substring(2))
} else if (option startsWith?("libpath")) {
if(!longOption) warnUseLong("libpath")
params libPath add(arg substring(arg indexOf('=') + 1))
} else if (option startsWith?("editor")) {
if(!longOption) warnUseLong("editor")
params editor = arg substring(arg indexOf('=') + 1)
} else if (option startsWith?("entrypoint")) {
if(!longOption) warnUseLong("entrypoint")
params entryPoint = arg substring(arg indexOf('=') + 1)
} else if (option == "newsdk") {
hardDeprecation("newsdk")
} else if (option == "newstr") {
hardDeprecation("newstr")
} else if(option == "cstrings") {
hardDeprecation("cstrings")
} else if (option == "inline") {
hardDeprecation("inline")
} else if (option == "no-inline") {
hardDeprecation("inline")
} else if (option == "c") {
params link = false
} else if(option == "debugloop") {
if(!longOption) warnUseLong("debugloop")
params debugLoop = true
} else if(option == "debuglibcache") {
if(!longOption) warnUseLong("debuglibcache")
params debugLibcache = true
} else if (option == "debugtemplates") {
if(!longOption) warnUseLong("debugtemplates")
params debugTemplates = true
} else if(option startsWith?("ignoredefine=")) {
if(!longOption) warnUseLong("ignoredefine")
params ignoredDefines add(option substring(13))
} else if (option == "allerrors") {
if(!longOption) warnUseLong("allerrors")
params fatalError = false
} else if(option startsWith?("dist=")) {
if(!longOption) warnUseLong("dist")
params distLocation = File new(option substring(5))
} else if(option startsWith?("sdk=")) {
"The --sdk=PATH option is deprecated." println()
"If you want to use your own SDK, create your own sdk.use" println()
"and adjust the OOC_LIBS environment variable" println()
} else if(option startsWith?("libs=")) {
if(!longOption) warnUseLong("libs")
params libPath = File new(option substring(5))
} else if(option startsWith?("linker=")) {
if(!longOption) warnUseLong("linker")
params linker = option substring(7)
} else if (option == "nolibcache") {
hardDeprecation("nolibcache")
} else if (option == "libcache") {
if(!longOption) warnUseLong("libcache")
params libcache = true
} else if (option == "libcachepath") {
if(!longOption) warnUseLong("libcachepath")
params libcachePath = option substring(option indexOf('=') + 1)
} else if (option startsWith?("L")) {
params libPath add(option substring(1))
} else if (option startsWith?("l")) {
params dynamicLibs add(option substring(1))
} else if (option == "nolang") {
"--nolang is not supported anymore. \
If you want to fiddle with the SDK, make your own sdk.use" println()
} else if (option == "nomain") {
if(!longOption) warnUseLong("nomain")
params defaultMain = false
} else if (option startsWith?("gc=")) {
suboption := option substring(3)
match suboption {
case "off" =>
params enableGC = false
params undefineSymbol(BuildParams GC_DEFINE)
case "dynamic" =>
params enableGC = true
params dynGC = true
params defineSymbol(BuildParams GC_DEFINE)
case "static" =>
params enableGC = true
params dynGC = false
params defineSymbol(BuildParams GC_DEFINE)
case =>
"Unrecognized option %s." printfln(option)
"Valid values are gc=off, gc=dynamic, gc=static" printfln()
}
} else if (option == "noclean") {
if(!longOption) warnUseLong("noclean")
params clean = false
} else if (option == "nohints") {
if(!longOption) warnUseLong("nohints")
params helpful = false
} else if (option == "nolines") {
if(!longOption) warnUseLong("nolines")
params lineDirectives = false
} else if (option == "shout") {
if(!longOption) warnUseLong("shout")
params shout = true
} else if (option == "q" || option == "quiet") {
if(!longOption && option != "q") warnUseLong("quiet")
params shout = false
params verbose = false
params veryVerbose = false
} else if (option == "timing" || option == "t") {
if(!longOption && option != "t") warnUseLong("timing")
params timing = true
} else if (option == "debug" || option == "g") {
warnDeprecated(option, "pg")
params profile = Profile DEBUG
} else if (option == "pg") {
params profile = Profile DEBUG
} else if (option == "pr") {
params profile = Profile RELEASE
} else if (option == "O0") {
params optimization = OptimizationLevel O0
} else if (option == "O1") {
params optimization = OptimizationLevel O1
} else if (option == "O2") {
params optimization = OptimizationLevel O2
} else if (option == "O3") {
params optimization = OptimizationLevel O3
} else if (option == "Os") {
params optimization = OptimizationLevel Os
} else if (option == "verbose" || option == "v") {
if(!longOption && option != "v") warnUseLong("verbose")
params verbose = true
} else if (option == "verboser" || option == "vv") {
if(!longOption && option != "vv") warnUseLong("verbose")
params verbose = true
params verboser = true
} else if (option == "veryVerbose" || option == "vvv") {
if(!longOption && option != "vvv") warnUseLong("veryVerbose")
params verbose = true
params verboser = true
params veryVerbose = true
} else if (option == "stats") {
if(!longOption) warnUseLong("stats")
params stats = true
} else if (option == "run" || option == "r") {
if(!longOption && option != "r") warnUseLong("run")
params run = true
params shout = false
} else if (option startsWith?("target=")) {
targetName := option substring("target=" length())
params target = match targetName {
case "linux" => Target LINUX
case "win" => Target WIN
case "solaris" => Target SOLARIS
case "haiku" => Target HAIKU
case "osx" => Target OSX
case "freebsd" => Target FREEBSD
case "openbsd" => Target OPENBSD
case "netbsd" => Target NETBSD
case "dragonfly" => Target DRAGONFLY
case "android" => Target ANDROID
case =>
"[ERROR] Unknown target: %s" printfln(targetName)
failure(params)
-1
}
params undoTargetSpecific()
params doTargetSpecific()
} else if (option startsWith?("driver=")) {
driverName := option substring("driver=" length())
params driver = match (driverName) {
case "combine" =>
"[ERROR] The combine driver is deprecated." println()
failure(params)
params driver
case "sequence" =>
SequenceDriver new(params)
case "android" =>
params target = Target ANDROID
AndroidDriver new(params)
case "make" =>
MakeDriver new(params)
case "dummy" =>
DummyDriver new(params)
case =>
"[ERROR] Unknown driver: %s" printfln(driverName)
failure(params)
null
}
} else if (option startsWith?("blowup=")) {
if(!longOption) warnUseLong("blowup")
params blowup = option substring(7) toInt()
} else if (option == "V" || option == "version") {
if(!longOption && option != "V") warnUseLong("version")
"rock %s, built on %s" printfln(RockVersion getName(), __BUILD_DATETIME__)
exit(0)
} else if (option == "h" || option == "help") {
if(!longOption && option != "h") warnUseLong("help")
Help printHelp()
exit(0)
} else if (option startsWith?("cc=")) {
if(!longOption) warnUseLong("cc")
params compiler setExecutable(option substring(3))
} else if (option startsWith?("host=")) {
host := option substring("host=" length())
params host = host
} else if (option startsWith?("gcc")) {
hardDeprecation("gcc")
} else if (option startsWith?("icc")) {
hardDeprecation("icc")
} else if (option startsWith?("tcc")) {
hardDeprecation("tcc")
} else if (option startsWith?("clang")) {
hardDeprecation("clang")
} else if (option == "onlyparse") {
if(!longOption) warnUseLong("onlyparse")
params driver = null
params onlyparse = true
} else if (option == "onlycheck") {
if(!longOption) warnUseLong("onlycheck")
params driver = null
} else if (option == "onlygen") {
if(!longOption) warnUseLong("onlygen")
params driver = DummyDriver new(params)
} else if (option startsWith?("o=")) {
params binaryPath = arg substring(arg indexOf('=') + 1)
} else if (option == "slave") {
hardDeprecation("slave")
} else if (option startsWith?("bannedflag=")) {
flag := arg substring(arg indexOf('=') + 1)
params bannedFlags add(flag)
} else if (option startsWith?("j")) {
threads := arg substring(2) toInt()
params parallelism = threads
} else if (option startsWith?("m")) {
arch := arg substring(2)
if (arch == "32" || arch == "64")
params arch = arg substring(2)
else
"Unrecognized architecture: %s" printfln(arch)
} else if (option == "x") {
"Cleaning up outpath and .libs" println()
cleanHardcore()
alreadyDidSomething = true
} else {
"Unrecognized option: %s" printfln(arg)
}
} else if(arg startsWith?("+")) {
params compilerArgs add(arg substring(1))
} else {
lowerArg := arg toLower()
match {
case lowerArg endsWith?(".ooc") =>
modulePaths add(arg)
case lowerArg endsWith?(".use") =>
prepareCompilationFromUse(File new(arg), modulePaths, true, targetModule&)
case lowerArg contains?(".") =>
// unknown file, complain
"[ERROR] Don't know what to do with argument %s, bailing out" printfln(arg)
failure(params)
case =>
// probably an ooc file without the extension
modulePaths add(arg + ".ooc")
}
}
}
try {
params bake()
} catch (e: ParamsError) {
error(e message)
failure(params)
}
if(modulePaths empty?() && !targetModule) {
if (alreadyDidSomething) {
exit(0)
}
uzeFile : File = null
// try to find a .use file
File new(".") children each(|c|
// anyone using an uppercase use file is a criminal anyway.
if(c path toLower() endsWith?(".use")) {
uzeFile = c
}
)
if(!uzeFile) {
"rock: no .ooc nor .use files found" println()
exit(1)
}
prepareCompilationFromUse(uzeFile, modulePaths, false, targetModule&)
}
if(params sourcePath empty?()) {
moduleName := "program"
if (!modulePaths empty?()) {
moduleName = modulePaths get(0)
}
if (moduleName endsWith?(".ooc")) {
moduleName = moduleName[0..-5]
}
virtualUse := UseDef new(moduleName)
virtualUse sourcePath = File new(".") getAbsolutePath()
virtualUse apply(params)
}
errorCode := 0
if (targetModule) {
postParsing(targetModule)
} else for(modulePath in modulePaths) {
code := parse(modulePath replaceAll('/', File separator))
if(code != 0) {
errorCode = 2 // C compiler failure.
break
}
}
// c phase 5: clean up
if(params clean) {
clean()
}
}
prepareCompilationFromUse: func (uzeFile: File, modulePaths: ArrayList<String>, crucial: Bool, targetModule: Module@) {
// extract '.use' from use file
identifier := uzeFile name[0..-5]
uze := UseDef new(identifier)
mainUseDef = uze
uze read(uzeFile, params)
if(uze main) {
// compile as a program
uze apply(params)
modulePaths add(uze main)
} else {
// compile as a library
if (params verbose) {
"Compiling '%s' as a library" printfln(identifier)
}
uze apply(params)
if (!uze sourcePath) {
error("No SourcePath directive in '%s'" format(uzeFile path))
failure(params)
}
params link = false
base := File new(uze sourcePath)
if (!base exists?()) {
error("SourcePath '%s' doesn't exist" format(base path))
failure(params)
}
importz := ArrayList<String> new()
base walk(|f|
if (f file?() && f path toLower() endsWith?(".ooc")) {
importz add(f rebase(base) path[0..-5])
}
true
)
fullName := ""
module := Module new(fullName, uze sourcePath, params, nullToken)
module token = Token new(0, 0, module, 0)
module lastModified = uzeFile lastModified()
module dummy = true
for (importPath in importz) {
imp := Import new(importPath, module token)
module addImport(imp)
}
targetModule = module
}
}
clean: func {
params outPath rm_rf()
}
cleanHardcore: func {
clean()
File new(params libcachePath) rm_rf()
}
parse: func (moduleName: String) -> Int {
(moduleFile, pathElement) := params sourcePath getFile(moduleName)
if(!moduleFile) {
"[ERROR] Could not find main .ooc file: %s" printfln(moduleName)
"[INFO] SourcePath = %s" printfln(params sourcePath toString())
failure(params)
exit(1)
}
modulePath := moduleFile path
fullName := moduleName[0..-5] // strip the ".ooc"
module := Module new(fullName, pathElement path, params, nullToken)
module token = Token new(0, 0, module, 0)
module main = true
module lastModified = moduleFile lastModified()
// phase 1: parse
if (params verbose) {
"Parsing..." println()
}
AstBuilder new(modulePath, module, params)
postParsing(module)
return 0
}
postParsing: func (module: Module) {
first := static true
if(params onlyparse) {
if(params verbose) println()
// Oookay, we're done here.
success(params)
return
}
parseMs := Time measure(||
module parseImports(null)
)
if (params timing) {
"Parsing took %d ms" printfln(parseMs)
}
if(params verbose) {
"Resolving..." println()
}
// phase 2: tinker
allModules := module collectDeps()
resolveMs := Time measure(||
if(!Tinkerer new(params) process(allModules)) {
failure(params)
}
)
if (params timing) {
"Resolving took %d ms" printfln(resolveMs)
}
// phase 2bis: classify imports
for (module in allModules) {
ImportClassifier classify(module)
}
if(params backend == "c") {
// c phase 3: launch the driver
if(params driver != null) {
code := 0
compileMs := Time measure(||
code = params driver compile(module)
)
if(code == 0) {
if (params timing) {
"C generation & compiling took %d ms" printfln(compileMs)
}
success(params)
if(params run) {
// FIXME: that's the driver's job
if(params binaryPath && !params binaryPath empty?()) Process new(["./" + params binaryPath]) execute()
else Process new(["./" + module simpleName]) execute()
}
} else {
failure(params)
}
}
if (mainUseDef && mainUseDef luaBindings) {
// generate lua stuff!
params clean = false // --backend=luaffi implies -noclean
params outPath = File new(mainUseDef file parent, mainUseDef luaBindings)
if (params verbose) {
"Writing lua bindings to #{params outPath path}" println()
}
for(candidate in module collectDeps()) {
LuaGenerator new(params, candidate) write() .close()
}
}
} else if(params backend == "json") {
// json phase 3: generate.
params clean = false // --backend=json implies -noclean
for(candidate in module collectDeps()) {
JSONGenerator new(params, candidate) write() .close()
}
} else if(params backend == "luaffi") {
// generate lua stuff!
params clean = false // --backend=luaffi implies -noclean
for(candidate in module collectDeps()) {
LuaGenerator new(params, candidate) write() .close()
}
}
first = false
}
warn: static func (message: String) {
Terminal setFgColor(Color yellow)
"[WARN ] %s" printfln(message)
Terminal reset()
}
error: static func (message: String) {
Terminal setFgColor(Color red)
"[ERROR] %s" printfln(message)
Terminal reset()
}
warnDeprecated: static func (old, instead: String) {
warn("Option -%s is deprecated, use -%s instead." format(old, instead))
}
warnUseLong: static func (option: String) {
warn("Option -%s is deprecated, use --%s instead." format(option, option))
}
hardDeprecation: static func (parameter: String) {
error("%s parameter is deprecated" format(parameter))
}
success: static func (params: BuildParams) {
if (params shout) {
Terminal setAttr(Attr bright)
Terminal setFgColor(Color green)
"[ OK ]" println()
Terminal reset()
}
}
failure: static func (params: BuildParams) {
if (params shout) {
Terminal setAttr(Attr bright)
Terminal setFgColor(Color red)
"[FAIL]" println()
Terminal reset()
}
// FIXME: should we *ever* exit(1) ?
exit(1)
}
}
CompilationFailedException: class extends Exception {
init: func {
super("Compilation failed!")
}
}
// sdk/lang/Format.ooc (fredrikbryntesson/launchtest, MIT)

/**
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
*/
InvalidFormatException: class extends Exception {
    init: func (msg: CString) { message = "invalid format string! \"" + (msg == null ? "" : msg toString()) + "\"" }
}
InvalidTypeException: class extends Exception {
init: func (T: Class) { message = "invalid type %s passed to generic function!" format(T name) }
}
/* Text Formatting */
TF_ALTERNATE := 1 << 0
TF_ZEROPAD := 1 << 1
TF_LEFT := 1 << 2
TF_SPACE := 1 << 3
TF_EXP_SIGN := 1 << 4
TF_SMALL := 1 << 5
TF_PLUS := 1 << 6
TF_UNSIGNED := 1 << 7
FSInfoStruct: cover {
precision : Int
fieldwidth : Int
flags: SizeT
base : Int
bytesProcessed: SizeT
length : Int
}
__digits: String = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
__digits_small: String = "0123456789abcdefghijklmnopqrstuvwxyz"
argNext: inline func<T> (va: VarArgsIterator*, T: Class) -> T {
if (!va@ hasNext?()) InvalidFormatException new(null) throw()
return va@ next(T)
}
m_printn: func <T> (res: Buffer, info: FSInfoStruct@, arg: T) {
sign: Char = '\0'
tmp: Char[36]
digits := __digits
size := info fieldwidth
i := 0
n: UInt64
signed_n: Int64
if(T size == 4) {
n = arg as UInt32
signed_n = arg as Int32
} else {
n = arg as UInt64
signed_n = arg as Int64
}
/* Preprocess the flags. */
if(info flags & TF_ALTERNATE && info base == 16) {
res append('0')
res append (info flags & TF_SMALL ? 'x' : 'X')
}
if(info flags & TF_SMALL) digits = __digits_small
if(!(info flags & TF_UNSIGNED) && signed_n < 0) {
sign = '-'
n = -signed_n
} else if(info flags & TF_EXP_SIGN) {
sign = '+'
}
if(sign) {
size -= 1
}
/* Find the number in reverse. */
if(n == 0) {
tmp[i] = '0'
i += 1
} else {
while(n != 0) {
tmp[i] = digits[n % info base]
i += 1
n /= info base
}
}
/* Pad the number with zeros or spaces. */
if(!(info flags & TF_LEFT))
while(size > i) {
size -= 1
if(info flags & TF_ZEROPAD) res append('0')
else res append (' ')
}
if(sign) res append(sign)
/* Write any zeros to satisfy the precision. */
while(i < info precision) {
info precision -= 1
res append('0')
}
/* Write the number. */
while(i != 0) {
i -= 1
size -= 1
res append(tmp[i])
}
/* Left align the numbers. */
if(info flags & TF_LEFT) {
while(size > 0) {
size -= 1
res append(' ')
}
}
}
getCharPtrFromStringType: func <T> (s : T) -> Char* {
res : Char* = null
match (T) {
case String => res = s as String ? s as String toCString() : null
case Buffer => res = s as Buffer ? s as Buffer toCString() : null
case CString => res = s as Char*
case Pointer => res = s as Char*
case =>
if(T size == Pointer size) {
res = s as Char*
} else {
InvalidTypeException new(T) throw()
}
}
return res
}
getSizeFromStringType: func<T> (s : T) -> SizeT {
res : SizeT = 0
match (T) {
case String => res = s as String _buffer size
case Buffer => res = s as Buffer size
case CString => res = s as CString length()
case Pointer => res = s as CString length()
case => InvalidTypeException new(T) throw()
}
return res
}
parseArg: func(res: Buffer, info: FSInfoStruct*, va: VarArgsIterator*, p: Char*) {
info@ flags |= TF_UNSIGNED
info@ base = 10
mprintCall := true
/* Find the conversion. */
match(p@) {
case 'i' =>
info@ flags &= ~TF_UNSIGNED
case 'd' =>
info@ flags &= ~TF_UNSIGNED
case 'u' =>
case 'o' =>
info@ base = 8
case 'x' =>
info@ flags |= TF_SMALL
info@ base = 16
case 'X' =>
info@ base = 16
case 'p' =>
info@ flags |= TF_ALTERNATE | TF_SMALL
info@ base = 16
case 'f' =>
// reconstruct the original format statement.
// TODO let this do the real thing.
mprintCall = false
tmp := Buffer new()
tmp append('%')
if(info@ flags & TF_ALTERNATE)
tmp append('#')
else if(info@ flags & TF_ZEROPAD)
tmp append('0')
else if (info@ flags & TF_LEFT)
tmp append('-')
else if (info@ flags & TF_SPACE)
tmp append(' ')
else if (info@ flags & TF_EXP_SIGN)
tmp append('+')
if (info@ fieldwidth != 0)
tmp append(info@ fieldwidth toString())
if (info@ precision >= 0)
tmp append("." + info@ precision toString())
tmp append("f")
T := va@ getNextType()
match T {
case LDouble =>
res append(tmp toString() cformat(argNext(va, LDouble) as LDouble))
case Double =>
res append(tmp toString() cformat(argNext(va, Double) as Double))
case => // assume everything else is Float
res append(tmp toString() cformat(argNext(va, Float) as Float))
}
case 'c' =>
mprintCall = false
i := 0
if(!(info@ flags & TF_LEFT))
while(i < info@ fieldwidth) {
i += 1
res append(' ')
}
res append(argNext(va, Char) as Char)
while(i < info@ fieldwidth) {
i += 1
res append(' ')
}
case 's' =>
mprintCall = false
T := va@ getNextType()
s : T = argNext(va, T)
sval: Char* = getCharPtrFromStringType(s)
if(sval) {
/* Change to -2 so that 0-1 doesn't cause the
* loop to keep going. */
if(info@ precision == -1) info@ precision = -2
while((sval@) && (info@ precision > 0 || info@ precision <= -2)) {
if(info@ precision > 0) {
info@ precision -= 1
}
res append(sval@)
sval += 1
}
} else {
res append("(nil)")
}
case '%' =>
res append('%')
mprintCall = false
case =>
mprintCall = false
}
if(mprintCall) {
T := va@ getNextType()
m_printn(res, info, argNext(va, T))
}
}
getEntityInfo: inline func (info: FSInfoStruct@, va: VarArgsIterator*, start: Char*, end: Pointer) {
/* save original pointer */
p := start
checkedInc := func {
if (p < end) p += 1
else InvalidFormatException new(start) throw()
}
/* Find any flags. */
info flags = 0
while(p as Pointer < end) {
checkedInc()
match(p@) {
case '#' => info flags |= TF_ALTERNATE
case '0' => info flags |= TF_ZEROPAD
case '-' => info flags |= TF_LEFT
case ' ' => info flags |= TF_SPACE
case '+' => info flags |= TF_EXP_SIGN
case => break
}
}
/* Find the field width. */
info fieldwidth = 0
while(p@ digit?()) {
if(info fieldwidth > 0)
info fieldwidth *= 10
info fieldwidth += (p@ as Int - 0x30)
checkedInc()
}
/* Find the precision. */
info precision = -1
if(p@ == '.') {
checkedInc()
info precision = 0
if(p@ == '*') {
T := va@ getNextType()
info precision = argNext(va, T) as Int
checkedInc()
}
while(p@ digit?()) {
if (info precision > 0)
info precision *= 10
info precision += (p@ as Int - 0x30)
checkedInc()
}
}
/* Find the length modifier. */
info length = 0
while (p@ == 'l' || p@ == 'h' || p@ == 'L') {
info length += 1
checkedInc()
}
info bytesProcessed = p as SizeT - start as SizeT
}
format: func ~main <T> (fmt: T, args: ... ) -> T {
if (args count == 0) return fmt
res := Buffer new(512)
va := args iterator()
ptr := getCharPtrFromStringType(fmt)
end : Pointer = (ptr as SizeT + getSizeFromStringType(fmt) as SizeT) as Pointer
while (ptr as Pointer < end) {
match (ptr@) {
case '%' => {
if (va hasNext?()) {
info: FSInfoStruct
getEntityInfo(info&, va&, ptr, end)
ptr += info bytesProcessed
parseArg(res, info&, va&, ptr)
} else {
ptr += 1
if (ptr@ == '%') {
res append(ptr@)
} else {
// missing argument, display format string instead:
res append('%')
res append(ptr@)
}
}
}
case => res append(ptr@)
}
ptr += 1
}
result: T
match (T) {
case String => result = res toString()
case Buffer => result = res
case => result = res toCString()
}
return result
}
extend Buffer {
format: func(args: ...) {
setBuffer(format~main(this, args))
}
}
extend String {
format: func(args: ...) -> This {
format~main(this, args)
}
printf: func(args: ...) {
format~main(this, args) print()
}
printfln: func(args: ...) {
format~main(this, args) println()
}
}
.. docs/source2/generated/statsmodels.tsa.stattools.levinson_durbin_pacf.rst (BSD-3-Clause)

statsmodels.tsa.stattools.levinson\_durbin\_pacf
================================================
.. currentmodule:: statsmodels.tsa.stattools
.. autofunction:: levinson_durbin_pacf
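For orientation, the recursion this function implements can be sketched in plain Python. This is an illustrative Durbin recursion that maps autocorrelations to partial autocorrelations; it is not the statsmodels implementation, and the function name is reused here only for readability:

```python
def levinson_durbin_pacf(rho):
    """Illustrative Durbin recursion; rho[i] is the lag-(i + 1) autocorrelation."""
    pacf, phi_prev = [], []
    for k in range(1, len(rho) + 1):
        if k == 1:
            a = rho[0]
            phi = [a]
        else:
            num = rho[k - 1] - sum(phi_prev[j] * rho[k - 2 - j] for j in range(k - 1))
            den = 1.0 - sum(phi_prev[j] * rho[j] for j in range(k - 1))
            a = num / den
            # update the order-k AR coefficients from the order-(k - 1) ones
            phi = [phi_prev[j] - a * phi_prev[k - 2 - j] for j in range(k - 1)] + [a]
        pacf.append(a)
        phi_prev = phi
    return pacf

# For an AR(1) process with rho_k = 0.5**k, only the lag-1 PACF is nonzero:
print(levinson_durbin_pacf([0.5, 0.25, 0.125]))  # -> [0.5, 0.0, 0.0]
```

The actual statsmodels routine has its own signature, return values, and numerical safeguards; consult its docstring for details.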
.. source/courses/comp317.rst (NicholasSynovic/coursedescriptions, Apache-2.0)

.. header:: COMP 317: Social, Legal, and Ethical Issues in Computing
.. footer:: COMP 317: Social, Legal, and Ethical Issues in Computing
.. index::
Social, Legal, and Ethical Issues in Computing
Social Issues
Legal Issues
Ethical Issues
Issues
COMP 317
########################################################
COMP 317: Social, Legal, and Ethical Issues in Computing
########################################################
******************
Course Information
******************
.. sidebar:: General Information
**Credit Hours**
* 3
**Prerequisites**
* Any :doc:`COMP 300 level <undergraduate-courses>` course
* Any `Ethics <https://www.luc.edu/core/ethicscoursesub-first.shtml>`_ course
About
=====
This course covers social, legal, and ethical issues commonly arising in key areas related to computing technologies.
Description
===========
This course will explore a variety of ethical and legal issues facing those who use or program computers. Issues can be divided broadly into professional ethics, dealing with the ethical responsibilities of the programmer, and social issues, dealing with concerns we all have as citizens.
Outcome
=======
Understanding of laws and issues in areas such as privacy, encryption, freedom of speech, copyrights and patents, computer crime, and computer/software reliability and safety; understanding of philosophical perspectives such as utilitarianism versus deontological ethics and basics of the U.S. legal system.
*******
Syllabi
*******
|see-syllabi|
.. docs/changelog.rst (google-research/reverse-engineering-neural-networks, Apache-2.0)

v0.1.0
------
* **Added**
* Initial publicized release.
* Added documentation using sphinx.
.. docs/nanigent.predict.resources.v1.rst (gramhagen/ml-agent, Apache-2.0)

ml-agent.predict.resources.v1 package
=====================================
Submodules
----------
ml-agent.predict.resources.v1.model_ids_api module
--------------------------------------------------
.. automodule:: ml-agent.predict.resources.v1.model_ids_api
:members:
:undoc-members:
:show-inheritance:
ml-agent.predict.resources.v1.models_api module
-----------------------------------------------
.. automodule:: ml-agent.predict.resources.v1.models_api
:members:
:undoc-members:
:show-inheritance:
ml-agent.predict.resources.v1.predict_api module
------------------------------------------------
.. automodule:: ml-agent.predict.resources.v1.predict_api
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: ml-agent.predict.resources.v1
:members:
:undoc-members:
:show-inheritance:
.. bin/awscli/examples/dynamodb/create-table.rst (iilness2/bash-lambda-layer-custom, MIT)

**To create a table**
This example creates a table named *MusicCollection*.
Command::
aws dynamodb create-table --table-name MusicCollection --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
Output::
{
"TableDescription": {
"AttributeDefinitions": [
{
"AttributeName": "Artist",
"AttributeType": "S"
},
{
"AttributeName": "SongTitle",
"AttributeType": "S"
}
],
"ProvisionedThroughput": {
"NumberOfDecreasesToday": 0,
"WriteCapacityUnits": 5,
"ReadCapacityUnits": 5
},
"TableSizeBytes": 0,
"TableName": "MusicCollection",
"TableStatus": "CREATING",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "Artist"
},
{
"KeyType": "RANGE",
"AttributeName": "SongTitle"
}
],
"ItemCount": 0,
"CreationDateTime": 1421866952.062
}
}
| 30.511111 | 309 | 0.504734 |
8999910065673e6e69e02cf10c05871ff847971f | 254 | rst | reStructuredText | gatech/cs6476/terms.rst | orsenthil/omscs-transcend | d83b8cc5698f8338e4f4513e2dca53f58fc27808 | [
"Apache-2.0"
] | 56 | 2017-08-17T19:12:46.000Z | 2022-01-31T07:26:57.000Z | gatech/cs6476-O01/terms.rst | KpJitendraKumar/coursedocs | 5146caf733a5749331dba43aa97604c634358747 | [
"Apache-2.0"
] | 1 | 2017-03-23T12:40:54.000Z | 2019-03-20T07:06:34.000Z | gatech/cs6476-O01/terms.rst | KpJitendraKumar/coursedocs | 5146caf733a5749331dba43aa97604c634358747 | [
"Apache-2.0"
] | 42 | 2017-03-23T12:37:15.000Z | 2022-02-28T04:17:15.000Z | Terms
=====
.. math::
\min _ { s ^ { x } ,s ^ { y } ,\alpha } \sum _ { i = 1} ^ { k } E \left( \alpha _ { i } \mathbf { s } _ { \mathbf { i } } ^ { \mathbf { X } } + \left( 1- \alpha _ { i } \right) \mathbf { s } _ { \dot { i } } ^ { y } \right)
| 28.222222 | 227 | 0.385827 |
b9fa0a73062667f0cf824aad94c5b2ebab9b95f7 | 313 | rst | reStructuredText | AUTHORS.rst | jmoeckel/pycountry-convert | 094979dcc0463e7ca59e5822a648f6f0eab40787 | [
"MIT"
] | 12 | 2020-05-11T16:42:25.000Z | 2022-01-12T10:54:27.000Z | AUTHORS.rst | jmoeckel/pycountry-convert | 094979dcc0463e7ca59e5822a648f6f0eab40787 | [
"MIT"
] | null | null | null | AUTHORS.rst | jmoeckel/pycountry-convert | 094979dcc0463e7ca59e5822a648f6f0eab40787 | [
"MIT"
] | 19 | 2019-09-05T11:36:25.000Z | 2022-01-31T12:24:40.000Z | `pycountry-convert` is written and maintained by many contributors.
pycountry-convert: Authors
````````````````````````````
We'd like to thank the following people who have contributed to the `pycountry-convert` repository.
- Ian Molee <ian@tune.com>
- Jeff Tanner <jefft@tune.com>
- Ohad Lahav <ohad@tune.com>

.. docs/source/changes.rst (kw-0/MyGrad, MIT)

=========
Changelog
=========
This is a record of all past mygrad releases and what went into them,
in reverse chronological order. All previous releases should still be available
on pip.
.. _v2.0.2:
------------------
2.0.2 - 2021-04-10
------------------
Exposes :func:`~mygrad.execute_op` at the top-level namespace.
.. _v2.0.1:
------------------
2.0.1 - 2021-04-03
------------------
Bug Fixes
---------
- :func:`~mygrad.matmul` and :func:`~mygrad.multi_matmul` were missing from the top-level namespace of ``mygrad``.
- A 0D tensor involved in a broadcasted operation would have a numpy-float set for its gradient instead of a 0D
array.
New Functions
-------------
The following non-differentiable NumPy functions now work on mygrad tensors (and return ndarrays).
Aliases of these are available at the top-level namespace of ``mygrad``
- np.isnan
- np.isfinite
- np.isinf
- np.isnat
- np.signbit
- np.logical_not
- np.logical_and
- np.logical_or
- np.logical_xor
- np.greater
- np.greater_equal
- np.less
- np.less_equal
- np.equal
- np.not_equal
- np.floor_divide
- np.remainder
- np.mod
- np.fmod
- np.divmod
- np.rint
- np.sign
- np.floor
- np.ceil
- np.trunc
- np.isclose
.. _v2.0.0:
------------------
2.0.0 - 2021-03-30
------------------
🎉🎉🎉
This is a compatibility-breaking update to MyGrad, and it's great!
MyGrad 2.0 represents a major overhaul to this project.
This release creates near parity between the experiences of using MyGrad and using NumPy, and uses NumPy's new
mechanisms for overriding functions so that NumPy functions can operate "directly" on MyGrad's tensors, and thus
can be used to construct differentiable computational graphs!
.. code:: python
>>> import numpy as np
>>> from mygrad import tensor
>>> x = tensor([1., 2.])
>>> np.square(x).backward() # backprop through NumPy functions!
>>> x.grad
array([2., 4.])
Another important, but less exciting, feature is that MyGrad now protects users from inadvertently
corrupting the state of a computational graph by, say, mutating a NumPy array that is participating in
the graph.
This is very useful for protecting people – especially students – from unwittingly poisoning the results
of their calculations.
Lastly... no more "nulling" gradients! MyGrad will now handle deleting gradients for you in a way that
is nicely compatible with gradient-based optimization work flows.
New Functions and Utilities
---------------------------
- :func:`~mygrad.tensor`
- :func:`~mygrad.astensor`
- :func:`~mygrad.asarray`
- :func:`~mygrad.no_autodiff`
- :func:`~mygrad.mem_guard_off`
- :func:`~mygrad.mem_guard_on`
- :func:`~mygrad.turn_memory_guarding_off`
- :func:`~mygrad.turn_memory_guarding_on`
- :func:`~mygrad.concatenate`
- :func:`~mygrad.stack`
- :func:`~mygrad.linalg.norm`
Dropping Support for Python 3.6 and Numpy < 1.17
------------------------------------------------
MyGrad now abides by the `NEP 29 <https://numpy.org/neps/nep-0029-deprecation_policy.html>`_ recommendation, and adopts
a common “time window-based” policy for support of Python and NumPy versions.
As such, Python 3.7 and NumPy 1.17 are the minimum versions supported by MyGrad 2.0.
The Interfaces Between ``mygrad.Tensor`` and ``numpy.array`` Match
------------------------------------------------------------------
You can now control the dimensionality of a tensor and whether or not a tensor copies its data upon initialization, via the
:func:`~mygrad.tensor` interface. This mirrors the behavior of :func:`~numpy.array`
+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------+
| Numpy | MyGrad 1.X | MyGrad 2.0 |
+=======================================================+=======================================================+=================================================+
| .. code:: python | .. code:: python | .. code:: python |
| | | |
| >>> np.array([1., 2.], copy=True, ndmin=2) | >>> mg.Tensor([1., 2.], copy=True, ndmin=2) | >>> mg.tensor([1., 2.], copy=True, ndmin=2) |
| array([[1., 2.]]) | <TypeError> | Tensor([[1., 2.]]) |
+-------------------------------------------------------+-------------------------------------------------------+-------------------------------------------------+
Support for dtype, where, and out in ufuncs
-------------------------------------------
MyGrad now implements ufuncs with support for specifying dtype, boolean masks, and in-place targets. The
additional methods, such as ``mygrad.add.reduce``, are not yet implemented.
+---------------------------------------------------------------+
| MyGrad 2.0 |
+===============================================================+
| .. code:: python |
| |
| >>> mg.add([1, 2],[0, 2], where=[True, False], dtype=float)|
| Tensor([3., 1.]) |
+---------------------------------------------------------------+
Augmented Updates on Tensors Now Match NumPy's Behavior
-------------------------------------------------------
Previously, augmented assignment expressions, such as ``tensor *= 2``, behaved merely
as a shorthand for the simple assignment ``tensor = tensor * 2``.
This is in stark contrast to the behavior of an augmented assignment on a NumPy array, which
`mutates the array in-place <https://www.pythonlikeyoumeanit.com/Module3_IntroducingNumpy/BasicIndexing.html#Augmented-Assignments>`_.
This meant that there was a major discrepancy between how these expressions behaved across MyGrad and
NumPy.
This has changed in MyGrad 2.0: all augmented assignment expressions operate in-place on tensors and
mutate their underlying data.
+-----------------------------------+-----------------------------------+-----------------------------------+
| Numpy | MyGrad 1.X | MyGrad 2.0 |
+===================================+===================================+===================================+
| .. code:: python | .. code:: python | .. code:: python |
| | | |
| >>> x = np.array([1., 2.]) | >>> x = mg.Tensor([1., 2.]) | >>> x = mg.tensor([1., 2.]) |
| >>> y = x | >>> y = x | >>> y = x |
| >>> x *= 2 | >>> x *= 2 # x = 2 * x | >>> x *= 2 |
| >>> x is y | >>> x is y # doesn't match! | >>> x is y # matches! |
| True | False | True |
+-----------------------------------+-----------------------------------+-----------------------------------+
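The NumPy behavior that MyGrad 2.0 now matches is in-place mutation through the augmented operator, which is visible through any alias of the array:

```python
import numpy as np

x = np.array([1., 2.])
y = x            # y aliases the same memory as x
x *= 2           # in-place: mutates the buffer, preserves object identity
assert x is y
print(y)         # -> [2. 4.]  the alias sees the update
```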
Creating and Augmenting Views of Tensors
----------------------------------------
MyGrad now provides rich support for creating and manipulating views of tensors.
All `basic indexing <https://www.pythonlikeyoumeanit.com/Module3_IntroducingNumpy/BasicIndexing.html#>`_ operations
performed on a tensor will produce a view of said tensor.
This means that a view and the tensor that it was derived from share memory.
(While MyGrad 1.X created a view of the underlying NumPy array under the hood for basic indexing, its notion
of supporting views went no further than that.)
As with NumPy arrays, the "parent" of a view can be accessed through the tensor's ``.base``
attribute.
+-----------------------------------+-------------------------------------+-----------------------------------+
| Numpy | MyGrad 1.X | MyGrad 2.0 |
+===================================+=====================================+===================================+
| .. code:: python | .. code:: python | .. code:: python |
| | | |
| >>> x = np.array([1., 2., 3.]) | >>> x = mg.Tensor([1., 2., 3.]) | >>> x = mg.tensor([1., 2., 3.])|
| >>> y = x[:2] | >>> y = x[:2] | >>> y = x[:2] |
| >>> np.shares_memory(x, y) | >>> np.shares_memory(x, y) | >>> np.shares_memory(x, y) |
| True | True | True |
| >>> y.base is x | >>> y.base is x # doesn't match!| >>> y.base is x # matches! |
| True | <AttributeError> | True |
+-----------------------------------+-------------------------------------+-----------------------------------+
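One detail of ``.base`` worth knowing (inherited from NumPy): the base of a view-of-a-view refers all the way back to the array that owns the memory, not to the intermediate view. In NumPy terms:

```python
import numpy as np

x = np.arange(6.0)
y = x[:4]            # view of x
z = y[1:]            # view of a view
assert y.base is x
assert z.base is x   # the base chain collapses to the memory-owning array
assert np.shares_memory(x, z)
```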
Mutating shared data will propagate through views:
+-----------------------------------+-------------------------------------+------------------------------------+
| Numpy | MyGrad 1.X | MyGrad 2.0 |
+===================================+=====================================+====================================+
| .. code:: python | .. code:: python | .. code:: python |
| | | |
| >>> y *= -1 | >>> y *= -1 | >>> y *= -1 |
| >>> y | >>> y | >>> y |
| array([-1., -2.]) | Tensor([-1., -2.]) | Tensor([-1., -2.]) |
| >>> x | >>> x # doesn't match! | >>> x # matches! |
| array([-1., -2., 3.]) | Tensor([1., 2., 3.]) | Tensor([-1., -2., 3.]) |
+-----------------------------------+-------------------------------------+------------------------------------+
Furthermore, views of tensors now propagate corresponding gradient information as well!
This means that if ``y`` is a view of ``x``, then ``y.grad`` will be a corresponding view of ``x.grad``.
This is true for all varieties of views, views of views, etc., of ``x``.
.. code-block:: python
# Because `y` is a view of `x`, `y.grad` will be
# a corresponding view of `x.grad`
>>> (x ** 2).backward()
>>> x.grad
    array([-2., -4., 6.])
>>> y.grad
array([-2., -4.])
>>> y.grad.base is x.grad
True
This rich support for views, augmented assignments, and in-place updates on tensors enables much more sophisticated
operations on tensors now.
For example, let's make a shape-(3, 3) tensor and perform and operations involving views of its diagonal and
its anti-diagonal. (Note that :func:`~mygrad.einsum` is capable of returning a view of a tensor's diagonal,
and that MyGrad fully supports backpropagation through all flavors of einsum!)
.. code-block:: python
>>> x = mg.tensor([[0., 1., 2.],
... [3., 4., 5.],
... [6., 7., 8.]])
# view of diagonal of `x`
>>> diag = mg.einsum("ii->i", x)
>>> diag
Tensor([0., 4., 8.])
# view of anti-diagonal of `x`
>>> anti_diag = mg.einsum("ii->i", x[:, ::-1])
>>> anti_diag
Tensor([2., 4., 6.])
# Compute derivatives of their summed difference
>>> (diag - anti_diag).sum().backward()
>>> x.grad
array([[ 1., 0., -1.],
[ 0., 0., 0.],
[-1., 0., 1.]])
# The views of `x` have the appropriate corresponding
# views of `x.grad`
>>> diag.grad
array([1., 0., 1.])
>>> anti_diag.grad
array([-1., 0., -1.])
Bye-Bye Null Gradients!
-----------------------
Gone are the days of having to manually clear your tensors' gradients and the computational graph that they were
in; now MyGrad does it for you!
This means that ``Tensor.null_gradients()`` no longer does anything other than emit a deprecation warning.
In an upcoming minor release this method will be removed entirely.
In MyGrad 2.0, calling :func:`~mygrad.Tensor.backward` will finish its computation by clearing the computational graph that was involved
in the backpropagation.
Thus any internally-referenced tensors associated with that computational graph become free for garbage collection.
This is very nice behavior to help prevent students from filling up their RAM unwittingly.
And instead of worrying about nulling gradients manually, a tensor will automatically have its gradient cleared any time that it is
involved in a new mathematical operation.
This enables the following common workflow for performing gradient-based optimization:
+-------------------------------------+-------------------------------------+
| MyGrad 1.X | MyGrad 2.0 |
+=====================================+=====================================+
| .. code:: python | .. code:: python |
| | |
| >>> x = mg.Tensor([1., 2.]) | >>> x = mg.tensor([1., 2.]) |
| >>> for _ in range(10): | >>> for _ in range(10): |
| ... y = 3 * x | ... y = 3 * x # nulls grad |
| ... assert x.grad is None | ... assert x.grad is None |
| ... y.backward() | ... y.backward() |
| ... assert all(x.grad == 3.) | ... assert all(x.grad == 3.) |
| ... y.null_gradients() | |
+-------------------------------------+-------------------------------------+
.. code-block:: python
for _ in range(num_optimization_steps):
# using `model_params` in a function will automatically
# set its gradients to `None`
loss = compute_loss(data, model_params) # gradients cleared
loss.backward() # compute gradients
optimize(model_params) # do stuff with gradients
You can also call :func:`~mygrad.Tensor.null_grad` to manually clear an individual tensor's gradient.
Safety First: Memory Guarding Behavior in MyGrad 2.0
----------------------------------------------------
In MyGrad 1.X it was all too easy to unwittingly corrupt the state of a computational graph by mutating
a NumPy array mid-computation.
This could lead to incorrect calculations of gradients! This is the stuff of horrifying nightmares.
Now MyGrad tracks all of the arrays that are involved in active computational graphs and locks their memory
so that they are read-only (except for when the user mutates the array explicitly with a MyGrad operation).
This means that the sort of mutation that could have lurked silently in the dimly-lit alleyways of bugs-ville will
now get loudly narc'd on by MyGrad's merciless memory guard!
+---------------------------------------------+---------------------------------------+
| MyGrad 1.X | MyGrad 2.0 |
+=============================================+=======================================+
| .. code:: python | .. code:: python |
| | |
| >>> arr = np.array([1., 2.]) | >>> arr = np.array([1., 2.]) |
| >>> tn = mg.Tensor([1., 1.])                | >>> tn = mg.tensor([1., 1.])          |
| >>> z = arr * tn                            | >>> z = arr * tn                      |
| # mutating arr will corrupt                 | # mutating arr will corrupt           |
| # backprop through z...                     | # backprop through z...               |
| >>> arr[:] = 0.                             | >>> arr[:] = 0. # you shall not pass! |
| | ValueError: read-only! |
| >>> z.backward() # uh oh... | >>> z.backward() |
| >>> tn.grad # should be: (1., 2.) | >>> tn.grad |
| array([0., 0.]) | array([1., 2.]) |
+---------------------------------------------+---------------------------------------+
Any tensor or array that is no longer participating in an active computational graph will automatically
have its write-ability restored to its original state.
.. code-block:: python
# memory guarding is released once an array is no
# longer involved in an active computational graph
>>> import mygrad as mg
>>> import numpy as np
>>> x = np.array([1., 2.])
>>> y = mg.ones_like(x)
>>> z = x * y # x and y are locked
>>> z.backward() # graph cleared; x and y are "released"
>>> x[:] = 0 # can write to x
>>> x
array([0., 0.])
# This result is not referenced, thus
# x and y are immediately released by the
# memory-guard; no graph-clearing is needed
>>> x * y
Tensor([0., 0.])
>>> x[:] = 1.
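The locking itself amounts to toggling NumPy's write flag. Here is a toy illustration of the mechanism only — it is not MyGrad's actual implementation:

```python
import numpy as np

x = np.array([1., 2.])

x.flags.writeable = False  # "lock" the array, as the memory guard does
try:
    x[:] = 0.
except ValueError:
    print("write blocked while locked")

x.flags.writeable = True   # "release" it once the graph is cleared
x[:] = 0.
print(x)  # [0. 0.]
```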
But with great responsibility comes great ...uhh... slowness? This memory-guarding feature can lead to slowdowns
of **up to 50% for computations involving many small tensors**
(It used to be **a lot** worse... like 5x worse. I worked really hard to speed it up! I promise!).
That being said, computations involving beefy tensors (e.g. standard neural networks) will not be significantly
affected by the overhead associated with the memory guard.
Please refer to :ref:`performance-tips` for responsible ways to disable this memory-guarding mechanism.
Speaking of optimizations...
Disabling Automatic Differentiation
-----------------------------------
Sometimes you want to use your MyGrad code to do calculations, but you don't actually need to compute
any derivatives.
A common example of this is evaluating the test-time performance of a machine learning model that you are
in the process of optimizing – you don't actually need to perform backpropagation when you are processing
the test data.
In these circumstances, you can greatly reduce the overhead cost associated with building a computational
graph by using the :func:`~mygrad.no_autodiff` decorator / context manager. See the linked documentation
for extensive examples of its usage.
.. code-block:: python
# demonstrating mygrad in no-autodiff mode
>>> import mygrad as mg
>>> x = mg.Tensor([1., 2., 3., 4.])
>>> with mg.no_autodiff:
... y = x ** 2 # operation not tracked
>>> y.backward()
>>> y.grad, x.grad # x is not "connected" to y
    (array([1., 1., 1., 1.]), None)
For computations involving many small tensors, this can produce **up to a 3x speedup**! So make sure you
make keen use of this when you don't actually need to perform autodiff.
Revamping Constant Semantics to be Explicit
-------------------------------------------
Previously, specifying ``constant=False`` in a mygrad function did not actually mean
that the function would necessarily produce a non-constant tensor. Rather, it simply
meant that the output would not be *forced* to be a constant – whether or not the result
was a constant depended on the inputs (i.e. a function whose inputs were all constants
would thus produce a constant).
This was a very bad design decision! Now, specifying ``constant=False`` guarantees that
the output of a function is a non-constant (meaning that it facilitates backpropagation
through a computational graph).
That being said, we usually *do* want constant information to propagate through functions.
Thus ``constant=None`` is now the default value – its behavior matches that of ``constant=False``
from MyGrad 1.X – for all functions that accept the argument.
It is also now standard to require that this argument be a keyword-only argument.
+---------------------------------------------+----------------------------------------------+
| MyGrad 1.X | MyGrad 2.0 |
+=============================================+==============================================+
| .. code:: python | .. code:: python |
| | |
| >>> t1 = mg.tensor(1., constant=True) | >>> t1 = mg.tensor(1., constant=True) |
| >>> t2 = mg.tensor(1., constant=True) | >>> t2 = mg.tensor(1., constant=True) |
| | |
| >>> out = mg.add(t1, t2, constant=False) | >>> out = mg.add(t1, t2, constant=False) |
| >>> out.constant | >>> out.constant |
| True | False |
| | |
| | # constant = None |
| | >>> out = mg.add(t1, t2) |
| | >>> out.constant |
| | True |
+---------------------------------------------+----------------------------------------------+
Remove Scalar-Only Conditions on Backpropagation
------------------------------------------------
Previously, one could invoke backpropagation from a non-scalar tensor only if that tensor was
the culmination of operations that preserved a one-to-one mapping between the elements of an upstream
tensor and those of its downstream neighbor. Otherwise an error was raised. This ensured that ``tensor.grad``
would always be the same shape as ``tensor``, and not represent a higher-dimensional tensor.
Now calling ``tensor.backward()`` from a non-scalar tensor will behave as if the tensor had been summed prior
to invoking backpropagation. This is simple, easy-to-understand behavior, which ensures that ``tensor.grad``
can always be interpreted as an array of scalar-valued derivatives.
+---------------------------------------------+---------------------------------------+
| MyGrad 1.X | MyGrad 2.0 |
+=============================================+=======================================+
| .. code:: python | .. code:: python |
| | |
| >>> t1 = mg.Tensor([[1., 2.], | >>> t1 = mg.tensor([[1., 2.], |
| ... [0., -1]]) | ... [0., -1]]) |
| >>> t2 = mg.Tensor([[0., 1.], | >>> t2 = mg.tensor([[0., 1.], |
| ... [3., -1]]) | ... [3., -1]]) |
| >>> z = t1 @ t2 | >>> z = t1 @ t2 |
| >>> z.backward() | >>> z.backward() |
| <InvalidBackprop: Scalar-only> | >>> t1.grad |
| | array([[1., 2.], |
| | [1., 2.]]) |
+---------------------------------------------+---------------------------------------+
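The gradient shown above can be verified by hand with plain NumPy: ``z.backward()`` on the non-scalar ``z`` acts like ``z.sum().backward()``, and for a matrix product the derivative of ``sum(t1 @ t2)`` with respect to ``t1`` is ``ones_like(z) @ t2.T``. (NumPy-only sketch; no MyGrad required.)

```python
import numpy as np

t1 = np.array([[1., 2.],
               [0., -1.]])
t2 = np.array([[0., 1.],
               [3., -1.]])

z = t1 @ t2

# d(sum(z)) / d(t1) = ones_like(z) @ t2.T
grad_t1 = np.ones_like(z) @ t2.T
print(grad_t1)
# [[1. 2.]
#  [1. 2.]]
```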
Integer-valued Tensors Are Treated as Constants
-----------------------------------------------
Derivatives involving integer-valued tensors are typically ill-defined, and in MyGrad 1.X they
were generally just wrong. Now integer-valued tensors can only be involved in computational
graphs as constants.
+---------------------------------------------+-------------------------------------------------+
| MyGrad 1.X | MyGrad 2.0 |
+=============================================+=================================================+
| .. code:: python | .. code:: python |
| | |
| >>> t1 = mg.Tensor([1, 2]).constant         | >>> t1 = mg.tensor([1, 2]).constant             |
| False | True |
+---------------------------------------------+-------------------------------------------------+
Is This Code Well-Tested?
-------------------------
Yes! I consider MyGrad's test suite to be the most important part of the library. It is
the only reason why I feel comfortable releasing this code for students, teachers, and others to use.
I leverage thorough `property-based testing <https://increment.com/testing/in-praise-of-property-based-testing/>`_ using the `Hypothesis library <https://hypothesis.readthedocs.io/en/latest/>`_
to exercise this code as rigorously as I can manage. These tests `even found bugs in NumPy <https://github.com/numpy/numpy/issues/10930>`_!
Special Thanks
--------------
Special thanks to Alex Silverstein, Zac Dodds, and Petar Griggs for all of the fruitful discussions, ideas, and influence that you provided
throughout this major update.
.. _v1.9.0:
------------------
1.9.0 - 2020-08-28
------------------
The most significant aspect of this release is the implementation of ``Tensor.__array__``, which enables a huge amount
of cross-compatibility with numpy utilities (`#288 <https://github.com/rsokl/MyGrad/pull/288>`_). Note that any numpy
function that previously produced an array of tensor-scalars will now likely produce a standard numpy array instead.
Improvements:
- ``x**1`` and ``x**2`` are now special-cased in order to make these common operations more efficient (`#266 <https://github.com/rsokl/MyGrad/pull/266>`_)
- The derivative of :func:`~mygrad.nnet.losses.focal_loss` was refactored to handle special edge-cases and the tests for focal loss were improved to exercise these edge cases (`#269 <https://github.com/rsokl/MyGrad/pull/269>`_)
- Various improvements to the tests (`#271 <https://github.com/rsokl/MyGrad/pull/271>`_, `#277 <https://github.com/rsokl/MyGrad/pull/277>`_, `#290 <https://github.com/rsokl/MyGrad/pull/290>`_, `#284 <https://github.com/rsokl/MyGrad/pull/284>`_, `#289 <https://github.com/rsokl/MyGrad/pull/289>`_, `#282 <https://github.com/rsokl/MyGrad/pull/282>`_, `#292 <https://github.com/rsokl/MyGrad/pull/292>`_, `#293 <https://github.com/rsokl/MyGrad/pull/293>`_)
- The internal mechanism for tracking tensors in a computational graph now depends on hashing tensor-IDs instead of hashing tensors directly. Tensors were only hashable because their equality special methods were being monkey-patched (`#276 <https://github.com/rsokl/MyGrad/pull/276>`_)
- :func:`~mygrad.nnet.activations.softmax` and :func:`~mygrad.nnet.activations.logsoftmax` both expose ``axis`` arguments (`#268 <https://github.com/rsokl/MyGrad/pull/268>`_)
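The ``axis`` arguments follow the usual NumPy reduction convention. For reference, a NumPy-only sketch of a softmax taken along an axis — an illustration of the semantics, not MyGrad's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # shift by the max along `axis` for numerical stability
    shifted = x - x.max(axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=axis, keepdims=True)

scores = np.array([[1., 2., 3.],
                   [1., 1., 1.]])
out = softmax(scores, axis=1)
print(out.sum(axis=1))  # each row sums to 1
```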
Bug fixes:
- `0D tensors could not be indexed into <https://github.com/rsokl/MyGrad/issues/272>`_ – e.g. to insert a newaxis (`#273 <https://github.com/rsokl/MyGrad/pull/273>`_)
- There was a potential numerical instability in :func:`mygrad.nnet.layers.batchnorm` (`#285 <https://github.com/rsokl/MyGrad/pull/285>`_)
- The ``dtype`` argument in ``Tensor.__init__`` was ignored when the array-like argument, x, was another Tensor-instance (`#294 <https://github.com/rsokl/MyGrad/pull/294>`_)
New features:
- ``Tensor.__array__`` now exposes the tensor's underlying numpy array – this enables a huge amount of cross-compatibility with numpy utilities (`#288 <https://github.com/rsokl/MyGrad/pull/288>`_)
- Adds :func:`~mygrad.asarray` (`#279 <https://github.com/rsokl/MyGrad/pull/279>`_)
- Adds :func:`~mygrad.astensor` (`#294 <https://github.com/rsokl/MyGrad/pull/294>`_)
.. _v1.8.1:
------------------
1.8.1 - 2020-07-28
------------------
This is an `internal change <https://github.com/rsokl/MyGrad/pull/265>`_ to the backprop
mechanism for ``Tensor.__getitem__``, which produces considerable speedups (2x-4x) for backprop
through basic indexing and boolean indexing. Thanks to Petar Griggs for finding this.
.. _v1.8.0:
------------------
1.8.0 - 2020-07-25
------------------
New features:
- Adds :func:`~mygrad.any` and :func:`~mygrad.Tensor.any`
- Adds :func:`~mygrad.random.rand`
- Adds :func:`~mygrad.random.randint`
- Adds :func:`~mygrad.random.randn`
- Adds :func:`~mygrad.random.random`
- Adds :func:`~mygrad.random.random_integers`
- Adds :func:`~mygrad.random.random_sample`
- Adds :func:`~mygrad.random.ranf`
- Adds :func:`~mygrad.random.sample`
- Adds :func:`~mygrad.random.seed`
Thanks to Darshan Krishnaswamy and Sam Carpenter for adding this functionality!
Fixes a bug in the GRU layer where mixed floating point precision dtypes between data and weights raised an error.
Thanks to Petar Griggs for the fix!
.. _v1.7.1:
------------------
1.7.1 - 2020-07-11
------------------
Fixes a bug in :func:`~mygrad.nnet.losses.negative_log_likelihood`, where setting ``constant=True`` had no effect.
.. _v1.7.0:
------------------
1.7.0 - 2020-07-11
------------------
This release continues the process of integrating functions from `mynn <https://github.com/davidmascharka/MyNN>`_.
New features:
- Adds :func:`~mygrad.nnet.initializers.glorot_normal`
- Adds :func:`~mygrad.nnet.initializers.glorot_uniform`
- Adds :func:`~mygrad.nnet.initializers.he_normal`
- Adds :func:`~mygrad.nnet.initializers.he_uniform`
- Adds :func:`~mygrad.nnet.initializers.normal`
- Adds :func:`~mygrad.nnet.initializers.uniform`
- Adds :func:`~mygrad.nnet.losses.focal_loss`
- Adds :func:`~mygrad.nnet.losses.negative_log_likelihood`
Big thanks to David Mascharka!
Improvements:
The interfaces to :func:`~mygrad.reshape` and :func:`~mygrad.Tensor.reshape` were adjusted to match exactly the interfaces to their NumPy counterparts.
I.e. :func:`~mygrad.reshape` now requires ``newshape`` to be a sequence, whereas :func:`~mygrad.Tensor.reshape` can accept an unpacked sequence for its
``newshape``.
:func:`~mygrad.Tensor.shape` is now settable - triggering an in-place reshape of a tensor, matching the corresponding behavior in NumPy.
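This matches NumPy's longstanding behavior, which can be seen with a plain ``ndarray`` (NumPy-only sketch):

```python
import numpy as np

x = np.arange(6.)
x.shape = (2, 3)   # in-place reshape via shape assignment; no copy is made
print(x)
# [[0. 1. 2.]
#  [3. 4. 5.]]
```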
Internal changes:
The logic for writing an in-place operation has been consolidated into a convenient wrapper: :func:`~mygrad.Tensor._in_place_op`.
.. _v1.6.0:
------------------
1.6.0 - 2020-06-21
------------------
New features:
- Adds :func:`~mygrad.nnet.activations.elu`
- Adds :func:`~mygrad.nnet.activations.glu`
- Adds :func:`~mygrad.nnet.activations.leaky_relu`
- Adds :func:`~mygrad.nnet.activations.selu`
- Adds :func:`~mygrad.nnet.activations.soft_sign`
Big thanks to David Mascharka!
.. _v1.5.0:
-------------------
1.5.0 - 2020-02-16
-------------------
New features:
- Adds :func:`~mygrad.Tensor.astype` method.
- Adds :func:`~mygrad.nnet.activations.hard_tanh`
- ``y_true`` can now be passed as a ``Tensor`` to :func:`~mygrad.nnet.losses.softmax_crossentropy`
This update also includes various improvements to the library's test suite.
.. _v1.4.1:
-------------------
1.4.1 - 2020-01-09
-------------------
This release performs an internal refactor in the ``nnet`` module of the library, as well as
an analogous refactor in the test suite. This also fixes a docstring in the ``multiclass_hinge``
loss to properly show a description in the readthedocs page.
.. _v1.4.0:
-------------------
1.4.0 - 2019-12-19
-------------------
This release adds the :func:`~mygrad.repeat` operation. It also includes some minor
improvements to mygrad's test suite.
.. _v1.3.0:
-------------------
1.3.0 - 2019-11-30
-------------------
This release adds :func:`~mygrad.clip` and :func:`~mygrad.where`.
It also includes a major fix to the graph-traversal mechanism for null-gradients and clear-graph,
eliminating an exponentially-scaling runtime.
``+x`` will now invoke ``mygrad.positive``, mirroring the numpy behavior.
There are improvements to user-facing error messages and input validation in addition to major
improvements to mygrad's test suite. There is now a 100% line-coverage gate in mygrad's CI system.
.. _v1.2.0:
-------------------
1.2.0 - 2019-08-03
-------------------
We're finally keeping a formal changelog!
This release makes substantial improvements to MyGrad's error checking and handling, making it much simpler to debug buggy custom operations. Specifically, :func:`~mygrad.operation_base.Operation.backward` now checks for invalid gradients on each call of :func:`~mygrad.operation_base.Operation.backward_var`, and raises a descriptive error message.
``mygrad.errors`` was introduced to provide descriptive, MyGrad-specific exceptions. For example, we no longer raise bare exceptions for scenarios like invalid backprop through a scalar-only graph; rather, we now raise a descriptive ``InvalidBackprop`` exception.
MyGrad's testing framework received wide-ranging improvements, yielding complete test coverage and fewer flaky tests. Coverage checks were added to the project's CI process.
:func:`~mygrad.maximum` and :func:`~mygrad.minimum` were patched to permit backpropagation through scalar inputs.
Internal implementation details of :func:`~mygrad.einsum` were adjusted to remove redundant code in its backpropagation machinery.
:func:`~mygrad.Tensor.null_gradients` was refactored to ensure that only a single traversal of the computational graph is performed to null all of the tensors' gradients. Furthermore, ``Tensor.null_gradients(clear_graph=True)`` now only performs a single graph traversal, instead of two.
In keeping with NumPy's behavior, performing ``+x`` (where ``x`` is a mygrad-tensor) no longer returns a reference to ``x``, but instead returns ``mygrad.positive(x)``.
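NumPy behaves the same way, which is easy to check (NumPy-only sketch):

```python
import numpy as np

x = np.array([1., 2.])
y = +x  # dispatches to np.positive, which returns a new array
print(y is x)  # False: `y` is a fresh array, not a reference to `x`
```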
Backpropagation through :func:`~mygrad.max` and :func:`~mygrad.min` now works for 0D tensors.
Input validation was added to :func:`mygrad.nnet.layers.utils.sliding_window_view`.
Fixed backpropagation through basic indexing, ``x[ind] = b``, in which broadcasting occurred and ``b`` possessed "excess" leading singleton dimensions.
mars.dataframe.groupby.GroupBy.transform
========================================
.. currentmodule:: mars.dataframe.groupby
.. automethod:: GroupBy.transform
Raspberry PI MAX7219 driver
===========================
Interfacing LED matrix displays with the MAX7219 driver
`[PDF datasheet] <https://raw.github.com/rm-hull/max7219/master/docs/MAX7219-datasheet.pdf>`_
in Python (both 2.7 and 3.x are supported) using hardware SPI on the Raspberry Pi. A LED matrix can be acquired for a few pounds from outlets like
`Banggood <http://www.banggood.com/MAX7219-Dot-Matrix-Module-DIY-Kit-SCM-Control-Module-For-Arduino-p-72178.html?currency=GBP>`_.
Likewise 7-segment displays are available from `Ali-Express <http://www.aliexpress.com/item/MAX7219-Red-Module-8-Digit-7-Segment-Digital-LED-Display-Tube-For-Arduino-MCU/1449630475.html>`_ or `Ebay <http://www.ebay.com/itm/-/172317726225>`_.
This library supports:
* multiple cascaded devices
* LED matrix and seven-segment variants
.. image:: https://raw.githubusercontent.com/rm-hull/max7219/master/docs/images/devices.jpg
:alt: max7219 matrix
Python Usage
------------
For the matrix device, initialize the ``matrix`` class:
.. code:: python
import max7219.led as led
device = led.matrix()
device.show_message("Hello world!")
For the 7-segment device, initialize the ``sevensegment`` class:
.. code:: python
import max7219.led as led
device = led.sevensegment()
device.write_number(deviceId=0, value=3.14159)
The MAX7219 chipset supports a serial 16-bit register/data buffer which is
clocked in on pin DIN every time the clock edge falls, and clocked out on DOUT
16.5 clock cycles later. This allows multiple devices to be chained together.
When initializing cascaded devices, it is necessary to specify a ``cascaded=...``
parameter, and generally methods which target specific devices will expect a
``deviceId=...`` parameter, counting from zero.
For more information, see https://max7219.readthedocs.io/
.. image:: https://raw.githubusercontent.com/rm-hull/max7219/master/docs/images/IMG_2810.JPG
:alt: max7219 sevensegment
Pre-requisites
--------------
By default, the SPI kernel driver is **NOT** enabled on the Raspberry Pi Raspian image.
You can confirm whether it is enabled using the shell commands below::
$ lsmod | grep -i spi
spi_bcm2835 7424 0
(Depending on the kernel version, this may report **spi_bcm2708** rather than **spi_bcm2835** -
either should be adequate.)
And that the devices are successfully installed in ``/dev``::
$ ls -l /dev/spi*
crw------- 1 root root 153, 0 Jan 1 1970 /dev/spidev0.0
crw------- 1 root root 153, 1 Jan 1 1970 /dev/spidev0.1
If you have no ``/dev/spi`` files and nothing is showing using ``lsmod`` then this
implies the kernel SPI driver is not loaded. Enable the SPI as follows (steps
taken from https://learn.sparkfun.com/tutorials/raspberry-pi-spi-and-i2c-tutorial#spi-on-pi):
#. Run ``sudo raspi-config``
#. Use the down arrow to select ``9 Advanced Options``
#. Arrow down to ``A6 SPI.``
#. Select **yes** when it asks you to enable SPI,
#. Also select **yes** when it asks about automatically loading the kernel module.
#. Use the right arrow to select the **<Finish>** button.
#. Select **yes** when it asks to reboot.
.. image:: https://cloud.githubusercontent.com/assets/1915543/16681787/b615b20c-44ee-11e6-9533-b0dce2b007b1.png
After rebooting re-check that the ``lsmod | grep -i spi`` command shows whether
SPI driver is loaded before proceeding. If you are stil experiencing problems, refer to the official
Raspberry Pi `SPI troubleshooting guide <https://www.raspberrypi.org/documentation/hardware/raspberrypi/spi/README.md#troubleshooting>`_ for further details, or ask a `new question <https://github.com/rm-hull/max7219/issues/new>`_ - but please remember to add as much detail as possible.
GPIO pin-outs
-------------
The breakout board has two headers to allow daisy-chaining:
============ ====== ============= ========= ====================
Board Pin Name Remarks RPi Pin RPi Function
------------ ------ ------------- --------- --------------------
1 VCC +5V Power 2 5V0
2 GND Ground 6 GND
3 DIN Data In 19 GPIO 10 (MOSI)
4 CS Chip Select 24 GPIO 8 (SPI CE0)
5 CLK Clock 23 GPIO 11 (SPI CLK)
============ ====== ============= ========= ====================
**Note**: See below for cascading/daisy-chaining, power supply and level-shifting.
Installing the library
----------------------
**Note**: The library has been tested against Python 2.7 and 3.4. For **Python3** installation, substitute ``pip`` ⇒ ``pip3``, ``python`` ⇒ ``python3``, ``python-dev`` ⇒ ``python3-dev``, and ``python-pip`` ⇒ ``python3-pip`` in the instructions below.
Install the latest version of the library directly from `PyPI <https://pypi.python.org/pypi?:action=display&name=max7219>`_::
$ sudo apt-get install python-dev python-pip
$ sudo pip install max7219
Alternatively, clone the code from github::
$ git clone https://github.com/rm-hull/max7219.git
$ cd max7219
$ sudo pip install -e .
Next, follow the specific steps below for your OS.
Raspbian
^^^^^^^^
.. code:: bash
$ cd max7219
$ sudo apt-get install python-dev python-pip
$ sudo pip install spidev
$ sudo python setup.py install
Arch Linux
^^^^^^^^^^
.. code:: bash
cd max7219
pacman -Sy base-devel python2
pip install spidev
python2 setup.py install
Cascading, power supply & level shifting
----------------------------------------
The MAX7219 chip supports cascading devices by connecting the DIN of one chip to the DOUT
of another chip. For a long time I was puzzled as to why this didnt seem to work properly
for me, despite spending a lot of time investigating and always assuming it was a bug in
code.
- Because the Raspberry PI can only supply a limited amount of power from the 5V rail,
it is recommended that any LED matrices are powered separately by a 5V supply, and grounded
with the Raspberry PI. It is possible to power one or two LED matrices directly from a
Raspberry PI, but any more is likely to cause intermittent faults & crashes.
- Also because the GPIO ports used for SPI are 3.3V, a simple level shifter (as per the diagram
below) should be employed on the DIN, CS and CLK inputs to boost the levels to 5V. Again it
is possible to drive them directly by the 3.3V GPIO pins, it is just outside tolerance, and
will result in intermittent issues.
.. image:: https://raw.githubusercontent.com/rm-hull/max7219/master/docs/images/level-shifter.jpg
:alt: max7219 levelshifter
Despite the above two points, I still had no success getting cascaded matrices
to work properly. Revisiting the wiring, I found I had connected the devices in series,
with the out pins of one device feeding the in pins of another. This just
produced garbled images.
Connecting the CLK lines on the input side all together worked first time. I
can only assume that there is some noise on the clock line, or a dry solder
joint somewhere.
.. image:: https://raw.githubusercontent.com/rm-hull/max7219/master/docs/images/matrix_cascaded.jpg
:alt: max7219 cascaded
If you have more than one device and they are daisy-chained together, you can initialize the
library with:
.. code:: python
import max7219.led as led
device = led.matrix(cascaded = 3)
device.show_message("Hello world!")
To address a specific device, most other methods expect a ``deviceId=N`` parameter
(where N=0..cascaded-1).
Examples
--------
Run the example code as follows::
$ sudo python examples/matrix_test.py
or::
$ sudo python examples/sevensegment_test.py
**Note**: By default, SPI is only accessible by root (hence using ``sudo`` above). Follow `these <http://quick2wire.com/non-root-access-to-spi-on-the-pi>`_ instructions to create an ``spi`` group, and adding your user to that group, so you don't have to run as root.
References
----------
- http://hackaday.com/2013/01/06/hardware-spi-with-python-on-a-raspberry-pi/
- http://gammon.com.au/forum/?id=11516
- http://louisthiery.com/spi-python-hardware-spi-for-raspi/
- http://www.brianhensley.net/2012/07/getting-spi-working-on-raspberry-pi.html
- http://raspi.tv/2013/8-x-8-led-array-driven-by-max7219-on-the-raspberry-pi-via-python
- http://quick2wire.com/non-root-access-to-spi-on-the-pi
Contributing
------------
Pull requests (code changes / documentation / typos / feature requests / setup) are gladly accepted. If you are
intending some large-scale changes, please get in touch first to make sure we're on the same page: try and include
a docstring for any new methods, and try and keep method bodies small, readable and PEP8-compliant.
Contributors
^^^^^^^^^^^^
* Thijs Triemstra (@thijstriemstra)
* Jon Carlos (@webmonger)
* Unattributed (@wkapga)
* Taras (@tarasius)
* Brice Parent (@agripo)
License
-------
The MIT License (MIT)
Copyright (c) 2016 Richard Hull
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
SparkPost Python API client
===========================
The super-mega-official Python package for using the SparkPost API.
Installation
------------
Install from PyPI using `pip`_:
.. code-block:: bash
$ pip install sparkpost
.. _pip: http://www.pip-installer.org/en/latest/
Authorization
-------------
Go to `API & SMTP`_ in the SparkPost app and create an API key. We recommend using the ``SPARKPOST_API_KEY`` environment variable:
.. code-block:: python
from sparkpost import SparkPost
sp = SparkPost() # uses environment variable
Alternatively, you can pass the API key to the SparkPost class:
.. code-block:: python
from sparkpost import SparkPost
sp = SparkPost('YOUR API KEY')
For SparkPost EU and Enterprise accounts, pass in a second parameter to set the API host.
.. code-block:: python
from sparkpost import SparkPost
sp = SparkPost('YOUR API KEY', 'https://api.eu.sparkpost.com/api')
.. _API & SMTP: https://app.sparkpost.com/configuration/credentials
Resources
---------
The following resources are available in python-sparkpost:
.. toctree::
:glob:
:maxdepth: 1
resources/*
API reference
-------------
Auto-generated API reference for python-sparkpost:
.. toctree::
:maxdepth: 2
api
Using in Django
---------------
Configure Django to use SparkPost email backend
.. toctree::
:maxdepth: 2
django/backend
Additional documentation
------------------------
The underlying SparkPost API is documented at the official `SparkPost API Reference`_.
.. _SparkPost API Reference: https://www.sparkpost.com/api
Contribute
----------
#. Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug.
#. Fork `the repository`_ on GitHub and make your changes in a branch on your fork
#. Write a test which shows that the bug was fixed or that the feature works as expected.
#. Send a pull request. Make sure to add yourself to AUTHORS_.
.. _`the repository`: http://github.com/SparkPost/python-sparkpost
.. _AUTHORS: https://github.com/SparkPost/python-sparkpost/blob/master/AUTHORS.rst
| 21.865979 | 130 | 0.692598 |
7087357b7c1560eadbf5eb2bc14e008d10a63552 | 563 | rst | reStructuredText | docs/index.rst | makinacorpus/django-mapentity | 69a53fb8a676a9d4f6173b77fc09f99d38c4795e | [
"BSD-3-Clause"
] | 26 | 2015-01-12T13:22:40.000Z | 2022-03-07T16:50:52.000Z | docs/index.rst | TheoLechemia/mapentity-project | 41c78ead51d4733dcbef69c557fa6eec0ade71f7 | [
"BSD-3-Clause"
] | 94 | 2015-01-21T16:20:47.000Z | 2022-03-24T13:44:56.000Z | docs/index.rst | TheoLechemia/mapentity-project | 41c78ead51d4733dcbef69c557fa6eec0ade71f7 | [
"BSD-3-Clause"
] | 13 | 2015-01-22T07:11:15.000Z | 2019-10-28T02:07:48.000Z | .. MapEntity documentation master file, created by
sphinx-quickstart on Tue Oct 8 10:20:33 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to MapEntity's documentation!
=====================================
MapEntity is a CRUD interface for geospatial entities built with Django.
.. toctree::
:maxdepth: 2
installation
getting_started
customization
development
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 20.107143 | 76 | 0.660746 |
6a1f619d04bc128607bc5524f7ae2de62ca6deed | 751 | rst | reStructuredText | docs/index.rst | shoeffner/Flask-NoFLoC | 9ea86947c9a6777d1c4b308c6b5222bc965af679 | [
"MIT"
] | 1 | 2021-04-23T09:18:54.000Z | 2021-04-23T09:18:54.000Z | docs/index.rst | shoeffner/Flask-NoFLoC | 9ea86947c9a6777d1c4b308c6b5222bc965af679 | [
"MIT"
] | null | null | null | docs/index.rst | shoeffner/Flask-NoFLoC | 9ea86947c9a6777d1c4b308c6b5222bc965af679 | [
"MIT"
] | null | null | null | Welcome to Flask-NoFLoC
===========================
Welcome to Flask-NoFLoC's documentation.
Flask-NoFLoC is a Flask_ extension_ that opts your application out of Google's FLoC (Federated Learning of Cohorts).
Get started with :doc:`installation` and then get an overview with the :doc:`quickstart`.
The rest of the docs describe Flask-NoFLoC's :doc:`api`.
.. _Flask: https://flask.palletsprojects.com
.. _extension: https://flask.palletsprojects.com/en/1.1.x/extensiondev/
.. toctree::
:maxdepth: 1
:caption: Contents:
installation
quickstart
API Reference
-------------
If you are looking for information on a specific function, class or
method, this part of the documentation is for you.
.. toctree::
:maxdepth: 2
api
Additional Notes
----------------
.. toctree::
:maxdepth: 2
license
| 17.880952 | 89 | 0.669774 |
8ebc05ea8427c33a3156be490709d452b312cf51 | 456 | rst | reStructuredText | doc/source/bms/user-guide/troubleshooting.rst | OpenTelekomCloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 1 | 2017-09-18T13:54:39.000Z | 2017-09-18T13:54:39.000Z | doc/source/bms/user-guide/troubleshooting.rst | opentelekomcloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 34 | 2020-02-21T17:23:45.000Z | 2020-09-30T09:23:10.000Z | doc/source/bms/user-guide/troubleshooting.rst | OpenTelekomCloud/docs | bf7f76b5c8f74af898e3b3f726ee563c89ec2fed | [
"Apache-2.0"
] | 14 | 2017-08-01T09:33:20.000Z | 2019-12-09T07:39:26.000Z | ===============
Troubleshooting
===============
.. toctree::
:maxdepth: 1
what-can-i-do-if-i-cannot-log-in-to-my-bms-or-the-bms-evs-disk-is-lost-after-the-bms-is-started-or-r.md
what-should-i-do-if-a-key-pair-created-using-puttygen-cannot-be-imported-to-the-management-console.md
what-can-i-do-if-disks-cannot-be-attached-to-a-bms-that-restarts-abnormally.md
what-can-i-do-if-an-evs-disk-attached-to-a-windows-bms-is-in-offline-state.md
| 32.571429 | 106 | 0.688596 |
e0ae6eabb02f75443a821cf0b02f65bd232892d2 | 291 | rest | reStructuredText | server/requests/put_studyProgramUrl.rest | UniversityOfHelsinkiCS/ohjaajarekisteri | e84192dc37771daf22ee0961299abae658f64d88 | [
"MIT"
] | null | null | null | server/requests/put_studyProgramUrl.rest | UniversityOfHelsinkiCS/ohjaajarekisteri | e84192dc37771daf22ee0961299abae658f64d88 | [
"MIT"
] | 39 | 2020-05-06T12:11:53.000Z | 2022-01-22T12:04:59.000Z | server/requests/put_studyProgramUrl.rest | UniversityOfHelsinkiCS/ohjaajarekisteri | e84192dc37771daf22ee0961299abae658f64d88 | [
"MIT"
] | null | null | null | PUT http://localhost:8001/api/studyProgramUrls/
Content-Type: application/json
Authorization: bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6Mywicm9sZSI6ImFkbWluIiwiaWF0IjoxNTUwODM5MDU2fQ.j3-ora4N58OO2ZUUiqOMcy1ZPC39WVVdG1F4FnW3KYw
{
"type": "kokkelis",
"url": "www.testi.fi"
} | 36.375 | 157 | 0.828179 |
ae1154dcbcc68365276ad9e9b30144cc7fd94399 | 1,572 | rst | reStructuredText | latex/tagging/README.rst | NathanDunfield/artexamscan | e3e082ad185d724bb040bf7f7a18f345652ae846 | [
"MIT"
] | null | null | null | latex/tagging/README.rst | NathanDunfield/artexamscan | e3e082ad185d724bb040bf7f7a18f345652ae846 | [
"MIT"
] | null | null | null | latex/tagging/README.rst | NathanDunfield/artexamscan | e3e082ad185d724bb040bf7f7a18f345652ae846 | [
"MIT"
] | null | null | null | Tagging exams via LaTeX
=======================
This directory describes how to add the QR tags and grader boxes to an
exam. It requires only a reasonably recent TeX setup. Provided you use
LuaLaTeX, you can tag 1,500+ pages/minute using this method.
An example
==========
Here is a simple example with two exam versions where we want 10
copies of each, numbered 1-10 and 11-20 respectively.
1. The instructor provides PDF files for each version, in our example
"m1-9am.pdf" and "m1-10am.pdf" as well as a PDF of the desired
coversheet, here "cover.pdf". (You can use different cover sheets
with different versions if you want.)
2. To tag an exam, one creates very short LaTeX files, here called
"m1-9am-tagged.tex" and "m1-10am-tagged.tex". These rely on the
included LaTeX class "tickyoverlay.cls". You will need to
modify the "examdefault" parameters in the file to suit.
3. Now just run PDFLaTeX on these files::
pdflatex m1-9am-tagged.tex
pdflatex m1-10am-tagged.tex
which will generate two 70 page PDF files, one for each version.
LuaLaTeX
========
For a 4 times speedup, one can use LuaLaTeX instead of PDFLaTeX. Just
make sure the file "qrencode.lua" is in the same directory as
"tickyoverlay.cls" and then do::
lualatex m1-9am-tagged.tex
lualatex m1-10am-tagged.tex
Timings
=======
Time needed to generate 100 copies of a 7 page exam (including
coversheet) on Nathan's 2014 MacPro:
1. PDFLaTeX: 2 minutes 43 seconds
2. LuaLaTeX: 28 seconds
The first way results in a 3M file, the second in a 2.2M one.
| 28.581818 | 70 | 0.72201 |
e6a9f2612cb28505bfaf4f2b72907873d428d922 | 1,911 | rst | reStructuredText | docs/contributing/logo.rst | andygrunwald/Gerrie | baf2dcf49b7fb5684f553c8a219aeb8ee02709b4 | [
"MIT"
] | 8 | 2015-01-30T16:10:31.000Z | 2020-07-03T22:31:54.000Z | docs/contributing/logo.rst | andygrunwald/Gerrie | baf2dcf49b7fb5684f553c8a219aeb8ee02709b4 | [
"MIT"
] | 8 | 2015-01-04T14:52:48.000Z | 2016-11-19T09:15:37.000Z | docs/contributing/logo.rst | andygrunwald/Gerrie | baf2dcf49b7fb5684f553c8a219aeb8ee02709b4 | [
"MIT"
] | 11 | 2015-01-10T17:15:42.000Z | 2016-11-14T00:14:18.000Z | Logo
#####
Every product needs a logo.
Even Gerrie.
Sadly, the current authors are not very good at designing a meaningful logo.
This is why we hope that a designer will find some time to create a shiny new logo :)
This logo can be used to :doc:`talk about it at conferences or usergroups</contributing/various>` or to create recognition value.
It will be displayed on every site where Gerrie is announced.
We cannot give a formal briefing, but here are a few words.
Maybe they help to develop an idea:
Gerrie is a data mining tool / crawler that retrieves data from Google's code review system "Gerrit".
Data mining is a more general term for retrieving, cleaning and analyzing data in order to extract new information from it.
The data that is mined relates to developers who spend (mostly) spare time improving a software system by fixing bugs or adding new features.
The data contains source code, comments, and up and down votes.
The data is retrieved through standardized interfaces provided by Gerrit.
During this process the data is transformed into a different format and stored in a database.
The word "mining" can evoke mining in general, like coal mining.
The author was born in Duisburg, Germany.
Duisburg is a city that lived off mining for a long, long time.
Many coal mines are located around Duisburg.
The designer has free rein, but it would be nice if the logo related a little to this kind of story.
`gerrit`_, the open source project, already has a logo: Diffy - The Kung Fu Review Cuckoo
.. image:: ./../_static/gerrit-logo-diffy.png
Feel free.
We would love to receive a nice suggestion from you.
Sadly, we do not have a (big) budget for it, because we do not earn money with Gerrie.
All we can offer is fame.
You will be mentioned everywhere we publish this logo.
.. _gerrit: https://code.google.com/p/gerrit/
| 46.609756 | 148 | 0.759288 |
44583499fae52b681025f0bd90ca77ffd3236d9c | 145 | rst | reStructuredText | docs_build/message.rst | he0119/nonebot2 | bd7ee0a1bafc0ea7a7501ba37541349d4a81b73e | [
"MIT"
] | 1 | 2022-01-26T12:52:33.000Z | 2022-01-26T12:52:33.000Z | docs_build/message.rst | he0119/nonebot2 | bd7ee0a1bafc0ea7a7501ba37541349d4a81b73e | [
"MIT"
] | null | null | null | docs_build/message.rst | he0119/nonebot2 | bd7ee0a1bafc0ea7a7501ba37541349d4a81b73e | [
"MIT"
] | null | null | null | \-\-\-
sidebar_position: 4
\-\-\-
NoneBot.message 模块
======================
.. automodule:: nonebot.message
:members:
:show-inheritance:
| 13.181818 | 31 | 0.544828 |
6073aabcaaccf538cd626e6ffc0efd1750b577d1 | 1,157 | rst | reStructuredText | docs/vim/LicenseManager/ReservationInfo/State.rst | nandonov/pyvmomi | ad9575859087177623f08b92c24132ac019fb6d9 | [
"Apache-2.0"
] | 12 | 2016-09-14T21:59:46.000Z | 2019-12-18T18:02:55.000Z | docs/vim/LicenseManager/ReservationInfo/State.rst | nandonov/pyvmomi | ad9575859087177623f08b92c24132ac019fb6d9 | [
"Apache-2.0"
] | 2 | 2019-01-07T12:02:47.000Z | 2019-01-07T12:05:34.000Z | docs/vim/LicenseManager/ReservationInfo/State.rst | nandonov/pyvmomi | ad9575859087177623f08b92c24132ac019fb6d9 | [
"Apache-2.0"
] | 8 | 2020-05-21T03:26:03.000Z | 2022-01-26T11:29:21.000Z | .. _vim.LicenseManager.ReservationInfo: ../../../vim/LicenseManager/ReservationInfo.rst
.. _vim.LicenseManager.ReservationInfo.State: ../../../vim/LicenseManager/ReservationInfo/State.rst
vim.LicenseManager.ReservationInfo.State
========================================
Describes the reservation state of a license.
:contained by: `vim.LicenseManager.ReservationInfo`_
:type: `vim.LicenseManager.ReservationInfo.State`_
:name: licensed
values:
--------
unlicensedUse
The LicenseManager failed to acquire a license but the implementation policy allows us to use the licensed feature anyway. This is possible, for example, when a license server becomes unavailable after a license had been successfully reserved from it.
notUsed
This license is currently unused by the system, or the feature does not apply. For example, a DRS license appears as NotUsed if the host is not part of a DRS-enabled cluster.
noLicense
This indicates that the license has expired or the system attempted to acquire the license but was not successful in reserving it.
licensed
The required number of licenses have been acquired from the license source.
| 41.321429 | 254 | 0.760588 |
b145b7e674ee7c6a0ee6e16c33dd6113154dec27 | 232 | rst | reStructuredText | doc/source/guide/return.rst | fnoeding/exoself | 11dfceea12a9f6f8ed0018fd60e6de5f73b9fa35 | [
"BSD-3-Clause"
] | 4 | 2015-12-18T10:36:38.000Z | 2021-03-19T04:54:03.000Z | doc/source/guide/return.rst | fnoeding/exoself | 11dfceea12a9f6f8ed0018fd60e6de5f73b9fa35 | [
"BSD-3-Clause"
] | null | null | null | doc/source/guide/return.rst | fnoeding/exoself | 11dfceea12a9f6f8ed0018fd60e6de5f73b9fa35 | [
"BSD-3-Clause"
] | null | null | null | .. highlight:: boo
return
========
The ``return`` statement transfers control from the current function back to the caller. For non-``void`` functions, ``return`` also passes its argument back to the caller of the current function.
| 25.777778 | 191 | 0.715517 |
9bd3c0a6585f6a6e2b57555725914a1e282c8f69 | 234 | rst | reStructuredText | doc/src/sphinx/metrics/RateLimiting.rst | whiter4bbit/finagle | 1e06db17ca2de4b85209dd2fbc18e635815e994b | [
"Apache-2.0"
] | 2 | 2016-01-13T13:48:13.000Z | 2018-01-18T21:57:25.000Z | doc/src/sphinx/metrics/RateLimiting.rst | whiter4bbit/finagle | 1e06db17ca2de4b85209dd2fbc18e635815e994b | [
"Apache-2.0"
] | 4 | 2021-12-09T22:56:07.000Z | 2021-12-09T22:56:09.000Z | doc/src/sphinx/metrics/RateLimiting.rst | sohailalam2/finagle | 1ac2ce7657fd5952a92049a745945fe800c87fb5 | [
"Apache-2.0"
] | 4 | 2015-04-26T07:16:40.000Z | 2020-11-23T06:30:24.000Z | RateLimitingFilter
<<<<<<<<<<<<<<<<<<
**refused**
a counter of the number of refused connections by the rate limiting filter
RetryingFilter
<<<<<<<<<<<<<<
**retries**
a histogram of the number of times the service had to retry
| 19.5 | 76 | 0.666667 |
6e67ab3818ee10bdfb3c4377e1b14ce45082b619 | 2,757 | rst | reStructuredText | docs/source/get_started/rest_documentation.rst | karthikrameskum/motech | f207eb1c6e53a20a3f1464d536a5a9c8c30bac03 | [
"BSD-3-Clause"
] | 17 | 2015-08-11T07:39:39.000Z | 2021-08-30T04:24:51.000Z | docs/source/get_started/rest_documentation.rst | karthikrameskum/motech | f207eb1c6e53a20a3f1464d536a5a9c8c30bac03 | [
"BSD-3-Clause"
] | 474 | 2015-08-11T08:15:03.000Z | 2018-03-29T16:11:11.000Z | docs/source/get_started/rest_documentation.rst | karthikrameskum/motech | f207eb1c6e53a20a3f1464d536a5a9c8c30bac03 | [
"BSD-3-Clause"
] | 71 | 2015-09-03T15:09:11.000Z | 2018-07-24T04:34:30.000Z | =============================================
Automatic REST API documentation UI in MOTECH
=============================================
MOTECH uses `Swagger <http://swagger.io/>`_ for generating a user interface that documents and allows
testing of REST APIs. An interface can be generated for each module that wishes to register a REST API for documenting.
This document will describe the process of registering a REST API for the module with the system.
It is worth noting that this documentation will always be generated for :doc:`MDS <model_data/model_data>` entities that have REST access enabled.
Overview of the UI
##################
Swagger will generate documentation for each endpoint specified in the API description:
.. image:: img/rest_docs_operations.png
:scale: 100 %
:alt: Swagger UI - multiple operations
:align: center
For each HTTP method allowed for a given endpoint, the user will be able to view the details of the operation,
such as the structure of the response, structure of the expected request, allowed parameters.
.. image:: img/rest_docs_single_operation.png
:scale: 100 %
:alt: Swagger UI - single operation
:align: center
Users can use that UI to easily execute REST calls against the API and view the responses.
Registering REST documentation
##############################
The first step in registering REST documentation is creating a Swagger spec file that describes the API.
More information on spec files, ways of generating them and so on can be found on the `Swagger spec Wiki <https://github.com/swagger-api/swagger-spec/wiki>`_.
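For reference, a minimal Swagger 2.0 spec file could look like the following (the title, base path and endpoint here are purely illustrative placeholders, not an actual MOTECH module API):

```json
{
  "swagger": "2.0",
  "info": { "title": "My Module API", "version": "1.0" },
  "basePath": "/mymodule/api",
  "paths": {
    "/items": {
      "get": {
        "summary": "List items",
        "responses": { "200": { "description": "OK" } }
      }
    }
  }
}
```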
After generating the file, it has to be exposed by the module through HTTP. You can achieve this either by placing the
file in your webapp resources or by creating a Spring controller that will serve this content. For more information on exposing a resource,
refer to the :doc:`UI documentation <ui>`.
After the resource is exposed through the UI, its path should be specified in the ModuleRegistrationData bean using
the **restDocsPath** property. Below is an example of a simple module registration that registers the spec file
with the system.
.. code-block:: xml
<bean id="moduleRegistrationData" class="org.motechproject.osgi.web.ModuleRegistrationData">
<constructor-arg name="url" value="../mymodule/resources/index.html"/>
<constructor-arg name="moduleName" value="my-module"/>
<property name="restDocsPath" value="/mymodule/resources/spec.json"/>
</bean>
After these steps, the module and its API will be incorporated into the Swagger UI exposed by MOTECH.
| 51.055556 | 158 | 0.694233 |
7fb1e6d4010c5407e6ce1f8b5e91473d0802bf70 | 72 | rst | reStructuredText | README.rst | ihmeuw-msca/simbin | 270a0d76b814c45db5363feccfe114b682eca691 | [
"BSD-2-Clause"
] | null | null | null | README.rst | ihmeuw-msca/simbin | 270a0d76b814c45db5363feccfe114b682eca691 | [
"BSD-2-Clause"
] | null | null | null | README.rst | ihmeuw-msca/simbin | 270a0d76b814c45db5363feccfe114b682eca691 | [
"BSD-2-Clause"
] | 1 | 2021-05-28T17:20:32.000Z | 2021-05-28T17:20:32.000Z | Simulation of Multivariate Binomial
===================================
| 24 | 35 | 0.444444 |
dc4db6866b0750badfdff01c20994db6cf56ea6c | 337 | rst | reStructuredText | doc/wrappers/preprocessors.rst | KristianHolsheimer/keras-gym | 0296ddcc8685e1ce732c3173caaa0fd25af9ef58 | [
"MIT"
] | 16 | 2019-07-01T10:56:26.000Z | 2021-01-31T18:56:56.000Z | doc/wrappers/preprocessors.rst | KristianHolsheimer/keras-gym | 0296ddcc8685e1ce732c3173caaa0fd25af9ef58 | [
"MIT"
] | 10 | 2019-03-10T21:56:10.000Z | 2020-09-06T21:49:55.000Z | doc/wrappers/preprocessors.rst | KristianHolsheimer/keras-gym | 0296ddcc8685e1ce732c3173caaa0fd25af9ef58 | [
"MIT"
] | 5 | 2019-08-02T22:11:19.000Z | 2020-04-19T20:18:38.000Z | Preprocessors
=============
.. autosummary::
:nosignatures:
keras_gym.wrappers.BoxActionsToReals
keras_gym.wrappers.ImagePreprocessor
keras_gym.wrappers.FrameStacker
.. autoclass:: keras_gym.wrappers.BoxActionsToReals
.. autoclass:: keras_gym.wrappers.ImagePreprocessor
.. autoclass:: keras_gym.wrappers.FrameStacker
| 22.466667 | 51 | 0.762611 |
ca20e715641a9c7a50ab775a48bec336fa05f8f6 | 23 | rst | reStructuredText | doc_lang/source/c-06-file-io.rst | CAUCHY2932/DocSphinx | 67596935702dbcaddc61aed6c342a8f1913b8520 | [
"MIT"
] | null | null | null | doc_lang/source/c-06-file-io.rst | CAUCHY2932/DocSphinx | 67596935702dbcaddc61aed6c342a8f1913b8520 | [
"MIT"
] | null | null | null | doc_lang/source/c-06-file-io.rst | CAUCHY2932/DocSphinx | 67596935702dbcaddc61aed6c342a8f1913b8520 | [
"MIT"
] | null | null | null | C file I/O
**************
| 7.666667 | 14 | 0.217391 |
b94da7aabc0fd6d7ca10714d76af327bbbacc4c1 | 7,393 | rst | reStructuredText | docs/server-side/data.rst | Ximpia/ximpia | ed2ad22faa42ceca2bde782a47624e5a6ef60e3b | [
"Apache-2.0"
] | 1 | 2020-09-11T01:54:24.000Z | 2020-09-11T01:54:24.000Z | docs/server-side/data.rst | Ximpia/ximpia | ed2ad22faa42ceca2bde782a47624e5a6ef60e3b | [
"Apache-2.0"
] | null | null | null | docs/server-side/data.rst | Ximpia/ximpia | ed2ad22faa42ceca2bde782a47624e5a6ef60e3b | [
"Apache-2.0"
] | null | null | null | Data
====
Ximpia provides a data layer to connect to your data sources. These data sources can be
relational (SQL) data sources or noSQL data sources.
We follow the philosophy that the service layer does not call Django model data-access methods
directly, but instead uses data classes (somewhat similar to a Django model manager).
You may have your data operations return Django models, or simple entities such as dictionaries
with attribute access (``customer.name``), when you deal with other data sources and NoSQL.
Advantages of data layer:
* You keep simple and complex data access methods in the data layer
* It knows which data nodes to call (master or replicas), depending on the type of request (view or action)
* You may change your data sources without changing your services / business layers
* You may change data sources (django, sqlalchemy, redis, mongo)
Currently we offer ``CommonDAO``, which deals with Django model data access. This class provides
common data operations.
Master/Replicas
---------------
In case you use master/slave data nodes, you don't need routers to decide between master and slaves.
We think a simpler design is good: depending on whether the request is a view or an action, the data
layer hits the master or a slave node. If no slaves are defined, it will always hit the default data node.
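The routing rule described above can be sketched as follows (illustrative only; not Ximpia's actual implementation, and the node names are made up):

```python
import random

def pick_node(request_type, slaves=()):
    # Action requests (writes) must hit the master; view requests (reads)
    # may be served by a replica when at least one is configured.
    if request_type == 'view' and slaves:
        return random.choice(list(slaves))
    return 'default'

pick_node('action', slaves=('replica_1',))  # always the master ('default')
pick_node('view')                           # no replicas configured -> 'default'
```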
BaseModel
---------
Keeps auditing attributes (``create_user`` and ``update_user``) as well as the create and update fields.
All Django models would extend this model.
Handles soft deletes: when we delete data, we write ``isDeleted=True``.
You can delete data permanently by passing the ``is_real`` attribute to delete operations.
CommonDAO
---------
Your data layer classes will be related to django models, like:
.. code-block:: python
from ximpia.xpcore.data import CommonDAO

class UserMetaDAO(CommonDAO):
    model = UserMeta

    def my_data_operation(self, customer_id):
        # Return a Django queryset, model instance, attribute dictionary or plain dictionary.
        # This is a good place for complex queries, Q objects, etc.
        # Illustrative completion: filter the model by the given id.
        return self.model.objects.filter(id=customer_id)
Implementation:
.. code-block:: python
db_customer = self._instances(CustomerDAO)[0]
customers = db_customer.search(id=23)
This will return a Django queryset, which you can filter further (``customers.filter(name='john')``).
In case you need to access the model (from the service and business layers) and operate on it directly, do:
.. code-block:: python
db_customer.model.objects.all()
**Exception**
Will raise ``XpMsgException`` with a literal error message and the attribute ``origin='data'``:
.. code-block:: python
from ximpia.xpcore.models import XpMsgException
try:
db_customers.get(id=34)
except XpMsgException:
# treat exception
* :ref:`commondao.check`
* :ref:`commondao.create`
* :ref:`commondao.delete`
* :ref:`commondao.delete_if_exists`
* :ref:`commondao.delete_by_id`
* :ref:`commondao.filter_data`
* :ref:`commondao.get`
* :ref:`commondao.get_all`
* :ref:`commondao.get_create`
* :ref:`commondao.get_map`
* :ref:`commondao.get_by_id`
* :ref:`commondao.save`
* :ref:`commondao.search`
* :ref:`commondao.search_fields`
.. _commondao.check:
check
"""""
Checks whether a matching row exists.
**Attributes**
keyword attributes, like:
.. code-block:: python
if db_customer.check(name='john', status='new'):
# more...
**Returns**
True/False
.. _commondao.create:
create
""""""
Will create a model instance with the given attributes.
.. code-block:: python
customer = db_customer.create(name='john', status='new')
**Attributes**
keyword attributes
**Returns**
Django model created
.. _commondao.delete:
delete
""""""
Will delete rows that match the keyword attributes
.. code-block:: python
db_customer.delete(name='john', status='new')
**Attributes**
* ``is_real`` (boolean)
keyword attributes
**Returns**
None
.. _commondao.delete_if_exists:
delete_if_exists
""""""""""""""""
Will delete rows that match the keyword attributes if any exist; if none exist, no exception is thrown.
.. code-block:: python
db_customer.delete_if_exists(name='john', status='new')
**Attributes**
* ``is_real`` (boolean)
keyword attributes
**Returns**
None
.. _commondao.delete_by_id:
delete_by_id
""""""""""""
Delete by primary key
.. code-block:: python
db_customer.delete_by_id(23)
db_customer.delete_by_id(23, is_real=True)
**Attributes**
* ``pk`` (long)
**Optional Attributes**
* ``is_real`` (boolean)
**Returns**
None
.. _commondao.filter_data:
filter_data
"""""""""""
Search model with ordering and paging
.. code-block:: python
db_customer.filter_data(status='OK')
db_customer.filter_data(status='OK', xpNumberMatches=100)
**Attributes**
keyword attributes
**Optional Attributes**
* ``xpNumberMatches`` (int) : default 100
* ``xpPage`` (int) : default 1
* ``xpOrderBy`` (tuple)
keyword attributes
**Returns**
queryset
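The ``xpPage`` / ``xpNumberMatches`` paging parameters map onto a simple slice over the result set; a self-contained sketch of that arithmetic (illustrative only, not Ximpia code):

```python
def page_bounds(xp_page, xp_number_matches):
    # Pages are 1-based; page p covers rows [(p-1)*n, p*n).
    start = (xp_page - 1) * xp_number_matches
    return start, start + xp_number_matches

rows = list(range(250))              # stand-in for a queryset
start, end = page_bounds(2, 100)
second_page = rows[start:end]        # rows 100..199
```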
.. _commondao.get:
get
"""
Get the model instance that matches the attributes
**Attributes**
keyword attributes for query
**Returns**
django model
.. _commondao.get_all:
get_all
"""""""
Get all results for model
**Returns**
django queryset
.. _commondao.get_create:
get_create
""""""""""
Get the object; in case it does not exist, create it
**Attributes**
Keyword attributes
**Returns**
(object, created) <model, boolean>
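``get_create`` follows the same get-or-create contract as Django's ``get_or_create``. A minimal in-memory stand-in (not Ximpia code, rows are plain dictionaries) showing the call shape and return value:

```python
class InMemoryDAO:
    # Toy stand-in for illustration only.
    def __init__(self):
        self.rows = []

    def get_create(self, **attrs):
        for row in self.rows:
            if all(row.get(k) == v for k, v in attrs.items()):
                return row, False   # found: nothing created
        self.rows.append(dict(attrs))
        return self.rows[-1], True  # inserted: created

db = InMemoryDAO()
customer, created = db.get_create(name='john')    # created is True
customer2, created2 = db.get_create(name='john')  # created2 is False, same row
```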
.. _commondao.get_map:
get_map
"""""""
Get container (dictionary) of {id: object, ...} for list of ids
**Attributes**
* ``id_list`` (list) : List of ids
**Returns**
Dictionary with ids and objects
.. _commondao.get_by_id:
get_by_id
"""""""""
Get object by id
**Attributes**
* ``field_id``
**Returns**
Model object
.. _commondao.save:
save
""""
Saves the model
.. code-block:: python
customer = CustomerDAO.model(name='john', status='OK')
customer.save()
db_customer = self._instances(CustomerDAO)[0]
customer = db_customer.get_by_id(23)
customer.name='james'
customer.save()
**Returns**
None
.. _commondao.search:
search
""""""
Search model to get queryset (like filter)
Search model, like:
.. code-block:: python
customers = db_customer.search(name='john')
**Attributes**
* ``qs_args`` (dict) : Keyword attributes like ``attr=value``
**Returns**
Django queryset
.. _commondao.search_fields:
search_fields
"""""""""""""
Search the table with paging and ordering for a set of fields. ``listMap`` allows mapping from keys to model fields.
**Attributes**
* ``fields``:tuple<str>
**Optional Attributes**
* ``page_start``:int [optional] [default:1]
* ``page_end``:int [optional]
* ``number_results``:int [optional] [default:from settings]
* ``order_by``:tuple<str> [optional] [default:[]]
keyword attributes
**Returns**
The queryset with ``values(*fields)`` applied: ``xpList:ValuesQueryset``.
| 19.927224 | 111 | 0.637089 |
5efac8e86ea426c331af94995b3b848f43efac69 | 647 | rst | reStructuredText | docs/source/enums/host_numeric_sensor_type.rst | Infinidat/pyvisdk | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | [
"MIT"
] | null | null | null | docs/source/enums/host_numeric_sensor_type.rst | Infinidat/pyvisdk | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | [
"MIT"
] | null | null | null | docs/source/enums/host_numeric_sensor_type.rst | Infinidat/pyvisdk | f2f4e5f50da16f659ccc1d84b6a00f397fa997f8 | [
"MIT"
] | null | null | null |
==================================================================================================
HostNumericSensorType
==================================================================================================
.. describe:: HostNumericSensorType
.. py:data:: HostNumericSensorType.fan
Fan sensor
.. py:data:: HostNumericSensorType.other
Other sensor.
.. py:data:: HostNumericSensorType.power
Power sensor
.. py:data:: HostNumericSensorType.temperature
Temperature sensor
.. py:data:: HostNumericSensorType.voltage
Voltage Sensor
| 18.485714 | 98 | 0.42813 |
b9e058da294c69534ed63efbac8e696cd5cb5570 | 368 | rst | reStructuredText | HISTORY.rst | kuvandjiev/djangocms-redirect | e06d3fa9f28aed875b5d25d13b2bfb4b02cddac1 | [
"BSD-3-Clause"
] | null | null | null | HISTORY.rst | kuvandjiev/djangocms-redirect | e06d3fa9f28aed875b5d25d13b2bfb4b02cddac1 | [
"BSD-3-Clause"
] | null | null | null | HISTORY.rst | kuvandjiev/djangocms-redirect | e06d3fa9f28aed875b5d25d13b2bfb4b02cddac1 | [
"BSD-3-Clause"
] | null | null | null | .. :changelog:
History
-------
0.2.0 (2018-11-03)
++++++++++++++++++
* Updated for Django 1.11
* Added configurable cache timeout
* Added configuration option to check redirect on 404 only
0.1.1 (2017-11-19)
++++++++++++++++++
* Added missing migration.
* Fixed compatibility issue with Django 1.8
0.1.0 (2016-02-01)
++++++++++++++++++
* First release on PyPI.
| 16 | 58 | 0.595109 |
60c4437b20391e8e4279128a683a4e4e3e040447 | 405 | rst | reStructuredText | docs/install.rst | weex/federation | 01357aacb04b076442ce5f803a0fc65df5a74d09 | [
"BSD-3-Clause"
] | 93 | 2016-11-26T10:52:13.000Z | 2022-01-15T20:07:35.000Z | docs/install.rst | weex/federation | 01357aacb04b076442ce5f803a0fc65df5a74d09 | [
"BSD-3-Clause"
] | 75 | 2016-10-18T10:15:44.000Z | 2019-10-05T22:16:32.000Z | docs/install.rst | weex/federation | 01357aacb04b076442ce5f803a0fc65df5a74d09 | [
"BSD-3-Clause"
] | 9 | 2017-04-08T08:03:45.000Z | 2021-09-13T22:00:48.000Z | Install
=======
Dependencies
------------
Depending on your operating system, certain dependencies will need to be installed.
lxml
....
lxml itself is installed by pip but the dependencies need to be installed `as per lxml instructions <http://lxml.de/installation.html#requirements>`_.
Installation
------------
Install with pip or include in your requirements file.
::
pip install federation
| 18.409091 | 150 | 0.718519 |
d6fab8775959f86075ada260a947a50dd2780ae0 | 1,345 | rst | reStructuredText | doc/rst/source.utils.rst | AbbieZ/Fraudiscern | ae30c58a62129e05e219ad48309ceb7f5d944bba | [
"Apache-2.0"
] | 2 | 2021-07-28T15:11:22.000Z | 2022-02-19T03:12:36.000Z | doc/rst/source.utils.rst | AbbieZ/Fraudiscern | ae30c58a62129e05e219ad48309ceb7f5d944bba | [
"Apache-2.0"
] | null | null | null | doc/rst/source.utils.rst | AbbieZ/Fraudiscern | ae30c58a62129e05e219ad48309ceb7f5d944bba | [
"Apache-2.0"
] | 2 | 2021-07-05T15:22:08.000Z | 2021-07-08T01:54:10.000Z | source.utils package
====================
Submodules
----------
source.utils.data\_generator module
-----------------------------------
.. automodule:: source.utils.data_generator
:members:
:undoc-members:
:show-inheritance:
source.utils.data\_loader module
--------------------------------
.. automodule:: source.utils.data_loader
:members:
:undoc-members:
:show-inheritance:
source.utils.data\_sampler module
---------------------------------
.. automodule:: source.utils.data_sampler
:members:
:undoc-members:
:show-inheritance:
source.utils.hdfs\_manager module
---------------------------------
.. automodule:: source.utils.hdfs_manager
:members:
:undoc-members:
:show-inheritance:
source.utils.logger module
--------------------------
.. automodule:: source.utils.logger
:members:
:undoc-members:
:show-inheritance:
source.utils.model\_persistence module
--------------------------------------
.. automodule:: source.utils.model_persistence
:members:
:undoc-members:
:show-inheritance:
source.utils.spark\_manager module
----------------------------------
.. automodule:: source.utils.spark_manager
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: source.utils
:members:
:undoc-members:
:show-inheritance:
| 19.214286 | 46 | 0.573234 |
3586d0ea8275b5fa1a7c64ce8af1ade88a761a91 | 213 | rst | reStructuredText | doc/sphinx-guides/source/user/super-user.rst | ekoi/dataverse4-dans-migration | 2df5aeacdd6e9adfd44f1778336640dcfd9cb94c | [
"Apache-2.0"
] | null | null | null | doc/sphinx-guides/source/user/super-user.rst | ekoi/dataverse4-dans-migration | 2df5aeacdd6e9adfd44f1778336640dcfd9cb94c | [
"Apache-2.0"
] | null | null | null | doc/sphinx-guides/source/user/super-user.rst | ekoi/dataverse4-dans-migration | 2df5aeacdd6e9adfd44f1778336640dcfd9cb94c | [
"Apache-2.0"
] | null | null | null | Super User
+++++++++++++++++++++++
[Note: Documentation to be added about features available for super admins of
the Dataverse, which provides several options for configuring and
customizing your application.]
| 30.428571 | 78 | 0.7277 |
adc6c6a187c011a2cc54998c8aa78d4c1c763476 | 1,601 | rst | reStructuredText | docs/source/basics/features.rst | damar-wicaksono/gsa-module | e2f62d7b3d53ce4afd2a66d2746f607366816488 | [
"MIT"
] | 4 | 2019-01-19T12:50:55.000Z | 2021-12-06T10:03:09.000Z | docs/source/basics/features.rst | damar-wicaksono/gsa-module | e2f62d7b3d53ce4afd2a66d2746f607366816488 | [
"MIT"
] | 1 | 2020-10-07T10:41:16.000Z | 2020-10-07T10:41:16.000Z | docs/source/basics/features.rst | damar-wicaksono/gsa-module | e2f62d7b3d53ce4afd2a66d2746f607366816488 | [
"MIT"
] | 1 | 2020-02-14T21:41:19.000Z | 2020-02-14T21:41:19.000Z | .. gsa_module_features:
List of Features
----------------
The following are the main features of the current release (v0.9.0):
- Capability to generate designs of computer experiments using 4 different methods: simple random sampling (srs), latin hypercube sampling (lhs), Sobol' sequence, and optimized latin hypercube, using either the command line interface ``gsa_create_sample`` or the module API via ``import gsa_module``
- Sobol' quasi-random number sequence generator is natively implemented in Python3 based on C++ implementation of Joe and Kuo (2008).
- Randomization of the Sobol' quasi-random number using random shift procedure
- Optimization of the latin hypercube design is done via evolutionary stochastic algorithm (ESE)
- Generation of separate test points based on a given design using Hammersley quasi-random sequence
- Capability to generate design of computer experiments for screening analysis (One-at-a-time design), based on the trajectory design (original Morris) and radial design (Saltelli et al.)
- Capability to compute the statistics of elementary effects, standardized or otherwise both for trajectory and radial designs. The statistics (mean, mean of absolute, and standard deviation) are used as the basis of parameter importance ranking.
- Capability to estimate the first-order (main effect) Sobol' sensitivity indices using two different estimators (Saltelli and Janon).
- Capability to estimate the total effect Sobol' sensitivity indices using two different estimators (Sobol-Homma and Jansen).
- All estimated quantities are equipped with their bootstrap samples
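Of the sampling methods listed above, the latin hypercube idea is the easiest to illustrate. The sketch below is not the gsa_module API; it is a minimal plain-numpy version of an LHS design, in which each dimension is split into equal-width bins and every bin is sampled exactly once:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Minimal LHS on the unit hypercube: per dimension, one random
    point inside each of n_samples equal-width bins, in shuffled order."""
    rng = np.random.default_rng(seed)
    # Random offsets inside each bin, then scale the bin index to [0, 1).
    design = (np.arange(n_samples)[:, None]
              + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):  # shuffle the bin order independently per dimension
        design[:, d] = rng.permutation(design[:, d])
    return design

design = latin_hypercube(8, 2, seed=42)
```

Optimized LHS (the ESE algorithm) and the Sobol' sequence require considerably more machinery and are best used through the package itself.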
| 88.944444 | 286 | 0.798876 |
e72461be0b15ef740d13ea9cdfb552e1303ac1a8 | 1,068 | rst | reStructuredText | source/code_example/sound.rst | scls-cs/cs-intro | 70111e40b1944f31cf375bad3a69a421c986e96a | [
"MIT"
] | null | null | null | source/code_example/sound.rst | scls-cs/cs-intro | 70111e40b1944f31cf375bad3a69a421c986e96a | [
"MIT"
] | null | null | null | source/code_example/sound.rst | scls-cs/cs-intro | 70111e40b1944f31cf375bad3a69a421c986e96a | [
"MIT"
] | null | null | null | Sound Encoding
==============

Two Kinds of Signals
--------------------

Analog Signals
""""""""""""""

An analog signal varies continuously, such as temperature, sound, or air pressure.

Digital Signals
"""""""""""""""

A digital signal takes discrete, non-continuous values.

Sound Encoding
--------------

Sound encoding is the process of converting an analog sound signal into a digital signal; its main steps are sampling and quantization.

Sampling
""""""""

Sampling collects samples of the analog sound signal at fixed time intervals. The number of samples collected per second is called the sampling rate.

Quantization
""""""""""""

Quantization converts the sequence of sampled analog values into a sequence of binary numbers; this binary sequence is the encoding of the sound.

.. note::
   The higher the sampling rate and the larger the quantization bit depth, the more faithfully the sound is reproduced.

Sound Storage Size
""""""""""""""""""

.. note::
   With sampling rate ``f``, quantization bit depth ``x``, and two (stereo) channels, how much space is needed to store ``t`` seconds of sound?

   Analysis: the stored sound consists of sample points; per channel there are ``f*t`` of them. Each sample point is encoded with ``x`` bits, so the recording needs ``f*t*x*2`` bits of storage.

.. code-block:: text

   Question 1: A telephone audio signal is digitized. If the sampling rate is 9.4 kHz, each sample is
   quantized with 16 bits, and the recording is mono, then one minute of the recording occupies __________ bytes.

   A. 9400*16
   B. 9.4*1024*16*60/8
   C. 9400*16*60/8
   D. None of the above

   Answer: C

.. code-block:: text

   Question 2: The audio output of a tape recorder is digitized. If the sampling rate is 47 kHz, each sample is
   quantized with 16 bits, and the recording is stereo (two channels), then one second of the recording
   occupies __________ bytes.

   A. 47000*16/8
   B. 47000*16*2/8
   C. 47*1024*16*2/8
   D. None of the above

   Answer: B

.. code-block:: text

   Question 3: The audio output of a tape recorder is digitized. The quantization range is 0-1023, the
   sampling rate is 47 kHz, and the recording is stereo, then one second of the recording
   occupies __________ bytes.

   A. 47000*1024*2/8
   B. 47000*10*2/8
   C. 47*1024*10*2/8
   D. None of the above

   Answer: B
   Explanation: a quantization range of 0-1023 means the bit depth is 10.
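The storage formula (sampling rate × bit depth × channels × duration, divided by 8 for bytes) can be checked directly; the snippet below only illustrates the arithmetic behind the three questions:

```python
def audio_storage_bytes(sample_rate_hz, bits_per_sample, channels, seconds):
    """Bytes needed for uncompressed PCM audio: f * x * channels * t / 8."""
    return sample_rate_hz * bits_per_sample * channels * seconds // 8

q1 = audio_storage_bytes(9400, 16, 1, 60)   # Question 1: mono, one minute
q2 = audio_storage_bytes(47000, 16, 2, 1)   # Question 2: stereo, one second
q3 = audio_storage_bytes(47000, 10, 2, 1)   # Question 3: range 0-1023 -> 10 bits
```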
| 13.35 | 65 | 0.654494 |
76e9dc9df45cb8fd1e8005c928bc9b6fc1285fba | 598 | rst | reStructuredText | docs/contents.rst | CoderDojoTC/python-minecraft | d78ff6d30e47b887f5db12f23de81dd716576c0b | [
"MIT"
] | 31 | 2015-01-30T22:30:55.000Z | 2022-02-12T17:10:26.000Z | docs/contents.rst | CoderDojoTC/python-minecraft | d78ff6d30e47b887f5db12f23de81dd716576c0b | [
"MIT"
] | 16 | 2015-01-20T23:56:43.000Z | 2016-01-03T04:14:04.000Z | docs/contents.rst | CoderDojoTC/python-minecraft | d78ff6d30e47b887f5db12f23de81dd716576c0b | [
"MIT"
] | 24 | 2015-02-21T22:52:43.000Z | 2021-12-14T00:18:34.000Z | ==============================================================
Build Worlds in Minecraft with Python Documentation Contents
==============================================================
This is the master table of contents. However, you will probably find
the :doc:`Overview Page <index>` to be more user-friendly.
.. toctree::
   :hidden:

   index

.. toctree::
   :maxdepth: 2

   classroom/index
   other-setups/index
   mentors/index
   reference/index
   faq/index
   glossary
   todo
   release-notes

Indices and Tables
==================
* :ref:`genindex`
* :ref:`search`
| 19.290323 | 69 | 0.518395 |
8fd241c20a246f63f9dd53b4c577474713085a24 | 425 | rst | reStructuredText | doc/source/packagedoc_files/paws.core.workflows.IO.rst | slaclab/paws | 60cf9a554f646fd30be4aaee325bd0b86c6ace68 | [
"BSD-3-Clause-LBNL"
] | 2 | 2019-07-19T14:41:55.000Z | 2021-03-11T23:15:24.000Z | doc/source/packagedoc_files/paws.core.workflows.IO.rst | slaclab/paws | 60cf9a554f646fd30be4aaee325bd0b86c6ace68 | [
"BSD-3-Clause-LBNL"
] | 11 | 2017-12-13T19:57:38.000Z | 2018-05-21T23:01:08.000Z | doc/source/packagedoc_files/paws.core.workflows.IO.rst | slaclab/paws | 60cf9a554f646fd30be4aaee325bd0b86c6ace68 | [
"BSD-3-Clause-LBNL"
] | 4 | 2017-09-11T17:26:57.000Z | 2020-12-13T00:07:06.000Z | paws.core.workflows.IO package
==============================
Submodules
----------
paws.core.workflows.IO.batch_fabio_load module
----------------------------------------------
.. automodule:: paws.core.workflows.IO.batch_fabio_load
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: paws.core.workflows.IO
   :members:
   :undoc-members:
   :show-inheritance:
| 18.478261 | 55 | 0.548235 |
5e7fc9b4fb90720ea8138771fc6d2bc77b288feb | 2,198 | rst | reStructuredText | docs/index.rst | fakufaku/fast_bss_eval | 6e8a9a99aa7947c8075bc216fa007ef391e97f66 | [
"MIT"
] | 39 | 2021-10-13T06:44:45.000Z | 2022-02-07T21:53:46.000Z | docs/index.rst | fakufaku/fast_bss_eval | 6e8a9a99aa7947c8075bc216fa007ef391e97f66 | [
"MIT"
] | 4 | 2022-03-18T04:49:57.000Z | 2022-03-20T17:11:33.000Z | docs/index.rst | fakufaku/fast_bss_eval | 6e8a9a99aa7947c8075bc216fa007ef391e97f66 | [
"MIT"
] | 4 | 2021-10-13T06:02:03.000Z | 2022-03-23T05:25:48.000Z | .. fast_bss_eval documentation master file, created by
   sphinx-quickstart on Mon Oct 4 09:37:44 2021.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to fast_bss_eval's documentation!
=========================================
.. toctree::
   :maxdepth: 1
   :hidden:
   :caption: Contents:

   changelog

.. image:: https://readthedocs.org/projects/fast-bss-eval/badge/?version=latest
   :target: https://fast-bss-eval.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

.. image:: https://github.com/fakufaku/fast_bss_eval/actions/workflows/lint.yml/badge.svg?branch=main
   :target: https://github.com/fakufaku/fast_bss_eval/actions/workflows/lint.yml
   :alt: Linting Status

.. image:: https://github.com/fakufaku/fast_bss_eval/actions/workflows/pythonpackage.yml/badge.svg
   :target: https://github.com/fakufaku/fast_bss_eval/actions/workflows/pythonpackage.yml
   :alt: Tests Status
| Do you have a zillion BSS audio files to process and it is taking days?
| Is your simulation never ending?
|
| Fear no more! `fast_bss_eval` is here to help **you!**
``fast_bss_eval`` is a fast implementation of the bss_eval metrics for the
evaluation of blind source separation. Our implementation of the bss\_eval
metrics has the following advantages compared to other existing ones.
* seamlessly works with **both** `numpy <https://numpy.org/>`_ arrays and `pytorch <https://pytorch.org>`_ tensors
* very fast
* can be even faster by using an iterative solver (add the ``use_cg_iter=10`` option to the function call)
* supports batched computations
* differentiable via pytorch
* can run on GPU via pytorch
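For intuition about what the ``si_sdr`` functions measure, here is a plain-numpy sketch of the usual scale-invariant SDR definition; it is not the library's optimized implementation:

```python
import numpy as np

def si_sdr_db(ref, est):
    """Scale-invariant SDR: project est onto ref to get the target part,
    then compare the target energy with the residual energy, in dB."""
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    # Optimal rescaling of the reference that best explains the estimate.
    target = (np.dot(est, ref) / np.dot(ref, ref)) * ref
    residual = est - target
    return 10.0 * np.log10(np.dot(target, target) / np.dot(residual, residual))
```

The packaged functions add batching, GPU execution, and the permutation handling provided by the ``*_pit_*`` variants.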
.. automodule:: fast_bss_eval
API
~~~
.. autofunction:: fast_bss_eval.bss_eval_sources
.. autofunction:: fast_bss_eval.sdr
.. autofunction:: fast_bss_eval.sdr_pit_loss
.. autofunction:: fast_bss_eval.sdr_loss
.. autofunction:: fast_bss_eval.si_bss_eval_sources
.. autofunction:: fast_bss_eval.si_sdr
.. autofunction:: fast_bss_eval.si_sdr_pit_loss
.. autofunction:: fast_bss_eval.si_sdr_loss
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| 31.855072 | 114 | 0.737489 |
8ab50f19dc20fead6ef9822dd0aa34e7d315c0eb | 8,781 | rst | reStructuredText | docs/source/how_to_use.rst | jacione/cohere | a8f2afb88a9f826c13da231ac82d34e104139908 | [
"BSD-3-Clause"
] | null | null | null | docs/source/how_to_use.rst | jacione/cohere | a8f2afb88a9f826c13da231ac82d34e104139908 | [
"BSD-3-Clause"
] | null | null | null | docs/source/how_to_use.rst | jacione/cohere | a8f2afb88a9f826c13da231ac82d34e104139908 | [
"BSD-3-Clause"
] | null | null | null | .. _use:
Use
===
| User scripts can be obtained from https://github.com/AdvancedPhotonSource/cohere-scripts. The cohere-scripts package is a supplemental git repository that contains executable python scripts for every stage of CDI phase retrieval, from raw data processing to final images. Part of the package are classes encapsulating the instruments used to collect experiment data at the 34-ID-C beamline. Users with instruments other than those defined in the 34-ID-C suite can easily add their own code to encapsulate their beamline's specifics.
In addition, the repository contains a directory with sample configuration files for each stage, and a complete example including experimental data and the configuration files to phase the data.
Installing Scripts
##################
| There are three ways to get the cohere-scripts package onto your local computer, or remote computer.
1. The cohere-scripts package can be downloaded as a zip file by visiting https://github.com/advancedPhotonSource/cohere-scripts and clicking the green “Code” button and selecting “Download ZIP”.
2. Alternatively one can clone the repository using git. This will create the cohere-scripts directory containing all of the cohere-scripts content. In this code below we clone it to cohere-scripts-main directory.
::
git clone https://github.com/advancedPhotonSource/cohere-scripts cohere-scripts-main
3. Another way to get the cohere-scripts package on your machine is with the wget command:
::
wget https://github.com/AdvancedPhotonSource/cohere-scripts/archive/main.zip
unzip main.zip
| After the package is installed, change directory to cohere-scripts-main and run the setup.py script. The setup.py script modifies paths from relative to absolute in the provided example configuration:
::
cd cohere-scripts-main
python setup.py
Scripts
#######
| In this documentation it is assumed that the scripts are invoked from the cohere-scripts directory and that the activated conda environment has the cohere package installed. Below is a list of scripts with descriptions and explanations of how to run them:
- create_experiment.py
This script creates a new experiment directory with a conf subdirectory. The conf subdirectory contains all configuration files. The files contain all parameters, but most of them are commented out to make them inactive. Users need to edit the configuration files and uncomment the wanted functionality.
Running this script:
::
python cohere-scripts/create_experiment.py <id> <scan ranges> <working_dir> --specfile <specfile> --beamline <beamline>
The parameters are as follows:
* id: an arbitrary literal value assign to this experiment
* scan ranges: scans that will be included in the data. This can be a single number, a range separated with "-", or a comma-separated combination of numbers and ranges.
* working_dir: a directory where the experiment will be created
* specfile: optional, but typically used for beamline aps_34idc
* beamline: optional, specifies at which beamline the experiment was performed. If not configured, the beamline preprocessing and visualization is not available.
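The scan-range syntax described above (single numbers, "-" ranges, and comma-separated combinations) can be illustrated with a small parser. This is only a sketch of the accepted syntax, not cohere's actual parsing code:

```python
def parse_scan_ranges(spec):
    """Expand a spec such as '4,6-8,11' into the list [4, 6, 7, 8, 11]."""
    scans = []
    for part in spec.split(","):
        if "-" in part:
            # An inclusive range, e.g. '6-8' -> 6, 7, 8
            lo, hi = (int(x) for x in part.split("-"))
            scans.extend(range(lo, hi + 1))
        else:
            scans.append(int(part))
    return scans
```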
- setup_34idc.py
This script creates a new experiment directory structure.
Running this script:
::
python cohere-scripts/setup_34idc.py <id> <scan range> <conf_dir> --specfile <specfile> --copy_prep
The parameters are as follows:
* id: an arbitrary literal value assign to this experiment
* scan range: scans that will be included in the data. This can be a single number, a range separated with "-", or a comma-separated combination of numbers and ranges.
* conf_dir: a directory from which the configuration files will be copied
* specfile: optional, used when specfile configured in <conf_dir>/config file should be replaced by another specfile
* copy_prep: a switch parameter; set to True if the prep_data.tif file should be copied from the experiment containing the <conf_dir> into the prep directory of the newly created experiment
- beamline_preprocess.py
To run this script a configuration file "config_prep" must be defined in the <experiment_dir>/conf directory and the main configuration "config" file must have the beamline parameter configured. This script reads raw data, applies corrections based on physical properties of the instrument used during the experiment, and optionally aligns and combines multiple scans. The prepared data is stored in the <experiment_dir>/preprocessed_data/prep_data.tif file.
note: when separate_scan is configured to True, a prep_data.tif file is created for each scan.
Running this script:
::
python cohere-scripts/beamline_preprocess.py <experiment_dir>
The parameters are as follows:
- experiment directory: directory of the experiment space
- standard_preprocess.py
To run this script a configuration file "config_data" must be defined in the <experiment_dir>/conf directory, and the "prep_data.tif" file must be present in experiment space. This script reads the preprocessed data, formats the data according to configured parameters, and produces data.tif file. The file is stored in <experiment_dir>/phasing_data/data.tif file.
Running this script:
::
python cohere-scripts/standard_preprocess.py <experiment_dir>
The parameters are as follows:
* experiment directory: directory of the experiment space
- run_reconstruction.py
To run this script a configuration file "config_rec" must be defined in the <experiment_dir>/conf directory, and the "data.tif" file must be present in the experiment space. This script reads the data file and executes the phasing script. The reconstruction results are saved in the <experiment_dir>/results_phasing directory.
note: The results might be saved in a different location in the experiment space, depending on the use case. Refer to the 'Experiment' section for details.
Running this script:
::
python cohere-scripts/run_reconstruction.py <experiment_dir> --rec_id <alternate reconstruction id>
The parameters are as follows:
* experiment directory: directory of the experiment space.
* rec_id: optional parameter; when present, the alternate configuration will be used to run the reconstruction. Refer to the 'Experiment' section for details.
- beamline_visualization.py
To run this script a configuration file "config_disp" must be defined in the <experiment_dir>/conf directory, the main configuration "config" file must have the beamline parameter configured, and the reconstruction must be completed. This script reads the reconstructed files and processes them to create .vts files that can be viewed with visualization tools such as ParaView. The script will process the "image.npy" files in the experiment space defined by <experiment_dir>. If the "results_dir" configuration parameter is defined in config_disp, the program will find and process all image.npy files in that directory tree; otherwise it will find and process all image.npy files in the experiment directory tree. If the rec_id parameter is present, the script will find and process all image.npy files in the directory tree starting with <experiment_dir>/results_phasing_<rec_id>. If the --image_file option is used, the program will process the given single file.
Running this script:
::
python cohere-scripts/beamline_visualization.py <experiment_dir> --rec_id <reconstruction id> --image_file <image_file>
The parameters are as follows:
* experiment directory: directory of the experiment space
* rec_id: optional, id of alternate reconstruction, defined by alternate configuration file rec_config_<rec_id>
* image_file: optional parameter, if given, this file will be processed.
- everything.py
To run this script all configuration files must be defined. This script runs the scripts in the following order: beamline_preprocess.py, standard_preprocess.py, run_reconstruction.py, and beamline_visualization.py. If the beamline parameter is not defined in the experiment main configuration file "config", the beamline_preprocess.py and beamline_visualization.py scripts will be omitted, as they are customized for a beamline.
Running this script:
::
python cohere-scripts/everything.py <experiment_dir> --rec_id <reconstruction id>
The parameters are as follows:
* experiment directory: directory of the experiment space
* rec_id: optional parameter, when present, the alternate configuration will be used to run reconstruction
- cdi_window.py
This script starts a GUI that offers a complete interface to run all the scripts described above. In addition, the GUI offers an easy way to modify the configuration.
Running this script:
::
python cohere-scripts/cdi_window.py
| 64.094891 | 967 | 0.780207 |
f5eab477c9e21fb99b39252a3cc888523e6ab080 | 68 | rst | reStructuredText | HISTORY.rst | alexmorley/hmrb | c1a5505f7c6ce0886f1dd347c38863cbefb96a4f | [
"Apache-2.0"
] | null | null | null | HISTORY.rst | alexmorley/hmrb | c1a5505f7c6ce0886f1dd347c38863cbefb96a4f | [
"Apache-2.0"
] | null | null | null | HISTORY.rst | alexmorley/hmrb | c1a5505f7c6ce0886f1dd347c38863cbefb96a4f | [
"Apache-2.0"
] | null | null | null | 1.0.0 (29.04.2020)
++++++++++++++++++
- Initial open-source release
| 17 | 29 | 0.514706 |
6dbe6a110928e846ac16eeea9821d60012d4fe72 | 883 | rst | reStructuredText | docs/source/install.rst | adam-coogan/swyft | c54bdd9f77ddf02fda857e26640df012cbe545fc | [
"MIT"
] | null | null | null | docs/source/install.rst | adam-coogan/swyft | c54bdd9f77ddf02fda857e26640df012cbe545fc | [
"MIT"
] | 1 | 2021-05-06T19:53:33.000Z | 2021-07-22T09:32:30.000Z | docs/source/install.rst | adam-coogan/swyft | c54bdd9f77ddf02fda857e26640df012cbe545fc | [
"MIT"
] | null | null | null | Installation
===============
pip
--------
**After installing** `pytorch <https://pytorch.org/get-started/locally/>`_, please run the command:
::

   pip install swyft
github
---------
If you want the latest version, it is also possible to install *swyft* from GitHub.
To do so, run the command:
::

   pip install git+https://github.com/undark-lab/swyft.git
develop
---------
If you're interested in contributing to swyft, there is another procedure.
First clone the GitHub repo, navigate to it in your terminal, and from within that directory run the command:
::

   pip install -e .[dev]
The :code:`-e` flag will install *swyft* in development mode such that your version of the code is used when *swyft* is imported.
The :code:`[dev]` flag installs the extra tools necessary to format and test your contribution.
.. _pytorch: https://pytorch.org/get-started/locally/
| 26.757576 | 129 | 0.706682 |
d3e53b7610700b0b01042b88c79b8128c7ba3973 | 144 | rst | reStructuredText | doc/newsfragments/reload_schedule_task_target.change.rst | kn-ms/testplan | 91959ceace33cfd500f95a6630b0f2bd5879453e | [
"Apache-2.0"
] | 29 | 2021-01-22T07:01:39.000Z | 2022-03-23T09:35:16.000Z | doc/newsfragments/reload_schedule_task_target.change.rst | kn-ms/testplan | 91959ceace33cfd500f95a6630b0f2bd5879453e | [
"Apache-2.0"
] | 26 | 2021-01-24T16:15:08.000Z | 2022-03-09T08:01:00.000Z | doc/newsfragments/reload_schedule_task_target.change.rst | kn-ms/testplan | 91959ceace33cfd500f95a6630b0f2bd5879453e | [
"Apache-2.0"
] | 18 | 2021-01-22T07:01:42.000Z | 2022-01-26T11:21:14.000Z | * Improvement for interactive mode :ref:`reload feature <Interactive_Reload>` can now pick up changes to the test added by ``plan.schedule()``.
| 72 | 143 | 0.763889 |
83bbeee619cb3dc24f4777e791dd234ca05c3fc2 | 630 | rst | reStructuredText | docs/source/index.rst | KWR-Water/greta | ddeb23ac25b5d6efb1a00a99e6671a63eb654c22 | [
"MIT"
] | null | null | null | docs/source/index.rst | KWR-Water/greta | ddeb23ac25b5d6efb1a00a99e6671a63eb654c22 | [
"MIT"
] | null | null | null | docs/source/index.rst | KWR-Water/greta | ddeb23ac25b5d6efb1a00a99e6671a63eb654c22 | [
"MIT"
] | null | null | null | .. HGC documentation master file, created by
sphinx-quickstart on Thu Oct 17 15:06:25 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Introduction
===============================
SSTR - A Python package to model the behavior of Organic MicroPollutants (OMPs) and pathogens for 4 standard types of Public Supply Well Fields (PSWFs).
.. toctree::
   :maxdepth: 2
   :glob:

   getting-started.rst
   tutorial.rst
   faq.rst
   glossary.rst
   API reference <api/modules.rst>
   about.rst
   known_issues.rst
.. example.rst very large snippet
| 26.25 | 136 | 0.687302 |
5d139df545e2cde0ac426b7406e0a94d4f950892 | 547 | rst | reStructuredText | build/pkgs/kenzo/SPKG.rst | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 1,742 | 2015-01-04T07:06:13.000Z | 2022-03-30T11:32:52.000Z | build/pkgs/kenzo/SPKG.rst | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 66 | 2015-03-19T19:17:24.000Z | 2022-03-16T11:59:30.000Z | build/pkgs/kenzo/SPKG.rst | UCD4IDS/sage | 43474c96d533fd396fe29fe0782d44dc7f5164f7 | [
"BSL-1.0"
] | 495 | 2015-01-10T10:23:18.000Z | 2022-03-24T22:06:11.000Z | kenzo: Construct topological spaces and compute homology groups
===============================================================
Description
-----------
Kenzo is a package to compute properties (mainly homology groups) of
topological spaces. It allows defining spaces created from others by
constructions like loop spaces, classifying spaces and so on.
License
-------
GPL
Upstream Contact
----------------
- https://github.com/gheber/kenzo
- https://github.com/miguelmarco/kenzo/
Dependencies
------------
- ECL (Embedded Common Lisp)
| 19.535714 | 68 | 0.627057 |
d985f8c2c00ba9f2b87b14ea9e3d23ab8cbe46f6 | 807 | rst | reStructuredText | README.rst | rajgiriUW/BGlib | 128893abf1083d2c0892148c9fbf0b2affa43f65 | [
"MIT"
] | null | null | null | README.rst | rajgiriUW/BGlib | 128893abf1083d2c0892148c9fbf0b2affa43f65 | [
"MIT"
] | null | null | null | README.rst | rajgiriUW/BGlib | 128893abf1083d2c0892148c9fbf0b2affa43f65 | [
"MIT"
] | null | null | null | BGlib
=====
.. image:: https://github.com/pycroscopy/BGlib/workflows/build/badge.svg?branch=master
   :target: https://github.com/pycroscopy/BGlib/actions?query=workflow%3Abuild
   :alt: GitHub Actions

.. image:: https://img.shields.io/pypi/v/BGlib.svg
   :target: https://pypi.org/project/bglib/
   :alt: PyPI

.. image:: https://img.shields.io/pypi/l/sidpy.svg
   :target: https://pypi.org/project/sidpy/
   :alt: License

.. image:: http://pepy.tech/badge/BGlib
   :target: http://pepy.tech/project/BGlib
   :alt: Downloads

.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.4542239.svg
   :target: https://doi.org/10.5281/zenodo.4542239
   :alt: DOI
Band Excitation (BE) and General Mode (G-Mode) Scanning Probe Microscopy (SPM) data format translation, analysis and visualization codes
| 32.28 | 136 | 0.705081 |
647bc569cbb76ac9d831943b5dfb2a0d9fa4affb | 439 | rst | reStructuredText | argumentclinic/docs/sphinx/requirements_autorun.rst | catherinedevlin/argument-clinic | 19f33ee668916c8c079d118146223ebba8ea0bfc | [
"MIT"
] | 1 | 2015-11-12T02:38:59.000Z | 2015-11-12T02:38:59.000Z | argumentclinic/docs/sphinx/requirements_autorun.rst | catherinedevlin/argument-clinic | 19f33ee668916c8c079d118146223ebba8ea0bfc | [
"MIT"
] | null | null | null | argumentclinic/docs/sphinx/requirements_autorun.rst | catherinedevlin/argument-clinic | 19f33ee668916c8c079d118146223ebba8ea0bfc | [
"MIT"
] | null | null | null | Software Requirements (``autorun``)
===================================
(Discovered with Sphinx's ``autorun`` extension)
Python packages
---------------
.. runblock:: pycon

   >>> import sys
   >>> sys.path.insert(0, '.')
   >>> from get_reqs import requirements_met
   >>> print("\n".join(requirements_met()))
PostgreSQL
----------
Your system should run PostgreSQL 9.1 or better.
.. runblock:: console

   $ psql --version
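The ``get_reqs.requirements_met`` helper used above is not shown on this page; the following is a stdlib-only sketch of what such a check might look like (the argument and the output format are assumptions, not the project's code):

```python
import importlib.metadata

def requirements_met(requirements):
    """Yield 'name version' for each installed requirement, or flag it missing."""
    for name in requirements:
        try:
            yield f"{name} {importlib.metadata.version(name)}"
        except importlib.metadata.PackageNotFoundError:
            yield f"{name} MISSING"

report = list(requirements_met(["pip", "surely-not-a-real-package"]))
```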
| 16.259259 | 48 | 0.567198 |
5ba91cf554c9407c3e14b68d1f848d3f7fa6ee9b | 2,451 | rst | reStructuredText | docs/lottery.rst | Slurpcy/YamiCogs | b13a920a5a50050c4f85e6072e7b004361bd9109 | [
"MIT"
] | 4 | 2020-10-19T17:42:35.000Z | 2021-10-10T18:31:07.000Z | docs/lottery.rst | Slurpcy/YamiCogs | b13a920a5a50050c4f85e6072e7b004361bd9109 | [
"MIT"
] | 12 | 2020-12-27T15:03:20.000Z | 2022-02-11T17:43:22.000Z | docs/lottery.rst | Slurpcy/YamiCogs | b13a920a5a50050c4f85e6072e7b004361bd9109 | [
"MIT"
] | 6 | 2020-12-23T15:09:11.000Z | 2021-12-17T14:55:17.000Z | .. _lottery:
=======
Lottery
=======
This cog is currently in Beta status. Not all features have been implemented and/or documented.
| This is the cog guide for the lottery cog.
| You will find detailed docs about usage and commands.
``[p]`` is considered as your prefix.
.. note:: To use this cog, load it by typing this::
[p]load lottery
.. _lottery-usage:
-----
Usage
-----
Lottery Games
.. _lottery-commands:
--------
Commands
--------
.. _lottery-command-lottery:
^^^^^^^
lottery
^^^^^^^
**Syntax**
.. code-block:: none
[p]lottery
**Description**
Lottery Game
.. _lottery-command-lottery-lucky3:
""""""
lucky3
""""""
**Syntax**
.. code-block:: none
[p]lottery lucky3
**Description**
Play a game of Lucky 3
.. _lottery-command-lottery-games:
"""""
games
"""""
**Syntax**
.. code-block:: none
[p]lottery games
**Description**
View game explanations
.. _lottery-command-lottoset:
^^^^^^^^
lottoset
^^^^^^^^
**Syntax**
.. code-block:: none
[p]lottoset
**Description**
Lottery Settings
.. _lottery-command-lottoset-lucky3:
""""""
lucky3
""""""
**Syntax**
.. code-block:: none
[p]lottoset lucky3
**Description**
Configure settings for Lucky 3
.. _lottery-command-lottoset-lucky3-prize:
"""""
prize
"""""
**Syntax**
.. code-block:: none
[p]lottoset lucky3 prize <prize>
**Description**
Set the Prize amount
.. _lottery-command-lottoset-lucky3-cost:
""""
cost
""""
**Syntax**
.. code-block:: none
[p]lottoset lucky3 cost <cost>
**Description**
Set the cost per game
.. _lottery-command-lottoset-lucky3-icons:
"""""
icons
"""""
**Syntax**
.. code-block:: none
[p]lottoset lucky3 icons <icons>
**Description**
Sets the number of Emojis to choose from
* Valid options are 2-9
Approximate win percentages are
.. code-block:: none
| Icons: 2 | 25.0%
| Icons: 3 | 11.1%
| Icons: 4 | 6.3%
| Icons: 5 | 4.0%
| Icons: 6 | 2.8%
| Icons: 7 | 2.1%
| Icons: 8 | 1.6%
| Icons: 9 | 1.2%
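The percentages in the table are consistent with a match-three game: with ``n`` equally likely icons per slot and three slots, there are ``n`` winning triples out of ``n**3`` outcomes, i.e. 1/n². This derivation is inferred from the table, not taken from the cog's source:

```python
def lucky3_win_probability(icons):
    """n winning triples out of n**3 equally likely outcomes -> 1 / n**2."""
    return icons / icons ** 3

# Reproduce the table for the valid 2-9 icon settings.
table = {n: lucky3_win_probability(n) for n in range(2, 10)}
```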
.. _lottery-command-lottoset-lucky3-enable:
""""""
enable
""""""
**Syntax**
.. code-block:: none
[p]lottoset lucky3 enable <state>
**Description**
Enable or Disable the Lucky3 game.
<state> should be any of these combinations,
`on/off`, `yes/no`, `1/0`, `true/false`
.. _lottery-command-lottoset-info:
""""
info
""""
**Syntax**
.. code-block:: none
[p]lottoset info
**Description**
View configured settings
| 11.671429 | 96 | 0.613219 |
a32daf99770e1d56ffa9274a5c7a383d69652e9e | 813 | rst | reStructuredText | src/interpolatedGroundMotion.rst | gaaraujo/OpenSeesPyDoc | a7424f5a1ac5cbda2c221fd68af5b5f3564e2dbe | [
"MIT"
] | 79 | 2017-12-25T14:37:24.000Z | 2022-03-18T19:28:20.000Z | src/interpolatedGroundMotion.rst | gaaraujo/OpenSeesPyDoc | a7424f5a1ac5cbda2c221fd68af5b5f3564e2dbe | [
"MIT"
] | 212 | 2018-02-23T21:03:29.000Z | 2022-02-16T15:41:23.000Z | src/interpolatedGroundMotion.rst | gaaraujo/OpenSeesPyDoc | a7424f5a1ac5cbda2c221fd68af5b5f3564e2dbe | [
"MIT"
] | 97 | 2017-12-25T14:37:25.000Z | 2022-03-25T17:14:06.000Z | .. include:: sub.txt
============================
Interpolated Ground Motion
============================
.. function:: groundMotion(gmTag,'Interpolated',*gmTags,'-fact',facts)
   :noindex:
This command is used to construct an interpolated GroundMotion object, where the motion is determined by combining several previously defined ground motions in the load pattern.
======================== =============================================================
``gmTag`` |int| unique tag among ground motions in load pattern
``gmTags`` |listi| the tags of existing ground motions in pattern to be used for interpolation
``facts`` |listf| the interpolation factors. (optional)
======================== =============================================================
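The interpolation itself is simply a weighted sum of the listed ground motions. Below is a sketch of the idea on plain time histories; it is not the OpenSees implementation and assumes all motions share the same time grid:

```python
def combine_motions(motions, factors):
    """Weighted combination on a shared time grid:
    out[k] = sum_i factors[i] * motions[i][k]."""
    assert motions and len(motions) == len(factors)
    n = len(motions[0])
    return [sum(f * m[k] for f, m in zip(factors, motions)) for k in range(n)]

combined = combine_motions([[0.0, 1.0, 2.0], [0.0, -1.0, 0.5]], [0.5, 0.5])
```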
| 45.166667 | 180 | 0.503075 |
6197f26dca996507348f2d1bd8574a678feab8f4 | 371 | rst | reStructuredText | lv_examples/src/lv_ex_widgets/lv_ex_btn/index.rst | Yinyifeng18/lv_sim_qt | 4609c359fe7019e4d3b3a743b4b64fa03bb7a437 | [
"MIT"
] | 77 | 2022-01-12T01:25:12.000Z | 2022-03-29T11:52:38.000Z | lv_examples/src/lv_ex_widgets/lv_ex_btn/index.rst | Yinyifeng18/lv_sim_qt | 4609c359fe7019e4d3b3a743b4b64fa03bb7a437 | [
"MIT"
] | 1 | 2021-09-24T17:04:01.000Z | 2021-09-24T17:04:01.000Z | lv_examples/src/lv_ex_widgets/lv_ex_btn/index.rst | Yinyifeng18/lv_sim_qt | 4609c359fe7019e4d3b3a743b4b64fa03bb7a437 | [
"MIT"
] | 9 | 2022-01-12T13:28:45.000Z | 2022-02-17T11:44:35.000Z | C
^
Simple Buttons
""""""""""""""""
.. lv_example:: lv_ex_widgets/lv_ex_btn/lv_ex_btn_1
   :language: c

.. lv_example:: lv_ex_widgets/lv_ex_btn/lv_ex_btn_2
   :language: c

MicroPython
^^^^^^^^^^^

Simple Buttons
""""""""""""""""

.. lv_example:: lv_ex_widgets/lv_ex_btn/lv_ex_btn_1
   :language: py

.. lv_example:: lv_ex_widgets/lv_ex_btn/lv_ex_btn_2
   :language: py
| 15.458333 | 51 | 0.668464 |
ec2b0ee7577d714b89ad1aeb669424f4e236dd17 | 1,077 | rst | reStructuredText | README.rst | sun0x00/redtorch_python | 18c675c27369baf661e1efaaf74af6ecd54d53a0 | [
"MIT"
] | 1 | 2018-04-22T16:00:59.000Z | 2018-04-22T16:00:59.000Z | README.rst | sun0x00/redtorch_python | 18c675c27369baf661e1efaaf74af6ecd54d53a0 | [
"MIT"
] | null | null | null | README.rst | sun0x00/redtorch_python | 18c675c27369baf661e1efaaf74af6ecd54d53a0 | [
"MIT"
] | 3 | 2018-05-13T03:33:25.000Z | 2018-09-21T07:56:08.000Z | RedTorch-Python
^^^^^^^^
提示
-----
项目停止维护,请移步Java移植版本RedTorch
简介
-----
本项目致力于为期货多账户管理提供便捷的的操作方法,由开源项目 `vnpy <http://www.vnpy.org/>`_ 修改而来,在使用本项目之前,请首先了解vnpy的相关协议和内容。
当前项目仅支持CTP接口的动态管理,如需在linux下使用,请清空redtorch.api.ctp,并直接将vnpy1.7.1 linux编译后的vnpy.api.ctp中的文件复制到redtorch.api.ctp文件夹中
环境准备
----
请参考 `vnpy-v1.7.1 <https://github.com/vnpy/vnpy/tree/v1.7.1>`_ 准备相关软件环境,主要是32位python27和TA-Lib
兼容性说明
-------
当前版本基于vnpy 1.7.1修改,完全复制了其包结构,但只移植了CTP接口,包名均已修改以便避与vnpy免冲突,因此不影响vnpy原版的正常使用,部分代码做了较大改动。
有兼容vnpy后续版本的计划
已知重要提醒
---------
当前项目仍然处于Alpha开发阶段,由于动态管理账户和vnpy原设计理念有较大不同,原界面为增量更新,所以在账户动态变更后,例如移除接口后账户信息仍在UI界面存在,所有代码,仅供参考
安装
----
推荐方法
::::::
通过命令pip install redtorch 安装redtorch,请确保python版本符合要求,TA-Lib已经安装
其他方法
::::::
下载本项目,使用IDE导入,推荐使用 `PyCharm <https://www.jetbrains.com/pycharm/>`_ ,运行redtorch/trader/run.py
联系作者
------
sun0x00@gmail.com
License
---------
MIT
用户在遵循本项目协议的同时,如果用户下载、安装、使用本项目中所提供的软件,软件作者对任何原因在使用本项目中提供的软件时可能对用户自己或他人造成的任何形式的损失和伤害不承担任何责任。
作者有权根据有关法律、法规的变化修改本项目协议。修改后的协议会随附于本项目的新版本中。
当发生有关争议时,以最新的协议文本为准。如果用户不同意改动的内容,用户可以自行删除本项目。如果用户继续使用本项目,则视为您接受本协议的变动。
| 17.370968 | 112 | 0.777159 |
2f462fa939b4c86a26b0dec657ead69a3cb8087c | 563 | rst | reStructuredText | vendor/packages/translate-toolkit/docs/formats/flex.rst | DESHRAJ/fjord | 8899b6286b23347c9b024334e61c33fe133e836d | [
"BSD-3-Clause"
] | null | null | null | vendor/packages/translate-toolkit/docs/formats/flex.rst | DESHRAJ/fjord | 8899b6286b23347c9b024334e61c33fe133e836d | [
"BSD-3-Clause"
] | 1 | 2021-12-13T20:55:07.000Z | 2021-12-13T20:55:07.000Z | vendor/packages/translate-toolkit/docs/formats/flex.rst | DESHRAJ/fjord | 8899b6286b23347c9b024334e61c33fe133e836d | [
"BSD-3-Clause"
] | null | null | null |
.. _flex:
Adobe Flex properties files
***************************
.. versionadded:: 1.8
Adobe Flex applications use .properties files similar to :doc:`Java properties
<properties>`, but with UTF-8 encoding, and therefore :doc:`prop2po
</commands/prop2po>` and po2prop are used for conversion.
We welcome more testing and feedback, but based on our solid support for
properties, this probably works perfectly.
.. _flex#references:
References
==========
* `Description for Adobe Flex properties files
<http://livedocs.adobe.com/flex/3/html/l10n_3.html>`_
| 24.478261 | 78 | 0.714032 |
d04e80ad407c197b70b9c408685f581386203803 | 1,004 | rst | reStructuredText | docs/howtolens.rst | PyJedi/PyAutoLens | bcfb2e7b447aa24508fc648d60b6fd9b4fd852e7 | [
"MIT"
] | null | null | null | docs/howtolens.rst | PyJedi/PyAutoLens | bcfb2e7b447aa24508fc648d60b6fd9b4fd852e7 | [
"MIT"
] | null | null | null | docs/howtolens.rst | PyJedi/PyAutoLens | bcfb2e7b447aa24508fc648d60b6fd9b4fd852e7 | [
"MIT"
] | null | null | null | .. _howtolens:
HowToLens Lectures
------------------
The best way to learn **PyAutoLens** is by going through the **HowToLens** lecture series on the `autolens workspace <https://github.com/Jammy2211/autolens_workspace>`_.
The lectures are provided as Jupyter notebooks - checkout
`this link <https://github.com/Jammy2211/autolens_workspace/blob/master/howtolens/chapter_1_introduction/tutorial_2_profiles.ipynb>`_
for an example of one of the first notebooks.
The series can be found in the workspace and consists of 5 chapters:
- **Introduction** - An introduction to strong gravitational lensing & **PyAutolens**.
- **Lens Modeling** - How to model strong lenses, including a primer on Bayesian non-linear analysis.
- **Pipelines** - How to build pipelines & tailor them to your own science case.
- **Inversions** - How to perform pixelized reconstructions of the source-galaxy.
- **Hyper-Mode** - How to use **PyAutoLens** advanced modeling features that adapt the model to the strong lens being analysed.
.. image:: http://picknik.io/PickNik_Logo3.png
:target: http://picknik.io/
:alt: picknik logo
Generate Mock Amazon order
--------------------------
Simple Python utility to create a simulated bin inventory and random
json order, which you can generate by running::
python random_orders.py order.json
Note that you can repeat experiments by setting the seed used, and you can
also modify the likelihood of the number of objects per bin::
usage: random_orders.py [-h] [--probabilites PROBABILITES] [--seed SEED]
filename
positional arguments:
filename filename to save the json order to
optional arguments:
-h, --help show this help message and exit
--probabilites PROBABILITES, -p PROBABILITES
Quote delimited list of probabilites. Eg "[0.5, 0.2,
0.2, 0.1]". Determines the likelyhood of filling up
with [1, 2...] elements the bins that aren't
determined by the contest rules. Defaults to [0.7,
0.2, 0.1], so that there are no bins with more than 3
elements.
--seed SEED, -s SEED
.. _ppo2:
.. automodule:: stable_baselines.ppo2
PPO2
====
The `Proximal Policy Optimization <https://arxiv.org/abs/1707.06347>`_ algorithm combines ideas from A2C (having multiple workers)
and TRPO (it uses a trust region to improve the actor).
The main idea is that after an update, the new policy should not be too far from the old policy.
To ensure this, PPO uses clipping to avoid too large an update.
.. note::
PPO2 is the implementation of OpenAI made for GPU. For multiprocessing, it uses vectorized environments
compared to PPO1 which uses MPI.
.. note::
PPO2 contains several modifications from the original algorithm not documented
by OpenAI: value function is also clipped and advantages are normalized.
Notes
-----
- Original paper: https://arxiv.org/abs/1707.06347
- Clear explanation of PPO on Arxiv Insights channel: https://www.youtube.com/watch?v=5P7I-xPq8u8
- OpenAI blog post: https://blog.openai.com/openai-baselines-ppo/
- ``python -m stable_baselines.ppo2.run_atari`` runs the algorithm for 40M
frames = 10M timesteps on an Atari game. See help (``-h``) for more
options.
- ``python -m stable_baselines.ppo2.run_mujoco`` runs the algorithm for 1M
frames on a Mujoco environment.
Can I use?
----------
- Recurrent policies: ✔️
- Multi processing: ✔️
- Gym spaces:
============= ====== ===========
Space Action Observation
============= ====== ===========
Discrete ✔️ ✔️
Box ✔️ ✔️
MultiDiscrete ✔️ ✔️
MultiBinary ✔️ ✔️
============= ====== ===========
Example
-------
Train a PPO agent on `CartPole-v1` using 4 processes.
.. code-block:: python
import gym
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common import make_vec_env
from stable_baselines import PPO2
# multiprocess environment
env = make_vec_env('CartPole-v1', n_envs=4)
model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=25000)
model.save("ppo2_cartpole")
del model # remove to demonstrate saving and loading
model = PPO2.load("ppo2_cartpole")
# Enjoy trained agent
obs = env.reset()
while True:
action, _states = model.predict(obs)
obs, rewards, dones, info = env.step(action)
env.render()
Parameters
----------
.. autoclass:: PPO2
:members:
:inherited-members:
Callbacks - Accessible Variables
--------------------------------
Depending on initialization parameters and timestep, different variables are accessible.
Variables accessible "From timestep X" are variables that can be accessed when
``self.timestep==X`` in the ``on_step`` function.
+-----------------------+-----------------+
| Variable              | Availability    |
+=======================+=================+
| - self                | From timestep 1 |
| - total_timesteps     |                 |
| - callback            |                 |
| - log_interval        |                 |
| - tb_log_name         |                 |
| - reset_num_timesteps |                 |
| - cliprange_vf        |                 |
| - new_tb_log          |                 |
| - writer              |                 |
| - t_first_start       |                 |
| - n_updates           |                 |
| - mb_obs              |                 |
| - mb_rewards          |                 |
| - mb_actions          |                 |
| - mb_values           |                 |
| - mb_dones            |                 |
| - mb_neglogpacs       |                 |
| - mb_states           |                 |
| - ep_infos            |                 |
| - actions             |                 |
| - values              |                 |
| - neglogpacs          |                 |
| - clipped_actions     |                 |
| - rewards             |                 |
| - infos               |                 |
+-----------------------+-----------------+
| - info                | From timestep 1 |
| - maybe_ep_info       |                 |
+-----------------------+-----------------+
Supported models
====================================
BackPACK expects models to be
`sequences <https://pytorch.org/docs/stable/nn.html#sequential>`_
of `PyTorch NN modules <https://pytorch.org/docs/stable/nn.html>`_.
For example,
.. code-block:: python
model = torch.nn.Sequential(
torch.nn.Linear(764, 64),
torch.nn.ReLU(),
torch.nn.Linear(64, 10)
)
This page lists the layers currently supported by BackPACK.
**Do not rewrite the** :code:`forward()` **function of the** :code:`Sequential` **or the inner modules!**
If the forward is not standard, the additional backward pass to compute second-order quantities will not match the actual function.
First-order extensions that extract information might work outside of this framework, but it is not tested.
.. raw:: html
<hr/>
For first-order extensions
--------------------------------------
You can use any layers, as long as they do not have parameters.
BackPACK can extract more information about the gradient w.r.t. the parameters of those layers:
* `Conv2d <https://pytorch.org/docs/stable/nn.html#conv2d>`_
* `Linear <https://pytorch.org/docs/stable/nn.html#linear>`_
**For some layers, the concept of an "individual gradient for a sample in a minibatch" is ill-defined.**
This is the case for Batch Normalization layers, for example.
.. raw:: html
<hr/>
For second-order extensions
--------------------------------------
BackPACK needs to know how to compute an additional backward pass.
In addition to the parametrized layers above, this is implemented for the following layers:
**Loss functions**
* `MSELoss <https://pytorch.org/docs/stable/nn.html#mseloss>`_
* `CrossEntropyLoss <https://pytorch.org/docs/stable/nn.html#crossentropyloss>`_
**Layers without parameters**
* `MaxPool2d <https://pytorch.org/docs/stable/nn.html#maxpool2d>`_
* `AvgPool2d <https://pytorch.org/docs/stable/nn.html#avgpool2d>`_
* `Dropout <https://pytorch.org/docs/stable/nn.html#dropout>`_
* `ReLU <https://pytorch.org/docs/stable/nn.html#relu>`_
* `Sigmoid <https://pytorch.org/docs/stable/nn.html#sigmoid>`_
* `Tanh <https://pytorch.org/docs/stable/nn.html#tanh>`_
Custom layers
--------------------------------------
:code:`torch.nn.functional.flatten` cannot be used in this setup because it is a function, not a module.
Use :code:`backpack.core.layers.Flatten` instead.
Variables
=========
.. contents::
:depth: 2
:local:
Introduction
------------
Variables are an integral feature of Robot Framework, and they can be
used in most places in test data. Most commonly, they are used in
arguments for keywords in test case tables and keyword tables, but
also all settings allow variables in their values. A normal keyword
name *cannot* be specified with a variable, but the BuiltIn_ keyword
:name:`Run Keyword` can be used to get the same effect.
Robot Framework has its own variables that can be used as scalars__, lists__
or `dictionaries`__ using syntax `${SCALAR}`, `@{LIST}` and `&{DICT}`,
respectively. In addition to this, `environment variables`_ can be used
directly with syntax `%{ENV_VAR}`.
Variables are useful, for example, in these cases:
- When strings change often in the test data. With variables you only
need to make these changes in one place.
- When creating system-independent and operating-system-independent test
data. Using variables instead of hard-coded strings eases that considerably
(for example, `${RESOURCES}` instead of `c:\resources`, or `${HOST}`
instead of `10.0.0.1:8080`). Because variables can be `set from the
command line`__ when tests are started, changing system-specific
variables is easy (for example, `--variable HOST:10.0.0.2:1234
--variable RESOURCES:/opt/resources`). This also facilitates
localization testing, which often involves running the same tests
with different strings.
- When there is a need to have objects other than strings as arguments
for keywords. This is not possible without variables.
- When different keywords, even in different test libraries, need to
communicate. You can assign a return value from one keyword to a
variable and pass it as an argument to another.
- When values in the test data are long or otherwise complicated. For
example, `${URL}` is shorter than
`http://long.domain.name:8080/path/to/service?foo=1&bar=2&zap=42`.
If a non-existent variable is used in the test data, the keyword using
it fails. If the same syntax that is used for variables is needed as a
literal string, it must be `escaped with a backslash`__ as in `\${NAME}`.
__ `Scalar variables`_
__ `List variables`_
__ `Dictionary variables`_
__ `Setting variables in command line`_
__ Escaping_
Using variables
---------------
This section explains how to use variables, including the normal scalar
variable syntax `${var}`, how to use variables in list and dictionary
contexts like `@{var}` and `&{var}`, respectively, and how to use environment
variables like `%{var}`. Different ways how to create variables are discussed
in the subsequent sections.
Robot Framework variables, similarly as keywords, are
case-insensitive, and also spaces and underscores are
ignored. However, it is recommended to use capital letters with
global variables (for example, `${PATH}` or `${TWO WORDS}`)
and small letters with local variables that are only available in certain
test cases or user keywords (for example, `${my var}`). Much more
importantly, though, case should be used consistently.
Variable name consists of the variable type identifier (`$`, `@`, `&`, `%`),
curly braces (`{`, `}`) and the actual variable name between the braces.
Unlike in some programming languages where similar variable syntax is
used, curly braces are always mandatory. Variable names can basically have
any characters between the curly braces. However, using only alphabetic
characters from a to z, numbers, underscore and space is recommended, and
it is even a requirement for using the `extended variable syntax`_.
.. _scalar variable:
.. _scalar variables:
Scalar variable syntax
~~~~~~~~~~~~~~~~~~~~~~
The most common way to use variables in Robot Framework test data is using
the scalar variable syntax like `${var}`. When this syntax is used, the
variable name is replaced with its value as-is. Most of the time variable
values are strings, but variables can contain any object, including numbers,
lists, dictionaries, or even custom objects.
The example below illustrates the usage of scalar variables. Assuming
that the variables `${GREET}` and `${NAME}` are available
and assigned to strings `Hello` and `world`, respectively,
both the example test cases are equivalent.
.. sourcecode:: robotframework
*** Test Cases ***
Constants
Log Hello
Log Hello, world!!
Variables
Log ${GREET}
Log ${GREET}, ${NAME}!!
When a scalar variable is used alone without any text or other variables
around it, like in `${GREET}` above, the variable is replaced with
its value as-is and the value can be any object. If the variable is not used
alone, like `${GREET}, ${NAME}!!` above, its value is first converted into
a string and then concatenated with the other data.
.. note:: Variable values are used as-is without conversions also when
passing arguments to keywords using the `named arguments`_
syntax like `argname=${var}`.
The example below demonstrates the difference between having a
variable in alone or with other content. First, let us assume
that we have a variable `${STR}` set to a string `Hello,
world!` and `${OBJ}` set to an instance of the following Java
object:
.. sourcecode:: java
public class MyObj {
public String toString() {
return "Hi, terra!";
}
}
With these two variables set, we then have the following test data:
.. sourcecode:: robotframework
*** Test Cases ***
Objects
KW 1 ${STR}
KW 2 ${OBJ}
KW 3 I said "${STR}"
KW 4 You said "${OBJ}"
Finally, when this test data is executed, different keywords receive
the arguments as explained below:
- :name:`KW 1` gets a string `Hello, world!`
- :name:`KW 2` gets an object stored to variable `${OBJ}`
- :name:`KW 3` gets a string `I said "Hello, world!"`
- :name:`KW 4` gets a string `You said "Hi, terra!"`
.. Note:: Converting variables to Unicode obviously fails if the variable
cannot be represented as Unicode. This can happen, for example,
if you try to use byte sequences as arguments to keywords so that
you catenate the values together like `${byte1}${byte2}`.
A workaround is creating a variable that contains the whole value
and using it alone in the cell (e.g. `${bytes}`) because then
the value is used as-is.
.. _list variable:
.. _list variables:
.. _list expansion:
List variable syntax
~~~~~~~~~~~~~~~~~~~~
When a variable is used as a scalar like `${EXAMPLE}`, its value is used
as-is. If a variable value is a list or list-like, it is also possible
to use it as a list variable like `@{EXAMPLE}`. In this case the list is expanded
and individual items are passed in as separate arguments. This is easiest to explain
with an example. Assuming that a variable `@{USER}` has value `['robot', 'secret']`,
the following two test cases are equivalent:
.. sourcecode:: robotframework
*** Test Cases ***
Constants
Login robot secret
List Variable
Login @{USER}
Robot Framework stores its own variables in one internal storage and allows
using them as scalars, lists or dictionaries. Using a variable as a list
requires its value to be a Python list or list-like object. Robot Framework
does not allow strings to be used as lists, but other iterable objects such
as tuples or dictionaries are accepted.
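As a small sketch, a tuple created with the BuiltIn :name:`Evaluate`
keyword can be used as a list variable like any other iterable:

.. sourcecode:: robotframework

   *** Test Cases ***
   Tuple as list variable
       ${tuple} =    Evaluate    ('one', 'two', 'three')
       Log Many    @{tuple}    # Logs 'one', 'two' and 'three'.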
Starting from Robot Framework 4.0, list expansion can be used in combination with
`list item access`__ making these usages possible:
.. sourcecode:: robotframework
*** Test Cases ***
Nested container
${nested} = Evaluate [['a', 'b', 'c'], {'key': ['x', 'y']}]
Log Many @{nested}[0] # Logs 'a', 'b' and 'c'.
Log Many @{nested}[1][key] # Logs 'x' and 'y'.
Slice
${items} = Create List first second third
Log Many @{items}[1:] # Logs 'second' and 'third'.
__ `Accessing sequence items`_
Using list variables with other data
''''''''''''''''''''''''''''''''''''
It is possible to use list variables with other arguments, including
other list variables.
.. sourcecode:: robotframework
*** Test Cases ***
Example
Keyword @{LIST} more args
Keyword ${SCALAR} @{LIST} constant
Keyword @{LIST} @{ANOTHER} @{ONE MORE}
Using list variables with settings
''''''''''''''''''''''''''''''''''
List variables can be used only with some of the settings__. They can
be used in arguments to imported libraries and variable files, but
library and variable file names themselves cannot be list
variables. Also with setups and teardowns a list variable cannot be used
as the name of the keyword, but it can be used in arguments. With tag related
settings they can be used freely. Using scalar variables is possible in
those places where list variables are not supported.
.. sourcecode:: robotframework
*** Settings ***
Library ExampleLibrary @{LIB ARGS} # This works
Library ${LIBRARY} @{LIB ARGS} # This works
Library @{LIBRARY AND ARGS} # This does not work
Suite Setup Some Keyword @{KW ARGS} # This works
Suite Setup ${KEYWORD} @{KW ARGS} # This works
Suite Setup @{KEYWORD AND ARGS} # This does not work
Default Tags @{TAGS} # This works
__ `All available settings in test data`_
.. _dictionary variable:
.. _dictionary variables:
.. _dictionary expansion:
Dictionary variable syntax
~~~~~~~~~~~~~~~~~~~~~~~~~~
As discussed above, a variable containing a list can be used as a `list
variable`_ to pass list items to a keyword as individual arguments.
Similarly a variable containing a Python dictionary or a dictionary-like
object can be used as a dictionary variable like `&{EXAMPLE}`. In practice
this means that the dictionary is expanded and individual items are passed as
`named arguments`_ to the keyword. Assuming that a variable `&{USER}` has
value `{'name': 'robot', 'password': 'secret'}`, the following two test cases
are equivalent.
.. sourcecode:: robotframework
*** Test Cases ***
Constants
Login name=robot password=secret
Dict Variable
Login &{USER}
Starting from Robot Framework 4.0, dictionary expansion can be used in combination with
`dictionary item access`__ making usages like `&{nested}[key]` possible.
__ `Accessing individual dictionary items`_
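For example, assuming a :name:`Login` keyword that accepts `name` and
`password` as named arguments, a nested dictionary could be expanded like
this (a sketch, the data and keyword are made up):

.. sourcecode:: robotframework

   *** Test Cases ***
   Nested dictionary expansion
       ${nested} =    Evaluate    {'user': {'name': 'robot', 'password': 'secret'}}
       Login    &{nested}[user]    # Same as 'Login    name=robot    password=secret'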
Using dictionary variables with other data
''''''''''''''''''''''''''''''''''''''''''
It is possible to use dictionary variables with other arguments, including
other dictionary variables. Because `named argument syntax`_ requires positional
arguments to be before named argument, dictionaries can only be followed by
named arguments or other dictionaries.
.. sourcecode:: robotframework
*** Test Cases ***
Example
Keyword &{DICT} named=arg
Keyword positional @{LIST} &{DICT}
Keyword &{DICT} &{ANOTHER} &{ONE MORE}
Using dictionary variables with settings
''''''''''''''''''''''''''''''''''''''''
Dictionary variables cannot generally be used with settings. The only exceptions
are imports, setups and teardowns where dictionaries can be used as arguments.
.. sourcecode:: robotframework
*** Settings ***
Library ExampleLibrary &{LIB ARGS}
Suite Setup Some Keyword &{KW ARGS} named=arg
.. _environment variable:
Accessing list and dictionary items
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to access items of subscriptable variables, e.g. lists and dictionaries,
using special syntax like `${var}[item]` or `${var}[nested][item]`.
Starting from Robot Framework 4.0, it is also possible to use item access together with
`list expansion`_ and `dictionary expansion`_ by using syntax `@{var}[item]` and
`&{var}[item]`, respectively.
.. note:: Prior to Robot Framework 3.1 the normal item access syntax was `@{var}[item]`
with lists and `&{var}[item]` with dictionaries. Robot Framework 3.1 introduced
the generic `${var}[item]` syntax along with some other nice enhancements and
the old item access syntax was deprecated in Robot Framework 3.2.
.. _sequence items:
Accessing sequence items
''''''''''''''''''''''''
It is possible to access a certain item of a variable containing a `sequence`__
(e.g. list, string or bytes) with the syntax `${var}[index]`, where `index`
is the index of the selected value. Indices start from zero, negative indices
can be used to access items from the end, and trying to access an item with
too large an index causes an error. Indices are automatically converted to
integers, and it is also possible to use variables as indices.
.. sourcecode:: robotframework
*** Test Cases ***
Positive index
Login ${USER}[0] ${USER}[1]
Title Should Be Welcome ${USER}[0]!
Negative index
Keyword ${SEQUENCE}[-1]
Index defined as variable
Keyword ${SEQUENCE}[${INDEX}]
Sequence item access also supports the `same "slice" functionality as Python`__
with syntax like `${var}[1:]`. With this syntax you do not get a single
item but a slice of the original sequence. Same way as with Python you can
specify the start index, the end index, and the step:
.. sourcecode:: robotframework
*** Test Cases ***
Start index
Keyword ${SEQUENCE}[1:]
End index
Keyword ${SEQUENCE}[:4]
Start and end
Keyword ${SEQUENCE}[2:-1]
Step
Keyword ${SEQUENCE}[::2]
Keyword ${SEQUENCE}[1:-1:10]
.. note:: The slice syntax is new in Robot Framework 3.1. It was extended to work
with `list expansion`_ like `@{var}[1:]` in Robot Framework 4.0.
.. note:: Prior to Robot Framework 3.2, item and slice access was only supported
with variables containing lists, tuples, or other objects considered
list-like. Nowadays all sequences, including strings and bytes, are
supported.
__ https://docs.python.org/3/glossary.html#term-sequence
__ https://docs.python.org/glossary.html#term-slice
.. _dictionary items:
Accessing individual dictionary items
'''''''''''''''''''''''''''''''''''''
It is possible to access a certain value of a dictionary variable
with the syntax `${NAME}[key]`, where `key` is the name of the
selected value. Keys are considered to be strings, but non-string
keys can be used as variables. Dictionary values accessed in this
manner can be used similarly as scalar variables.
If a key is a string, it is possible to access its value also using
attribute access syntax `${NAME.key}`. See `Creating dictionary variables`_
for more details about this syntax.
.. sourcecode:: robotframework
*** Test Cases ***
Dictionary variable item
Login ${USER}[name] ${USER}[password]
Title Should Be Welcome ${USER}[name]!
Key defined as variable
Log Many ${DICT}[${KEY}] ${DICT}[${42}]
Attribute access
Login ${USER.name} ${USER.password}
Title Should Be Welcome ${USER.name}!
Nested item access
''''''''''''''''''
Also nested subscriptable variables can be accessed using the same
item access syntax like `${var}[item1][item2]`. This is especially useful
when working with JSON data often returned by REST services. For example,
if a variable `${DATA}` contains `[{'id': 1, 'name': 'Robot'},
{'id': 2, 'name': 'Mr. X'}]`, this tests would pass:
.. sourcecode:: robotframework
*** Test Cases ***
Nested item access
Should Be Equal ${DATA}[0][name] Robot
Should Be Equal ${DATA}[1][id] ${2}
Environment variables
~~~~~~~~~~~~~~~~~~~~~
Robot Framework allows using environment variables in the test data using
the syntax `%{ENV_VAR_NAME}`. They are limited to string values. It is
possible to specify a default value, that is used if the environment
variable does not exists, by separating the variable name and the default
value with an equal sign like `%{ENV_VAR_NAME=default value}`.
Environment variables set in the operating system before the test execution are
available during it, and it is possible to create new ones with the keyword
:name:`Set Environment Variable` or delete existing ones with the
keyword :name:`Delete Environment Variable`, both available in the
OperatingSystem_ library. Because environment variables are global,
environment variables set in one test case can be used in other test
cases executed after it. However, changes to environment variables are
not effective after the test execution.
.. sourcecode:: robotframework
*** Test Cases ***
Environment variables
Log Current user: %{USER}
Run %{JAVA_HOME}${/}javac
Environment variables with defaults
Set port %{APPLICATION_PORT=8080}
.. note:: Support for specifying the default value is new in Robot Framework 3.2.
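The sketch below shows how a new environment variable could be created and
used during execution, assuming the OperatingSystem_ library has been
imported:

.. sourcecode:: robotframework

   *** Settings ***
   Library    OperatingSystem

   *** Test Cases ***
   Create environment variable
       Set Environment Variable    APPLICATION_PORT    8080
       Should Be Equal    %{APPLICATION_PORT}    8080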
Java system properties
~~~~~~~~~~~~~~~~~~~~~~
When running tests with Jython, it is possible to access `Java system properties`__
using same syntax as `environment variables`_. If an environment variable and a
system property with same name exist, the environment variable will be used.
.. sourcecode:: robotframework
*** Test Cases ***
System properties
Log %{user.name} running tests on %{os.name}
Log %{custom.property=default value}
__ http://docs.oracle.com/javase/tutorial/essential/environment/sysprop.html
Creating variables
------------------
Variables can spring into existence from different sources.
.. _Variable sections:
Variable section
~~~~~~~~~~~~~~~~
The most common source for variables are Variable sections in `test case
files`_ and `resource files`_. Variable sections are convenient, because they
allow creating variables in the same place as the rest of the test
data, and the needed syntax is very simple. Their main disadvantages are
that values are always strings and they cannot be created dynamically.
If either of these is a problem, `variable files`_ can be used instead.
Creating scalar variables
'''''''''''''''''''''''''
The simplest possible variable assignment is setting a string into a
scalar variable. This is done by giving the variable name (including
`${}`) in the first column of the Variable section and the value in
the second one. If the second column is empty, an empty string is set
as a value. Also an already defined variable can be used in the value.
.. sourcecode:: robotframework
*** Variables ***
${NAME} Robot Framework
${VERSION} 2.0
${ROBOT} ${NAME} ${VERSION}
It is also possible, but not obligatory,
to use the equals sign `=` after the variable name to make assigning
variables slightly more explicit.
.. sourcecode:: robotframework
*** Variables ***
${NAME} = Robot Framework
${VERSION} = 2.0
If a scalar variable has a long value, it can be `split into multiple rows`__
by using the `...` syntax. By default rows are concatenated together using
a space, but this can be changed by having `SEPARATOR=<sep>` as the first item.
.. sourcecode:: robotframework
*** Variables ***
${EXAMPLE} This value is joined
... together with a space.
${MULTILINE} SEPARATOR=\n
... First line.
... Second line.
... Third line.
__ `Dividing data to several rows`_
Creating list variables
'''''''''''''''''''''''
Creating list variables is as easy as creating scalar variables. Again, the
variable name is in the first column of the Variable section and
values in the subsequent columns. A list variable can have any number
of values, starting from zero, and if many values are needed, they
can be `split into several rows`__.
__ `Dividing data to several rows`_
.. sourcecode:: robotframework
*** Variables ***
@{NAMES} Matti Teppo
@{NAMES2} @{NAMES} Seppo
@{NOTHING}
@{MANY} one two three four
... five six seven
Creating dictionary variables
'''''''''''''''''''''''''''''
Dictionary variables can be created in the Variable section similarly as
list variables. The difference is that items need to be created using
`name=value` syntax or existing dictionary variables. If there are multiple
items with same name, the last value has precedence. If a name contains
a literal equal sign, it can be escaped__ with a backslash like `\=`.
.. sourcecode:: robotframework
*** Variables ***
&{USER 1} name=Matti address=xxx phone=123
&{USER 2} name=Teppo address=yyy phone=456
&{MANY} first=1 second=${2} ${3}=third
&{EVEN MORE} &{MANY} first=override empty=
... =empty key\=here=value
Dictionary variables have two extra properties
compared to normal Python dictionaries. First of all, values of these
dictionaries can be accessed like attributes, which means that it is possible
to use `extended variable syntax`_ like `${VAR.key}`. This only works if the
key is a valid attribute name and does not match any normal attribute
Python dictionaries have. For example, the individual value `${USER}[name]` can
also be accessed like `${USER.name}` (notice that `$` is needed in this
context), but using `${MANY.3}` is not possible.
.. tip:: With nested dictionary variables keys are accessible like
`${VAR.nested.key}`. This eases working with nested data structures.
Another special property of dictionary variables is
that they are ordered. This means that if these dictionaries are iterated,
their items always come in the order they are defined. This can be useful
if dictionaries are used as `list variables`_ with `for loops`_ or otherwise.
When a dictionary is used as a list variable, the actual value contains
dictionary keys. For example, `@{MANY}` variable would have value `['first',
'second', 3]`.
__ Escaping_
Variable file
~~~~~~~~~~~~~
Variable files are the most powerful mechanism for creating different
kind of variables. It is possible to assign variables to any object
using them, and they also enable creating variables dynamically. The
variable file syntax and taking variable files into use is explained
in section `Resource and variable files`_.
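As a quick sketch (the variable names are illustrative), a Python based
variable file simply defines variables as module attributes, and a dynamic
`get_variables` function can create variables programmatically:

.. sourcecode:: python

   # Scalar variable ${NAME}.
   NAME = "Robot Framework"
   # The LIST__ and DICT__ prefixes create the list and dictionary
   # variables @{NAMES} and &{USER}, respectively.
   LIST__NAMES = ["Matti", "Teppo"]
   DICT__USER = {"name": "robot", "password": "secret"}

   def get_variables(env="test"):
       # Dynamic variable files return a dictionary of variables and can
       # take arguments given when the file is taken into use.
       host = "localhost" if env == "test" else "production.example"
       return {"HOST": host}

Such a file could then be taken into use, for example, with
:option:`--variablefile path/to/file.py` on the command line.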
Setting variables in command line
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Variables can be set from the command line either individually with
the :option:`--variable (-v)` option or using a variable file with the
:option:`--variablefile (-V)` option. Variables set from the command line
are globally available for all executed test data files, and they also
override possible variables with the same names in the Variable section and in
variable files imported in the test data.
The syntax for setting individual variables is :option:`--variable
name:value`, where `name` is the name of the variable without
`${}` and `value` is its value. Several variables can be
set by using this option several times. Only scalar variables can be
set using this syntax and they can only get string values.
.. sourcecode:: bash
--variable EXAMPLE:value
--variable HOST:localhost:7272 --variable USER:robot
In the examples above, variables are set so that
- `${EXAMPLE}` gets the value `value`
- `${HOST}` and `${USER}` get the values
`localhost:7272` and `robot`
The basic syntax for taking `variable files`_ into use from the command line
is :option:`--variablefile path/to/variables.py`, and `Taking variable files into
use`_ section has more details. What variables actually are created depends on
what variables there are in the referenced variable file.
If both variable files and individual variables are given from the command line,
the latter have `higher priority`__.
__ `Variable priorities and scopes`_
Return values from keywords
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Return values from keywords can also be set into variables. This
allows communication between different keywords even in different test
libraries.
Variables set in this manner are otherwise similar to any other
variables, but they are available only in the `local scope`_
where they are created. Thus it is not possible, for example, to set
a variable like this in one test case and use it in another. This is
because, in general, automated test cases should not depend on each
other, and accidentally setting a variable that is used elsewhere
could cause hard-to-debug errors. If there is a genuine need for
setting a variable in one test case and using it in another, it is
possible to use BuiltIn_ keywords as explained in the next section.
Assigning scalar variables
''''''''''''''''''''''''''
Any value returned by a keyword can be assigned to a `scalar variable`_.
As illustrated by the example below, the required syntax is very simple:
.. sourcecode:: robotframework
*** Test Cases ***
Returning
${x} = Get X an argument
Log We got ${x}!
In the above example the value returned by the :name:`Get X` keyword
is first set into the variable `${x}` and then used by the :name:`Log`
keyword. Having the equals sign `=` after the variable name is
not obligatory, but it makes the assignment more explicit. Creating
local variables like this works both in test case and user keyword level.
Notice that although a value is assigned to a scalar variable, it can
be used as a `list variable`_ if it has a list-like value and as a `dictionary
variable`_ if it has a dictionary-like value.
.. sourcecode:: robotframework
*** Test Cases ***
Example
${list} = Create List first second third
Length Should Be ${list} 3
Log Many @{list}
Assigning list variables
''''''''''''''''''''''''
If a keyword returns a list or any list-like object, it is possible to
assign it to a `list variable`_:
.. sourcecode:: robotframework
*** Test Cases ***
Example
@{list} = Create List first second third
Length Should Be ${list} 3
Log Many @{list}
Because all Robot Framework variables are stored in the same namespace, there is
not much difference between assigning a value to a scalar variable or a list
variable. This can be seen by comparing the last two examples above. The main
differences are that when creating a list variable, Robot Framework
automatically verifies that the value is a list or list-like, and the stored
variable value will be a new list created from the return value. When
assigning to a scalar variable, the return value is not verified and the
stored value will be the exact same object that was returned.
Assigning dictionary variables
''''''''''''''''''''''''''''''
If a keyword returns a dictionary or any dictionary-like object, it is possible
to assign it to a `dictionary variable`_:
.. sourcecode:: robotframework
*** Test Cases ***
Example
&{dict} = Create Dictionary first=1 second=${2} ${3}=third
Length Should Be ${dict} 3
Do Something &{dict}
Log ${dict.first}
Because all Robot Framework variables are stored in the same namespace, it would
also be possible to assign a dictionary into a scalar variable and use it
later as a dictionary when needed. There are, however, some actual benefits
in creating a dictionary variable explicitly. First of all, Robot Framework
verifies that the returned value is a dictionary or dictionary-like, just as
it verifies that list variables can only get list-like values.
A bigger benefit is that the value is converted into the same special
dictionary that Robot Framework uses when `creating dictionary variables`_
in the Variable section.
Values in these dictionaries can be accessed using attribute access like
`${dict.first}` in the above example. These dictionaries are also ordered, but
if the original dictionary was not ordered, the resulting order is arbitrary.
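As a rough illustration of the attribute access, a plain Python sketch is
shown below. Robot Framework's actual implementation differs, but the idea
is the same: keys are reachable both by subscription and as attributes.

```python
class DotDict(dict):
    """Sketch of a dictionary that allows attribute access to its keys."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)


d = DotDict(first=1, second=2)
print(d.first)    # -> 1, same value as d['first']
print(list(d))    # insertion order is preserved
```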
Assigning multiple variables
''''''''''''''''''''''''''''
If a keyword returns a list or a list-like object, it is possible to assign
individual values into multiple scalar variables or into scalar variables and
a list variable.
.. sourcecode:: robotframework
*** Test Cases ***
Assign multiple
${a} ${b} ${c} = Get Three
${first} @{rest} = Get Three
@{before} ${last} = Get Three
${begin} @{middle} ${end} = Get Three
Assuming that the keyword :name:`Get Three` returns a list `[1, 2, 3]`,
the following variables are created:
- `${a}`, `${b}` and `${c}` with values `1`, `2`, and `3`, respectively.
- `${first}` with value `1`, and `@{rest}` with value `[2, 3]`.
- `@{before}` with value `[1, 2]` and `${last}` with value `3`.
- `${begin}` with value `1`, `@{middle}` with value `[2]` and `${end}`
  with value `3`.
It is an error if the returned list has more or less values than there are
scalar variables to assign. Additionally, only one list variable is allowed
and dictionary variables can only be assigned alone.
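The assignment rules above are conceptually similar to Python's extended
iterable unpacking, which may help in remembering them:

```python
values = [1, 2, 3]            # what a keyword like Get Three might return

a, b, c = values              # ${a}    ${b}    ${c} =
first, *rest = values         # ${first}    @{rest} =
*before, last = values        # @{before}    ${last} =
begin, *middle, end = values  # ${begin}    @{middle}    ${end} =

print(rest, before, middle)   # -> [2, 3] [1, 2] [2]
```

Like in Robot Framework, unpacking fails if the number of values does not
match, and only one starred (list) target is allowed.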
Using :name:`Set Test/Suite/Global Variable` keywords
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BuiltIn_ library has keywords :name:`Set Test Variable`,
:name:`Set Suite Variable` and :name:`Set Global Variable` which can
be used for setting variables dynamically during the test
execution. If a variable already exists within the new scope, its
value will be overwritten, and otherwise a new variable is created.
Variables set with :name:`Set Test Variable` keyword are available
everywhere within the scope of the currently executed test case. For
example, if you set a variable in a user keyword, it is available both
in the test case level and also in all other user keywords used in the
current test. Other test cases will not see variables set with this
keyword.
Variables set with :name:`Set Suite Variable` keyword are available
everywhere within the scope of the currently executed test
suite. Setting variables with this keyword thus has the same effect as
creating them using the `Variable section`_ in the test data file or
importing them from `variable files`_. Other test suites, including
possible child test suites, will not see variables set with this
keyword.
Variables set with :name:`Set Global Variable` keyword are globally
available in all test cases and suites executed after setting
them. Setting variables with this keyword thus has the same effect as
`creating from the command line`__ using the options :option:`--variable` or
:option:`--variablefile`. Because this keyword can change variables
everywhere, it should be used with care.
.. note:: :name:`Set Test/Suite/Global Variable` keywords set named
variables directly into `test, suite or global variable scope`__
and return nothing. On the other hand, another BuiltIn_ keyword
:name:`Set Variable` sets local variables using `return values`__.
__ `Setting variables in command line`_
__ `Variable scopes`_
__ `Return values from keywords`_
.. _built-in variable:
Built-in variables
------------------
Robot Framework provides some built-in variables that are available
automatically.
Operating-system variables
~~~~~~~~~~~~~~~~~~~~~~~~~~
Built-in variables related to the operating system ease making the test data
operating-system-agnostic.
.. table:: Available operating-system-related built-in variables
:class: tabular
+------------+------------------------------------------------------------------+
| Variable | Explanation |
+============+==================================================================+
| ${CURDIR} | An absolute path to the directory where the test data |
| | file is located. This variable is case-sensitive. |
+------------+------------------------------------------------------------------+
| ${TEMPDIR} | An absolute path to the system temporary directory. In UNIX-like |
| | systems this is typically :file:`/tmp`, and in Windows |
| | :file:`c:\\Documents and Settings\\<user>\\Local Settings\\Temp`.|
+------------+------------------------------------------------------------------+
| ${EXECDIR} | An absolute path to the directory where test execution was |
| | started from. |
+------------+------------------------------------------------------------------+
| ${/} | The system directory path separator. `/` in UNIX-like |
| | systems and :codesc:`\\` in Windows. |
+------------+------------------------------------------------------------------+
| ${:} | The system path element separator. `:` in UNIX-like |
| | systems and `;` in Windows. |
+------------+------------------------------------------------------------------+
| ${\\n} | The system line separator. :codesc:`\\n` in UNIX-like systems |
| | and :codesc:`\\r\\n` in Windows. |
+------------+------------------------------------------------------------------+
.. sourcecode:: robotframework
*** Test Cases ***
Example
Create Binary File ${CURDIR}${/}input.data Some text here${\n}on two lines
Set Environment Variable CLASSPATH ${TEMPDIR}${:}${CURDIR}${/}foo.jar
Number variables
~~~~~~~~~~~~~~~~
The variable syntax can be used for creating both integers and
floating point numbers, as illustrated in the example below. This is
useful when a keyword expects to get an actual number, and not a
string that just looks like a number, as an argument.
.. sourcecode:: robotframework
*** Test Cases ***
Example 1A
Connect example.com 80 # Connect gets two strings as arguments
Example 1B
Connect example.com ${80} # Connect gets a string and an integer
Example 2
Do X ${3.14} ${-1e-4} # Do X gets floating point numbers 3.14 and -0.0001
It is possible to create integers also from binary, octal, and
hexadecimal values using `0b`, `0o` and `0x` prefixes, respectively.
The syntax is case insensitive.
.. sourcecode:: robotframework
*** Test Cases ***
Example
Should Be Equal ${0b1011} ${11}
Should Be Equal ${0o10} ${8}
Should Be Equal ${0xff} ${255}
Should Be Equal ${0B1010} ${0XA}
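The prefixes mirror plain Python integer literals, which makes the
equivalences in the example above easy to verify:

```python
# The literals below correspond to the Robot Framework example above.
print(0b1011)   # -> 11
print(0o10)     # -> 8
print(0xff)     # -> 255
# Prefixes are case-insensitive, also when parsing strings with base 0:
print(int("0B1010", 0) == int("0XA", 0) == 10)  # -> True
```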
Boolean and None/null variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Boolean values and Python `None` and Java `null` can also be
created using the variable syntax, similarly to numbers.
.. sourcecode:: robotframework
*** Test Cases ***
Boolean
Set Status ${true} # Set Status gets Boolean true as an argument
Create Y something ${false} # Create Y gets a string and Boolean false
None
Do XYZ ${None} # Do XYZ gets Python None as an argument
Null
${ret} = Get Value arg # Checking that Get Value returns Java null
Should Be Equal ${ret} ${null}
These variables are case-insensitive, so for example `${True}` and
`${true}` are equivalent. Additionally, `${None}` and
`${null}` are synonyms, because when running tests on the Jython
interpreter, Jython automatically converts `None` and
`null` to the correct format when necessary.
Space and empty variables
~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to create spaces and empty strings using variables
`${SPACE}` and `${EMPTY}`, respectively. These variables are
useful, for example, when there would otherwise be a need to `escape
spaces or empty cells`__ with a backslash. If more than one space is
needed, it is possible to use the `extended variable syntax`_ like
`${SPACE * 5}`. In the following example, :name:`Should Be
Equal` keyword gets identical arguments but those using variables are
easier to understand than those using backslashes.
.. sourcecode:: robotframework
*** Test Cases ***
One space
Should Be Equal ${SPACE} \ \
Four spaces
Should Be Equal ${SPACE * 4} \ \ \ \ \
Ten spaces
Should Be Equal ${SPACE * 10} \ \ \ \ \ \ \ \ \ \ \
Quoted space
Should Be Equal "${SPACE}" " "
Quoted spaces
Should Be Equal "${SPACE * 2}" " \ "
Empty
Should Be Equal ${EMPTY} \
There is also an empty `list variable`_ `@{EMPTY}` and an empty `dictionary
variable`_ `&{EMPTY}`. Because they have no content, they basically
vanish when used somewhere in the test data. They are useful, for example,
with `test templates`_ when the `template keyword is used without
arguments`__ or when overriding list or dictionary variables in different
scopes. Modifying the value of `@{EMPTY}` or `&{EMPTY}` is not possible.
.. sourcecode:: robotframework
*** Test Cases ***
Template
[Template] Some keyword
@{EMPTY}
Override
Set Global Variable @{LIST} @{EMPTY}
Set Suite Variable &{DICT} &{EMPTY}
.. note:: `${SPACE}` represents the ASCII space (`\x20`) and `other spaces`__
should be specified using the `escape sequences`__ like `\xA0`
(NO-BREAK SPACE) and `\u3000` (IDEOGRAPHIC SPACE).
__ Escaping_
__ https://groups.google.com/group/robotframework-users/browse_thread/thread/ccc9e1cd77870437/4577836fe946e7d5?lnk=gst&q=templates#4577836fe946e7d5
__ http://jkorpela.fi/chars/spaces.html
__ Escaping_
Automatic variables
~~~~~~~~~~~~~~~~~~~
Some automatic variables can also be used in the test data. These
variables can have different values during the test execution and some
of them are not even available all the time. Altering the value of
these variables does not affect the original values, but some values
can be changed dynamically using keywords from the `BuiltIn`_ library.
.. table:: Available automatic variables
:class: tabular
+------------------------+-------------------------------------------------------+------------+
| Variable | Explanation | Available |
+========================+=======================================================+============+
| ${TEST NAME} | The name of the current test case. | Test case |
+------------------------+-------------------------------------------------------+------------+
| @{TEST TAGS} | Contains the tags of the current test case in | Test case |
| | alphabetical order. Can be modified dynamically using | |
| | :name:`Set Tags` and :name:`Remove Tags` keywords. | |
+------------------------+-------------------------------------------------------+------------+
| ${TEST DOCUMENTATION} | The documentation of the current test case. Can be set| Test case |
|                        | dynamically using the :name:`Set Test Documentation`  |            |
| | keyword. | |
+------------------------+-------------------------------------------------------+------------+
| ${TEST STATUS} | The status of the current test case, either PASS or | `Test |
| | FAIL. | teardown`_ |
+------------------------+-------------------------------------------------------+------------+
| ${TEST MESSAGE} | The message of the current test case. | `Test |
| | | teardown`_ |
+------------------------+-------------------------------------------------------+------------+
| ${PREV TEST NAME} | The name of the previous test case, or an empty string| Everywhere |
| | if no tests have been executed yet. | |
+------------------------+-------------------------------------------------------+------------+
| ${PREV TEST STATUS} | The status of the previous test case: either PASS, | Everywhere |
| | FAIL, or an empty string when no tests have been | |
| | executed. | |
+------------------------+-------------------------------------------------------+------------+
| ${PREV TEST MESSAGE} | The possible error message of the previous test case. | Everywhere |
+------------------------+-------------------------------------------------------+------------+
| ${SUITE NAME} | The full name of the current test suite. | Everywhere |
+------------------------+-------------------------------------------------------+------------+
| ${SUITE SOURCE} | An absolute path to the suite file or directory. | Everywhere |
+------------------------+-------------------------------------------------------+------------+
| ${SUITE DOCUMENTATION} | The documentation of the current test suite. Can be | Everywhere |
|                        | set dynamically using the :name:`Set Suite            |            |
| | Documentation` keyword. | |
+------------------------+-------------------------------------------------------+------------+
| &{SUITE METADATA} | The free metadata of the current test suite. Can be | Everywhere |
| | set using :name:`Set Suite Metadata` keyword. | |
+------------------------+-------------------------------------------------------+------------+
| ${SUITE STATUS} | The status of the current test suite, either PASS or | `Suite |
| | FAIL. | teardown`_ |
+------------------------+-------------------------------------------------------+------------+
| ${SUITE MESSAGE} | The full message of the current test suite, including | `Suite |
| | statistics. | teardown`_ |
+------------------------+-------------------------------------------------------+------------+
| ${KEYWORD STATUS} | The status of the current keyword, either PASS or | `User |
| | FAIL. | keyword |
| | | teardown`_ |
+------------------------+-------------------------------------------------------+------------+
| ${KEYWORD MESSAGE} | The possible error message of the current keyword. | `User |
| | | keyword |
| | | teardown`_ |
+------------------------+-------------------------------------------------------+------------+
| ${LOG LEVEL} | Current `log level`_. | Everywhere |
+------------------------+-------------------------------------------------------+------------+
| ${OUTPUT FILE} | An absolute path to the `output file`_. | Everywhere |
+------------------------+-------------------------------------------------------+------------+
| ${LOG FILE} | An absolute path to the `log file`_ or string NONE | Everywhere |
| | when no log file is created. | |
+------------------------+-------------------------------------------------------+------------+
| ${REPORT FILE} | An absolute path to the `report file`_ or string NONE | Everywhere |
| | when no report is created. | |
+------------------------+-------------------------------------------------------+------------+
| ${DEBUG FILE} | An absolute path to the `debug file`_ or string NONE | Everywhere |
| | when no debug file is created. | |
+------------------------+-------------------------------------------------------+------------+
| ${OUTPUT DIR} | An absolute path to the `output directory`_. | Everywhere |
+------------------------+-------------------------------------------------------+------------+
Suite related variables `${SUITE SOURCE}`, `${SUITE NAME}`,
`${SUITE DOCUMENTATION}` and `&{SUITE METADATA}` are
available already when test libraries and variable files are imported.
Possible variables inside these values are not yet resolved at
import time, though.
Variable priorities and scopes
------------------------------
Variables coming from different sources have different priorities and
are available in different scopes.
Variable priorities
~~~~~~~~~~~~~~~~~~~
*Variables from the command line*
Variables `set in the command line`__ have the highest priority of all
variables that can be set before the actual test execution starts. They
override possible variables created in Variable sections in test case
files, as well as in resource and variable files imported in the
test data.
Individually set variables (:option:`--variable` option) override the
variables set using `variable files`_ (:option:`--variablefile` option).
If the same individual variable is specified multiple times, the one
specified last overrides the earlier ones. This allows setting default values for
variables in a `start-up script`_ and overriding them from the command line.
Notice, though, that if multiple variable files contain the same variables,
the ones in the file specified first have the highest priority.
__ `Setting variables in command line`_
*Variable section in a test case file*
Variables created using the `Variable section`_ in a test case file
are available for all the test cases in that file. These variables
override possible variables with same names in imported resource and
variable files.
Variables created in the Variable sections are available in all other sections
in the file where they are created. This means that they can be used also
in the Setting section, for example, for importing more variables from
resource and variable files.
*Imported resource and variable files*
Variables imported from the `resource and variable files`_ have the
lowest priority of all variables created in the test data.
Variables from resource files and variable files have the same
priority. If several resource and/or variable files have the same
variables, the ones in the file imported first are taken into use.
If a resource file imports resource files or variable files,
variables in its own Variable section have a higher priority than
variables it imports. All these variables are available for files that
import this resource file.
Note that variables imported from resource and variable files are not
available in the Variable section of the file that imports them. This
is due to the Variable section being processed before the Setting section
where the resource files and variable files are imported.
*Variables set during test execution*
Variables set during the test execution either using `return values
from keywords`_ or `using Set Test/Suite/Global Variable keywords`_
always override possible existing
variables in the scope where they are set. In a sense they thus
have the highest priority, but on the other hand they do not affect
variables outside the scope they are defined.
*Built-in variables*
`Built-in variables`_ like `${TEMPDIR}` and `${TEST_NAME}`
have the highest priority of all variables. They cannot be overridden
using Variable section or from command line, but even they can be reset during
the test execution. An exception to this rule are `number variables`_, which
are resolved dynamically if no variable is found otherwise. They can thus be
overridden, but that is generally a bad idea. Additionally `${CURDIR}`
is special because it is replaced already during the test data processing time.
Variable scopes
~~~~~~~~~~~~~~~
Depending on where and how they are created, variables can have a
global, test suite, test case or local scope.
Global scope
''''''''''''
Global variables are available everywhere in the test data. These
variables are normally `set from the command line`__ with the
:option:`--variable` and :option:`--variablefile` options, but it is also
possible to create new global variables or change the existing ones
with the BuiltIn_ keyword :name:`Set Global Variable` anywhere in
the test data. Additionally also `built-in variables`_ are global.
It is recommended to use capital letters with all global variables.
Test suite scope
''''''''''''''''
Variables with the test suite scope are available anywhere in the
test suite where they are defined or imported. They can be created
in Variable sections, imported from `resource and variable files`_,
or set during the test execution using the BuiltIn_ keyword
:name:`Set Suite Variable`.
The test suite scope *is not recursive*, which means that variables
available in a higher-level test suite *are not available* in
lower-level suites. If necessary, `resource and variable files`_ can
be used for sharing variables.
Since these variables can be considered global in the test suite where
they are used, it is recommended to use capital letters also with them.
Test case scope
'''''''''''''''
Variables with the test case scope are visible in a test case and in
all user keywords the test uses. Initially there are no variables in
this scope, but it is possible to create them by using the BuiltIn_
keyword :name:`Set Test Variable` anywhere in a test case.
Variables in the test case scope are also, to some extent, global. It is
thus generally recommended to use capital letters with them too.
Local scope
'''''''''''
Test cases and user keywords have a local variable scope that is not
seen by other tests or keywords. Local variables can be created using
`return values`__ from executed keywords and user keywords also get
them as arguments__.
It is recommended to use lower-case letters with local variables.
__ `Setting variables in command line`_
__ `Return values from keywords`_
__ `User keyword arguments`_
Advanced variable features
--------------------------
Extended variable syntax
~~~~~~~~~~~~~~~~~~~~~~~~
Extended variable syntax allows accessing attributes of an object assigned
to a variable (for example, `${object.attribute}`) and even calling
its methods (for example, `${obj.getName()}`). It works both with
scalar and list variables, but is mainly useful with the former.
Extended variable syntax is a powerful feature, but it should
be used with care. Accessing attributes is normally not a problem, on
the contrary, because one variable containing an object with several
attributes is often better than having several variables. On the
other hand, calling methods, especially when they are used with
arguments, can make the test data pretty complicated to understand.
If that happens, it is recommended to move the code into a test library.
The most common usages of extended variable syntax are illustrated
in the example below. First assume that we have the following `variable file`_
and test case:
.. sourcecode:: python
class MyObject:
def __init__(self, name):
self.name = name
def eat(self, what):
return '%s eats %s' % (self.name, what)
def __str__(self):
return self.name
OBJECT = MyObject('Robot')
DICTIONARY = {1: 'one', 2: 'two', 3: 'three'}
.. sourcecode:: robotframework
*** Test Cases ***
Example
KW 1 ${OBJECT.name}
KW 2 ${OBJECT.eat('Cucumber')}
KW 3 ${DICTIONARY[2]}
When this test data is executed, the keywords get the arguments as
explained below:
- :name:`KW 1` gets string `Robot`
- :name:`KW 2` gets string `Robot eats Cucumber`
- :name:`KW 3` gets string `two`
The extended variable syntax is evaluated in the following order:
1. The variable is searched using the full variable name. The extended
variable syntax is evaluated only if no matching variable
is found.
2. The name of the base variable is created. The body of the name
consists of all the characters after the opening `{` until
the first occurrence of a character that is not an alphanumeric character
or a space. For example, the base variables of `${OBJECT.name}`
and `${DICTIONARY[2]}` are `OBJECT` and `DICTIONARY`,
respectively.
3. A variable matching the body is searched. If there is no match, an
exception is raised and the test case fails.
4. The expression inside the curly brackets is evaluated as a Python
expression, so that the base variable name is replaced with its
value. If the evaluation fails because of an invalid syntax or that
the queried attribute does not exist, an exception is raised and
the test fails.
5. The whole extended variable is replaced with the value returned
from the evaluation.
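The steps above can be sketched roughly in Python. This is a simplified,
hypothetical helper; the real implementation is more involved and, for
example, handles variable-name normalization.

```python
import re


def resolve_extended(expr, variables):
    """Resolve the text between ${ and }, e.g. 'OBJECT.name'."""
    if expr in variables:                  # step 1: the full name wins
        return variables[expr]
    match = re.match(r"[\w ]+", expr)      # step 2: base = leading name part
    base = match.group().strip() if match else ""
    if base not in variables:              # step 3: the base must exist
        raise NameError("Variable '${%s}' not found." % base)
    # steps 4-5: evaluate the whole expression as Python, with the base
    # variable name bound to its value
    return eval(expr, {}, {base: variables[base]})


print(resolve_extended("DICTIONARY[2]", {"DICTIONARY": {1: "one", 2: "two"}}))
# -> two
```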
If the object that is used is implemented with Java, the extended
variable syntax allows you to access attributes using so-called bean
properties. In essence, this means that if you have an object with the
`getName` method set into a variable `${OBJ}`, then the
syntax `${OBJ.name}` is equivalent to but clearer than
`${OBJ.getName()}`. The Python object used in the previous example
could thus be replaced with the following Java implementation:
.. sourcecode:: java
public class MyObject {
private String name;
public MyObject(String name) {
this.name = name;
}
public String getName() {
return name;
}
public String eat(String what) {
return name + " eats " + what;
}
public String toString() {
return name;
}
}
Many standard Python objects, including strings and numbers, have
methods that can be used with the extended variable syntax either
explicitly or implicitly. Sometimes this can be really useful and
reduce the need for setting temporary variables, but it is also easy
to overuse it and create really cryptic test data. The following
examples show a few good usages.
.. sourcecode:: robotframework
*** Test Cases ***
String
${string} = Set Variable abc
Log ${string.upper()} # Logs 'ABC'
Log ${string * 2} # Logs 'abcabc'
Number
${number} = Set Variable ${-2}
Log ${number * 10} # Logs -20
Log ${number.__abs__()} # Logs 2
Note that even though `abs(number)` is recommended over
`number.__abs__()` in normal Python code, using
`${abs(number)}` does not work. This is because the variable name
must be in the beginning of the extended syntax. Using `__xxx__`
methods in the test data like this is already a bit questionable, and
it is normally better to move this kind of logic into test libraries.
Extended variable syntax works also in `list variable`_ context.
If, for example, an object assigned to a variable `${EXTENDED}` has
an attribute `attribute` that contains a list as a value, it can be
used as a list variable `@{EXTENDED.attribute}`.
Extended variable assignment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to set attributes of
objects stored to scalar variables using `keyword return values`__ and
a variation of the `extended variable syntax`_. Assuming we have
variable `${OBJECT}` from the previous examples, attributes could
be set to it like in the example below.
__ `Return values from keywords`_
.. sourcecode:: robotframework
*** Test Cases ***
Example
${OBJECT.name} = Set Variable New name
${OBJECT.new_attr} = Set Variable New attribute
The extended variable assignment syntax is evaluated using the
following rules:
1. The assigned variable must be a scalar variable and have at least
one dot. Otherwise the extended assignment syntax is not used and
the variable is assigned normally.
2. If there exists a variable with the full name
(e.g. `${OBJECT.name}` in the example above) that variable
will be assigned a new value and the extended syntax is not used.
3. The name of the base variable is created. The body of the name
consists of all the characters between the opening `${` and
the last dot, for example, `OBJECT` in `${OBJECT.name}`
and `foo.bar` in `${foo.bar.zap}`. As the second example
illustrates, the base name may contain normal extended variable
syntax.
4. The name of the attribute to set is created by taking all the
characters between the last dot and the closing `}`, for
example, `name` in `${OBJECT.name}`. To be valid, the name must start
with a letter or an underscore and contain only letters, numbers and
underscores. If it does not, the attribute is considered invalid, the
extended syntax is not used, and a new variable with the full name is
created instead.
5. A variable matching the base name is searched. If no variable is
found, the extended syntax is not used and, instead, a new variable
is created using the full variable name.
6. If the found variable is a string or a number, the extended syntax
is ignored and a new variable created using the full name. This is
done because you cannot add new attributes to Python strings or
numbers, and this way the new syntax is also less
backwards-incompatible.
7. If all the previous rules match, the attribute is set to the base
variable. If setting fails for any reason, an exception is raised
and the test fails.
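A rough Python sketch of these rules follows. The helper is hypothetical
and simplified, but it mirrors the fall-back behaviour described above.

```python
class Obj:
    pass


def extended_assign(name, value, variables):
    """Assign 'OBJECT.attr' style names following the rules above."""
    base, dot, attr = name.rpartition(".")
    usable = (dot and attr.isidentifier()                  # rules 1 and 4
              and base in variables                        # rule 5
              and not isinstance(variables[base], (str, int, float)))  # rule 6
    if name in variables or not usable:                    # rule 2 / fall-back
        variables[name] = value                            # plain assignment
    else:
        setattr(variables[base], attr, value)              # rule 7


ns = {"OBJECT": Obj()}
extended_assign("OBJECT.name", "New name", ns)
print(ns["OBJECT"].name)  # -> New name
```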
.. note:: Unlike when assigning variables normally using `return
values from keywords`_, changes to variables done using the
extended assign syntax are not limited to the current
scope. Because no new variable is created but instead the
state of an existing variable is changed, all tests and
keywords that see that variable will also see the changes.
Variables inside variables
~~~~~~~~~~~~~~~~~~~~~~~~~~
Variables are allowed also inside variables, and when this syntax is
used, variables are resolved from the inside out. For example, if you
have a variable `${var${x}}`, then `${x}` is resolved
first. If it has the value `name`, the final value is then the
value of the variable `${varname}`. There can be several nested
variables, but resolving the outermost fails if any of them does not
exist.
In the example below, :name:`Do X` gets the value `${JOHN HOME}`
or `${JANE HOME}`, depending on whether :name:`Get Name` returns
`john` or `jane`. If it returns something else, resolving
`${${name} HOME}` fails.
.. sourcecode:: robotframework
*** Variables ***
${JOHN HOME} /home/john
${JANE HOME} /home/jane
*** Test Cases ***
Example
${name} = Get Name
Do X ${${name} HOME}
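The inside-out resolution can be sketched in Python. This is simplified:
real Robot Framework variable lookup is, for example, case-insensitive,
which the sketch ignores.

```python
import re


def resolve(text, variables):
    """Repeatedly replace innermost ${...} references until none remain."""
    innermost = re.compile(r"\$\{([^{}]*)\}")   # contents with no nested braces
    while innermost.search(text):
        text = innermost.sub(lambda m: str(variables[m.group(1)]), text)
    return text


ns = {"name": "john", "john HOME": "/home/john"}
print(resolve("${${name} HOME}", ns))  # -> /home/john
```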
.. _inline Python evaluation:
Inline Python evaluation
~~~~~~~~~~~~~~~~~~~~~~~~
Variable syntax can also be used for evaluating Python expressions. The
basic syntax is `${{expression}}` i.e. there are double curly braces around
the expression. The `expression` can be any valid Python expression such as
`${{1 + 2}}` or `${{['a', 'list']}}`. Spaces around the expression are allowed,
so also `${{ 1 + 2 }}` and `${{ ['a', 'list'] }}` are valid. In addition to
using normal `scalar variables`_, also `list variables`_ and
`dictionary variables`_ support `@{{expression}}` and `&{{expression}}` syntax,
respectively.
Main usages for this pretty advanced functionality are:
- Evaluating Python expressions involving Robot Framework's variables
(`${{len('${var}') > 3}}`, `${{$var[0] if $var is not None else None}}`).
- Creating values that are not Python base types
(`${{decimal.Decimal('0.11')}}`, `${{datetime.date(2019, 11, 5)}}`).
- Creating values dynamically (`${{random.randint(0, 100)}}`,
`${{datetime.date.today()}}`).
- Constructing collections, especially nested collections (`${{[1, 2, 3, 4]}}`,
`${{ {'id': 1, 'name': 'Example', 'children': [7, 9]} }}`).
- Accessing constants and other useful attributes in Python modules
(`${{math.pi}}`, `${{platform.system()}}`).
This functionality is somewhat similar to the `extended variable syntax`_
discussed earlier. As the examples above illustrate, this syntax is even more
powerful as it provides access to Python built-ins like `len()` and modules
like `math`. In addition to being able to use variables like `${var}` in
the expressions (they are replaced before evaluation), variables are also
available using the special `$var` syntax during evaluation. The whole expression
syntax is explained in the `Evaluating expressions`_ appendix.
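As an illustrative sketch (not Robot Framework's actual implementation),
evaluating the text inside `${{...}}` can be thought of as handing it to
Python's `eval()` with a namespace containing the needed modules and
variables:

```python
import math
import datetime

def evaluate_inline(expression, variables=None):
    """Evaluate the text found inside ${{...}} as a Python expression.

    Conceptual sketch only: the real implementation differs (for
    example, it exposes variables via the special $var syntax and
    imports modules on demand).
    """
    namespace = {"math": math, "datetime": datetime}
    namespace.update(variables or {})
    return eval(expression, namespace)

print(evaluate_inline("1 + 2"))                          # 3
print(evaluate_inline("math.pi > 3"))                    # True
print(evaluate_inline("len(var) > 3", {"var": "long"}))  # True
```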
.. tip:: Instead of creating complicated expressions, it is often better
         to move the logic into a `custom library`__. That eases
         maintenance, makes test data easier to understand and can also
         enhance execution speed.

.. note:: The inline Python evaluation syntax is new in Robot Framework 3.2.
__ `Creating test libraries`_
libai.data
##############################

libai.data.data_utils module
---------------------------------

.. currentmodule:: libai.data
.. automodule:: libai.data.data_utils
    :members:
        IndexedCachedDataset,
        IndexedDataset,
        MMapIndexedDataset,
        get_indexed_dataset,
        BlockIndexedDataset,
        SentenceIndexedDataset,
        SplitDataset,
        split_ds,

libai.data.datasets module
---------------------------------

.. currentmodule:: libai.data
.. automodule:: libai.data.datasets
    :members:
        MNISTDataset,
        CIFAR10Dataset,
        CIFAR100Dataset,
        ImageNetDataset,
        BertDataset

libai.data.samplers module
---------------------------------

.. currentmodule:: libai.data
.. automodule:: libai.data.samplers
    :members:
        CyclicSampler,
        SingleRoundSampler,

libai.data.build module
---------------------------------

.. currentmodule:: libai.data
.. automodule:: libai.data.build
    :members:
        build_nlp_train_val_test_loader,
        build_nlp_train_loader,
        build_nlp_test_loader,
        build_image_train_loader,
        build_image_test_loader,
G.711
-----
Introduction
~~~~~~~~~~~~
`G.711 <https://www.itu.int/rec/recommendation.asp?lang=en&parent=T-REC-G.711-198811-I>`__
is an ITU-T standard for audio companding: Pulse code modulation
(PCM) of voice frequencies.
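G.711 specifies two logarithmic companding laws, µ-law and A-law. As a
rough illustration (the Recommendation itself uses a piecewise-linear,
integer-valued approximation, not this continuous curve), the µ-law
compression characteristic can be sketched as:

```python
import math

MU = 255  # µ-law companding parameter used by G.711

def mu_law_compress(x):
    """Continuous µ-law compression curve for a sample x in [-1, 1].

    Illustrative only: real G.711 encoders use a piecewise-linear
    (segmented) approximation operating on integer PCM samples.
    """
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

print(mu_law_compress(0.0))  # zero maps to zero
print(mu_law_compress(1.0))  # full-scale input maps to full scale
```

Small amplitudes are expanded and large ones compressed, which is what
gives logarithmic PCM its improved signal-to-noise ratio for quiet speech.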
Source Code
~~~~~~~~~~~
This codec is based on the original `G.711 <https://www.itu.int/rec/recommendation.asp?lang=en&parent=T-REC-G.711-198811-I>`__
Recommendation of 25 November 1988. The corresponding ANSI-C code is available in the G.711 module of the
`ITU-T G.191 <https://www.itu.int/rec/T-REC-G.191/en>`__ Software Tools Library (13 January 2019),
and this codec passes its test cases successfully.
Multi-instance and reentrance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The codec has not been tested for multi-instance use; for more information
about multi-instance support, see the codec source documentation. The codec
supports reentrance provided that the AGU address pointer registers
(:exc:`AGU_AUX_APx`), AGU offset registers (:exc:`AGU_AUX_OSx`), and AGU
modifier registers (:exc:`AGU_AUX_MODx`) are saved and restored; see [1]
in :ref:`Refs`.
Codec-Specific Build Options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:exc:`LTO_BUILD` and :exc:`REMAPITU_T` options are not available for this codec. Use the following command to build the G.711 codec:
.. code:: shell

    mdb -run -cl -nsim -tcf=<default TCF from /rules/common_hw_config.mk> g711demo.elf <number of frames> [u|A] [-r] [lili|loli|lilo] <input_file> <output_file> <BlockSize> <1stBlock> <NoOfBlocks> [-skip n]
The following table lists the G.711 codec parameters:

.. table:: Command-Line Option Descriptions for G.711 Codec
    :align: center
    :widths: 20,130

    +-----------------------------------+-----------------------------------+
    | **Option**                        | **Description**                   |
    +===================================+===================================+
    | :exc:`<input_file>`               | Specifies the input file          |
    +-----------------------------------+-----------------------------------+
    | :exc:`<output_file>`              | Specifies the output file         |
    +-----------------------------------+-----------------------------------+
    | :exc:`-r`                         | Disables even-bit swap by A-law   |
    |                                   | encoding and decoding (DEFAULT)   |
    +-----------------------------------+-----------------------------------+
    | :exc:`A`                          | Input data for encoding and       |
    |                                   | output data for decoding are in   |
    |                                   | A-law (G.711) format              |
    +-----------------------------------+-----------------------------------+
    | :exc:`u`                          | Input data for encoding and       |
    |                                   | output data for decoding are in   |
    |                                   | µ-law (G.711) format              |
    +-----------------------------------+-----------------------------------+
    | :exc:`lili`                       | Convert the input file            |
    |                                   | linear to linear: lin -> (A/u)log |
    |                                   | -> lin                            |
    +-----------------------------------+-----------------------------------+
    | :exc:`lilo`                       | Convert the input file            |
    |                                   | linear to (A/u)-log               |
    +-----------------------------------+-----------------------------------+
    | :exc:`loli`                       | Convert the input file            |
    |                                   | (A/u) log to linear               |
    +-----------------------------------+-----------------------------------+
    | :exc:`<BlockSize>`                | Number of samples per block       |
    |                                   | [256] (optional)                  |
    +-----------------------------------+-----------------------------------+
    | :exc:`<1stBlock>`                 | Number of the first block         |
    |                                   | of the input file to be processed |
    |                                   | (optional)                        |
    +-----------------------------------+-----------------------------------+
    | :exc:`<NoOfBlocks>`               | Number of blocks to               |
    |                                   | process, starting with block      |
    |                                   | <1stBlock>                        |
    +-----------------------------------+-----------------------------------+
    | :exc:`-skip *n*`                  | *n* is the number of samples to   |
    |                                   | skip (optional)                   |
    +-----------------------------------+-----------------------------------+
**Examples**
The following command converts the linear samples to A-law log:
.. code:: shell

    mdb -run -cl -nsim -tcf=em9d_voice_audio g711demo.elf A lilo ../testvectors/Ref/sweep.src ../testvectors/sweep-r.a 256 1 256
The following command converts from A-law log to linear samples:
.. code:: shell

    mdb -run -cl -nsim -tcf=em9d_voice_audio g711demo.elf A loli ../testvectors/Ref/sweep-r.a ../testvectors/sweep.src 256 1 256
AsciiDoc Reader
###############
This plugin allows you to use `AsciiDoc <http://www.methods.co.nz/asciidoc/>`_
to write your posts. File extension should be ``.asc``, ``.adoc``,
or ``.asciidoc``.
Dependency
----------
There are two command line utilities commonly used to render AsciiDoc:
``asciidoc`` and ``asciidoctor``. One of the two will need to be installed and
on the PATH.
**Note**: The ``asciidoctor`` utility is recommended since the original
``asciidoc`` is no longer maintained.
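A minimal sketch of how such PATH-based autodetection can work
(``detect_asciidoc_cmd`` is a hypothetical helper, not the plugin's actual
code):

```python
import shutil

def detect_asciidoc_cmd(candidates=("asciidoctor", "asciidoc")):
    """Return the first AsciiDoc renderer found on PATH, or None.

    Hypothetical illustration of autodetection; the plugin's real
    logic may differ.
    """
    for cmd in candidates:
        if shutil.which(cmd):  # resolves the executable on PATH
            return cmd
    return None
```

Setting ``ASCIIDOC_CMD`` explicitly would bypass detection of this kind.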
Settings
--------
======================================== =======================================================
Setting name (followed by default value) What does it do?
======================================== =======================================================
``ASCIIDOC_CMD = asciidoc`` Selects which utility to use for rendering. Will
autodetect utility if not provided.
``ASCIIDOC_OPTIONS = []`` A list of options to pass to AsciiDoc. See the `manpage
<http://www.methods.co.nz/asciidoc/manpage.html>`_.
======================================== =======================================================
Example file header
-------------------
Following the `example <https://github.com/getpelican/pelican/blob/master/docs/content.rst#file-metadata>`_ in the main pelican documentation:
.. code-block:: none

    = My super title
    :date: 2010-10-03 10:20
    :modified: 2010-10-04 18:40
    :tags: thats, awesome
    :category: yeah
    :slug: my-super-post
    :authors: Alexis Metaireau, Conan Doyle
    :summary: Short version for index and feeds

    == title level 2

    and so on...
=====
Usage
=====
To use Kubernetes Kernel Provider in a project::

    import kubernetes_kernel_provider
Changelog
=========
0.2.1
-----
*Authors:* Timm Severin (timm.severin@tum.de)
fdl_parser.py
~~~~~~~~~~~~~~
- ``FDLValidator`` now has a ``validate()`` function which can be used
  to validate FDL files.
- The default value for string parameters no longer includes ``return_val``;
  this was used in the auto-generated ``_real.py``/``_simulation``
  implementations and now has to be implemented manually.
Mindy
=====
A shell UI based on Prompt Toolkit.
.. _CHANGELOG:
CHANGELOG
=========
Release 2.2.0
---------------------
* Stream ingest uploads to S3
* Adds status records for ingest tasks
* Adds bulk ingest
* Adds email notifications for ingest success and failure
Release 2.1.1
---------------------
* Fixes migration conflicts
Release 2.1.0
---------------------
* Adds ingest
Release 2.0.0-alpha.4
---------------------
* Full text search of volumes
* Adds option for partial or exact match searches
* Adds style information to annotations
* Adds ability to send to Voyant
* Fixes bug for links to current volume, page, etc.
Release 2.0.0-alpha.3
---------------------
* Add Zenodo integration
Release 2.0.0-alpha.2
---------------------
* Add branch option for deployment.
Release 2.0.0-alpha.1
---------------------
Readux 2 is a total rewrite centered on using IIIF.
Release 1.8.3 - Doc Fixes
-------------------------
**FINAL 1.X RELEASE**
* Add publishing notes to README
Release 1.8.2 - Bug Fixes
-------------------------
* Bugfix: When a page has many annotations (more than 10) some
annotations are not clickable and some annotations are missing from
the side scroll bar
Release 1.8.1 - Bug Fixes
-------------------------
* Annotation counts do not appear in Volume list
* Option to export personal versus group annotations is missing
Release 1.8 - WebSocket 1.x Upgrade, Move to ECDS VM
----------------------------------------------------
* Chore: Move Readux to New Prod server
* Chore: As a Readux developer, I would like to replace the Social Auth
OAuth Key and Secret with Emory University Library accounts.
* As a Readux admin, I would like to have a management command to make
post-1922 Emory Yearbooks not visible in Readux
* Upgrade Daphne and related components from 0.x versions to 1.x fully
stable versions
* Chore: Research related to beginning Readux 2.x development and setting
up a new AWS testing environment
* Bugfix: Volume landing page does not display
* Chore: Add lxml rebuild to the fabfile for deploy
* Chore: Add missing tei page to emory:dzj14 and emory:cwv8v on Readux
production site and create KB article for this issue
* Chore: Make a digitaledition_jekylltheme 0.7 release that is depended
on by the Readux 1.7 Release
* Chore: Fix pip install on Jenkins asking for user interaction
* Bugfix: Export issue: Pages with no XML in volume "Original sacred harp"
are excluded from export
Release 1.7 - Export Notifications, Export Search Improvements, Bug Triage
--------------------------------------------------------------------------
* As a site administrator, I want to embed collections into editable pages
that can be ordered randomly or alphabetically so that I can introduce
users to the site
* Bugfix: Display issue in Readux export: Annotation highlighting cut off
on oblong pages
* Bugfix: Display issue in Readux export: Right side of pages cut off on
oblong pages
* Bugfix: Display issue in Readux export: Vertical page inelegantly displays
in box for vertical pages in oblong books
* Bugfix: Image thumbnails show as broken links on "Annotations tagged
with..." pages in exported editions
* Bugfix: Deep Zoom images are not loaded in the recent exports, even when
images are hosted independently and deep zoom is included in the export
* As a scholarly edition author I can use a dynamic form that breaks the
export process up into logical steps so that I understand the choices
I am making when exporting a volume and avoid making incompatible choices
* Bugfix: Jekyll site website packages downloaded in the testreadux
environment do not display page image thumbnails when uploaded to GitHub
* Bugfix: Image thumbnails on the "browse pages" page and images for
individual pages served from Readux show as broken links in exported
editions
* Bugfix: Top level navigation should include "Collections", "About",
"Annotation", "Export", and "Credits."
* As a scholarly edition author I can start the generation of my edition
on my computer, run in the background, and receive notification when it
is ready, so that I can do other things while it is generating and
downloading
* As a scholarly edition author I want the option to exclude deep zoom
from my website package so that I can display my edition without having
to connect to Readux
* Bugfix: clicking on export button causes error when logged in via LDAP
* As a scholarly edition author I want to the option to include deep
zoom images in my website package so that I can display my edition
without having to connect to Readux
* Bugfix: Fix CSS/Jquery issue in GitHub export
* Bugfix: No padding on simple pages
* Bugfix: Editable pages cannot include span tags (for COinS) and html
source cannot be edited
* As a site administrator, I want the Readux home page to be editable
and configurable so that I can display custom-designed text and collections
to introduce users to the site
* As a scholarly edition author I want the option to download page images
with my website package so that I can display my edition without having
to connect to Readux
* As a scholarly edition author I want users to see an index based on
annotation tags that includes a count of annotation and page numbers so
that they can see how tags are used across the volume
* As a user I want to see the option of downloading the TEI in the main
export form so that I can choose between all export options on the same
webpage
* As an annotated edition author I want users to be able to keep track of
applied filters and search keywords in the URL so that each search could
be referenced in the future or shared with another person
* As an annotated edition author I want users to be able to facet
annotation searches by tags so that they can more easily identify
relevant content
* As an annotated edition author I want users to be able to facet
searches by annotation and page content so that they can more easily
identify relevant content
* As an annotated edition author I want users to be able to search my
edition with partial word matching so that they can more easily identify
relevant content
* As a scholarly edition author I can refresh my website if I make changes
to the annotations on Readux, without overwriting my original local
customizations, so that I can create an updated copy of my content
* As an annotated edition author I want my site to include a sitemap for
volume pages, annotations and other content so that my site will be
findable by search engines
* New **IIIF_ID_SUFFIX** configuration option for IIIF image server
(`#4 <https://github.com/emory-libraries/readux/pull/4>`_ via
`@ghukill <https://github.com/ghukill>`_)
* OCR to TEI facsimile now supports output from ABBYY Recognition
Server (`#4 <https://github.com/emory-libraries/readux/pull/4>`_
via `@ghukill <https://github.com/ghukill>`_)
Release 1.6.1
-------------
* Require eulfedora 1.6 or greater for debug filter and connection retries
Release 1.6 - Group Annotation
------------------------------
* As a site administrator I want to create and manage annotation groups
of existing users so that I can support group annotation projects.
* As a logged in user I want to see annotations shared with groups I
belong to so that I can collaborate with other members of those groups.
* As a logged in user when I am making an annotation I want to grant
annotation groups access to read, edit, or delete my annotation so
that I can collaborate with group members.
* As a logged in user, I want to see an indication when an annotation
is shared with a group.
* As a logged in user, I want to see who authored an annotation so that
I can easily distinguish my annotations from those shared with groups
I belong to.
* As a logged in user, I can only update annotation permissions if
I have the admin annotation permission, so that full access to editing
and deleting annotations can be controlled.
* As a logged in user when I export a volume I want to choose between
exporting only my annotations or all annotations in a group I belong
to so that I can export an individual or collaborative project.
* Now using `django-guardian <https://github.com/django-guardian/django-guardian>`_
for per-object annotation permissions.
* Includes a new annotator permissions JavaScript module (included in
readux codebase for now).
* Data migrations to clean up redundant data in annotation extra data
JSON field and grant default annotation author permissions.
Release 1.5.1
-------------
* Reference final released versions of annotator-meltdown and
annotator-meltdown-zotero
Release 1.5 - Enhanced Annotation
---------------------------------
* As a researcher I want to make internal references to other pages in
order to show connections to other parts of the same work.
* As a researcher I want to include audio in my annotations so I can
demonstrate audible differences in the content.
* As a researcher I want to include video in my annotations so I can
associate enriched media with volume content.
* As a researcher I want to link my Zotero account with my Readux login
so that I can add Zotero citations to my annotations.
* As a researcher I want to look up Zotero citations and add them to my
annotations in order to show my sources.
* As researcher I want to search the text of my annotations for the
volume I am working on in order to find specific notes or content.
* As a site user I want to login in with Emory credentials so that
I can easily start making annotations.
* As a user, I can find readux volume pages through a search engine,
so I can easily find relevant content.
* TEI export now includes an encoding description in the TEI header.
* bugfix: Annotation window sometimes pops up in the top right of the
screen, should hover near highlighted text/image. (Actual fix in
`annotator-marginalia <http://emory-lits-labs.github.io/annotator-marginalia/>`_)
* bugfix: Exported site browse annotations by tag never displays more
than one annotation. (Actual fix in `digitaledition-jekylltheme <https://github.com/emory-libraries-ecds/digitaledition-jekylltheme>`_)
* Project documentation now includes technical descriptions and diagrams
of Fedora object models and readux processes.
Release 1.4.1
-------------
* As a Readux admin, I want a record when the export feature is used so
that I can find out who is creating exported editions.
Release 1.4 - Basic Export
--------------------------
This release adds the capability to export a single Readux volume with
annotations to create a standalone annotated website edition, using
Jekyll and with optional GitHub / GitHub Pages integration.
Export functionality
^^^^^^^^^^^^^^^^^^^^
* As an annotated edition author I want to export an edition that has TEI
with text, image references, and annotations so that I can have a
durable format copy of my edition with my annotation content.
* As an annotated edition author, I want to generate a web site package
with volume content and annotations so that I can publish my digital
edition.
* As an annotated edition author I want to generate a website package that
can be modified so that I can customize my edition.
* As an annotated edition author, I want a website package that allows me
to browse pages by thumbnail so that site visitor can easily select a
page of interest.
* As an annotated edition author, I want my website edition to include
annotation counts for each page so that my site visitors know which
pages have annotations.
* As an annotated edition author, I want my website edition to include
tags in the annotation display so that my site visitors can see my
categorization.
* As an annotated edition author, I want my website edition to support
keyword searching so that my site visitors can find content of
interest.
* As an annotated edition author, I want to be able to customize my
website edition’s page urls to match the number in the source text so
that my site visitors experience an intuitive navigation of the
edition.
* As an annotated edition author, I want the option of creating a new
GitHub repository with my exported website edition, so that I can
version my data and publish it on GitHub Pages.
* As an annotated edition author, I want my website edition to include
citation information so that my site visitors can reference it properly.
* As an annotated edition author, I want to have a copy of the exported
TEI in the website bundle so that I can see the data used to generate
the web edition.
* As an annotated edition author, I want my website edition to include
social media integration so that my site visitors can share content.
* As an annotated edition author, I want my website edition to be viewable
on tablets so that my site visitors can view it on multiple devices.
* As an annotated edition author I want my website edition to include
individual annotation pages so that users can more easily view and
cite long form and multimedia annotation content.
Other updates
^^^^^^^^^^^^^
* As a site user, I want to link my social login accounts so that I can
access annotations from any of my accounts.
* As an annotated edition author, I want to see an error message in the
event that I log out while trying to export my edition so that I know
I need to be logged in to complete the export.
* As a site user I want to see a permanent url on the pages for volume
and single-page so that I can make stable references.
* Update latest 3.x Bootstrap and django-eultheme 1.2.1
Release 1.3.7
-------------
* As a site administrator I want to include video content in site pages
so that I can share dynamic content like screencasts.
Release 1.3.6
-------------
* Improved regenerate-id logic for OCR, use a readux image url when
generating page TEI.
Release 1.3.5
-------------
* Proxy all IIIF image requests through the application, to handle
IIIF server that is not externally accessible.
Release 1.3.4
-------------
* bugfix: collection detail pagination navigation
* bugfix: id generation error in OCR/TEI xml
* Improved page mismatch detection when generating TEI from OCR
* Revised placeholder page images for covers and volume browse
* Modify update_page_arks manage command to handle the large number
of page arks in production
Release 1.3.3
-------------
* bugfix: collection detail pagination display
* bugfix: correct page absolute url, esp. for use in annotation uris
Release 1.3 - Simple Annotation
-------------------------------
TEI Facsimile
^^^^^^^^^^^^^
* As a system administrator, I want to run a script to generate TEI
facsimile for volumes that have pages loaded, so that I can work with
OCR content in a standardized format.
* As a user I would like to view the TEI underlying the page view and
annotation, so that I can understand more about how it works, and to
understand how to use facsimile data.
* As a researcher I want to see a view of the TEI underlying the page
view and annotation that excludes OCR for barcodes so that I can
focus on facsimile data of scholarly importance.
Display improvements
^^^^^^^^^^^^^^^^^^^^
* As a user, I want to navigate from page view to page view without
having to scroll down to each page view, so that I have a better
reading experience.
* As a user, I can see the thumbnail for landscape pages when browsing
volumes, so I can better select appropriate pages.
Annotation
^^^^^^^^^^
* As a researcher, I want to select the OCR text on a page in order to
copy or annotate content.
* As a site user I want to filter volumes by whether or not they have
page-level access so that I know which volumes I can read online and
annotate.
* As a researcher I can log in to readux using social media credentials,
so that I do not need a separate account to create annotations.
* As a researcher I want to annotate part of the text on a page in order
to provide additional information about the text.
* As a researcher I want to annotate an image or part of an image in
order to provide additional information about the image.
* As a researcher I want to include simple formatting in my notes to
make them more readable.
* As a researcher I want to include images in my annotations so that
users can see important visual materials.
* As a researcher I want to tag annotations so that I can indicate
connections among related content.
* As a researcher I want to edit and delete my annotations, in order to
make changes or remove notes I no longer want.
* As a user I can see my annotations in the margin of the page, so that
I can read all of the annotations conveniently.
* As a researcher I want to see which volumes I have annotated when I am
browsing or searching so that I can easily resume annotating.
* As a researcher I want to see which pages I have annotated so that I
can assess the status of my digital edition.
* As a researcher I want to make annotations anchored to stable
identifiers that are unique across an entire volume so that I can
maintain consistency and generate valid exports in my digital editions.
* As a user I want to see a created or last modified timestamp on
annotations so that I know when they were last updated.
* As a user I want to see only the date created or last modified on
annotations that are more than a week old so that I know a rough
estimate of when they were last updated.
Annotation Administration
^^^^^^^^^^^^^^^^^^^^^^^^^
* As a site administrator I want to see which user authored an
annotation so that I can respond to the correct user in reference to
an annotation.
* As a site administrator, I want to view, edit, and delete annotations
in the Django admin site so that I can manage annotations to remove
spam or update the annotation owner.
* As a site administrator I want to click on the URI link for an
annotation in the admin and see the annotated page in a separate
window so that I can verify its display.
Additional Administration functionality
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* As a site administrator I want to create and edit html pages so that
I can add content explaining the site to users.
Release 1.2.2
-------------
* Require eulfedora 1.2 for auto-checksum on ingest against Fedora 3.8
Release 1.2.1
-------------
* Update required version of django-downtime and eultheme.
Release 1.2 - Fedora 3.8 migration support
------------------------------------------
* As a site user I will see a Site Down page when maintenance is being
performed on the site or or other circumstances that will cause the
site to be temporarily unavailable so that I will have an general
idea of when I can use the site again.
* As a site user I will see a banner that displays an informative
message on every page of the site so that I can be informed about
future site maintenance or other events and know an approximate amount
of any scheduled downtime.
* As an application administrator, I want to generate a list of pids for
testing so that I can verify the application works with real data.
* Any new Fedora objects will be created with Managed datastreams instead
of Inline for RELS-EXT and Dublin Core.
* Upgraded to Django 1.7
* Now using `django-auth-ldap <https://pythonhosted.org/django-auth-ldap/>`_
for LDAP login instead of eullocal.
Release 1.1.2
-------------
* Fix last-modified method for search results to work in cover mode.
Release 1.1.1
-------------
* Fix volume sitemaps to include both Volume 1.0 and 1.1 content models.
Release 1.1 - Import
--------------------
* As an administrative user, I want to run a script to import a volume
and its associated metadata into the repository so that I can add new
content to readux.
* As a user, I want to browse newly imported content and previously
digitized content together, so that I can access newly added content
in the same way as previously digitized materials.
* As a user I can opt to sort items on a collection browse page by date
added, in order to see the newest material at the top of the list, so
that I can see what is new in a collection.
* As a user, I want the option to view or download a PDF, with a warning
for large files, so that I can choose how best to view the content.
* As an administrative user, I want to be able to run a script to load
missing pages for a volume so that I can load all pages when the
initial page load was interrupted.
Release 1.0.2
-------------
* As a user, I want the website to support caching so I don't have to re-download
content that hasn't changed and the site will be faster.
* bugfix: fix indexing error for items with multiple titles
* error-handling & logging for volumes with incomplete or invalid OCR XML
* adjust models to allow eulfedora syncrepo to create needed content model objects
Release 1.0.1
-------------
* Include *.TIF in image file patterns searched when attempting to identify
page images in **import_covers** and **import_pages** scripts
* Additional documentation and tips for running **import_covers** and
**import_pages** scripts
* Bugfix: workaround for pdfminer maximum recursion error being triggered by
outline detection for some PDF documents
* Enable custom 404, 403, and 500 error pages based on eultheme styles
Release 1.0 - Page-Level Access
-------------------------------
Cover images and list view improvements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* As a researcher, when I'm viewing a list of titles, I want the option to
toggle to a cover view as an alternate way to view the content.
* As a user, when I toggle between cover and list views I want to be able to
reload or go back in history without needing to reselect the mode I was last
viewing, so that the site doesn't disrupt my browsing experience.
* As a user, when I page through a collection or search results, I expect the
display to stay in the mode that I've selected (covers or list view), so that
I don't have to reselect it each time.
Volume landing page and Voyant improvements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* As a user when I select a title in the list view, I first see an information
page about the item, including pdf and page view selections, so that I know
more about the item before I access it.
* As a user, I want to be able to see the full title of a book without longer
titles overwhelming the page, so I can get to the information I want
efficiently.
* As a researcher, I want to pass a text to Voyant for analysis in a way that
takes advantage of caching, so that if the text has already been loaded in
Voyant I won't have to wait as long.
* As a researcher, I can easily read a page of text in Voyant, because the text
  is neatly formatted, so that I can more readily comprehend the text.
* As a user, I can see how large a pdf is before downloading it so that I can
make appropriate choices about where and how to view pdfs.
* As a user, when I load a pdf I want to see the first page with content rather
than a blank page, so that I have easier access with less confusion.
Page-level access / read online
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* As a researcher, I can page through a book viewing a single page at a time in
order to allow viewing the details or bookmarking individual pages.
* As a user, when I'm browsing a collection or viewing search results, I can
select an option to read the book online if pages are available, so that I can
quickly access the content.
* As a researcher, I want the option to view pages as thumbnails to enhance
navigation.
* As a researcher, when I'm browsing page image thumbnails I want to see an
indicator when there's an error loading an image so that I don't mistake
errors for blank pages.
* As a researcher, I want to be able to toggle to a mode where I can zoom in on
an image so that I can inspect the features of a page.
* As a user, I want to be able to distinguish when I can and cannot use the zoom
function, so I can tell when the feature is unavailable (e.g., due to image
load error).
* As a researcher, I want to search within a single book so that I can find
specific pages that contain terms relevant to my research.
Navigation improvements
^^^^^^^^^^^^^^^^^^^^^^^
* As a user, I want to see a label or source information for the collection
banner image so that I know where the image comes from.
* As a user, I want to be able to identify a resource in an open tab by title,
so I can quickly select the correct tab when using multiple tabs.
* As a user, when paging through a collection list or viewing a set of pages in
the reading view, I can find the web page navigation at the top or bottom of
the page, so that I do not have to scroll far to click to go to another web
page in the series.
Integrations with external services
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* As a twitter user, when I tweet a link to a readux collection, book, or page
image, I want a preview displayed on twitter so that my followers can see
something of the content without clicking through.
* As a facebook user, when I share a link to a readux collection, book, or page
image, I want a preview displayed on facebook so that my friends can see
something of the content without clicking through.
* A search engine crawling the readux site will be able to obtain basic semantic
data about collections and books on the site so the search engine’s results
can be improved.
* A search engine can harvest information about volume content via site maps in
order to index the content and make it more discoverable.
Release 0.9 - PDF Access
-------------------------
* As a researcher, I want to browse a list of collections in order to
select a subset of items to browse.
* As a researcher, I want to browse through a paginated list of all the
books in a single collection in order to see the range of materials
that are present.
* As a researcher, when looking at a list of books in a collection, I
can view a PDF using my native PDF browser in order to view the
contents of the book.
* As a researcher, I can search by simple keyword or phrase in order to
find books that fit my interests.
* A search engine can harvest information about site content via site
maps in order to index the content and make it more discoverable.
* As a researcher, I can select a text and pass it to Voyant to do text
analysis for the purposes of my research.
* As a researcher, I want to be able to harvest contents into my Zotero
library in order to facilitate research.
* As a researcher browsing a list of titles in a collection or search
results, I want to see the author name and the year of publication
so that if I am looking for a particular title or edition I have more
information to identify it quickly without opening the pdf.
* As a researcher viewing keyword search results, I want to see titles
or authors with matching terms higher in the list so that if I am
searching for a title or author by keyword the item appears on the first
or second page of results, and I don't have to page through all the
results to find what I am looking for.
* As a user I can see a logo for the site, so I visually recognize that
I am in a coherent site whenever I see it.
* As a user I see university branding on the site, so that I know that
it is an Emory University resource.
* As a user I want to read a brief description of the content of a collection
on the collection list page and individual collection pages, so that
I can determine my level of interest in it.
* As an admin user, I want to be able to login with my Emory LDAP account
so that I can re-use my existing credentials.
* As a user I can view a list of collections on the landing page by thumbnail
image so that I can select an area of interest from visual cues.
* As a user, when viewing a single collection, I can see a visual cue of
the collection's content, so that I can connect the item I see on the
list view to the page I am viewing.
* As a researcher I can filter search results by collection facets, in
order to see the material most relevant to my interests.
* As an admin, I can upload images and associate them with collections,
so that I can manage thumbnail and splash images displayed on collection
browse and display pages.
.. Source: gabr1e11/bscscan-python — docs/source/bscscan.modules.rst (MIT)

bscscan.modules package
=======================
Submodules
----------
bscscan.modules.accounts module
-------------------------------
.. automodule:: bscscan.modules.accounts
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.blocks module
-----------------------------
.. automodule:: bscscan.modules.blocks
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.contracts module
--------------------------------
.. automodule:: bscscan.modules.contracts
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.logs module
---------------------------
.. automodule:: bscscan.modules.logs
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.proxy module
----------------------------
.. automodule:: bscscan.modules.proxy
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.stats module
----------------------------
.. automodule:: bscscan.modules.stats
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.tokens module
-----------------------------
.. automodule:: bscscan.modules.tokens
:members:
:undoc-members:
:show-inheritance:
bscscan.modules.transactions module
-----------------------------------
.. automodule:: bscscan.modules.transactions
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: bscscan.modules
:members:
:undoc-members:
:show-inheritance:
.. Source: 5GExchange/escape — escape/doc/source/ESCAPE.rst (Apache-2.0)

The *ESCAPE.py* top module
==========================
.. automodule:: ESCAPE
:members:
:private-members:
:special-members:
:exclude-members: __dict__,__weakref__,__module__
:undoc-members:
:show-inheritance:
Submodules
++++++++++
.. toctree::
:maxdepth: 1
Service Layer (SL) <layers/service>
Resource Orchestration Sublayer (ROS) <layers/orchestration>
Controller Adaptation Sublayer (CAS) <layers/adaptation>
Infrastructure Layer (IL) <layers/infrastructure> | 23.857143 | 64 | 0.664671 |
49eb305303338d43a84ffba484e87979345975b2 | 259 | rst | reStructuredText | doc/ref/cli/_includes/timeout-option.rst | byteskeptical/salt | 637fe0b04f38b2274191b005d73b3c6707d7f400 | [
"Apache-2.0"
] | 9,425 | 2015-01-01T05:59:24.000Z | 2022-03-31T20:44:05.000Z | doc/ref/cli/_includes/timeout-option.rst | byteskeptical/salt | 637fe0b04f38b2274191b005d73b3c6707d7f400 | [
"Apache-2.0"
] | 33,507 | 2015-01-01T00:19:56.000Z | 2022-03-31T23:48:20.000Z | doc/ref/cli/_includes/timeout-option.rst | byteskeptical/salt | 637fe0b04f38b2274191b005d73b3c6707d7f400 | [
"Apache-2.0"
] | 5,810 | 2015-01-01T19:11:45.000Z | 2022-03-31T02:37:20.000Z | .. option:: -t TIMEOUT, --timeout=TIMEOUT
The timeout in seconds to wait for replies from the Salt minions. The
timeout number specifies how long the command line client will wait to
    query the minions and check on running jobs. Default: |timeout|

.. Source: hrnciar/hdmf — docs/source/index.rst (BSD-3-Clause-LBNL)

.. HDMF documentation master file, created by
sphinx-quickstart on Thu Nov 17 10:41:07 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
The Hierarchical Data Modeling Framework
========================================
HDMF is a Python package for standardizing, reading, and writing hierarchical object data.
HDMF is a by-product of the `Neurodata Without Borders: Neurophysiology (NWB) <http://www.nwb.org/>`_ project.
The goal of NWB was to enable collaborative science within the neurophysiology and systems neuroscience communities
through data standardization. The team of neuroscientists and software developers involved with NWB
recognizes that adoption of a unified data format is an important step toward breaking down the barriers to
data sharing in neuroscience. HDMF was central to the NWB development effort, and has since been split off
with the intention of providing it as an open-source tool for other scientific communities.
If you use HDMF in your research, please use the following citation:
A. J. Tritt et al., "HDMF: Hierarchical Data Modeling Framework for Modern Science Data Standards,"
2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 2019, pp. 165-179,
doi: 10.1109/BigData47090.2019.9005648.
.. toctree::
:maxdepth: 2
:caption: Getting Started
getting_started
contributing
.. toctree::
:maxdepth: 2
:caption: Overview
overview_intro
overview_software_architecture
overview_citing
.. toctree::
:maxdepth: 2
:caption: Resources
tutorials/index
extensions
building_api
validation
export
api_docs
software_process
make_roundtrip_test
.. toctree::
:maxdepth: 2
:caption: For Maintainers
make_a_release
update_requirements
.. toctree::
:maxdepth: 2
:caption: Legal
legal
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. Source: brendenvogt/esp-adf — docs/en/api-reference/codecs/index.rst (MIT-0)

Codecs
******
.. toctree::
:maxdepth: 1
AAC Decoder <aac_decoder>
AMR Decoder and Encoder <amr_codecs>
FLAC Decoder <flac_decoder>
MP3 Decoder <mp3_decoder>
OGG Decoder <ogg_decoder>
OPUS Decoder <opus_decoder>
WAV Decoder and Encoder <wav_codecs>
Resample Filter <filter_resample>
.. Source: tehamalab/django-haystack-es — README.rst (BSD-3-Clause)

=============================
Django Haystack ES
=============================
.. image:: https://badge.fury.io/py/django-haystack-es.svg
:target: https://badge.fury.io/py/django-haystack-es
.. image:: https://travis-ci.org/tehamalab/django-haystack-es.svg?branch=master
:target: https://travis-ci.org/tehamalab/django-haystack-es
.. image:: https://codecov.io/gh/tehamalab/django-haystack-es/branch/master/graph/badge.svg
:target: https://codecov.io/gh/tehamalab/django-haystack-es
Django Haystack backend for Elasticsearch 5.
Quickstart
----------
Install Django Haystack ES::
pip install django-haystack-es
Add ``haystack_es.backends.Elasticsearch5SearchEngine`` as the ``ENGINE`` of your ``HAYSTACK_CONNECTIONS`` entry in ``settings.py``.
Example
.. code-block:: python
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'haystack_es.backends.Elasticsearch5SearchEngine',
# ...
}
}
Define your indexes using ``haystack_es.indexes`` instead of ``haystack.indexes``.
Example
.. code-block:: python
# myapp/search_indexes.py
from haystack_es import indexes
from myapp.models import MyModel
class MyModelIndex(indexes.SearchIndex, indexes.Indexable):
text = indexes.CharField(document=True, use_template=True)
# ...
If you have `celery-haystack <http://celery-haystack.readthedocs.org/>`_ installed, you can use
``haystack_es.indexes.CelerySearchIndex`` to define a SearchIndex that utilizes celery-haystack.
instead of ``haystack.query.SearchQuerySet``.
Example
.. code-block:: python
from haystack_es.query import SearchQuerySet
sqs = SearchQuerySet().filter(content='some query')
sqs.boost_fields({'field_name': 2, 'some_field': 1.5, 'another_field': 1})
sqs.facet('some_field')
# ...
Differences compared to the default django-haystack Elasticsearch backend
---------------------------------------------------------------------------
* Intended for Elasticsearch >= 5
* Allows query-time fields boosting.
* Allows query-time
`negative boost <https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-boosting-query.html>`_
* Provides additional SearchFields: ``DictField``, ``NestedField`` and ``GeometryField``
* Tries to use Elasticsearch
`filter context <https://www.elastic.co/guide/en/elasticsearch/reference/current/query-filter-context.html>`_
instead of query string for filtering results.
* Uses `multi-fields <https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html>`_
  to create shadow fields, which are useful for operations such as
  faceting and exact matching that need non-analyzed values.
Query-time fields boosting
----------------------------
::
from haystack_es.query import SearchQuerySet
SearchQuerySet().boost_fields(boost_fields)
Example ``SearchQuerySet().boost_fields({'field_name': 2, 'another_field': 1})``
Negative boosting
------------------
::
from haystack_es.query import SearchQuerySet
SearchQuerySet().boost_negative(query, negative_boost)
example
``SearchQuerySet().boost_negative({'match': {'category.raw': 'awful type'}}, negative_boost)``
Running Tests
-------------
Does the code actually work?
::
source <YOURVIRTUALENV>/bin/activate
(myenv) $ pip install tox
(myenv) $ tox
Credits
-------
Inspired by
* `haystack-elasticsearch5`: https://github.com/Alkalit/haystack-elasticsearch5
Tools used in rendering this package:
* Cookiecutter_
* `cookiecutter-djangopackage`_
.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _`cookiecutter-djangopackage`: https://github.com/pydanny/cookiecutter-djangopackage
.. Source: Mr-Leshiy/tests — docs/ommer-tutorial.rst (MIT)

.. _ommer-tests-tutorial:
###########################################
Blocktests with Ommer / Uncle Blocks
###########################################
`Ori Pomerantz <mailto://qbzzt1@gmail.com>`_
In this tutorial you learn how to write blockchain tests where the chain
includes `ommer/uncle blocks <https://ethereum.org/en/glossary/#ommer>`_,
blocks that are not part of the main chain but still deserve a reward.
Uncle Blocks
============
Uncle blocks are created because there are multiple miners working at any point
in time. If two miners propose a follow-up block for the chain, only one of them
becomes the real block, but the other miner who did the same work also deserves
a reward (otherwise the chances of getting a mining reward would be too low, and there
would be far fewer miners to keep the blockchain going).
.. image:: images/uncleBlock.png
:alt: Illustration of three main chain block, and an uncle block that also inherits
from block 1, and is recognized as an uncle by block 3.
For example, in the illustration above block #1 was proposed by a miner and accepted.
Then two separate miners created followup blocks labeled as 2. One of them is the block
that was eventually accepted, the green 2. The other one is the orange block that was
eventually rejected. Then a miner (one of the ones above or a separate one) created
block 3 and specified that the green 2 is the parent block and the orange 2 is an
uncle / ommer block (ommer is a gender neutral term for an uncle or aunt). This
way the miner that created the green 2 block gets the full reward (2 ETH), but the one
who created the orange 2 still gets something (1.75 ETH in this case).
Writing Tests
===============
The test writing process for ommer tests is a bit complicated. First you write a test file
such as **11_ommerFiller.yml**, which has the uncle information, and run the test. However,
this test always fails. The state root that is calculated by retesteth in the uncle
block is different from the actual state root, because it does not include the payment
to the miner of the uncle block.
Writing the Filler File
-----------------------
The only fields of the uncle block that matter are in the header, so you don't specify
them in the filler file as blocks, but as a list of uncle block headers. For example,
here is block 4 from `11_ommerFiller.yml
<https://github.com/ethereum/tests/blob/develop/docs/tutorial_samples/11_ommerFiller.yml/>`_.
::
- blocknumber: 4
transactions: []
uncleHeaders:
- populateFromBlock: 1
extraData: 0x43
coinbase: 0xCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
- populateFromBlock: 2
extraData: 0x43
coinbase: 0xBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
As you can see, the **uncleHeaders** list contains two uncles. The first
is an attempt to continue block 1, so it is an alternative block 2. The second
is an attempt to continue block 2, so it is an alternative block 3.
.. note::
Most of the header fields are copied from the parent, except for the
ones we explicitly specify (in this case, **extraData** and **coinbase**).
Correcting the State Root
-------------------------
When you run this test it is guaranteed to fail. The reason is that the state
root that **retesteth** calculates is wrong, because it ignores the payments to
the miners that created the uncles (in this case, 0xCC...CC and 0xBB...BB)
So you can expect an error message similar to this:
.. image:: images/stateRootErr.png
:alt: Error message showing which fields have different values than expected
As you can see, the field that is different is **stateRoot** (and **hash**, which is
the hash of everything so if there are any differences it ends up different too).
When you put that field in the block header with the correct value the test works,
as you can see by running `12_ommerGoodFiller.yml
<https://github.com/ethereum/tests/blob/develop/docs/tutorial_samples/12_ommerGoodFiller.yml/>`_.
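Concretely, the fix is to copy the reported **stateRoot** value into each uncle
header. For the first uncle from the filler above, the corrected entry looks
roughly like this (the ``stateRoot`` value shown is a placeholder; use the one
retesteth reports for your chain)::

    - blocknumber: 4
      transactions: []
      uncleHeaders:
      - populateFromBlock: 1
        extraData: 0x43
        coinbase: 0xCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
        stateRoot: <stateRoot value from the retesteth error output>

The same correction is applied to the second uncle header.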
Conclusion
==========
The main case in which new ommer tests are needed is when the block header is modified
and we need to make sure that only valid block headers are accepted, not just in the main
chain but also as ommers. `Here is an example of such a test
<https://github.com/ethereum/tests/blob/develop/src/BlockchainTestsFiller/TransitionTests/bcBerlinToLondon/londonUnclesFiller.json>`_.
Hopefully you now know enough to write this kind of test.
.. Source: RoverRobotics-forks/booty — readme.rst (MIT)

====================
Purpose
====================
This code base is for creating a bootloader to be utilized for programming microcontroller flash
memory. This program takes a hex file as input, parses it, and sends it to
the microcontroller bootloader over a serial UART (commonly referred to as a "serial port" or
"COM port").
The companion project to this is the `bootypic <http://github.com/slightlynybbled/bootypic>`_ project.
Hopefully, more will follow. A GUI is provided at `bootycontrol <http://github.com/slightlynybbled/bootycontrol>`_.
Please see the `documentation <https://booty.readthedocs.io/en/latest/index.html>`_ for further guidance!
| 48.142857 | 111 | 0.743323 |
c37a006d1afe71b43f03256a12eb872baa66b7c1 | 2,201 | rst | reStructuredText | docs/greenday_core.tests.rst | meedan/montage | 4da0116931edc9af91f226876330645837dc9bcc | [
"Apache-2.0"
] | 6 | 2018-07-31T16:48:07.000Z | 2020-02-01T03:17:51.000Z | docs/greenday_core.tests.rst | meedan/montage | 4da0116931edc9af91f226876330645837dc9bcc | [
"Apache-2.0"
] | 41 | 2018-08-07T16:43:07.000Z | 2020-06-05T18:54:50.000Z | docs/greenday_core.tests.rst | meedan/montage | 4da0116931edc9af91f226876330645837dc9bcc | [
"Apache-2.0"
] | 1 | 2018-08-07T16:40:18.000Z | 2018-08-07T16:40:18.000Z | greenday_core.tests package
===========================
Submodules
----------
greenday_core.tests.base module
-------------------------------
.. automodule:: greenday_core.tests.base
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_documents module
-----------------------------------------
.. automodule:: greenday_core.tests.test_documents
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_eventbus module
----------------------------------------
.. automodule:: greenday_core.tests.test_eventbus
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_indexers module
----------------------------------------
.. automodule:: greenday_core.tests.test_indexers
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_model module
-------------------------------------
.. automodule:: greenday_core.tests.test_model
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_model_comments module
----------------------------------------------
.. automodule:: greenday_core.tests.test_model_comments
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_signals module
---------------------------------------
.. automodule:: greenday_core.tests.test_signals
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_user_deletion module
---------------------------------------------
.. automodule:: greenday_core.tests.test_user_deletion
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_youtube_client module
----------------------------------------------
.. automodule:: greenday_core.tests.test_youtube_client
:members:
:undoc-members:
:show-inheritance:
greenday_core.tests.test_youtube_thumbnails_cache_client module
---------------------------------------------------------------
.. automodule:: greenday_core.tests.test_youtube_thumbnails_cache_client
:members:
:undoc-members:
:show-inheritance:
Module contents
---------------
.. automodule:: greenday_core.tests
:members:
:undoc-members:
:show-inheritance:
.. Source: meyt/microhttp-restful — README.rst (MIT)

microhttp-restful
=================
.. image:: https://travis-ci.org/meyt/microhttp-restful.svg?branch=master
:target: https://travis-ci.org/meyt/microhttp-restful
.. image:: https://coveralls.io/repos/github/meyt/microhttp-restful/badge.svg?branch=master
:target: https://coveralls.io/github/meyt/microhttp-restful?branch=master
A tool-chain for creating RESTful web applications.
Features:
- SQLAlchemy mixins for:
- Pagination
- Ordering
- Filtering
- SoftDelete
- Activation
- Create/Modify datetime
- HTTP form validation.
- Automatic transformation of SQLAlchemy models
  and queries into JSON with standard naming (camelCase).
.. Source: MochalovaAn/llvm — llvm/docs/ORCv2.rst (Apache-2.0)

===============================
ORC Design and Implementation
===============================
.. contents::
:local:
Introduction
============
This document aims to provide a high-level overview of the design and
implementation of the ORC JIT APIs. Except where otherwise stated all discussion
refers to the modern ORCv2 APIs (available since LLVM 7). Clients wishing to
transition from OrcV1 should see Section :ref:`transitioning_orcv1_to_orcv2`.
Use-cases
=========
ORC provides a modular API for building JIT compilers. There are a number
of use cases for such an API. For example:
1. The LLVM tutorials use a simple ORC-based JIT class to execute expressions
compiled from a toy language: Kaleidoscope.
2. The LLVM debugger, LLDB, uses a cross-compiling JIT for expression
evaluation. In this use case, cross compilation allows expressions compiled
in the debugger process to be executed on the debug target process, which may
be on a different device/architecture.
3. In high-performance JITs (e.g. JVMs, Julia) that want to make use of LLVM's
optimizations within an existing JIT infrastructure.
4. In interpreters and REPLs, e.g. Cling (C++) and the Swift interpreter.
By adopting a modular, library-based design we aim to make ORC useful in as many
of these contexts as possible.
Features
========
ORC provides the following features:
**JIT-linking**
ORC provides APIs to link relocatable object files (COFF, ELF, MachO) [1]_
into a target process at runtime. The target process may be the same process
that contains the JIT session object and jit-linker, or may be another process
(even one running on a different machine or architecture) that communicates
with the JIT via RPC.
**LLVM IR compilation**
ORC provides off the shelf components (IRCompileLayer, SimpleCompiler,
ConcurrentIRCompiler) that make it easy to add LLVM IR to a JIT'd process.
**Eager and lazy compilation**
By default, ORC will compile symbols as soon as they are looked up in the JIT
session object (``ExecutionSession``). Compiling eagerly by default makes it
easy to use ORC as an in-memory compiler for an existing JIT (similar to how
MCJIT is commonly used). However ORC also provides built-in support for lazy
compilation via lazy-reexports (see :ref:`Laziness`).
**Support for Custom Compilers and Program Representations**
Clients can supply custom compilers for each symbol that they define in their
JIT session. ORC will run the user-supplied compiler when a definition of
a symbol is needed. ORC is actually fully language agnostic: LLVM IR is not
treated specially, and is supported via the same wrapper mechanism (the
``MaterializationUnit`` class) that is used for custom compilers.
**Concurrent JIT'd code** and **Concurrent Compilation**
JIT'd code may be executed in multiple threads, may spawn new threads, and may
re-enter the ORC (e.g. to request lazy compilation) concurrently from multiple
threads. Compilers launched by ORC can run concurrently (provided the client
sets up an appropriate dispatcher). Built-in dependency tracking ensures that
ORC does not release pointers to JIT'd code or data until all dependencies
have also been JIT'd and they are safe to call or use.
**Removable Code**
Resources for JIT'd program representations can be removed from the JIT,
freeing any memory that they occupied.
**Orthogonality** and **Composability**
Each of the features above can be used independently. It is possible to put
ORC components together to make a non-lazy, in-process, single threaded JIT
or a lazy, out-of-process, concurrent JIT, or anything in between.
LLJIT and LLLazyJIT
===================
ORC provides two basic JIT classes off-the-shelf. These are useful both as
examples of how to assemble ORC components to make a JIT, and as replacements
for earlier LLVM JIT APIs (e.g. MCJIT).
The LLJIT class uses an IRCompileLayer and RTDyldObjectLinkingLayer to support
compilation of LLVM IR and linking of relocatable object files. All operations
are performed eagerly on symbol lookup (i.e. a symbol's definition is compiled
as soon as you attempt to look up its address). LLJIT is a suitable replacement
for MCJIT in most cases (note: some more advanced features, e.g.
JITEventListeners, are not supported yet).
The LLLazyJIT extends LLJIT and adds a CompileOnDemandLayer to enable lazy
compilation of LLVM IR. When an LLVM IR module is added via the addLazyIRModule
method, function bodies in that module will not be compiled until they are first
called. LLLazyJIT aims to provide a replacement of LLVM's original (pre-MCJIT)
JIT API.
LLJIT and LLLazyJIT instances can be created using their respective builder
classes: LLJITBuilder and LLLazyJITBuilder. For example, assuming you have a
module ``M`` loaded on a ThreadSafeContext ``Ctx``:
.. code-block:: c++
// Try to detect the host arch and construct an LLJIT instance.
auto JIT = LLJITBuilder().create();
// If we could not construct an instance, return an error.
if (!JIT)
return JIT.takeError();
// Add the module.
if (auto Err = JIT->addIRModule(ThreadSafeModule(std::move(M), Ctx)))
return Err;
// Look up the JIT'd code entry point.
auto EntrySym = JIT->lookup("entry");
if (!EntrySym)
return EntrySym.takeError();
// Cast the entry point address to a function pointer.
auto *Entry = (void(*)())EntrySym.getAddress();
// Call into JIT'd code.
Entry();
The builder classes provide a number of configuration options that can be
specified before the JIT instance is constructed. For example:
.. code-block:: c++
// Build an LLLazyJIT instance that uses four worker threads for compilation,
// and jumps to a specific error handler (rather than null) on lazy compile
// failures.
void handleLazyCompileFailure() {
// JIT'd code will jump here if lazy compilation fails, giving us an
// opportunity to exit or throw an exception into JIT'd code.
throw JITFailed();
}
auto JIT = LLLazyJITBuilder()
.setNumCompileThreads(4)
.setLazyCompileFailureAddr(
toJITTargetAddress(&handleLazyCompileFailure))
.create();
// ...
For users wanting to get started with LLJIT, a minimal example program can be
found at ``llvm/examples/HowToUseLLJIT``.
Design Overview
===============
ORC's JIT program model aims to emulate the linking and symbol resolution
rules used by the static and dynamic linkers. This allows ORC to JIT
arbitrary LLVM IR, including IR produced by an ordinary static compiler (e.g.
clang) that uses constructs like symbol linkage and visibility, and weak [3]_
and common symbol definitions.
To see how this works, imagine a program ``foo`` which links against a pair
of dynamic libraries: ``libA`` and ``libB``. On the command line, building this
program might look like:
.. code-block:: bash
$ clang++ -shared -o libA.dylib a1.cpp a2.cpp
$ clang++ -shared -o libB.dylib b1.cpp b2.cpp
$ clang++ -o myapp myapp.cpp -L. -lA -lB
$ ./myapp
In ORC, this would translate into API calls on a hypothetical CXXCompilingLayer
(with error checking omitted for brevity) as:
.. code-block:: c++
ExecutionSession ES;
RTDyldObjectLinkingLayer ObjLinkingLayer(
ES, []() { return std::make_unique<SectionMemoryManager>(); });
CXXCompileLayer CXXLayer(ES, ObjLinkingLayer);
// Create JITDylib "A" and add code to it using the CXX layer.
auto &LibA = ES.createJITDylib("A");
CXXLayer.add(LibA, MemoryBuffer::getFile("a1.cpp"));
CXXLayer.add(LibA, MemoryBuffer::getFile("a2.cpp"));
// Create JITDylib "B" and add code to it using the CXX layer.
auto &LibB = ES.createJITDylib("B");
CXXLayer.add(LibB, MemoryBuffer::getFile("b1.cpp"));
CXXLayer.add(LibB, MemoryBuffer::getFile("b2.cpp"));
// Create and specify the search order for the main JITDylib. This is
// equivalent to a "links against" relationship in a command-line link.
auto &MainJD = ES.createJITDylib("main");
MainJD.addToLinkOrder(&LibA);
MainJD.addToLinkOrder(&LibB);
CXXLayer.add(MainJD, MemoryBuffer::getFile("main.cpp"));
// Look up the JIT'd main, cast it to a function pointer, then call it.
auto MainSym = ExitOnErr(ES.lookup({&MainJD}, "main"));
auto *Main = (int(*)(int, char*[]))MainSym.getAddress();
int Result = Main(...);
This example tells us nothing about *how* or *when* compilation will happen.
That will depend on the implementation of the hypothetical CXXCompilingLayer.
The same linker-based symbol resolution rules will apply regardless of that
implementation, however. For example, if a1.cpp and a2.cpp both define a
function "foo" then ORCv2 will generate a duplicate definition error. On the
other hand, if a1.cpp and b1.cpp both define "foo" there is no error (different
dynamic libraries may define the same symbol). If main.cpp refers to "foo", it
should bind to the definition in LibA rather than the one in LibB, since
main.cpp is part of the "main" dylib, and the main dylib links against LibA
before LibB.
Many JIT clients will have no need for this strict adherence to the usual
ahead-of-time linking rules, and should be able to get by just fine by putting
all of their code in a single JITDylib. However, clients who want to JIT code
for languages/projects that traditionally rely on ahead-of-time linking (e.g.
C++) will find that this feature makes life much easier.
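The link-order rules above can be modelled in a few lines of stand-alone C++. The ``MiniDylib`` type and its members are invented for illustration only (the real JITDylib API is much richer): lookup consults the dylib's own table first, then its link order left to right, so the first definition found wins, mirroring the main/LibA/LibB example.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Invented miniature of JITDylib symbol resolution (not the real ORC API):
// each dylib owns a symbol table and a "links against" list that is searched
// in order, so the first definition found wins.
struct MiniDylib {
  std::map<std::string, uint64_t> Symbols;
  std::vector<const MiniDylib *> LinkOrder;

  // Search this dylib first, then its link order left to right.
  // A return value of 0 means "not found".
  uint64_t lookup(const std::string &Name) const {
    auto I = Symbols.find(Name);
    if (I != Symbols.end())
      return I->second;
    for (const MiniDylib *D : LinkOrder)
      if (uint64_t Addr = D->lookup(Name))
        return Addr;
    return 0;
  }
};
```

With ``main`` linking against LibA before LibB, a ``foo`` defined in both dylibs binds to LibA's copy, exactly as in the command-line link analogy.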
Symbol lookup in ORC serves two other important functions, beyond providing
addresses for symbols: (1) It triggers compilation of the symbol(s) searched for
(if they have not been compiled already), and (2) it provides the
synchronization mechanism for concurrent compilation. The pseudo-code for the
lookup process is:
.. code-block:: none
construct a query object from a query set and query handler
lock the session
lodge query against requested symbols, collect required materializers (if any)
unlock the session
dispatch materializers (if any)
In this context a materializer is something that provides a working definition
of a symbol upon request. Usually materializers are just wrappers for compilers,
but they may also wrap a jit-linker directly (if the program representation
backing the definitions is an object file), or may even be a class that writes
bits directly into memory (for example, if the definitions are
stubs). Materialization is the blanket term for any actions (compiling, linking,
splatting bits, registering with runtimes, etc.) that are required to generate a
symbol definition that is safe to call or access.
As each materializer completes its work it notifies the JITDylib, which in turn
notifies any query objects that are waiting on the newly materialized
definitions. Each query object maintains a count of the number of symbols that
it is still waiting on, and once this count reaches zero the query object calls
the query handler with a *SymbolMap* (a map of symbol names to addresses)
describing the result. If any symbol fails to materialize the query immediately
calls the query handler with an error.
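The count-down behaviour of a query object can be sketched as a self-contained C++ class. ``PendingQuery``, ``notifyResolved``, and the 0-means-unresolved convention are inventions of this sketch, not ORC API; the point is only that the handler fires exactly once, when the last awaited symbol resolves.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

using SymbolMap = std::map<std::string, uint64_t>;

// Invented model of a lookup query: it waits on a set of symbols and calls
// its handler exactly once, when the unresolved count reaches zero.
class PendingQuery {
public:
  PendingQuery(const std::vector<std::string> &Symbols,
               std::function<void(const SymbolMap &)> OnComplete)
      : Remaining(Symbols.size()), Handler(std::move(OnComplete)) {
    for (const auto &S : Symbols)
      Result[S] = 0; // 0 stands for "not yet materialized"
  }

  // Called by the session as each materializer finishes a symbol.
  void notifyResolved(const std::string &Name, uint64_t Addr) {
    auto I = Result.find(Name);
    if (I == Result.end() || I->second != 0)
      return; // not one of ours, or already resolved
    I->second = Addr;
    if (--Remaining == 0)
      Handler(Result); // last symbol arrived: deliver the SymbolMap
  }

private:
  size_t Remaining;
  SymbolMap Result;
  std::function<void(const SymbolMap &)> Handler;
};
```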
The collected materialization units are sent to the ExecutionSession to be
dispatched, and the dispatch behavior can be set by the client. By default each
materializer is run on the calling thread. Clients are free to create new
threads to run materializers, or to send the work to a work queue for a thread
pool (this is what LLJIT/LLLazyJIT do).
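A minimal model of this pluggable dispatch policy, with invented ``InlineDispatcher``/``ThreadDispatcher`` types and materializers modelled as plain ``std::function`` values: inline dispatch mirrors ORC's default of running work on the calling thread, while the threaded variant shows what a client-installed dispatcher might do.

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <thread>
#include <vector>

// Runs each job synchronously on the calling thread (ORC's default policy).
struct InlineDispatcher {
  void dispatch(std::function<void()> Work) { Work(); }
  void wait() {} // nothing outstanding: work already ran synchronously
};

// Hands each job to a fresh thread, as a client-supplied dispatcher might
// (a real client would more likely use a bounded thread pool).
struct ThreadDispatcher {
  std::vector<std::thread> Workers;
  void dispatch(std::function<void()> Work) {
    Workers.emplace_back(std::move(Work)); // run concurrently
  }
  void wait() {
    for (auto &T : Workers)
      T.join();
    Workers.clear();
  }
};
```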
Top Level APIs
==============
Many of ORC's top-level APIs are visible in the example above:
- *ExecutionSession* represents the JIT'd program and provides context for the
JIT: It contains the JITDylibs, error reporting mechanisms, and dispatches the
materializers.
- *JITDylibs* provide the symbol tables.
- *Layers* (ObjLinkingLayer and CXXLayer) are wrappers around compilers and
allow clients to add uncompiled program representations supported by those
compilers to JITDylibs.
Several other important APIs are used implicitly. JIT clients need not be aware
of them, but Layer authors will use them:
- *MaterializationUnit* - When XXXLayer::add is invoked it wraps the given
program representation (in this example, C++ source) in a MaterializationUnit,
which is then stored in the JITDylib. MaterializationUnits are responsible for
describing the definitions they provide, and for unwrapping the program
representation and passing it back to the layer when compilation is required
(this ownership shuffle makes writing thread-safe layers easier, since the
ownership of the program representation will be passed back on the stack,
rather than having to be fished out of a Layer member, which would require
synchronization).
- *MaterializationResponsibility* - When a MaterializationUnit hands a program
representation back to the layer it comes with an associated
MaterializationResponsibility object. This object tracks the definitions
that must be materialized and provides a way to notify the JITDylib once they
are either successfully materialized or a failure occurs.
Absolute Symbols, Aliases, and Reexports
========================================
ORC makes it easy to define symbols with absolute addresses, or symbols that
are simply aliases of other symbols:
Absolute Symbols
----------------
Absolute symbols are symbols that map directly to addresses without requiring
further materialization, for example: "foo" = 0x1234. One use case for
absolute symbols is allowing resolution of process symbols. E.g.
.. code-block:: c++
JD.define(absoluteSymbols(SymbolMap({
    { Mangle("printf"),
      { pointerToJITTargetAddress(&printf),
        JITSymbolFlags::Callable } }
  })));
With this mapping established, code added to the JIT can refer to printf
symbolically rather than requiring the address of printf to be "baked in".
This in turn allows cached versions of the JIT'd code (e.g. compiled objects)
to be re-used across JIT sessions as the JIT'd code no longer changes, only the
absolute symbol definition does.
For process and library symbols the DynamicLibrarySearchGenerator utility (See
:ref:`How to Add Process and Library Symbols to JITDylibs
<ProcessAndLibrarySymbols>`) can be used to automatically build absolute
symbol mappings for you. However the absoluteSymbols function is still useful
for making non-global objects in your JIT visible to JIT'd code. For example,
imagine that your JIT standard library needs access to your JIT object to make
some calls. We could bake the address of your object into the library, but then
it would need to be recompiled for each session:
.. code-block:: c++
// From standard library for JIT'd code:
class MyJIT {
public:
void log(const char *Msg);
};
void log(const char *Msg) { ((MyJIT*)0x1234)->log(Msg); }
We can turn this into a symbolic reference in the JIT standard library:
.. code-block:: c++
extern MyJIT *__MyJITInstance;
void log(const char *Msg) { __MyJITInstance->log(Msg); }
And then make our JIT object visible to the JIT standard library with an
absolute symbol definition when the JIT is started:
.. code-block:: c++
MyJIT J = ...;
auto &JITStdLibJD = ... ;
JITStdLibJD.define(absoluteSymbols(SymbolMap({
    { Mangle("__MyJITInstance"),
      { pointerToJITTargetAddress(&J), JITSymbolFlags() } }
  })));
Aliases and Reexports
---------------------
Aliases and reexports allow you to define new symbols that map to existing
symbols. This can be useful for changing linkage relationships between symbols
across sessions without having to recompile code. For example, imagine that
JIT'd code has access to a log function, ``void log(const char*)`` for which
there are two implementations in the JIT standard library: ``log_fast`` and
``log_detailed``. Your JIT can choose which one of these definitions will be
used when the ``log`` symbol is referenced by setting up an alias at JIT startup
time:
.. code-block:: c++
auto &JITStdLibJD = ... ;
auto LogImplementationSymbol =
Verbose ? Mangle("log_detailed") : Mangle("log_fast");
JITStdLibJD.define(
  symbolAliases(SymbolAliasMap({
      { Mangle("log"),
        { LogImplementationSymbol,
          JITSymbolFlags::Exported | JITSymbolFlags::Callable } }
    })));
The ``symbolAliases`` function allows you to define aliases within a single
JITDylib. The ``reexports`` function provides the same functionality, but
operates across JITDylib boundaries. E.g.
.. code-block:: c++
auto &JD1 = ... ;
auto &JD2 = ... ;
// Make 'bar' in JD2 an alias for 'foo' from JD1.
JD2.define(
  reexports(JD1, SymbolAliasMap({
      { Mangle("bar"), { Mangle("foo"), JITSymbolFlags::Exported } }
    })));
The reexports utility can be handy for composing a single JITDylib interface by
re-exporting symbols from several other JITDylibs.
.. _Laziness:
Laziness
========
Laziness in ORC is provided by a utility called "lazy reexports". A lazy
reexport is similar to a regular reexport or alias: It provides a new name for
an existing symbol. Unlike regular reexports however, lookups of lazy reexports
do not trigger immediate materialization of the reexported symbol. Instead, they
only trigger materialization of a function stub. This function stub is
initialized to point at a *lazy call-through*, which provides reentry into the
JIT. If the stub is called at runtime then the lazy call-through will look up
the reexported symbol (triggering materialization for it if necessary), update
the stub (to call directly to the reexported symbol on subsequent calls), and
then return via the reexported symbol. By re-using the existing symbol lookup
mechanism, lazy reexports inherit the same concurrency guarantees: calls to lazy
reexports can be made from multiple threads concurrently, and the reexported
symbol can be in any state of compilation (uncompiled, already in the process of
being compiled, or already compiled) and the call will succeed. This allows
laziness to be safely mixed with features like remote compilation, concurrent
compilation, concurrent JIT'd code, and speculative compilation.
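The stub-patching behaviour can be illustrated with an invented, deliberately single-threaded C++ model. Real ORC stubs are JIT'd trampolines with the full concurrency guarantees described above; this sketch has neither, and the counters exist only so the laziness is observable.

```cpp
#include <cassert>

static int Calls = 0;    // times the real body ran
static int Compiles = 0; // times the call-through had to materialize it

static int realBody() { ++Calls; return 42; }

using FnPtr = int (*)();
static int callThrough();         // resolves the body on first use
static FnPtr Stub = &callThrough; // the lazy entry point handed to clients

static int callThrough() {
  ++Compiles;       // reached on the first call only
  Stub = &realBody; // patch: subsequent calls go straight to the body
  return Stub();    // finish this call through the resolved symbol
}
```

Nothing is "compiled" until the stub is first called; after that, the call-through is out of the path entirely, which is exactly the update-then-bypass behaviour of a lazy reexport.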
There is one other key difference between regular reexports and lazy reexports
that some clients must be aware of: The address of a lazy reexport will be
*different* from the address of the reexported symbol (whereas a regular
reexport is guaranteed to have the same address as the reexported symbol).
Clients who care about pointer equality will generally want to use the address
of the reexport as the canonical address of the reexported symbol. This will
allow the address to be taken without forcing materialization of the reexport.
Usage example:
If JITDylib ``JD`` contains definitions for symbols ``foo_body`` and
``bar_body``, we can create lazy entry points ``foo`` and ``bar`` in JITDylib
``JD2`` by calling:
.. code-block:: c++
auto ReexportFlags = JITSymbolFlags::Exported | JITSymbolFlags::Callable;
JD2.define(
lazyReexports(CallThroughMgr, StubsMgr, JD,
SymbolAliasMap({
{ Mangle("foo"), { Mangle("foo_body"), ReexportFlags } },
{ Mangle("bar"), { Mangle("bar_body"), ReexportFlags } }
}));
A full example of how to use lazyReexports with the LLJIT class can be found at
``llvm_project/llvm/examples/LLJITExamples/LLJITWithLazyReexports``.
Supporting Custom Compilers
===========================
TBD.
.. _transitioning_orcv1_to_orcv2:
Transitioning from ORCv1 to ORCv2
=================================
Since LLVM 7.0, new ORC development work has focused on adding support for
concurrent JIT compilation. The new APIs (including new layer interfaces and
implementations, and new utilities) that support concurrency are collectively
referred to as ORCv2, and the original, non-concurrent layers and utilities
are now referred to as ORCv1.
The majority of the ORCv1 layers and utilities were renamed with a 'Legacy'
prefix in LLVM 8.0, and have deprecation warnings attached in LLVM 9.0. In LLVM
12.0 ORCv1 will be removed entirely.
Transitioning from ORCv1 to ORCv2 should be easy for most clients. Most of the
ORCv1 layers and utilities have ORCv2 counterparts [2]_ that can be directly
substituted. However there are some design differences between ORCv1 and ORCv2
to be aware of:
1. ORCv2 fully adopts the JIT-as-linker model that began with MCJIT. Modules
(and other program representations, e.g. Object Files) are no longer added
directly to JIT classes or layers. Instead, they are added to ``JITDylib``
instances *by* layers. The ``JITDylib`` determines *where* the definitions
reside, the layers determine *how* the definitions will be compiled.
Linkage relationships between ``JITDylibs`` determine how inter-module
references are resolved, and symbol resolvers are no longer used. See the
section `Design Overview`_ for more details.
Unless multiple JITDylibs are needed to model linkage relationships, ORCv1
clients should place all code in a single JITDylib.
MCJIT clients should use LLJIT (see `LLJIT and LLLazyJIT`_), and can place
code in LLJIT's default created main JITDylib (See
``LLJIT::getMainJITDylib()``).
2. All JIT stacks now need an ``ExecutionSession`` instance. ExecutionSession
manages the string pool, error reporting, synchronization, and symbol
lookup.
3. ORCv2 uses uniqued strings (``SymbolStringPtr`` instances) rather than
string values in order to reduce memory overhead and improve lookup
performance. See the subsection `How to manage symbol strings`_.
4. IR layers require ThreadSafeModule instances, rather than
std::unique_ptr<Module>s. ThreadSafeModule is a wrapper that ensures that
Modules that use the same LLVMContext are not accessed concurrently.
See `How to use ThreadSafeModule and ThreadSafeContext`_.
5. Symbol lookup is no longer handled by layers. Instead, there is a
   ``lookup`` method on ExecutionSession that takes a list of JITDylibs to scan.
.. code-block:: c++
ExecutionSession ES;
JITDylib &JD1 = ...;
JITDylib &JD2 = ...;
auto Sym = ES.lookup({&JD1, &JD2}, ES.intern("_main"));
6. Module removal is not yet supported. There is no equivalent of the
layer concept's removeModule/removeObject methods. Work on resource tracking
and removal in ORCv2 is ongoing.
For code examples and suggestions of how to use the ORCv2 APIs, please see
the section `How-tos`_.
How-tos
=======
How to manage symbol strings
----------------------------
Symbol strings in ORC are uniqued to improve lookup performance, reduce memory
overhead, and allow symbol names to function as efficient keys. To get the
unique ``SymbolStringPtr`` for a string value, call the
``ExecutionSession::intern`` method:
.. code-block:: c++
ExecutionSession ES;
/// ...
auto MainSymbolName = ES.intern("main");
If you wish to perform lookup using the C/IR name of a symbol you will also
need to apply the platform linker-mangling before interning the string. On
Linux this mangling is a no-op, but on other platforms it usually involves
adding a prefix to the string (e.g. '_' on Darwin). The mangling scheme is
based on the DataLayout for the target. Given a DataLayout and an
ExecutionSession, you can create a MangleAndInterner function object that
will perform both jobs for you:
.. code-block:: c++
ExecutionSession ES;
const DataLayout &DL = ...;
MangleAndInterner Mangle(ES, DL);
// ...
// Portable IR-symbol-name lookup:
auto Sym = ES.lookup({&MainJD}, Mangle("main"));
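A toy version of the mangle-then-intern pipeline, using an invented ``MiniMangler`` class in place of MangleAndInterner (the real class derives the prefix from a DataLayout; here it is passed in directly): equal names always come back as a single stable address, which is what makes interned strings cheap lookup keys.

```cpp
#include <cassert>
#include <set>
#include <string>

// Invented stand-in for MangleAndInterner: apply the platform global prefix
// ('\0' meaning none, '_' on Darwin), then unique the result in a pool so
// that equal names always compare by one stable address.
class MiniMangler {
public:
  explicit MiniMangler(char GlobalPrefix) : Prefix(GlobalPrefix) {}

  const std::string *operator()(const std::string &IRName) {
    std::string Mangled = Prefix ? std::string(1, Prefix) + IRName : IRName;
    // std::set nodes are stable, so each node's address is a unique key.
    return &*Pool.insert(std::move(Mangled)).first;
  }

private:
  char Prefix;
  std::set<std::string> Pool;
};
```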
How to create JITDylibs and set up linkage relationships
--------------------------------------------------------
In ORC, all symbol definitions reside in JITDylibs. JITDylibs are created by
calling the ``ExecutionSession::createJITDylib`` method with a unique name:
.. code-block:: c++
ExecutionSession ES;
auto &JD = ES.createJITDylib("libFoo.dylib");
The JITDylib is owned by the ``ExecutionSession`` instance and will be freed
when it is destroyed.
How to use ThreadSafeModule and ThreadSafeContext
-------------------------------------------------
ThreadSafeModule and ThreadSafeContext are wrappers around Modules and
LLVMContexts respectively. A ThreadSafeModule is a pair of a
std::unique_ptr<Module> and a (possibly shared) ThreadSafeContext value. A
ThreadSafeContext is a pair of a std::unique_ptr<LLVMContext> and a lock.
This design serves two purposes: providing a locking scheme and lifetime
management for LLVMContexts. The ThreadSafeContext may be locked to prevent
accidental concurrent access by two Modules that use the same LLVMContext.
The underlying LLVMContext is freed once all ThreadSafeContext values pointing
to it are destroyed, allowing the context memory to be reclaimed as soon as
the Modules referring to it are destroyed.
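The shared-ownership half of this design can be modelled with ``std::shared_ptr``. The ``TSCtx``/``TSModule``/``FakeContext`` types below are invented stand-ins, not the real classes: the context and its lock live in one reference-counted block, every module keeps a reference, and the block is freed only after the last holder goes away.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <string>
#include <utility>

struct FakeContext { std::string Name; };

// Invented model of ThreadSafeContext: a shared (context, lock) pair.
struct TSCtx {
  std::shared_ptr<std::pair<FakeContext, std::mutex>> State;

  explicit TSCtx(std::string Name)
      : State(std::make_shared<std::pair<FakeContext, std::mutex>>()) {
    State->first.Name = std::move(Name);
  }

  // Lock guarding all Modules attached to this context.
  std::unique_lock<std::mutex> getLock() {
    return std::unique_lock<std::mutex>(State->second);
  }

  long useCount() const { return State.use_count(); }
};

// Invented model of ThreadSafeModule: a module name plus a context reference.
struct TSModule {
  std::string ModuleName;
  TSCtx Ctx; // shares ownership of the underlying context
};
```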
ThreadSafeContexts can be explicitly constructed from a
std::unique_ptr<LLVMContext>:
.. code-block:: c++
ThreadSafeContext TSCtx(std::make_unique<LLVMContext>());
ThreadSafeModules can be constructed from a pair of a std::unique_ptr<Module>
and a ThreadSafeContext value. ThreadSafeContext values may be shared between
multiple ThreadSafeModules:
.. code-block:: c++
ThreadSafeModule TSM1(
std::make_unique<Module>("M1", *TSCtx.getContext()), TSCtx);
ThreadSafeModule TSM2(
std::make_unique<Module>("M2", *TSCtx.getContext()), TSCtx);
Before using a ThreadSafeContext, clients should ensure that either the context
is only accessible on the current thread, or that the context is locked. In the
example above (where the context is never locked) we rely on the fact that
``TSM1``, ``TSM2``, and ``TSCtx`` are all created on one thread. If a context is
going to be shared between threads then it must be locked before accessing or
creating any Modules attached to it. E.g.
.. code-block:: c++
ThreadSafeContext TSCtx(std::make_unique<LLVMContext>());
ThreadPool TP(NumThreads);
JITStack J;
for (auto &ModulePath : ModulePaths) {
TP.async(
[&]() {
auto Lock = TSCtx.getLock();
auto M = loadModuleOnContext(ModulePath, TSCtx.getContext());
J.addModule(ThreadSafeModule(std::move(M), TSCtx));
});
}
TP.wait();
To make exclusive access to Modules easier to manage the ThreadSafeModule class
provides a convenience function, ``withModuleDo``, that implicitly (1) locks the
associated context, (2) runs a given function object, (3) unlocks the context,
and (4) returns the result generated by the function object. E.g.
.. code-block:: c++
ThreadSafeModule TSM = getModule(...);
// Count the functions in the module:
size_t NumFunctionsInModule =
TSM.withModuleDo(
[](Module &M) { // <- Context locked before entering lambda.
return M.size();
} // <- Context unlocked after leaving.
);
Clients wishing to maximize possibilities for concurrent compilation will want
to create every new ThreadSafeModule on a new ThreadSafeContext. For this
reason a convenience constructor for ThreadSafeModule is provided that implicitly
constructs a new ThreadSafeContext value from a std::unique_ptr<LLVMContext>:
.. code-block:: c++
// Maximize concurrency opportunities by loading every module on a
// separate context.
for (const auto &IRPath : IRPaths) {
auto Ctx = std::make_unique<LLVMContext>();
auto M = std::make_unique<Module>("M", *Ctx);
CompileLayer.add(MainJD, ThreadSafeModule(std::move(M), std::move(Ctx)));
}
Clients who plan to run single-threaded may choose to save memory by loading
all modules on the same context:
.. code-block:: c++
// Save memory by using one context for all Modules:
ThreadSafeContext TSCtx(std::make_unique<LLVMContext>());
for (const auto &IRPath : IRPaths) {
ThreadSafeModule TSM(parsePath(IRPath, *TSCtx.getContext()), TSCtx);
CompileLayer.add(MainJD, std::move(TSM));
}
.. _ProcessAndLibrarySymbols:
How to Add Process and Library Symbols to the JITDylibs
=======================================================
JIT'd code typically needs access to symbols in the host program or in
supporting libraries. References to process symbols can be "baked in" to code
as it is compiled by turning external references into pre-resolved integer
constants, however this ties the JIT'd code to the current process's virtual
memory layout (meaning that it can not be cached between runs) and makes
debugging lower level program representations difficult (as all external
references are opaque integer values). A better solution is to maintain symbolic
external references and let the jit-linker bind them for you at runtime. To
allow the JIT linker to find these external definitions their addresses must
be added to a JITDylib that the JIT'd definitions link against.
Adding definitions for external symbols could be done using the absoluteSymbols
function:
.. code-block:: c++
const DataLayout &DL = getDataLayout();
MangleAndInterner Mangle(ES, DL);
auto &JD = ES.createJITDylib("main");
JD.define(
absoluteSymbols({
{ Mangle("puts"), pointerToJITTargetAddress(&puts)},
{ Mangle("gets"), pointerToJITTargetAddress(&gets)}
}));
Manually adding absolute symbols for a large or changing interface is cumbersome
however, so ORC provides an alternative to generate new definitions on demand:
*definition generators*. If a definition generator is attached to a JITDylib,
then any unsuccessful lookup on that JITDylib will fall back to calling the
definition generator, and the definition generator may choose to generate a new
definition for the missing symbols. Of particular use here is the
``DynamicLibrarySearchGenerator`` utility. This can be used to reflect the whole
exported symbol set of the process or a specific dynamic library, or a subset
of either of these determined by a predicate.
For example, to load the whole interface of a runtime library:
.. code-block:: c++
const DataLayout &DL = getDataLayout();
auto &JD = ES.createJITDylib("main");
JD.setGenerator(DynamicLibrarySearchGenerator::Load("/path/to/lib",
DL.getGlobalPrefix()));
// IR added to JD can now link against all symbols exported by the library
// at '/path/to/lib'.
CompileLayer.add(JD, loadModule(...));
Or, to expose an allowed set of symbols from the main process:
.. code-block:: c++
const DataLayout &DL = getDataLayout();
MangleAndInterner Mangle(ES, DL);
auto &JD = ES.createJITDylib("main");
DenseSet<SymbolStringPtr> AllowList({
Mangle("puts"),
Mangle("gets")
});
// Use GetForCurrentProcess with a predicate function that checks the
// allowed list.
JD.setGenerator(
DynamicLibrarySearchGenerator::GetForCurrentProcess(
DL.getGlobalPrefix(),
[&](const SymbolStringPtr &S) { return AllowList.count(S); }));
// IR added to JD can now link against any symbols exported by the process
// and contained in the list.
CompileLayer.add(JD, loadModule(...));
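The fall-back behaviour of a definition generator can be modelled in plain C++. ``GenDylib`` and its ``Generator`` member are invented, and the ``Process`` table in the usage below stands in for real process symbols: the dylib's own table is consulted first, and only on a miss does lookup ask the generator, which may add a definition so later lookups hit the table directly.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>
#include <string>

// Invented sketch of a JITDylib with an attached definition generator.
struct GenDylib {
  std::map<std::string, uint64_t> Table;
  // Returns true if it generated a definition for the requested name.
  std::function<bool(const std::string &, GenDylib &)> Generator;

  uint64_t lookup(const std::string &Name) {
    auto I = Table.find(Name);
    if (I != Table.end())
      return I->second;
    if (Generator && Generator(Name, *this)) // fall back to the generator
      return Table.at(Name);
    return 0; // unresolved
  }
};
```

An allow-list predicate, as in the GetForCurrentProcess example, simply becomes a generator that declines to produce definitions for filtered-out names.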
Roadmap
=======
ORC is still undergoing active development. Some current and future works are
listed below.
Current Work
------------
1. **TargetProcessControl: Improvements to in-tree support for out-of-process
execution**
The ``TargetProcessControl`` API provides various operations on the JIT
target process (the one which will execute the JIT'd code), including
memory allocation, memory writes, function execution, and process queries
(e.g. for the target triple). By targeting this API new components can be
developed which will work equally well for in-process and out-of-process
JITing.
2. **ORC RPC based TargetProcessControl implementation**
An ORC RPC based implementation of the ``TargetProcessControl`` API is
currently under development to enable easy out-of-process JITing via
file descriptors / sockets.
3. **Core State Machine Cleanup**
The core ORC state machine is currently implemented between JITDylib and
ExecutionSession. Methods are slowly being moved to ``ExecutionSession``. This
will tidy up the code base, and also allow us to support asynchronous removal
of JITDylibs (in practice deleting an associated state object in
ExecutionSession and leaving the JITDylib instance in a defunct state until
all references to it have been released).
Near Future Work
----------------
1. **ORC JIT Runtime Libraries**
We need a runtime library for JIT'd code. This would include things like
TLS registration, reentry functions, registration code for language runtimes
(e.g. Objective C and Swift) and other JIT specific runtime code. This should
be built in a similar manner to compiler-rt (possibly even as part of it).
2. **Remote jit_dlopen / jit_dlclose**
To more fully mimic the environment that static programs operate in we would
like JIT'd code to be able to "dlopen" and "dlclose" JITDylibs, running all of
their initializers/deinitializers on the current thread. This would require
support from the runtime library described above.
3. **Debugging support**
ORC currently supports the GDBRegistrationListener API when using RuntimeDyld
as the underlying JIT linker. We will need a new solution for JITLink based
platforms.
Further Future Work
-------------------
1. **Speculative Compilation**
ORC's support for concurrent compilation allows us to easily enable
*speculative* JIT compilation: compilation of code that is not needed yet,
but which we have reason to believe will be needed in the future. This can be
used to hide compile latency and improve JIT throughput. A proof-of-concept
example of speculative compilation with ORC has already been developed (see
``llvm/examples/SpeculativeJIT``). Future work on this is likely to focus on
re-using and improving existing profiling support (currently used by PGO) to
feed speculation decisions, as well as built-in tools to simplify use of
speculative compilation.
.. [1] Formats/architectures vary in terms of supported features. MachO and
ELF tend to have better support than COFF. Patches very welcome!
.. [2] The ``LazyEmittingLayer``, ``RemoteObjectClientLayer`` and
``RemoteObjectServerLayer`` do not have counterparts in the new
system. In the case of ``LazyEmittingLayer`` it was simply no longer
needed: in ORCv2, deferring compilation until symbols are looked up is
the default. The removal of ``RemoteObjectClientLayer`` and
``RemoteObjectServerLayer`` means that JIT stacks can no longer be split
across processes, however this functionality appears not to have been
used.
.. [3] Weak definitions are currently handled correctly within dylibs, but if
multiple dylibs provide a weak definition of a symbol then each will end
up with its own definition (similar to how weak definitions are handled
in Windows DLLs). This will be fixed in the future.
Data quality and integrity with SQLBucket
=========================================
On top of writing ETL, SQLBucket helps you organize the integrity checks you
may need on your ETL. The SQL for integrity checks follows only one convention:
it must return the result of your integrity check as a boolean on the alias
field **passed**. Everything is done in plain and simple SQL. You can have as
many other fields as you want.
The integrity SQL must reside in .sql files in the ``integrity`` folder within
a project.
You will find below some examples of how they should be written.
**No nulls in field**
If we were to guarantee a field does not contain any nulls, the following query
would check just that.
.. code-block:: sql

    select
        count(1) = 0 as passed
    from my_table
    where my_field is null
When an integrity check fails, SQLBucket logs the rows returned by your query
in the terminal, so you can easily review the failures. It is therefore
advisable to put the calculated result as well as the expected result in
separate fields of the query.
For the same check, the integrity query would look as follows:
.. code-block:: sql

    select
        0 as expected,
        count(1) as calculated,
        count(1) = 0 as passed
    from my_table
    where my_field is null
**Expected number of rows**
.. code-block:: sql

    select
        10 as expected,
        count(1) as calculated,
        count(1) = 10 as passed
    from my_table
**Non-negative values**
.. code-block:: sql

    select
        0 as expected,
        count(1) as calculated,
        count(1) = 0 as passed
    from my_table
    where positive_field < 0
**Only specific values**
.. code-block:: sql

    select
        0 as expected,
        count(1) as calculated,
        count(1) = 0 as passed
    from my_table
    where field not in ('high', 'low')
**Values must be unique**
.. code-block:: sql

    select
        count(distinct(my_field)) as expected,
        count(1) as calculated,
        count(1) = count(distinct(my_field)) as passed
    from my_table
Organization and best practice
------------------------------
The folder organization of integrity queries is completely up to the user, as
long as they are all contained in the ``integrity`` folder and return the
field ``passed`` as a boolean.
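To make the convention concrete, here is a minimal, hypothetical runner sketch (plain Python against an in-memory SQLite database, not SQLBucket's actual implementation) showing how a query following the ``passed`` convention can be evaluated:

```python
import sqlite3

def run_integrity_check(conn, sql):
    """Run an integrity query and report whether every returned row
    has a truthy 'passed' column (SQLite returns booleans as 0/1)."""
    cur = conn.execute(sql)
    columns = [d[0] for d in cur.description]
    passed_idx = columns.index("passed")
    rows = cur.fetchall()
    return all(bool(row[passed_idx]) for row in rows)

# Tiny in-memory fixture standing in for a real table.
conn = sqlite3.connect(":memory:")
conn.execute("create table my_table (my_field text)")
conn.executemany("insert into my_table values (?)", [("a",), ("b",)])

no_nulls = (
    "select 0 as expected, count(1) as calculated, "
    "count(1) = 0 as passed from my_table where my_field is null"
)
print(run_integrity_check(conn, no_nulls))  # True: no nulls yet
```

A real runner would load each ``.sql`` file from the ``integrity`` folder and apply the same per-row check on ``passed``.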
It is also possible to have multiple checks within one SQL file, simply
by using ``union all`` on the query. This is how it could be done:
.. code-block:: sql

    select
        'field must be unique' as integrity_name,
        count(1) = count(distinct(my_field)) as passed,
        count(distinct(my_field)) as expected,
        count(1) as calculated
    from my_table

    union all

    select
        'only values high/low' as integrity_name,
        count(1) = 0 as passed,
        0 as expected,
        count(1) as calculated
    from my_table
    where field not in ('high', 'low')
More advanced
-------------
You may want to check that the aggregation from your ETL worked as intended,
and make sure that the data in your raw tables equals the aggregation in the
summary tables.
The corresponding integrity SQL could be written as follows:
.. code-block:: sql

    select
        (select sum(revenue) from raw_table) as source,
        (select sum(revenue) from summary_table) as target,
        (select sum(revenue) from raw_table)
        =
        (select sum(revenue) from summary_table) as passed
This would work if revenues were integers only. In practice, however, costs
and revenues are stored as floating-point values, so the exact-equality check
above is likely to fail.
For comparing floats, you could instead do it this way:
.. code-block:: sql

    select
        (select sum(revenue) from raw_table) as source,
        (select sum(revenue) from summary_table) as target,
        abs((select sum(revenue) from raw_table) - (select sum(revenue) from summary_table)) < 1 as passed
This integrity check passes if the difference between the two sums is under 1.
The tolerance could be set to something smaller if needed.
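To see why exact equality on sums of floats is fragile, the small Python experiment below (illustrative only, not part of SQLBucket) sums the same list of revenues with two different algorithms; the totals can disagree in their last bits, yet the absolute-difference check still passes:

```python
import math
import random

random.seed(0)
revenues = [random.uniform(0.0, 100.0) for _ in range(100_000)]

naive_total = sum(revenues)           # left-to-right float addition
accurate_total = math.fsum(revenues)  # correctly rounded summation

# The two totals may differ slightly due to accumulated rounding error,
# but the tolerance-based comparison used above still passes.
print(abs(naive_total - accurate_total) < 1)  # True
```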
Tricks with Jinja2
------------------
SQLBucket works with Jinja2 under the hood, which means you can have for loops
or if/else execution flows in your integrity checks, as well as create some
macros. See some examples below:
**No nulls in multiple fields**
.. code-block:: sql

    {% set fields = ['field_a', 'field_b', 'field_c', 'field_d', 'field_e'] %}
    {% for field in fields %}
    select
        {{ field }} as '{{ field }}',
        0 as expected,
        count(1) as calculated,
        count(1) = 0 as passed
    from my_table
    where {{ field }} is null
    {{ "union all" if not loop.last }}
    {% endfor %}
This effectively combines five checks into one query.
**No nulls in multiple fields via macro**
When creating your SQLBucket object, you can pass a folder path for Jinja
macros as a parameter.
.. code-block:: python

    bucket = SQLBucket(macro_folder='/path/to/macro/folder')
This gives you the opportunity to create Jinja macros to simplify your code
when writing integrity checks. To keep the same example as above, we could
create the following macro to check for nulls in multiple fields of a table.
.. code-block:: jinja

    {% macro check_no_nulls(table_name, list_of_fields) %}
    {% for field in list_of_fields %}
    select
        {{ field }} as '{{ field }}',
        0 as expected,
        count(1) as calculated,
        count(1) = 0 as passed
    from {{ table_name }}
    where {{ field }} is null
    {{ "union all" if not loop.last }}
    {% endfor %}
    {% endmacro %}
Then, for every table you want to check for nulls, it takes only the
following (assuming the macro is written in a file called macros.jinja2):
.. code-block:: jinja

    {% import 'macros.jinja2' as macros %}

    {{ macros.check_no_nulls('table_name', ['field_a', 'field_b', 'field_c', 'field_d']) }}
Using macros is an excellent way to avoid writing the same SQL over and over.
The SQLBucket library contains a list of macros already coded for immediate use.
You can refer to the `list of macros on its dedicated documentation page`_.
.. _list of macros on its dedicated documentation page: https://github.com/socialpoint-labs/sqlbucket/blob/master/documentation/macros.rst
pymatgen\_diffusion package
===========================
Subpackages
-----------
.. toctree::

   pymatgen_diffusion.aimd
   pymatgen_diffusion.neb
Module contents
---------------
.. automodule:: pymatgen_diffusion
   :members:
   :undoc-members:
   :show-inheritance:
Correlation
=======================================
|Travis|_ |Coveralls|_ |Doc|_ |CircleCI|_
.. |Travis| image:: https://travis-ci.org/CEA-COSMIC/pysap-mri.svg?branch=master
.. _Travis: https://travis-ci.org/CEA-COSMIC/pysap-mri
.. |Coveralls| image:: https://coveralls.io/repos/CEA-COSMIC/pysap-mri/badge.svg?branch=master&kill_cache=1
.. _Coveralls: https://coveralls.io/github/CEA-COSMIC/pysap-mri
.. |Doc| image:: https://readthedocs.org/projects/pysap-mri/badge/?version=latest
.. _Doc: https://pysap-mri.readthedocs.io/en/latest/?badge=latest
.. |CircleCI| image:: https://circleci.com/gh/CEA-COSMIC/pysap-mri.svg?style=svg
.. _CircleCI: https://circleci.com/gh/CEA-COSMIC/pysap-mri
pySAP-mri
=========
Python Sparse data Analysis Package external MRI plugin.
This work is made available by a community of people, among which the
CEA Neurospin UNATI and CEA CosmoStat laboratories, in particular A. Grigis,
J.-L. Starck, P. Ciuciu, and S. Farrens.
Installation instructions
=========================
Install python-pySAP using ``pip install python-pySAP``, then install pysap-mri by running its ``setup.py``.
Note: if you want to use the undecimated wavelet transform, please point the ``$PATH`` environment variable to
the pysap external binary directory:

``export PATH=$PATH:/path-to-pysap/build/temp.linux-x86_64-<PYTHON_VERSION>/extern/bin/``
Special Installations
=====================
`gpuNUFFT <https://www.opensourceimaging.org/project/gpunufft/>`_
-----------------------------------------------------------------
For faster NUFFT operation, pysap-mri uses gpuNUFFT to run the NUFFT on the GPU. To install gpuNUFFT, please use:
``pip install gpuNUFFT``
We are still in the process of merging this and getting a pip release for gpuNUFFT. Note that you can still use the CPU
NFFT without installing the package.
Important links
===============
- Official pySAP source code repo: https://github.com/cea-cosmic/pysap
- pySAP HTML documentation (last stable release): http://cea-cosmic.github.io/pysap
pygeonlp.api.parser module
==========================
.. py:module:: pygeonlp.api.parser
.. autoclass:: pygeonlp.api.parser.Parser
   :members:
   :special-members: __init__
   :undoc-members:
   :show-inheritance:

.. autoclass:: pygeonlp.api.parser.Statistics
   :members:
   :undoc-members:
   :show-inheritance:

.. autoclass:: pygeonlp.api.parser.ParseError
   :members:
   :undoc-members:
   :show-inheritance:
************
Introduction
************
This package contains all the source code to reproduce the numerical
experiments described in the paper. It contains a parallelized implementation
of the Binary Sparse Coding (BSC) [1], Gaussian Sparse Coding (GSC) [2],
Maximum Causes Analysis (MCA) [3], Maximum Magnitude Causes Analysis (MMCA) [4],
Ternary Sparse Coding (TSC) [5], and Discrete Sparse Coding [7] models. All these probabilistic generative models
are trained using a truncated Expectation Maximization (EM) algorithm [6].
Software dependencies
=====================
Python related dependencies can be installed using:
.. code-block:: bash

    $ pip install -r requirements.txt
MPI4PY also requires a system-level installation of MPI.
On macOS you can install one using Homebrew:

.. code-block:: bash

    $ brew install mpich
For Ubuntu systems:

.. code-block:: bash

    $ sudo apt install mpich
For any other system, you may wish to review the relevant section of the MPI4PY `installation guidelines <https://mpi4py.readthedocs.io/en/stable/appendix.html#building-mpi>`_.
Installation
============
The recommended approach to installing the framework is to obtain
the most recent stable version from `github.com`:
.. code-block:: bash

    $ git clone git@github.com:mlold/prosper.git
    $ cd prosper
    $ python setup.py install
After installation you should run the test suite to ensure all necessary
dependencies are installed correctly and that everything works as expected:
.. code-block:: bash

    $ nosetests -v
Optionally you can replace the final line with:
.. code-block:: bash

    $ python setup.py develop
This option installs the library using links, which allows the user to edit the library without reinstalling it (useful for Prosper developers).
Running examples
================
You are now ready to run your first dictionary learning experiments on artificial
data.
Create some artificial training data by running `bars-create-data.py`:
.. code-block:: bash

    $ cd examples/barstests
    $ python bars-learning-and-inference.py param-bars-<...>.py
where ``<...>`` should be replaced appropriately so that the name corresponds
to one of the parameter files available in the directory. The script should
then initialize and run the algorithm which corresponds to the chosen
parameter file.
Results/Output
--------------
The results produced by the code are stored in a ``results.h5`` file
under ``./output/.../``. The file stores the model parameters (e.g., W, pi)
for each EM iteration performed. To read the results file, you can use the
``openFile`` function of the standard ``tables`` package in Python. The
results files can also easily be read by other packages such as MATLAB.
Running on a parallel architecture
==================================
The code uses MPI-based parallelization. If you have parallel resources
(i.e., a multi-core system or a compute cluster), the provided code can make
use of them by evenly distributing the training data
among multiple cores.
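As an illustration of even data distribution, the helper below is a hypothetical sketch in plain Python (prosper's internal partitioning code may differ); it splits datapoint indices across MPI ranks, spreading any remainder over the first ranks:

```python
def partition(n_datapoints, comm_size, rank):
    """Indices of the datapoints assigned to one MPI rank, spreading
    any remainder over the lowest-numbered ranks."""
    base, extra = divmod(n_datapoints, comm_size)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return list(range(start, stop))

# 10 datapoints over 4 ranks -> chunk sizes 3, 3, 2, 2
sizes = [len(partition(10, 4, r)) for r in range(4)]
print(sizes)  # [3, 3, 2, 2]
```

In an MPI program each rank would call ``partition`` with its own rank number and load only its slice of the training data.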
To run the same script as above, e.g.,
a) On a multi-core machine with 32 cores:
.. code-block:: bash

    $ mpirun -np 32 python bars-learning-and-inference.py param-bars-<...>.py
b) On a cluster:
.. code-block:: bash

    $ mpirun --hostfile machines python bars-learning-and-inference.py param-bars-<...>.py
where ``machines`` contains a list of suitable machines.
See your MPI documentation for details on how to start MPI-parallelized
programs.
References
==========
[1] M. Henniges, G. Puertas, J. Bornschein, J. Eggert, and J. Lücke (2010).
Binary Sparse Coding.
Proc. LVA/ICA 2010, LNCS 6365, 450-457.
[2] A.-S. Sheikh, J. A. Shelton, J. Lücke (2014).
A Truncated EM Approach for Spike-and-Slab Sparse Coding.
Journal of Machine Learning Research, 15:2653-2687.
[3] G. Puertas, J. Bornschein, and J. Lücke (2010).
The Maximal Causes of Natural Scenes are Edge Filters.
Advances in Neural Information Processing Systems 23, 1939-1947.
[4] J. Bornschein, M. Henniges, J. Lücke (2013).
Are V1 simple cells optimized for visual occlusions? A comparative study.
PLOS Computational Biology 9(6): e1003062.
[5] G. Exarchakis, M. Henniges, J. Eggert, and J. Lücke (2012).
Ternary Sparse Coding.
International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA), 204-212.
[6] J. Lücke and J. Eggert (2010).
Expectation Truncation and the Benefits of Preselection in Training Generative Models.
Journal of Machine Learning Research 11:2855-2900.
[7] G. Exarchakis, and J. Lücke (2017).
Discrete Sparse Coding.
Neural Computation, 29(11), 2979-3013.