# scrollbar-theme-directive
**JS Framework:** Angular
**Tested:** Angular 8, Ionic Framework
Directive to change the scrollbar style in a specific block.

# MMM-NearCompliments
This is an extension for the [MagicMirror](https://github.com/MichMich/MagicMirror). It uses the USER_PRESENCE notification of [MMM-PIR-Sensor](https://github.com/semox/MMM-NearCompliments.git) to activate distance measurement with an [HC-SR04 sensor](https://tutorials-raspberrypi.de/entfernung-messen-mit-ultraschallsensor-hc-sr04/) and displays the compliments module for a specified time whenever the measured distance falls below a specified threshold. Inspired by https://github.com/thobach/MMM-Gestures and https://github.com/mochman/MMM-Swipe.
## Installation
1. Navigate into your MagicMirror's `modules` folder and execute `git clone https://github.com/semox/MMM-NearCompliments.git`. A new folder will appear; navigate into it.
2. Execute `npm install` to install the node dependencies.
3. Configure config/config.js with documentation below.
4. Reboot your Pi.
## Using the module
To use this module, add it to the modules array in the `config/config.js` file:
````javascript
modules: [
{
module: 'MMM-NearCompliments',
// position: 'bottom_left',
config: {
// See 'Configuration options' for more information.
}
}
]
````
## Configuration Options
The following properties can be configured:
<table width="100%">
<!-- why, markdown... -->
<thead>
<tr>
<th>Option</th>
<th width="100%">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>echoPin</code></td>
<td>The echo pin your HC-SR04 is connected to.
</td>
</tr>
<tr>
<td><code>triggerPin</code></td>
<td>The trigger pin your HC-SR04 is connected to.
</td>
</tr>
<tr>
<td><code>distance</code></td>
<td>Distance threshold in cm; the compliments module is displayed when the measured distance falls below this value.
<br><b>Possible values:</b> <code>int</code>
<br><b>Default value:</b> <code>70</code>
</td>
</tr>
<tr>
<td><code>sensorTimeout</code></td>
<td>Sensor timeout to ignore bad measurements.
<br><b>Possible values:</b> <code>int</code>
<br><b>Default value:</b> <code>5000</code>
</td>
</tr>
<tr>
<td><code>animationSpeed</code></td>
<td>Animation speed in ms to display compliments module.
<br><b>Possible values:</b> <code>int</code>
<br><b>Default value:</b> <code>200</code>
</td>
</tr>
<tr>
<td><code>measuringInterval</code></td>
<td>Interval between two consecutive distance measurements, defined in ms.
<br><b>Possible values:</b> <code>int</code>
<br><b>Default value:</b> <code>500</code>
</td>
</tr>
<tr>
<td><code>delay</code></td>
<td>Delay defined in seconds to show compliments module.
<br><b>Possible values:</b> <code>int</code>
<br><b>Default value:</b> <code>30</code>
</td>
</tr>
<tr>
<td><code>autoStart</code></td>
<td>Auto start the distance measurement. Use only if MMM-PIR-Sensor is <i>not</i> available.
<br><b>Possible values:</b> <code>boolean</code>
<br><b>Default value:</b> <code>true</code>
</td>
</tr>
<tr>
<td><code>usePIR</code></td>
<td>Should the PIR sensor be used so that distance measurement is only activated when a USER_PRESENCE notification is sent by MMM-PIR-Sensor?
<br><b>Possible values:</b> <code>boolean</code>
<br><b>Default value:</b> <code>false</code>
</td>
</tr>
<tr>
<td><code>powerSavingDelay</code></td>
<td>Power saving delay in seconds, as defined in MMM-PIR-Sensor. It gets overwritten by the MMM-PIR-Sensor module and is only defined for backup purposes.
<br><b>Possible values:</b> <code>int</code>
<br><b>Default value:</b> <code>30</code>
</td>
</tr>
<tr>
<td><code>verbose</code></td>
<td>Verbose mode to give more information in log.
<br><b>Possible values:</b> <code>boolean</code>
<br><b>Default value:</b> <code>false</code>
</td>
</tr>
<tr>
<td><code>calibrate</code></td>
<td>Calibration mode to display measured distance on magic mirror. Note: You have to define <code>position: 'bottom_left'</code> in config.js.
<br><b>Possible values:</b> <code>boolean</code>
<br><b>Default value:</b> <code>false</code>
</td>
</tr>
</tbody>
</table>
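
Tying the options above back to `config/config.js`, a fuller entry might look like the following sketch. The pin numbers and thresholds are purely illustrative, not recommendations:

```javascript
modules: [
  {
    module: 'MMM-NearCompliments',
    position: 'bottom_left',   // only required when calibrate is true
    config: {
      echoPin: 24,             // illustrative BCM pin numbers
      triggerPin: 23,
      distance: 70,            // show compliments below 70 cm
      delay: 30,               // keep them visible for 30 s
      usePIR: true,            // only measure after USER_PRESENCE
      autoStart: false,        // let MMM-PIR-Sensor start measurement
      verbose: false
    }
  }
]
```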
The MIT License (MIT)
=====================
Copyright © 2019 Sebastian Kirschner
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the “Software”), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
**The software is provided “as is”, without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the software or the use or other dealings in the software.**
# Personal OAuth 2.0 authorization login, built on laravel/socialite.

---
name: St. Mary & St. Athanasius Coptic Church
address_1: 17431 Roscoe Blvd.
address_2: ''
city: Northridge
state: CA
zip: '91325'
phone: (818)342-4414
latitude: '34.2216778'
longitude: '-118.513091'
category: Food Pantry
website: ''
daycode1: Sun3rd
day1_open: 0900
day1_close: '1400'
daycode2: Tue3rd
day2_open: 0900
day2_close: '1400'
daycode3: ''
day3_open: ''
day3_close: ''
daycode4: ''
day4_open: ''
day4_close: ''
daycode5: ''
day5_open: ''
day5_close: ''
daycode6: ''
day6_open: ''
day6_close: ''
daycode7: ''
day7_open: ''
day7_close: ''
Year_Round (Y/N): ''
season_open: ''
season_close: ''
title: 'St. Mary & St. Athanasius Coptic Church, Food Oasis Los Angeles'
uri: /food-pantry/st-mary-st-athanasius-coptic-church/
formatted_day1_open: 9am
formatted_day1_close: 2pm
formatted_day2_open: 9am
formatted_day2_close: 2pm
---

0.12.0 (2021-04-01)
-------------------
* Added - Added support for Rails 6.1 ActiveRecord [PR #57](https://github.com/aws/aws-xray-sdk-ruby/pull/57)
* Fixed - Fixed grammar of log messages [PR #61](https://github.com/aws/aws-xray-sdk-ruby/pull/61)
0.11.5 (2020-06-10)
-------------------
* Added - Added support for IMDSv2 for EC2 metadata [PR #48](https://github.com/aws/aws-xray-sdk-ruby/pull/48)
0.11.4 (2020-03-31)
-------------------
* Bugfix - Fixed an issue where the wrong DB connection was returned [PR #35](https://github.com/aws/aws-xray-sdk-ruby/pull/35)
* Added - Whitelist SageMakerRuntime InvokeEndpoint operation [PR #36](https://github.com/aws/aws-xray-sdk-ruby/pull/36)
* Bugfix - Do not log Lambda runtime segments [PR #39](https://github.com/aws/aws-xray-sdk-ruby/pull/39)
* Bugfix - Fix typo of aws services white list [PR #41](https://github.com/aws/aws-xray-sdk-ruby/pull/41)
* Added - Add missing rds data service sdk [PR #42](https://github.com/aws/aws-xray-sdk-ruby/pull/42)
* Added - Updated service whitelist [PR #43](https://github.com/aws/aws-xray-sdk-ruby/pull/43)
* Bugfix - Use full qualified constant name [PR #45](https://github.com/aws/aws-xray-sdk-ruby/pull/45)
0.11.3 (2019-10-31)
-------------------
* Added - Lambda instrumentation support [PR #32](https://github.com/aws/aws-xray-sdk-ruby/pull/32)
0.11.2 (2019-07-18)
-------------------
* Added - AWS SNS service whitelist support [PR #29](https://github.com/aws/aws-xray-sdk-ruby/pull/29)
* Bugfix - Fixed typo for Firehose client in AWS service manifest file [Issue #26](https://github.com/aws/aws-xray-sdk-ruby/issues/26), [PR #27](https://github.com/aws/aws-xray-sdk-ruby/pull/27)
* Bugfix - Fixed custom daemon address configuration [PR #18](https://github.com/aws/aws-xray-sdk-ruby/pull/18)
* Bugfix - Fixed the trace header in the HTTP response for JS apps [PR #16](https://github.com/aws/aws-xray-sdk-ruby/pull/16)
* Fixed broken travis CI [PR #24](https://github.com/aws/aws-xray-sdk-ruby/pull/24)
0.11.1 (2018-10-09)
-------------------
* Bugfix - Fixed an issue where sampling rule poller is terminated on Puma clustered mode. [ISSUE#14](https://github.com/aws/aws-xray-sdk-ruby/issues/14)
0.11.0 (2018-09-25)
-------------------
* **Breaking**: The default sampler now launches background tasks to poll sampling rules from X-Ray service. See more details on how to create sampling rules: https://docs.aws.amazon.com/xray/latest/devguide/xray-console-sampling.html.
* **Breaking**: The sampling modules related to local sampling rules have been renamed and moved to `sampling/local` namespace.
* **Breaking**: The default json serializer is switched to `multi_json` for experimental JRuby support. Now you need to specify `oj` or `jrjackson` in Gemfile. [PR#5](https://github.com/aws/aws-xray-sdk-ruby/pull/5)
* **Breaking**: The SDK now requires `aws-sdk-xray` >= `1.4.0`.
* Feature: Environment variable `AWS_XRAY_DAEMON_ADDRESS` now takes an additional notation in `tcp:127.0.0.1:2000 udp:127.0.0.2:2001` to set TCP and UDP destination separately. By default it assumes a X-Ray daemon listening to both UDP and TCP traffic on `127.0.0.1:2000`.
* Bugfix - Call only once if `current_entity` is nil. [PR#9](https://github.com/aws/aws-xray-sdk-ruby/pull/9)
0.10.2 (2018-03-30)
-------------------
* Feature - Added SDK and Ruby runtime information to sampled segments.
0.10.1 (2018-02-21)
-------------------
* Bugfix - Fixed an issue where patched net/http returns an incorrect response object. [ISSUE#2](https://github.com/aws/aws-xray-sdk-ruby/issues/2)
* Bugfix - Fixed an issue where client ip is set incorrectly if the ip address is retrieved from header `X-Forwarded-For`. [ISSUE#3](https://github.com/aws/aws-xray-sdk-ruby/issues/3)
0.10.0 (2018-01-22)
-------------------
* Bugfix - Fixed an issue where subsegment captures could break even if `context_missing` is set to `LOG_ERROR`.
* Bugfix - Fixed gemspec to have the correct Ruby version requirement.

# GetStudyTypesQueryResult
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**study_types** | [**list[StudyTypeDefinition]**](StudyTypeDefinition.md) | | [optional] [readonly]
**sim_types** | [**list[SimTypeDefinition]**](SimTypeDefinition.md) | | [optional] [readonly]
**config_types** | [**list[ConfigTypeDefinition]**](ConfigTypeDefinition.md) | | [optional] [readonly]
**config_type_metadata** | [**list[ConfigTypeMetadata]**](ConfigTypeMetadata.md) | | [optional] [readonly]
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)

# Front-End
Use "npm install" from your terminal within the 'African-Marketplace' folder. Then "npm start" to pull up a local server to view any changes. Routing is handled in App.js. Private routes were used to manage the dashboard and components available only to logged-in users. The private routing was defined in src/Components/PrivateRoute.js such that a user requires a token, received on login, for access.
## Link to deployed app
https://front-end-african-market.vercel.app/
## Link to Product Vision Document
https://www.notion.so/devandapaige/Product-Vision-Document-36453fcff86b4961959e6c9a3d4cb449
## Router
- [ ] "/" ==> [Homepage]
- [ ] "/login" ==> [Login Page]
- [ ] "/signup" ==> [Signup Form]
- [ ] "/dashboard" ==> [Dashboard with prices]
- [ ] "/additem" ==> [Sell list with global and local items]

# JS-Basic
JS Review Basic

# MemoryScanner
Memory scanner project for Linux distributions with a GUI based on the Qt framework.
# Screenshot:

# Build:
### Build from source (Ubuntu/Mint):
* clone or download source code
* go to directory that contains source code
* open terminal and execute following commands
* first of all make sure that you have all required dependencies:
```
$ sudo apt-get install qt5-default
$ sudo apt-get install qt5-qmake
```
* build project in this way:
```
$ qmake
$ make
```
* or this (shadow build):
```
$ mkdir build
$ cd build
$ qmake ../ProcessMemoryScanner.pro
$ make
```
* after all just execute ```$ sudo ./ProcessMemoryScanner ``` in terminal
# Features:
* several supported types: *int, float, double, short, long*
* several search types: *equals to, less then, greater than, increased, decreased, changed, unchanged*
# To do:
* fix incomplete displayed process name in TaskWidget
* fix failure in large processes (out_of_range pointer issue)
* add the ability to search in certain memory regions (only in stack and heap, for example)
* bind the progress bar to the search event (it is currently only added to the GUI)
* add embedded hex editor to see all available memory regions
* make it cross-platform. Simply extend the abstract classes and reimplement the platform-specific methods

# This API was written following the SOLID principles
## Where did I get this idea?
https://www.youtube.com/watch?v=vAV4Vy4jfkc
## How to install and use
```bash
git clone https://github.com/pedroxca/-solid-api-typescript
```
Then:
```bash
npm install
```
or
```bash
yarn install
```
Almost there!
Now you should configure your `.env` file, which should be located in the `root` path of your application.
Then run:
```bash
npm run dev
```
or
```bash
yarn dev
```
## How to convert it to JavaScript
Since browsers and Node.js can't run TypeScript directly, we have to compile it to JavaScript so servers and browsers can execute it.
To do this run:
```bash
npx tsc
```
or
```bash
yarn tsc
```
Now, you should be good to go! 🎉🎉🎊

**Game**
*Commands in this category are typically to do with managing the game and its various states.*
**Notes:**
- Most of the commands here are Game Master only.
**Commands in this category:**

# Database
Parse Server lets you use [MongoDB](https://www.mongodb.org/) or [Postgres](https://www.postgresql.org/) as a database.
The preferred database is MongoDB, but Postgres is a great option if you're starting a new project and you expect to have a stable schema.
## MongoDB
If you have not used MongoDB before, we highly recommend familiarizing yourself with it first before proceeding.
The Mongo requirements for Parse Server are:
* MongoDB version 2.6.X, 3.0.X, or 3.2.X
* An SSL connection is recommended (but not required).
If this is your first time setting up a production MongoDB instance, we recommend using either [mLab](http://www.mLab.com) or [ObjectRocket](https://objectrocket.com/). These are database-as-a-service companies which provide fully managed MongoDB instances, and can help you scale up as needed.
When using MongoDB with your Parse app, you need to manage your indexes yourself. You will also need to size up your database as your data grows.
If you are planning to run MongoDB on your own infrastructure, we highly recommend using the [RocksDB Storage Engine](#using-mongodb--rocksdb).
In order to allow for better scaling of your data layer, it is possible to direct queries to a mongodb secondary for read operations. See: [Mongo Read Preference](#using-mongodb-read-preference).
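As an illustration, read preference is typically expressed through the standard MongoDB connection-string option; exact support depends on your parse-server version, so treat this as a sketch (host names and replica-set name are placeholders):

```javascript
// A MongoDB URI that directs reads to replica-set secondaries when possible.
const databaseURI =
  'mongodb://db1.example.net:27017,db2.example.net:27017/parse' +
  '?replicaSet=rs0&readPreference=secondaryPreferred';

console.log(databaseURI);
```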
## Postgres
The Postgres requirements for Parse Server are:
* Postgres version 9.5
* PostGIS extensions 2.3
The postgres database adapter will be automatically loaded when you pass a valid postgres URL, for example: `postgres://localhost:5432`.
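As a minimal sketch of that, the adapter choice comes down to the connection string you pass; the option names below follow the standard parse-server Node API, and the credentials are placeholders:

```javascript
// Passing a postgres:// URI makes parse-server load its Postgres adapter.
const serverOptions = {
  databaseURI: 'postgres://user:pass@localhost:5432/parse',
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  serverURL: 'http://localhost:1337/parse',
};

console.log(serverOptions.databaseURI.split(':')[0]); // → "postgres"
```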
### Caveats
* Join tables are resolved in memory; there are no performance improvements using Postgres over MongoDB for relations or pointers.
* Mutating the schema implies running ALTER TABLE, therefore we recommend you set up your schema when your tables are not full.
* Properly index your tables to maximize the performance.

# GGJ19
Project repository for our GGJ19 game.

## What is Federation
Simply, Federation is the process of connecting one or more Kubernetes clusters. The explanation from OperatorHub states,<br>
``
Kubernetes Federation is a tool to sync (aka "federate") a set of Kubernetes objects from a "source" into a set of other clusters. Common use-cases include federating Namespaces across all of your clusters or rolling out an application across several geographically distributed clusters. The Kubernetes Federation Operator runs all of the components under the hood to quickly get up and running with this powerful concept. Federation is a key part of any Hybrid Cloud capability.
``<br>
There are two ways to use Kubefed currently: Namespace and Cluster scoped.
For a breakdown of what this entails, see our [KubeFed Cluster-Scoped Vs Namespace-Scoped](docs/kubefed-scope.md) guide.
### Namespace Scoped labs
Namespace scoped Federation was initially the only supported mechanism for federating
multiple OpenShift/Kubernetes environments. Namespace scoped Federation uses OperatorHub,
which is included within OpenShift 4.1, to install the Federation Operator.
* A simple application federated [OpenShift Container Platform 4](./docs/ocp4-namespace-scoped.md)
* Federated MongoDB and *Pacman* [Federating an application with a Database](./federated-mongodb/README.md)
* An automated demo exists to demonstrate running Kubefed in 3.11 and 4.x clusters [Automated Demo](./automated-demo/README.md)
OpenShift 3.11 versions for the simple application scenario are also available:
* Using [Minishift](./docs/minishift.md)
* Using [CDK](./docs/cdk.md)
### Cluster Scoped
Cluster scoped KubeFed is now supported as of version 0.1.0 of the KubeFed
operator.
* A simple application federated [OpenShift Container Platform 4 (cluster scoped)](./docs/ocp4-cluster-scoped.md)

---
layout: post
title: Wireless Network
subtitle: ""
date: 2021-07-25
author: tianhaoo
header-img: img/post-bg/53.jpg
catalog: true
tags:
- Research
---
## python
```python
# Transmission rate as a function of bandwidth B, transmit power, and distance.
# N0, salpha and C_path_loss are assumed module-level constants (noise power
# spectral density in dBm/MHz, path-loss exponent, and path-loss offset in dB).
import math

def get_rate(B, power, distance):
    P_mw = math.pow(10, power / 10)  # mW
    N0_mw_MHz = math.pow(10, N0 / 10)  # mW/MHz
    path_loss = 10 * salpha * math.log(distance, 10) + C_path_loss  # dB
    channel_variance = math.pow(10, -0.1 * path_loss)
    sigma = math.sqrt((2 / (4 - math.pi)) * math.sqrt(channel_variance))
    h = sigma / (distance * distance)
    c = B * 1000 * math.log(1 + (P_mw * h) / (B * N0_mw_MHz), 2)  # bit/ms
    return c  # bit/ms
```

---
layout: default
title: 'SYM'
shortdef: 'symbol'
udver: '2'
---
### Definition
A symbol is a word-like entity that differs from ordinary words by form, function, or both.
We follow the general/universal definition of `SYM`.
See [u-pos/SYM]() for details.

# CorgEng.DependencyInjection
## Contents
[toc]
## Summary
Dependency injection provides a way of specifying that something wants something that performs a task without specifying the implementation of that task.
It allows us to override and easily change the functionality of existing systems without needing to rip them out and rewrite them entirely.
It also allows us to inject custom dependency overrides for unit tests, which allow us to force certain behaviours for test cases.
Dependencies are created as singleton instances, so changing a property on one will affect the properties of all instances.
Since the injected dependencies are static variables, to create instanced dependencies, a factory class is required. The factory class can be injected into the static dependency and then used to created instanced dependencies.
## Defining dependencies
```csharp=
interface ILogger
{
void WriteLine(string message, LogType logType);
}
```
```csharp=
[Dependancy(defaultDependency=true)]
class Logger : ILogger
{
...
}
```
## Using dependencies
```csharp=
[UsingDependency]
private static ILogger logger;
...
logger.WriteLine("test", LogType.Standard);
```
The using dependency tag will automatically fetch the required dependency. The dependency variable must be static to accept the singleton during load time.
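Building on the factory note above, instanced dependencies might be sketched like this. Every name except `[Dependancy]` and `[UsingDependency]` is illustrative, not part of CorgEng:

```csharp
// A singleton factory dependency that hands out per-instance loggers.
public interface IInstancedLogger
{
    void WriteLine(string message);
}

public class InstancedLogger : IInstancedLogger
{
    private readonly string context;
    public InstancedLogger(string context) => this.context = context;
    public void WriteLine(string message) => System.Console.WriteLine($"[{context}] {message}");
}

public interface ILoggerFactory
{
    IInstancedLogger CreateLogger(string context);
}

[Dependancy(defaultDependency=true)]
public class LoggerFactory : ILoggerFactory
{
    public IInstancedLogger CreateLogger(string context)
        => new InstancedLogger(context);
}

public class RenderSystem
{
    // The factory is injected once, as a singleton...
    [UsingDependency]
    private static ILoggerFactory loggerFactory;

    // ...while each RenderSystem instance gets its own logger from it.
    private IInstancedLogger logger = loggerFactory.CreateLogger(nameof(RenderSystem));
}
```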

---
title: Configure a Report Server for E-Mail Delivery (SSRS Configuration Manager) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- database-engine
ms.topic: conceptual
helpviewer_keywords:
- reports [Reporting Services], distributing
- report servers [Reporting Services], e-mail delivery
- remote SMTP service [Reporting Services]
- distributing reports [Reporting Services], e-mail
- CDO
- Collaboration Data Objects
- SMTP settings [Reporting Services]
- e-mail [Reporting Services]
- sending reports
- mail [Reporting Services]
- local SMTP service [Reporting Services]
ms.assetid: b838f970-d11a-4239-b164-8d11f4581d83
author: markingmyname
ms.author: maghan
manager: craigg
ms.openlocfilehash: 275503157f0bfbc004463f3bc567cfbef4a3fc41
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/02/2018
ms.locfileid: "48117060"
---
# <a name="configure-a-report-server-for-e-mail-delivery-ssrs-configuration-manager"></a>Configure a Report Server for E-Mail Delivery (SSRS Configuration Manager)
  [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] includes an e-mail delivery extension for distributing reports through e-mail. Depending on how you define the e-mail subscription, a delivery might consist of a notification, a link, an attachment, or an embedded report. The e-mail delivery extension works with your existing mail server technology. The mail server must be an SMTP server or forwarder. The report server connects to an SMTP server through the Collaboration Data Objects (CDO) library, cdosys.dll, which the operating system provides.
  The report server e-mail delivery extension is not configured by default. You must use the Reporting Services Configuration Manager to minimally configure the extension. To set advanced properties, you must edit the `RSReportServer.config` file. If you cannot configure the report server to use this extension, you can deliver reports to a shared folder instead. For more information, see [File Share Delivery in Reporting Services](../../reporting-services/subscriptions/file-share-delivery-in-reporting-services.md).
||
|-|
|[!INCLUDE[applies](../../includes/applies-md.md)] [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] Native Mode|
## <a name="bkmk_configuration_requirements"></a> Configuration Requirements
- Report server e-mail delivery is implemented on Collaboration Data Objects (CDO) and requires a local or remote Simple Mail Transfer Protocol (SMTP) server or an SMTP forwarder. SMTP is not supported on all Windows operating systems. If you are using the Itanium-based edition of Windows Server 2008, SMTP is not supported. For more information about the configuration options that are provided through CDO, see [the Configuration CoClass topic](http://go.microsoft.com/fwlink/?LinkId=98237) on MSDN.
- The Report Server service account must have permission to send e-mail on the SMTP server.
- The e-mail delivery extension uses UTF-8 encoding in e-mail attachments. You cannot modify the encoding; the HTML rendering extension supports only UTF-8.
> [!NOTE]
> The default e-mail delivery extension does not support digitally signing or encrypting outgoing mail messages.
## <a name="bkmk_configure_for_local_or_remote_SMTP"></a> Configure a Report Server for a Local or Remote SMTP Service
  You can use a local SMTP service or a remote SMTP server or forwarder to support e-mail delivery. If you have access to an existing remote SMTP server, consider using it. If no SMTP server is available, or if you later encounter report delivery errors that can be traced to connection failures on the computer, switch to a local SMTP service. Details about configuring a report server for a local or remote service are provided later in this topic.
## <a name="bkmk_setting_email_delivery"></a> Configuration Settings for E-Mail Delivery
  Before you can use report server e-mail delivery, you must set configuration values that provide information about which SMTP server to use.
  To configure a report server for e-mail delivery, follow this procedure:
- Use the Reporting Services Configuration Manager if you are specifying only an SMTP server and a user account that has permission to send e-mail. This is the minimum configuration required for the report server e-mail delivery extension. For more information, see [E-Mail Settings - Configuration Manager (SSRS Native Mode)](../../reporting-services/install-windows/e-mail-settings-reporting-services-native-mode-configuration-manager.md) and [E-Mail Delivery in Reporting Services](../../reporting-services/subscriptions/e-mail-delivery-in-reporting-services.md).
- (Optionally) Use a text editor to specify additional values in the RSreportserver.config file. This file contains all of the settings for report server e-mail distribution. If you use an additional SMTP server or restrict e-mail delivery to specific hosts, you must configure additional settings in this file. For more information about how to find and modify configuration files, see [Modify a Reporting Services Configuration File (RSreportserver.config)](../../reporting-services/report-server/modify-a-reporting-services-configuration-file-rsreportserver-config.md) in SQL Server Books Online.
> [!NOTE]
> The report server e-mail settings are based on CDO. For more details about specific settings, see the CDO product documentation.
## <a name="bkmk_example_config_file"></a> Example Report Server E-Mail Configuration
  The following example illustrates the settings in the RSreportserver.config file for a remote SMTP server. For setting descriptions and valid values, see [RSReportServer Configuration File](../../reporting-services/report-server/rsreportserver-config-configuration-file.md) in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Books Online or the CDO product documentation.
```
<RSEmailDPConfiguration>
<SMTPServer>mySMTPServer.Adventure-Works.com</SMTPServer>
<SMTPServerPort></SMTPServerPort>
<SMTPAccountName></SMTPAccountName>
<SMTPConnectionTimeout></SMTPConnectionTimeout>
<SMTPServerPickupDirectory></SMTPServerPickupDirectory>
<SMTPUseSSL></SMTPUseSSL>
<SendUsing>2</SendUsing>
<SMTPAuthenticate></SMTPAuthenticate>
<From>my-rs-email-account@Adventure-Works.com</From>
<EmbeddedRenderFormats>
<RenderingExtension>MHTML</RenderingExtension>
</EmbeddedRenderFormats>
<PrivilegedUserRenderFormats></PrivilegedUserRenderFormats>
<ExcludedRenderFormats>
<RenderingExtension>HTMLOWC</RenderingExtension>
<RenderingExtension>NULL</RenderingExtension>
</ExcludedRenderFormats>
<SendEmailToUserAlias>True</SendEmailToUserAlias>
<DefaultHostName></DefaultHostName>
<PermittedHosts>
<HostName>Adventure-Works.com</HostName>
<HostName>hotmail.com</HostName>
</PermittedHosts>
</RSEmailDPConfiguration>
```
## <a name="bkmk_setting_TO_field"></a> Configuration Settings for the To: Field of a Message
  User-defined subscriptions that are created based on permissions granted by the **Manage individual subscriptions** task contain a preset user name based on the domain user account. When the user creates the subscription, the recipient name in the **To:** field is automatically filled in with the domain user account of the person creating the subscription.
  If you are using an SMTP server or forwarder that uses e-mail accounts that differ from the domain user account, report delivery fails when the SMTP server tries to deliver the report to the user.
  To work around this issue, you can modify the configuration settings to allow users to enter a name in the **To:** field:
1. Open RSReportServer.config in a text editor.
2. Set `SendEmailToUserAlias` to `False`.
3. Set `DefaultHostName` to the Domain Name System (DNS) name or IP address of the SMTP server or forwarder.
4. Save the file.
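The two configuration edits in the steps above can also be sketched programmatically. The following is a minimal Python illustration using only the standard library; the host name is copied from the example configuration earlier in this topic and stands in for your own SMTP server or forwarder:

```python
# Sketch: apply the SendEmailToUserAlias / DefaultHostName edits
# to an RSEmailDPConfiguration fragment.
import xml.etree.ElementTree as ET

fragment = """<RSEmailDPConfiguration>
  <SendEmailToUserAlias>True</SendEmailToUserAlias>
  <DefaultHostName></DefaultHostName>
</RSEmailDPConfiguration>"""

root = ET.fromstring(fragment)
root.find("SendEmailToUserAlias").text = "False"                        # step 2
root.find("DefaultHostName").text = "mySMTPServer.Adventure-Works.com"  # step 3
print(ET.tostring(root, encoding="unicode"))
```

In practice you would load the real RSReportServer.config from disk and write it back, but the element names edited here are exactly the two settings named in the steps above.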
## <a name="bkmk_options_remote_SMTP"></a> Configuration Options for a Remote SMTP Service
  The connection between the report server and an SMTP server or forwarder is determined by the following configuration settings:
- `SendUsing` specifies a method for sending messages. You can choose between a network SMTP service and a local SMTP service pickup directory. To use a remote SMTP service, this value must be set to **2** in the RSReportServer.config file.
- `SMTPServer` specifies the SMTP server or forwarder. This value is required if you are using a remote SMTP server or forwarder.
- `From` sets the value that appears on the **From:** line of an e-mail message. This value is required if you are using a remote SMTP server or forwarder.
  Other values that are used for a remote SMTP service include the following (note that you do not need to specify them unless you want to override the default values):
- **SMTPServerPort** is configured for port 25.
- **SMTPAuthenticate** specifies how the report server connects to the remote SMTP server. The default value is 0 (no authentication), in which case the connection is made through anonymous access. Depending on your domain configuration, the report server and the SMTP server might need to be members of the same domain.
    To send e-mail to restricted distribution lists (for example, distribution lists that accept incoming messages only from authenticated accounts), set **SMTPAuthenticate** to **2**.
## <a name="bkmk_options_local_SMTP"></a> Configuration Options for a Local SMTP Service
  Configuring a local SMTP service is useful if you are testing or troubleshooting report server e-mail delivery. The local SMTP service is not enabled by default. For instructions on how to enable it, see [Configure a Report Server for E-Mail Delivery (SSRS Configuration Manager)](../../../2014/sql-server/install/configure-a-report-server-for-e-mail-delivery-ssrs-configuration-manager.md) and [E-Mail Settings - Configuration Manager (SSRS Native Mode)](../../reporting-services/install-windows/e-mail-settings-reporting-services-native-mode-configuration-manager.md).
  The connection between the report server and a local SMTP server or forwarder is determined by the following configuration settings:
- `SendUsing` is set to **1**.
- **SMTPServerPickupDirectory** is set to a folder on the local drive.
> [!NOTE]
> Make sure that you do not set `SMTPServer` if you are using a local SMTP server.
- `From` sets the value that appears on the **From:** line of an e-mail message. This value is required.
## <a name="bkmk_use_configuration_manager"></a> To Configure Report Server E-Mail by Using the Reporting Services Configuration Manager
1. Verify that the Report Server Windows service account has `Send As` permissions on the SMTP server.
2. Start the Reporting Services Configuration Manager and connect to the report server instance.
3. On the E-mail Settings page, enter the name of the SMTP server. This value can be an IP address, a UNC name of a computer on your corporate intranet, or a fully qualified domain name.
4. In **Sender Address**, enter the name of an account that has permission to send e-mail from the SMTP server.
5. Click **Apply**.
## <a name="bkmk_confiugre_remote_SMTP"></a> To Configure a Remote SMTP Service for the Report Server
1. Verify that the Report Server Windows service account has `Send As` permissions on the SMTP server.
2. Open the RSReportServer.config file in a text editor.
3. Verify that <`UrlRoot`> is set to the report server URL. This value is set when you configure the report server, and it should show the address. If it does not, enter the report server URL.
4. In the Delivery section, find <`ReportServerEmail`>.
5. In <`SMTPServer`>, enter the name of the SMTP server. This value can be an IP address, a UNC name of a computer on your corporate intranet, or a fully qualified domain name.
6. Verify that <`SendUsing`> is set to 2. If another value is specified, the report server is not configured to use a remote SMTP service.
7. In <`From`>, enter the name of an account that has permission to send e-mail from the SMTP server.
8. Save the file.
    The report server uses the new settings automatically; you do not need to restart the service. You can specify additional SMTP settings to configure how the server is used for report server e-mail delivery. For more information, see [Configure a Report Server for E-Mail Delivery](../../../2014/sql-server/install/configure-a-report-server-for-e-mail-delivery-ssrs-configuration-manager.md) and [RSReportServer Configuration File](../../reporting-services/report-server/rsreportserver-config-configuration-file.md) in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Books Online.
## <a name="bkmk_confiugre_local_SMTP"></a> To Configure a Local SMTP Service for the Report Server
1. In Control Panel, click **Add or Remove Programs**.
2. Click **Add/Remove Windows Components** to start the Windows Components Wizard.
3. Select **Application Server**, and then click **Details**.
4. Select **Internet Information Services (IIS)**, and then click **Details**.
5. Select the **SMTP Service** check box, and then click **OK**.
6. In the Windows Components Wizard, click **Next**. Click **Finish**.
7. Verify that the service is running in the **Services** console.
8. Open the **RsReportServer.config** file in a text editor.
9. Verify that `<UrlRoot>` is set to the report server URL. This value is set when you configure the report server, and it should show the address. If it does not, enter the report server URL.
10. In the Delivery section, find `<ReportServerEmail>`.
11. In `<SMTPServer>`, clear any values for this setting, but do not delete the tags.
12. Set `<SendUsing>` to 1. If another value is specified, the report server is not configured to use a local SMTP service.
13. Set `<SMTPServerPickupDirectory>` to a folder on the local drive.
14. Set `<From>` to an account that has permission to send e-mail from the SMTP server.
15. Save the file.
## <a name="see-also"></a>See Also
  [Reporting Services Configuration Manager (Native Mode)](../../../2014/sql-server/install/reporting-services-configuration-manager-native-mode.md)
| 73.242553 | 736 | 0.771264 | spa_Latn | 0.963303 |
4f36c441f130b341b68cfd74cb5d982769efc516 | 6,959 | md | Markdown | avi/r/avi_network.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 78 | 2021-01-15T14:10:30.000Z | 2022-02-14T09:17:40.000Z | avi/r/avi_network.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 5 | 2021-04-09T15:21:28.000Z | 2022-01-28T19:02:05.000Z | avi/r/avi_network.md | chrisjaimon2012/tfwriter | 1ea629ed386bbe6a8f21617a430dae19ba536a98 | [
"MIT"
] | 30 | 2021-01-17T13:16:57.000Z | 2022-03-21T12:52:08.000Z | # avi_network
[back](../avi.md)
### Index
- [Example Usage](#example-usage)
- [Variables](#variables)
- [Resource](#resource)
- [Outputs](#outputs)
### Terraform
```terraform
terraform {
required_providers {
avi = ">= 0.2.3"
}
}
```
[top](#index)
### Example Usage
```terraform
module "avi_network" {
source = "./modules/avi/r/avi_network"
# cloud_ref - (optional) is a type of string
cloud_ref = null
# dhcp_enabled - (optional) is a type of bool
dhcp_enabled = null
# exclude_discovered_subnets - (optional) is a type of bool
exclude_discovered_subnets = null
# ip6_autocfg_enabled - (optional) is a type of bool
ip6_autocfg_enabled = null
# name - (required) is a type of string
name = null
# synced_from_se - (optional) is a type of bool
synced_from_se = null
# tenant_ref - (optional) is a type of string
tenant_ref = null
# uuid - (optional) is a type of string
uuid = null
# vcenter_dvs - (optional) is a type of bool
vcenter_dvs = null
# vrf_context_ref - (optional) is a type of string
vrf_context_ref = null
configured_subnets = [{
prefix = [{
ip_addr = [{
addr = null
type = null
}]
mask = null
}]
static_ips = [{
addr = null
type = null
}]
static_ranges = [{
begin = [{
addr = null
type = null
}]
end = [{
addr = null
type = null
}]
}]
}]
labels = [{
key = null
value = null
}]
}
```
[top](#index)
### Variables
```terraform
variable "cloud_ref" {
description = "(optional)"
type = string
default = null
}
variable "dhcp_enabled" {
description = "(optional)"
type = bool
default = null
}
variable "exclude_discovered_subnets" {
description = "(optional)"
type = bool
default = null
}
variable "ip6_autocfg_enabled" {
description = "(optional)"
type = bool
default = null
}
variable "name" {
description = "(required)"
type = string
}
variable "synced_from_se" {
description = "(optional)"
type = bool
default = null
}
variable "tenant_ref" {
description = "(optional)"
type = string
default = null
}
variable "uuid" {
description = "(optional)"
type = string
default = null
}
variable "vcenter_dvs" {
description = "(optional)"
type = bool
default = null
}
variable "vrf_context_ref" {
description = "(optional)"
type = string
default = null
}
variable "configured_subnets" {
description = "nested block: NestingList, min items: 0, max items: 0"
type = set(object(
{
prefix = set(object(
{
ip_addr = set(object(
{
addr = string
type = string
}
))
mask = number
}
))
static_ips = list(object(
{
addr = string
type = string
}
))
static_ranges = list(object(
{
begin = set(object(
{
addr = string
type = string
}
))
end = set(object(
{
addr = string
type = string
}
))
}
))
}
))
default = []
}
variable "labels" {
description = "nested block: NestingList, min items: 0, max items: 0"
type = set(object(
{
key = string
value = string
}
))
default = []
}
```
[top](#index)
### Resource
```terraform
resource "avi_network" "this" {
# cloud_ref - (optional) is a type of string
cloud_ref = var.cloud_ref
# dhcp_enabled - (optional) is a type of bool
dhcp_enabled = var.dhcp_enabled
# exclude_discovered_subnets - (optional) is a type of bool
exclude_discovered_subnets = var.exclude_discovered_subnets
# ip6_autocfg_enabled - (optional) is a type of bool
ip6_autocfg_enabled = var.ip6_autocfg_enabled
# name - (required) is a type of string
name = var.name
# synced_from_se - (optional) is a type of bool
synced_from_se = var.synced_from_se
# tenant_ref - (optional) is a type of string
tenant_ref = var.tenant_ref
# uuid - (optional) is a type of string
uuid = var.uuid
# vcenter_dvs - (optional) is a type of bool
vcenter_dvs = var.vcenter_dvs
# vrf_context_ref - (optional) is a type of string
vrf_context_ref = var.vrf_context_ref
dynamic "configured_subnets" {
for_each = var.configured_subnets
content {
dynamic "prefix" {
for_each = configured_subnets.value.prefix
content {
# mask - (required) is a type of number
mask = prefix.value["mask"]
dynamic "ip_addr" {
for_each = prefix.value.ip_addr
content {
# addr - (required) is a type of string
addr = ip_addr.value["addr"]
# type - (required) is a type of string
type = ip_addr.value["type"]
}
}
}
}
dynamic "static_ips" {
for_each = configured_subnets.value.static_ips
content {
# addr - (required) is a type of string
addr = static_ips.value["addr"]
# type - (required) is a type of string
type = static_ips.value["type"]
}
}
dynamic "static_ranges" {
for_each = configured_subnets.value.static_ranges
content {
dynamic "begin" {
for_each = static_ranges.value.begin
content {
# addr - (required) is a type of string
addr = begin.value["addr"]
# type - (required) is a type of string
type = begin.value["type"]
}
}
dynamic "end" {
for_each = static_ranges.value.end
content {
# addr - (required) is a type of string
addr = end.value["addr"]
# type - (required) is a type of string
type = end.value["type"]
}
}
}
}
}
}
dynamic "labels" {
for_each = var.labels
content {
# key - (required) is a type of string
key = labels.value["key"]
# value - (optional) is a type of string
value = labels.value["value"]
}
}
}
```
[top](#index)
### Outputs
```terraform
output "cloud_ref" {
description = "returns a string"
value = avi_network.this.cloud_ref
}
output "id" {
description = "returns a string"
value = avi_network.this.id
}
output "tenant_ref" {
description = "returns a string"
value = avi_network.this.tenant_ref
}
output "uuid" {
description = "returns a string"
value = avi_network.this.uuid
}
output "vrf_context_ref" {
description = "returns a string"
value = avi_network.this.vrf_context_ref
}
output "this" {
value = avi_network.this
}
```
[top](#index) | 20.588757 | 71 | 0.559851 | eng_Latn | 0.866169 |
4f36f6acbc9b656f77c662105a6b96bec89a3de9 | 4,774 | md | Markdown | _posts/2010-08-19-69.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | 1 | 2020-11-20T20:39:54.000Z | 2020-11-20T20:39:54.000Z | _posts/2010-08-19-69.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | null | null | null | _posts/2010-08-19-69.md | TkTech/skins.tkte.ch | 458838013820531bc47d899f920916ef8c37542d | [
"MIT"
] | null | null | null | ---
title: >
Jawa
layout: post
permalink: /view/69
votes: 15
preview: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAACxklEQVRIie1WMW/TQBh9ac5xauO4hIamIouFIoZInREDC4gR9jKwVwgQUheqbhQJIVWoQ/cObIAEIxv9BahSxdCCJWQ1CWmrXNxcG+xQhksv57MbDBJi6VOG7+57n9/dl/PzZY6Pf+IEC3eqSMLSu63EeQVpykk8/ez9FxE/uX05jVJ6jI0Qiw/TgAV9JRilB+CY3lWCP4KhZVnQZ0Hf0LLxbEI/M/YrJfgLydNSEb3EDqSH22IApiZ0MdNs9+Shqmdo2eezV+hBKMvb5xJ6kAinZMiF8kyynttiaKlPabZ7KfX4/kYj8/ThTTHY3PpuW3nqH9lWHgAPqH9Uq14UnIWXH+T6pUe35KFt6ZtbrVq1BIAH1I8slzDXlYamEANgW3nLzFlmLsqJwGt0eFA83Nsfv+A1+LpbJxuItKt4uDfo5wYlACwTAGrVEvu6DeBTW7fM3I3ZxfWVOcFRUDzcE7GTpQDcvq3MiAMBgBiOA+AqwFwXCIPJ6W871GvrlXIB6HiNzvrKXJQTAU9xUNcFUMRwBTTGGfRzgxKAzNihtlt3Q5Pn7j1+wXcmcxS9+ArEmz4MJM6gfsYebrlSLgDwGp215XlHer7MGQHuLzjlrVf9zA1NcQQq5cL1B6tpNOKSp1lMRC+YnOZHP6h7DukCWFueF+1NA8Wh4oY1Jtu51+hQ/8ghXb46bbcOoFIujLB8JdVs97hZ8x/3CplDIu0OEdQ97aQVLOg7WjdAYcRfIqdY0HdKhuwy3M9kDpGf4pAuMMxxnrFbT1SSJXnQzBTs8byfI5apc9PYB7wGnbFDwSGyqW8zHcD9xdW3r98wYOfzR36ArR8dRI1fQOxmakL3Wc/v9gD43UEQ52Tk+8vstUtTE/r0eZ0ehG6LiTjl/SVNecL3/Z/iTO9M70zv/+mp32vu6Nzu5DglfluecAWSL5zpL58py38BsiWGyXXS28sAAAAASUVORK5CYII="
---
<dl class="side-by-side">
<dt>Preview</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAACxklEQVRIie1WMW/TQBh9ac5xauO4hIamIouFIoZInREDC4gR9jKwVwgQUheqbhQJIVWoQ/cObIAEIxv9BahSxdCCJWQ1CWmrXNxcG+xQhksv57MbDBJi6VOG7+57n9/dl/PzZY6Pf+IEC3eqSMLSu63EeQVpykk8/ez9FxE/uX05jVJ6jI0Qiw/TgAV9JRilB+CY3lWCP4KhZVnQZ0Hf0LLxbEI/M/YrJfgLydNSEb3EDqSH22IApiZ0MdNs9+Shqmdo2eezV+hBKMvb5xJ6kAinZMiF8kyynttiaKlPabZ7KfX4/kYj8/ThTTHY3PpuW3nqH9lWHgAPqH9Uq14UnIWXH+T6pUe35KFt6ZtbrVq1BIAH1I8slzDXlYamEANgW3nLzFlmLsqJwGt0eFA83Nsfv+A1+LpbJxuItKt4uDfo5wYlACwTAGrVEvu6DeBTW7fM3I3ZxfWVOcFRUDzcE7GTpQDcvq3MiAMBgBiOA+AqwFwXCIPJ6W871GvrlXIB6HiNzvrKXJQTAU9xUNcFUMRwBTTGGfRzgxKAzNihtlt3Q5Pn7j1+wXcmcxS9+ArEmz4MJM6gfsYebrlSLgDwGp215XlHer7MGQHuLzjlrVf9zA1NcQQq5cL1B6tpNOKSp1lMRC+YnOZHP6h7DukCWFueF+1NA8Wh4oY1Jtu51+hQ/8ghXb46bbcOoFIujLB8JdVs97hZ8x/3CplDIu0OEdQ97aQVLOg7WjdAYcRfIqdY0HdKhuwy3M9kDpGf4pAuMMxxnrFbT1SSJXnQzBTs8byfI5apc9PYB7wGnbFDwSGyqW8zHcD9xdW3r98wYOfzR36ArR8dRI1fQOxmakL3Wc/v9gD43UEQ52Tk+8vstUtTE/r0eZ0ehG6LiTjl/SVNecL3/Z/iTO9M70zv/+mp32vu6Nzu5DglfluecAWSL5zpL58py38BsiWGyXXS28sAAAAASUVORK5CYII=">
</dd>
<dt>Original</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAAAgCAMAAACVQ462AAAABGdBTUEAALGPC/xhBQAAAwBQTFRFAAAAWTkZTD8nWUouaFc2cEkgek4jdWI8ZmhmiVonhG9Ek3xL/zr7//FbuLy5////AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAASioLfAAAAQB0Uk5T////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////AFP3ByUAAAAYdEVYdFNvZnR3YXJlAFBhaW50Lk5FVCB2My4zNqnn4iUAAAHpSURBVEhLtVQJcoMwEDO+CJC0/38tlbS2OYIbpjPdAczhlbWyFjflNDyfzzXXmHgz5bUTQ4n62blvpBMgMS3xROTpKp8vB/tQhnV1L+VvAMgfBiBcAEyXAMs8xuBVwiT2yAfCfQAjcA9gx30r4R2gX8JHANO/L+IlQBXrz9vYEjs3PT80HyhP+jtE24sGdwfAto/5QijbSWfhIIAM0MLeHRkI4OVw7AuBnTS5wGxqpXcAzKkMmMWzxEpjH+uA1zYIJ6aYkxqAFs550AXGTvsFBcWNpt/5EZy175WBwQEPyQgIwiajShQqJclVZFKeo/UxX6uKPR6YSQy+re3bGx0piSyilhpL+BJ7s5HfYRcE4P0JgHAxeuPrGwBKYfH/AHAqIcQQcYRbJQwQdn/Oy8JcAcRlmecF14jxFM2JZwDqF5Af4zhWMTlm
AFRz8bELgD0PISy4jl8jHqaJ6dYu0BNhd10A2WgcwxhCtZgSSgX5UW66AOapgPXNVbJX640bDIwgKIRacbFpe/xQgj6DgFHoR78Erj8H6EARrAJeaiF17Iuo9UWDHBI7Gm1rXSYkG38REfXbytIBCGprRflVcNwA0Mxs6HrCRFifM1OO3uPR01Zl9zdNdgDHTjf/fz0etRHKqJ+QyrGbCvADIBrDdaQghvUAAAAASUVORK5CYII=">
</dd>
<dt>Title</dt>
<dd>Jawa</dd>
<dt>Description</dt>
<dd>Jawa from Star Wars</dd>
<dt>Added By</dt>
<dd>unknown</dd>
<dt>Added On</dt>
<dd>2010-08-19</dd>
<dt>Votes</dt>
<dd>15</dd>
</dl>
| 159.133333 | 2,250 | 0.868664 | kor_Hang | 0.915261 |
4f370861003a4befe54e5d51a9719c6e1c0b36e0 | 860 | md | Markdown | website/docs/apis/config/tools/sass.md | zys-contrib/modern.js | db406ee8bf6a24fb12ef9e47920789c9d3db5c71 | [
"MIT"
] | null | null | null | website/docs/apis/config/tools/sass.md | zys-contrib/modern.js | db406ee8bf6a24fb12ef9e47920789c9d3db5c71 | [
"MIT"
] | null | null | null | website/docs/apis/config/tools/sass.md | zys-contrib/modern.js | db406ee8bf6a24fb12ef9e47920789c9d3db5c71 | [
"MIT"
] | null | null | null | ---
sidebar_label: sass
---
# tools.sass
:::info Applicable project types
* MWA
* Module
:::
:::caution Note
For MWA projects, make sure Sass support has been enabled via [new](/docs/apis/commands/mwa/new).
For module projects, make sure Sass support has been enabled via [new](/docs/apis/commands/module/new).
:::
* Type: `Object | Function`
* Default: `{}`
Corresponds to the [sass-loader](https://github.com/webpack-contrib/sass-loader) options. When the value is an `Object`, it is merged with the default configuration via `Object.assign`.
```js title="modern.config.js"
export default defineConfig({
tools: {
sass: {}
}
});
```
When the value is a `Function`, the internal default configuration is passed in as the first argument; you can either mutate the configuration object directly without returning anything, or return an object to be used as the final result. The second argument is a set of utility functions for modifying the `sass-loader` configuration.
For example, to configure [additionalData](https://github.com/webpack-contrib/sass-loader#additionaldata):
```js title="modern.config.js"
export default defineConfig({
tools: {
sass: opts => {
opts.additionalData = async (content, loaderContext) => {
// ...
};
},
},
});
```
| 18.695652 | 114 | 0.65 | yue_Hant | 0.807868 |
4f37d966938742bcdb99d41aff20126092b0f30b | 618 | md | Markdown | docs/Model/SharingDefinitionSenderDTO.md | agodoodev71/test-arx | 1218a1a6e4224a9c3f3c6b9b890e76e17457290b | [
"MTLL"
] | null | null | null | docs/Model/SharingDefinitionSenderDTO.md | agodoodev71/test-arx | 1218a1a6e4224a9c3f3c6b9b890e76e17457290b | [
"MTLL"
] | null | null | null | docs/Model/SharingDefinitionSenderDTO.md | agodoodev71/test-arx | 1218a1a6e4224a9c3f3c6b9b890e76e17457290b | [
"MTLL"
] | null | null | null | # SharingDefinitionSenderDTO
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**sharing_definition_sender_id** | **string** | Id. | [optional]
**sharing_definition_id** | **string** | Sharing Id | [optional]
**email** | **string** | Sender Email. | [optional]
**alias** | **string** | External Id. | [optional]
**address_book_id** | **int** | Id of the addressbook element. | [optional]
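For illustration only, the table above maps naturally onto a small data class. Python is used here for brevity; the real model class is whatever your generated SDK defines, so treat the names below as a hypothetical mirror of the documented fields:

```python
# Hypothetical mirror of SharingDefinitionSenderDTO; every field is
# optional, matching the [optional] notes in the table above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharingDefinitionSenderDTO:
    sharing_definition_sender_id: Optional[str] = None
    sharing_definition_id: Optional[str] = None
    email: Optional[str] = None            # sender e-mail
    alias: Optional[str] = None            # external id
    address_book_id: Optional[int] = None  # id of the address-book element

sender = SharingDefinitionSenderDTO(email="sender@example.com", address_book_id=7)
print(sender.email, sender.address_book_id)  # prints "sender@example.com 7"
```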
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
| 41.2 | 161 | 0.601942 | yue_Hant | 0.378781 |
4f3912906d933c704d45f675074c08c93af6b55f | 2,841 | md | Markdown | README.md | danieldkm/exemplos | 4d0a22a3ecd7e0be4b7e0cfb20379706082efef0 | [
"MIT"
] | null | null | null | README.md | danieldkm/exemplos | 4d0a22a3ecd7e0be4b7e0cfb20379706082efef0 | [
"MIT"
] | null | null | null | README.md | danieldkm/exemplos | 4d0a22a3ecd7e0be4b7e0cfb20379706082efef0 | [
"MIT"
] | null | null | null | <!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
[![LinkedIn][linkedin-shield]][linkedin-url]
<!-- PROJECT LOGO -->
<br />
<p align="center">
<a href="https://www.unifil.br/">
<img src="banner.png" alt="Banner">
</a>
<h3 align="center">Exemplos</h3>
<p align="center">
<br />
</p>
</p>
<!-- TABLE OF CONTENTS -->
## Table of Contents
* [About](#about)
* [Getting Started](#getting-started)
  * [Prerequisites](#prerequisites)
  * [Installation](#installation)
* [License](#license)
* [Contact](#contact)
<!-- ABOUT THE PROJECT -->
## About
Generic projects, each representing an example.
<!-- GETTING STARTED -->
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE` for more information.
<!-- CONTACT -->
## Contact
Daniel K. Morita - [@DneelDKM](https://twitter.com/DneelKM) - danielmorita@hotmail.com
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/danieldkm/exemplos.svg?style=flat-square
[contributors-url]: https://github.com/danieldkm/exemplos/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/danieldkm/exemplos.svg?style=flat-square
[forks-url]: https://github.com/danieldkm/exemplos/network/members
[stars-shield]: https://img.shields.io/github/stars/danieldkm/exemplos.svg?style=flat-square
[stars-url]: https://github.com/danieldkm/exemplos/stargazers
[issues-shield]: https://img.shields.io/github/issues/danieldkm/exemplos.svg?style=flat-square
[issues-url]: https://github.com/danieldkm/exemplos/issues
[license-shield]: https://img.shields.io/github/license/danieldkm/exemplos.svg?style=flat-square
[license-url]: https://github.com/danieldkm/exemplos/blob/master/LICENSE
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=flat-square&logo=linkedin&colorB=555
[linkedin-url]: https://linkedin.com/in/daniel-k-morita-7b928831
[product-screenshot]: images/screenshot.png
# Setting up MessageMedia SMS for Auth0
[](http://hits.dwyl.io/messagemedia/auth0-integration)
This guide assumes that you've already got your MessageMedia API credentials. If you don't, you can sign up for them for free over [here](https://developers.messagemedia.com/register).
## Auth0 Configuration
Make sure the SMS Passwordless connection is enabled by going to [Connections → Passwordless in your Auth0 account](https://manage.auth0.com/#/connections/passwordless) and switch it on. You can configure everything through the UI except for choosing a different provider. You can enter dummy values for Twilio SID and AuthToken.
To change the provider to MessageMedia, you need to use the Management API. To get an access token, go to the [API Explorer tab of your Auth0 Management API](https://manage.auth0.com/#/apis/management/explorer). You can use the API Explorer directly, curl or Postman.

If you use the API Explorer, you need to add the access token and tenant URL to it:

Using your access token, make a GET request to `https://{YOURTENANT}.auth0.com/api/v2/connections` to find out the ID for your SMS connection. You can make that request [in the API Explorer](https://auth0.com/docs/api/management/v2#!/Connections/get_connections) by clicking **Try**.

Go through the returned JSON to find the connection named `sms` and take note of the ID to use in the next step. Also, copy the `options` object.

Next, make a PATCH request to `https://{YOURTENANT}.auth0.com/api/v2/connections/{CONNECTION_ID}`, using the connection ID you found. You can do that [in the API Explorer](https://auth0.com/docs/api/management/v2#!/Connections/patch_connections_by_id). The body of the request must be the existing `options` object with these edits: remove `twilio_sid` and `twilio_token` and instead add the following entries.
````json
"provider": "sms_gateway",
"gateway_url": "https://auth0-api.messagemedia.com/messages",
"gateway_authentication": {
"method": "bearer",
"subject": "{YOUR_API_KEY}",
"audience": "{YOUR_API_SECRET}",
"secret": "MessageMedia"
},
"from": "{SENDER_PHONE_NUMBER}",
````
Enter your MessageMedia API key and secret as `subject` and `audience` in the `gateway_authentication` section. The `secret` is hardcoded to the string `MessageMedia`. `from` is the sender's phone number.
Send the request (**Try** in the API explorer) to store the configuration.

Once that's done, you can change to the _Try_ tab in the Auth0 UI for the SMS Connection, enter your phone number and hit **Try**. An authentication code message should arrive on your phone.
---
title: Deploy Azure SQL Edge using the Azure portal
description: Learn how to deploy Azure SQL Edge using the Azure portal
keywords: deploy SQL Edge
services: sql-edge
ms.service: sql-edge
ms.topic: conceptual
author: SQLSourabh
ms.author: sourabha
ms.reviewer: sstein
ms.date: 09/22/2020
---
# Deploy Azure SQL Edge
Azure SQL Edge is a relational database engine optimized for IoT and Azure IoT Edge deployments. It provides capabilities to create a high-performance data storage and processing layer for IoT applications and solutions. This quickstart shows you how to get started with creating an Azure SQL Edge module through Azure IoT Edge using the Azure portal.
## Before you begin
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
* Sign in to the [Azure portal](https://portal.azure.com/).
* Create an [Azure IoT Hub](../iot-hub/iot-hub-create-through-portal.md).
* Register an [IoT Edge Device from the Azure portal](../iot-edge/how-to-register-device-portal.md).
* Prepare the IoT Edge device to [deploy IoT Edge module from the Azure portal](../iot-edge/how-to-deploy-modules-portal.md).
> [!NOTE]
> To deploy an Azure Linux VM as an IoT Edge device, see this [quickstart guide](../iot-edge/quickstart-linux.md).
## Deploy SQL Edge Module from Azure Marketplace
Azure Marketplace is an online applications and services marketplace where you can browse through a wide range of enterprise applications and solutions that are certified and optimized to run on Azure, including [IoT Edge modules](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1&subcategories=iot-edge-modules). Azure SQL Edge can be deployed to an edge device through the marketplace.
1. Find the Azure SQL Edge module on the Azure Marketplace.<br><br>

2. Pick the software plan that best matches your requirements and click **Create**. <br><br>

3. On the Target Devices for IoT Edge Module page, specify the following details and then click **Create**
|**Field** |**Description** |
|---------|---------|
|Subscription | The Azure subscription under which the IoT Hub was created |
|IoT Hub | Name of the IoT Hub where the IoT Edge device is registered and then select "Deploy to a device" option|
|IoT Edge Device Name | Name of the IoT Edge device where SQL Edge would be deployed |
4. On the **Set Modules on device:** page, click on the Azure SQL Edge module under **IoT Edge Modules**. The default module name is set to *AzureSQLEdge*.
5. On the *Module Settings* section of the **Update IoT Edge Module** blade, specify the desired values for the *IoT Edge Module Name*, *Restart Policy* and *Desired Status*.
> [!IMPORTANT]
> Do not change or update the **Image URI** settings on the module.
6. On the *Environment Variables* section of the **Update IoT Edge Module** blade, specify the desired values for the environment variables. For a complete list of Azure SQL Edge environment variables refer [Configure using environment variables](configure.md#configure-by-using-environment-variables). The following default environment variables are defined for the module.
|**Parameter** |**Description**|
|---------|---------|
| MSSQL_SA_PASSWORD | Change the default value to specify a strong password for the SQL Edge admin account. |
| MSSQL_LCID | Change the default value to set the desired language ID to use for SQL Edge. For example, 1036 is French. |
| MSSQL_COLLATION | Change the default value to set the default collation for SQL Edge. This setting overrides the default mapping of language ID (LCID) to collation. |
> [!IMPORTANT]
> Do not change or update the **ACCEPT_EULA** environment variable for the module.
7. On the *Container Create Options* section of the **Update IoT Edge Module** blade, update the following options as per requirement.
- **Host Port :** Map the specified host port to port 1433 (default SQL port) in the container.
- **Binds** and **Mounts :** If you need to deploy more than one SQL Edge module, ensure that you update the mounts option to create a new source & target pair for the persistent volume. For more information on mounts and volume, refer [Use volumes](https://docs.docker.com/storage/volumes/) on docker documentation.
```json
{
"HostConfig": {
"CapAdd": [
"SYS_PTRACE"
],
"Binds": [
"sqlvolume:/sqlvolume"
],
"PortBindings": {
"1433/tcp": [
{
"HostPort": "1433"
}
]
},
"Mounts": [
{
"Type": "volume",
"Source": "sqlvolume",
"Target": "/var/opt/mssql"
}
]
},
"Env": [
"MSSQL_AGENT_ENABLED=TRUE",
"ClientTransportType=AMQP_TCP_Only",
"PlanId=asde-developer-on-iot-edge"
]
}
```
> [!IMPORTANT]
> Do not change the `PlanId` enviroment variable defined in the create config setting. If this value is changed, the Azure SQL Edge container will fail to start.
8. On the **Update IoT Edge Module** pane, click **Update**.
9. On the **Set modules on device** page click **Next: Routes >** if you need to define routes for your deployment. Otherwise click **Review + Create**. For more information on configuring routes, see [Deploy modules and establish routes in IoT Edge](../iot-edge/module-composition.md).
10. On the **Set modules on device** page, click **Create**.
## Connect to Azure SQL Edge
The following steps use the Azure SQL Edge command-line tool, **sqlcmd**, inside the container to connect to Azure SQL Edge.
> [!NOTE]
> SQL Command line tools (sqlcmd) are not available inside the ARM64 version of Azure SQL Edge containers.
1. Use the `docker exec -it` command to start an interactive bash shell inside your running container. In the following example `azuresqledge` is name specified by the `Name` parameter of your IoT Edge Module.
```bash
sudo docker exec -it azuresqledge "bash"
```
2. Once inside the container, connect locally with sqlcmd. Sqlcmd is not in the path by default, so you have to specify the full path.
```bash
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "<YourNewStrong@Passw0rd>"
```
> [!TIP]
> You can omit the password on the command-line to be prompted to enter it.
3. If successful, you should get to a **sqlcmd** command prompt: `1>`.
## Create and query data
The following sections walk you through using **sqlcmd** and Transact-SQL to create a new database, add data, and run a query.
### Create a new database
The following steps create a new database named `TestDB`.
1. From the **sqlcmd** command prompt, paste the following Transact-SQL command to create a test database:
```sql
CREATE DATABASE TestDB
Go
```
2. On the next line, write a query to return the name of all of the databases on your server:
```sql
SELECT Name from sys.Databases
Go
```
### Insert data
Next create a new table, `Inventory`, and insert two new rows.
1. From the **sqlcmd** command prompt, switch context to the new `TestDB` database:
```sql
USE TestDB
```
2. Create new table named `Inventory`:
```sql
CREATE TABLE Inventory (id INT, name NVARCHAR(50), quantity INT)
```
3. Insert data into the new table:
```sql
INSERT INTO Inventory VALUES (1, 'banana', 150); INSERT INTO Inventory VALUES (2, 'orange', 154);
```
4. Type `GO` to execute the previous commands:
```sql
GO
```
### Select data
Now, run a query to return data from the `Inventory` table.
1. From the **sqlcmd** command prompt, enter a query that returns rows from the `Inventory` table where the quantity is greater than 152:
```sql
SELECT * FROM Inventory WHERE quantity > 152;
```
2. Execute the command:
```sql
GO
```
### Exit the sqlcmd command prompt
1. To end your **sqlcmd** session, type `QUIT`:
```sql
QUIT
```
2. To exit the interactive command-prompt in your container, type `exit`. Your container continues to run after you exit the interactive bash shell.
## Connect from outside the container
You can connect and run SQL queries against your Azure SQL Edge instance from any external Linux, Windows, or macOS tool that supports SQL connections. For more information on connecting to a SQL Edge container from outside, refer [Connect and Query Azure SQL Edge](https://docs.microsoft.com/azure/azure-sql-edge/connect).
In this quickstart, you deployed a SQL Edge Module on an IoT Edge device.
## Next Steps
- [Machine Learning and Artificial Intelligence with ONNX in SQL Edge](onnx-overview.md)
- [Building an end to end IoT Solution with SQL Edge using IoT Edge](tutorial-deploy-azure-resources.md)
- [Data Streaming in Azure SQL Edge](stream-data.md)
- [Troubleshoot deployment errors](troubleshoot.md)
@def title = "Understanding Computation"
@def tags = ["computer science", "guide"]
# Understanding Computation
\tableofcontents
With the development of the Internet of Things, big data, cloud computing, and 3D visualization, and driven by the dual goals of carbon peaking and carbon neutrality, the digitalization and intelligentization of traditional industry have become a key direction of the fourth industrial revolution, and a main thrust of Made in China 2025.
The future is an era of ubiquitous connectivity, digital twins, and device-edge-cloud collaboration. Cloud computing, big data, AI, IoT, and mobile technology are unavoidable topics for smart energy. Moving from digitalization to intelligentization is a process, and intelligentization is the core of the whole framework. Cloud, big data, IoT, and mobile are all IT technologies. Only with some understanding of computers, combined with domain expertise, can we better shape an intelligent future.
So how should an engineering student outside IT get started with computers?
Computation is mechanized information processing. For a first feel, read Wang Yin's [解谜计算机科学](http://www.yinwang.org/blog-cn/2018/04/13/computer-science) ("Demystifying Computer Science"), or yuziwen's [PL教程 第一章 人和机器](https://yuziwen.github.io/pl-tutorial-1.html) ("PL Tutorial, Chapter 1: Humans and Machines").
In what follows, we introduce computing from several angles: functions, recursion, model processing, and abstraction.
## Starting with Functions
Many people say the best shortcut into computing is programming. Many start with C and immediately sink into the formatted input/output of scanf/printf. Here we take a highly condensed view instead.
### Programming Is Writing Functions
Yes, programming is writing functions. The first C program most beginners meet looks roughly like this:
```c
#include <stdio.h>
int main() {
printf("Hello, World!\n");
return 0;
}
```
Some departments with lots of compute-intensive work start from Fortran instead; its hello world looks like this:
```fortran
program main
write(*,*) "Hello, World!"
end program
```
Indeed, these too are functions: the main function.
- **A function may or may not return a value**
In the C code above, `return 0` returns the integer 0. Early C also tolerated `void main`, but it is not standard syntax:
```c
#include <stdio.h>
void main() {
printf("Hello, World!\n");
}
```
In fact, the operating system uses a program's return value to judge whether it ran correctly. Under bash, `echo $?` shows the return value of the last command; on Windows, cmd.exe uses `echo %errorlevel%` and PowerShell uses `$LASTEXITCODE`.
By convention, a return value of 0 means the program finished correctly, which is why a C main function usually has return type `int` and ends with `return 0;`.
- **A programming function differs from a mathematical function**
Although both use the word *function*, the meanings are different.
A mathematical function is just a definition: $y=\sqrt{x}$ says there is a y whose square is x.
A programming function is a sequence of operations: *find* the y whose square equals the given x. Rewrite $y=\sqrt{x}$ as $y^2=x$, i.e. $y=x/y$; for a given x (say 2), start from y=1 and iterate, and y will oscillate between 1 and 2. So average the old and new values, arithmetically as $y_{new}=\frac{1}{2}(y_{old}+x/y_{old})$ or with a weight as $y_{new}=\alpha y_{old}+(1-\alpha)(x/y_{old})$, and iterate until you reach the y you want.
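That iteration can be written down directly. A minimal Python sketch (the name `my_sqrt` and the tolerance are our own choices, not from any library):

```python
def my_sqrt(x, tol=1e-12):
    # iterate y_new = (y_old + x / y_old) / 2 until y*y is close enough to x
    y = 1.0
    while abs(y * y - x) > tol:
        y = 0.5 * (y + x / y)
    return y
```

Starting from y = 1, each step averages y with x/y, exactly the arithmetic-mean update above.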
- **Functions can be called, and calls can be nested**
The point of writing a function is to package a piece of functionality for reuse, that is, for being called. In the code below, `main` calls `fun1`, and `fun1` in turn calls `fun2`:
```c
#include <stdio.h>
int fun2()
{
printf("I am in fun2");
return 0;
}
int fun1()
{
printf("I am in fun1");
fun2();
return 0;
}
int main()
{
fun1();
return 0;
}
```
- **A function may or may not modify its arguments**
The mathematical definition of a function: given a set X, every element x of X is mapped to exactly one element y of a set Y.
In programming, a function that takes x, performs operations, and leaves x unchanged does not modify its argument, but some functions do. Pass an array to a sort routine that returns the sorted result under the same array name, and the argument's contents have changed.
Fortran distinguishes `subroutine` from `function`: think of a `function` as pure, while a `subroutine` can use its arguments to carry back changes made during the call.
In C, a `swap` function that receives addresses likewise changes the contents its arguments point to.
In Julia, by convention a function whose name ends in `!` modifies its input arguments, while a name without `!` only returns a value and leaves its inputs alone.
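The same contrast can be shown in Python (both function names are invented for illustration):

```python
def sorted_copy(xs):
    # pure: returns a new sorted list, leaving the argument untouched
    return sorted(xs)

def sort_in_place(xs):
    # impure: rearranges the caller's own list, like a Julia function ending in !
    xs.sort()
```

After `sort_in_place(data)`, the caller's `data` itself is reordered; after `sorted_copy(data)`, it is not.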
- **Functions do not need names**
Think of a nameless function as usable once, with no way to summon it again afterwards.
In some languages, nameless functions are written as lambda expressions. In Scheme:
```scheme
(lambda (x) (+ x 2))
```
In Python:
```python
lambda x: x+2
```
In JavaScript, a function can be defined like this:
```javascript
x-> x+2
```
Even in C++:
```cpp
[](float x)->float{return x+2;};
```
What good is a function without a name? Plenty. In Python you can hand it to `map` for a one-shot use:
```python
map(lambda x: x+2, [1,3,5,7])
```
In Scheme you can apply it directly:
```scheme
((lambda (x) (+ x 2)) 4)
```
JavaScript and C++ can use anonymous functions in similar ways.
But an anonymous function is used once and cannot be summoned again when you want it. So we give it a name, like fitting a handle to a pot so the pot can be grabbed.
The following C++ names the anonymous function `addfunction` and calls it:
```cpp
auto addfunction=[](float x)->float{return x+2;};
addfunction(3)
```
The following Python names it `add2` and calls it:
```python
add2 = lambda x: x+2
add2(3)
```
The following Scheme does the same:
```scheme
(define add2 (lambda (x) (+ x 2)))
(add2 3)
```
We name a function so we can summon it whenever we need it.
- **Arithmetic operators are really functions**
3+4 is written in Scheme as (+ 3 4), where `+` is just a function. So addition, subtraction, multiplication, and division are, at bottom, functions.
We can even do this in the Julia REPL: assign the `+` operator (really the `+` function) to `f` and call it as `f(2,3)`:
```julia
f = +
f(2,3)
```
An expression like `3+2` is infix notation; the Scheme form `(+ 3 2)` is prefix notation.
Clearly, an operator is just a function.
C++ supports operator overloading: most built-in operators can be redefined or overloaded so they work on user-defined types. An overloaded operator is a function with a special name, the keyword `operator` followed by the operator symbol; like any other function it has a return type and a parameter list.
In Python, defining the `__add__` method on a class overloads the `+` operator for that class.
Julia instead uses multiple dispatch: adding a method to the `Base.:+` function for your type defines `+` for that type.
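A minimal sketch of the Python route (the `Vec` class is invented for illustration):

```python
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # defining __add__ is what overloads the + operator for Vec
        return Vec(self.x + other.x, self.y + other.y)
```

After this, `Vec(1, 2) + Vec(3, 4)` calls `__add__` behind the scenes.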
We can actually go a step further, still in the Julia REPL:
```julia
abc = Meta.parse("3+2")
Meta.show_sexpr(abc)
```
to inspect the internal code representation. Julia's lowered form looks much like Scheme; loosely, Julia can be thought of as a souped-up Scheme.
The essential point: arithmetic operators are functions. Prefix notation is hard on the eyes for things like mixed arithmetic, but expressing everything as function application is very regular and friendly.
- **A main function is not mandatory, but a program needs an entry point**
We learn C or Fortran starting from `main`, but that is really a requirement of the compiler.
In Julia, if you launch a script with `julia foo.jl`, the script itself is the program: broadly it executes top to bottom, and inside that flow you can call functions, loop, branch, and so on.
In Python, when PyCharm starts a run it asks you to pick the entry script, which then executes top to bottom. If the script only defines symbols, variables, and functions, nothing visible happens; but if it ends with an `if __name__ == "__main__":` block, that block is the entry point when the script is launched directly:
```python
def hello():
print("hello")
if __name__ == "__main__":
hello()
```
Java puts each class in a .java file; each .java compiles to a .class, and .class files are bundled into a .jar, with the entry point given by the Main-Class attribute in the manifest (Manifest.txt).
For C we can even entertain a thought experiment: define an anonymous function with a lambda, name it `main`, then compile and run it (untested whether this actually works):
```cpp
auto main=[]()->int { printf("hello world");return 0;};
```
In short, a main function is not required, but a running program needs an entry point.
- **Arguments must follow the agreed-upon convention**
Now for function parameters. Parameters are a contract: if you want me to handle a job, supply the materials in the form I require. To ship a parcel you must provide the recipient's address, name, and phone number; functions are the same. If you want sin(x) for a floating-point x, you must hand me x as a float.
Because data have types, a function can state type requirements on its parameters, and the return value likewise has a type.
Some languages are dynamically typed: types are not declared explicitly, and whatever type arrives at run time is the type that gets processed, for example:
```julia
function f(x)
x + 2
end
```
Python, Julia, and similar languages are dynamically typed; C/C++ and Fortran are statically typed.
When shipping, I may ask for three separate items (address, name, phone), or agree on one structured datum, a form containing all three, handed over at once. Likewise, if I ask for a stack of printed pages, you can deliver them page by page or bound into volumes; binding (packaging) them delivers the same information in structured form. Either way, what you provide must match what I asked for.
Functions are the same: the required parameters can be passed as individual floats or packed into a structure, depending on the form chosen when the function was defined, that is, on the function's requirements. We have even seen a real Fortran program model a solar collector component by packing all inputs into an array `xin` and all outputs into an array `xout`, then calling the corresponding subroutine:
```fortran
subroutine solarcollector(xin,xout)
...
end
```
In one sentence: arguments must follow the agreed convention, and that convention is fixed when the function is defined.
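The courier analogy, with the three separate items packed into one structured datum, might look like this in Python (all names invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Recipient:
    # the "agreed-upon form": three fields bundled into one structure
    name: str
    address: str
    phone: str

def ship(r: Recipient):
    return f"shipping to {r.name} at {r.address}"
```

The caller hands over one `Recipient` instead of three loose strings, and `ship` still gets everything it requires.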
Because a computer performs mechanized information processing, at first it could accept input only in one fixed form. People then found ways to be friendlier to humans: hand material over page by page and I can cope, hand it over volume by volume and I can also cope. In a sense, that is a kind of polymorphism.
Julia achieves this by writing different methods for the same function name; C++ handles it through overloads whose concrete functions differ by parameter form. We will not dig into this for now.
### Collections of Functions Form Libraries
- **Functions can be packaged into libraries, static or dynamic, for others to call**
The essence of writing functions is packaging functionality for reuse. The internals can be treated as a black box, but I must tell you what I offer, just as a courier point posts a menu of services and prices without explaining how parcels are moved.
C's stdio is exactly such a case: many useful functions packaged into one library (static versus dynamic in a moment), with stdio.h telling you what is inside and with what parameter forms to call it.
In CLion, creating a new project and choosing *C library* produces the following two files, library.h and library.cpp:
```cpp
#ifndef TESTD_LIBRARY_H
#define TESTD_LIBRARY_H
void hello();
#endif //TESTD_LIBRARY_H
```
```cpp
#include "library.h"
#include <iostream>
void hello() {
std::cout << "Hello, World!" << std::endl;
}
```
For convenience, CLion also generates the project's CMakeLists.txt:
```cmake
cmake_minimum_required(VERSION 3.20)
project(testd)
set(CMAKE_CXX_STANDARD 14)
add_library(testd library.cpp)
```
Building in CLion then produces the corresponding libtestd.a file.
Had we chosen a shared library when creating the project, the generated CMakeLists.txt would differ slightly:
```cmake
add_library(testd SHARED library.cpp)
```
and building produces libtestd.dll instead.
Now let us explain static and dynamic link libraries.
As described above, functions can be packaged into libraries offered to users; C ships with many such libraries, and your code can call their functions. Your C source compiles into .o files, but to make an exe, the code of every library function you use must be embedded into it. That is static linking, and libraries provided this way are called static link libraries.
If certain functions are used extremely often, or you simply prefer it, you can instead leave them out of the exe, merely recording that it needs a certain function from a certain library; the library is loaded and the function called only at run time. That is a dynamic link library.
On Windows, static libraries usually carry the .lib suffix and dynamic ones .dll.
On Linux, the suffixes are .a and .so respectively.
On macOS, dynamic libraries usually carry the .dylib suffix.
On a Linux system, `ldd /bin/ls` in a terminal shows the shared libraries the `ls` command needs at run time; if some .so cannot be found, the program will not run.
Where does the system look for these libraries? Take Linux first. Static libraries are needed at compile time: with command-line gcc, `-I` specifies header search directories and `-L` the directories holding the libraries. Dynamic libraries need the same `-I`/`-L` options at compile time, and in addition directories named by two system variables are searched, one for headers (`INCLUDE_PATH`) and one for the run-time loader (`LD_LIBRARY_PATH`); the system maintains these variables and you can change them yourself. Windows is similar, except the current directory is usually searched too. With a Visual Studio IDE, the project configuration specifies which static and dynamic libraries to use and where they live.
Static linking is simple: the functions are embedded in your exe, so copying it to another machine just works; the downside is a larger executable. With dynamic linking, besides copying the exe you must make sure the required dynamic libraries can also be found on the target machine.
- **Functions can be called across languages**
A function written in C and compiled into a shared library (dll or so) can be called from Python. Here is a fragment of a real program (https://github.com/NREL/REopt_API/blob/master/reo/src/sscapi.py):
```python
class PySSC:
def __init__(self):
if sys.platform == 'win32' or sys.platform == 'cygwin':
# nlaws 201201 Windows is no longer supported (by celery) but is cygwin supported?
self.pdll = CDLL("ssc.dll")
elif sys.platform == 'darwin':
# NOTE: the path of this file must be in DYLD_LIBRARY_PATH
self.pdll = CDLL("ssc.dylib")
elif sys.platform == 'linux2' or sys.platform == 'linux':
# NOTE: the path of this file must be in LD_LIBRARY_PATH
self.pdll = CDLL('ssc.so')
else:
print("Platform of type {} not supported for wind analyses.".format(sys.platform))
...
def version(self):
self.pdll.ssc_version.restype = c_int
return self.pdll.ssc_version()
```
It first loads the shared library appropriate to your platform, then defines a `version` method; calling this Python `version` really executes the `ssc_version()` function inside the shared library.
A dll can also be called from Julia, which provides a concise and efficient mechanism. Julia's philosophy is *no boilerplate*: it calls C/Fortran functions directly, with no glue code, code generation, or extra compilation step, purely through `ccall`, whose syntax looks like an ordinary function call.
The callee must live in a shared library (.so, .dylib, .dll). Most C and Fortran libraries already ship as shared libraries; when compiling your own code with GCC or Clang, add the -shared and -fPIC compiler options.
A library function is referenced as (:function, "library") or ("function", "library"), where function is the function name and library the library name. Shared libraries on the (platform- or OS-specific) load path are resolved by name; a full path to the library may also be given.
A typical example:
```julia
using Compat
const coolproplibrary = joinpath(@__DIR__, "./CoolProp.dll")
function PropsSI(fluid::AbstractString, output::AbstractString)
val = ccall( (:Props1SI, coolproplibrary), Cdouble, (Cstring, Cstring), fluid, output)
if val == Inf
error("CoolProp: ", get_global_param_string("errstring"))
end
return val
end
```
It tags the dll via `coolproplibrary`, then writes a `PropsSI` function that internally calls `Props1SI` to do the work.
Mixed Fortran/C programming used to be troublesome, but with the further development of the Fortran standard it has become easy: just use `ISO_C_BINDING`. Below is a fragment from <https://github.com/OP-DSL/OPS>:
```fortran
module OPS_Fortran_Declarations
use, intrinsic :: ISO_C_BINDING
...
integer(c_int) :: OPS_READ = 1
integer(c_int) :: OPS_WRITE = 2
integer(c_int) :: OPS_RW = 3
integer(c_int) :: OPS_INC = 4
integer(c_int) :: OPS_MIN = 5
integer(c_int) :: OPS_MAX = 6
...
subroutine ops_reduction_result_real_8 (reduction_handle, var)
use, intrinsic :: ISO_C_BINDING
type(ops_reduction) :: reduction_handle
real(8), dimension(:), target :: var
call ops_reduction_result_c (reduction_handle%reductionCptr, reduction_handle%reductionPtr%size, c_loc(var))
end subroutine ops_reduction_result_real_8
...
end module OPS_Fortran_Declarations
```
This fragment will not compile and run as-is, but it shows that with `ISO_C_BINDING`, calling between Fortran and C is straightforward.
To sum up: functions can be called across languages, provided you follow the advertised (required) method.
- **An executable is a function whose arguments come from the command line**
In a Linux shell, commands take options and arguments, e.g. `ls -l /usr/local`.
Windows is the same. Take `dir`: you can write `dir /a c:\`, where `/a` is an option and `c:\` the argument.
A look at the ls source (<https://github.com/wertarbyte/coreutils/blob/master/src/ls.c>):
```c
int
main (int argc, char **argv)
{
...
initialize_main (&argc, &argv);
...
i = decode_switches (argc, argv);
...
}
```
Here argc and argv are the argument count and argument values; decode_switches parses them, and the parsed values determine the function's subsequent behavior.
Some programs use a dedicated function to parse the command line, along these lines:
```c
#ifdef TRILIBRARY
parsecommandline(1, &triswitches, &b);
#else /* not TRILIBRARY */
parsecommandline(argc, argv, &b);
#endif /* not TRILIBRARY */
m.steinerleft = b.steiner;
```
Fortran is similar; since Fortran 2003, `GET_COMMAND_ARGUMENT` retrieves the arguments:
```fortran
PROGRAM test_get_command_argument
INTEGER :: i,n
CHARACTER(len=32) :: arg
i = 1
DO
CALL get_command_argument(i, arg)
IF (LEN_TRIM(arg) == 0) EXIT
READ(arg,'(I3)') n
WRITE (*,*) n*n
i = i+1
END DO
END PROGRAM
```
If the compiled executable is called testComArg, typing `./testComArg 2 3` in a terminal
yields:
```
4
9
```
Python and friends are no different.
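In Python the command-line arguments arrive in `sys.argv`; a small sketch mirroring the Fortran example (the function name `squares` is invented):

```python
import sys

def squares(argv):
    # argv[0] is the program name; square each remaining argument
    return [int(a) ** 2 for a in argv[1:]]

if __name__ == "__main__":
    for n in squares(sys.argv):
        print(n)
```

Run as `python squares.py 2 3` it prints 4 and 9, just like the Fortran program.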
So an executable is a function, with its arguments supplied on the command line.
- **Environment variables also influence a function's behavior**
One more thing: while a program runs there exist some usually unnoticed "variables", called environment variables: the user name, the shell in use, the current directory, and so on. Type `env` at a command prompt and you will see many of them (on Linux too). Plainly put, it is a table of variable names and values.
Some functions consult this table at run time and let the values of certain entries determine their behavior.
For example, in Julia, `versioninfo()` may show `JULIA_PKG_SERVER = https://mirrors.tuna.tsinghua.edu.cn/julia`; then, when adding packages in Pkg mode, they are downloaded from the mirror site named by `JULIA_PKG_SERVER`:
```julia
julia> versioninfo()
Julia Version 1.7.2
Commit bf53498635 (2022-02-06 15:21 UTC)
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-12.0.1 (ORCJIT, skylake)
Environment:
JULIA_PKG_SERVER = https://mirrors.tuna.tsinghua.edu.cn/julia
julia>
```
Of course, it is those functions themselves that choose to consult certain environment variables and use their values.
In short, the environment at run time also influences a function's behavior.
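A Python sketch of the same idea: a function that consults an environment variable and falls back to a default (the variable name mirrors the Julia example; the fallback URL is our own placeholder):

```python
import os

def pkg_server(default="https://pkg.example.org"):
    # behavior depends on the JULIA_PKG_SERVER entry in the environment table
    return os.environ.get("JULIA_PKG_SERVER", default)
```

Change the variable in the environment and the function's answer changes with it, without touching the code.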
- **A function's run-time "environment" and closures**
Vividly put, a function call is entering a dream, and a call made inside the callee is a dream within a dream. (Seen the film Inception?)
Gamers may feel it too: a function call is opening a door in a game, the screen flickers, you enter a new scene, and inside that scene you may open further secret rooms.
But the tools you carry into the dream were handed to you on entry: the variable-value table. A game skill set is just a collection of variable-value pairs.
Returning from a game scene to the previous one is returning from a function call, and the plot resumes where it left off. Entering a level pushes its skill set on top; returning pops off what was pushed. A stack, in other words.
Underneath, the operating system, or the compiler, or whatever sits at the bottom, treats the function's entry point together with its "environment" (not just environment variables, but also your argument set) as a single unit when entering the dream. Being a single unit means forming a structure: treating it as a closure.
For more on closures, see [Lisp 已死,Lisp 万岁!](http://www.yinwang.org/blog-cn/2013/03/26/lisp-dead-alive) or the part of yuziwen's [PL教程 第一章 人和机器](https://yuziwen.github.io/pl-tutorial-1.html) before the exercise "find the pattern".
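A closure in Python makes the "function plus its environment" idea concrete (function names invented):

```python
def make_counter():
    count = 0  # this binding lives in the enclosing "dream"

    def step():
        nonlocal count  # step carries its environment with it: a closure
        count += 1
        return count

    return step
```

Each call to `make_counter()` creates an independent environment, so two counters never interfere.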
### Building Services out of Functions
- **Event-driven programs, infinite loops, and services**
When we first learned programming, our teachers told us never to write an infinite loop. Back then the computer just gave you a black screen; a loop with no exit meant a reboot.
But are infinite loops really forbidden? The following, from <https://github.com/APMonitor/arduino/blob/master/0_Test_Device/Python/tclab_v2/tclab_v2.ino>, runs on an Arduino driving a small temperature-control lab board. Pasting such a long listing hurts readability, but for completeness we include it in full:
```c
#include "Arduino.h"
// determine board type
#if defined(__AVR_ATmega328P__) || defined(__AVR_ATmega168__)
String boardType = "Arduino Uno";
#elif defined(__AVR_ATmega32U4__) || defined(__AVR_ATmega16U4__)
String boardType = "Arduino Leonardo/Micro";
#elif defined(__AVR_ATmega1280__) || defined(__AVR_ATmega2560__)
String boardType = "Arduino Mega";
#else
String boardType = "Unknown board";
#endif
// Enable debugging output
const bool DEBUG = false;
// constants
const String vers = "2.0.1"; // version of this firmware
const long baud = 115200; // serial baud rate
const char sp = ' '; // command separator
const char nl = '\n'; // command terminator
// pin numbers corresponding to signals on the TC Lab Shield
const int pinT1 = 0; // T1
const int pinT2 = 2; // T2
const int pinQ1 = 3; // Q1
const int pinQ2 = 5; // Q2
const int pinLED1 = 9; // LED1
// temperature alarm limits
const int limT1 = 50; // T1 high alarm (°C)
const int limT2 = 50; // T2 high alarm (°C)
// LED1 levels
const int hiLED = 60; // hi LED
const int loLED = hiLED/16; // lo LED
// global variables
char Buffer[64]; // buffer for parsing serial input
int buffer_index = 0; // index for Buffer
String cmd; // command
float val; // command value
int ledStatus; // 1: loLED
// 2: hiLED
// 3: loLED blink
// 4: hiLED blink
long ledTimeout = 0; // when to return LED to normal operation
float LED = 100; // LED override brightness
float P1 = 200; // heater 1 power limit in units of pwm. Range 0 to 255
float P2 = 100; // heater 2 power limit in units in pwm, range 0 to 255
float Q1 = 0; // last value written to heater 1 in units of percent
float Q2 = 0; // last value written to heater 2 in units of percent
int alarmStatus; // hi temperature alarm status
boolean newData = false; // boolean flag indicating new command
int n = 10; // number of samples for each temperature measurement
void readCommand() {
while (Serial && (Serial.available() > 0) && (newData == false)) {
int byte = Serial.read();
if ((byte != '\r') && (byte != nl) && (buffer_index < 64)) {
Buffer[buffer_index] = byte;
buffer_index++;
}
else {
newData = true;
}
}
}
// for debugging with the serial monitor in Arduino IDE
void echoCommand() {
if (newData) {
Serial.write("Received Command: ");
Serial.write(Buffer, buffer_index);
Serial.write(nl);
Serial.flush();
}
}
// return average of n reads of thermister temperature in °C
inline float readTemperature(int pin) {
float degC = 0.0;
for (int i = 0; i < n; i++) {
degC += analogRead(pin) * 0.322265625 - 50.0; // use for 3.3v AREF
//degC += analogRead(pin) * 0.170898438 - 50.0; // use for 1.75v AREF
}
return degC / float(n);
}
void parseCommand(void) {
if (newData) {
String read_ = String(Buffer);
// separate command from associated data
int idx = read_.indexOf(sp);
cmd = read_.substring(0, idx);
cmd.trim();
cmd.toUpperCase();
// extract data. toFloat() returns 0 on error
String data = read_.substring(idx + 1);
data.trim();
val = data.toFloat();
// reset parameter for next command
memset(Buffer, 0, sizeof(Buffer));
buffer_index = 0;
newData = false;
}
}
void sendResponse(String msg) {
Serial.println(msg);
}
void sendFloatResponse(float val) {
Serial.println(String(val, 3));
}
void sendBinaryResponse(float val) {
byte *b = (byte*)&val;
Serial.write(b, 4);
}
void dispatchCommand(void) {
if (cmd == "A") {
setHeater1(0);
setHeater2(0);
sendResponse("Start");
}
else if (cmd == "LED") {
ledTimeout = millis() + 10000;
LED = max(0, min(100, val));
sendResponse(String(LED));
}
else if (cmd == "P1") {
P1 = max(0, min(255, val));
sendResponse(String(P1));
}
else if (cmd == "P2") {
P2 = max(0, min(255, val));
sendResponse(String(P2));
}
else if (cmd == "Q1") {
setHeater1(val);
sendFloatResponse(Q1);
}
else if (cmd == "Q1B") {
setHeater1(val);
sendBinaryResponse(Q1);
}
else if (cmd == "Q2") {
setHeater2(val);
sendFloatResponse(Q2);
}
else if (cmd == "Q2B") {
setHeater1(val);
sendBinaryResponse(Q2);
}
else if (cmd == "R1") {
sendFloatResponse(Q1);
}
else if (cmd == "R2") {
sendFloatResponse(Q2);
}
else if (cmd == "SCAN") {
sendFloatResponse(readTemperature(pinT1));
sendFloatResponse(readTemperature(pinT2));
sendFloatResponse(Q1);
sendFloatResponse(Q2);
}
else if (cmd == "T1") {
sendFloatResponse(readTemperature(pinT1));
}
else if (cmd == "T1B") {
sendBinaryResponse(readTemperature(pinT1));
}
else if (cmd == "T2") {
sendFloatResponse(readTemperature(pinT2));
}
else if (cmd == "T2B") {
sendBinaryResponse(readTemperature(pinT2));
}
else if (cmd == "VER") {
sendResponse("TCLab Firmware " + vers + " " + boardType);
}
else if (cmd == "X") {
setHeater1(0);
setHeater2(0);
sendResponse("Stop");
}
else if (cmd.length() > 0) {
setHeater1(0);
setHeater2(0);
sendResponse(cmd);
}
Serial.flush();
cmd = "";
}
void checkAlarm(void) {
if ((readTemperature(pinT1) > limT1) or (readTemperature(pinT2) > limT2)) {
alarmStatus = 1;
}
else {
alarmStatus = 0;
}
}
void updateStatus(void) {
// determine led status
ledStatus = 1;
if ((Q1 > 0) or (Q2 > 0)) {
ledStatus = 2;
}
if (alarmStatus > 0) {
ledStatus += 2;
}
// update led depending on ledStatus
if (millis() < ledTimeout) { // override led operation
analogWrite(pinLED1, LED);
}
else {
switch (ledStatus) {
case 1: // normal operation, heaters off
analogWrite(pinLED1, loLED);
break;
case 2: // normal operation, heater on
analogWrite(pinLED1, hiLED);
break;
case 3: // high temperature alarm, heater off
if ((millis() % 2000) > 1000) {
analogWrite(pinLED1, loLED);
} else {
analogWrite(pinLED1, loLED/4);
}
break;
case 4: // high temperature alarm, heater on
if ((millis() % 2000) > 1000) {
analogWrite(pinLED1, hiLED);
} else {
analogWrite(pinLED1, loLED);
}
break;
}
}
}
// set Heater 1
void setHeater1(float qval) {
Q1 = max(0., min(qval, 100.));
analogWrite(pinQ1, (Q1*P1)/100);
}
// set Heater 2
void setHeater2(float qval) {
Q2 = max(0., min(qval, 100.));
analogWrite(pinQ2, (Q2*P2)/100);
}
// arduino startup
void setup() {
analogReference(EXTERNAL);
while (!Serial) {
; // wait for serial port to connect.
}
Serial.begin(baud);
Serial.flush();
setHeater1(0);
setHeater2(0);
ledTimeout = millis() + 1000;
}
// arduino main event loop
void loop() {
readCommand();
if (DEBUG) echoCommand();
parseCommand();
dispatchCommand();
checkAlarm();
updateStatus();
}
```
The Arduino is a microcontroller, and the last function above is effectively the main program — an infinite loop.
```c
void loop() {
readCommand();
if (DEBUG) echoCommand();
parseCommand();
dispatchCommand();
checkAlarm();
updateStatus();
}
```
Clearly, it just sits there polling: it asks what your command is, parses it, dispatches it, reports status, and goes around again.
Event-driven programs work the same way. A graphical window is constantly "asking" your mouse and keyboard for input and reacting to it, until you click the close button or choose the exit menu item.
Server programs are no different. A web server just sits there waiting for your requests — the technical term is listening (listen).
So why weren't we allowed to write infinite loops back in the DOS days? Because there was no task manager — no hand of god to kill the program from the outside. Modern operating systems are multitasking and have ways to handle this. Some programs even run in a sandbox: you can do whatever you like inside, and the sandbox can be destroyed from outside.
In fact, in terms of the function-call hierarchy, the operating system is the original "root" function (much as your desktop is the root window behind a pile of windows). Of course an operating system is not literally made of one function, but as an analogy this is roughly right.
- C++ function templates and generic programming
- Dynamically vs. statically typed languages, and how they are converging toward each other
Python's type hints
## Symbols and models
- From symbols to models and parsers
- Calling a function means using it to process your model
- Model processing and code generation
- A compiler is also a function
Now it's time to talk about compilers. A compiler is really just a function: it processes your source code (symbols) — first parsing the source to obtain the model those symbols represent, then processing that model, and finally writing the result out to a file as assembly or machine code.
Taking gcc as an example, compilation usually looks like this,
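The polling pattern above can be sketched in plain C. This is a toy dispatcher — the command names are hypothetical, not the real firmware — that clamps and applies commands the way a `dispatchCommand()` would:

```c
#include <assert.h>
#include <string.h>

/* Toy command dispatcher: the same poll-parse-dispatch cycle as loop(),
   but taking the command as a function argument instead of reading a
   serial port. */
static double heater = 0.0;

double dispatch(const char *cmd, double val) {
    if (strcmp(cmd, "Q1") == 0) {       /* set heater power, clamped to 0..100 */
        heater = val < 0 ? 0 : (val > 100 ? 100 : val);
        return heater;
    } else if (strcmp(cmd, "X") == 0) { /* emergency stop */
        heater = 0;
        return heater;
    }
    return -1;                          /* unknown command */
}
```

A real firmware would fill `cmd` from bytes read off the serial port, but the ask-parse-dispatch cycle is the same.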
```c
gcc foo.c
```
If we slow this process down, it becomes: step one, compile to assembly; step two, assemble to machine code; step three, link,
```
gcc -S foo.c -o foo.asm
gcc foo.asm -o foo.o
gcc foo.o -o foo.exe
```
Going from C code to assembly is exactly that: parse the symbols, process the model, emit the result.
Assembly language was developed on top of machine code, and high-level languages like C were developed on top of assembly.
On top of C we build an application, foo. Porting C to another platform means making the compiler emit that platform's assembly; as long as our program avoids too many exotic features, it will generally run on the new platform. Each layer minds its own business.
- **make, makefiles, and project files**
If you have thousands of source files with dependencies among them, then whenever you modify one file, every file that depends on it should be recompiled. Typing line after line of
```c
gcc first.c -o first.o
gcc second.c -o second.o
...
```
is tedious. So people developed the wonderful tool make. It uses a makefile to determine the dependencies and build rules, and, based on timestamps, recursively rebuilds every file that has been modified more recently than its target, along with everything that depends on it. A typical makefile looks like this,
```
foo.o:foo.c
gcc foo.c -o foo.o
bar.o:bar.c
gcc bar.c -o bar.o
myprogram:foo.o bar.o
gcc foo.o bar.o -o myprogram
```
The core idea is: dependencies + build rules. To adapt to different platforms and to batch processing, you can also define variables and use wildcards. For an introduction to make and makefiles, skim a tutorial — understanding that a makefile describes dependencies plus build rules is enough.
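As a sketch of those conveniences (file names invented for illustration), a makefile using variables and a pattern rule might look like:

```makefile
CC     = gcc
CFLAGS = -O2 -Wall
OBJS   = foo.o bar.o

myprogram: $(OBJS)
	$(CC) $(OBJS) -o myprogram

# one pattern rule replaces a rule per .c file
# (recipe lines must be indented with a tab)
%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@
```

`$<` is the first prerequisite and `$@` the target, so the one rule covers every `.o` file listed in `OBJS`.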
- **From make to automake and configure, then qmake and wmake**
A common situation is compiling one program on different platforms with different makefiles, because a makefile also has to state which compiler to use and where the include directories and library files live. People always want to be lazy, so tools that generate makefiles automatically were developed — automake, configure, and so on. That is why the readme in an open-source package so often describes installation like this,
```
./configure --prefix=/usr/local FC=ifort
make
make install
```
Beyond these there are other curiosities such as qmake and wmake. qmake comes from Qt; wmake is used heavily by the CFD package OpenFOAM. The basic idea is the same in all of them. wmake apparently searches the current directory and its subdirectories recursively and compiles in each one.
Visual Studio actually has the same kind of thing, except it isn't makefile + make — it's called a project file. A solution may contain several projects, and each project has a project file. If you open a project file in a text editor, you will find it describes exactly the same things: dependencies and build rules. Of course, Visual Studio also supports make, via its nmake tool.
- **cmake is a function too**
So, can makefiles or project files be generated automatically from the dependencies and targets? You guessed it — they can. cmake is exactly that tool. Based on CMakeLists.txt and your chosen toolchain, it generates the makefiles, or the project files Visual Studio needs, and then invokes your toolchain to compile the code.
From our earlier point of view, cmake is also a function: its input is CMakeLists.txt and friends, and its output is the project files you want.
For the C/C++/Fortran family, JetBrains' CLion is a superb tool. (JetBrains tools are free for academic use and for open-source development.)
## Recursive thinking and the structure of "things"
- The Fibonacci sequence and recursion
- Recursion and loops
- Arithmetic expressions, expression trees, and recursion
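The arithmetic-expression bullet above deserves a concrete sketch. Below is a minimal recursive-descent evaluator in C (toy code: integers only, no whitespace or error handling); each grammar rule becomes a function, and the mutual recursion mirrors the expression tree a parser would build:

```c
#include <assert.h>
#include <ctype.h>

static const char *p;          /* cursor into the input string */

static long expr(void);        /* forward declaration: factor calls expr */

static long number(void) {
    long v = 0;
    while (isdigit((unsigned char)*p))
        v = v * 10 + (*p++ - '0');
    return v;
}

/* factor := number | '(' expr ')' */
static long factor(void) {
    if (*p == '(') {
        p++;                   /* skip '(' */
        long v = expr();
        p++;                   /* skip ')' */
        return v;
    }
    return number();
}

/* term := factor (('*' | '/') factor)* */
static long term(void) {
    long v = factor();
    while (*p == '*' || *p == '/') {
        char op = *p++;
        long r = factor();
        v = (op == '*') ? v * r : v / r;
    }
    return v;
}

/* expr := term (('+' | '-') term)* */
static long expr(void) {
    long v = term();
    while (*p == '+' || *p == '-') {
        char op = *p++;
        long r = term();
        v = (op == '+') ? v + r : v - r;
    }
    return v;
}

long eval(const char *s) { p = s; return expr(); }
```

Operator precedence falls out of the grammar for free: `term` binds tighter than `expr`, so `1+2*3` evaluates the multiplication first.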
## What interpreters teach us, and processing models
We submit an `Add 3 4`
- Abstract syntax trees
- Recursive processing is how interpreters work
## Abstraction and encapsulation
- Structs: adding, subtracting, multiplying, and dividing rational numbers
- Structs evolve into classes
- Class-based object-oriented programming: encapsulation, inheritance, polymorphism
- Algorithms independent of concrete types
C++ template classes
Julia's parametric types
- Java's java, class, and jar
- Julia's multiple dispatch
- Go's embedded structs
- Libraries and header files, packages, modules
- Software packages and ecosystems
- Namespaces
- Hardware as layer upon layer of abstraction
Transistors build logic gates; gates build adders, registers, and latches; these in turn build higher-level functional modules. A CPU is not designed at the transistor level, but something compiler-like must exist to lower the design to a manufacturable form. That is hardware's layer-upon-layer abstraction.
- Software as layer upon layer of abstraction
Assembly was built on machine code, and high-level languages like C were built on assembly. When you write in a high-level language like C you normally don't care what the underlying assembly is doing, let alone the machine code. That is one layer of abstraction. On top of C, still higher-level languages can be built that are friendlier or safer.
Other people's libraries are also a layer of abstraction: when you scan a QR code, you don't need to know what is happening at the level of 0s and 1s.
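As a small illustration of the rational-number bullet above (a sketch, not taken from any particular textbook): data abstraction with a struct, where the representation — numerator and denominator kept in lowest terms — is hidden behind a constructor and operation functions:

```c
#include <assert.h>

/* Rational numbers as a struct, always stored in lowest terms. */
typedef struct { long num, den; } Rat;

static long gcd(long a, long b) {
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b) { long t = a % b; a = b; b = t; }
    return a;
}

/* Constructor: every Rat goes through here, so the invariant
   (lowest terms, sign kept in the numerator) always holds. */
static Rat make_rat(long n, long d) {
    long g = gcd(n, d);
    if (d < 0) { n = -n; d = -d; }
    Rat r = { n / g, d / g };
    return r;
}

static Rat rat_add(Rat a, Rat b) {
    return make_rat(a.num * b.den + b.num * a.den, a.den * b.den);
}

static Rat rat_mul(Rat a, Rat b) {
    return make_rat(a.num * b.num, a.den * b.den);
}
```

Callers only ever use `make_rat`, `rat_add`, and `rat_mul`; swapping in a different internal representation would not change any calling code — which is exactly the step that later turns structs into classes.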
For an introduction to computer science, there is a [roughly five-hour, 40-episode course](https://www.bilibili.com/video/BV1EW411u7th?spm_id_from=333.337.search-card.all.click) you can go through quickly.
## Graphical interfaces
- From gcc and gdb to IDEs
From what we covered earlier, and from the [introduction to make and makefiles](https://seisman.github.io/how-to-write-makefile/), we know that writing programs does not actually require a GUI: you edit source files in an editor, compile with a compiler, link with a linker, and end up with an executable. Debugging is the same — gdb is a debugger. But there are graphical tools: integrated development environments (IDEs) such as Visual Studio and CLion. When we click a button in an IDE, some function is executed behind the scenes; clicking Build runs the build task, and connecting the click event to the build task involves things like message dispatch and message handling.
The point is that clicking the mouse hides a great deal; knowing the process running behind it gives us a clear picture.
- Scratch-style programming for kids and the code behind it
Scratch-style block programming has become very popular. Tools like Yuanma Editor or Codemao can display the equivalent Python or JavaScript source right inside their IDEs. This shows very directly that one piece of code can take many different surface forms — what it expresses is the same thing underneath.
- From Delphi, C#, and Qt to the browser
We learned programming starting from C or Fortran, mostly on the command line, so producing a GUI program was delightful. There used to be a popular gem called Delphi: drag an interface together, write the corresponding handler functions, and you were done. Delphi is still alive today. One of its key developers later joined Microsoft and created C#, where you likewise drag and drop an interface together. Qt, GTK, and the like are C++ GUI libraries (Qt also offers other enhancements). Qt handles events with signals and slots: when you perform an action on a widget — a click, say — a signal is emitted, and if a handler function is connected to that signal, the corresponding code runs. These days many GUIs have moved to web pages. Many programs are essentially a redecorated browser engine — the application logic is still your own code, of course; VS Code is one example. The client/server architecture used to be fashionable; now it is browser/server. Many mobile apps and WeChat mini-programs are just web pages. And the web's UI ecosystem is extremely rich: beyond basic HTML + CSS there are many JavaScript UI libraries such as React, Vue, and Angular.
The point here is that the emphasis of GUI front ends has shifted. Decoration matters, but much of programming is the code running behind it — while the user experience often starts from the UI. For engineering computation, it is enough to understand how the pieces connect end to end; when a truly polished front end is needed, that calls for the relevant specialists.
## How to study further
- Learn one programming language; master its key features and ignore the minor ones
- Pay attention to the ways each language differs from the others
For example, Fortran arrays are indexed from 1 while C's start at 0, and Fortran stores multi-dimensional arrays in column-major order.
- Programming vs. algorithms: SICP and LeetCode
To understand programming, work through something like SICP; to learn algorithm implementation, LeetCode and competitive programming are suitable references.
- Ingredients, seasonings, and the banquet
Knowing every property of every ingredient and seasoning does not make you a chef; likewise, mastering all of the syntax does not make you a master programmer.
Nor do you need to know every ingredient and seasoning before cooking a good dish. What you do is watch a chef cook, cook it yourself, have people taste it — and repeat. Writing programs is the same.
- User-friendly vs. machine-friendly
From 0s and 1s to assembly to C, and on to today's high-level languages; from JavaScript to TypeScript — the direction is always toward being friendlier to users, not to machines.
## Clarifications and reference material
### [Introduction to make and makefiles](https://seisman.github.io/how-to-write-makefile/)
[Introduction to make and makefiles](https://seisman.github.io/how-to-write-makefile/) — don't read too much of it; understand the compilation process and that a makefile describes dependencies and build rules, try it once, and you're done.
### [yuziwen's blog](https://yuziwen.github.io/)
[yuziwen's blog](https://yuziwen.github.io/) — worth a look for beginners, to understand functions, recursion, and structs.
### [Wang Yin's blog](http://www.yinwang.org/)
[Wang Yin's blog](http://www.yinwang.org/) — much of it is worth reading, especially "Demystifying Computer Science", "How to Write an Interpreter", and "How to Master All Programming Languages", plus "Misconceptions about Parsers" and "Lisp is Dead, Long Live Lisp!".
### [SICP](https://mitpress.mit.edu/sites/default/files/sicp/index.html)
[SICP](https://mitpress.mit.edu/sites/default/files/sicp/index.html) is the famous Structure and Interpretation of Computer Programs. Honestly, I never finished it. Roughly understanding procedural abstraction (functions) and data abstraction (structs, classes) is about enough — at most up to the complex-number and automatic-differentiation examples. A Chinese translation exists.
### [SICP-JS](https://sourceacademy.org/sicpjs)
[SICP-JS](https://sourceacademy.org/sicpjs) is the JavaScript edition of SICP, by a professor at the National University of Singapore. Its advantage is that, being implemented in JavaScript, it runs right in the browser. JavaScript really is a good way into programming: there is nothing to install — with Chrome, press F12 and you have everything.
### [HtDP](https://htdp.org/)
[HtDP](https://htdp.org/) (How to Design Programs) is one of the classic introductory programming books, based on Scheme, and written in great detail.
### [Let's Talk About the Lambda Calculus](https://github.com/txyyss/Lambda-Calculus)
[Let's Talk About the Lambda Calculus](https://github.com/txyyss/Lambda-Calculus), written by Wang Shengyi, introduces the lambda calculus. In general you do not need to understand it deeply.
### [The Nature of Lisp](https://www.cnblogs.com/jxcia_Lai/archive/2012/10/29/2744226.html)
[The Nature of Lisp](https://www.cnblogs.com/jxcia_Lai/archive/2012/10/29/2744226.html) — I don't know where the Chinese translation first appeared; the English original is at <https://www.defmacro.org/ramblings/lisp.html>. You probably need to have learned Java to follow it, but not following it costs you nothing.
Don't read all of these books and links in full — otherwise you may never come back. Grasp the core ideas; they are in "Demystifying Computer Science", in the syllabus of the "Fundamentals of Computer Science" course, and in "How to Write an Interpreter".
One more thing: programming and mathematics are two different things, and programming is not the same as algorithms. For an introduction to programming, getting to the point of understanding interpreters is about enough; an introduction to algorithms probably requires grinding LeetCode and the like.
---
id: tag-gettext
title: t (gettext tag)
---
The `t` function (or [template literal tag](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#Tagged_templates))
is used almost as a simple GNU `gettext` function but with the possibility to embed some expressions inside
template literals.
## Usage:
```js
import { t } from 'ttag';
function hello(name) {
return t`Hello ${name}`
}
```
## Try
https://jsfiddle.net/AlexMost/5vu9ep2c/1/
## Format
To make localized strings clearer and more reliable for translators, there are some restrictions on the expressions that can
be used inside the string template. Here are the allowed expression formats.
### Valid
```js
t`Hello Mike` // valid.
t`Hello ${ name }` // valid. (identifiers are allowed)
t`simple string ${this.props.name}` // valid. (member expressions are also allowed)
const f = (arg) => {
// multiline strings will be dedented (by default)
// so translators will not see extra tabs or spaces before each line.
return t`multiline
strings
are supported`
}
```
### Invalid
```js
t`Hello ${getUserName()}` // function calls are not allowed.
t`` // empty strings are not allowed.
t`${greeting} ${name}` // strings that do not contain meaningful information are not allowed.
```
## Extract
All translations can be extracted to `.po` file with `ttag-cli` tool.
```js
import { t } from 'ttag';
const name = 'Mike';
console.log(t`Hello ${name}`);
```
```po
#: src/source.js:8
msgid "Hello ${ name }"
msgstr ""
```
## Resolve
From the example above, let's assume that translator added translation to .po file:
```po
#: src/source.js:8
msgid "Hello ${ name }"
msgstr "Hello ${ name } [translated]"
```
The resulting compiled file will contain this:
```js
import { t } from 'ttag'
const name = 'Mike'
console.log(`Hello ${ name } [translated]`)
```
If there are no expressions inside template, then ttag will resolve translation as a simple string.
## Default resolve
By default, if no translation was found, ttag will just strip `t` tag before the tagged template.
# Azure Application Scenario Troubleshooting Workshop
Created this workshop for my customer as they wanted a *real-world* application deployed into Azure that would break. The task of the workshop would then be identifying the break and then implementing Azure functionality to monitor/alert/react to the break. We ran this workshop in 5 hours.
My customer uses Azure CLI and bash so this workshop caters to this.
## Workshop Scenario Overview
The participants of the workshop oversee the ASP.NET MVC web frontend for the Music Store 'ASP.NET MVC Music Store'. The application is deployed across three VMs that are load balanced. The application communicates to a Azure SQL server database. There are six scenarios:
1. A member of your team has pushed the latest code to production but it has not gone to plan. Customers are ringing up that the website is no longer functioning.
Investigate the issue and implement functionality that will alert to any downtime.
2. A member of your team has noticed that the website is no longer functioning. Investigate the issue and implement functionality that will alert to any downtime.
3. The SQL database team are reporting no traffic to their SQL database for the application. Investigate why this is and how this can be prevented in the future.
4. The management of the organisation are conducting a BCDR audit of all applications for the music store. Implement functionality that will allow the application to return to as-is in the event of a disaster. Your current RTO is 30 minutes.
5. Customers are ringing up complaining that the website is running slowly and receiving the occasional gateway timeout. They’re using the telephone ordering service instead. Investigate why the website is operating slowly and what functionality can be implemented to prevent this in the future.
6. It’s been all over the news – the ransomware WAN2CRY is being spread amongst the internet and somehow ended up on our web farm. Our website is not operational. Restore the service and meet our RTO of 30 minutes.
The generated `./client/break/*.sh` scripts then break the environment by:
1. Introducing a new route to the route table that prevents traffic getting to the SQL server
2. Killing the IIS service (`iisreset /stop`) on the VMs in the resource group
3. Adding an outbound rule to all NSGs in the resource group blocking the SQL Server
4. BCDR Review of the Application (introduction of features to prevent a disaster)
- Didn't reveal to the participants but the final task deletes the web folder and so this task prepares for the final one
5. Downloading and executing a PowerShell script to max out CPU on the servers (`stress.ps1`)
6. Executing a script to delete the web folder on the VMs (`wan2cry.ps1`)
The participants should not worry about the SQL team/SQL server as this is out of their remit and hosted by another team. The resource for this sits in your subscription.
### Possible Solutions
Everyone in the workshop I ran had different solutions. I've just jotted down some possible solutions.
1. - **Identify**: Use the network tooling under Azure Monitor to identify the issue, create an RDP rule for the VM and troubleshoot on the VM, view the IIS logs where the connection will be failing, etc.
   - **Fix**: NPM to monitor the connection to the Azure SQL instance, alerting via Log Analytics on the IIS logs, etc.
2. - **Identify**: Look up the health of the load balancer, investigate the service on the virtual machine, etc.
   - **Fix**: Alerting on the load balancer, an Azure Automation account to restart the host if there is no response on port 80, etc.
3. - **Identify**: Similar to 1, use the networking stack, etc.
   - **Fix**: NPM to monitor the connection to the Azure SQL instance, alerting via Log Analytics on the IIS logs, etc.
4. - **Identify**: BCDR
   - **Fix**: Implement Recovery Services (backups), Site Recovery (although this wouldn't work with the break), open up the ARM template and get the 'CustomScriptExtension', etc.
5. - **Identify**: View metrics on the virtual machine for CPU, look on the VM for the spike in CPU, etc.
   - **Fix**: Azure Monitor alerting, Azure Automation for reboot, etc.
6. - **Identify**: Load balancer health will be at 0%, web folder disappeared, etc.
   - **Fix**: Restore backup, redeploy application code, etc.
## Application Topology

## How To Deploy
### Host
- Change the variables in `./deploy.sh` on lines 3-9
- Execute `./deploy.sh`. The script will deploy the ARM template, set up the host files, copy the files to the storage account, set up the client files and then place these in a folder called '`./client`'
- The '`./client`' folder can then be zipped and shared with the participants of the workshop
### Client
- Distribute the `./client` folder (I zip it up and then share via a Storage Account with a https://aka.ms link)
- Change the variables `RESOURCE_GROUP` and `RESOURCE_PREFIX` in `./client/_variables.sh` to be unique to the participant. If two participants have the same strings, the deployment may fail as the resource names are not unique
- Execute `./client/deploy.sh` within the distributed `./client` folder
- Solution is deployed to Azure (~20-25 minutes)
- Step through PowerPoint and execute tasks in `./client/break/*.sh`
## How the scenario is constructed
The following is deployed to the host subscription:
- Azure SQL Server hosting the Music Store database
- Storage Account hosting the Azure CLI commands to break the environment within the client's subscription
Both resources are used by all participants.
The files within the directory `./templates/*` are modified to reflect the host deployment and dumped in either `./host/` or `./client/`. The files in `./host/*` are uploaded and stored on the host storage account. The client files are then amended to reflect the deployment and stored in `./client/*`. Review `./deploy.sh` for further detail.
### Host Files
The `./host` folder is only used at the time of executing `./deploy.sh`. Files are modified to reflect the host deployment and then uploaded from this folder to the storage account.
### Client Files
The client scripts located in `./client/break/*.sh` post-execution of `./deploy.sh` are ciphered using a `tr` command:
```bash
echo "This is a text string" | tr '[A-Za-z]' '[N-ZA-Mn-za-m]'
Guvf vf n grkg fgevat
```
```bash
echo "Guvf vf n grkg fgevat" | tr '[N-ZA-Mn-za-m]' '[A-Za-z]'
This is a text string
```
This prevents the prying eye from instantly seeing the Storage Account URI for the Azure CLI commands, downloading this file and then seeing how the break happens.
The `./client/break/*.sh` scripts effectively decipher themselves, `curl` a script from the host's Storage Account (`./host/break/*.sh`), execute this script and in some cases, pass this output to another script that is again curled (`./host/break/*a.sh`).
## Footnotes
The command `iisreset` may have to be ran on the/sent to the VM after some networking alterations to drop/reset any open connections. Example being tasks 1 and 3, after fixing the broken application, an `iisreset` was necessary (or you can wait a period of time).
Enjoy.
# Description
Ever wanted a little guardian angel to protect your chef deployed
servers from the bad guys? Like a bad-ass Jiminy Cricket on your
shoulder?
This package is to make deployment and testing of mod_security easier
with Chef. Right now it centers entirely around the OWASP Core Rule
Sets of mod_security rules. In future, it will allow you to manage/deploy
custom rule/rulesets of your own.
There are 2 main use cases right now:
## Install on a real production server
* Adjust the attributes to your liking and install the default
recipe.
## Find out what ModSecurity could do for your site in less than 15 minutes
* Setup a base chef recipe for a server.
* Set it to install the default recipe
* Create a cookbook to reverse proxy to your real server sorta like
this:
<pre>
mod_secure_proxy "my_site" do
server_name "www.mysite.com"
enable_https true # if you want to proxy https too
end
</pre>
* Set your local /etc/hosts (or equiv.) file to have that server's IP
look like your site
* Test to your heart's content.
* Look at /var/log/modsec.log and see what you could be blocking
* Change the "SecRuleEngine" attribute to "On" and test some more
# Requirements
## Cookbooks
This cookbook depends on apache2 and build-essential
## Platforms
### Supported
* Ubuntu (tested on 12.04 and 13.04)
* Debian (tested on 6.0.8 and 7.2.0)
* RedHat (untested)
* CentOS (tested on 6.5)
* Fedora (untested)
* FreeBSD (untested)
### Coming Soon
* Arch (anything else that apache2 supports)
# Attributes
Major ones will be documented soon. For right now check the
attributes file. A few suggestions
* Compile from source. I normally prefer not to do this, but some
core rules break if you don't
* crs->bundled determines if the bundled version of the crs should be used or
if the core rules are downloaded from the SpiderLabs GitHub releases.
* Base rules should generally be safe, the other groups much less
so. There are some rules with inconsistently named data files that are
fixed by this cookbook.
* custom->rules allows you to install your own custom rules
* custom->datafiles allows you to install your own data files to be used in
pmFromFile directives
Recipes
=======
default
-------
This currently installs the base configuration, the OWASP core rule set, and your own custom rules.
install_base
------------
install_owasp_core_rule_set
---------------------------
install_custom_rule_set
-----------------------
Reads custom->rules and custom->datafiles properties and creates .conf and .data
files based on their contents in mod_security/rules which is included by the default
mod_security.conf file
License and Authors
===================
Author:: Jason Rohwedder <jro@honeyapps.com>
Author:: Frank Breedijk <fbreedijk@schubergphilis.com>
Copyright:: 2011, HoneyApps, Inc
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# nextjs-mui-admin-template
React admin template. Built with NextJS, Material-UI, Typescript.
# PSR-7 Bridge for Icicle
Bridges the asynchronous HTTP messages and streams of Icicle's [HTTP component](https://github.com/icicleio/http) to [PSR-7](http://www.php-fig.org/psr/psr-7/) compatible interface. Use with caution and only in cases where blocking is acceptable, as reading or writing from a synchronous stream can block the entire process.
[](https://twitter.com/icicleio)
[](https://travis-ci.org/icicleio/psr7-bridge)
[](https://coveralls.io/r/icicleio/psr7-bridge)
[](http://semver.org)
[](LICENSE)
Currently under development.
# DisUnknown: Distilling Unknown Factors for Disentanglement Learning
See introduction on our [project page](https://stormraiser.github.io/disunknown/)
## Requirements
- PyTorch >= 1.8.0
- [`torch.linalg.eigh()`](https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html#torch.linalg.eigh) is used in a few places. If you use a older version please change them to [`torch.symeig()`](https://pytorch.org/docs/stable/generated/torch.symeig.html#torch.symeig)
- [PyYAML](https://pyyaml.org/), for loading configuration files
- Optional: [h5py](https://www.h5py.org/), for using the 3D Shapes dataset
- Optional: [Matplotlib](https://matplotlib.org/stable/index.html), for plotting sample distributions in code space
## Preparing Datasets
Dataset classes and default configurations are provided for the following datasets. See below for how to add new datasets, or you can open an issue and the author might consider adding it. Some datasets need to be prepared before using:
```
$ python disentangler.py prepare_data <dataset_name> --data_path </path/to/dataset>
```
If the dataset does not have a standard training/test split it will be split randomly. Use the `--test_portion <portion>` option to set the portion of test samples. Some dataset have additional options.
- MNIST, Fashion-MNIST, QMNIST, SVHN
- Dataset names are `mnist`, `fashion_mnist`, `qmnist`, `svhn`.
- `data_path` should be the same as those for the built-in dataset classes provided by torchvision.
- We use the full NIST digit dataset from QMNIST (`what = 'nist'`) and it needs to be split.
- For SVHN, set `include_extra: true` in `dataset_args` in the configuration file (this is the default) to include the extra training images in the training set.
- [3D Chairs](https://www.di.ens.fr/willow/research/seeing3Dchairs/)
- Dataset name is `chairs`.
- `data_path` should be the folder containing the `rendered_chairs` folder.
- Needs to be split.
- You may use `--compress` to down-sample all images and save them as a NumPy array of PNG-encoded `bytes`. Use `--downsample_size <size>` to set image size, default to 128. Note that this does not dictate the training-time image size, which is configured separately. Compressing the images speeds up training only slightly if a multi-processing dataloader is used but makes plotting significantly faster.
- Unrelated to this work, but the author wants to note that this dataset curiously contains 31 azimuth angles times two altitudes for a total of 62 images for each chair with image id `031` skipped, apparently because 32 was the intended number of azimuth angles but when they rendered the images those angles were generated using `numpy.linspace(0, 360, 32)`, ignoring the fact that 0 and 360 are the same angle, then removed the duplicated images `031` and `063` after they realized the mistake. Beware of off-by-one errors in linspace, especially if it is also circular!
- [3D shapes](https://github.com/deepmind/3d-shapes)
- Dataset name is `3dshapes`.
- `data_path` should be the folder containing `3dshapes.h5`.
- Needs to be split.
- You may use `--compress` to extract all images and then save them as a NumPy array of PNG-encoded `bytes`. This is mainly for space-saving: the original dataset, when extracted from HDFS, takes 5.9GB of memory. The re-compressed version takes 2.2GB. Extraction and compression takes about an hour.
- [dSprites](https://github.com/deepmind/dsprites-dataset)
- Dataset name is `dsprites`
- `data_path` should be the folder containing the `.npz` file.
- Needs to be split.
- This dataset is problematic. I found that orientation 0 and orientation 39 are the same, and presumably that was because similar to 3D Chairs something like `linspace(0, 360, 40)` was used to generate the angles. So yes, I'm telling you again, beware of off-by-one errors in linspace, especially if it is also circular! Anyway in my dataset class I discarded orientation 39, so there are only 39 different orientations and 3 * 6 * 39 * 32 * 32 = 718848 images.
- The bigger problem is that each of the three shapes (square, ellipse, heart) has a different symmetry. For hearts, each image uniquely determines an orientation angle; for ellipses, each image has two possible orientation angles; and for squares, each image has four possible orientation angles. They managed to make the dataset so that (apart from orientation 0 and 39 being the same) different orientations correspond to different images because 2 and 4 are not divisors of 39 (which makes me wonder if the off-by-one error was intentional) but the orientation is still conceptually wrong, since if you consider the orientation angles of ellipses modulo 180 or the orientation angles of squares modulo 90, then the orientation class IDs are not ordered in increasing order of orientation angles. Instead the orientation angles of ellipses go around twice in this range and the orientation angles of squares go around four times. To solve this problem, I included an option to set `relabel_orientation: true` in `dataset_args` in the configuration file (this is the default) which will cause the orientation of ellipses and squares to be re-labeled in the correct order. Specifically, for ellipses orientation *t* is re-labeled as (*t* * 2) % 39 and for squares orientation *t* is re-labeled as (*t* * 4) % 39. But still, this causes ellipses to rotate twice as slowly and squares to rotate four times as slowly when the orientation increases, which is still not ideal. When shapes with different symmetries are mixed there is simply no easy solution, and do not expect good results on this dataset if the unknown factor contains the orientation.
- `--compress` does the same thing as in 3D Shapes.
## Training
To train, use
```
$ python disentangler.py train --config_file </path/to/config/file> --save_path </path/to/save/folder>
```
The configuration file is in YAML. See the [commented example](config_example.yaml) for explanations. If `config_file` is omitted, it is expected that `save_path` already exists and contains `config.yaml`. Otherwise `save_path` will be created if it does not exist, and `config_file` will be copied into it. If `save_path` already contains a previous training run that has been halted, it will by default resume from the latest checkpoint. `--start_from <stage_name> [<iteration>]` can be used to choose another restarting point. `--start_from stage1` to restart from scratch. Specifying `--data_path` or `--device` will override those settings in the configuration file.
Although our goal is to deal with the cases where some factors are labeled and some factors are unknown, it feels wrong not to extrapolate to the cases where all factors are labeled or where all factors are unknown. We do allow these, but some parts of our method become unnecessary and are discarded accordingly. In particular, if all factors are unknown, then we just train a VAE in stage I and then a GAN with the same code space in stage II, so you can use this code just for training a GAN. We don't have the myriad of GAN tricks, though.
## Meaning of Visualization Images
During training, images generated for visualization will be saved in the subfolder `samples`. `test_images.jpg` contains images from the test set in even-numbered columns (starting from zero), with odd-numbered columns being empty. The generated images will contain corresponding reconstructed images in even-numbered columns, while each image in odd-numbered columns is generated by combining the unknown code from its left and the labeled code from its right (warp to the next row).
Example test images:

Example generated images:

## Adding a New Dataset
`__init__()` should accept four positional arguments `root`, `part`, `labeled_factors`, `transform` in that order, plus any additional keyword arguments that one expects to receive from `dataset_args` in the configuration file. `root` is the path to the dataset folder. `transform` is as usual. `part` can be `train`, `test` or `plot`, specifying which subset of the dataset to load. The plotting set is generally the same as the test set, but `part = 'plot'` is passed in so that a smaller plotting set can be used if the test set is too large.
`labeled_factors` is a list of factor names. `__getitem__()` should return a tuple `(image, labels)` where `image` is the image and `labels` is a one-dimensional PyTorch tensor of type `torch.int64`, containing the labels for that image in the order listed in `labeled_factors`. `labels` should always be a one-dimensional tensor even if there is only one labeled factor, not a Python `int` or a zero-dimensional tensor. If `labeled_factors` is empty then `__getitem__()` should return `image` only.
In addition, metadata about the factors should be available in the following properties: `nclass` should be a list of `int`s containing the number of classes of each factor, and `class_freq` should be a list of PyTorch tensors, each being one-dimensional, containing the distribution of classes of each factor in (the current split of) the dataset.
If any preparation is required, implement a static method `prepare_data(args)` where `args` is a return value of `argparse.ArgumentParser.parse_args()`, containing properties `data_path` and `test_portion` by default. If additional command-line arguments are needed, implement a static method `add_prepare_args(parser)` where `parser.add_argument()` can be called.
Finally add it to the dictionary of recognized datasets in `data/__init__.py`.
Default configuration should also be created as `default_config/datasets/<dataset_name>.yaml`. It should at a minimum contain `image_size`, `image_channels` and `factors`. `factors` has the same syntax as `labeled_factors` as explained in the example training configuration. It should contain a complete list of all factors. In particular, if the dataset does not include a complete set of labels, there should be a factor called `unknown` which will become the default unknown factor if `labeled_factors` is not set in the training configuration.
Any additional settings in the default configuration will override global defaults in `default_config/default_config.yaml`.
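A hypothetical default configuration following these rules might look like this (the dataset name, sizes, and factor names are made up, and `factors` is assumed to be a simple list of factor names as in the example training configuration):

```yaml
# default_config/datasets/mydataset.yaml (illustrative only)
image_size: 64
image_channels: 3
factors:
  - digit
  - unknown   # catch-all factor when the dataset is not fully labeled
```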
## Citing This Work (BibTeX)
```
@inproceedings{xiang2021disunknown,
title={DisUnknown: Distilling Unknown Factors for Disentanglement Learning},
author={Xiang, Sitao and Gu, Yuming and Xiang, Pengda and Chai, Menglei and Li, Hao and Zhao, Yajie and He, Mingming},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={14810--14819},
year={2021}
}
``` | 112.125 | 1,653 | 0.774247 | eng_Latn | 0.998326 |
4f3f0c09307e124bdec2340bb28f2cf9c54b1dfe | 1,333 | md | Markdown | README.md | abhijeetMishra101/UITableView-AutoScroll-Extension | d4f168f21549c2ebe4ee72efd5595301a7db8f5f | [
"MIT"
] | 1 | 2020-04-28T15:43:17.000Z | 2020-04-28T15:43:17.000Z | README.md | abhijeetMishra101/UITableView-AutoScroll-Extension | d4f168f21549c2ebe4ee72efd5595301a7db8f5f | [
"MIT"
] | null | null | null | README.md | abhijeetMishra101/UITableView-AutoScroll-Extension | d4f168f21549c2ebe4ee72efd5595301a7db8f5f | [
"MIT"
] | null | null | null | # UITableView+AutoScroll
A UITableView extension that automatically scrolls the table view to fully show a cell the user tapped, in case the selected cell is partly below or above the UITableView's visible area.
## Motivation
Had trouble in scenarios where the UITableViewCell a user selects is only partly visible? Say you have logic to check/uncheck or expand/collapse a cell upon click, but the cell the user tapped sits partly below or above the visible area.
Wouldn't it be great to automatically scroll the selected cell into the visible area, to show the end user the complete info about the cell they are interested in?
After all, the user is going to scroll such a cell into view anyway to see its entire content, so shouldn't auto-scrolling it into the visible area be the ideal UX?

:unamused::tired_face::expressionless:
## Solution
Here is a piece of code (an extension for UITableView) which makes it happen. Just call the UITableView's function `makeSelectedCellVisible()` and pass the selected cell's index path as a parameter. And that's it! Call it wherever your UITableView's selection logic lives (the UITableViewDelegate's `didSelectRow` function, an IBAction, or any custom target function you have).

That's all to make it work! :trollface: :see_no_evil:
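The GIFs above show the extension in use; as a rough sketch (not the repo's actual source; the method body and scroll position are assumptions), such an extension could look like:

```swift
import UIKit

extension UITableView {
    /// Scrolls a partly visible selected cell fully into the visible area.
    func makeSelectedCellVisible(indexPath: IndexPath) {
        let cellRect = rectForRow(at: indexPath)
        // The table view's currently visible area, accounting for insets
        let visibleRect = CGRect(origin: contentOffset, size: bounds.size)
            .inset(by: adjustedContentInset)
        guard !visibleRect.contains(cellRect) else { return }
        // `.none` scrolls the minimum distance needed (top or bottom)
        scrollToRow(at: indexPath, at: .none, animated: true)
    }
}
```

You would typically call this from `tableView(_:didSelectRowAt:)` right after toggling the cell's state.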
| 70.157895 | 377 | 0.795199 | eng_Latn | 0.997377 |
4f3f7708e340540d67b2335e9363dffb121f139a | 312 | md | Markdown | README.md | vinojv96/covid19info_clock | e7b5c5200fe2a90e4f92fc3e1c5ca6dea499cb45 | [
"MIT"
] | 2 | 2021-02-09T12:36:04.000Z | 2022-02-06T10:49:47.000Z | README.md | vinojv96/covid19info_clock | e7b5c5200fe2a90e4f92fc3e1c5ca6dea499cb45 | [
"MIT"
] | null | null | null | README.md | vinojv96/covid19info_clock | e7b5c5200fe2a90e4f92fc3e1c5ca6dea499cb45 | [
"MIT"
] | null | null | null | # covid19info_clock
An LED matrix clock with a detailed display of COVID-19 case counts.

# Watch Video
[](http://www.youtube.com/watch?v=NpA5-jC9PUM)
| 52 | 105 | 0.778846 | kor_Hang | 0.255894 |
4f40659c6b213dba91d0c157382fbf520e438ece | 45 | md | Markdown | backend/README.md | cyxceven/stardew-valley-extension | 397546ce2823d297c6d612a1170254c419cd90aa | [
"Apache-2.0"
] | 24 | 2019-04-13T19:27:40.000Z | 2021-10-15T23:09:26.000Z | backend/README.md | cyxceven/stardew-valley-extension | 397546ce2823d297c6d612a1170254c419cd90aa | [
"Apache-2.0"
] | 10 | 2019-04-14T11:37:55.000Z | 2022-02-26T10:20:51.000Z | backend/README.md | cyxceven/stardew-valley-extension | 397546ce2823d297c6d612a1170254c419cd90aa | [
"Apache-2.0"
] | 7 | 2019-04-11T15:45:21.000Z | 2021-07-01T19:57:17.000Z | # Stardew Weather Twitch Extension Backend
| 11.25 | 42 | 0.8 | yue_Hant | 0.346015 |
4f409efdfb4e719a5ba162f6dcc759149af25a71 | 119 | md | Markdown | README.md | typlog/shisa | 0dfbc039950b9039fb93054deb52bb2b10e3a53a | [
"MIT"
] | null | null | null | README.md | typlog/shisa | 0dfbc039950b9039fb93054deb52bb2b10e3a53a | [
"MIT"
] | 1 | 2021-05-13T01:30:31.000Z | 2021-05-13T01:30:31.000Z | README.md | jessuni/shisa | 0dfbc039950b9039fb93054deb52bb2b10e3a53a | [
"MIT"
] | 1 | 2021-01-19T10:31:41.000Z | 2021-01-19T10:31:41.000Z | # shisa
Build your own podcast player with desired structure and styles.
*This is a WIP repo, please do not use it.*
| 19.833333 | 64 | 0.747899 | eng_Latn | 0.999961 |
4f40a04407039aca7a202845e299aee2b1ac0583 | 5,192 | md | Markdown | intl.en-US/Instances/Instance purchasing options/Reserved Instances/Manage Reserved Instances.md | listenwong/ecs | 5bf839cd50ab46caed67a39a063ce290d7c5a8b8 | [
"MIT"
] | null | null | null | intl.en-US/Instances/Instance purchasing options/Reserved Instances/Manage Reserved Instances.md | listenwong/ecs | 5bf839cd50ab46caed67a39a063ce290d7c5a8b8 | [
"MIT"
] | null | null | null | intl.en-US/Instances/Instance purchasing options/Reserved Instances/Manage Reserved Instances.md | listenwong/ecs | 5bf839cd50ab46caed67a39a063ce290d7c5a8b8 | [
"MIT"
] | null | null | null | # Manage Reserved Instances {#concept_kc4_cnr_dgb .concept}
This topic describes how to split, merge, and modify the scope of your Reserved Instances \(RIs\). Such actions allow you to benefit from billing discounts of more Pay-As-You-Go instance types.
## Before you begin: {#section_b5n_pnr_dgb .section}
To make this topic easier to understand, RIs to be split, merged, or modified are hereinafter referred to as original RIs, while split, merged, or modified RIs are hereinafter referred to as target RIs.
Before you split, merge, or modify RIs, make sure that the following conditions are met:
- You have successfully purchased original RIs and they are within a valid term.
- There is no ongoing splitting, merging, or modification request.
- The RI to be modified only requires its size to be adjusted. The instance type family of an RI cannot be modified.
After you submit a splitting, merging, or modification request:
- The original RI changes to changing status, which will be automatically refreshed after the request is processed.
- A request in progress cannot be changed or canceled. If you want to roll back your changes, you must submit another request.
After an RI is split, merged, or modified:
- The target RI becomes valid on the hour. If it matches one or more new Pay-As-You-Go instances, the billing discount is applied within the same hour.
- The original RI becomes invalid on the hour, and its price is updated to USD 0.
- If the target RI is a zonal RI, the type of resource reservation is also updated automatically.
For example, you successfully split an ecs.g5.2xlarge zonal RI \(RI1\) into two ecs.g5.xlarge zonal RIs \(RI2 and RI3\) at 2019-02-26 13:45:00. In this case, RI1 becomes invalid at 2019-02-26 13:00:00, while RI2 and RI3 take effect also at 2019-02-26 13:00:00. Starting 2019-02-26 13:00:00, the reserved instance type eligible for billing discount is also changed from ecs.g5.2xlarge to ecs.g5.xlarge. If RI2 and RI3 match instances immediately after they take effect, the hourly bill discount for ecs.g5.xlarge instances is also applied starting 2019-02-26 13:00:00.
If the original RI fails to be split, merged, or modified, it will remain valid.
## Split an RI {#section_zcd_vnr_dgb .section}
You can split an RI into multiple RIs of less computing power. The smaller RIs can then match applicable Pay-As-You-Go instances to better distribute your service traffic.
1. Log on to the [ECS console](https://partners-intl.console.aliyun.com/#/ecs).
2. In the left-side navigation pane, click **Reserved Instances**.
3. On the Reserved Instances page, click **Split** in the **Actions** column of the original RI.
4. On the Split Reserved Instance page, set the name, instance type, and instance quantity of the target RIs.
**Note:** The total computing power of the target RIs must be equal to that of the original RI.
5. Click **OK**.
## Merge RIs {#section_xsk_fpr_dgb .section}
If traffic to your instances increases, you can merge multiple RIs into one RI that has greater computing power to match larger Pay-As-You-Go instances.
**Note:** Before merging RIs, you must verify that the following conditions are met:
- The expiration date of the original RIs must be the same.
- The original RIs have been purchased using the same currency.
- If the original RIs are regional RIs, they must be in the same region. If the original RIs are zonal RIs, they must be in the same zone.
1. Log on to the [ECS console](https://partners-intl.console.aliyun.com/#/ecs).
2. In the left-side navigation pane, click **Reserved Instances**.
3. On the Reserved Instances page, click **Merge** in the **Actions** column of the original RI.
4. On the Merge Reserved Instances page, select the original RIs, and then set the name, instance type, and instance quantity of the target RI.
**Note:** The computing power of the target RI must be equal to that of all selected original RIs, and the target RI must be of an existing instance type. For example, two ecs.g5.2xlarge RIs can be merged into one ecs.g5.4xlarge RI, but one ecs.g5.xlarge RI and two ecs.g5.2xlarge RIs cannot be merged into one ecs.g5.5xlarge RI.
5. Click **OK**.
## Modify the scope of an RI {#section_rrd_ppr_dgb .section}
If your service requirements change, you can modify the scope of your RIs. Specifically, you can:
- Modify a regional RI to a zonal RI.
- Modify a zonal RI to a regional RI.
- Modify the zone of an RI in the same region.
You cannot modify the scope of an RI across regions. For example, if you have a zonal RI in zone B of China East 1 \(Hangzhou\), you can modify it as a zonal RI in another zone of China East 1 \(Hangzhou\), or as a regional RI in China East 1 \(Hangzhou\). However, you cannot modify it as a regional or zonal RI in another region.
1. Log on to the [ECS console](https://partners-intl.console.aliyun.com/#/ecs).
2. In the left-side navigation pane, click **Reserved Instances**.
3. On the Reserved Instances page, click **Modify** in the **Actions** column of the original RI.
4. On the Modify Reserved Instance Page, modify the parameters as needed.
5. Click **OK**.
| 66.564103 | 567 | 0.754815 | eng_Latn | 0.998427 |
4f41250c0647570fdbfc58ad2f1ca45c75491877 | 38 | md | Markdown | README.md | lucamerolla/lucamerolla.com | ade77614f115c54ca9e6623465e2bc9033fdc5c6 | [
"MIT"
] | 1 | 2020-08-08T16:35:27.000Z | 2020-08-08T16:35:27.000Z | README.md | lucamerolla/lucamerolla.com | ade77614f115c54ca9e6623465e2bc9033fdc5c6 | [
"MIT"
] | null | null | null | README.md | lucamerolla/lucamerolla.com | ade77614f115c54ca9e6623465e2bc9033fdc5c6 | [
"MIT"
] | null | null | null | # lucamerolla.com
My personal website
| 12.666667 | 19 | 0.815789 | eng_Latn | 0.455762 |
4f4161773e535ac7a349d555945f17b300e549c0 | 622 | md | Markdown | README.md | SortByte/Network-Manager | 36d8daf58f2e9e222d50d01ce48d314757849db6 | [
"MIT"
] | 47 | 2015-10-16T22:36:34.000Z | 2021-12-30T17:51:51.000Z | README.md | SortByte/Network-Manager | 36d8daf58f2e9e222d50d01ce48d314757849db6 | [
"MIT"
] | 8 | 2016-09-18T17:46:31.000Z | 2020-11-10T11:01:11.000Z | README.md | SortByte/Network-Manager | 36d8daf58f2e9e222d50d01ce48d314757849db6 | [
"MIT"
] | 14 | 2015-09-20T11:54:20.000Z | 2022-03-19T05:09:29.000Z | # Network-Manager
Portable Windows network monitoring and configuration tool
Made with Microsoft Visual Studio 2015
The solution has 4 projects in it:
- Launcher - C++ appliction used to check dependencies (.NET Framework) before running the main .NET program
- WinLib - C# library containing various network, forms, WinAPI, IO, etc functions
- Network_Manager - C# main program
- UpdateVersion - C# application used when building releases to updated their assembly info and to generate the XML version file
For other info check the [main website](http://www.sortbyte.com/software-programs/networking/network-manager)
| 47.846154 | 128 | 0.79582 | eng_Latn | 0.975371 |
4f425ef3395f2f0b29ed8fd5d287975bfdee0271 | 2,662 | md | Markdown | content/post/time-for-a-new-adventure.md | eorstrom/eorstrom_hugo_site | 6c77a711160b20f36b9e651dce4ecc811e6e998b | [
"Apache-2.0"
] | null | null | null | content/post/time-for-a-new-adventure.md | eorstrom/eorstrom_hugo_site | 6c77a711160b20f36b9e651dce4ecc811e6e998b | [
"Apache-2.0"
] | null | null | null | content/post/time-for-a-new-adventure.md | eorstrom/eorstrom_hugo_site | 6c77a711160b20f36b9e651dce4ecc811e6e998b | [
"Apache-2.0"
] | null | null | null | ---
title: "Time for A New Adventure"
description:
date: "2020-04-06T21:51:14-05:00"
publishDate: "2020-05-05T13:00:00-05:00"
draft: false
---
This blog post has been a long time coming! I have been meaning to revamp my website by giving it a new design and ability to host blog posts for over a year now. After Dave and I stopped producing the Junior Developer Toolbox podcast, I knew I still had more to say and wanted to continue to help others along their journey. But there are also other topics I have thoughts and ideas to get out into the world - general career, travel, and dating being just a few examples. The quarantine due to COVID-19 has provided me the perfect opportunity to dedicate time to update my website.
Keep in mind, the site is a work in progress, and I'm not a natural at design. There are several things I need to fix and enhance on the website, like adding the ability to add comments on blog posts and pagination for blog posts as well, but I figured it was more important to be able to deploy the site successfully before I spent too much time tweaking the design and content. That’s the whole spirit of Agile, right? Get something out there in front of people that they can use/view and provide feedback on so you can iterate and improve upon the product. I tend to strive for perfection, but as Winston Churchill said “perfection is the enemy of progress.” If we waited for things to be perfect before launching them, they’d never get launched.
I plan on my next blog post being on how I successfully converted my site from what it was to use <a href="https://gohugo.io/" target="_blank">Hugo</a>, because it ended up being an unnecessarily painful and lengthy process. But it was so rewarding once I finally figured it out and got it to work. 🎉
I am still trying to find what my niche should be - any popular blogger, vlogger, or successful business person will tell you that you find more success by having a narrow focus, but no one said a little variety was out of the question, right? Having a platform to get my ideas onto the interwebs and interact with others is my current goal, and we’ll see what happens from there.
I may end up changing my mind down the road and switching to WordPress or Squarespace for ease of use, depending on how complex the needs of my site get, but for now I've found that Hugo is a great, simple alternative (once you get used to it!) with a theme that's easy to tweak to fit my needs.
So bear with me on the beginning of this new journey. I hope to work up to one post a week on a variety of topics, whatever strikes inspiration for me that week.
Until next time, be safe and be well!
| 121 | 749 | 0.777611 | eng_Latn | 0.999937 |
4f43af2bddbc07b49f6b245fdc721bd355bef1fd | 1,359 | md | Markdown | docs/DeltaTableOperations.md | japila-books/delta-lake-internals | 58c82cb8189e954ce8a2d85a535c0d5c4fbad5d9 | [
"Apache-2.0"
] | 100 | 2020-01-02T20:11:12.000Z | 2022-03-28T15:04:39.000Z | docs/DeltaTableOperations.md | japila-books/delta-lake-internals | 58c82cb8189e954ce8a2d85a535c0d5c4fbad5d9 | [
"Apache-2.0"
] | 1 | 2022-03-16T10:53:04.000Z | 2022-03-18T08:00:58.000Z | docs/DeltaTableOperations.md | japila-books/delta-lake-internals | 58c82cb8189e954ce8a2d85a535c0d5c4fbad5d9 | [
"Apache-2.0"
] | 24 | 2020-01-24T22:43:37.000Z | 2022-03-13T14:55:58.000Z | # DeltaTableOperations — Delta DML Operations
`DeltaTableOperations` is an abstraction of [management services](#implementations) for executing [delete](#executeDelete), [generate](#executeGenerate), [history](#executeHistory), [update](#executeUpdate), and [vacuum](#executeVacuum) operations (_commands_).
<span id="self">
`DeltaTableOperations` is assumed to be associated with a [DeltaTable](DeltaTable.md).
Method | Command | DeltaTable Operator
---------|----------|---------
<span id="executeDelete"> executeDelete | [DeltaDelete](commands/delete/DeltaDelete.md) | [DeltaTable.delete](DeltaTable.md#delete)
<span id="executeGenerate"> executeGenerate | [DeltaGenerateCommand](commands/generate/DeltaGenerateCommand.md) | [DeltaTable.generate](DeltaTable.md#generate)
<span id="executeHistory"> executeHistory | [DeltaHistoryManager.getHistory](DeltaHistoryManager.md#getHistory) | [DeltaTable.history](DeltaTable.md#history)
<span id="executeUpdate"> executeUpdate | `UpdateTable` ([Spark SQL]({{ book.spark_sql }}/logical-operators/UpdateTable)) | [DeltaTable.update](DeltaTable.md#update) and [DeltaTable.updateExpr](DeltaTable.md#updateExpr)
<span id="executeVacuum"> executeVacuum | [VacuumCommand.gc](commands/vacuum/VacuumCommand.md#gc) | [DeltaTable.vacuum](DeltaTable.md#vacuum)
## Implementations
* [DeltaTable](DeltaTable.md)
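For orientation, here is how the public `DeltaTable` operators backed by these operations are typically invoked (a sketch assuming an active `SparkSession` named `spark` with Delta Lake on the classpath; the path and predicates are made up):

```scala
import io.delta.tables.DeltaTable

val dt = DeltaTable.forPath(spark, "/tmp/delta/events")

dt.delete("date < '2020-01-01'")   // executeDelete -> DeltaDelete
dt.history(10).show()              // executeHistory -> DeltaHistoryManager.getHistory
dt.updateExpr(Map("flag" -> "1"))  // executeUpdate -> UpdateTable
dt.vacuum(168)                     // executeVacuum -> VacuumCommand.gc
```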
| 71.526316 | 261 | 0.768948 | yue_Hant | 0.489897 |
4f44031ccba2012845a6bb1076d56d154a2ac0a8 | 920 | md | Markdown | projects/complete-renovation-copy.md | sherrycheungw/gridsome-forestry-starter | 8b49eb183e456d236bbd90a9df879b6f2d61915f | [
"MIT"
] | null | null | null | projects/complete-renovation-copy.md | sherrycheungw/gridsome-forestry-starter | 8b49eb183e456d236bbd90a9df879b6f2d61915f | [
"MIT"
] | null | null | null | projects/complete-renovation-copy.md | sherrycheungw/gridsome-forestry-starter | 8b49eb183e456d236bbd90a9df879b6f2d61915f | [
"MIT"
] | null | null | null | ---
thumbnail: "/uploads/img_1037.JPG"
title: Complete Renovation
date: 2019-02-06
categories:
- Renovation
project_bg_color: ''
project_fg_color: "#575757"
---
Complete renovation is a full service that includes space planning, mood board, FF&E, and/or home styling. Our experienced team manages the entire construction from start to finish, ensuring a seamless and stress-free project for you that meets your budget.
Our full service design projects involve the following:
* Space Plans and Concept Boards
* All materials and furnishings selections (tiles, flooring, cabinetry, sanitary ware, plumbing, lighting, sofas, rugs, curtains, etc)
* Sketches, 2D drawings, 3D Renderings
* Construction documentation or drawings (electrical and lighting plans, elevations, etc)
* Site visits, client meetings and budget reviews
* Procurement

 | 38.333333 | 257 | 0.784783 | eng_Latn | 0.963939 |
4f446eda81dd65a4f22c535bc61daa9621970d71 | 4,425 | md | Markdown | _posts/2020-03-10-DN-Useful-Commands.md | LD8/ld8.github.com | af21aa00be0d7ede473837a401f301c411e90bfb | [
"MIT"
] | null | null | null | _posts/2020-03-10-DN-Useful-Commands.md | LD8/ld8.github.com | af21aa00be0d7ede473837a401f301c411e90bfb | [
"MIT"
] | 2 | 2021-05-20T12:41:02.000Z | 2021-09-28T00:31:57.000Z | _posts/2020-03-10-DN-Useful-Commands.md | LD8/ld8.github.com | af21aa00be0d7ede473837a401f301c411e90bfb | [
"MIT"
] | null | null | null | ---
layout: post
title: "Daily Notes: Useful Commands"
categories: Daily Notes
---
### Commands are difficult to remember and easy to forget, so here's a repository of common commands
* UPDATE pip packages
```bash
$ pip install --upgrade django-mailer==2.0.1
# pip <command> <option> <package-name>==<version.number>
```
* COPY file from one dir to another
```bash
$ cp -i path/to/file.py path/to/dir/
# -i is for interactive; you will be asked before overwriting any files
```
* CREATE a directory with this date:
```bash
# create a directory with this date:
$ mkdir "$(date '+%Y-%m-%d')"
# dir name: 2020-03-10
$ mkdir "$(date '+%Y-%m-%d--%H-%M-%S')"
# dir name: 2020-03-10--11-03-01
$ mkdir "$(TZ=UTC-8 date '+%Y-%m-%d--%H-%M-%S')"
# dir name: 2020-03-10--13-10-59 (Beijing time)
$ mkdir "$(TZ=UTC-0 date '+%Y-%m-%d--%H-%M-%S')"
# dir name: 2020-03-10--05-11-05 (Greenwich time)
```
* RSYNC a remote file to local through pipe
```bash
# rsync a remote file to local
$ rsync -avz -e ssh user@ip_add:~/remote_dir/file local_dir/
# '-e' is essential for creating an ssh connection
```
* check mac environment variable
```bash
$ printenv
```
check a specific variable
```bash
$ echo $variable_name
```
If setting more permanent environment variables:
>For system-wide operations, it should be in `/etc/profile`
>For user based operations, it should be in `~/.bash_profile`
>For non-login interactive shells, it should be in `~/.bashrc`
>For better understanding, you better check out this: [Unix Introduction — Shell](https://medium.com/@youngstone89/unix-introduction-shell-980212852897)
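For example, a user-level variable can be persisted like this (assuming a bash login shell that reads `~/.bash_profile`):

```bash
# Append an export line to the user profile
echo 'export MY_VAR="hello"' >> ~/.bash_profile
# Re-read the file so the current shell picks it up (POSIX-compatible form)
. ~/.bash_profile
echo $MY_VAR
```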
* Look for files: `whereis`
```bash
$ whereis python3
/usr/bin/python3
```
* locale
```bash
# check your locale setting
$ locale
# update it
$ sudo update-locale LANG="en_US.UTF-8" LANGUAGE="en_US.UTF-8"
```
* make nested dir
```bash
$ mkdir -p _backups/_archives
```
---
## vim basic
```bash
Type:
i (insert, start editing)
:w filename (save the buffer under the specified filename)
:wq (type "wq" to save and quit vim)
:q! (type q! to force quit vim without saving)
```
---
## [The Many Uses of Rsync](https://mediatemple.net/blog/tips/many-uses-rsync/)
>It’s faster than scp (Secure Copy) because rsync uses remote-update protocol which allows to transfer just the differences between two sets of files. First time, it copies the whole content of a file or a directory from source to destination but from next time, it copies only the changed blocks and bytes to the destination.
```bash
# basic format
# $ rsync <options> <source> <destination>
$ rsync -av path/to/directory1/ /path/to/directory2/
$ rsync -av path/to/directory1 /path/to/directory2/
# there's a difference: copy the files in that folder or the folder itself
```
### **The flags/options**:
* -a: archive mode; allows copying files recursively and also preserves symbolic links, file permissions, user & group ownerships, and timestamps
* -v: As with many other commands, this option asks for verbose output. This is especially useful when copying large amounts of data.
* -r : copies data recursively (but doesn't preserve timestamps and permissions while transferring data)
* -z : compress file data
* -h : human-readable, output numbers in a human-readable format
* –delete: This flag isn’t used here but it is a common feature of rsync. This option deletes any files or folders in the destination that aren’t at the source. Use with extreme caution!
* -h or –help: This prints a help page that has useful information about using rsync.
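Because `--delete` is destructive, it's worth previewing it with `--dry-run` (`-n`) first. Here is a safe, self-contained demo (the paths are throwaway examples):

```bash
# Set up a toy source and destination
mkdir -p /tmp/rsync-demo/src /tmp/rsync-demo/dest
touch /tmp/rsync-demo/src/keep.txt /tmp/rsync-demo/dest/stale.txt
# Preview: stale.txt would be deleted from dest because it's absent from src
rsync -avh --delete --dry-run /tmp/rsync-demo/src/ /tmp/rsync-demo/dest/
# Nothing was actually removed; drop --dry-run to apply the deletion
ls /tmp/rsync-demo/dest
```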
### **zip, copy remotely, remove source files**
```bash
# use tar command to archive the files, django-archive is used to backup databases in my project
$ tar -zcvf backup1.tar.gz path/to/files/
# or create with a timestamp
$ tar -zcvf "$(date '+%y-%m-%d').tar.gz" path/to/files/
$ rsync -avz --remove-source-files path/to/backup1 user@ip_add:/path/to/backups/
# with a timestamp
$ rsync -avz --remove-source-files "$(date '+%y-%m-%d').tar.gz" user@ip_add:/path/to/backups/
```
* ### **More rsync commands: [checkout this post](https://www.tecmint.com/rsync-local-remote-file-synchronization-commands/)**
* ### **More rsync flags: [checkout this post](https://www.linuxtechi.com/rsync-command-examples-linux/)** <-- This post documented better with clearer appearance
| 38.478261 | 326 | 0.680678 | eng_Latn | 0.945175 |
4f455ef1ee51e26532c4f7a310165d9fb899e1e6 | 1,233 | md | Markdown | README.md | cserteGT3/RANSACVisualizer.jl | 9845fc1603cd5d3087ebb853a0c90ce1f44ee29a | [
"MIT"
] | null | null | null | README.md | cserteGT3/RANSACVisualizer.jl | 9845fc1603cd5d3087ebb853a0c90ce1f44ee29a | [
"MIT"
] | 3 | 2020-05-29T09:27:06.000Z | 2021-11-24T17:38:12.000Z | README.md | cserteGT3/RANSACVisualizer.jl | 9845fc1603cd5d3087ebb853a0c90ce1f44ee29a | [
"MIT"
] | null | null | null | # RANSACVisualizer.jl
<!--




 -->
[](https://cserteGT3.github.io/RANSACVisualizer.jl/stable)
[](https://cserteGT3.github.io/RANSACVisualizer.jl/dev)
This package provides a couple of functions to visualize the result of the RANSAC algorithm.
It is built on top of [Makie.jl](https://github.com/JuliaPlots/Makie.jl) and [RANSAC.jl](https://github.com/cserteGT3/RANSAC.jl).
Check out the [docs](https://csertegt3.github.io/RANSACVisualizer.jl/dev/) for more.
If it's hard to rotate the scene because the mouse is too sensitive, use this command (from [issue comment](https://github.com/JuliaPlots/Makie.jl/issues/33#issuecomment-564329940)):
```julia
cameracontrols(scene).rotationspeed[] = 0.01f0
```
| 61.65 | 184 | 0.761557 | yue_Hant | 0.551784 |
4f45ac4db115355ad1c33281f16f7ee8404c4a75 | 1,816 | md | Markdown | services/mobileaccess/nl/ko/advanced-topics-oauthsdk.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | null | null | null | services/mobileaccess/nl/ko/advanced-topics-oauthsdk.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | null | null | null | services/mobileaccess/nl/ko/advanced-topics-oauthsdk.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | 1 | 2020-09-24T04:33:34.000Z | 2020-09-24T04:33:34.000Z | ---
copyright:
years: 2015, 2016, 2017
lastupdated: "2017-04-06"
---
{:codeblock:.codeblock}
**Important: The {{site.data.keyword.amafull}} service has been replaced by the {{site.data.keyword.appid_full}} service.**
# Communication between your backend application and services
{: #backend-comm}
In some scenarios, your backend application running on {{site.data.keyword.Bluemix}} might need to send a request to another backend service that is protected by the {{site.data.keyword.amafull}} service, for example the {{site.data.keyword.cloudant}} service. In such cases, you must add an OAuth token to the request.
Use the `bms-mca-oauth-sdk` npmjs module to obtain the OAuth token and insert it into the request.
## Installing the bms-mca-oauth-sdk module
{: #sdk}
From the command line, open your Node.js application directory and run the following command:
```Bash
npm install -save bms-mca-oauth-sdk
```
{: codeblock}
## Using the bms-mca-oauth-sdk module
{: #using-sdk}
The following code demonstrates how to use the `bms-mca-oauth-sdk` module.
``` JavaScript
var oauthSDK = require('bms-mca-oauth-sdk');
var options = {
// You can cache tokens to avoid extra roundtrips on every request
// This property defines the number of tokens to be cached
cacheSize: 100,
// The following properties are retrieved automatically when your Node.js
// runs on {{site.data.keyword.Bluemix_notm}} and bound to an instance of {{site.data.keyword.amashort}} Service.
// Alternatively, you can get these property values by clicking Show Credentials
// on the {{site.data.keyword.amashort}} Service tile in your {{site.data.keyword.Bluemix_notm}} application dashboard
appId: "tenantID", // a.k.a. Bluemix applicationGUID
clientId: "clientId",
secret: "secret",
serverUrl: "serverUrl"
};
oauthSDK.getAuthorizationHeader(options).then(function(authHeader){
// In the request that you want to send to the protected resource,
// add the authHeader value.
request.headers.Authorization = authHeader;
// Send request
});
```
{: codeblock}
| 25.942857 | 209 | 0.687775 | kor_Hang | 0.981147 |
4f468592aa84282854aaaee0a68c65d3fe433f95 | 432 | md | Markdown | docs/en/changes/changes.md | mestarshine/skywalking | 485f6870ddd3774e818352f9d0b1e13073a0817d | [
"Apache-2.0"
] | null | null | null | docs/en/changes/changes.md | mestarshine/skywalking | 485f6870ddd3774e818352f9d0b1e13073a0817d | [
"Apache-2.0"
] | null | null | null | docs/en/changes/changes.md | mestarshine/skywalking | 485f6870ddd3774e818352f9d0b1e13073a0817d | [
"Apache-2.0"
] | null | null | null | ## 9.2.0
#### Project
#### OAP Server
* Add more entities for Zipkin to improve performance.
* ElasticSearch: scroll id should be updated when scrolling as it may change.
* Mesh: fix only last rule works when multiple rules are defined in metadata-service-mapping.yaml.
#### Documentation
* Fix invalid links in release docs
All issues and pull requests are [here](https://github.com/apache/skywalking/milestone/136?closed=1)
| 27 | 100 | 0.752315 | eng_Latn | 0.981195 |
4f46bde8ffafd7835b50e83011f32b8b52e16bf9 | 2,042 | md | Markdown | packages/adapter/README.md | cavanmflynn/solsnap | d5e1de273ab3dd9be7de176b966784eb7232be72 | [
"Apache-2.0",
"MIT"
] | null | null | null | packages/adapter/README.md | cavanmflynn/solsnap | d5e1de273ab3dd9be7de176b966784eb7232be72 | [
"Apache-2.0",
"MIT"
] | null | null | null | packages/adapter/README.md | cavanmflynn/solsnap | d5e1de273ab3dd9be7de176b966784eb7232be72 | [
"Apache-2.0",
"MIT"
] | 1 | 2020-12-29T23:27:32.000Z | 2020-12-29T23:27:32.000Z | # SolSnap adapter

[](https://opensource.org/licenses/MIT)
[](https://opensource.org/licenses/Apache-2.0)


The SolSnap adapter is used to install the Solana snap and to expose an API toward the snap.
For more details on the Solana snap itself, see the [snap repo](https://github.com/cavanmflynn/solsnap) or read the full [Solana snap documentation](https://github.com/cavanmflynn/solsnap/wiki).
## Usage
The adapter exposes only one function, used to install the Solana snap.
```typescript
async function enableSolanaSnap(
config: Partial<SnapConfig>,
pluginOrigin?: string
): Promise<MetamaskSolanaSnap>
```
On snap installation, it is possible to send a full or partial configuration.
If you only provide the `network` property, a predefined configuration for the specified network will be used.
Other properties are optional but will override default values if provided.
Below you can see the structure of the config object:
```typescript
export interface SnapConfig {
derivationPath: string;
token: string;
network: SolanaNetwork; // 's' || 't'
rpcUrl: string;
unit?: UnitConfiguration;
}
export interface UnitConfiguration {
symbol: string;
decimals: number;
image?: string;
customViewUrl?: string;
}
```
After snap installation, this function returns a `MetamaskSolanaSnap` object that can be used to retrieve the snap API.
An example of initializing the Solana snap and invoking the snap API is shown below.
```typescript
// Install snap and fetch API
const snap = await enableSolanaSnap({ network: 't' });
const api = await snap.getSolanaSnapApi();
// Invoke API
const address = await api.getAddress();
console.log(`Snap installed, account generated with address: ${address}`);
```
| 34.610169 | 181 | 0.760529 | eng_Latn | 0.719903 |
4f46d7731ddacf0df4c44160bd74b42ff387b903 | 23 | md | Markdown | README.md | BikeMixel/Demo | c0caf0c62c2cc4b1d5a721d797b214b438800df6 | [
"MIT"
] | null | null | null | README.md | BikeMixel/Demo | c0caf0c62c2cc4b1d5a721d797b214b438800df6 | [
"MIT"
] | null | null | null | README.md | BikeMixel/Demo | c0caf0c62c2cc4b1d5a721d797b214b438800df6 | [
"MIT"
] | null | null | null | # Demo
Hello, world | 11.5 | 16 | 0.608696 | kor_Hang | 0.801297 |
4f483fe940ae7085a07e9540ac751a4775276c5c | 6,037 | md | Markdown | public/Plotly/Examples/Scientific/Ternary/SoilTypes.md | Shmew/Feliz.Plotly | 4b368eddae3d86f87985fde1cadf8290020a96bb | [
"MIT"
] | 46 | 2019-10-13T18:38:59.000Z | 2022-01-30T15:47:10.000Z | public/Plotly/Examples/Scientific/Ternary/SoilTypes.md | Shmew/Feliz.Plotly | 4b368eddae3d86f87985fde1cadf8290020a96bb | [
"MIT"
] | 20 | 2019-10-16T16:58:03.000Z | 2022-01-20T12:08:25.000Z | public/Plotly/Examples/Scientific/Ternary/SoilTypes.md | Shmew/Feliz.Plotly | 4b368eddae3d86f87985fde1cadf8290020a96bb | [
"MIT"
] | 9 | 2019-10-18T14:14:07.000Z | 2022-03-17T07:33:36.000Z | # Feliz.Plotly - Ternary Plots
Taken from [Plotly - Ternary Plots](https://plot.ly/javascript/ternary-plots/)
```fsharp:plotly-chart-ternary-soiltypes
[<RequireQualifiedAccess>]
module Samples.Ternary.SoilTypes

open Fable.SimpleHttp
open Fable.SimpleJson
open Feliz
open Feliz.Plotly

type SoilData =
    { clay: float
      sand: float
      silt: float }

type SoilTypes =
    { sand: SoilData list
      loamysand: SoilData list
      sandyloam: SoilData list
      sandyclayloam: SoilData list
      sandyclay: SoilData list
      clay: SoilData list
      clayloam: SoilData list
      siltyclay: SoilData list
      siltyclayloam: SoilData list
      siltyloam: SoilData list
      silt: SoilData list
      loam: SoilData list }

type PlotItem =
    { Label: string
      Clay: float list
      Sand: float list
      Silt: float list }

module PlotItem =
    let create label (data: SoilData list) =
        let clay, sand, silt =
            data
            |> List.map (fun d -> d.clay, d.sand, d.silt)
            |> List.unzip3

        { Label = label
          Clay = clay
          Sand = sand
          Silt = silt }

    let ofSoilTypes soilTypes =
        [ create "sand" soilTypes.sand
          create "loamy sand" soilTypes.loamysand
          create "sandy loam" soilTypes.sandyloam
          create "sandy clay loam" soilTypes.sandyclayloam
          create "sandy clay" soilTypes.sandyclay
          create "clay" soilTypes.clay
          create "clay loam" soilTypes.clayloam
          create "silty clay" soilTypes.siltyclay
          create "silty clay loam" soilTypes.siltyclayloam
          create "silty loam" soilTypes.siltyloam
          create "silt" soilTypes.silt
          create "loam" soilTypes.loam ]

let render (data: PlotItem list) =
    let scatters =
        data
        |> List.map (fun pItem ->
            traces.scatterternary [
                scatterternary.mode.lines
                scatterternary.name pItem.Label
                scatterternary.a pItem.Clay
                scatterternary.b pItem.Sand
                scatterternary.c pItem.Silt
                scatterternary.line [
                    line.color "#C00"
                ]
            ])

    Plotly.plot [
        plot.traces scatters
        plot.layout [
            layout.ternary [
                ternary.sum 100
                ternary.aaxis [
                    aaxis.title [
                        title.text "Clay"
                        title.font [
                            font.size 20
                        ]
                    ]
                    aaxis.ticksuffix "%"
                    aaxis.min 0.01
                    aaxis.linewidth 2
                    aaxis.ticks.outside
                    aaxis.ticklen 8
                    aaxis.showgrid true
                ]
                ternary.baxis [
                    baxis.title [
                        title.text "Sand"
                        title.font [
                            font.size 20
                        ]
                    ]
                    baxis.ticksuffix "%"
                    baxis.min 0.01
                    baxis.linewidth 2
                    baxis.ticks.outside
                    baxis.ticklen 8
                    baxis.showgrid true
                ]
                ternary.caxis [
                    caxis.title [
                        title.text "Silt"
                        title.font [
                            font.size 20
                        ]
                    ]
                    caxis.ticksuffix "%"
                    caxis.min 0.01
                    caxis.linewidth 2
                    caxis.ticks.outside
                    caxis.ticklen 8
                    caxis.showgrid true
                ]
            ]
            layout.showlegend false
            layout.width 700
            layout.annotations [
                annotations.annotation [
                    annotation.showarrow false
                    annotation.text """Replica of Daven Quinn's <a href="http://bl.ocks.org/davenquinn/988167471993bc2ece29">block</a>"""
                    annotation.x 0.50
                    annotation.y 1.3
                ]
            ]
        ]
    ]

let chart' = React.functionComponent (fun (input: {| centeredSpinner: ReactElement |}) ->
    let isLoading, setLoading = React.useState false
    let error, setError = React.useState<Option<string>> None
    let content, setContent = React.useState []

    let path = "https://gist.githubusercontent.com/davenquinn/988167471993bc2ece29/raw/f38d9cb3dd86e315e237fde5d65e185c39c931c2/data.json"

    let loadDataset() =
        setLoading(true)
        async {
            let! (statusCode, responseText) = Http.get path
            setLoading(false)
            if statusCode = 200 then
                responseText
                |> SimpleJson.tryParse
                |> Option.map
                    (SimpleJson.mapKeys (fun k -> k.Replace(" ", ""))
                     >> Json.tryConvertFromJsonAs<SoilTypes>
                     >> Result.map (PlotItem.ofSoilTypes))
                |> function
                    | Some res ->
                        match res with
                        | Ok pItemL -> pItemL |> setContent
                        | Error e -> setError (Some (sprintf "Error during data transformations:\n%s" e))
                    | None -> setError (Some "Failed to parse Json data")
                setError(None)
            else
                setError(Some (sprintf "Status %d: could not load %s" statusCode path))
        }
        |> Async.StartImmediate

    React.useEffect(loadDataset, [| path :> obj |])

    match isLoading, error with
    | true, _ -> input.centeredSpinner
    | false, None -> render content
    | _, Some error ->
        Html.h1 [
            prop.style [ style.color.crimson ]
            prop.text error
        ])

let chart (centeredSpinner: ReactElement) = chart' {| centeredSpinner = centeredSpinner |}
``` | 32.809783 | 138 | 0.502071 | eng_Latn | 0.406364 |
4f4877b44748652e2028764373f45e5adaa3506b | 69 | md | Markdown | tag/2020.md | andrewsio/andrewsio-blog.github.io | 008e349a3aeb8ba3e162b5cf72e15093ee63eea2 | [
"MIT"
] | 2 | 2021-10-02T23:32:37.000Z | 2022-01-03T21:57:43.000Z | tag/2020.md | LionsWrath/blog | dc0d67c44e7455850b2ad9cc9f3e75839561e650 | [
"MIT"
] | null | null | null | tag/2020.md | LionsWrath/blog | dc0d67c44e7455850b2ad9cc9f3e75839561e650 | [
"MIT"
] | 1 | 2020-01-02T19:15:55.000Z | 2020-01-02T19:15:55.000Z | ---
layout: tagpage
title: "Tag: 2020"
tag: 2020
robots: noindex
---
| 9.857143 | 18 | 0.652174 | dan_Latn | 0.183955 |
4f489b041f3e98da5d23c8a09caa1fdf73cdd16a | 4,448 | md | Markdown | _posts/2016-04-07-13_PDEs.md | ASU-CompMethodsPhysics-PHY494/ASU-PHY494-2016 | 64cccdb51b6a27d51ee19edf39979ed7add2b2c2 | [
"CC-BY-4.0"
] | null | null | null | _posts/2016-04-07-13_PDEs.md | ASU-CompMethodsPhysics-PHY494/ASU-PHY494-2016 | 64cccdb51b6a27d51ee19edf39979ed7add2b2c2 | [
"CC-BY-4.0"
] | null | null | null | _posts/2016-04-07-13_PDEs.md | ASU-CompMethodsPhysics-PHY494/ASU-PHY494-2016 | 64cccdb51b6a27d51ee19edf39979ed7add2b2c2 | [
"CC-BY-4.0"
] | null | null | null | ---
layout: post
title: 13 Partial Differential Equations (PDEs)
---
Most physical phenomena are ultimately described by a relationship between changing quantities. If only a single quantity such as time is changing, [Ordinary Differential Equations]({{site.baseurl}}{% post_url 2016-02-16-08_ODEs %}) (ODEs) result. However, as soon as more than a single quantity varies independently from the other ones, [partial differential equations](http://mathworld.wolfram.com/PartialDifferentialEquation.html) (**PDEs**) have to be solved.
For example, a scalar field $$U(x, y)$$ admits the general PDE
$$
A(x, y)\frac{\partial^2 U}{\partial x^2} +
2B(x, y)\frac{\partial^2 U}{\partial x \partial y} +
C(x, y)\frac{\partial^2 U}{\partial y^2} +
D(x, y)\frac{\partial U}{\partial x} +
E(x, y)\frac{\partial U}{\partial y} +
F(x, y) = 0
$$
$$
A(x, y) U_{xx} +
2B(x, y) U_{xy} +
C(x, y) U_{yy} +
D(x, y) U_{x} +
E(x, y) U_{y} +
F(x, y) = 0
$$
(For notational convenience, partial derivatives are often written in subscript notation.)
## Classes of second-order PDEs
Second-order PDEs (highest partial derivatives are second order, such as $$U_{xz}$$ or $$U_{tt}$$) are ubiquitous in physics and can be classified as
* *elliptic* PDEs ($$AC - B^2 > 0$$, second derivatives of all variables and all with the same sign, e.g. Poisson's equation $$\nabla^2 U(x, y, z) = U_{xx} + U_{yy} + U_{zz} = -4\pi\rho(x,y,z)$$);
* *parabolic* PDEs ($$AC - B^2 = 0$$, second order derivative of one variable and first order derivative of the other one, e.g., the heat equation $$\nabla^2 U(x, y, z, t) - a \frac{\partial U}{\partial t} = 0$$);
* *hyperbolic* PDEs ($$AC - B^2 < 0$$, second derivatives of all variables with the opposite sign, e.g. wave equation $$\nabla^2 U(x, y, z, t) - c^{-2} \frac{\partial^2 U}{\partial t^2} = 0$$).
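The sign test behind this classification is easy to make concrete. A tiny helper (a sketch; the coefficients are read off the general form above, with constant $$A$$, $$B$$, $$C$$):

```python
def classify_second_order(A, B, C):
    """Classify a second-order PDE by the sign of the discriminant AC - B^2."""
    disc = A * C - B**2
    if disc > 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

# Laplace: U_xx + U_yy = 0        ->  A = C = 1, B = 0
print(classify_second_order(1, 0, 1))      # elliptic
# Heat:    U_xx - a U_t = 0       ->  A = 1, B = C = 0
print(classify_second_order(1, 0, 0))      # parabolic
# Wave:    U_xx - c^-2 U_tt = 0   ->  A = 1, B = 0, C = -1/c^2 (here c = 2)
print(classify_second_order(1, 0, -0.25))  # hyperbolic
```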
## Boundary conditions
PDEs are more complicated than ODEs. General solutions of PDEs involve arbitrary functions, and additional *[boundary conditions](http://mathworld.wolfram.com/BoundaryConditions.html)* (values or derivatives of the solution $$U(x,y)$$ on the boundary of the problem domain) have to be specified to obtain a specific solution (in addition to any *initial conditions*).
* *Dirichlet boundary conditions* prescribe the value of the solution on a surrounding closed surface.
* *Neumann boundary conditions* specify the value of the normal derivative on the surrounding surface.
* *Cauchy boundary conditions* set both the value and the normal derivative on the surrounding closed surface (i.e., both Dirichlet and Neumann boundary conditions are imposed).
* *Robin boundary conditions* specify that a weighted sum of Dirichlet and Neumann boundary conditions have to have a given value on the boundary.
Often, no analytic solutions can be found or the analytic solutions are unwieldy. PDEs can be solved numerically, though. However, different types of PDEs require customized algorithms.
## Solving PDEs numerically
We will look at various [finite difference]({{site.baseurl}}{% post_url 2016-02-04-06_Differentiation%}) schemes (both explicit and implicit) to solve the different classes of PDEs.
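As a minimal sketch of the relaxation idea behind these schemes, the snippet below solves Laplace's equation $$U_{xx} + U_{yy} = 0$$ on a small grid with the Gauss-Seidel update (the grid size, boundary values, and tolerance are arbitrary example choices):

```python
import numpy as np

def gauss_seidel_laplace(U, tol=1.0e-4, max_iter=10_000):
    """Relax the interior of U toward a solution of Laplace's equation.

    The edges of U hold fixed Dirichlet boundary values; each interior
    point is repeatedly replaced by the average of its four neighbors
    until the largest change in one sweep drops below tol.
    """
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, U.shape[0] - 1):
            for j in range(1, U.shape[1] - 1):
                new = 0.25 * (U[i + 1, j] + U[i - 1, j] + U[i, j + 1] + U[i, j - 1])
                max_change = max(max_change, abs(new - U[i, j]))
                U[i, j] = new
        if max_change < tol:
            break
    return U

# Example boundary condition: one edge held at 100, the others at 0.
U = np.zeros((30, 30))
U[0, :] = 100.0
U = gauss_seidel_laplace(U)
```

The explicit double loop is the textbook form; the class notebooks speed this up with NumPy array operations ("numpyfied" solvers).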
## Class material
1. [13_PDEs-1.ipynb]({{site.nbviewer.resources}}/13_PDEs/13_PDEs-1.ipynb):
Poisson's and Laplace's equation: Background
([board notes on PDEs (PDF)]({{site.resources.fileurl}}/13_PDEs/13_PDEs-1-LectureNotes.pdf)),
Gauss-Seidel algorithm, problems:
[13_PDEs-1-Students.ipynb]({{site.nbviewer.resources}}/13_PDEs/13_PDEs-1-Students.ipynb)
2. [13_PDEs-2.ipynb]({{site.nbviewer.resources}}/13_PDEs/13_PDEs-2.ipynb):
fast Jacobi, Gauss-Seidel and Successive Over Relaxation algorithms
(using NumPy array operations, see
[board notes on numpyfied Poisson solvers (PDF)]({{site.resources.fileurl}}/13_PDEs/13_PDEs-2-LectureNotes.pdf)),
Poisson equation, charge density:
[13_PDEs-2-Students.ipynb]({{site.nbviewer.resources}}/13_PDEs/13_PDEs-2-Students.ipynb)
#### Additional resources ####
* _Computational Physics_, Ch **19--23**
* _[Partial differential equation](http://www.scholarpedia.org/article/Partial_differential_equation)_, Andrei D. Polyanin et al. (2008), Scholarpedia, 3(10):4605. doi: [10.4249/scholarpedia.4605](http://doi.org/doi:10.4249/scholarpedia.4605)
* _[Numerical Recipes in C](http://apps.nrbook.com/c/index.html)_, WH
Press, SA Teukolsky, WT Vetterling, BP Flannery. 2nd
ed, 2002. Cambridge University Press. Chapter **19**.
| 60.108108 | 463 | 0.729991 | eng_Latn | 0.916573 |
4f48eef4a222cce22b72e498501c7ef35b9a2325 | 2,117 | md | Markdown | README.md | dconeybe/UnityIssue1154TestApp | 39bc9f34f9d430722d453a61688df03b20c18645 | [
"Apache-2.0"
] | null | null | null | README.md | dconeybe/UnityIssue1154TestApp | 39bc9f34f9d430722d453a61688df03b20c18645 | [
"Apache-2.0"
] | null | null | null | README.md | dconeybe/UnityIssue1154TestApp | 39bc9f34f9d430722d453a61688df03b20c18645 | [
"Apache-2.0"
] | null | null | null | ## UnityIssue1154TestApp
By: Denver Coneybeare <dconeybe@google.com>
Oct 13, 2021
### Background Information
This is a C++ application to attempt to reproduce
https://github.com/firebase/quickstart-unity/issues/1154
in the Firebase C++ SDK to facilitate debugging.
The issue reported by @dex3r is that if a read is the first Firestore operation
performed then the read fails with the following error:
> FirestoreException: Failed to get document from server.
However, if the first operation is, instead, a _write_ then the subsequent read
succeeds.
Based on the logs provided by @dex3r, it appears that the initial connection
with the Firestore backend is taking upwards of 20 seconds, which exceeds the
10-second timeout for reads. Writes, on the other hand, have a less aggressive
timeout and can tolerate this excessively-long initial connection time.
This test app attempts to reproduce the issue at a lower level to make
debugging easier.
### Instructions
1. Clone this Git repository.
1. Download the Firebase C++ SDK from https://firebase.google.com/docs/cpp/setup
(or from another location, as requested by me).
1. Unzip the downloaded Firebase C++ SDK into the directory into which the Git
repository was cloned; this will create a subdirectory named
`firebase_cpp_sdk`.
1. `cmake -S . -B build`
1. `cd build`
1. `cmake --build .`
1. Copy your `google-services.json` into the `build` directory.
1. `./firebase_unity_issue_1154_test_app --help` or
`debug\firebase_unity_issue_1154_test_app.exe --help` on Windows.
### License
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| 35.881356 | 80 | 0.780822 | eng_Latn | 0.996274 |
4f4a86ce73cbd7807b74e304110264c90883f60b | 3,549 | md | Markdown | contents/byway/11180.md | melitele/byways | 973924f06d66a8020dd0ea58ce640dbd406f95bb | [
"CC-BY-3.0",
"MIT"
] | 4 | 2019-01-26T20:50:25.000Z | 2020-08-12T20:47:19.000Z | contents/byway/11180.md | melitele/byways | 973924f06d66a8020dd0ea58ce640dbd406f95bb | [
"CC-BY-3.0",
"MIT"
] | 3 | 2019-04-01T17:25:37.000Z | 2021-03-09T01:55:25.000Z | contents/byway/11180.md | melitele/byways | 973924f06d66a8020dd0ea58ce640dbd406f95bb | [
"CC-BY-3.0",
"MIT"
] | 7 | 2017-04-19T22:52:17.000Z | 2021-07-13T03:51:42.000Z | ---
template: byway.jade
id: "11180"
name: Paul Bunyan Scenic Byway
distance: "54"
duration: 1.5 hours to drive or four to six hours to enjoy
description: "Like nature's table of contents, the lakes and woods of the Paul Bunyan Scenic Byway are chapters filled with many recreational opportunities and adventures just waiting to be discovered. With the wink and wit of Paul Bunyan lore slipped in for fun, sites and events turn easily into sparkling memories."
contact: "**Brainerd Lakes Area Chamber of Commerce** \r\n 218-829-2838 \r\n 800-450-2838 \r\n\r\n**[Legends and Lore of the Paul Bunyan Scenic Byway](http://www.paulbunyanscenicbyway.org/byway_podcasts)** \r\nVisit this website and listen to podcasts from the byway."
path:
- "yqc|G~ie_QcBsBkMsQ}@oDBmH`KXZQb@g@XqAB{CIizCJm`@f@em@CuD]gF[iC}@iEgEgPa@mCS_C?oDVeDd@yCvDiM~@eFr@qFFkJE{nBAmmAPi|@RqzA"
- "m_v{Grbx~PgBmFeByAyA[igAZscArAuBGqA_@k@_@o@_@sByD_AuDSeCB{j@I}}@~@sM?iBqDcf@[gBe@yAc@_AyBsCcAo@uBWsEEccALmVIwIQcCeAwAgB{AyD]aDEkA?gd@HaSMoDg@_Dm@_CcC_DsAo@wB_@oGKaW?{wAe@"
- "u~_|Gxhm}P|CtBhAh@lAFlFsArDg@lGEr@@h@Dh@H`EvBVLrElAnBXpBLpAXhBd@n@PfD~AzB`BdE~DhS`e@`BfCnDxDnDnFzHnWt@zCd@pGu@nKJ`FbBhOn@tD^~Cd@vB`AfBp@~@fGlElDnEbBxEx@`HHveAD`FIjCyBvK]fCKbDXfEr@zClHjWTpCBzBn@~IzAtDpA~AdAl@xAh@vBNpGHpARrAf@rBjBx@rAlAnDZbD?rE[feA?vPd@xCx@dCrBfDhCzA|BTbGFjEp@xDfCdB`Cr@pAfB`GZrCTfCF~d@Ibv@s@ndCN`ClAlF`AzA~CdCpAXzCDnNYtCjArAzAdBdD\\tAZvCHzCChKR`XK`IBbD?pHAfJHdWEd^NzMj@dFl@`CfAtC`BnC|B`C`NzJvQ|LhC|BxYd]tv@rx@pA|Bh@xBb@bDHpIIleAJvbACxc@CfEBlE"
- "mam{Gxqs~Pu@cPDaOBwKQ_h@f@wfAT_PGwEaAcSEoJ^gOrAoPZeVLwTi@uz@?qATuDb@aD~BsJfBmGxAmJnCui@p@{Gt@_GxB{IxBmF`Q_YfCmEfBqEbAmDjA{Fh@sDhE_b@~@aH|@uIkcAK{@K_GaDmBg@kAIiABiAZwAx@uCjCiBz@sBXi_@nAyGHsASyAm@mBeB_BgDy@qCU{BIsKDm_@Eq]DcUDiCr@eFdB_JRiCEgC}CsTA{MPycAAqVKuNGy`AB_NWutB_@s|@o]FcTs@oDWcCe@mF}C}BeCqt@ymAeB{BkBkBcBqAcCmAoBs@wDm@gjB_ByFk@}H{BgEaB_@WeAMcDCqDb@{ExAuDl@eAEmCsAaAeAa@u@}CeIyBgEc@g@sDeCqB_@aDQaY?_AVkEfB}EvCyCvAqCp@sIxAmE^wJJqFfAsJ~DkIrEsCTeM{AuCGyBXk\\zLoC`BcClE"
- "ctc|G`qn~P?c`Bq@_dAFehA_@ko@a@_Cm@yAaBqBoAy@sCUabAuAgBi@kAy@}@kAs@wA_@oAQgCKgs@a@kaAPahAC}BOuBq@sDk@_BiTwb@eEwH{@qAwEiJi@qAyAyGMsCLgcBv@mxBJ_eC~fAf@nA[lC_Cz\\q_@hDoCvB}@lI_Cz@e@bKoH~@e@rBa@bFy@vBQzIgA~BEfJlAxAJxBK|C_B`G{GdA{@lCiAbTyDfD{@lByAx@kA~HcO"
websites:
- url: "http://www.cityofbreezypointmn.us/index.asp?Type=NONE&SEC={2E6EDF0B-190C-4152-9A89-9A8C5BA233A4}"
name: City of Breezy Point Community Page
- url: "http://www.explorebrainerdlakes.com/"
name: Explore Brainerd Lakes
- url: "http://www.idealtownship.com/"
name: Ideal Township Community Page
- url: "http://www.exploreminnesota.com/where-to-go/scenic-byways/central-byways/paul-bunyan/index.aspx"
name: Paul Bunyan Scenic Byway
- url: "http://www.paulbunyanscenicbyway.org/"
name: Paul Bunyan Scenic Byway
- url: "http://www.pinerivermn.com"
name: Pine River Minnesota Chamber of Commerce
- url: "http://www.paulbunyanscenicbyway.org/byway_podcasts"
name: Podcasts for Paul Bunyan Scenic Byway
designations:
- National Scenic Byway
- Minnesota State Scenic Byway
states:
- MN
ll:
- -94.40431200019862
- 46.71789900013874
bounds:
- - -94.40431200019862
- 46.59143799956399
- - -94.1087880003659
- 46.74033000032921
---
Like nature's table of contents, the lakes and woods of the Paul Bunyan Scenic Byway are chapters filled with many recreational opportunities and adventures just waiting to be discovered. With the wink and wit of Paul Bunyan lore slipped in for fun, sites and events turn easily into sparkling memories. | 77.152174 | 476 | 0.771485 | yue_Hant | 0.490112 |
4f4acefe7f8386173604ab6db257eb8f59e1a2a4 | 3,019 | md | Markdown | CHANGELOG.md | jacobwarren/honeydew | d601143017b8d530df63bf357ed8f2d66649e3d4 | [
"MIT"
] | null | null | null | CHANGELOG.md | jacobwarren/honeydew | d601143017b8d530df63bf357ed8f2d66649e3d4 | [
"MIT"
] | null | null | null | CHANGELOG.md | jacobwarren/honeydew | d601143017b8d530df63bf357ed8f2d66649e3d4 | [
"MIT"
] | null | null | null | ## 1.2.7 (2019-1-8)
### Enhancements
* Adding table prefixes to Ecto Poll Queue (thanks @jfornoff!)
## 1.2.6 (2018-9-19)
### Enhancements
* Honeydew crash log statements now include the following metadata
`:honeydew_crash_reason` and `:honeydew_job`. These metadata entries
can be used for building a LoggerBackend that could forward failures
to an error logger integration like Honeybadger or Bugsnag.
## 1.2.5 (2018-8-24)
### Bug fixes
* Don't restart workers when linked process terminates normally
## 1.2.4 (2018-8-23)
### Bug fixes
* Catch thrown signals on user's init/1
## 1.2.3 (2018-8-23)
### Bug fixes
* Gracefully restart workers when an unhandled message is received.
## 1.2.2 (2018-8-23)
### Bug fixes
* Catch thrown signals from user's job code
## 1.2.1 (2018-8-20)
### Bug fixes
* Stop ignoring `init_retry_secs` worker option
* Fixed `Honeydew.worker_opts` typespecs.
* Fixed `Honeydew.start_workers` specs.
## 1.2.0 (2018-8-17)
Honeydew now supervises your queues and workers for you; you no longer need to
add them to your supervision trees.
### Breaking Changes
* `Honeydew.queue_spec/2` and `Honeydew.worker_spec/3` are now hard deprecated
in favor of `Honeydew.start_queue/2` and `Honeydew.start_workers/3`
### Bug fixes
* Rapidly failing jobs no longer have a chance to take down the worker supervisor.
### Enhancements
* `Honeydew.queues/0` and `Honeydew.workers/0` to list queues and workers running
on the local node.
* `Honeydew.stop_queue/1` and `Honeydew.stop_workers/1` to stop local queues and
workers
* Workers can now use the `failed_init/0` callback in combination with
`Honeydew.reinitialize_worker` to re-init workers if their init fails.
* Many other things I'm forgetting...
## ?
### Breaking Changes
* Updated `Honeydew.cancel/1` to return `{:error, :not_found}` instead of `nil`
when a job is not found on the queue. This makes it simpler to pattern match
against error conditions, since the other condition is
`{:error, :in_progress}`.
* Changed `Honeydew.Queue.cancel/2` callback to return `{:error, :not_found}`
instead of `nil` when a job isn't found. This makes the return types the same
as `Honeydew.cancel/1`.
### Bug fixes
* Fixed issue where new workers would process jobs from suspended queues (#35)
### Enhancements
* Docs and typespecs for the following functions
* `Honeydew.async/3`
* `Honeydew.filter/2`
* `Honeydew.resume/1`
* `Honeydew.suspend/1`
* `Honeydew.yield/2`
## 1.0.4 (2017-11-29)
### Breaking Changes
* Removed `use Honeydew.Queue` in favor of `@behaviour Honeydew.Queue` callbacks
### Enhancements
* Added Honeydew.worker behaviour
* Relaxed typespec for `Honeydew.worker_spec/3` `module_and_args` param (#27)
* New docs for
* `Honeydew.FailureMode`
* `Honeydew.FailureMode.Abandon`
* `Honeydew.FailureMode.Move`
* `Honeydew.FailureMode.Retry`
* `Honeydew.Queue.ErlangQueue`
* `Honeydew.Queue.Mnesia`
* Validate arguments for success and failure modes
| 28.752381 | 82 | 0.723087 | eng_Latn | 0.971929 |
4f4ae4dfcbb0ece4f4abb36e3610db7e49b9cc30 | 1,881 | md | Markdown | LICENSE.md | changliao1025/pyflowline | fb8677c5ebb3d0db8638f7fcc495ffb97376e00f | [
"Unlicense"
] | 4 | 2022-03-23T12:10:20.000Z | 2022-03-29T13:41:16.000Z | LICENSE.md | changliao1025/pyflowline | fb8677c5ebb3d0db8638f7fcc495ffb97376e00f | [
"Unlicense"
] | 1 | 2022-03-24T16:08:35.000Z | 2022-03-24T16:08:35.000Z | LICENSE.md | changliao1025/pyflowline | fb8677c5ebb3d0db8638f7fcc495ffb97376e00f | [
"Unlicense"
] | null | null | null | PyFlowline
A mesh independent river network generator for hydrologic models.
Copyright © 2022, Battelle Memorial Institute
1. Battelle Memorial Institute (hereinafter Battelle) hereby grants permission to any person or entity lawfully obtaining a copy of this software and associated documentation files (hereinafter “the Software”) to redistribute and use the Software in source and binary forms, with or without modification. Such person or entity may use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and may permit others to do so, subject to the following conditions:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimers.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Other than as used herein, neither the name Battelle Memorial Institute or Battelle may be used in any form whatsoever without the express written consent of Battelle.
2. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL BATTELLE OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | 134.357143 | 746 | 0.812334 | eng_Latn | 0.434775 |
4f4b538d6b7d25d991796e18b6a903f5c5fa5fe3 | 3,180 | md | Markdown | _notes/Q - What are some patterns to keep data immutable.md | dan-dm/my-digital-garden | 4b409bc724b4769171809836fe0c7f91eb9e9c39 | [
"MIT"
] | null | null | null | _notes/Q - What are some patterns to keep data immutable.md | dan-dm/my-digital-garden | 4b409bc724b4769171809836fe0c7f91eb9e9c39 | [
"MIT"
] | null | null | null | _notes/Q - What are some patterns to keep data immutable.md | dan-dm/my-digital-garden | 4b409bc724b4769171809836fe0c7f91eb9e9c39 | [
"MIT"
] | null | null | null | ---
Title: What are some patterns to keep data immutable
---
# What are some patterns to keep data immutable
Immutability means that if we want to update a value, we create a new variable that contains the new value instead of changing a previously declared variable. Immutability is a core pattern in functional programming, and it gives us more confidence in our code because we know data cannot change throughout the lifetime of the program.
JavaScript doesn't offer immutable arrays or objects by default, but there are functions we can use to prevent objects from being altered and patterns we can follow to prevent data from being mutated. Strings in JavaScript are immutable.
Some examples of maintaining immutability in JS are:
```javascript
// Spread operator in arrays
const arr1 = [1, 2, 3];
const arr2 = [...arr1];
// This creates a brand new array with the same values
arr1 === arr2; // false
// Spread operator in objects
const obj1 = { hi: 'world' };
const obj2 = { ...obj1 };
// This creates a brand new object with the same properties
obj1 === obj2; // false
// Object.assign
const obj1 = { hi: 'world' };
const obj2 = Object.assign({}, obj1);
// This creates a brand new object with the same properties
obj1 === obj2; // false
// Array.slice
const arr1 = [1, 2, 3];
const arr2 = arr1.slice(1);
console.log(arr1); // [1, 2, 3]
console.log(arr2); // [2, 3]
```
> Note that Object.assign and the spread operator just make a shallow copy of the object. If you have nested arrays and/or objects, they will still point to the original value.
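A quick sketch of that caveat — after a spread, the nested object is still shared, so a common pattern is to spread each level you need to change (the variable names here are just for illustration):

```javascript
const user = { name: 'Ada', address: { city: 'London' } };
const copy = { ...user };

// The top-level object is new, but `address` still points to the same
// nested object, so a mutation through the copy leaks into the original.
copy.address.city = 'Paris';
console.log(user.address.city); // 'Paris'

// Spreading each level creates a new nested object as well.
const safeCopy = { ...user, address: { ...user.address } };
safeCopy.address.city = 'Berlin';
console.log(user.address.city); // still 'Paris'
console.log(safeCopy.address.city); // 'Berlin'
```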
JavaScript also offers a lower-level configuration of objects.
You can use `Object.defineProperty` to set the CRUD access on individual properties. Setting `writable: false` means that you can't change the value of a property, and setting `configurable: false` means you can't change the type or delete it from the object.
```javascript
const obj = {};
Object.defineProperty(obj, 'hello', {
value: 'world',
writable: false,
configurable: false,
});
console.log(obj.hello); // 'world'
obj.hello = 'Skilled.dev';
console.log(obj.hello); // 'world'
```
You can use `Object.preventExtensions` to block an object from adding new properties.
```javascript
var obj = {
hello: 'world',
};
Object.preventExtensions(obj);
obj.test = 123;
console.log(obj.test); // undefined
// You are still allowed to change existing properties
obj.hello = 'Skilled.dev';
console.log(obj); // { hello: 'Skilled.dev' }
```
You can use `Object.seal` which is the same as `Object.preventExtensions`, but it also sets all the properties to `configurable: false`. You cannot add new properties, change the type of existing properties, or delete properties.
```javascript
var obj = {
hello: 'world',
};
Object.seal(obj);
delete obj.hello;
console.log(obj); // { hello: 'world' }
```
Using `Object.freeze` gives you the highest level of immutability. You cannot add, change, or delete properties.
```javascript
var obj = {
hello: 'world',
};
Object.freeze(obj);
obj.test = 123;
obj.hello = 'Skilled.dev';
delete obj.hello;
console.log(obj); // { hello: 'world' }
```
# ---
Tags: #coding #interview
Topics: [[Interview Qs]]
| 28.648649 | 335 | 0.719811 | eng_Latn | 0.987862 |
4f4b85ae4d71eca35f5e519331b3c47c9621836d | 552 | md | Markdown | src/news/landing-contest-nov16-2019.md | davidfekke/crgpilotsassoc | 7465fbed650d9329a3ab96e5a3f2073d7a06718d | [
"MIT"
] | null | null | null | src/news/landing-contest-nov16-2019.md | davidfekke/crgpilotsassoc | 7465fbed650d9329a3ab96e5a3f2073d7a06718d | [
"MIT"
] | 12 | 2020-01-06T15:40:04.000Z | 2022-02-26T10:25:05.000Z | src/news/landing-contest-nov16-2019.md | davidfekke/crgpilotsassoc | 7465fbed650d9329a3ab96e5a3f2073d7a06718d | [
"MIT"
] | null | null | null | ---
title: "Save the date - November 16, 2019 - Event at Craig"
date: 2019-07-27
---
## Save the Date!!!! November 16, 2019.
We are planning a Landing contest and Young Eagle rally, maybe more. Block your calendar to join us at KCRG. We will need support from the local community, we will need pilots for Young Eagles and everyone is encouraged to show how great their landing skills are. Start practicing now!
Watch the Facebook Group and Website for more information as well. We will have a volunteer signup and contest signup on the web soon. | 55.2 | 287 | 0.757246 | eng_Latn | 0.998296 |
4f4bd5359efacc281228b372b808178a7dd01b8d | 10,742 | md | Markdown | articles/data-factory/data-flow-sink.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/data-flow-sink.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/data-factory/data-flow-sink.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Configure a sink transformation in the mapping data flow feature of Azure Data Factory
description: Learn how to configure a sink transformation in mapping data flow.
author: kromerm
ms.author: makromer
ms.service: data-factory
ms.topic: conceptual
ms.date: 02/03/2019
ms.openlocfilehash: 7cfe0cf291e8c39a4600234632090c39ab5cd78e
ms.sourcegitcommit: c22327552d62f88aeaa321189f9b9a631525027c
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/04/2019
ms.locfileid: "73519329"
---
# <a name="sink-transformation-for-a-data-flow"></a>Sink transformation for a data flow
After you finish transforming your data, you can sink it into a destination dataset. In the sink transformation, choose a dataset definition for the destination output data. You can have as many sink transformations as your data flow requires.

To account for changes in the incoming data and for schema drift, sink the output data to a folder without a defined schema in the output dataset. You can also select **Allow schema drift** in the source to account for column changes in the sources. Then automap all fields in the sink.

![Sink options](media/data-flow/sink1.png "sink 1")

To sink all incoming fields, turn on **Auto-map**. To choose the fields to sink to the destination, or to change the names of the fields at the destination, turn off **Auto-map**. Then open the **Mapping** tab to map output fields.

![Options](media/data-flow/sink2.png "sink 2")

## <a name="output"></a>Output
For Azure Blob Storage or Data Lake Storage sink types, output the transformed data into a folder. Spark generates partitioned output data files based on the partitioning scheme that the sink transformation uses.

You can set the partitioning scheme on the **Optimize** tab. If you want Data Factory to merge the output into a single file, select **Single partition**.

![Options](media/data-flow/opt001.png "optimize")

## <a name="field-mapping"></a>Field mapping
On the **Mapping** tab of your sink transformation, you can map the incoming columns on the left to the destinations on the right. When you sink data flows to files, Data Factory always writes new files to a folder. When you map to a database dataset, you choose database table operation options to insert, update, upsert, or delete.

![Field mapping](media/data-flow/sink-field-mapping.png "Field mapping")

In the mapping table, you can multiselect to link multiple columns, delink multiple columns, or map multiple rows to the same column name.

To always map the incoming set of fields to a target as they are and to fully accept flexible schema definitions, select **Allow schema drift**.

![Field mapping](media/data-flow/sink-field-mapping2.png "Field mapping")

To reset your column mappings, select **Re-map**.

![Re-map](media/data-flow/sink-remap.png "Re-map")

Select **Validate schema** to fail the sink if the schema changes.

Select **Clear the folder** to truncate the contents of the sink folder before writing the destination files in that target folder.

## <a name="fixed-mapping-vs-rule-based-mapping"></a>Fixed mapping vs. rule-based mapping
When you turn off auto-mapping, you have the option to add either column-based mappings (fixed mapping) or rule-based mappings. Rule-based mapping lets you write expressions with pattern matching, while fixed mapping maps logical and physical column names.

![Rule-based mapping](media/data-flow/rule2.png "Rule-based mapping")

When you choose rule-based mapping, you're instructing ADF to evaluate your matching expression to match incoming pattern rules and to define the outgoing field names. You can add any combination of field-based and rule-based mappings. ADF then generates the field names at runtime based on the incoming metadata from the source. You can view the generated field names while debugging and by using the data preview pane.

Details on pattern matching are in the [column pattern documentation](concepts-data-flow-column-pattern.md).

You can also write regular-expression patterns when you use rule-based matching. To do so, expand the row and enter a regular expression next to "Name matches:".

![Rule-based mapping with regular expressions](media/data-flow/rule1.png "Rule-based mapping with regex")

A very basic common example of rule-based versus fixed mapping is the case where you want to map all incoming fields to the same name in your target. For fixed mappings, you would list each individual column in the table. For rule-based mapping, you would have a single rule that maps all fields that use ```true()``` to the same incoming field name represented by ```$$```.

### <a name="sink-association-with-dataset"></a>Sink association with dataset
The dataset you select for your sink may or may not have a schema set in its definition. If it doesn't have a defined schema, you must allow schema drift. When you define a fixed mapping, the logical-to-physical name mapping persists in the sink transformation. If you change the schema definition of the dataset, you'll potentially break your sink mapping. To avoid this, use rule-based mapping. Rule-based mappings are generalized, which means that schema changes on your dataset won't break the mapping.

## <a name="file-name-options"></a>File name options
Set up file naming:

* **Default**: Allow Spark to name files based on PART defaults.
* **Pattern**: Enter a pattern for your output files. For example, **loans[n]** will create loans1.csv, loans2.csv, and so on.
* **Per partition**: Enter one file name per partition.
* **As data in column**: Set the output file to the value of a column.
* **Output to a single file**: With this option, ADF will combine the partitioned output files into a single named file. To use this option, your dataset must resolve to a folder name. Also, be aware that this merge operation can possibly fail depending on node size.

> [!NOTE]
> File operations start only when you're running the Data Flow execute activity. They don't start in Data Flow debug mode.

## <a name="database-options"></a>Database options
Choose database settings:

![Database options](media/data-flow/alter-row2.png "Database options")

* **Update method**: The default is to allow inserts. Clear **Allow insert** if you want to stop inserting new rows from your source. To update, upsert, or delete rows, first add an alter-row transformation to tag rows for those actions.
* **Recreate table**: Drop or create your target table before the data flow finishes.
* **Truncate table**: Remove all rows from your target table before the data flow finishes.
* **Batch size**: Enter a number to bucket writes into chunks. Use this option for large data loads.
* **Enable staging**: Use PolyBase when you load Azure Data Warehouse as your sink dataset.
* **Pre and post SQL scripts**: Enter multiline SQL scripts that will execute before (preprocessing) and after (postprocessing) your data is written to your sink database.

> [!NOTE]
> En Data Flow, puede indicarle a Data Factory que cree una definición de tabla en la base de datos de destino. Para crear la definición de tabla, establezca un conjunto de datos en la transformación del receptor que tenga un nuevo nombre de tabla. En el conjunto de datos de SQL, debajo del nombre de la tabla, seleccione **Editar** y escriba un nuevo nombre. Después, en la transformación del receptor, active **Allow schema drift** (Permitir el desfase de esquema). Establezca **Importar esquema** en **Ninguno**.

> [!NOTE]
> Al actualizar o eliminar filas en el receptor de base de datos, debe establecer la columna de clave. Esta opción permite que la transformación de alteración de fila determine la fila única en la biblioteca de movimiento de datos (DML).
### <a name="cosmosdb-specific-settings"></a>Configuración específica de CosmosDB
Cuando dirija datos en CosmosDB, tendrá que tener en cuenta estas opciones adicionales:
* Clave de partición: Este campo es obligatorio. Escriba una cadena que represente la clave de partición de la colección. Ejemplo: ```/movies/title```
* Rendimiento: Establezca un valor opcional para el número de RU que desea aplicar a la colección de CosmosDB para cada ejecución de este flujo de datos. El mínimo es de 400.
## <a name="next-steps"></a>Pasos siguientes
Ahora que ha creado el flujo de datos, agregue una [actividad de Data Flow a la canalización](concepts-data-flow-overview.md).
# grafi-despeckle
This is your grafi module!
1. Before you start, make sure to run `npm install`.
1. Edit `/src/despeckle.js`; a `despeckle` function should already be set up there.
1. `npm run build` will create the distribution file `grafi-despeckle.js`.
1. Make sure to run `npm test`! Edit `grafi-despeckle.test.js` as needed.
1. Edit this README.md (the more documentation, the better!).
1. Ready to publish to npm? Add a description in `package.json`.
1. Once the module is published, make a Pull Request to the [main repo](https://github.com/grafijs/grafi) so this module is included in the main bundle.
## License
Copyright 2016 Grafi project contributors
Code licensed under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0)
Documentation licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/)
---
title: Development environment for Docker apps
description: Learn about the main options for development tools that support the Docker development lifecycle.
ms.date: 01/06/2021
ms.openlocfilehash: c6c4a1fda41131c00ba87808ed408f1d3250cabf
ms.sourcegitcommit: 7ef96827b161ef3fcde75f79d839885632e26ef1
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 01/07/2021
ms.locfileid: "97970524"
---
# <a name="development-environment-for-docker-apps"></a>Entwicklungsumgebung für Docker-Apps
## <a name="development-tools-choices-ide-or-editor"></a>Auswahlmöglichkeiten für Entwicklungstools: IDE oder Editor
Ganz gleich, ob Sie eine vollständige und leistungsstarke integrierte Entwicklungsumgebung (IDE) oder einen schlanken und flexiblen Editor bevorzugen: Microsoft bietet Ihnen Tools, die Sie zum Entwickeln von Docker-Anwendungen verwenden können.
### <a name="visual-studio-code-and-docker-cli-cross-platform-tools-for-mac-linux-and-windows"></a>Visual Studio Code und Docker-CLI (plattformübergreifende Tools für Mac, Linux und Windows)
Wenn Sie einen einfachen und plattformübergreifenden Editor bevorzugen, der jede beliebige Entwicklungssprache unterstützt, können Sie Visual Studio Code und die Docker-CLI verwenden. Diese Produkte bieten einen einfachen aber widerstandsfähigen Prozess, der für die Optimierung des Entwicklungsworkflows kritisch ist. Durch Installieren von „Docker für Mac“ oder „Docker für Windows“ (Entwicklungsumgebung), können Docker-Entwickler eine einzelne Docker-CLI verwenden, um Apps sowohl für Windows als auch für Linux (Laufzeitumgebung) zu erstellen. Außerdem unterstützt Visual Studio Code mit IntelliSense für Docker-Dateien und Verknüpfungsaufgaben, um Docker-Befehle aus dem Editor auszuführen, Erweiterungen für Docker.
> [!NOTE]
> Sie können Visual Studio Code hier herunterladen: <https://code.visualstudio.com/download>.
>
> Docker für Mac und Windows können Sie hier herunterladen: <https://www.docker.com/products/docker>.
### <a name="visual-studio-with-docker-tools-windows-development-machine"></a>Visual Studio mit Docker-Tools (Windows-Entwicklungscomputer)
Es wird empfohlen, Visual Studio 2019 mit den aktivierten integrierten Docker-Tools zu verwenden. Mit Visual Studio können Sie Ihre Anwendungen direkt in der gewählten Docker-Umgebung entwickeln, ausführen und überprüfen. Drücken Sie F5, um Ihre Anwendungen (einzelner oder mehrere Container) direkt auf einem Docker-Host zu debuggen. Alternativ drücken Sie STRG+F5, um Ihre App zu bearbeiten und zu aktualisieren, ohne den Container erneut erstellen zu müssen. Dies ist die einfachste und stärkste Wahl für Windows-Entwickler, um Docker-Container für Linux oder Windows zu erstellen.
### <a name="visual-studio-for-mac-mac-development-machine"></a>Visual Studio für Mac (Macintosh-Entwicklungscomputer)
Sie können [Visual Studio für Mac](https://visualstudio.microsoft.com/vs/mac/?utm_medium=microsoft&utm_source=docs.microsoft.com&utm_campaign=inline+link) zum Entwickeln von Docker-basierten Anwendungen verwenden. Visual Studio für Mac bietet im Vergleich mit Visual Studio Code für Mac die reichhaltiger ausgestattete IDE.
## <a name="language-and-framework-choices"></a>Optionen zu Sprache und Framework
Sie können Docker-Anwendungen mithilfe von Microsoft-Tools in den meisten modernen Sprachen entwickeln. Das Folgende ist eine erste Liste, auf die Sie aber nicht beschränkt sind:
- .NET und ASP.NET Core
- Node.js
- Gehe zu
- Java
- Ruby
- Python
Im Grunde genommen können Sie jede moderne Sprache verwenden, die von Docker unter Linux oder Windows unterstützt wird.
>[!div class="step-by-step"]
>[Zurück](deploy-azure-kubernetes-service.md)
>[Weiter](docker-apps-inner-loop-workflow.md)
# DevExpress Users Controls
Analogy Log Viewer uses DevExpress user controls for its UI.
In order to compile this code, please download version 19.1.5 from the [DevExpress site](https://www.devexpress.com/).
The following DLLs are needed for compilation (put them in this folder or update the Analogy project references):
1. DevExpress.Charts.v19.1.Core.dll
2. DevExpress.Data.v19.1.dll
3. DevExpress.Images.v19.1.dll
4. DevExpress.Printing.v19.1.Core.dll
5. DevExpress.Utils.v19.1.dll
6. DevExpress.XtraBars.v19.1.dll
7. DevExpress.XtraCharts.v19.1.dll
8. DevExpress.XtraCharts.v19.1.UI.dll
9. DevExpress.XtraCharts.v19.1.Wizard.dll
10. DevExpress.XtraDialogs.v19.1.dll
11. DevExpress.XtraEditors.v19.1.dll
12. DevExpress.XtraGrid.v19.1.dll
13. DevExpress.XtraLayout.v19.1.dll
14. DevExpress.XtraPrinting.v19.1.dll
15. DevExpress.XtraTreeList.v19.1.dll
# Statement
---
Write a program that determines a student's final grade.
### Requirements
* Apply the pointer concept.
* The final grade is obtained from three partial grades: the first is worth 30%, the second 30%, and the third the remaining percentage (40%).
* The grades should be entered from the keyboard and accessed through pointers.
* Finally, the student's final grade must be printed.
---
title: 'CLI (v2) Azure Data Lake Gen1 datastore YAML schema'
titleSuffix: Azure Machine Learning
description: Reference documentation for the CLI (v2) Azure Data Lake Gen1 datastore YAML schema.
services: machine-learning
ms.service: machine-learning
ms.subservice: mldata
ms.topic: reference
author: ynpandey
ms.author: yogipandey
ms.date: 10/21/2021
ms.reviewer: laobri
---
# CLI (v2) Azure Data Lake Gen1 YAML schema
The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/azureDataLakeGen1.schema.json.
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
## YAML syntax
| Key | Type | Description | Allowed values | Default value |
| --- | ---- | ----------- | -------------- | ------- |
| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
| `type` | string | **Required.** The type of datastore. | `azure_data_lake_gen1` | |
| `name` | string | **Required.** Name of the datastore. | | |
| `description` | string | Description of the datastore. | | |
| `tags` | object | Dictionary of tags for the datastore. | | |
| `store_name` | string | **Required.** Name of the Azure Data Lake Storage Gen1 account. | | |
| `credentials` | object | Service principal credentials for connecting to the Azure storage account. Credential secrets are stored in the workspace key vault. | | |
| `credentials.tenant_id` | string | The tenant ID of the service principal. **Required if `credentials` is specified.** | | |
| `credentials.client_id` | string | The client ID of the service principal. **Required if `credentials` is specified.** | | |
| `credentials.client_secret` | string | The client secret of the service principal. **Required if `credentials` is specified.** | | |
| `credentials.resource_url` | string | The resource URL that determines what operations will be performed on the Azure Data Lake Storage Gen1 account. | | `https://datalake.azure.net/` |
| `credentials.authority_url` | string | The authority URL used to authenticate the user. | | `https://login.microsoftonline.com` |
## Remarks
The `az ml datastore` command can be used for managing Azure Machine Learning datastores.
## Examples
Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore). Several are shown below.
## YAML: identity-based access
:::code language="yaml" source="~/azureml-examples-main/cli/resources/datastore/adls-gen1-credless.yml":::
## YAML: tenant ID, client ID, client secret
:::code language="yaml" source="~/azureml-examples-main/cli/resources/datastore/adls-gen1.yml":::
## Next steps
- [Install and use the CLI (v2)](how-to-configure-cli.md)
---
layout: watch
title: TLP4 - 07/06/2019 - M20190607_082433_TLP_4T.jpg
date: 2019-06-07 08:24:33
permalink: /2019/06/07/watch/M20190607_082433_TLP_4
capture: TLP4/2019/201906/20190606/M20190607_082433_TLP_4T.jpg
---
See: \ref tutorial_math_levenberg_marquardt
Example implementation source code `LevMarqTest_impl.cpp`:
\include math_optimize_lm_example/LevMarqTest_impl.cpp
---
layout: single
title: Forecasting Power Demand and SMP with Public Data
---
# Following the 2nd-Place Code by Choi Jeong-myeong
***
## Overview
---
* Goal: predict daily power demand and SMP for the Jeju region over the 28 days starting one week ahead
> 1 daily power supply-and-demand value
> 3 daily SMP values (weighted average, maximum, minimum)
* Constraints
> The prediction model must take at least 1 and at most 5 distinct weather attributes as input
> (SMP and power supply-and-demand features are not limited in number)
* Provided datasets
> Hourly weather data (weather_v1.csv, weather_v2.csv)
> Daily power supply-and-demand records (target_v1.csv, target_v2.csv)
> Period: 2018-02-01 ~ 2020-01-31 / 2018-02-01 ~ 2020-05-18
# 1. Library & Data
#### External data
- SMP, supply, and temperature data were collected for 2010 through 2020, and only data up to 2020-05-18 were used.
- SMP (Jeju) daily data [2010-01-01 ~ 2020-05-18]: https://www.kpx.or.kr/www/contents.do?key=226
- supply (Jeju) daily data [2010-01-01 ~ 2020-05-18]: https://www.kpx.or.kr/www/contents.do?key=356
- Temperature (Jeju) daily data (mean, minimum, maximum) [2010-01-01~2020-05-18]: https://data.kma.go.kr/data/grnd/selectAsosRltmList.do?pgmNo=36
- Depending on when the external data are downloaded, additional recent rows may exist, so the step that slices the data up to 2020-05-18 may differ.
```python
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import glob
from tqdm.notebook import tqdm
smp_files = glob.glob('data/smp/*')
supply_files = glob.glob('data/supply/*')
weather_files=glob.glob('data/weather/*')
```
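One caveat with the `glob.glob` calls above: glob does not guarantee any ordering, so wrapping the result in `sorted()` keeps the monthly files chronological. A self-contained sketch (the file names are made up to mimic the supply exports):

```python
import glob
import os
import tempfile

# Create two dummy monthly files in deliberately reversed order.
tmpdir = tempfile.mkdtemp()
for name in ['supply_201002.xls', 'supply_201001.xls']:
    open(os.path.join(tmpdir, name), 'w').close()

# sorted() gives a stable, chronological order regardless of the OS.
files = sorted(glob.glob(os.path.join(tmpdir, '*')))
print([os.path.basename(f) for f in files])  # → ['supply_201001.xls', 'supply_201002.xls']
```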
# 2. Data preprocessing
## SMP
* Concatenate the daily SMP data, keep the date/min/max/mean columns, and slice the period 2010-01-01 through 2020-05-18
```python
smp_dfs = []
for i in range(len(smp_files)):
    # No need to drop header rows one by one: pd.read_csv(..., skiprows=3)
    # would skip the first three lines and read from the next row.
    tmp = pd.read_csv(smp_files[i])[['구분', '최소', '최대', '평균']]
    tmp['구분'] = pd.to_datetime(tmp['구분'].astype(str))
    smp_dfs.append(tmp)
smp_df = pd.concat(smp_dfs, ignore_index=True)
smp_df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>구분</th>
<th>최소</th>
<th>최대</th>
<th>평균</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>146.94</td>
<td>174.02</td>
<td>157.70</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>147.44</td>
<td>159.51</td>
<td>153.33</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>148.40</td>
<td>176.15</td>
<td>158.78</td>
</tr>
<tr>
<th>3</th>
<td>2010-01-04</td>
<td>150.18</td>
<td>168.55</td>
<td>161.65</td>
</tr>
<tr>
<th>4</th>
<td>2010-01-05</td>
<td>155.30</td>
<td>290.69</td>
<td>169.82</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>4013</th>
<td>2020-12-27</td>
<td>68.81</td>
<td>97.95</td>
<td>74.01</td>
</tr>
<tr>
<th>4014</th>
<td>2020-12-28</td>
<td>69.76</td>
<td>96.01</td>
<td>76.83</td>
</tr>
<tr>
<th>4015</th>
<td>2020-12-29</td>
<td>69.73</td>
<td>92.92</td>
<td>74.96</td>
</tr>
<tr>
<th>4016</th>
<td>2020-12-30</td>
<td>63.18</td>
<td>71.95</td>
<td>69.73</td>
</tr>
<tr>
<th>4017</th>
<td>2020-12-31</td>
<td>69.66</td>
<td>99.75</td>
<td>76.42</td>
</tr>
</tbody>
</table>
<p>4018 rows × 4 columns</p>
</div>
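The `skiprows` shortcut mentioned in the comment above can be checked on a synthetic file; the three preamble lines here are hypothetical stand-ins for the header lines of a KPX export:

```python
import io
import pandas as pd

# Made-up export: three junk lines, then the real header row.
raw = """SMP report
unit: won/kWh
generated: 2020-05-18
구분,최소,최대,평균
20100101,146.94,174.02,157.70
20100102,147.44,159.51,153.33
"""
df = pd.read_csv(io.StringIO(raw), skiprows=3)   # jump straight to the header
df['구분'] = pd.to_datetime(df['구분'].astype(str))  # 20100101 -> 2010-01-01
print(df.shape)  # → (2, 4)
```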
```python
_row = (smp_df['구분'] >= '2010-01-01') & (smp_df['구분'] <= '2020-05-18')
smp_df = smp_df.loc[_row,:]
smp_df = smp_df.reset_index(drop=True)
smp_df.rename(columns={"구분": "일시"},inplace=True)
```
```python
smp_df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>최소</th>
<th>최대</th>
<th>평균</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>146.94</td>
<td>174.02</td>
<td>157.70</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>147.44</td>
<td>159.51</td>
<td>153.33</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>148.40</td>
<td>176.15</td>
<td>158.78</td>
</tr>
<tr>
<th>3</th>
<td>2010-01-04</td>
<td>150.18</td>
<td>168.55</td>
<td>161.65</td>
</tr>
<tr>
<th>4</th>
<td>2010-01-05</td>
<td>155.30</td>
<td>290.69</td>
<td>169.82</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>3786</th>
<td>2020-05-14</td>
<td>66.78</td>
<td>193.28</td>
<td>100.46</td>
</tr>
<tr>
<th>3787</th>
<td>2020-05-15</td>
<td>61.81</td>
<td>198.23</td>
<td>102.38</td>
</tr>
<tr>
<th>3788</th>
<td>2020-05-16</td>
<td>88.50</td>
<td>220.91</td>
<td>121.19</td>
</tr>
<tr>
<th>3789</th>
<td>2020-05-17</td>
<td>65.78</td>
<td>207.75</td>
<td>116.82</td>
</tr>
<tr>
<th>3790</th>
<td>2020-05-18</td>
<td>66.86</td>
<td>113.31</td>
<td>98.98</td>
</tr>
</tbody>
</table>
<p>3791 rows × 4 columns</p>
</div>
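The slice-and-rename step above, condensed on a toy frame (the values are made up):

```python
import pandas as pd

df = pd.DataFrame({'구분': pd.to_datetime(['2009-12-31', '2010-01-01', '2020-05-19']),
                   '평균': [150.0, 157.7, 99.0]})
# Boolean mask keeps only rows inside the study period.
mask = (df['구분'] >= '2010-01-01') & (df['구분'] <= '2020-05-18')
df = df.loc[mask, :].reset_index(drop=True)
df.rename(columns={'구분': '일시'}, inplace=True)
print(len(df), list(df.columns))  # → 1 ['일시', '평균']
```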
## Supply
* Daily installed capacity (MW), supply capability (MW), peak load (previous year, current year, growth rate; MW, %), supply reserve (MW), and reserve rate (%)
* Installed capacity (설비용량)
> The sum of the generator-terminal output (or installed capacity) of the power plants connected to and operating on the power system. Facilities not connected to the grid, such as privately owned generators, or units that are connected but not used for the electricity business, such as small emergency generators, are not included in the system's installed capacity.
* Supply capability (공급능력)
> The maximum generation output that can be supplied reliably when peak demand occurs. It is expressed as the sum of each generator's available capacity, i.e., installed capacity minus the amount that cannot be generated due to maintenance or outages.
* Peak load (최대전력)
> The maximum one-hour average power demand over a given period. Depending on the period, it is classified as daily, weekly, monthly, or annual peak demand, and the hour at which it occurs varies with the day of the week, season, weather, and consumption patterns. In summer, peak demand typically occurs in the afternoon around 15:00 when air-conditioning use is high; in winter, it occurs in the evening around 21:00.
* Supply reserve (공급예비력)
> The difference between the available supply capability (total installed capacity minus predictable output reductions) and the power demand. It is maintained so that power can be supplied smoothly despite same-day demand-forecast errors, generator outages, system frequency regulation, and other instantaneous load fluctuations.
* Reserve rate (공급예비율)
> The supply reserve divided by peak demand, expressed as a percentage. It is one measure of how much headroom the power system has.
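The reserve-rate definition above is simple arithmetic: supply reserve (MW) divided by peak demand (MW), times 100. A quick sketch, using made-up figures close to the 2010-01-01 row of the table:

```python
def reserve_rate(reserve_mw, peak_demand_mw):
    """공급예비율 (%) = 공급예비력 / 최대전력 * 100."""
    return reserve_mw / peak_demand_mw * 100

# Roughly the first supply row: a 20 MW reserve on a 51 MW peak.
print(round(reserve_rate(20.0, 51.0), 1))  # → 39.2
```

The small gap versus the published 38.90% comes from rounding in the published MW figures.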
```python
supply_files
```
['data/supply\\supply_201001.xls',
'data/supply\\supply_201002.xls',
'data/supply\\supply_201003.xls',
'data/supply\\supply_201004.xls',
'data/supply\\supply_201005.xls',
'data/supply\\supply_201006.xls',
'data/supply\\supply_201007.xls',
'data/supply\\supply_201008.xls',
'data/supply\\supply_201009.xls',
'data/supply\\supply_201010.xls',
'data/supply\\supply_201011.xls',
'data/supply\\supply_201012.xls',
'data/supply\\supply_201101.xls',
'data/supply\\supply_201102.xls',
'data/supply\\supply_201103.xls',
'data/supply\\supply_201104.xls',
'data/supply\\supply_201105.xls',
'data/supply\\supply_201106.xls',
'data/supply\\supply_201107.xls',
'data/supply\\supply_201108.xls',
'data/supply\\supply_201109.xls',
'data/supply\\supply_201110.xls',
'data/supply\\supply_201111.xls',
'data/supply\\supply_201112.xls',
'data/supply\\supply_201201.xls',
'data/supply\\supply_201202.xls',
'data/supply\\supply_201203.xls',
'data/supply\\supply_201204.xls',
'data/supply\\supply_201205.xls',
'data/supply\\supply_201206.xls',
'data/supply\\supply_201207.xls',
'data/supply\\supply_201208.xls',
'data/supply\\supply_201209.xls',
'data/supply\\supply_201210.xls',
'data/supply\\supply_201211.xls',
'data/supply\\supply_201212.xls',
'data/supply\\supply_201301.xls',
'data/supply\\supply_201302.xls',
'data/supply\\supply_201303.xls',
'data/supply\\supply_201304.xls',
'data/supply\\supply_201305.xls',
'data/supply\\supply_201306.xls',
'data/supply\\supply_201307.xls',
'data/supply\\supply_201308.xls',
'data/supply\\supply_201309.xls',
'data/supply\\supply_201310.xls',
'data/supply\\supply_201311.xls',
'data/supply\\supply_201312.xls',
'data/supply\\supply_201401.xls',
'data/supply\\supply_201402.xls',
'data/supply\\supply_201403.xls',
'data/supply\\supply_201404.xls',
'data/supply\\supply_201405.xls',
'data/supply\\supply_201406.xls',
'data/supply\\supply_201407.xls',
'data/supply\\supply_201408.xls',
'data/supply\\supply_201409.xls',
'data/supply\\supply_201410.xls',
'data/supply\\supply_201411.xls',
'data/supply\\supply_201412.xls',
'data/supply\\supply_201501.xls',
'data/supply\\supply_201502.xls',
'data/supply\\supply_201503.xls',
'data/supply\\supply_201504.xls',
'data/supply\\supply_201505.xls',
'data/supply\\supply_201506.xls',
'data/supply\\supply_201507.xls',
'data/supply\\supply_201508.xls',
'data/supply\\supply_201509.xls',
'data/supply\\supply_201510.xls',
'data/supply\\supply_201511.xls',
'data/supply\\supply_201512.xls',
'data/supply\\supply_201601.xls',
'data/supply\\supply_201602.xls',
'data/supply\\supply_201603.xls',
'data/supply\\supply_201604.xls',
'data/supply\\supply_201605.xls',
'data/supply\\supply_201606.xls',
'data/supply\\supply_201607.xls',
'data/supply\\supply_201608.xls',
'data/supply\\supply_201609.xls',
'data/supply\\supply_201610.xls',
'data/supply\\supply_201611.xls',
'data/supply\\supply_201612.xls',
'data/supply\\supply_201701.xls',
'data/supply\\supply_201702.xls',
'data/supply\\supply_201703.xls',
'data/supply\\supply_201704.xls',
'data/supply\\supply_201705.xls',
'data/supply\\supply_201706.xls',
'data/supply\\supply_201707.xls',
'data/supply\\supply_201708.xls',
'data/supply\\supply_201709.xls',
'data/supply\\supply_201710.xls',
'data/supply\\supply_201711.xls',
'data/supply\\supply_201712.xls',
'data/supply\\supply_201801.xls',
'data/supply\\supply_201802.xls',
'data/supply\\supply_201803.xls',
'data/supply\\supply_201804.xls',
'data/supply\\supply_201805.xls',
'data/supply\\supply_201806.xls',
'data/supply\\supply_201807.xls',
'data/supply\\supply_201808.xls',
'data/supply\\supply_201809.xls',
'data/supply\\supply_201810.xls',
'data/supply\\supply_201811.xls',
'data/supply\\supply_201812.xls',
'data/supply\\supply_201901.xls',
'data/supply\\supply_201902.xls',
'data/supply\\supply_201903.xls',
'data/supply\\supply_201904.xls',
'data/supply\\supply_201905.xls',
'data/supply\\supply_201906.xls',
'data/supply\\supply_201907.xls',
'data/supply\\supply_201908.xls',
'data/supply\\supply_201909.xls',
'data/supply\\supply_201910.xls',
'data/supply\\supply_201911.xls',
'data/supply\\supply_201912.xls',
'data/supply\\supply_202001.xls',
'data/supply\\supply_202002.xls',
'data/supply\\supply_202003.xls',
'data/supply\\supply_202004.xls',
'data/supply\\supply_202005.xls']
```python
supply_dfs = []
for file in supply_files:
    tmp = pd.read_excel(file).iloc[1:,1:]
    tmp.columns = ['일시','설비용량','공급능력','전년최대전력','금년최대전력','증가율','공급예비력','예비율']
    # Cast each numeric column to float32.
    for col in ['설비용량','공급능력','전년최대전력','금년최대전력','증가율','공급예비력','예비율']:
        tmp[col] = tmp[col].astype(np.float32)
    supply_dfs.append(tmp)
# What is the difference between float and np.float32?
supply_df = pd.concat(supply_dfs)
supply_df = supply_df.sort_values('일시', ascending=True, ignore_index=True)
```
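On the question in the comment: a plain Python `float` is a 64-bit double, while `np.float32` is a single-precision NumPy scalar that halves the memory per value at the cost of roughly 6 significant decimal digits instead of 15:

```python
import numpy as np

# Decimal digits of precision for each width.
print(np.finfo(np.float64).precision, np.finfo(np.float32).precision)  # → 15 6
# 0.1 rounds differently at each width, so the values are not equal.
print(np.float32(0.1) == 0.1)  # → False
# Bytes per element: float32 uses half the memory of float64.
print(np.dtype(np.float32).itemsize, np.dtype(np.float64).itemsize)  # → 4 8
```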
* Convert the datetime format
* Keep only the year/month/day part of 일시 and drop the hour/minute/second data
```python
supply_df[['일','시']] = supply_df['일시'].str.split(expand=True)
supply_df['일시'] = (pd.to_datetime(supply_df.pop('일'), format='%Y/%m/%d')) #+ pd.to_timedelta(supply_df.pop('시') + ':00'))
```
```python
supply_df.drop(['시'],axis=1, inplace=True)
```
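The same date-splitting step, self-contained on a made-up two-row frame:

```python
import pandas as pd

df = pd.DataFrame({'일시': ['2010/01/01 13:00', '2010/01/02 14:00'], '값': [1, 2]})
df[['일', '시']] = df['일시'].str.split(expand=True)           # split on whitespace
df['일시'] = pd.to_datetime(df.pop('일'), format='%Y/%m/%d')   # keep the date only
df.drop(['시'], axis=1, inplace=True)                          # discard the time part
print(df['일시'].dt.date.tolist())
```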
* Check for missing values
```python
supply_df.isnull().sum().sum()
```
0
```python
start = '2010-01-01'
end = '2020-05-18'
```
```python
start_end = (supply_df['일시']>=start) & (supply_df['일시']<=end)
```
```python
supply_df = supply_df[start_end]
```
```python
supply_df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>82.129997</td>
<td>38.900002</td>
<td>49.2</td>
<td>51</td>
<td>3</td>
<td>20.00</td>
<td>38.90</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>82.129997</td>
<td>41.099998</td>
<td>50.5</td>
<td>51</td>
<td>0.2</td>
<td>20.80</td>
<td>41.10</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>82.150002</td>
<td>41.299999</td>
<td>48.5</td>
<td>51</td>
<td>11.5</td>
<td>20.90</td>
<td>41.30</td>
</tr>
<tr>
<th>3</th>
<td>2010-01-04</td>
<td>82.150002</td>
<td>32.099998</td>
<td>47.2</td>
<td>54</td>
<td>17.7</td>
<td>17.40</td>
<td>32.10</td>
</tr>
<tr>
<th>4</th>
<td>2010-01-05</td>
<td>82.150002</td>
<td>42.799999</td>
<td>50.4</td>
<td>56</td>
<td>10.8</td>
<td>23.80</td>
<td>42.80</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>3785</th>
<td>2020-05-14</td>
<td>139.160004</td>
<td>87.849998</td>
<td>67.45</td>
<td>62.70</td>
<td>-7.05</td>
<td>55.08</td>
<td>87.85</td>
</tr>
<tr>
<th>3786</th>
<td>2020-05-15</td>
<td>139.160004</td>
<td>82.989998</td>
<td>65.81</td>
<td>64.91</td>
<td>-1.36</td>
<td>53.87</td>
<td>82.99</td>
</tr>
<tr>
<th>3787</th>
<td>2020-05-16</td>
<td>139.160004</td>
<td>73.599998</td>
<td>62.51</td>
<td>61.75</td>
<td>-1.22</td>
<td>45.45</td>
<td>73.60</td>
</tr>
<tr>
<th>3788</th>
<td>2020-05-17</td>
<td>139.160004</td>
<td>78.250000</td>
<td>62.95</td>
<td>61.55</td>
<td>-2.22</td>
<td>48.16</td>
<td>78.25</td>
</tr>
<tr>
<th>3789</th>
<td>2020-05-18</td>
<td>139.160004</td>
<td>84.540001</td>
<td>66.67</td>
<td>63.91</td>
<td>-4.13</td>
<td>54.03</td>
<td>84.54</td>
</tr>
</tbody>
</table>
<p>3790 rows × 8 columns</p>
</div>
```python
supply_df.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>82.129997</td>
<td>38.900002</td>
<td>49.2</td>
<td>51</td>
<td>3</td>
<td>20.0</td>
<td>38.9</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>82.129997</td>
<td>41.099998</td>
<td>50.5</td>
<td>51</td>
<td>0.2</td>
<td>20.8</td>
<td>41.1</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>82.150002</td>
<td>41.299999</td>
<td>48.5</td>
<td>51</td>
<td>11.5</td>
<td>20.9</td>
<td>41.3</td>
</tr>
</tbody>
</table>
</div>
```python
supply_df.tail(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
</thead>
<tbody>
<tr>
<th>3787</th>
<td>2020-05-16</td>
<td>139.160004</td>
<td>73.599998</td>
<td>62.51</td>
<td>61.75</td>
<td>-1.22</td>
<td>45.45</td>
<td>73.60</td>
</tr>
<tr>
<th>3788</th>
<td>2020-05-17</td>
<td>139.160004</td>
<td>78.250000</td>
<td>62.95</td>
<td>61.55</td>
<td>-2.22</td>
<td>48.16</td>
<td>78.25</td>
</tr>
<tr>
<th>3789</th>
<td>2020-05-18</td>
<td>139.160004</td>
<td>84.540001</td>
<td>66.67</td>
<td>63.91</td>
<td>-4.13</td>
<td>54.03</td>
<td>84.54</td>
</tr>
</tbody>
</table>
</div>
## Weather
```python
weather_files
```
['data/weather\\OBS_ASOS_DD_20211125175249.csv',
'data/weather\\OBS_ASOS_DD_20211125175328.csv']
```python
weather_dfs = []
for i in range(len(weather_files)):
tmp = pd.read_csv(weather_files[i],encoding='cp949')
tmp.columns = ['지점','지점명','일시','평균기온','최저기온','최고기온']
tmp.drop(['지점명'],axis=1, inplace=True)
weather_dfs.append(tmp)
weather_df = pd.concat(weather_dfs)
```
```python
weather_df.head(3)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>지점</th>
<th>일시</th>
<th>평균기온</th>
<th>최저기온</th>
<th>최고기온</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>184</td>
<td>2010-01-01</td>
<td>3.7</td>
<td>1.6</td>
<td>7.3</td>
</tr>
<tr>
<th>1</th>
<td>184</td>
<td>2010-01-02</td>
<td>9.0</td>
<td>2.7</td>
<td>14.5</td>
</tr>
<tr>
<th>2</th>
<td>184</td>
<td>2010-01-03</td>
<td>4.2</td>
<td>1.8</td>
<td>6.6</td>
</tr>
</tbody>
</table>
</div>
```python
type(weather_df)
```
pandas.core.frame.DataFrame
```python
start_end_weather = (weather_df['일시']>=start) & (weather_df['일시']<=end)
```
```python
weather_df = weather_df[start_end_weather]
```
```python
weather_df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>지점</th>
<th>일시</th>
<th>평균기온</th>
<th>최저기온</th>
<th>최고기온</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>184</td>
<td>2010-01-01</td>
<td>3.7</td>
<td>1.6</td>
<td>7.3</td>
</tr>
<tr>
<th>1</th>
<td>184</td>
<td>2010-01-02</td>
<td>9.0</td>
<td>2.7</td>
<td>14.5</td>
</tr>
<tr>
<th>2</th>
<td>184</td>
<td>2010-01-03</td>
<td>4.2</td>
<td>1.8</td>
<td>6.6</td>
</tr>
<tr>
<th>3</th>
<td>184</td>
<td>2010-01-04</td>
<td>6.6</td>
<td>2.0</td>
<td>12.5</td>
</tr>
<tr>
<th>4</th>
<td>184</td>
<td>2010-01-05</td>
<td>1.8</td>
<td>1.1</td>
<td>4.0</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>2216</th>
<td>189</td>
<td>2020-05-14</td>
<td>19.6</td>
<td>14.2</td>
<td>23.0</td>
</tr>
<tr>
<th>2217</th>
<td>189</td>
<td>2020-05-15</td>
<td>19.4</td>
<td>18.0</td>
<td>20.9</td>
</tr>
<tr>
<th>2218</th>
<td>189</td>
<td>2020-05-16</td>
<td>18.3</td>
<td>17.0</td>
<td>21.9</td>
</tr>
<tr>
<th>2219</th>
<td>189</td>
<td>2020-05-17</td>
<td>17.8</td>
<td>14.9</td>
<td>20.7</td>
</tr>
<tr>
<th>2220</th>
<td>189</td>
<td>2020-05-18</td>
<td>19.4</td>
<td>17.2</td>
<td>21.5</td>
</tr>
</tbody>
</table>
<p>15164 rows × 5 columns</p>
</div>
```python
weather_df['일시'] = pd.to_datetime(weather_df['일시'].astype(str))
```
```python
weather_df = weather_df.groupby('일시').mean()
```
```python
# df.groupby('sex').mean()
# df.groupby(['sex', 'pclass'])['survived'].mean()
# i.e. the mean of ~~ grouped by ~~
```
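The groupby-mean pattern noted in the comments above can be illustrated on a toy frame (the data below is illustrative, not from this notebook):

```python
import pandas as pd

# Toy frame: two temperature readings per date, as if from two stations
df = pd.DataFrame({
    '일시': ['2010-01-01', '2010-01-01', '2010-01-02', '2010-01-02'],
    '평균기온': [3.7, 4.3, 9.0, 8.0],
})

# groupby('col').mean() -> per-group mean of every numeric column,
# which is exactly how weather_df collapses multiple stations per date
daily = df.groupby('일시').mean()
print(daily.loc['2010-01-01', '평균기온'])  # 4.0
```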
```python
weather_df.index
```
DatetimeIndex(['2010-01-01', '2010-01-02', '2010-01-03', '2010-01-04',
'2010-01-05', '2010-01-06', '2010-01-07', '2010-01-08',
'2010-01-09', '2010-01-10',
...
'2020-05-09', '2020-05-10', '2020-05-11', '2020-05-12',
'2020-05-13', '2020-05-14', '2020-05-15', '2020-05-16',
'2020-05-17', '2020-05-18'],
dtype='datetime64[ns]', name='일시', length=3791, freq=None)
```python
weather_df.reset_index(inplace=True)
```
```python
smp_df.set_index('일시',inplace=True)
supply_df.set_index('일시',inplace=True)
weather_df.set_index('일시',inplace=True)
```
```python
smp_df.index
```
DatetimeIndex(['2010-01-01', '2010-01-02', '2010-01-03', '2010-01-04',
'2010-01-05', '2010-01-06', '2010-01-07', '2010-01-08',
'2010-01-09', '2010-01-10',
...
'2020-05-09', '2020-05-10', '2020-05-11', '2020-05-12',
'2020-05-13', '2020-05-14', '2020-05-15', '2020-05-16',
'2020-05-17', '2020-05-18'],
dtype='datetime64[ns]', name='일시', length=3791, freq=None)
```python
supply_df.index
```
DatetimeIndex(['2010-01-01', '2010-01-02', '2010-01-03', '2010-01-04',
'2010-01-05', '2010-01-06', '2010-01-07', '2010-01-08',
'2010-01-09', '2010-01-10',
...
'2020-05-09', '2020-05-10', '2020-05-11', '2020-05-12',
'2020-05-13', '2020-05-14', '2020-05-15', '2020-05-16',
'2020-05-17', '2020-05-18'],
dtype='datetime64[ns]', name='일시', length=3790, freq=None)
```python
weather_df.index
```
DatetimeIndex(['2010-01-01', '2010-01-02', '2010-01-03', '2010-01-04',
'2010-01-05', '2010-01-06', '2010-01-07', '2010-01-08',
'2010-01-09', '2010-01-10',
...
'2020-05-09', '2020-05-10', '2020-05-11', '2020-05-12',
'2020-05-13', '2020-05-14', '2020-05-15', '2020-05-16',
'2020-05-17', '2020-05-18'],
dtype='datetime64[ns]', name='일시', length=3791, freq=None)
## Merging the Data (SMP, Supply, Weather)
```python
train_df = pd.concat([smp_df,weather_df,supply_df], axis=1)
```
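`pd.concat(..., axis=1)` aligns the three frames on their shared 일시 index; a date present in only some of the frames yields NaN in the others' columns, which is why the one row missing from supply_df surfaces as NaN further down. A minimal sketch with toy frames:

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]}, index=['d1', 'd2'])
b = pd.DataFrame({'y': [10]}, index=['d1'])  # 'd2' is missing here

# axis=1 concatenation takes the union of the indexes and
# fills unmatched positions with NaN
merged = pd.concat([a, b], axis=1)
print(pd.isna(merged.loc['d2', 'y']))  # True
```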
```python
train_df
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>최소</th>
<th>최대</th>
<th>평균</th>
<th>지점</th>
<th>평균기온</th>
<th>최저기온</th>
<th>최고기온</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
<tr>
<th>일시</th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<th>2010-01-01</th>
<td>146.94</td>
<td>174.02</td>
<td>157.70</td>
<td>186.5</td>
<td>3.800</td>
<td>0.750</td>
<td>8.075</td>
<td>82.129997</td>
<td>38.900002</td>
<td>49.2</td>
<td>51</td>
<td>3</td>
<td>20.00</td>
<td>38.90</td>
</tr>
<tr>
<th>2010-01-02</th>
<td>147.44</td>
<td>159.51</td>
<td>153.33</td>
<td>186.5</td>
<td>9.450</td>
<td>3.750</td>
<td>15.150</td>
<td>82.129997</td>
<td>41.099998</td>
<td>50.5</td>
<td>51</td>
<td>0.2</td>
<td>20.80</td>
<td>41.10</td>
</tr>
<tr>
<th>2010-01-03</th>
<td>148.40</td>
<td>176.15</td>
<td>158.78</td>
<td>186.5</td>
<td>4.775</td>
<td>2.350</td>
<td>7.200</td>
<td>82.150002</td>
<td>41.299999</td>
<td>48.5</td>
<td>51</td>
<td>11.5</td>
<td>20.90</td>
<td>41.30</td>
</tr>
<tr>
<th>2010-01-04</th>
<td>150.18</td>
<td>168.55</td>
<td>161.65</td>
<td>186.5</td>
<td>7.700</td>
<td>2.275</td>
<td>13.500</td>
<td>82.150002</td>
<td>32.099998</td>
<td>47.2</td>
<td>54</td>
<td>17.7</td>
<td>17.40</td>
<td>32.10</td>
</tr>
<tr>
<th>2010-01-05</th>
<td>155.30</td>
<td>290.69</td>
<td>169.82</td>
<td>186.5</td>
<td>1.825</td>
<td>0.775</td>
<td>4.275</td>
<td>82.150002</td>
<td>42.799999</td>
<td>50.4</td>
<td>56</td>
<td>10.8</td>
<td>23.80</td>
<td>42.80</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>2020-05-14</th>
<td>66.78</td>
<td>193.28</td>
<td>100.46</td>
<td>186.5</td>
<td>19.125</td>
<td>13.475</td>
<td>23.650</td>
<td>139.160004</td>
<td>87.849998</td>
<td>67.45</td>
<td>62.70</td>
<td>-7.05</td>
<td>55.08</td>
<td>87.85</td>
</tr>
<tr>
<th>2020-05-15</th>
<td>61.81</td>
<td>198.23</td>
<td>102.38</td>
<td>186.5</td>
<td>19.775</td>
<td>18.125</td>
<td>22.900</td>
<td>139.160004</td>
<td>82.989998</td>
<td>65.81</td>
<td>64.91</td>
<td>-1.36</td>
<td>53.87</td>
<td>82.99</td>
</tr>
<tr>
<th>2020-05-16</th>
<td>88.50</td>
<td>220.91</td>
<td>121.19</td>
<td>186.5</td>
<td>17.275</td>
<td>15.250</td>
<td>20.300</td>
<td>139.160004</td>
<td>73.599998</td>
<td>62.51</td>
<td>61.75</td>
<td>-1.22</td>
<td>45.45</td>
<td>73.60</td>
</tr>
<tr>
<th>2020-05-17</th>
<td>65.78</td>
<td>207.75</td>
<td>116.82</td>
<td>186.5</td>
<td>17.400</td>
<td>13.925</td>
<td>21.050</td>
<td>139.160004</td>
<td>78.250000</td>
<td>62.95</td>
<td>61.55</td>
<td>-2.22</td>
<td>48.16</td>
<td>78.25</td>
</tr>
<tr>
<th>2020-05-18</th>
<td>66.86</td>
<td>113.31</td>
<td>98.98</td>
<td>186.5</td>
<td>18.875</td>
<td>15.600</td>
<td>23.925</td>
<td>139.160004</td>
<td>84.540001</td>
<td>66.67</td>
<td>63.91</td>
<td>-4.13</td>
<td>54.03</td>
<td>84.54</td>
</tr>
</tbody>
</table>
<p>3791 rows × 14 columns</p>
</div>
```python
train_df =train_df.reset_index()
```
* Find and drop the dates that are missing from supply_df
```python
train_df.loc[(train_df['설비용량'].isna())]
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>최소</th>
<th>최대</th>
<th>평균</th>
<th>지점</th>
<th>평균기온</th>
<th>최저기온</th>
<th>최고기온</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
</thead>
<tbody>
<tr>
<th>2328</th>
<td>2016-05-17</td>
<td>72.81</td>
<td>78.03</td>
<td>75.37</td>
<td>186.5</td>
<td>18.7</td>
<td>12.5</td>
<td>24.075</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
```python
train_df = train_df.drop(index=2328)
```
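The explicit `drop(index=2328)` above removes the single row whose supply columns are all NaN. Assuming NaNs occur only in such rows, the same result can be had without hard-coding the index, via `dropna` on one of the supply columns (a sketch on toy data):

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the merge result: one row lacks supply data
df = pd.DataFrame({'설비용량': [82.1, np.nan, 139.2],
                   '평균':     [157.7, 75.4, 98.9]})

# Equivalent to drop(index=...) when the NaNs are confined to supply columns
cleaned = df.dropna(subset=['설비용량']).reset_index(drop=True)
print(len(cleaned))  # 2
```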
```python
train_df.reset_index(drop=True)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>최소</th>
<th>최대</th>
<th>평균</th>
<th>지점</th>
<th>평균기온</th>
<th>최저기온</th>
<th>최고기온</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>146.94</td>
<td>174.02</td>
<td>157.70</td>
<td>186.5</td>
<td>3.800</td>
<td>0.750</td>
<td>8.075</td>
<td>82.129997</td>
<td>38.900002</td>
<td>49.2</td>
<td>51</td>
<td>3</td>
<td>20.00</td>
<td>38.90</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>147.44</td>
<td>159.51</td>
<td>153.33</td>
<td>186.5</td>
<td>9.450</td>
<td>3.750</td>
<td>15.150</td>
<td>82.129997</td>
<td>41.099998</td>
<td>50.5</td>
<td>51</td>
<td>0.2</td>
<td>20.80</td>
<td>41.10</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>148.40</td>
<td>176.15</td>
<td>158.78</td>
<td>186.5</td>
<td>4.775</td>
<td>2.350</td>
<td>7.200</td>
<td>82.150002</td>
<td>41.299999</td>
<td>48.5</td>
<td>51</td>
<td>11.5</td>
<td>20.90</td>
<td>41.30</td>
</tr>
<tr>
<th>3</th>
<td>2010-01-04</td>
<td>150.18</td>
<td>168.55</td>
<td>161.65</td>
<td>186.5</td>
<td>7.700</td>
<td>2.275</td>
<td>13.500</td>
<td>82.150002</td>
<td>32.099998</td>
<td>47.2</td>
<td>54</td>
<td>17.7</td>
<td>17.40</td>
<td>32.10</td>
</tr>
<tr>
<th>4</th>
<td>2010-01-05</td>
<td>155.30</td>
<td>290.69</td>
<td>169.82</td>
<td>186.5</td>
<td>1.825</td>
<td>0.775</td>
<td>4.275</td>
<td>82.150002</td>
<td>42.799999</td>
<td>50.4</td>
<td>56</td>
<td>10.8</td>
<td>23.80</td>
<td>42.80</td>
</tr>
<tr>
<th>...</th>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<th>3785</th>
<td>2020-05-14</td>
<td>66.78</td>
<td>193.28</td>
<td>100.46</td>
<td>186.5</td>
<td>19.125</td>
<td>13.475</td>
<td>23.650</td>
<td>139.160004</td>
<td>87.849998</td>
<td>67.45</td>
<td>62.70</td>
<td>-7.05</td>
<td>55.08</td>
<td>87.85</td>
</tr>
<tr>
<th>3786</th>
<td>2020-05-15</td>
<td>61.81</td>
<td>198.23</td>
<td>102.38</td>
<td>186.5</td>
<td>19.775</td>
<td>18.125</td>
<td>22.900</td>
<td>139.160004</td>
<td>82.989998</td>
<td>65.81</td>
<td>64.91</td>
<td>-1.36</td>
<td>53.87</td>
<td>82.99</td>
</tr>
<tr>
<th>3787</th>
<td>2020-05-16</td>
<td>88.50</td>
<td>220.91</td>
<td>121.19</td>
<td>186.5</td>
<td>17.275</td>
<td>15.250</td>
<td>20.300</td>
<td>139.160004</td>
<td>73.599998</td>
<td>62.51</td>
<td>61.75</td>
<td>-1.22</td>
<td>45.45</td>
<td>73.60</td>
</tr>
<tr>
<th>3788</th>
<td>2020-05-17</td>
<td>65.78</td>
<td>207.75</td>
<td>116.82</td>
<td>186.5</td>
<td>17.400</td>
<td>13.925</td>
<td>21.050</td>
<td>139.160004</td>
<td>78.250000</td>
<td>62.95</td>
<td>61.55</td>
<td>-2.22</td>
<td>48.16</td>
<td>78.25</td>
</tr>
<tr>
<th>3789</th>
<td>2020-05-18</td>
<td>66.86</td>
<td>113.31</td>
<td>98.98</td>
<td>186.5</td>
<td>18.875</td>
<td>15.600</td>
<td>23.925</td>
<td>139.160004</td>
<td>84.540001</td>
<td>66.67</td>
<td>63.91</td>
<td>-4.13</td>
<td>54.03</td>
<td>84.54</td>
</tr>
</tbody>
</table>
<p>3790 rows × 15 columns</p>
</div>
* Build the data set for training
```python
train_df.head(5)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>일시</th>
<th>최소</th>
<th>최대</th>
<th>평균</th>
<th>지점</th>
<th>평균기온</th>
<th>최저기온</th>
<th>최고기온</th>
<th>설비용량</th>
<th>공급능력</th>
<th>전년최대전력</th>
<th>금년최대전력</th>
<th>증가율</th>
<th>공급예비력</th>
<th>예비율</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>146.94</td>
<td>174.02</td>
<td>157.70</td>
<td>186.5</td>
<td>3.800</td>
<td>0.750</td>
<td>8.075</td>
<td>82.129997</td>
<td>38.900002</td>
<td>49.2</td>
<td>51</td>
<td>3</td>
<td>20.0</td>
<td>38.9</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>147.44</td>
<td>159.51</td>
<td>153.33</td>
<td>186.5</td>
<td>9.450</td>
<td>3.750</td>
<td>15.150</td>
<td>82.129997</td>
<td>41.099998</td>
<td>50.5</td>
<td>51</td>
<td>0.2</td>
<td>20.8</td>
<td>41.1</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>148.40</td>
<td>176.15</td>
<td>158.78</td>
<td>186.5</td>
<td>4.775</td>
<td>2.350</td>
<td>7.200</td>
<td>82.150002</td>
<td>41.299999</td>
<td>48.5</td>
<td>51</td>
<td>11.5</td>
<td>20.9</td>
<td>41.3</td>
</tr>
<tr>
<th>3</th>
<td>2010-01-04</td>
<td>150.18</td>
<td>168.55</td>
<td>161.65</td>
<td>186.5</td>
<td>7.700</td>
<td>2.275</td>
<td>13.500</td>
<td>82.150002</td>
<td>32.099998</td>
<td>47.2</td>
<td>54</td>
<td>17.7</td>
<td>17.4</td>
<td>32.1</td>
</tr>
<tr>
<th>4</th>
<td>2010-01-05</td>
<td>155.30</td>
<td>290.69</td>
<td>169.82</td>
<td>186.5</td>
<td>1.825</td>
<td>0.775</td>
<td>4.275</td>
<td>82.150002</td>
<td>42.799999</td>
<td>50.4</td>
<td>56</td>
<td>10.8</td>
<td>23.8</td>
<td>42.8</td>
</tr>
</tbody>
</table>
</div>
```python
train = pd.DataFrame()
train['date'] = train_df['일시']
train['month'] = train['date'].dt.month.astype(np.int32)
train['day'] = train['date'].dt.day.astype(np.int32)
train['weekday'] = train['date'].dt.weekday.astype(np.int32)
train['temp_mean'] = train_df['평균기온'].astype(np.float32)
train['temp_max'] = train_df['최고기온'].astype(np.float32)
train['temp_min'] = train_df['최저기온'].astype(np.float32)
train['supply_capacity'] = train_df['설비용량'].astype(np.float32)
train['supply_ability'] = train_df['공급능력'].astype(np.float32)
train['demand_preyear'] = train_df['전년최대전력'].astype(np.float32)
train['supply_reserve'] = train_df['공급예비력'].astype(np.float32)
train['supply_reserve_ratio'] = train_df['예비율'].astype(np.float32)
train['smp_min'] = train_df['최소'].astype(np.float32)
train['smp_max'] = train_df['최대'].astype(np.float32)
train['smp_mean'] = train_df['평균'].astype(np.float32)
train['demand_thisyear'] = train_df['금년최대전력'].astype(np.float32)
```
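The per-column `astype` calls above could equally be done in one pass with a column-to-dtype mapping, which keeps the casts in a single place (a sketch with two toy columns, not the full 16):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'month': [1, 2], 'temp_mean': [3.8, 9.45]})

# One astype call with a column->dtype dict replaces the per-column casts
df = df.astype({'month': np.int32, 'temp_mean': np.float32})
print(df.dtypes['month'])  # int32
```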
```python
train.tail(5)
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>date</th>
<th>month</th>
<th>day</th>
<th>weekday</th>
<th>temp_mean</th>
<th>temp_max</th>
<th>temp_min</th>
<th>supply_capacity</th>
<th>supply_ability</th>
<th>demand_preyear</th>
<th>supply_reserve</th>
<th>supply_reserve_ratio</th>
<th>smp_min</th>
<th>smp_max</th>
<th>smp_mean</th>
<th>demand_thisyear</th>
</tr>
</thead>
<tbody>
<tr>
<th>3786</th>
<td>2020-05-14</td>
<td>5</td>
<td>14</td>
<td>3</td>
<td>19.125</td>
<td>23.650000</td>
<td>13.475</td>
<td>139.160004</td>
<td>87.849998</td>
<td>67.449997</td>
<td>55.080002</td>
<td>87.849998</td>
<td>66.779999</td>
<td>193.279999</td>
<td>100.459999</td>
<td>62.700001</td>
</tr>
<tr>
<th>3787</th>
<td>2020-05-15</td>
<td>5</td>
<td>15</td>
<td>4</td>
<td>19.775</td>
<td>22.900000</td>
<td>18.125</td>
<td>139.160004</td>
<td>82.989998</td>
<td>65.809998</td>
<td>53.869999</td>
<td>82.989998</td>
<td>61.810001</td>
<td>198.229996</td>
<td>102.379997</td>
<td>64.910004</td>
</tr>
<tr>
<th>3788</th>
<td>2020-05-16</td>
<td>5</td>
<td>16</td>
<td>5</td>
<td>17.275</td>
<td>20.299999</td>
<td>15.250</td>
<td>139.160004</td>
<td>73.599998</td>
<td>62.509998</td>
<td>45.450001</td>
<td>73.599998</td>
<td>88.500000</td>
<td>220.910004</td>
<td>121.190002</td>
<td>61.750000</td>
</tr>
<tr>
<th>3789</th>
<td>2020-05-17</td>
<td>5</td>
<td>17</td>
<td>6</td>
<td>17.400</td>
<td>21.049999</td>
<td>13.925</td>
<td>139.160004</td>
<td>78.250000</td>
<td>62.950001</td>
<td>48.160000</td>
<td>78.250000</td>
<td>65.779999</td>
<td>207.750000</td>
<td>116.820000</td>
<td>61.549999</td>
</tr>
<tr>
<th>3790</th>
<td>2020-05-18</td>
<td>5</td>
<td>18</td>
<td>0</td>
<td>18.875</td>
<td>23.924999</td>
<td>15.600</td>
<td>139.160004</td>
<td>84.540001</td>
<td>66.669998</td>
<td>54.029999</td>
<td>84.540001</td>
<td>66.860001</td>
<td>113.309998</td>
<td>98.980003</td>
<td>63.910000</td>
</tr>
</tbody>
</table>
</div>
```python
train.head()
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>date</th>
<th>month</th>
<th>day</th>
<th>weekday</th>
<th>temp_mean</th>
<th>temp_max</th>
<th>temp_min</th>
<th>supply_capacity</th>
<th>supply_ability</th>
<th>demand_preyear</th>
<th>supply_reserve</th>
<th>supply_reserve_ratio</th>
<th>smp_min</th>
<th>smp_max</th>
<th>smp_mean</th>
<th>demand_thisyear</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2010-01-01</td>
<td>1</td>
<td>1</td>
<td>4</td>
<td>3.800</td>
<td>8.075</td>
<td>0.750</td>
<td>82.129997</td>
<td>38.900002</td>
<td>49.200001</td>
<td>20.000000</td>
<td>38.900002</td>
<td>146.940002</td>
<td>174.020004</td>
<td>157.699997</td>
<td>51.0</td>
</tr>
<tr>
<th>1</th>
<td>2010-01-02</td>
<td>1</td>
<td>2</td>
<td>5</td>
<td>9.450</td>
<td>15.150</td>
<td>3.750</td>
<td>82.129997</td>
<td>41.099998</td>
<td>50.500000</td>
<td>20.799999</td>
<td>41.099998</td>
<td>147.440002</td>
<td>159.509995</td>
<td>153.330002</td>
<td>51.0</td>
</tr>
<tr>
<th>2</th>
<td>2010-01-03</td>
<td>1</td>
<td>3</td>
<td>6</td>
<td>4.775</td>
<td>7.200</td>
<td>2.350</td>
<td>82.150002</td>
<td>41.299999</td>
<td>48.500000</td>
<td>20.900000</td>
<td>41.299999</td>
<td>148.399994</td>
<td>176.149994</td>
<td>158.779999</td>
<td>51.0</td>
</tr>
<tr>
<th>3</th>
<td>2010-01-04</td>
<td>1</td>
<td>4</td>
<td>0</td>
<td>7.700</td>
<td>13.500</td>
<td>2.275</td>
<td>82.150002</td>
<td>32.099998</td>
<td>47.200001</td>
<td>17.400000</td>
<td>32.099998</td>
<td>150.179993</td>
<td>168.550003</td>
<td>161.649994</td>
<td>54.0</td>
</tr>
<tr>
<th>4</th>
<td>2010-01-05</td>
<td>1</td>
<td>5</td>
<td>1</td>
<td>1.825</td>
<td>4.275</td>
<td>0.775</td>
<td>82.150002</td>
<td>42.799999</td>
<td>50.400002</td>
<td>23.799999</td>
<td>42.799999</td>
<td>155.300003</td>
<td>290.690002</td>
<td>169.820007</td>
<td>56.0</td>
</tr>
</tbody>
</table>
</div>
```python
train.dtypes
```
date datetime64[ns]
month int32
day int32
weekday int32
temp_mean float32
temp_max float32
temp_min float32
supply_capacity float32
supply_ability float32
demand_preyear float32
supply_reserve float32
supply_reserve_ratio float32
smp_min float32
smp_max float32
smp_mean float32
demand_thisyear float32
dtype: object
# 3. EDA (Exploratory Data Analysis)
### SMP
* <span style="color:red"> smp_min (target y) </span>
* <span style="color:red"> smp_max (target y) </span>
* <span style="color:red"> smp_mean (target y) </span>
### Weather
* temp_mean
* temp_max
* temp_min
### Supply & Demand
* supply_capacity
* supply_ability
* demand_preyear
* <span style="color:red"> demand_thisyear (target y) </span>
* supply_reserve
* supply_reserve_ratio
## 3-0. Checking Cross-Correlations Among the Data
```python
corr = train.corr()
corr
```
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>month</th>
<th>day</th>
<th>weekday</th>
<th>temp_mean</th>
<th>temp_max</th>
<th>temp_min</th>
<th>supply_capacity</th>
<th>supply_ability</th>
<th>demand_preyear</th>
<th>supply_reserve</th>
<th>supply_reserve_ratio</th>
<th>smp_min</th>
<th>smp_max</th>
<th>smp_mean</th>
<th>demand_thisyear</th>
</tr>
</thead>
<tbody>
<tr>
<th>month</th>
<td>1.000000</td>
<td>0.012021</td>
<td>0.000002</td>
<td>0.372005</td>
<td>0.360755</td>
<td>0.373650</td>
<td>0.012593</td>
<td>0.165850</td>
<td>-0.123612</td>
<td>0.084941</td>
<td>0.165850</td>
<td>0.029235</td>
<td>0.027817</td>
<td>0.066462</td>
<td>-0.136914</td>
</tr>
<tr>
<th>day</th>
<td>0.012021</td>
<td>1.000000</td>
<td>-0.001792</td>
<td>0.016211</td>
<td>0.015162</td>
<td>0.013028</td>
<td>-0.003586</td>
<td>-0.030736</td>
<td>0.006030</td>
<td>-0.031606</td>
<td>-0.030736</td>
<td>0.009925</td>
<td>-0.004323</td>
<td>0.007196</td>
<td>0.006233</td>
</tr>
<tr>
<th>weekday</th>
<td>0.000002</td>
<td>-0.001792</td>
<td>1.000000</td>
<td>0.001491</td>
<td>0.004248</td>
<td>-0.005520</td>
<td>0.000868</td>
<td>0.130386</td>
<td>-0.018173</td>
<td>0.080198</td>
<td>0.130386</td>
<td>-0.003112</td>
<td>-0.018435</td>
<td>-0.015342</td>
<td>-0.090767</td>
</tr>
<tr>
<th>temp_mean</th>
<td>0.372005</td>
<td>0.016211</td>
<td>0.001491</td>
<td>1.000000</td>
<td>0.991592</td>
<td>0.988871</td>
<td>0.001796</td>
<td>0.014982</td>
<td>-0.110002</td>
<td>-0.093777</td>
<td>0.014982</td>
<td>0.023419</td>
<td>0.013939</td>
<td>0.057744</td>
<td>-0.142813</td>
</tr>
<tr>
<th>temp_max</th>
<td>0.360755</td>
<td>0.015162</td>
<td>0.004248</td>
<td>0.991592</td>
<td>1.000000</td>
<td>0.967211</td>
<td>0.006866</td>
<td>0.013932</td>
<td>-0.114568</td>
<td>-0.096475</td>
<td>0.013932</td>
<td>0.018629</td>
<td>0.010655</td>
<td>0.051792</td>
<td>-0.147384</td>
</tr>
<tr>
<th>temp_min</th>
<td>0.373650</td>
<td>0.013028</td>
<td>-0.005520</td>
<td>0.988871</td>
<td>0.967211</td>
<td>1.000000</td>
<td>-0.001579</td>
<td>0.012789</td>
<td>-0.101412</td>
<td>-0.089199</td>
<td>0.012789</td>
<td>0.025930</td>
<td>0.016445</td>
<td>0.061163</td>
<td>-0.129141</td>
</tr>
<tr>
<th>supply_capacity</th>
<td>0.012593</td>
<td>-0.003586</td>
<td>0.000868</td>
<td>0.001796</td>
<td>0.006866</td>
<td>-0.001579</td>
<td>1.000000</td>
<td>0.149925</td>
<td>0.716507</td>
<td>0.501496</td>
<td>0.149925</td>
<td>-0.522000</td>
<td>-0.298706</td>
<td>-0.462821</td>
<td>0.682936</td>
</tr>
<tr>
<th>supply_ability</th>
<td>0.165850</td>
<td>-0.030736</td>
<td>0.130386</td>
<td>0.014982</td>
<td>0.013932</td>
<td>0.012789</td>
<td>0.149925</td>
<td>1.000000</td>
<td>-0.052933</td>
<td>0.862704</td>
<td>1.000000</td>
<td>-0.282626</td>
<td>-0.267082</td>
<td>-0.280164</td>
<td>-0.225657</td>
</tr>
<tr>
<th>demand_preyear</th>
<td>-0.123612</td>
<td>0.006030</td>
<td>-0.018173</td>
<td>-0.110002</td>
<td>-0.114568</td>
<td>-0.101412</td>
<td>0.716507</td>
<td>-0.052933</td>
<td>1.000000</td>
<td>0.396011</td>
<td>-0.052933</td>
<td>-0.468423</td>
<td>-0.285415</td>
<td>-0.436473</td>
<td>0.897437</td>
</tr>
<tr>
<th>supply_reserve</th>
<td>0.084941</td>
<td>-0.031606</td>
<td>0.080198</td>
<td>-0.093777</td>
<td>-0.096475</td>
<td>-0.089199</td>
<td>0.501496</td>
<td>0.862704</td>
<td>0.396011</td>
<td>1.000000</td>
<td>0.862704</td>
<td>-0.506726</td>
<td>-0.400350</td>
<td>-0.489325</td>
<td>0.269309</td>
</tr>
<tr>
<th>supply_reserve_ratio</th>
<td>0.165850</td>
<td>-0.030736</td>
<td>0.130386</td>
<td>0.014982</td>
<td>0.013932</td>
<td>0.012789</td>
<td>0.149925</td>
<td>1.000000</td>
<td>-0.052933</td>
<td>0.862704</td>
<td>1.000000</td>
<td>-0.282626</td>
<td>-0.267082</td>
<td>-0.280164</td>
<td>-0.225657</td>
</tr>
<tr>
<th>smp_min</th>
<td>0.029235</td>
<td>0.009925</td>
<td>-0.003112</td>
<td>0.023419</td>
<td>0.018629</td>
<td>0.025930</td>
<td>-0.522000</td>
<td>-0.282626</td>
<td>-0.468423</td>
<td>-0.506726</td>
<td>-0.282626</td>
<td>1.000000</td>
<td>0.799001</td>
<td>0.956811</td>
<td>-0.482532</td>
</tr>
<tr>
<th>smp_max</th>
<td>0.027817</td>
<td>-0.004323</td>
<td>-0.018435</td>
<td>0.013939</td>
<td>0.010655</td>
<td>0.016445</td>
<td>-0.298706</td>
<td>-0.267082</td>
<td>-0.285415</td>
<td>-0.400350</td>
<td>-0.267082</td>
<td>0.799001</td>
<td>1.000000</td>
<td>0.877336</td>
<td>-0.316293</td>
</tr>
<tr>
<th>smp_mean</th>
<td>0.066462</td>
<td>0.007196</td>
<td>-0.015342</td>
<td>0.057744</td>
<td>0.051792</td>
<td>0.061163</td>
<td>-0.462821</td>
<td>-0.280164</td>
<td>-0.436473</td>
<td>-0.489325</td>
<td>-0.280164</td>
<td>0.956811</td>
<td>0.877336</td>
<td>1.000000</td>
<td>-0.462917</td>
</tr>
<tr>
<th>demand_thisyear</th>
<td>-0.136914</td>
<td>0.006233</td>
<td>-0.090767</td>
<td>-0.142813</td>
<td>-0.147384</td>
<td>-0.129141</td>
<td>0.682936</td>
<td>-0.225657</td>
<td>0.897437</td>
<td>0.269309</td>
<td>-0.225657</td>
<td>-0.482532</td>
<td>-0.316293</td>
<td>-0.462917</td>
<td>1.000000</td>
</tr>
</tbody>
</table>
</div>
## * **Data by correlation strength (targeting 'demand_thisyear')**
```python
condition1 = abs(corr['demand_thisyear'])>=0.7
condition2 = (abs(corr['demand_thisyear'])<0.7) & (abs(corr['demand_thisyear'])>=0.3)
condition3 = (abs(corr['demand_thisyear'])<0.3) & (abs(corr['demand_thisyear'])>=0.1)
condition4 = (abs(corr['demand_thisyear'])<0.1)
```
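The four boolean conditions above partition |corr| into the bands ≥ 0.7, [0.3, 0.7), [0.1, 0.3), and < 0.1. The same banding can be expressed in one step with `pd.cut` (a sketch on a few hand-copied values; note `pd.cut` intervals are right-closed, so an exact 0.7 would land one band lower than `condition1`):

```python
import pandas as pd

# A few |corr| values copied from the correlation table above
corr_col = pd.Series({'demand_preyear': 0.897, 'supply_capacity': 0.683,
                      'month': -0.137, 'day': 0.006})

# One labelled band per variable instead of four boolean masks
bands = pd.cut(corr_col.abs(), bins=[0, 0.1, 0.3, 0.7, 1.0],
               labels=['none', 'weak', 'moderate', 'strong'])
print(bands['demand_preyear'])  # strong
```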
```python
corr.loc[condition1]['demand_thisyear'] # demand_preyear
```
demand_preyear 0.897437
demand_thisyear 1.000000
Name: demand_thisyear, dtype: float64
```python
corr.loc[condition2]['demand_thisyear'] # supply_capacity, smp_min, smp_max, smp_mean
```
supply_capacity 0.682936
smp_min -0.482532
smp_max -0.316293
smp_mean -0.462917
Name: demand_thisyear, dtype: float64
```python
corr.loc[condition3]['demand_thisyear'] # month, temp_mean, temp_max, temp_min, supply_ability, supply_reserve, supply_reserve_ratio
```
month -0.136914
temp_mean -0.142813
temp_max -0.147384
temp_min -0.129141
supply_ability -0.225657
supply_reserve 0.269309
supply_reserve_ratio -0.225657
Name: demand_thisyear, dtype: float64
```python
corr.loc[condition4]['demand_thisyear'] # no meaningful correlation
```
day 0.006233
weekday -0.090767
Name: demand_thisyear, dtype: float64
## * **Data by correlation strength (targeting 'smp_min')**
```python
condition5 = abs(corr['smp_min'])>=0.7
condition6 = (abs(corr['smp_min'])<0.7) & (abs(corr['smp_min'])>=0.3)
condition7 = (abs(corr['smp_min'])<0.3) & (abs(corr['smp_min'])>=0.1)
condition8 = (abs(corr['smp_min'])<0.1)
```
```python
corr.loc[condition5]['smp_min']
```
smp_min 1.000000
smp_max 0.799001
smp_mean 0.956811
Name: smp_min, dtype: float64
```python
corr.loc[condition6]['smp_min'] # supply_capacity, demand_preyear, supply_reserve, demand_thisyear
```
supply_capacity -0.522000
demand_preyear -0.468423
supply_reserve -0.506726
demand_thisyear -0.482532
Name: smp_min, dtype: float64
```python
corr.loc[condition7]['smp_min']
```
supply_ability -0.282626
supply_reserve_ratio -0.282626
Name: smp_min, dtype: float64
```python
corr.loc[condition8]['smp_min']
```
month 0.029235
day 0.009925
weekday -0.003112
temp_mean 0.023419
temp_max 0.018629
temp_min 0.025930
Name: smp_min, dtype: float64
## * **Data by correlation strength (targeting 'smp_max')**
```python
condition9 = abs(corr['smp_max'])>=0.7
condition10 = (abs(corr['smp_max'])<0.7) & (abs(corr['smp_max'])>=0.3)
condition11 = (abs(corr['smp_max'])<0.3) & (abs(corr['smp_max'])>=0.1)
condition12 = (abs(corr['smp_max'])<0.1)
```
```python
corr.loc[condition9]['smp_max']
```
smp_min 0.799001
smp_max 1.000000
smp_mean 0.877336
Name: smp_max, dtype: float64
```python
corr.loc[condition10]['smp_max']
```
supply_reserve -0.400350
demand_thisyear -0.316293
Name: smp_max, dtype: float64
```python
corr.loc[condition11]['smp_max']
```
supply_capacity -0.298706
supply_ability -0.267082
demand_preyear -0.285415
supply_reserve_ratio -0.267082
Name: smp_max, dtype: float64
```python
corr.loc[condition12]['smp_max']
```
month 0.027817
day -0.004323
weekday -0.018435
temp_mean 0.013939
temp_max 0.010655
temp_min 0.016445
Name: smp_max, dtype: float64
## * **Data by correlation strength (targeting 'smp_mean')**
```python
condition13 = abs(corr['smp_mean'])>=0.7
condition14 = (abs(corr['smp_mean'])<0.7) & (abs(corr['smp_mean'])>=0.3)
condition15 = (abs(corr['smp_mean'])<0.3) & (abs(corr['smp_mean'])>=0.1)
condition16 = (abs(corr['smp_mean'])<0.1)
```
```python
corr.loc[condition13]['smp_mean']
```
smp_min 0.956811
smp_max 0.877336
smp_mean 1.000000
Name: smp_mean, dtype: float64
```python
corr.loc[condition14]['smp_max']
```
supply_capacity -0.298706
demand_preyear -0.285415
supply_reserve -0.400350
demand_thisyear -0.316293
Name: smp_max, dtype: float64
```python
corr.loc[condition14]['smp_mean']
```
supply_capacity -0.462821
demand_preyear -0.436473
supply_reserve -0.489325
demand_thisyear -0.462917
Name: smp_mean, dtype: float64
```python
corr.loc[condition15]['smp_mean']
```
supply_ability -0.280164
supply_reserve_ratio -0.280164
Name: smp_mean, dtype: float64
```python
corr.loc[condition16]['smp_mean']
```
month 0.066462
day 0.007196
weekday -0.015342
temp_mean 0.057744
temp_max 0.051792
temp_min 0.061163
Name: smp_mean, dtype: float64
## 3-1. Analyzing the Target y (demand_thisyear, daily peak demand for the current year)
```python
plt.figure(figsize=(15,5))
sns.lineplot(x='date', y='demand_thisyear', data=train, label='demand_thisyear')
plt.grid()
```

```python
plt.figure(figsize=(15,5))
sns.lineplot(x='date', y='supply_capacity', data=train, label='supply_capacity', color='r')
plt.grid()
```

```python
plt.figure(figsize=(15,5))
sns.lineplot(x='date', y='supply_ability', data=train, label='supply_ability', color='r')
plt.grid()
```

## 3-2. Analyzing the Target y (daily minimum, maximum, and mean SMP)
```python
plt.figure(figsize=(15,5))
sns.lineplot(x='date', y='smp_min', data=train, label='smp_min')
plt.grid()
```

```python
plt.figure(figsize=(15,5))
sns.lineplot(x='date', y='smp_max', data=train, label='smp_max')
plt.grid()
```

```python
plt.figure(figsize=(15,5))
sns.lineplot(x='date', y='smp_mean', data=train, label='smp_mean')
plt.grid()
```

## 3-3. Analyzing the Independent Variables x (targeting 'demand_thisyear')
* Strong correlation: **supply_capacity**, smp_min, smp_max, smp_mean
* Weak correlation: **supply_reserve, supply_ability, temp_max, temp_mean, temp_min, month**
```python
fig, ax = plt.subplots()
ax.scatter(x=train['supply_capacity'],y=train['demand_thisyear'])
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['smp_min'],y=train['demand_thisyear'])
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['smp_max'],y=train['demand_thisyear'])
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['smp_mean'],y=train['demand_thisyear'])
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['supply_reserve'],y=train['demand_thisyear'],color='r')
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['supply_ability'],y=train['demand_thisyear'],color='r')
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['temp_max'],y=train['demand_thisyear'],color='r')
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['temp_min'],y=train['demand_thisyear'],color='r')
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['temp_mean'],y=train['demand_thisyear'],color='r')
plt.show()
```

```python
fig, ax = plt.subplots()
ax.scatter(x=train['month'],y=train['demand_thisyear'],color='r')
plt.show()
```

# 4. Feature selection and model building
#### Feature selection (variables with correlation of 0.3 or more)
- demand_thisyear : demand_preyear, supply_capacity + date-related variables (month, day, weekday) + temperature-related variables (temp_max, temp_min, temp_mean)
- smp_mean : supply_capacity, demand_preyear, supply_reserve
- smp_max : supply_reserve
- smp_min : supply_capacity, demand_preyear, supply_reserve
#### Feature selection (from Choi Jeong-myeong's code)
- demand_thisyear : month, weekday, day, temp_mean, temp_max, temp_min, supply_capacity, **supply_ability**, demand_preyear, **supply_reserve**
- smp_mean : **temp_mean**
- smp_max : **day**, **weekday**, **month**, **temp_mean**, **smp_mean**
- smp_min : **temp_mean**, **demand_thisyear**
```python
demand_thisyear_use_col = ['month','weekday', 'day', 'temp_mean', 'temp_max', 'temp_min', 'supply_capacity', 'supply_ability', 'demand_preyear', 'supply_reserve','demand_thisyear']
smp_mean_use_col = ['supply_capacity', 'demand_preyear', 'supply_reserve','smp_mean']
smp_max_use_col = ['supply_reserve','smp_max']
smp_min_use_col = ['supply_capacity', 'demand_preyear', 'supply_reserve','smp_min']
```
#### Build models with LightGBM and parameter tuning (requires separate study)
```python
params1 = {
'objective':'mae',
'feature_fraction':0.1,
}
params2 = {
'objective':'mae',
}
params3= {
'objective':'mse',
'feature_fraction':0.1,
}
params4= {
'objective':'mse',
}
def TRAIN_lgbm(params, X_train_, y_train_, X_valid_, y_valid_, verbose=0):
    train_data = lgb.Dataset(X_train_, y_train_)
    valid_data = lgb.Dataset(X_valid_, y_valid_)
    model = lgb.train(params, train_data, num_boost_round=100000,
                      valid_sets=[train_data, valid_data],
                      verbose_eval=verbose, early_stopping_rounds=100)
    return model
```
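`early_stopping_rounds=100` tells LightGBM to stop boosting once the validation metric has not improved for 100 consecutive rounds. A minimal sketch of that stopping rule (the function name and loss values are hypothetical, not LightGBM's actual implementation):

```python
def early_stopping_round(valid_losses, patience=100):
    """Return the round at which training stops: `patience` rounds
    after the best validation loss, or the last round if it keeps improving."""
    best_loss = float("inf")
    best_round = 0
    for i, loss in enumerate(valid_losses):
        if loss < best_loss:
            best_loss, best_round = loss, i
        elif i - best_round >= patience:
            return i  # no improvement for `patience` rounds
    return len(valid_losses) - 1

# Validation loss improves until round 3, then plateaus.
losses = [1.0, 0.8, 0.6, 0.5] + [0.55] * 10
print(early_stopping_round(losses, patience=5))  # → 8
```

The model returned by `lgb.train` keeps the best iteration found before the stop, which is why a very large `num_boost_round` is safe here.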
# 5. Model training and validation
# 6. Model training and validation
The Civil Aviation Administration released a briefing on the preliminary investigation report into the China Eastern Airlines flight MU5735 accident. How should it be interpreted, and what other information is worth noting?
On April 19, Russian media reported that Russia and Vietnam will hold joint military exercises. What signal does this send?
Why do Chinese people relatively rarely eat large cuts of plain meat?
On April 19, Shanghai added 2,494 local confirmed cases and 16,407 asymptomatic cases ("2494+16407"), with 7 deaths. What is the current situation?
The West disparaged the Chinese equipment Serbia bought as "outdated"; Vučić responded that it is "not only not outdated, but very effective." What do you make of this?
On the 20th, two Shanghai districts reached the community-level zero-COVID target for the first day. Is the turning point of Shanghai's current outbreak near?
Is it true that after parents pass away, siblings drift apart and basically stop keeping in touch?
Wuhan Tongji Hospital was cited for improper use of medical insurance funds and fined nearly 60 million yuan. What warnings does this send to the industry?
If an energy liquid could refill a car in five minutes and deliver a range of more than 600 km, would new-energy electric vehicles lose all their advantages?
The newest Type 055 destroyer has been unveiled, and the Chinese Navy now has four 10,000-ton destroyers in public view. What other information is worth noting?
Netflix reported subscriber losses for the first time in more than a decade, and its stock plunged over 20% after hours. What might have caused this?
Taiwanese media news flashes displayed phrases such as "New Taipei City hit by PLA missiles"; officials responded that it was a disaster-prevention drill inserted by mistake. What problems does this reflect?
When a child eats a cantaloupe, is letting her eat only the sweeter top half spoiling her?
With a budget of 150,000 yuan all-in, what car is comfortable and worry-free to drive?
Why are more and more fitness bloggers recommending plant-based milk? Is it really healthier than milk tea?
What do you think of a Chinese Academy of Sciences professor saying that dropping CNKI has little impact? What role does CNKI play in academic research today?
Why does Gandalf in The Lord of the Rings mostly draw his sword to fight?
When an independent lawyer has had no income for three months and can no longer pay social insurance and rent, what should he do?
When people say a skincare product "absorbs well" after it is applied, is it really absorbed into the skin?
In Honor of Kings, can Di Renjie counter Li Yuanfang at the same skill level?
How should a home be renovated to make it look bigger?
Do you regret losing touch with people you were once close to?
What do people live for? Where does the meaning of life lie?
Why have Cantonese songs never produced a classic whose lyrics non-native speakers cannot understand?
Ukraine confirmed that the eastern city of Kreminna has been occupied by Russian forces. What is the current state of the Russia-Ukraine situation?
Are there any perfumes that smell cool and refreshing?
Is studying important for people living in a small county town?
What is the fun of climbing mountains?
What renovation style is "easy on the eyes, not tacky, and never out of date"?
What is the scariest thing about ENTPs?
What can a 16-year-old girl do if she doesn't stay in school?
Which is easier to keep, a cat or a dog?
I work a lot of overtime, always feel listless, and my work efficiency is low. Is there any way to fix this?
I asked what the contribution standard for the "five insurances and one fund" is, and HR replied "the highest standard of the lowest base." What does that actually mean?
Why do some people often complain that their work is boring, yet still work very hard?
What should I do if I feel especially lost about my future?
Does the scalp also need anti-aging care? How can scalp aging be delayed and hair loss reduced?
Can hard work in the final year of high school still make up lost ground?
If you had to choose between a Hogwarts acceptance letter and the ability to transform into Ultraman at will, which would you pick?
How do people of different social classes fight aging?
What is your reason for working hard?
What are the relationships among woju, celtuce, gongcai, lettuce, and youmaicai, and how can they be eaten?
Is it only Chinese people who believe diamonds represent love? Do foreigners buy diamond rings when they marry?
Why do most learners of Japanese find katakana annoying?
Can using skincare products every day effectively fight aging?
In the game Elden Ring, what details seem unremarkable at first glance but are quite sad on reflection?
A newcomer one month into a job finds the daily work does not match expectations. What should I do?
How can one genuinely strengthen one's sense of the value of one's work?
What core abilities do employees who can make money possess?
Why is Pluto no longer considered a major planet of the solar system?
---
layout: post
title: "Flexibility Equates to Lower Quality"
date: 2017-04-11
place: Moscow, Russia
tags: management
description: |
The more options a programmer has to implement
a feature in a programming language, the lower the
overall quality of the product will be.
keywords:
- quality
- programming language quality
- syntactic sugar
- rubocop
- checkstyle
image: /images/2017/04/scarface.jpg
jb_picture:
caption: Scarface (1983) by Brian De Palma
---
There are two opposing mindsets: "If it works, it's good" vs.
"If it's good, it works;" or "Make it work" vs. "Make it right."
I'm talking about the software source code. I've been hearing this
almost every day in blog comments: Why do we need all those _new_ OOP
principles if our code works just fine without them? What is the
point of introducing a new way, which is supposed to be "better,"
if the existing, traditional, semi-object, semi-procedural, not-so-perfect
approach **just works**?
<!--more-->
{% jb_picture_body %}
Let's try to think bigger. And not just about object-oriented programming, but
in general about software development. There are many examples of the
"just works" mentality.
Take Perl, a programming language famous
for its ability to do anything in three different ways. This means that
there is no one "right" way. I'm not a Perl expert, so let's look
at this Ruby code instead:
{% highlight ruby %}
if a > b
m = 'Hello!'
end
{% endhighlight %}
We can rewrite it like this:
{% highlight ruby %}
m = if a > b
'Hello!'
end
{% endhighlight %}
Or this:
{% highlight ruby %}
m = 'Hello!' if a > b
{% endhighlight %}
And one more:
{% highlight ruby %}
m = a > b ? 'Hello' : nil
{% endhighlight %}
Which one is "right?" Are there any Perl developers? Can you suggest
some other way to achieve the same result?
Not surprisingly, in Java (a stricter language than Ruby),
there is only one way to do it:
{% highlight java %}
if (a > b) {
m = "Hello!";
}
{% endhighlight %}
Well, I guess I'm wrong; there are two, actually. Here is the second one:
{% highlight java %}
if (a > b) m = "Hello!";
{% endhighlight %}
What does this variety of options give us, as programmers? I guess the answer
seriously depends on what we, the programmers, are doing with the code:
writing or reading it. Also, it depends on our
[attitude]({% pst 2014/oct/2014-10-26-hacker-vs-programmer-mentality %}) toward the
software we're creating; we either
_own_ it (hacker mentality) or
_build_ it (designer mentality).
If we're _writing_ it, and we love to think about ourselves as code owners, we
definitely will need that arsenal of
[syntactic sugar](https://en.wikipedia.org/wiki/Syntactic_sugar) weapons. We need them to prove to ourselves
that we're smart and, of course, to show off in front of our friends and
that soulless Ruby interpreter.
On the other hand, if we're designers and happen to _read_ the code that is
full of sugar, which "just works," we'll be very annoyed and
frustrated. Well, maybe I have to speak for myself, but I definitely will be.
{% quote Simply put, higher quality comes from simpler languages. %}
This overly-sugared Ruby syntax is a perfect example of "works vs. good"
positioning. Ruby philosophy is this: It doesn't matter how you write it, as long
as it works. Java philosophy is different; it's much closer to:
Make it right and it will work.
The [weak](https://en.wikipedia.org/wiki/Strong_and_weak_typing)
and
[dynamic](https://en.wikipedia.org/wiki/Type_system#Dynamic_type_checking_and_runtime_type_information)
typing in Ruby vs. the strong and
[static](https://en.wikipedia.org/wiki/Type_system#Static_type_checking)
one in Java also prove my point.
In general, I believe that the more flexible the programming language is, the
lower the _maintainability_---the key quality characteristic---of the code it produces.
Simply put, higher quality comes from simpler languages.
The same is true for software development as a whole: The more restrictions
we put on programmers and the fewer options they have for their
"syntax creativity," the higher the quality of the software they write.
Static analyzers like
[Checkstyle](http://checkstyle.sourceforge.net/) for Java or
[Rubocop](https://github.com/bbatsov/rubocop) for Ruby attempt
to solve that problem and prohibit us from using certain language features,
but they lag far behind. We are very "creative."
Now, let's get back to the original OOP question:
Why do we need to improve anything if it works the way it is?
Here is the answer: Modern OOP (as in Java, Ruby, and C++)
doesn't produce quality code because it doesn't have a strong
and properly restricted paradigm. It just has too many "features,"
which were mostly introduced by C++ and remained there for our
mutual "convenience."
They indeed work, but the maintainability of the software we produce
is very low. Well, it's way lower than it could be if our "creativity"
were restricted.
# 900-Simulator-Utilities
Utility programs associated with the Elliott 900 Series computer simulator.
A set of useful F# programs to accompany my Elliott 900 Series computer simulator:
SIRPrint -- a pretty printer for Elliott 900 Symbolic Input Routine source code.
(SIR is a 900 Series assembler.)
ScanTapes -- a utility to convert images of Elliott 900 series computer paper tapes
into various UTF-8 file formats used by the simulator. (Elliott used their own paper
tape codes, and it is convenient to translate them to UTF-8 using Unicode characters
corresponding to Elliott characters, or to numerical form as a sequence of integers
representing the numerical value of each row of tape.)
StripHeader -- a utility to remove an initial (generally legible) header from a paper
tape file. (Normally used after header is verified using VisualizeTape.)
VisualizeTape -- a program to show the initial portion of a paper tape file in a
visual form, useful for detecting legible headers.
## Using the API
Videogular's API is the service that allows you to control the media objects and listen to any change on them. Using the API is not mandatory, but if you need to control the media externally, or you want to listen to changes, you need to use it.
To start using the API, first grab it from the player. To do that, listen for the `onPlayerReady` event, which will hand you the API:
```html
<vg-player (onPlayerReady)="onPlayerReady($event)">
<vg-overlay-play></vg-overlay-play>
<vg-buffering></vg-buffering>
<vg-controls>
<vg-play-pause></vg-play-pause>
<vg-playback-button></vg-playback-button>
<vg-time-display vgProperty="current" vgFormat="mm:ss"></vg-time-display>
<vg-scrub-bar>
<vg-scrub-bar-current-time></vg-scrub-bar-current-time>
<vg-scrub-bar-buffering-time></vg-scrub-bar-buffering-time>
</vg-scrub-bar>
<vg-time-display vgProperty="left" vgFormat="mm:ss"></vg-time-display>
<vg-time-display vgProperty="total" vgFormat="mm:ss"></vg-time-display>
<vg-track-selector></vg-track-selector>
<vg-mute></vg-mute>
<vg-volume></vg-volume>
<vg-fullscreen></vg-fullscreen>
</vg-controls>
<video [vgMedia]="media" #media id="singleVideo" preload="auto" crossorigin>
<source src="http://static.videogular.com/assets/videos/videogular.mp4" type="video/mp4">
</video>
</vg-player>
```
Now on your `Component` get the API:
```typescript
import {Component} from '@angular/core';
import {VgApiService} from '@videogular/ngx-videogular/core';
@Component({
selector: 'bound-player',
templateUrl: 'src/bound-player.html'
})
export class BoundPlayer {
preload: string = 'auto';
api: VgApiService;
constructor() {}
onPlayerReady(api: VgApiService) {
this.api = api;
}
}
```
Now that you have the API you can listen to changes and perform actions:
```typescript
onPlayerReady(api: VgApiService) {
this.api = api;
this.api.getDefaultMedia().subscriptions.ended.subscribe(
() => {
// Set the video to the beginning
this.api.getDefaultMedia().currentTime = 0;
}
);
}
```
There are many events you can listen to:
- **abort**: Fired when the loading of the media has been aborted.
- **canPlay**: Fired when enough data is available that the media can be played, at least for a couple of frames. This corresponds to the `HAVE_ENOUGH_DATA` readyState.
- **canPlayThrough**: Fired when the ready state changes to `CAN_PLAY_THROUGH`, indicating that the entire media can be played without interruption, assuming the download rate remains at least at the current level. It will also be fired when playback is toggled between paused and playing. Note: Manually setting the `currentTime` will eventually fire a `canplaythrough` event in firefox. Other browsers might not fire this event.
- **durationChange**: Fired when the duration of the media has changed.
- **emptied**: Fired when the current playlist has been emptied.
- **encrypted**: Fired when the current media must be decrypted by the Encrypted Media Extensions API.
- **ended**: Fired when playback completes.
- **error**: Fired when an error occurs. The element's error attribute contains more information.
- **loadedData**: Fired when the current frame of the media has been loaded.
- **loadedMetadata**: Fired when the media's metadata has finished loading; all attributes now contain as much useful information as they're going to.
- **loadStart**: Fired when the browser starts loading the media.
- **pause**: Fired when playback is paused.
- **play**: Fired when playback of the media starts after having been paused; that is, when playback is resumed after a prior pause event.
- **playing**: Fired when the media begins to play (either for the first time, after having been paused, or after ending and then restarting).
- **progress**: Fired periodically to inform interested parties of progress downloading the media. Information about the current amount of the media that has been downloaded is available in the media element's `buffered` attribute.
- **rateChange**: Fired when the playback rate of the media has been changed.
- **seeked**: Fired when a seek operation completes.
- **seeking**: Fired when a seek operation begins.
- **stalled**: Fired when the browser is trying to get media data but the data is not available.
- **suspend**: Fired when the browser is intentionally not getting media data.
- **timeUpdate**: Fired when the time indicated by the element's `currentTime` attribute has changed.
- **volumeChange**: Fired when the audio volume changes (both when the volume is set and when the muted attribute is changed).
- **waiting**: Fired when the requested operation (such as playback) is delayed pending the completion of another operation (such as a seek).
- **startAds**: Fired when an advertisement started. This event will only be triggered if you have the Videogular Google IMA in your media player.
- **endAds**: Fired when an advertisement completes. This event will only be triggered if you have the Videogular Google IMA in your media player.
Event information extracted from MDN: https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Media_events
| 50.52381 | 430 | 0.721395 | eng_Latn | 0.996365 |
4f55aa62b669c5f75b1dbc8c661c708c33827950 | 3,580 | md | Markdown | README.md | wuchunfu/crudapi-admin-web | 56cd48fae0d7d660c7a7c7651f615f982a550170 | [
"MIT"
] | 70 | 2021-03-04T01:56:01.000Z | 2022-03-31T09:39:23.000Z | README.md | wuchunfu/crudapi-admin-web | 56cd48fae0d7d660c7a7c7651f615f982a550170 | [
"MIT"
] | 1 | 2021-11-27T15:51:59.000Z | 2021-11-27T15:51:59.000Z | README.md | wuchunfu/crudapi-admin-web | 56cd48fae0d7d660c7a7c7651f615f982a550170 | [
"MIT"
] | 19 | 2021-04-05T01:10:39.000Z | 2022-03-31T02:51:43.000Z | # Crudapi Admin Web (crudapi-admin-web)
## Language
[中文](README_CN.md)
## GIT URL
Name | Type | License | GitHub| Gitee
--- | --- | --- | --- | ---
crudapi-admin-web | Vue Quasar Code | Open Source | [crudapi-admin-web](https://github.com/crudapi/crudapi-admin-web) | [crudapi-admin-web](https://gitee.com/crudapi/crudapi-admin-web)
crudapi-example| Java SDK | Free | [crudapi-example](https://github.com/crudapi/crudapi-example) | [crudapi-example](https://gitee.com/crudapi/crudapi-example)
crudapi-pro-example | Java Pro SDK | Bussiness | [crudapi-pro-example](https://github.com/crudapi/crudapi-pro-example) | [crudapi-pro-example](https://gitee.com/crudapi/crudapi-pro-example)
## Install the dependencies
```bash
npm install
```
## Start the app in development mode (hot-code reloading, error reporting, etc.)
```bash
npm run dev
```
## Lint the files
```bash
npm run lint
```
## Build the app for production
```bash
npm run build
```
## Docker
```bash
docker build -t crudapi-admin-web:latest .
docker rm -f crudapi-admin-web
docker run -d -p 80:80 --name crudapi-admin-web crudapi-admin-web:latest
```
Visit [ http://127.0.0.1/crudapi ](http://127.0.0.1/crudapi)
## Documentation
[https://help.crudapi.cn](https://help.crudapi.cn)
1. [ 基于Vue和Quasar的前端SPA项目实战之环境搭建(一)](https://help.crudapi.cn/crudapi-admin-web/helloworld.html)
2. [ 基于Vue和Quasar的前端SPA项目实战之用户登录(二)](https://help.crudapi.cn/crudapi-admin-web/login.html)
3. [ 基于Vue和Quasar的前端SPA项目实战之布局菜单(三)](https://help.crudapi.cn/crudapi-admin-web/layout.html)
4. [ 基于Vue和Quasar的前端SPA项目实战之序列号(四)](https://help.crudapi.cn/crudapi-admin-web/sequence.html)
5. [ 基于Vue和Quasar的前端SPA项目实战之动态表单(五)](https://help.crudapi.cn/crudapi-admin-web/metadatatable.html)
6. [ 基于Vue和Quasar的前端SPA项目实战之表关系(六)](https://help.crudapi.cn/crudapi-admin-web/metadatarelation.html)
7. [ 基于Vue和Quasar的前端SPA项目实战之业务数据(七)](https://help.crudapi.cn/crudapi-admin-web/business.html)
8. [ 基于Vue和Quasar的前端SPA项目实战之docker部署(八)](https://help.crudapi.cn/crudapi-admin-web/docker.html)
9. [ 基于Vue和Quasar的前端SPA项目实战之数据导入(九)](https://help.crudapi.cn/crudapi-admin-web/import.html)
10. [ 基于Vue和Quasar的前端SPA项目实战之文件上传(九)](https://help.crudapi.cn/crudapi-admin-web/fileupload.html)
11. [ 基于Vue和Quasar的前端SPA项目实战之联合索引(十一)](https://help.crudapi.cn/crudapi-admin-web/unionindex.html)
12. [ 基于Vue和Quasar的前端SPA项目实战之数据库逆向(十二)](https://help.crudapi.cn/crudapi-admin-web/dbfirst.html)
13. [ 基于Vue和Quasar的前端SPA项目实战之数据导出(十三)](https://help.crudapi.cn/crudapi-admin-web/export.html)
14. [ 基于Vue和Quasar的前端SPA项目实战之模块管理(十四)](https://help.crudapi.cn/crudapi-admin-web/module.html)
15. [ 基于Vue和Quasar的前端SPA项目实战之元数据导出导入(十五)](https://help.crudapi.cn/crudapi-admin-web/metadataexportimport.html)
Ongoing updates...
## Demo
Demo URL: [https://demo.crudapi.cn/crudapi/](https://demo.crudapi.cn/crudapi/)

Metadata table

Table relation

Business Data
## Java SDK development
### GitHub repo
[https://github.com/crudapi/crudapi-example](https://github.com/crudapi/crudapi-example)
### Gitee repo
[https://gitee.com/crudapi/crudapi-example](https://gitee.com/crudapi/crudapi-example)
## Contact
#### Email
admin@crudapi.cn
#### QQ
1440737304
#### Weixin
undefinedneqnull
<div align="left">
<img width = "200" src="./img/crudapiweixin.jpeg">
</div>
#### WeixinQun
<div align="left">
<img width = "200" src="./img/weixinqun.png">
</div>
If you have any questions, please contact us!
## License
Copyright (c) 2021-present crudapi
[MIT License](http://en.wikipedia.org/wiki/MIT_License)
---
description: "Learn more about: TN055: Migrating MFC ODBC Database Class Applications to MFC DAO Classes"
title: "TN055: Migrating MFC ODBC Database Class Applications to MFC DAO Classes"
ms.date: "09/17/2019"
helpviewer_keywords: ["DAO [MFC], migration", "TN055", "migration [MFC], ODBC database applications", "ODBC classes [MFC], DAO classes", "migrating ODBC database applications [MFC]", "porting database applications to DAO", "ODBC [MFC], DAO", "porting ODBC database applications to DAO", "migrating database applications [MFC]"]
ms.assetid: 0f858bd1-e168-4e2e-bcd1-8debd82856e4
---
# TN055: Migrating MFC ODBC Database Class Applications to MFC DAO Classes
> [!NOTE]
> DAO is used with Access databases and is supported through Office 2013. DAO 3.6 is the final version, and it is considered obsolete. The Visual C++ environment and wizards do not support DAO (although the DAO classes are included and you can still use them). Microsoft recommends that you use [OLE DB Templates](../data/oledb/ole-db-templates.md) or [ODBC and MFC](../data/odbc/odbc-and-mfc.md) for new projects. You should only use DAO in maintaining existing applications.
## Overview
In many situations it may be desirable to migrate applications that use MFC's ODBC database classes to MFC's DAO database classes. This technical note will detail most of the differences between the MFC ODBC and DAO classes. With the differences in mind, it should not be overly difficult to migrate applications from the ODBC classes to the MFC classes if desired.
## Why Migrate from ODBC to DAO
There are a number of reasons why you might want to migrate applications from the ODBC Database Classes to the DAO Database Classes, but the decision is not necessarily simple or obvious. One thing to keep in mind is that the Microsoft Jet database engine that is used by DAO can read any ODBC data source for which you have an ODBC driver. It may be more efficient to use the ODBC Database Classes or call ODBC directly yourself, but the Microsoft Jet database engine can read ODBC data.
Some simple cases make the ODBC/DAO decision easy. For instance, when you only need access to data in a format that the Microsoft Jet engine can read directly (Access format, Excel format, and so on), the obvious choice is to use the DAO Database Classes.
More complex cases arise when your data exists on a server or on a variety of different servers. In this case, the decision to use the ODBC Database classes or the DAO Database classes is a difficult one. If you want to do things like heterogeneous joins (join data from servers in multiple formats like SQL Server and Oracle), then the Microsoft Jet database engine will perform the join for you rather than forcing you to do the work necessary if you used the ODBC Database Classes or called ODBC directly. If you are using an ODBC driver that supports driver cursors, your best choice might be the ODBC Database classes.
The choice can be complicated, so you might want to write some sample code to test the performance of various methods given your special needs. This technical note assumes that you have made the decision to migrate from the ODBC Database Classes to the DAO Database classes.
## Similarities Between ODBC Database Classes and MFC DAO Database Classes
The original design of the MFC ODBC classes was based on the DAO object model that has been in use in Microsoft Access and Microsoft Visual Basic. This means that there are many common features of the ODBC and DAO MFC classes, which will not all be listed in this section. In general, the programming models are the same.
To highlight a few similarities:
- Both the ODBC and DAO classes have database objects that manage using the underlying database management system (DBMS).
- Both have recordset objects representing a set of results returned from that DBMS.
- The DAO database and recordset objects have members nearly identical to the ODBC classes.
- With both sets of classes, the code to retrieve data is identical except for some object and member name changes. Changes will be required, but usually the process is a straightforward name change when switching from the ODBC classes to DAO classes.
For example, in both models the procedure to retrieve data is to create and open a database object, create and open a recordset object, and navigate (move) though the data performing some operation.
## Differences Between ODBC and DAO MFC Classes
The DAO classes include more objects and a richer set of methods, but this section will only detail the differences in similar classes and functionality.
Probably the most obvious differences between the classes are the name changes for similar classes and global functions. The following list shows the name changes of the objects, methods and global functions associated with the database classes:
|Class or function|Equivalent in MFC DAO classes|
|-----------------------|-----------------------------------|
|`CDatabase`|`CDaoDatabase`|
|`CDatabase::ExecuteSQL`|`CDaoDatabase::Execute`|
|`CRecordset`|`CDaoRecordset`|
|`CRecordset::GetDefaultConnect`|`CDaoRecordset::GetDefaultDBName`|
|`CFieldExchange`|`CDaoFieldExchange`|
|`RFX_Bool`|`DFX_Bool`|
|`RFX_Byte`|`DFX_Byte`|
|`RFX_Int`|`DFX_Short`|
|`RFX_Long`|`DFX_Long`|
||`DFX_Currency`|
|`RFX_Single`|`DFX_Single`|
|`RFX_Double`|`DFX_Double`|
|`RFX_Date`<sup>1</sup>|`DFX_Date` (`COleDateTime`-based)|
|`RFX_Text`|`DFX_Text`|
|`RFX_Binary`|`DFX_Binary`|
|`RFX_LongBinary`|`DFX_LongBinary`|
<sup>1</sup> The `RFX_Date` function is based on `CTime` and `TIMESTAMP_STRUCT`.
The major changes to functionality which may affect your application and require more than simple name changes are listed below.
- The constants and macros used to specify things like recordset open type and recordset open options have been changed.
With the ODBC classes MFC needed to define these options via macros or enumerated types.
With the DAO classes, DAO provides the definition of these options in a header file (DBDAOINT.H). Thus the recordset type is an enumerated member of `CRecordset`, but with DAO it is a constant instead. For example you would use **snapshot** when specifying the type of `CRecordset` in ODBC but **DB_OPEN_SNAPSHOT** when specifying the type of `CDaoRecordset`.
- The default recordset type for `CRecordset` is **snapshot** while the default recordset type for `CDaoRecordset` is **dynaset** (see the Note below for an additional issue about ODBC class snapshots).
- The ODBC `CRecordset` class has an option to create a forward-only recordset type. In the `CDaoRecordset` class, forward-only is not a recordset type, but rather a property (or option) of certain types of recordsets.
- An append-only recordset when opening a `CRecordset` object meant that the recordset's data could be read and appended. With `CDaoRecordset` object, the append-only option means literally that the recordset's data can only be appended (and not read).
- The ODBC classes' transaction member functions are members of `CDatabase` and act at the database level. In the DAO classes, the transaction member functions are members of a higher level class (`CDaoWorkspace`) and thus may impact multiple `CDaoDatabase` objects sharing the same workspace (transaction space).
- The exception class has been changed. `CDBExceptions` are thrown in the ODBC classes and `CDaoExceptions` in the DAO classes.
- `RFX_Date` uses `CTime` and `TIMESTAMP_STRUCT` objects while `DFX_Date` uses `COleDateTime`. The `COleDateTime` is nearly identical to `CTime`, but is based on a 8-byte OLE **DATE** rather than a 4-byte **time_t** so it can hold a much bigger range of data.
> [!NOTE]
> DAO (`CDaoRecordset`) snapshots are read-only while ODBC (`CRecordset`) snapshots may be updateable depending on the driver and use of the ODBC cursor library. If you are using the cursor library, `CRecordset` snapshots are updateable. If you are using any of the Microsoft drivers from Desktop Driver Pack 3.0 without the ODBC cursor library, the `CRecordset` snapshots are read-only. If you are using another driver, check the driver's documentation to see if snapshots (`STATIC_CURSORS`) are read-only.
## See also
[Technical Notes by Number](../mfc/technical-notes-by-number.md)<br/>
[Technical Notes by Category](../mfc/technical-notes-by-category.md)
4f566337d5f6013874bde5ff8f5c47bae07f5911 | 1,825 | md | Markdown | docs/framework/unmanaged-api/fusion/createhistoryreader-function.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/createhistoryreader-function.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/fusion/createhistoryreader-function.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
---
title: "CreateHistoryReader Function"
ms.date: 03/30/2017
api_name:
- CreateHistoryReader
api_location:
- fusion.dll
api_type:
- DLLExport
f1_keywords:
- CreateHistoryReader
helpviewer_keywords:
- CreateHistoryReader function [.NET Framework fusion]
ms.assetid: 66a89acf-8c32-44c0-8787-960c99c7b3ec
topic_type:
- apiref
ms.openlocfilehash: 80979f0424469bb1d4771ad6507bb8c9d5364ab4
ms.sourcegitcommit: 559fcfbe4871636494870a8b716bf7325df34ac5
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 10/30/2019
ms.locfileid: "73108612"
---
# <a name="createhistoryreader-function"></a>CreateHistoryReader – funkce
Vytvoří čtečku historie pro zadaný soubor.
## <a name="syntax"></a>Syntaxe
```cpp
HRESULT CreateHistoryReader (
[in] LPCWSTR wzFilePath,
[out] IHistoryReader **ppHistoryReader
);
```
## <a name="parameters"></a>Parametry
`wzFilePath`
pro Cesta k souboru.
`ppHistoryReader`
mimo Po úspěšném dokončení obsahuje ukazatel na čtečku historie.
## <a name="return-value"></a>Návratová hodnota
Tato metoda vrací standardní chybové kódy modelu COM, jak jsou definovány v WinError. h, kromě hodnot popsaných v následující tabulce.
|Návratový kód|Popis|
|-----------------|-----------------|
|S_OK|Označuje, že metoda byla úspěšně dokončena.|
|E_INVALIDARG|Označuje, že `wzFilePath` nebo `ppHistoryReader` jsou nastaveny na odkaz s hodnotou null.|
## <a name="requirements"></a>Požadavky
**Platformy:** Viz [požadavky na systém](../../get-started/system-requirements.md).
**Knihovna:** Fusion. dll
**Verze .NET Framework:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>Viz také:
- [Globální statické funkce pro fúze](fusion-global-static-functions.md)
| 29.918033 | 137 | 0.714521 | ces_Latn | 0.827496 |
4f56c20389c4709e06c3510079d4601e99f57fce | 632 | md | Markdown | 2021/CVE-2021-3845.md | AndrewMohawk/cve | fa627eabe70182766bfb161228f06b1f5f60c86e | [
"MIT"
] | 1 | 2022-02-25T09:08:49.000Z | 2022-02-25T09:08:49.000Z | 2021/CVE-2021-3845.md | Jeromeyoung/cve | 9113f2d909fddfaf554d5e8b90f9ed402f0fa81e | [
"MIT"
] | null | null | null | 2021/CVE-2021-3845.md | Jeromeyoung/cve | 9113f2d909fddfaf554d5e8b90f9ed402f0fa81e | [
"MIT"
] | null | null | null | ### [CVE-2021-3845](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3845)



### Description
ws-scrcpy is vulnerable to External Control of File Name or Path
### POC
#### Reference
- https://huntr.dev/bounties/dc7fc98f-4f4f-440a-b6f6-124a56ea36ef
#### Github
No PoCs found on GitHub currently.
| 35.111111 | 141 | 0.753165 | yue_Hant | 0.232612 |
4f56c245b4fc80f24ccbf19b663c3a6a8e16f481 | 1,540 | md | Markdown | README.md | jarofghosts/uniq-stream | 1687d9bf6bd0ad727e760a36614a09bd9f065c9b | [
"MIT"
] | 2 | 2015-01-13T23:25:39.000Z | 2016-07-07T19:48:07.000Z | README.md | jarofghosts/uniq-stream | 1687d9bf6bd0ad727e760a36614a09bd9f065c9b | [
"MIT"
] | null | null | null | README.md | jarofghosts/uniq-stream | 1687d9bf6bd0ad727e760a36614a09bd9f065c9b | [
"MIT"
] | null | null | null | # uniq-stream
[](https://travis-ci.org/jarofghosts/uniq-stream)
[](https://www.npmjs.org/package/uniq-stream)
[](https://www.npmjs.org/package/uniq-stream)
[](https://github.com/feross/standard)
[](https://github.com/jarofghosts/uniq-stream/blob/master/LICENSE)
a stream that acts like [`uniq`](http://ss64.com/bash/uniq.html)
## Installation
```
npm install uniq-stream
```
## Usage
pipe it data and it pipes out de-duped (by line) data
#### Example
```javascript
var uniq = require('uniq-stream')
var split = require('split')
var fs = require('fs')
fs.createReadStream('dupe-ridden-file.txt')
.pipe(split())
.pipe(uniq())
.pipe(fs.createWriteStream('clean-file.txt'))
```
## Options
uniq-stream accepts an options object as a first parameter, the acceptable
options are outlined below along with their defaults.
```javascript
{
global: false, // de-dupe data globally rather than line-wise
ignoreCase: false, // case insensitive comparison
skip: 0, // ignore first (value) characters of input string for comparison
inverse: false // only emit duplicated lines
}
```
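As an illustration of how these options interact, the comparison logic can be sketched as a plain function. This is a hypothetical model of the behavior described above, not the package's actual implementation:

```javascript
// Hypothetical sketch of uniq-stream's per-line comparison rules.
function dedupeLines(lines, opts = {}) {
  const { global = false, ignoreCase = false, skip = 0, inverse = false } = opts;
  const seen = new Set(); // used when de-duping globally
  let prev = null;        // previous comparison key, for line-wise mode
  const out = [];
  for (const line of lines) {
    let key = line.slice(skip); // ignore the first `skip` characters
    if (ignoreCase) key = key.toLowerCase();
    const isDupe = global ? seen.has(key) : key === prev;
    seen.add(key);
    prev = key;
    if (inverse ? isDupe : !isDupe) out.push(line);
  }
  return out;
}

console.log(dedupeLines(['a', 'a', 'b', 'a'])); // [ 'a', 'b', 'a' ]
console.log(dedupeLines(['a', 'a', 'b', 'a'], { global: true })); // [ 'a', 'b' ]
```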
## License
MIT
| 30.196078 | 144 | 0.72987 | eng_Latn | 0.332183 |
4f56efc7f8b59f8d7d547b3b85f01b735f68c012 | 2,094 | md | Markdown | di-18-zhang-shi-yong-jmx-jian-kong-spring/18.2-chuang-jian-zi-ji-de-mbean.md | AndyCorona/spring-in-action-v5-translate | ea806cbffa3d7c9aba211e95dfcd72fa44d839ae | [
"MIT"
] | null | null | null | di-18-zhang-shi-yong-jmx-jian-kong-spring/18.2-chuang-jian-zi-ji-de-mbean.md | AndyCorona/spring-in-action-v5-translate | ea806cbffa3d7c9aba211e95dfcd72fa44d839ae | [
"MIT"
] | null | null | null | di-18-zhang-shi-yong-jmx-jian-kong-spring/18.2-chuang-jian-zi-ji-de-mbean.md | AndyCorona/spring-in-action-v5-translate | ea806cbffa3d7c9aba211e95dfcd72fa44d839ae | [
"MIT"
] | null | null | null | # 18.2 创建自己的 MBean
Spring 可以轻松地将任何您想要的 bean 公开为 JMX MBean。你要做的就是在类上添加 @ManagedResource 注解,然后在方法或属性上添加 @ManagedOperation 或 @ManagedAttribute 注解。Spring 会进行其他必须的工作。
例如,假设您想提供一个 MBean,来跟踪通过 Taco Cloud 订购了多少玉米卷。您可以定义一个服务 bean 来记录已订购的玉米卷数量。下面的列表显示了实现这个功能的服务。
{% code title="程序清单 18.1 一个 MBean,它统计创建了多少个玉米卷" %}
```java
package tacos.tacos;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.data.rest.core.event.AbstractRepositoryEventListener;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;
import org.springframework.stereotype.Service;
@Service
@ManagedResource
public class TacoCounter extends AbstractRepositoryEventListener<Taco> {
private AtomicLong counter;
public TacoCounter(TacoRepository tacoRepo) {
long initialCount = tacoRepo.count();
this.counter = new AtomicLong(initialCount);
}
@Override
protected void onAfterCreate(Taco entity) {
counter.incrementAndGet();
}
@ManagedAttribute
public long getTacoCount() {
return counter.get();
}
@ManagedOperation
public long increment(long delta) {
return counter.addAndGet(delta);
}
}
```
{% endcode %}
TacoCounter 类加了 @Service 注解,将通过组件扫描自动在 Spring 上下文中生成 bean 实例。还加了 @ManagedResource 注解,以表明这个 bean 也应该是 MBean。作为 MBean,它将公开一个属性和一个操作。`getTacoCount()` 方法用了 @ManagedAttribute 注解,因此它将作为 MBean 属性公开,而 `increment()` 方法用了 @ManagedOperation 注解,将其作为 MBean 操作公开。
图 18.4 显示了 TacoCounter MBean 在 JConsole 中的显示方式。

TacoCounter 还有另一个秘密,尽管它与 JMX 无关。因为它扩展了 AbstractRepositoryEventListener,所以当通过 TacoRepository 保存 Taco 时,将发送持久化事件。在这种情况下,只要创建新的 Taco 对象并保存,就会调用 `onAfterCreate()` 方法,它将计数器加 1。但是 AbstractRepositoryEventListener 还提供了几种处理前后事件的方法,如在创建、保存或删除对象操作时。
使用 MBean 操作和属性在很大程度上是一种拉式操作。也就是,即使 MBean 属性的值发生更改,在通过 JMX 客户端再次访问查看前是不知道的。让我们反过来看看,如何从 MBean 推送通知到 JMX 客户端。
| 35.491525 | 250 | 0.788921 | yue_Hant | 0.449444 |
4f5700178e78cc71a941c1be7450281d010e7f3f | 343 | md | Markdown | src/docs/assets/components/pagination/example-one.md | eddiejaoude/bedrock | 456ad415c7a1f58d6b16f7585d0e2be0ceceec64 | [
"MIT"
] | 2 | 2019-08-29T15:14:58.000Z | 2019-09-11T12:54:05.000Z | src/docs/assets/components/pagination/example-one.md | eddiejaoude/bedrock | 456ad415c7a1f58d6b16f7585d0e2be0ceceec64 | [
"MIT"
] | 11 | 2019-05-29T18:10:35.000Z | 2020-12-04T18:07:00.000Z | src/docs/assets/components/pagination/example-one.md | venzra/bedrock | 456ad415c7a1f58d6b16f7585d0e2be0ceceec64 | [
"MIT"
] | 3 | 2019-05-29T18:12:34.000Z | 2020-02-27T18:16:58.000Z | ```code
import { PaginationEvent } from '@bedrock';
public p1 = {
records: 200,
skip: 0
};
public updatePage(event: PaginationEvent, paginator) {
paginator.skip = event.selectedSkip;
}
```
```code
<rock-pagination
[records]="p1.records"
[skip]="p1.skip"
(pageChange)="updatePage($event, p1)">
</rock-pagination>
```
| 16.333333 | 54 | 0.638484 | nld_Latn | 0.178025 |
4f57aea4ef7209e2c4b33157ac60d830bae96d3c | 2,774 | md | Markdown | readme.md | tkadlec/gulp-snyk | 8f2938273a694fd952be65b688fe8dd757f94085 | [
"MIT"
] | null | null | null | readme.md | tkadlec/gulp-snyk | 8f2938273a694fd952be65b688fe8dd757f94085 | [
"MIT"
] | null | null | null | readme.md | tkadlec/gulp-snyk | 8f2938273a694fd952be65b688fe8dd757f94085 | [
"MIT"
] | null | null | null | # gulp-snyk [](https://travis-ci.org/doug-wade/gulp-snyk)  
> gulp plugin for using Snyk
## Install
```
$ npm install --save-dev gulp-snyk
```
Or
```
$ yarn add --dev gulp-snyk
```
## Usage
To break the build when vulnerabilities are found, specify only the `command` option:
```javascript
const snyk = require('gulp-snyk');
gulp.task('protect', function(cb) {
return snyk({ command: 'protect' }, cb);
});
gulp.task('test', function(cb) {
  return snyk({ command: 'test' }, cb);
});
gulp.task('prepublish', ['protect']);
```
And then, in your package.json
```javascript
{
"scripts": {
"prepublish": "gulp prepublish",
"test": "gulp test"
}
}
```
For a realistic use-case, check out [the clefs plugin generator](https://github.com/doug-wade/clefs/tree/master/packages/generator-clefs-plugin)
## API
### snyk([options], cb)
#### options
A hash of options to configure snyk. If this is omitted, then it is the
equivalent of passing the following options hash.
```javascript
gulp.task('snyk-test', function(cb) {
return snyk({command: 'test', directory: process.cwd(), debug: false, options: { dev: true }}, cb);
});
```
##### command
Type: `string`<br>
Default: `test`
Example:
```javascript
gulp.task('protect', function(cb) {
return snyk({command: 'protect'}, cb);
});
```
One of the [snyk command-line commands](https://snyk.io/docs/using-snyk/).
For instance: auth, test, wizard, protect, monitor, policy.
##### directory
Type: `string`<br>
Default: `process.cwd()`
Example:
```javascript
gulp.task('snyk-test', function(cb) {
return snyk({command: 'test', directory: path.join(process.cwd(), 'packages', 'my-package')}, cb);
});
```
The directory that contains the package on which to run the snyk command.
##### options
Type: `object`<br>
Default: `{ dev: true }`
Example:
```javascript
gulp.task('snyk-wizard', function(cb) {
return snyk({command: 'wizard', options: {help: true}}, cb);
});
```
The options supported by the [snyk command line](https://snyk.io/docs/using-snyk/).
##### debug
Type: `boolean`<br>
Default: `false`
Example:
```javascript
gulp.task('snyk-help', function(cb) {
return snyk({command: 'test', debug: true}, cb);
});
```
Turns on debug logging
#### cb
The callback from the asynchronous gulp task, the function passed as the first
argument to the gulp task callback. For example:
```javascript
gulp.task('protect', function(cb) {
return snyk({ command: 'protect' }, cb);
});
```
## License
MIT © [Doug Wade](http://dougwade.io)
| 21.671875 | 336 | 0.675198 | eng_Latn | 0.538097 |
4f584fc45f7a2a4bde85dfea4b5ce3ea5041cebc | 4,095 | md | Markdown | README.md | tuckn/WshPath | 79b671c911643da926666ce3a9f6b73268c3435d | [
"MIT"
] | null | null | null | README.md | tuckn/WshPath | 79b671c911643da926666ce3a9f6b73268c3435d | [
"MIT"
] | null | null | null | README.md | tuckn/WshPath | 79b671c911643da926666ce3a9f6b73268c3435d | [
"MIT"
] | null | null | null | # WshPath
Adds useful functions (similar to Node.js `path`) for handling file path strings to WSH (Windows Script Host).
## tuckn/Wsh series dependency
[WshModeJs](https://github.com/tuckn/WshModeJs)
└─ [WshNet](https://github.com/tuckn/WshNet)
 └─ [WshChildProcess](https://github.com/tuckn/WshChildProcess)
  └─ [WshProcess](https://github.com/tuckn/WshProcess)
    └─ [WshFileSystem](https://github.com/tuckn/WshFileSystem)
      └─ [WshOS](https://github.com/tuckn/WshOS)
        └─ WshPath - This repository
          └─ [WshUtil](https://github.com/tuckn/WshUtil)
            └─ [WshPolyfill](https://github.com/tuckn/WshPolyfill)
The upper layer module can use all the functions of the lower layer module.
## Operating environment
Works on JScript in Windows.
## Installation
(1) Create a directory of your WSH project.
```console
D:\> mkdir MyWshProject
D:\> cd MyWshProject
```
(2) Download this ZIP and unzipping or Use following `git` command.
```console
> git clone https://github.com/tuckn/WshPath.git ./WshModules/WshPath
or
> git submodule add https://github.com/tuckn/WshPath.git ./WshModules/WshPath
```
(3) Include _.\\WshPath\\dist\\bundle.js_ into your .wsf file.
For Example, if your file structure is
```console
D:\MyWshProject\
├─ Run.wsf
├─ MyScript.js
└─ WshModules\
└─ WshPath\
└─ dist\
└─ bundle.js
```
The content of above _Run.wsf_ is
```xml
<package>
<job id = "run">
<script language="JScript" src="./WshModules/WshPath/dist/bundle.js"></script>
<script language="JScript" src="./MyScript.js"></script>
</job>
</package>
```
I recommend this .wsf file encoding to be UTF-8 [BOM, CRLF].
This allows the following functions to be used in _.\\MyScript.js_.
## Usage
Now _.\\MyScript.js_ (JScript) can use the useful functions to handle paths.
for example,
```js
var path = Wsh.Path; // Shorthand
console.log(__filename); // "D:\MyWshProject\Run.wsf"
console.log(__dirname); // "D:\MyWshProject"
console.log(path.delimiter); // ";"
console.log(path.sep); // "\"
path.dirname('C:\\My Data\\image.jpg'); // 'C:\\My Data'
path.basename('C:\\foo\\bar\\baz\\quux.html'); // 'quux.html'
path.extname('index.coffee.md'); // '.md'
path.parse('C:\\home\\user\\dir\\file.txt');
// Returns:
// { root: 'C:\\',
// dir: 'C:\\home\\user\\dir',
// base: 'file.txt',
// ext: '.txt',
// name: 'file' };
path.isAbsolute('C:\\My Data\\hoge.png'); // true
path.isAbsolute('bar\\baz'); // false
path.isAbsolute('.'); // false
path.normalize('C:\\Git\\mingw64\\lib\\..\\etc\\.gitconfig');
// Returns: 'C:\\Git\\mingw64\\etc\\.gitconfig'
path.join(['mingw64\\lib', '..\\etc', '.gitconfig']);
// Returns: 'mingw64\\etc\\.gitconfig'
path.resolve(['mingw64\\lib', '..\\etc', '.gitconfig']);
// Returns: '<Current Working Directory>\\mingw64\\etc\\.gitconfig'
path.isUNC('C:\\foo\\bar.baz'); // false
path.isUNC('\\\\CompName\\'); // false
path.isUNC('\\\\CompName\\SharedDir'); // true
path.isUNC('//CompName//SharedDir'); // false
path.toUNC('C:\\foo\\bar.baz');
// Returns: '\\\\<Your CompName>\\C$\\foo\\bar.baz'
var from = 'C:\\MyApps\\Paint\\Gimp';
var to = 'C:\\MyApps\\Converter\\ImageMagick\\convert.exe';
path.relative(from, to);
// Returns: '..\\..\\Converter\\ImageMagick\\convert.exe'
```
Many other functions are added.
See the [documentation](https://docs.tuckn.net/WshPath) for more details.
### Dependency Modules
You can also use the following useful functions in _.\\MyScript.js_ (JScript).
- [tuckn/WshPolyfill](https://github.com/tuckn/WshPolyfill)
- [tuckn/WshUtil](https://github.com/tuckn/WshUtil)
## Documentation
See all specifications [here](https://docs.tuckn.net/WshPath) and also below.
- [WshPolyfill](https://docs.tuckn.net/WshPolyfill)
- [WshUtil](https://docs.tuckn.net/WshUtil)
## License
MIT
Copyright (c) 2020 [Tuckn](https://github.com/tuckn)
| 29.042553 | 128 | 0.672283 | yue_Hant | 0.384096 |
4f5878d2c5b150eb6b9590b2bdd506699b8d112e | 360 | md | Markdown | extensions/ion/ion-core/README.md | marcgs/DataSpaceConnector | 50cefd0cfec58db5fd504865be1d5a5b7b21a16b | [
"Apache-2.0"
] | 79 | 2021-07-27T08:02:41.000Z | 2022-03-30T17:00:58.000Z | extensions/ion/ion-core/README.md | marcgs/DataSpaceConnector | 50cefd0cfec58db5fd504865be1d5a5b7b21a16b | [
"Apache-2.0"
] | 723 | 2021-07-27T14:37:45.000Z | 2022-03-31T19:56:22.000Z | extensions/ion/ion-core/README.md | marcgs/DataSpaceConnector | 50cefd0cfec58db5fd504865be1d5a5b7b21a16b | [
"Apache-2.0"
] | 52 | 2021-07-30T08:28:58.000Z | 2022-03-22T10:46:55.000Z | This module is currently under development and will someday contain a Java-port of the [ION-Tools](https://github.com/decentralized-identity/ion-tools) and parts of the [ION-SDK](https://github.com/decentralized-identity/ion-sdk).
_Please do not use it in a production environment yet, interfaces and implementations might change without prior notification._
| 90 | 230 | 0.808333 | eng_Latn | 0.98966 |
4f58a722a8236e4d7139f53577e58364cc25c903 | 5,392 | md | Markdown | README.md | sunblade/openvpn-status | fdd684638b5bcf8d89a91cd1e1f204a9cc237652 | [
"MIT"
] | null | null | null | README.md | sunblade/openvpn-status | fdd684638b5bcf8d89a91cd1e1f204a9cc237652 | [
"MIT"
] | 7 | 2021-03-09T12:50:28.000Z | 2022-02-26T15:38:49.000Z | README.md | sunblade/openvpn-status | fdd684638b5bcf8d89a91cd1e1f204a9cc237652 | [
"MIT"
] | 1 | 2020-02-06T19:17:35.000Z | 2020-02-06T19:17:35.000Z | # OpenVPN Status [](https://david-dm.org/auspexeu/openvpn-status) [](https://greenkeeper.io/)
A web-based application to monitor (multiple) [OpenVPN](https://openvpn.net/index.php/open-source/overview.html) servers.
Features
* Multi server support
* WebSocket based real-time events
* Map view
* Disconnect clients remotely
* Persistent event log
* Mobile friendly
* Full material design
### Client panel

### Map view

### Event panel

## Pre-requisites
- [x] [NodeJS](https://nodejs.org/en/download/) 6 or higher
- [x] npm package manager
# Installation
Installation comes in two flavours. Either from source as per the following section or you can skip to the [docker section](https://github.com/AuspeXeu/openvpn-status#docker).
### 1. Get the source
``git clone https://github.com/AuspeXeu/openvpn-status.git``
### 2. Install dependencies
```
cd openvpn-status
npm install
sudo npm install pm2 -g
```
### 3. Configuration
The configuration is handled in the ``cfg.json`` file.
| Option | Default | Description |
| -------- |:-------------:| ------------ |
| port | `3013` | Port on which the server will be listening. |
| bind | `127.0.0.1` | Address to which the server will bind to. Change to `0.0.0.0` to make available on all interfaces. |
| servers | `[{"name": "Server","host": "127.0.0.1","man_port": 7656}]` | Array of servers. |
| username | `admin` | User for basic HTTP authentication. Change to `''` or `false` to disable. |
| password | `admin` | Password for basic HTTP authentication. |
| web.dateFormat | `HH:mm:ss - DD.MM.YY` | DateTime format used in the web frontend ([options](http://momentjs.com/docs/#/displaying/format/)).|
Example:
```
{
"port": 3013,
"bind": "127.0.0.1",
"servers": [
{"id": 0, "name": "Server A", "host": "127.0.0.1","man_port": 7656},
{"id": 1, "name": "Server B", "host": "127.0.0.1","man_port": 6756}
],
"username": "admin",
"password": "YV3qSTxD",
"web": {
"dateFormat": "HH:mm - DD.MM.YY"
}
}
```
### 4. OpenVPN configuration
Add the following line to your configuration file, e.g., `server.conf`. This will start the management console on port `7656` and make it accessible on `127.0.0.1`, i.e. this machine.
```
management 127.0.0.1 7656 //As specified in cfg.json for this server
```
Restart your OpenVPN server.
### 5. Build
Before the application is ready to run, the frontend needs to be built. This is done using npm.
``npm run build``
# Run
This makes the application available on http://127.0.0.1:3013.
### Manually
```
node server.js
```
### As PM2 service
```
pm2 start pm2.json
pm2 save
```
### As Systemd service
```
[Unit]
Description=OpenVPN Status
After=network.target
[Service]
User=root
# Adjust the working directory to your installation path
WorkingDirectory=/home/pi/backend
ExecStart=/usr/local/bin/node server.js
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
```
## (optional) Running the service behind nginx as a reverse proxy
In order to integrate the service into your webserver you might want to use nginx as a reverse proxy. The following configuration assumes that the port is set to `3013` as it is by default.
```
server {
listen 80;
server_name [domain];
location / {
        proxy_pass http://127.0.0.1:3013;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400;
}
}
```
# Docker
### Ports
- **3013**
### Environment variables
| Variable | Description | Default value |
| -------- | ----------- | ------------- |
| **AUTH_USERNAME** | HTTP AUTH username | admin
| **AUTH_PASSWORD** | HTTP AUTH password | admin
| **VPN_NAME** | Name of the VPN | Server
| **VPN_HOST** | Host of the VPN | openvpn
| **VPN_MAN_PORT** | Management port | 7656
| **VPN_DATE_FORMAT** | DateTime format | HH:mm - DD.MM.YY
### Docker-compose.yml
```yml
# Full example :
# https://raw.githubusercontent.com/AuspeXeu/openvpn-status/master/docker-compose.sample.yml
openvpn-status:
image: auspexeu/openvpn-status
container_name: openvpn-status
ports:
- 8080:3013
environment:
- AUTH_USERNAME=admin
- AUTH_PASSWORD=YV3qSTxD
- VPN_NAME="Remote employees"
- VPN_HOST=openvpn
- VPN_MAN_PORT=7656
- VPN_DATE_FORMAT="HH:mm - DD.MM.YY"
links:
- openvpn
depends_on:
- openvpn
restart: "unless-stopped"
```
## Browser support
Find a list of supported browsers [here](https://vuetifyjs.com/en/getting-started/quick-start#supported-browsers)
## Acknowledgements
### Maxmind
[Maxmind](http://dev.maxmind.com/geoip/geoip2/geolite2/) provides all of the IP information used in this project to determine where the VPN clients are connecting from.
### GoSquared
[GoSquared](https://www.gosquared.com) provides the flag icons for this project. The source for the flag icons can be found [here](https://www.gosquared.com/resources/flag-icons/).
| 27.793814 | 263 | 0.693991 | eng_Latn | 0.5922 |
4f59057b8f4937b38a9a6d5fcb5937c31dab91f4 | 334 | md | Markdown | 0-overview/release-note-1/release-note-1.md | UTA-DataScience/DATA1401.2021.Spring | bb1f6839754cd9a7af64c16aaab170e194f4f78d | [
"MIT"
] | 1 | 2021-01-22T19:26:05.000Z | 2021-01-22T19:26:05.000Z | 0-overview/release-note-1/release-note-1.md | UTA-DataScience/DATA1401.2021.Spring | bb1f6839754cd9a7af64c16aaab170e194f4f78d | [
"MIT"
] | null | null | null | 0-overview/release-note-1/release-note-1.md | UTA-DataScience/DATA1401.2021.Spring | bb1f6839754cd9a7af64c16aaab170e194f4f78d | [
"MIT"
] | 1 | 2021-01-22T19:35:58.000Z | 2021-01-22T19:35:58.000Z | ---
title: "Release Note 1"
tags: [release_note]
keywords: "release_note"
sidebar: home_sidebar
summary:
last_updated: July 9, 2019
---
This series of notes have been developed to meet the requirements for teaching Python courses offered in the College of Science at the University of Texas at Arlington at the undergraduate level.
| 30.363636 | 196 | 0.787425 | eng_Latn | 0.997876 |
4f5948f95787c7e483a324e7a7c1a4c89307e100 | 554 | md | Markdown | src/main/resources/stories/middle/mid3.md | jmetzz/coffee-chatbot | da7e76d9532c8e5e38a47a19ffed1f1e27601766 | [
"MIT"
] | null | null | null | src/main/resources/stories/middle/mid3.md | jmetzz/coffee-chatbot | da7e76d9532c8e5e38a47a19ffed1f1e27601766 | [
"MIT"
] | null | null | null | src/main/resources/stories/middle/mid3.md | jmetzz/coffee-chatbot | da7e76d9532c8e5e38a47a19ffed1f1e27601766 | [
"MIT"
] | null | null | null | ## kosten kaart via soort anything else 11
* costs{"bank_object": "kaart"}
- soort_kaart_form
- form{"name": "soort_kaart_form"}
- slot{"requested_slot": "soort_kaart"}
- slot{"soort_kaart": "V"}
- form{"name": null}
- slot{"requested_slot": null}
- action_utter_card_costs
* INFORM_Kaarten{"soort_kaart": "ALL"}
- action_utter_card_costs
## soorten kaart via utter soort naar kredietkaart silver
* info{"bank_object": "kaart"}
- utter_soort_kaart
* INFORM_Kaarten{"soort_kaart": "ALL"}
- action_utter_card_info
| 29.157895 | 57 | 0.687726 | nld_Latn | 0.798164 |
4f5a0726269f897d57e5e5cd0aa5e18ae47a1083 | 9,416 | md | Markdown | react-source-code/docs/algorithm/bitfiled.md | 25juan/react-study | 24a92b6524b06aa5511cf529149bf193c07ef234 | [
"MIT"
] | null | null | null | react-source-code/docs/algorithm/bitfiled.md | 25juan/react-study | 24a92b6524b06aa5511cf529149bf193c07ef234 | [
"MIT"
] | null | null | null | react-source-code/docs/algorithm/bitfiled.md | 25juan/react-study | 24a92b6524b06aa5511cf529149bf193c07ef234 | [
"MIT"
] | null | null | null | ---
title: 位运算
---
# React 算法之位运算
网络上介绍位运算的文章非常多(如[MDN 上的介绍](https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Operators/Bitwise_Operators)就很仔细).
本文的目的:
1. 温故知新, 对位运算的基本使用做一下简单的总结.
2. 归纳在`javascript`中使用位运算的注意事项.
3. 列举在`react`源码中, 对于位运算的高频使用场景.
## 概念
位运算直接处理每一个比特位(bit), 是非常底层的运算, 优势是速度快, 劣势就是不直观且只支持整数运算.
## 特性
| 位运算 | 用法 | 描述 |
| ----------------- | -------- | --------------------------------------------------------------------------- |
| 按位与(`&`) | `a & b` | 对于每一个比特位,两个操作数都为 1 时, 结果为 1, 否则为 0 |
| 按位或(`\|`) | `a \| b` | 对于每一个比特位,两个操作数都为 0 时, 结果为 0, 否则为 1 |
| 按位异或(`^`) | `a ^ b` | 对于每一个比特位,两个操作数相同时, 结果为 1, 否则为 0 |
| 按位非(`~`) | `~ a` | 反转操作数的比特位, 即 0 变成 1, 1 变成 0 |
| 左移(`<<`) | `a << b` | 将 a 的二进制形式向左移 b (< 32) 比特位, 右边用 0 填充 |
| 有符号右移(`>>`) | `a >> b` | 将 a 的二进制形式向右移 b (< 32) 比特位, 丢弃被移除的位, 左侧以最高位来填充 |
| 无符号右移(`>>>`) | `a >> b` | 将 a 的二进制形式向右移 b (< 32) 比特位, 丢弃被移除的位, 并用 0 在左侧填充 |
在[`ES5`规范中](https://www.ecma-international.org/ecma-262/5.1/#sec-11.10), 对二进制位运算的说明如下:
```
The production A : A @ B, where @ is one of the bitwise operators in the productions above, is evaluated as follows:
1. Let lref be the result of evaluating A.
2. Let lval be GetValue(lref).
3. Let rref be the result of evaluating B.
4. Let rval be GetValue(rref).
5. Let lnum be ToInt32(lval).
6. Let rnum be ToInt32(rval).
7. Return the result of applying the bitwise operator @ to lnum and rnum. The result is a signed 32 bit integer.
```
意思是会将位运算中的左右操作数都转换为`有符号32位整型`, 且返回结果也是`有符号32位整型`
- 所以当操作数是浮点型时首先会被转换成整型, 再进行位运算
- 当操作数过大, 超过了`Int32`范围, 超过的部分会被截取
通过以上知识的回顾, 要点如下:
1. 位运算只能在整型变量之间进行运算
2. js 中的`Number`类型在底层都是以浮点数(参考 IEEE754 标准)进行存储.
3. js 中所有的按位操作符的操作数都会被[转成补码(two's complement)](https://www.ecma-international.org/ecma-262/5.1/#sec-9.5)形式的`有符号32位整数`.
所以在 js 中使用位运算时, 有 2 种情况会造成结果异常:
1. 操作数为浮点型(虽然底层都是浮点型, 此处理解为显示性的浮点型)
- 转换流程: 浮点数 -> 整数(丢弃小数位) -> 位运算
2. 操作数的大小超过`Int32`范围(`-2^31 ~ 2^31-1`). 超过范围的二进制位会被截断, 取`低位32bit`.
```
Before: 11100110111110100000000000000110000000000001
After: 10100000000000000110000000000001
```
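这种截断行为可以直接验证(示意代码, `>>> 0` 等价于规范中的 ToUint32, 可以观察被保留的低 32 位):

```javascript
// 一个 44 位的整数, 超过了 Int32 范围
const big = 0b11100110111110100000000000000110000000000001;
// 位运算前会先截取低 32 位: 无符号右移 0 位即可观察截取结果
console.log((big >>> 0).toString(2));
// => '10100000000000000110000000000001'
```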
另外由于 js 语言的[隐式转换](https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Equality_comparisons_and_sameness), 对非`Number`类型使用位运算操作符时会隐式会发生隐式转换, 相当于先使用`Number(xxx)`将其转换为`number`类型, 再进行位运算:
```js
'str' >>> 0; // ===> Number('str') >>> 0 ===> NaN >>> 0 = 0
```
## 基本使用
为了方便比较, 以下演示代码中的注释, 都写成了 8 位二进制数(上文已经说明, 事实上在 js 中, 位运算最终的结果都是 Int32).
枚举属性:
通过位移的方式, 定义一些枚举常量
```js
const A = 1 << 0; // 0b00000001
const B = 1 << 1; // 0b00000010
const C = 1 << 2; // 0b00000100
```
位掩码:
通过位移定义的一组枚举常量, 可以利用位掩码的特性, 快速操作这些枚举产量(增加, 删除, 比较).
1. 属性增加`|`
1. `ABC = A | B | C`
2. 属性删除`& ~`
1. `AB = ABC & ~C`
3. 属性比较
   1. AB 当中包含 B: `(AB & B) === B`
   2. AB 当中不包含 C: `(AB & C) === 0`
   3. A|B 和 AB 相等: `(A | B) === AB`
```js
const A = 1 << 0; // 0b00000001
const B = 1 << 1; // 0b00000010
const C = 1 << 2; // 0b00000100
// 增加属性
const ABC = A | B | C; // 0b00000111
// 删除属性
const AB = ABC & ~C; // 0b00000011
// 属性比较
// 1. AB当中包含B
console.log((AB & B) === B); // true
// 2. AB当中不包含C
console.log((AB & C) === 0); // true
// 3. A|B 和 AB 相等
console.log((A | B) === AB); // true
```
## React 当中的使用场景
在 react 核心包中, 位运算使用的场景非常多. 此处只列举出了使用频率较高的示例.
### 优先级管理 lanes
lanes 是`17.x`版本中开始引入的重要概念, 代替了`16.x`版本中的`expirationTime`, 作为`fiber`对象的一个属性(位于`react-reconciler`包), 主要控制 fiber 树在构造过程中的优先级(这里只介绍位运算的应用, 对于 lanes 的深入分析在[`优先级管理`](../main/priority.md)章节深入解读).
变量定义:
首先看源码[ReactFiberLane.js](https://github.com/facebook/react/blob/v17.0.2/packages/react-reconciler/src/ReactFiberLane.js#L74-L103)中的定义
```js
//类型定义
export opaque type Lanes = number;
export opaque type Lane = number;
// 变量定义
export const NoLanes: Lanes = /* */ 0b0000000000000000000000000000000;
export const NoLane: Lane = /* */ 0b0000000000000000000000000000000;
export const SyncLane: Lane = /* */ 0b0000000000000000000000000000001;
export const SyncBatchedLane: Lane = /* */ 0b0000000000000000000000000000010;
export const InputDiscreteHydrationLane: Lane = /* */ 0b0000000000000000000000000000100;
const InputDiscreteLanes: Lanes = /* */ 0b0000000000000000000000000011000;
const InputContinuousHydrationLane: Lane = /* */ 0b0000000000000000000000000100000;
const InputContinuousLanes: Lanes = /* */ 0b0000000000000000000000011000000;
// ...
// ...
const NonIdleLanes = /* */ 0b0000111111111111111111111111111;
export const IdleHydrationLane: Lane = /* */ 0b0001000000000000000000000000000;
const IdleLanes: Lanes = /* */ 0b0110000000000000000000000000000;
export const OffscreenLane: Lane = /* */ 0b1000000000000000000000000000000;
```
源码中`Lanes`和`Lane`都是`number`类型, 并且将所有变量都使用二进制位来表示.
注意: 源码中变量只列出了 31 位, 由于 js 中位运算都会转换成`Int32`(上文已经解释), 最多为 32 位, 且最高位是符号位. 所以除去符号位, 最多只有 31 位可以参与运算.
[方法定义](https://github.com/facebook/react/blob/v17.0.2/packages/react-reconciler/src/ReactFiberLane.js#L121-L194):
```js
function getHighestPriorityLanes(lanes: Lanes | Lane): Lanes {
// 判断 lanes中是否包含 SyncLane
if ((SyncLane & lanes) !== NoLanes) {
return_highestLanePriority = SyncLanePriority;
return SyncLane;
}
// 判断 lanes中是否包含 SyncBatchedLane
if ((SyncBatchedLane & lanes) !== NoLanes) {
return_highestLanePriority = SyncBatchedLanePriority;
return SyncBatchedLane;
}
// ...
// ... 省略其他代码
return lanes;
}
```
在方法定义中, 也是通过位掩码的特性来判断二进制形式变量之间的关系. 除了常规的位掩码操作外, 特别说明其中 2 个技巧性强的函数:
1. `getHighestPriorityLane`: 分离出最高优先级
```js
function getHighestPriorityLane(lanes: Lanes) {
return lanes & -lanes;
}
```
通过`lanes & -lanes`可以分离出所有比特位中最右边的 1, 具体来讲:
- 假设 `lanes(InputDiscreteLanes) = 0b0000000000000000000000000011000`
- 那么 `-lanes = 0b1111111111111111111111111101000`
- 所以 `lanes & -lanes = 0b0000000000000000000000000001000`
- 相比最初的 InputDiscreteLanes, 分离出来了`最右边的1`
- 通过 lanes 的定义, 数字越小的优先级越高, 所以此方法可以获取`最高优先级的lane`
2. `getLowestPriorityLane`: 分离出最低优先级
```js
function getLowestPriorityLane(lanes: Lanes): Lane {
// This finds the most significant non-zero bit.
const index = 31 - clz32(lanes);
return index < 0 ? NoLanes : 1 << index;
}
```
`clz32(lanes)`返回一个数字在转换成 32 无符号整形数字的二进制形式后, 前导 0 的个数([MDN 上的解释](https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Global_Objects/Math/clz32))
- 假设 `lanes(InputDiscreteLanes) = 0b0000000000000000000000000011000`
- 那么 `clz32(lanes) = 27`, 由于 InputDiscreteLanes 在源码中被书写成了 31 位, 虽然在字面上前导 0 是 26 个, 但是转成标准 32 位后是 27 个
- `index = 31 - clz32(lanes) = 4`
- 最后 `1 << index = 0b0000000000000000000000000010000`
- 相比最初的 InputDiscreteLanes, 分离出来了`最左边的1`
- 通过 lanes 的定义, 数字越小的优先级越高, 所以此方法可以获取最低优先级的 lane
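这两个函数的行为可以用下面的纯 JS 片段验证(示意代码, 逻辑与源码一致):

```javascript
function getHighestPriorityLane(lanes) {
  return lanes & -lanes; // 分离出最右边的 1, 即最高优先级
}

function getLowestPriorityLane(lanes) {
  const index = 31 - Math.clz32(lanes); // 最高非零位的下标
  return index < 0 ? 0 : 1 << index; // 分离出最左边的 1, 即最低优先级
}

const InputDiscreteLanes = 0b0000000000000000000000000011000;
console.log(getHighestPriorityLane(InputDiscreteLanes).toString(2)); // '1000'
console.log(getLowestPriorityLane(InputDiscreteLanes).toString(2)); // '10000'
```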
### 执行上下文 ExecutionContext
`ExecutionContext`定义与`react-reconciler`包中, 代表`reconciler`在运行时的上下文状态(在`reconciler 执行上下文`章节中深入解读, 此处介绍位运算的应用).
[变量定义](https://github.com/facebook/react/blob/v17.0.2/packages/react-reconciler/src/ReactFiberWorkLoop.old.js#L247-L256):
```js
export const NoContext = /* */ 0b0000000;
const BatchedContext = /* */ 0b0000001;
const EventContext = /* */ 0b0000010;
const DiscreteEventContext = /* */ 0b0000100;
const LegacyUnbatchedContext = /* */ 0b0001000;
const RenderContext = /* */ 0b0010000;
const CommitContext = /* */ 0b0100000;
export const RetryAfterError = /* */ 0b1000000;
// ...
// Describes where we are in the React execution stack
let executionContext: ExecutionContext = NoContext;
```
注意: 和`lanes`的定义不同, `ExecutionContext`类型的变量, 在定义的时候采取的是 8 位二进制表示(因为变量的数量少, 8 位就够了, 没有必要写成 31 位).
使用(由于使用的地方较多, 所以举一个[代表性强的例子](https://github.com/facebook/react/blob/v17.0.2/packages/react-reconciler/src/ReactFiberWorkLoop.old.js#L517-L619), `scheduleUpdateOnFiber` 函数是`react-reconciler`包对`react`包暴露出来的 api, 每一次更新都会调用, 所以比较特殊):
```js
// scheduleUpdateOnFiber函数中包含了好多关于executionContext的判断(都是使用位运算)
export function scheduleUpdateOnFiber(
fiber: Fiber,
lane: Lane,
eventTime: number,
) {
if (root === workInProgressRoot) {
// 判断: executionContext 不包含 RenderContext
if (
deferRenderPhaseUpdateToNextBatch ||
(executionContext & RenderContext) === NoContext
) {
// ...
}
}
if (lane === SyncLane) {
if (
// 判断: executionContext 包含 LegacyUnbatchedContext
(executionContext & LegacyUnbatchedContext) !== NoContext &&
// 判断: executionContext 不包含 RenderContext或CommitContext
(executionContext & (RenderContext | CommitContext)) === NoContext
) {
// ...
}
}
// ...
}
```
## 总结
本节介绍了位运算的基本使用, 并列举了位运算在`react`源码中的高频应用. 在特定的情况下, 使用位运算不仅是提高运算速度, 且位掩码能简洁和清晰的表示出二进制变量之间的关系. 二进制变量虽然有优势, 但是缺点也很明显, 不够直观, 扩展性不好(在 js 当中的二进制变量, 除去符号位, 最多只能使用 31 位, 当变量的数量超过 31 位就需要组合, 此时就会变得复杂). 在阅读源码时, 我们需要了解二级制变量和位掩码的使用. 但在实际开发中, 需要视情况而定, 不能盲目使用.
## 参考资料
[ECMAScript® Language Specification(Standard ECMA-262 5.1 Edition) Binary Bitwise Operators](https://www.ecma-international.org/ecma-262/5.1/#sec-11.10)
[浮点数的二进制表示](https://www.ruanyifeng.com/blog/2010/06/ieee_floating-point_representation.html)
[IEEE 754](https://zh.wikipedia.org/wiki/IEEE_754)
## room:archive
### summary
room information
### channel
team
### response
```json
{
"__v": 0,
"topic": "New Topic",
"creator": "536c834d26faf71918b774ea",
"team": "536c99d0460682621f7ea6e5",
"_id": "536c9d223888f40b20b7e278",
"updatedAt": "2014-05-09T09:17:22.959Z",
"createdAt": "2014-05-09T09:17:22.959Z",
"_creatorId": "536c834d26faf71918b774ea",
"_teamId": "536c99d0460682621f7ea6e5",
"id": "536c9d223888f40b20b7e278",
"isArchived": true
}
```
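For consumers of this event, the payload is plain JSON; a quick sketch of reading the fields shown above:

```python
import json

# Sample payload trimmed to the fields used below (values from the response above).
payload = '''
{
  "topic": "New Topic",
  "_id": "536c9d223888f40b20b7e278",
  "_teamId": "536c99d0460682621f7ea6e5",
  "isArchived": true
}
'''

room = json.loads(payload)
if room["isArchived"]:
    print(f"Room {room['topic']!r} ({room['_id']}) archived in team {room['_teamId']}")
```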
# DriverLoaderPro
## Language
[简体中文](README-zh.md)
DriverLoaderPro is a Windows kernel driver loader application.
- Supports minifilter drivers (registry entries are written automatically; no manual steps needed)
- Supports Windows XP through Windows 10, x86 and x64
- Supports sending I/O control codes
- Supports a simple input buffer; you can choose ASCII or Unicode (more options are in the works)
- Supports viewing the output buffer (ASCII, Unicode, hex)
- Friendly UI and helpful reminders
## Screenshot
<h1 align="center">
<img width="1000" height="800" src="gif.gif" alt="Awesome">
<br>
<br>
</h1>
## Dependencies
QT>=5.71
MSC_VER>=1800
## How to Build
That's it: open the solution and press F7 to build.
# The project is not yet complete!
# Shared Icon Pack Self-Service Build System
[](https://www.travis-ci.org/homeii/GxIconDIY)
(The status badge is for reference only; any individual build may succeed or fail.)
Everyone's icons can become an icon pack. How it works: the server commits the icon changes and updated configuration file via git, then Travis CI builds the pack and uploads it to Releases.
Built on [NanoIconPack](https://github.com/by-syk/NanoIconPack)
Build site: [GxIcon](http://1tb.win)
## Check the Releases page for your icon pack's status
## A few small tools
All of them are written in Node.js; run `npm install` first to install dependencies.
### autoMake.js
Builds the pack automatically according to `_automake.json`.
### autoInjector.js
A simple resource-injection tool. Usage (arguments: package name, then file):
```
node autoInjector.js com.coolapk.market 1.png
```
---
title: Usando regras de desempenho para analisar dados | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-debug
ms.topic: conceptual
ms.assetid: 1deed23e-b31b-4714-982f-08ceebfc3096
caps.latest.revision: 21
author: MikeJo5000
ms.author: mikejo
manager: jillfra
ms.openlocfilehash: e0ddfda1e46dc1c5918a4ee1095f39db027bd3fe
ms.sourcegitcommit: 47eeeeadd84c879636e9d48747b615de69384356
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 04/23/2019
ms.locfileid: "63431591"
---
# <a name="using-performance-rules-to-analyze-data"></a>Using performance rules to analyze data
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
The [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] Profiling Tools performance warnings point out problems in a profiled application that can slow program execution. Warnings can also indicate that you might need to change your collection methods to gather more useful data. Performance warnings are generated automatically during a profiling session and appear in the **Error List** window when a profiling data file is open in Visual Studio. From the **Error List** window, you can locate the source code of the problem and view detailed information about the warning, such as how to resolve it. You can also disable warnings that you are not interested in.
> [!NOTE]
> Profiler performance warnings are generated by dynamic analysis of the program's execution and are independent of Code Analysis warnings. Code Analysis can also generate performance warnings for managed code based on static analysis of the source code. For more information, see [Analyzing Managed Code Quality](../code-quality/analyzing-managed-code-quality-by-using-code-analysis.md) and [Performance Warnings](../code-quality/performance-warnings.md).
## <a name="in-this-section"></a>In this section
[How to: View performance warnings](../profiling/how-to-view-performance-warnings.md)
Provides information about opening the **Error List** window to view profiler performance warnings.
[How to: Configure performance rules](../profiling/how-to-configure-performance-rules.md)
Provides information about turning individual performance warnings on or off.
[Performance rules reference](../profiling/performance-rules-reference.md)
Provides detailed information about profiler performance warnings.
# Hook Functions
AliceBot provides several hook functions that can be registered as decorators. Usage:
```python
from alicebot import Bot
bot = Bot()
@bot.bot_run_hook
async def hook_func(_bot: Bot):
pass
if __name__ == '__main__':
bot.run()
```
::: tip Note
If you are not sure what you are doing, do not add hook functions.
:::
## Bot hooks
### Bot startup
```python
@bot.bot_run_hook
async def hook_func(_bot: Bot):
pass
```
### Bot shutdown
```python
@bot.bot_exit_hook
def hook_func(_bot: Bot):
pass
```
::: warning Note
This is the only hook function that is not a coroutine!
:::
## Adapter hooks
### Adapter initialization
```python
@bot.adapter_startup_hook
async def hook_func(_adapter: 'T_Adapter'):
pass
```
### Adapter run
```python
@bot.adapter_run_hook
async def hook_func(_adapter: 'T_Adapter'):
pass
```
### Adapter shutdown
```python
@bot.adapter_shutdown_hook
async def hook_func(_adapter: 'T_Adapter'):
pass
```
## Event-handling hooks
### Event preprocessing
```python
@bot.event_preprocessor_hook
async def hook_func(_event: 'T_Event'):
pass
```
### Event postprocessing
```python
@bot.event_postprocessor_hook
async def hook_func(_event: 'T_Event'):
pass
```
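All of the hooks above follow the same decorator pattern: the decorator records a function that the bot later awaits. A simplified, framework-free sketch of that mechanism (this is not AliceBot's actual implementation):

```python
import asyncio

class MiniBot:
    """Toy illustration of decorator-based hook registration."""

    def __init__(self):
        self._run_hooks = []

    def bot_run_hook(self, func):
        # The decorator only records the coroutine function and returns it unchanged.
        self._run_hooks.append(func)
        return func

    async def run(self):
        # Each registered hook is awaited in registration order.
        for hook in self._run_hooks:
            await hook(self)

bot = MiniBot()
calls = []

@bot.bot_run_hook
async def on_run(b):
    calls.append("run-hook")

asyncio.run(bot.run())
```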
Testbench of GcdUnit from https://code.stanford.edu/ee272/dnn-accelerator-syn/-/tree/master/GcdUnit/design/testbench
---
date: 2017-11-10
title: Docker
categories:
- language-support
description: Docker Package Manager Support in Renovate
type: Document
order: 2
---
Renovate supports upgrading dependencies in Docker's `Dockerfile` files.
## How It Works
1. Renovate will search each repository for any files named (exactly) `Dockerfile`
2. The first `FROM` line will be parsed
3. If no [digest](https://docs.docker.com/engine/reference/commandline/images/) is already in use then Renovate will raise a PR to "pin" that dependency to a Docker digest
4. If the image tag in use "looks" like a semver (e.g. `node:8`, `node:8.9`, `node:8.9.0`, `node:8-onbuild`) then Renovate will look up the Docker registry to determine if any upgrades are available (e.g. `node:8.9.1`).
## Digest Pinning
Pinning your docker images to an exact digest is important for reasons of **immutability**. In short: so every time you `pull`, you get the same content.
If your experience with dependency versioning comes from a place like javascript/npm, you might be used to knowing that exact versions are immutable, e.g. if you specify a version like `2.0.1` then you and your colleagues will always get the exact same "code". What you may not expect is that Docker's tags are not immutable versions even if they look like a version. e.g. you probably expect that `node:8` and `node:8.9` will change over time, but you might incorrectly assume that `node:8.9.0` would never change. Although it probably _shouldn't_, the reality is that it _can_.
Using a docker digest as the image's primary identifier instead of docker tag will achieve immutability but as a human it's quite inconvenient to deal with strings like `FROM node@sha256:552348163f074034ae75643c01e0ba301af936a898d778bb4fc16062917d0430`. The good news is that, as a human you no longer need to manually update such digests once you have Renovate on the job.
Also, to retain some human-friendliness, Renovate will actually retain the tag in the `FROM` line too, e.g. `FROM node:8@sha256:552348163f074034ae75643c01e0ba301af936a898d778bb4fc16062917d0430`. Read on to see how Renovate keeps it up-to-date.
## Digest Updating
If you have followed our advice to go from tags like `node:8` to `node:8@sha256:552348163f074034ae75643c01e0ba301af936a898d778bb4fc16062917d0430`, then you are likely to receive Renovate PRs whenever the `node:8` image is updated on Docker Hub.
Previously this would have been "invisible" to you: one day you pull code that represents `node:8.9.0` and the next day you get `node:8.9.1`. But you can never be sure, especially as Docker caches. Perhaps some of your colleagues, or worse still your build machine, are stuck on an older version with a security vulnerability.
Instead, you will now receive these updates via Pull Requests, or perhaps committed directly to your repository if you enable branch automerge for convenience. This ensures everyone on the team gets the latest versions and is in sync.
## Version Upgrading
Renovate also supports _upgrading_ versions in Docker tags, e.g. from `node:8.9.0` to `node:8.9.1`. If your tags looks like a version, Renovate will upgrade it like a version.
Thanks to this, you may wish to change the way you tag your image dependencies to be more specific, e.g. change from `node:8` to `node:8.9.1` so that every Renovate PR will be more human friendly, e.g. you can know that you are getting a PR because `node` upgraded from `8.9.1` to `8.9.2` and not because `8.9.1` somehow changed.
Currently, Renovate will upgrade minor/patch versions (e.g. from `8.8` to `8.9` or from `8.9.0` to `8.9.1`) by default, but not upgrade major versions. If you wish to enable major versions then add the preset `docker:enableMajor` to your `extends` array in your `renovate.json`.
Renovate has some Docker-specific intelligence when it comes to versions. For example:
* It understands that tag suffixes are frequently used, such as `node:8.9-onbuild`. Renovate will only upgrade from/to the same suffix.
* It understands that some dependencies (e.g. `node` and `ubuntu`) use even numbers for stable and odd for unstable. Renovate won't upgrade from stable to unstable
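As a sketch of that convention (illustrative only; this is not Renovate's actual implementation), the stability check and the resulting upgrade rule look like:

```python
def is_stable(tag: str) -> bool:
    """Even major version = stable, odd major = unstable (node/ubuntu convention)."""
    major = int(tag.split(".")[0])
    return major % 2 == 0

def upgrade_allowed(current: str, candidate: str) -> bool:
    # Never move a stable image onto an unstable (odd) release line.
    return not (is_stable(current) and not is_stable(candidate))
```

Under this convention an upgrade from `8.9.1` to `8.9.2` is fine, while `8.9.1` to `9.0.0` is skipped.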
## Configuring/Disabling
The following configuration options are applicable to Docker:
##### Disable all Docker Renovation
Add `"docker:disable"` to your `extends` array.
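For example, a hypothetical `renovate.json` that keeps a base preset while turning Docker renovation off might look like this:

```
{
  "extends": [
    "config:base",
    "docker:disable"
  ]
}
```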
##### Disable Renovate for only certain Dockerfiles
Add all paths to ignore into the `ignorePaths` configuration field. e.g.
```
"extends": ["config:base"],
...
"ignorePaths": ["docker/old-files/"]
```
##### Enable Docker major updates
Add `"docker:enableMajor"` to your `extends` array.
##### Disable digest pinning
Add `"default:pinDigestsDisabled"` to your `extends` array.
##### Automerge digest updates
Add `"default:automergeDigest"` to your `extends` array. Also add `"default:automergeBranchPush"` if you wish for these to be committed directly to your base branch without raising a PR first.
## Future Features
The following features are planned but not supported today:
1. Multiple `FROM` lines in `Dockerfile`s
2. `ARG` arguments in `Dockerfile` `FROM` lines
3. Custom `Dockerfile` filenames (e.g. `Dockerfile-dev`)
4. Custom Docker registries (only Docker Hub is currently supported)
5. Docker Compose file support
If any of these features are important to you, please add a comment or at least a `+1` in Renovate's [Issues Tracker](https://github.com/renovateapp/renovate/issues?q=is%3Aopen+is%3Aissue+label%3A%23docker).
# Climate Change is Real
This repository contains copies of the climate change websites, data, and other information that was unceremoniously removed from the Environmental Protection Agency. While the EPA removed the websites, it does not eliminate the risk the climate change poses to people. These pages provide scientifically-based information about climate change, the risk it poses, and what we can do to combat it.
This repository allows others to republish information on climate change on their own website, such as [cityofchicago.org/climatechangeisreal](https://www.cityofchicago.org/climatechangeisreal), so people can still access this vital information. Republishing this information across multiple organizations helps raise the profile of climate change risks even higher and makes a statement that the science that demonstrates the rise in global temperatures cannot simply be deleted.
A list of cities who republished these websites are available [here](cities.md).
## Contributing Changes
We invite others to help maintain this repository. If you notice any broken links, missing pages, or if more pages were removed, please [open an issue](/issues) or [submit a pull request](/pulls).
## Hosting climatechangeisreal
There are two ways the website can be hosted in your own domain by either embedding an existing website or hosting the files on your own web servers.
### Embed Website
An unbranded, static copy of the website is hosted at http://climatechangeisreal.s3-website.us-east-2.amazonaws.com/. This website can be embedded using an `<iframe>` on your domain. These files are updated once this repository is updated, ensuring minimal long-term maintenance.
Please note, this is hosted on an "http" URL, so it will result in mixed-content warnings if embedded on "https" sites.
### Host Your Own
To begin, clone the repository to your preferred web server with `git clone https://github.com/Chicago/climatechangeisreal.git`.
Configure your web server using the following instructions. We also recommend that you automatically update the repository on a regular basis with a cron job or scheduled task with `git pull origin master`.
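As an illustration, a crontab entry that pulls upstream changes hourly could look like the following (the repository path is hypothetical; adjust it to your server):

```
# m h dom mon dow  command
0 * * * * cd /var/www/climatechangeisreal && git pull origin master
```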
If git is not a viable solution, you can also simply [download the entire contents](https://github.com/Chicago/climatechangeisreal/archive/master.zip) as a ZIP file.
#### Apache
To host the site in Apache, you need the following Rewrite rules:
```apache
RewriteEngine On
RewriteRule ^/$ /climatechange
RewriteCond %{REQUEST_URI} !^/sites/(.*)
RewriteCond %{REQUEST_URI} !^/branding/(.*)
RewriteRule ^(.+) $1_.html
```
#### IIS
Use the [web.config](web.config) file included in the repo.
### Custom Branding
This repository is designed to allow custom branding to match your site's name and logo. The `/branding` folder contains customizable HTML stubs to integrate into your own website.
* `logo.png` - Logo used in the site header
* `header.html` - Contains the header, which includes `logo.png`
* `footer.html` - Contains the site footer, which includes the copyright notice and reference to the organization
* `menu.html` - Contains a simple "home button" so users can always navigate back to the `/climatechangeisreal` root. This file likely does not need to be changed.
After cloning the repository or pulling in new updates, simply copy your preferred branding to this folder using the above naming conventions.
---
title: Releasing projects on GitHub
intro: 'You can create a release to package software, release notes, and binary files for others to download.'
redirect_from:
- /categories/85/articles/
- /categories/releases/
versions:
free-pro-team: '*'
enterprise-server: '*'
github-ae: '*'
topics:
- Repositories
children:
- /about-releases
- /managing-releases-in-a-repository
- /viewing-your-repositorys-releases-and-tags
- /linking-to-releases
- /comparing-releases
- /automation-for-release-forms-with-query-parameters
---
---
title: "Step 1: Purchase a Disk"
description: Test description
draft: false
enableToc: false
weight: 20
keyword: QingCloud, disk
---
## Scenario
You can purchase a disk together with a cloud server, or purchase a disk on its own.
This section guides you through purchasing a disk separately.
## Procedure
1. Log in to the [QingCloud console](https://console.qingcloud.com/login).
2. In the console navigation bar, choose **Products & Services** > **Storage** > **Cloud Disks** to open the **Disks** page.
<img src="../_images/disk_page.png" alt="Disks" style="zoom:50%;" />
3. Click **Create** to open the **Create Disk** dialog.
<img src="../_images/create_disk.png" alt="Create Disk" style="zoom:50%;" />
4. Configure the basic disk settings as prompted. The parameters are described below:
<table>
    <tr>
        <th style="width: 110px">Parameter</th>
        <th>Description</th>
    </tr>
    <tr>
        <td>Billing mode</td>
        <td>Supports <b>subscription</b> and <b>pay-as-you-go</b> billing; see <a href="/storage/disk/billing/price/">Billing</a> for details.</td>
    </tr>
    <tr>
        <td>Region and zone</td>
        <td>The region and availability zone where the disk resides.<br>
        <li>If there is no cloud server in the selected zone, click <b>Create Server</b> as prompted to create one.</li>
        <li>If the selected zone already contains cloud servers, <b>Attach to server</b> is checked by default; use the drop-down list below it to choose the server to attach.</li><div style="background-color: #D8ECDE; padding: 10px 24px; margin: 10px 0; border-left: 3px solid #00a971;">
        <b>Note</b>:<br>
        <li>The zone cannot be changed after the disk is created.</li>
        <li>A disk can only be attached to a cloud server in the same zone.</li>
        <li>Different zones support different disk types; see <a href="/storage/disk/intro/introduction/#产品类型">Disk types</a>.</li>
        </div></td>
    </tr>
    <tr>
        <td>Disk type</td>
        <td>The type of disk to create.<br>Different disk types support different server types, and the type cannot be changed after the disk is created. See <a href="/storage/disk/intro/introduction/#产品类型">Disk types</a> for details.
        </td>
    </tr>
    <tr>
        <td>Disk capacity</td>
        <td>The capacity of the disk.<br>
        Different disk types support different capacity ranges; hover over the capacity field to see the allowed range.</td>
    </tr>
    <tr>
        <td>Auto renewal</td>
        <td>With subscription billing, you can choose whether to enable auto renewal and set the renewal period.</td>
    </tr>
    <tr>
        <td>Disk name</td>
        <td>The name of the disk.<br>
        Filled in automatically by the system; you can also enter a custom name of 1 to 64 characters. The name can be changed after the disk is created.
        </td>
    </tr>
    <tr>
        <td>Additional features</td>
        <td>
        Enable auto backup: to back up the disk automatically, click <b>Enable auto backup</b> and select a backup policy. If no existing policy fits, click <b>Create scheduled backup policy</b> to create one.</td>
    </tr>
    <tr>
        <td>Disk count</td>
        <td>The number of disks to create. At least 1 by default; multiple disks can be created at once.</td>
    </tr>
</table>
5. When the configuration is complete, click **Buy Now**. The **Confirm Configuration** dialog opens.
6. Review the configuration and its cost, then click **Confirm** to create the disk.
# Order Activity
Order Activity will show up on a person's activity timeline as an event. The event is based on the <code>action</code> sent with the order payload. For example, sending <code>placed</code> will result in a "Placed an order" event and sending <code>updated</code> will result in an "Updated an order" event.
Drip will keep a person’s Lifetime Value (LTV) up-to-date with their orders. For example, if a customer places an order with a <code>grand_total</code> of $100, their LTV will be incremented by $100. If the order is then updated, paid, or fulfilled with a <code>grand_total</code> value of $105, the customer’s LTV will increase by $5. If the order is then canceled or refunded with a <code>refund_amount</code> of $105, the customer’s LTV will decrease by $105.
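The bookkeeping described here can be modeled with a small function (an illustrative sketch only; Drip performs these updates server-side):

```python
def apply_order_event(ltv: float, action: str, grand_total: float = 0.0,
                      refund_amount: float = 0.0, previous_total: float = 0.0) -> float:
    """Mirror the LTV rules: placed/updated/paid/fulfilled add the delta of
    grand_total over the previously recorded total; refunded/canceled
    subtract refund_amount."""
    if action in ("placed", "updated", "paid", "fulfilled"):
        return ltv + (grand_total - previous_total)
    if action in ("refunded", "canceled"):
        return ltv - refund_amount
    return ltv

ltv = 0.0
ltv = apply_order_event(ltv, "placed", grand_total=100.0)                       # +100
ltv = apply_order_event(ltv, "paid", grand_total=105.0, previous_total=100.0)   # +5
ltv = apply_order_event(ltv, "refunded", refund_amount=105.0)                   # -105
```

After the three events above the customer's LTV is back to zero, matching the worked example in the text.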
## Create or update an order
To update an existing order, include the <code>provider</code> and <code>order_id</code> for that order in the payload.
> To record an order event:
```shell
curl -X POST "https://api.getdrip.com/v3/YOUR_ACCOUNT_ID/shopper_activity/order" \
-H "Content-Type: application/json" \
-H 'User-Agent: Your App Name (www.yourapp.com)' \
-u YOUR_API_KEY: \
-d @- << EOF
{
"provider": "my_custom_platform",
"email": "user@gmail.com",
"action": "placed",
"occurred_at": "2019-01-17T20:50:00Z",
"order_id": "456445746",
"order_public_id": "#5",
"grand_total": 22.99,
"total_discounts": 5.34,
"total_taxes": 1.00,
"total_fees": 2.00,
"total_shipping": 5.00,
"currency": "USD",
"order_url": "https://mysuperstore.com/order/456445746",
"items": [
{
"product_id": "B01J4SWO1G",
"product_variant_id": "B01J4SWO1G-CW-BOTT",
"sku": "XHB-1234",
"name": "The Coolest Water Bottle",
"brand": "Drip",
"categories": [
"Accessories"
],
"price": 11.16,
"sale_price": 10.16,
"quantity": 2,
"discounts": 5.34,
"taxes": 1.00,
"fees": 0.50,
"shipping": 5.00,
"total": 23.99,
"product_url": "https://mysuperstore.com/dp/B01J4SWO1G",
"image_url": "https://www.getdrip.com/images/example_products/water_bottle.png",
"product_tag": "Best Seller"
}
],
"billing_address": {
"label": "Primary Billing",
"first_name": "Bill",
"last_name": "Billington",
"company": "Bills R US",
"address_1": "123 Bill St.",
"address_2": "Apt. B",
"city": "Billtown",
"state": "CA",
"postal_code": "01234",
"country": "United States",
"phone": "555-555-5555"
},
"shipping_address": {
"label": "Downtown Office",
"first_name": "Ship",
"last_name": "Shipington",
"company": "Shipping 4 Less",
"address_1": "123 Ship St.",
"city": "Shipville",
"state": "CA",
"postal_code": "01234",
"country": "United States",
"phone": "555-555-5555"
}
}
EOF
```
```ruby
require 'drip'
client = Drip::Client.new do |c|
c.api_key = "YOUR API KEY"
c.account_id = "YOUR_ACCOUNT_ID"
end
response = client.create_order_activity_event(
provider: "my_custom_platform",
email: "user@gmail.com", # or person_id
action: "placed",
occurred_at: "2019-01-17T20:50:00Z",
order_id: "456445746",
order_public_id: "#5",
grand_total: 22.99,
total_discounts: 5.34,
total_taxes: 1.00,
total_fees: 2.00,
total_shipping: 5.00,
currency: "USD",
order_url: "https://mysuperstore.com/order/456445746",
items: [
{
product_id: "B01J4SWO1G",
product_variant_id: "B01J4SWO1G-CW-BOTT",
sku: "XHB-1234",
name: "The Coolest Water Bottle",
brand: "Drip",
categories: [
"Accessories"
],
price: 11.16,
sale_price: 10.16,
quantity: 2,
discounts: 5.34,
taxes: 1.00,
fees: 0.50,
shipping: 5.00,
total: 23.99,
product_url: "https://mysuperstore.com/dp/B01J4SWO1G",
image_url: "https://www.getdrip.com/images/example_products/water_bottle.png",
product_tag: "Best Seller"
}
],
billing_address: {
label: "Primary Billing",
first_name: "Bill",
last_name: "Billington",
company: "Bills R US",
address_1: "123 Bill St.",
address_2: "Apt. B",
city: "Billtown",
state: "CA",
postal_code: "01234",
country: "United States",
phone: "555-555-5555"
},
shipping_address: {
label: "Downtown Office",
first_name: "Ship",
last_name: "Shipington",
company: "Shipping 4 Less",
address_1: "123 Ship St.",
city: "Shipville",
state: "CA",
postal_code: "01234",
country: "United States",
phone: "555-555-5555"
}
)
if response.success?
puts "Request accepted"
else
puts "Error occurred"
end
```
> Responds with a <code>202 Accepted</code> if successful. That means the server accepted the request and queued it for processing. The response includes a unique request_id:
```json
{
"request_id": "990c99a7-5cba-42e8-8f36-aec3419186ef"
}
```
### HTTP Endpoint
`POST /v3/:account_id/shopper_activity/order`
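For reference, the same request can be assembled with Python's standard library alone; the account ID, API key, and payload values below are placeholders mirroring the examples in this document:

```python
import base64
import json
import urllib.request

account_id = "YOUR_ACCOUNT_ID"
api_key = "YOUR_API_KEY"

payload = {
    "provider": "my_custom_platform",
    "email": "user@gmail.com",
    "action": "placed",
    "order_id": "456445746",
    "grand_total": 22.99,
    "currency": "USD",
}

# Drip uses HTTP Basic auth: the API key is the username, the password is blank.
token = base64.b64encode(f"{api_key}:".encode()).decode()

req = urllib.request.Request(
    url=f"https://api.getdrip.com/v3/{account_id}/shopper_activity/order",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
        "User-Agent": "Your App Name (www.yourapp.com)",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; a 202 response with a request_id is expected.
```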
### Arguments
<table>
<thead>
<tr>
<th>Property</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>provider</code></td>
<td>Required. The identifier for the provider from which the order data was received in lower snake cased form. For example, <code>shopify</code> or <code>my_custom_platform</code> or <code>my_store</code>.</td>
</tr>
<tr>
<td><code>person_id</code></td>
<td>Optional. The unique identifier of a person at Drip. Either <code>person_id</code> or <code>email</code> must be included. If both are included, <code>email</code> is ignored.</td>
</tr>
<tr>
<td><code>email</code></td>
<td>Optional. The person's email address. Either <code>person_id</code> or <code>email</code> must be included. If both are included, <code>email</code> is ignored.</td>
</tr>
<tr>
<td><code>action</code></td>
    <td>Required. The event's action. For Order Activity, this can be either <code>placed</code>, <code>updated</code>, <code>paid</code>, <code>fulfilled</code>, <code>refunded</code>, or <code>canceled</code>. These actions will result in "Placed an order", "Updated an order", "Paid an order", "Fulfilled an order", "Refunded an order", and "Canceled an order" events on the person's timeline, respectively.</td>
    </tr>
<tr>
<td><code>occurred_at</code></td>
<td>Optional. The String time at which the order occurred in <a href="http://en.wikipedia.org/wiki/ISO_8601">ISO 8601</a> format. Defaults to the current time.</td>
</tr>
<tr>
<td><code>order_id</code></td>
<td>Required. A unique, internal id for the order (generally the primary key generated by the ecommerce platform).</td>
</tr>
<tr>
<td><code>order_public_id</code></td>
<td>Optional. A public, customer-facing identifier for the order. This will be displayed in the Drip UI and should correspond to the identifier a customer might see in their own order.</td>
</tr>
<tr>
<td><code>grand_total</code></td>
<td>Optional. The total amount of the order. This should include any applicable discounts. Defaults to 0.</td>
</tr>
<tr>
<td><code>total_discounts</code></td>
<td>Optional. The discounts on the entire order. Defaults to 0.</td>
</tr>
<tr>
<td><code>total_taxes</code></td>
<td>Optional. The taxes on the entire order. Defaults to 0.</td>
</tr>
<tr>
<td><code>total_fees</code></td>
<td>Optional. The fees on the entire order. Defaults to 0.</td>
</tr>
<tr>
<td><code>total_shipping</code></td>
<td>Optional. The shipping on the entire order. Defaults to 0.</td>
</tr>
<tr>
<td><code>refund_amount</code></td>
<td>Optional. To adjust a person’s lifetime value for a refund or cancelation, set <code>refund_amount</code> to the amount of the refund and leave <code>grand_total</code> unchanged. Defaults to 0.</td>
</tr>
<tr>
<td><code>currency</code></td>
<td>Optional. The alphabetic <a href="https://en.wikipedia.org/wiki/ISO_4217">ISO 4217</a> code for the currency of the purchase.</td>
</tr>
<tr>
<td><code>order_url</code></td>
<td>Optional. A URL that links back to the shopper’s order on your ecommerce platform.</td>
</tr>
<tr>
<td><code>items</code></td>
<td>
An Array of objects containing information about specific order items.
<br><br>
<table>
<thead>
<tr>
<th>Key</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>product_id</code></td>
<td>Required. A unique identifier for the product from the <code>provider</code>.</td>
</tr>
<tr>
<td><code>product_variant_id</code></td>
<td>Optional. If applicable, a unique identifier for the specific product variant from the provider. For example, a t-shirt may have one product_id, but several product_variant_ids for sizes and colors.</td>
</tr>
<tr>
<td><code>sku</code></td>
<td>Optional. The product SKU.</td>
</tr>
<tr>
<td><code>name</code></td>
<td>Required. The product name.</td>
</tr>
<tr>
<td><code>brand</code></td>
<td>Optional. The product's brand, vendor, or manufacturer.</td>
</tr>
<tr>
<td><code>categories</code></td>
<td>Optional. An array of categories associated with the product (e.g. shoes, vitamins, books, videos).</td>
</tr>
<tr>
<td><code>price</code></td>
<td>Required. The price of a single product.</td>
</tr>
<tr>
<td><code>quantity</code></td>
<td>Optional. The quantity of the item ordered. Defaults to 1.</td>
</tr>
<tr>
        <td><code>discounts</code></td>
        <td>Optional. The discounts on the items, taking quantity into account. For example, a $2.66 discount per item would be $5.34 if that item was of quantity 2. Defaults to 0.</td>
</tr>
<tr>
<td><code>taxes</code></td>
<td>Optional. The taxes on the items, taking quantity into account. For example, a $0.50 taxes per item would be $1.00 if that item was of quantity 2. Defaults to 0.</td>
</tr>
<tr>
<td><code>fees</code></td>
<td>Optional. Any additional fees on the items, taking quantity into account. For example, a $0.25 fee per item would be $0.50 if that item had a quantity of 2. Defaults to 0.</td>
</tr>
<tr>
<td><code>shipping</code></td>
<td>Optional. The shipping cost on the items, taking quantity into account. For example, a $2.50 shipping cost per item would be $5.00 if that item had a quantity of 2. Defaults to 0.</td>
</tr>
<tr>
<td><code>total</code></td>
<td>Optional. The line item total after quantity, discount, taxes, fees, and shipping. Defaults to 0.</td>
</tr>
<tr>
<td><code>product_url</code></td>
<td>Optional. A URL to the site containing product details.</td>
</tr>
<tr>
<td><code>image_url</code></td>
<td>Optional. A direct URL to an image of the product. We recommend using an image type that is supported by modern email clients (e.g. JPEG, GIF, PNG). For best display results, image size should be consistent across all products.</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td><code>billing_address</code></td>
<td>
An object containing billing address information.
<br><br>
<table>
<thead>
<tr>
<th>Key</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>label</code></td>
<td>Optional. The label describing the billing address.</td>
</tr>
<tr>
<td><code>first_name</code></td>
<td>Optional. The first name on the billing address.</td>
</tr>
<tr>
<td><code>last_name</code></td>
<td>Optional. The last name on the billing address.</td>
</tr>
<tr>
<td><code>company</code></td>
<td>Optional. The company on the billing address.</td>
</tr>
<tr>
<td><code>address_1</code></td>
<td>Optional. The billing street address.</td>
</tr>
<tr>
<td><code>address_2</code></td>
<td>Optional. Additional line of the billing street address.</td>
</tr>
<tr>
<td><code>city</code></td>
<td>Optional. The billing address city.</td>
</tr>
<tr>
<td><code>state</code></td>
<td>Optional. The billing address state.</td>
</tr>
<tr>
<td><code>postal_code</code></td>
<td>Optional. The billing address postal code.</td>
</tr>
<tr>
<td><code>country</code></td>
<td>Optional. The billing address country.</td>
</tr>
<tr>
<td><code>phone</code></td>
<td>Optional. The phone number associated with the billing address.</td>
</tr>
</tbody>
</table>
</td>
</tr>
<tr>
<td><code>shipping_address</code></td>
<td>
An object containing shipping address information.
<br><br>
<table>
<thead>
<tr>
<th>Key</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>label</code></td>
<td>Optional. The label describing the shipping address.</td>
</tr>
<tr>
<td><code>first_name</code></td>
<td>Optional. The first name on the shipping address.</td>
</tr>
<tr>
<td><code>last_name</code></td>
<td>Optional. The last name on the shipping address.</td>
</tr>
<tr>
<td><code>company</code></td>
<td>Optional. The company on the shipping address.</td>
</tr>
<tr>
<td><code>address_1</code></td>
<td>Optional. The shipping street address.</td>
</tr>
<tr>
<td><code>address_2</code></td>
<td>Optional. Additional line of the shipping street address.</td>
</tr>
<tr>
<td><code>city</code></td>
<td>Optional. The shipping address city.</td>
</tr>
<tr>
<td><code>state</code></td>
<td>Optional. The shipping address state.</td>
</tr>
<tr>
<td><code>postal_code</code></td>
<td>Optional. The shipping address postal code.</td>
</tr>
<tr>
<td><code>country</code></td>
<td>Optional. The shipping address country.</td>
</tr>
<tr>
<td><code>phone</code></td>
<td>Optional. The phone number associated with the shipping address.</td>
</tr>
</tbody>
</table>
</td>
</tr>
</tbody>
</table>
The API will also allow custom attributes to be passed in. These will be exposed within Drip just like <a href="https://help.drip.com/hc/en-us/articles/115003737312-Event-Properties">event properties</a>.
For example, if your platform includes the concept of product tags, you can include a <code>product_tag</code> attribute in the JSON that can be used in a Drip automation or email <a href="https://help.drip.com/hc/en-us/articles/115003737312-Event-Properties#access-properties">via Liquid</a>. You can attach custom attributes either to events or their items.
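To make the payload shape concrete, here is a minimal sketch in Python of assembling an order from the field tables above. The `validate_order_item` helper and all sample values are invented for illustration and are not part of the API itself:

```python
# Illustrative sketch: build an order payload following the field tables above.
# Per item, only product_id, name, and price are required; the rest is optional.
REQUIRED_ITEM_KEYS = {"product_id", "name", "price"}

def validate_order_item(item):
    """Return the sorted list of required keys missing from one item dict."""
    return sorted(REQUIRED_ITEM_KEYS - item.keys())

order = {
    "currency": "USD",  # optional alphabetic ISO 4217 code
    "order_url": "https://shop.example/orders/123",  # hypothetical URL
    "items": [
        {
            "product_id": "B01J4SWO1G",
            "name": "The Coolest Cookbook",
            "price": 19.99,
            "quantity": 2,
            "discount": 5.34,       # quantity-aware, as described above
            "product_tag": "chef",  # custom attribute, exposed like event properties
        }
    ],
}

item_problems = [validate_order_item(i) for i in order["items"]]
```

Any key outside the documented set (like `product_tag` here) simply rides along as a custom attribute.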
# Mahou Shoujo Lyrical Nanoha: Comic à la carte

- **type**: manga
- **volumes**: 2
- **original-name**: 魔法少女リリカルなのは コミックアラカルト
## Tags
- comedy
- ecchi
- fantasy
- school
- sci-fi
- slice-of-life
## Authors
- None
## Synopsis
Anthologies of Nanoha shorts
## Links
- [MyAnimeList](https://myanimelist.net/manga/25218/Mahou_Shoujo_Lyrical_Nanoha__Comic_%C3%A0_la_carte)
---
layout:
name: Nicolas Boulant
position: pi
cat: metric
subcat: acquisition-methodology
avatar:
joined: 2006
---
---
layout: page
categories:
- blog
title: What's the fastest way to run foldr in Swift?
---
I've been studying Haskell bit by bit, so I wanted to try implementing ```foldr``` and ```foldl``` in Swift. As you know, Swift's ```CollectionType``` already implements ```reduce```, so there is no need to implement ```foldl```.
Still, that felt a bit lonely, so I implemented ```foldr``` as an ```extension``` on ```CollectionType``` in several ways: recursion, a loop, and so on.
As a follow-up, [@norio_nomura](https://twitter.com/norio_nomura) pressed me about speed, so I measured it.
extension CollectionType {
func foldr_recursive<T>(accm:T, f: (Self.Generator.Element, T) -> T) -> T {
var g = self.generate()
func next() -> T {
return g.next().map {x in f(x, next())} ?? accm
}
return next()
}
func foldr_reduce<T>(accm:T, f: (T, Self.Generator.Element) -> T) -> T {
return self.reverse().reduce(accm){f($1, $0)}
}
func foldr_loop<T>(accm:T, f: (Self.Generator.Element, T) -> T) -> T {
var result = accm
for temp in self.reverse() {
result = f(temp, result)
}
return result
}
}
Furthermore, when implemented as a plain ```CollectionType``` ```extension```, calling ```reverse()``` on an ```Array``` naively copies it into a reversed array. To avoid that, I also implement another ```extension``` as follows.
// following code is written by @norio_nomura
extension CollectionType where Index : RandomAccessIndexType {
func foldr_loop2<T>(accm:T, @noescape f: (Self.Generator.Element, T) -> T) -> T {
var result = accm
for temp in self.reverse() {
result = f(temp, result)
}
return result
}
func foldr_reduce2<T>(accm:T, @noescape f: (T, Self.Generator.Element) -> T) -> T {
return self.reverse().reduce(accm) { f($0, $1) }
}
}
The measurement test is set up as follows.
I used array sizes of 100, 1,000, 10,000, 100,000, and 1,000,000 elements, ran each computation 1,000 times, and measured the elapsed time.
When measuring in a Playground, I couldn't set optimization flags and it didn't seem to be executing just the code itself, so I measured execution speed with an OS X unit test instead.
The measurement code is [here](https://gist.github.com/sonsongithub/b897f516005f53bc3748).
<img src="{{ site.baseurl }}/assets/foldr-2.png" width="50%"/>
1. recursive: implemented plainly with recursion.
2. loop: reverse the list with reverse() and fold it with a loop.
3. loop2: reverse the list lazily and fold it with a loop.
4. reduce: foldr built internally on reduce.
5. reduce2: the reduce version above, changed to use a lazy reverse.
6. test_reduce: a version implemented with reduce alone.
```recursive``` blew the stack and could not finish once the array got large.
I suspect reduce and loop are slow because ```SequenceType```'s ```reverse()``` runs dutifully each time, generating a reversed array on every call.
To avoid this, reduce2 and loop2 are implemented as extensions on ```CollectionType where Index : RandomAccessIndexType```.
In the end, if you are going to implement foldr, loop2, that is, the version implemented via ```extension CollectionType where Index : RandomAccessIndexType```, was the fastest, so that is probably the best choice.
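For comparison outside Swift, the same three strategies can be sketched in Python. This is an illustrative analogue, not part of the original benchmark; Python's `reversed()` returns a lazy view over a sequence, much like the `RandomAccessIndexType` version above:

```python
from functools import reduce

def foldr_recursive(xs, acc, f):
    # f(x0, f(x1, ... f(xn-1, acc))); blows the stack for large xs,
    # just like the recursive Swift version.
    if not xs:
        return acc
    return f(xs[0], foldr_recursive(xs[1:], acc, f))

def foldr_loop(xs, acc, f):
    # Loop over a lazy reversed view: the loop2 strategy.
    result = acc
    for x in reversed(xs):
        result = f(x, result)
    return result

def foldr_reduce(xs, acc, f):
    # foldr expressed as a left fold over the reversed sequence.
    return reduce(lambda a, x: f(x, a), reversed(xs), acc)

xs = list(range(10))
sub = lambda x, a: x - a  # non-associative, so ordering mistakes would show up
results = {foldr_recursive(xs, 0, sub), foldr_loop(xs, 0, sub), foldr_reduce(xs, 0, sub)}
```

All three agree on a non-associative operator, which is the easiest way to confirm a fold really associates to the right.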
---
layout: post
title: "배달 어플 API 구현 및 명세서 작성하기 - GET 방식 사용"
subtitle: "배달 어플 API 구현 및 명세서 작성하기 - GET 방식 사용"
categories: restful
tags: DB API PHP GET
comments: false
---
## Implementing the API and writing the spec - GET method 1
### 🍅 Writing the API specification

Based on the specification above, let's start developing the API.😆
<br>
<hr>
### 🍨 API: output all User information
```
$r->addRoute('GET', '/users', ['IndexController', 'getUsers']);
```
Route definition in Index.php
```
/*
* API No. 1
* API Name : User 모든 정보 출력 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getUserInfo":
http_response_code(200);
$keyword = $_GET['userId'];
if(!isValidID($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getUserInfo($keyword);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getUserInfo($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT * FROM User WHERE UserId = ? AND IsDeleted = 'N';";
$st = $pdo->prepare($query);
// $st->execute([$param,$param]);
$st->execute([$keyword]); //파라미터 list 형태로 넣을 것
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
return $res;
}
function isValidId($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT EXISTS (SELECT * FROM User WHERE UserId = ? AND IsDeleted = 'N') AS exist;";
$st = $pdo->prepare($query);
$st->execute([$keyword]);
// $st->execute();
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
//echo json_encode($res);
return intval($res[0]['exist']);
}
```
Feature implementation in IndexPdo.php


<br>
<hr>
### 🍗 API: output User information for the MY Baemin page
```
$r->addRoute('GET', '/user/{user_id}', ['IndexController', 'getUser']);
```
Route definition in Index.php
```
/*
* API No. 2
* API Name : MY배민 페이지의 User 정보 출력 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getUser":
http_response_code(200);
$keyword = $_GET['userId'];
if(!isValidID($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getUser($keyword);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getUser($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT User.Name, User.Level, COUNT(*) AS Coupon
FROM User
JOIN Coupon
ON User.UserIdx= Coupon.UserIdx AND Coupon.isUsed = 'N'
WHERE UserId = ? AND isDeleted = 'N'
GROUP BY User.UserIdx;";
$st = $pdo->prepare($query);
// $st->execute([$param,$param]);
$st->execute([$keyword]); //파라미터 list 형태로 넣을 것
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
return $res;
}
```
Feature implementation in IndexPdo.php


<br>
<hr>
### 🍇 API: look up point usage history
```
$r->addRoute('GET', '/user/{user_id}/point', ['IndexController', 'getUserPoint']);
```
Route definition in Index.php
```
/*
* API No. 3
* API Name : 포인트 이용 내역 조회 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getUserPoint":
http_response_code(200);
$keyword = $_GET['userId'];
if(!isValidID($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getUserPoint($keyword);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getUserPoint($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT Point, date_format(Date,'%Y-%m-%d') AS DATE , EndDate, Point.IsDeleted As Used, Store
FROM Point
JOIN User ON User.UserId = ? AND Point.UserIdx = User.UserIdx
ORDER BY Date DESC;
";
$st = $pdo->prepare($query);
// $st->execute([$param,$param]);
$st->execute([$keyword]); //파라미터 list 형태로 넣을 것
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
return $res;
}
```
Feature implementation in IndexPdo.php


<br>
<hr>
### 🍒 API: look up total points held
```
$r->addRoute('GET', '/user/{user_id}/point/sum', ['IndexController', 'getUserPointSum']);
```
Route definition in Index.php
```
/*
* API No. 4
* API Name : 보유 포인트 합계 조회 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getUserPointSum":
http_response_code(200);
$keyword = $_GET['userId'];
if(!isValidID($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getUserPointSum($keyword);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getUserPointSum($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT CONCAT(sum(Point), '원') AS Sum FROM Point
JOIN User ON UserId = ?
WHERE User.UserIdx = Point.UserIdx AND Point.IsDeleted <= 0
GROUP BY UserID;
";
$st = $pdo->prepare($query);
// $st->execute([$param,$param]);
$st->execute([$keyword]); //파라미터 list 형태로 넣을 것
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
return $res[0];
}
```
Feature implementation in IndexPdo.php


<br>
<hr>
### 🍟 API: look up favorited stores, instant-payment, and phone-order lists
```
$r->addRoute('GET', '/user/{user_id}/choose', ['IndexController', 'getUserChoose']);
```
Route definition in Index.php
```
/*
* API No. 5
* API Name : 찜한가게, 바로결제, 전화주문 목록 조회 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getUserChoose":
http_response_code(200);
$keyword = $_GET['userId'];
$flag = $_GET['flag'];
if(!isValidId($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
if($flag > 2){
$res->isSuccess = FALSE;
$res->code = 300;
$res->message = "유효하지 않은 flag입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getUserChoose($keyword, $flag);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getUserChoose($keyword, $flag)
{
$pdo = pdoSqlConnect();
if($flag == 2) {
$query = "SELECT Store.Name, Store.Star, Store.Min, Store.Represent, Store.IsOpen, Store.IsOrder
FROM Choose
JOIN User on User.UserId = ?
JOIN Store
ON Choose.UserIdx = User.UserIdx AND Choose.StoreIdx = Store.StoreIdx;";
}
else {
$query = "SELECT Store.Name, Store.Star, Store.Min, Store.Represent, Store.IsOpen, Store.IsOrder
FROM OrderMenu
JOIN User on User.UserId = ?
JOIN Store
ON OrderMenu.UserIdx = User.UserIdx AND OrderMenu.Payment = ? AND OrderMenu.StoreIdx = Store.StoreIdx;";
}
$st = $pdo->prepare($query);
    // The favorites query (flag == 2) has only one placeholder,
    // so pass $flag only for the OrderMenu query.
    if($flag == 2) {
        $st->execute([$keyword]);
    }
    else {
        $st->execute([$keyword, $flag]);
    }
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
return $res;
}
```
Feature implementation in IndexPdo.php


<br>
<hr>
### 🍻 API: output Store details
```
$r->addRoute('GET', '/store/{store_id}', ['IndexController', 'getStore']);
```
Route definition in Index.php
```
/*
* API No. 6
* API Name : Store 상세 정보 출력 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getStore":
http_response_code(200);
$keyword = $_GET['storeId'];
if(!isValidStoreId($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getStore($keyword);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getStore($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT Store.Name, Star, Min, Type, CONCAT(FORMAT(Store.Tip , 0), '원') AS Tip, Store.Phone, Store.DeliveryTime, Explan
FROM Store
WHERE Store.StoreID = ?;";
$st = $pdo->prepare($query);
$st->execute([$keyword]);
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
$res = array_filter($res[0]);
return $res;
}
function isValidStoreId($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT EXISTS (SELECT * FROM Store WHERE StoreId = ? AND IsDeleted = 'N') AS exist;";
$st = $pdo->prepare($query);
$st->execute([$keyword]);
// $st->execute();
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
//echo json_encode($res);
return intval($res[0]['exist']);
}
```
Feature implementation in IndexPdo.php


<br>
<hr>
### 🍏 API: look up Store recent review / owner comment counts
```
$r->addRoute('GET', '/store/{store_id}/review-cnt', ['IndexController', 'getStoreReview']);
```
Route definition in Index.php
```
/*
* API No. 7
* API Name : Store 최근리뷰 / 사장님 댓글 개수 조회 API
* 마지막 수정 날짜 : 20.08.14
*/
case "getStoreReview":
http_response_code(200);
$keyword = $_GET['storeId'];
if(!isValidStoreId($keyword)){
$res->isSuccess = FALSE;
$res->code = 200;
$res->message = "유효하지 않은 ID입니다.";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
}
$res->result = getStoreReview($keyword);
$res->isSuccess = TRUE;
$res->code = 100;
$res->message = "정보 출력 성공";
echo json_encode($res, JSON_NUMERIC_CHECK);
break;
```
Related format definition in IndexController.php
```
function getStoreReview($keyword)
{
$pdo = pdoSqlConnect();
$query = "SELECT COUNT(*) AS Review, COUNT(Review.Comment) AS Comments
FROM Store
JOIN Review
ON Review.StoreIdx = Store.StoreIdx
WHERE StoreId = ?;";
$st = $pdo->prepare($query);
$st->execute([$keyword]);
$st->setFetchMode(PDO::FETCH_ASSOC);
$res = $st->fetchAll();
$st = null;
$pdo = null;
return $res[0];
}
```
Feature implementation in IndexPdo.php


#2020-05-29
10:18:04
法庭文字直播台
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#1027尖沙咀 \#提堂
鄭(19)
控罪:
管有任何物品意圖摧毀或損壞財產
被控2019年10月27日,在尖沙咀梳士巴利道,管有鎚仔、噴漆和兩把鉗。
辯方要求押後答辯等候被告醫療報告。
案件押後至7月31號1430九龍城裁判法院第一庭答辯
✅期間繼續以原有條件保釋✅
---
10:29:05
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#新案件 \#0901鑽石山
盧(22)
控罪:
(1)刑事損壞
被控於2019年9月1日,在鑽石山站內,無合理辯解而損壞屬於香港鐵路有限公司的三部閉路電視、三部售票機及一部中銀櫃員機
(2)管有任何物品意圖摧毀或損壞財產
被控於同日同地,管有一樽白油漆、一支油漆掃、兩把剪刀、一把鎅刀、兩支黑噴漆、一支白噴漆及一個打火機,意圖在無合法辯解的情況下使用或導致他人使用或准許他人使用該物品以摧毀或損壞屬於另一人的財產
✅保釋批准✅
條件如下:
原有警署保釋金額
不離港
在報稱地址居住
宵禁令0000至0600
案件押後至6月26日1430 九龍城裁判法院第一庭再訊
---
10:37:21
\#西九龍裁判法院第三庭
\#高偉雄裁判官
賴(19)已還押逾9個月 \#覆核聆訊 \#判刑(\#0714沙田 襲警; 2月15日判罰款$2000)
裁決
法庭宣讀首先指出控方原有案情及被告背景,留意到被告的背景及家庭因素導致小四後患上適應性失調等情緒病,父母分離後亦出現家庭改變,每兩星期與父親見面才可獲唯一的300元生活費。被告需自行間斷地兼職,每次為期6-7日。
14年後因社運改變社交生活,亦因小時於母親去過64晚會而受啟蒙,因而在19年6月開始參與和平集會。
裁判官特別提到被告對感化官指出「法律上有錯,道德上冇錯,但係會接受法律後果」。
覆核聆訊時,控方主要論點有關被告行為具挑釁性、會促使在場人士情緒、於多次警戒下沒有離開,而沒有於判刑中訂明。
裁判官當時審視雙方提出案例後,指出本案案情與譯音Chiu Ka Ming 案例相似,但該案被告為成年人。該案判處1個月監禁。另外,法官指留意到辯方所提出案例 朱家言 案的案情較本案嚴重而被告年齡相若,該案原判處1個月監禁,經上訴後改判感化令及社會服務令。裁判官認為朱家言案具參考性,裁判官故認為控方所指並不為正確。
裁判官此後援引案例證明英國對襲警罪判刑指引具參考價值。
英國襲警判刑指引分級制:
1\. 最嚴重,最高判監12星期
2\. 最高判處社會服務令
3\. 最輕微,罰款
此外判刑指引亦有清楚指出加減刑原因。
裁判官指理解被告行為極不可能造成傷害,亦不能稱為十分有意圖。被告被控方指於感化官前沒有悔意,裁判官回應指沒有悔意不能作為加刑因素,亦指出「認罪本身就係表達悔意嘅其中一種方式」,法庭接納被告為真誠睇住認罪及有悔意,維持因此的減刑決定。
法庭又指出被告的家庭背景及沒有足夠支持為被告犯案的最大原因。又指於此基礎下判監為不適合。若非涉及另一區院案件裁判官「會考慮感化方式處理」。但裁判官亦指出判詞為環境、背景、及案情加成下的特例。
裁判官最後亦指控方於覆核提出案情不完整而要求傳召兩名證人的要求不妥當,對被告審訊極頭不公。
法庭駁回控方覆核申請
---
10:50:14
\#區域法院第卅六庭
\#沈小民法官 \#審訊 \[2/3\]
林(19) \#1001黃大仙
[PART 1](https://t.me/youarenotalonehk_live/5427)
[PART 2](https://t.me/youarenotalonehk_live/5447)
傳召PW3 偵緝警員12804吳溢輝,駐守東九龍重案組1B隊
作供時指出:
1) 從來無申請手令檢閱閉路電視
2) 收到消息(不是口供),指案發地點東頭邨道與正德街交界
於2019年10月9日到東頭邨道與正德街附近視察,希望索取拍到案發情況的閉路電視片段。認為基協中學入口的閉路電視最有可能拍攝到案發現場,但沒有入內查看拍攝角度
案發地點
1) 東頭邨道近龍慧樓
2) 東頭邨道與正德街交界(被捕地點)
書面證人供詞指「並無發現有任何閉路電視可拍攝上址」
控方所有證人作供完畢,表面證供成立
休庭至11:00,待辯方處理文件及安排證人
[下一節](https://t.me/youarenotalonehk_live/5479)
---
11:12:09
法庭文字直播台
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
林,劉(22-27)已還押2天 \#提堂(\#20200515黃大仙 對他人身體加以嚴重傷害:私了阻貼文宣休班警A)
開庭
---
11:15:56
法庭文字直播台
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#20200515鑽石山
林,劉(22-27)已還押2天 \#提堂(\#20200515黃大仙 對他人身體加以嚴重傷害:私了阻貼文宣休班警A)
D1林(27歲)
D2劉(22歲)
控罪:
對他人身體加以嚴重傷害罪
控罪指二人於今年5月15日在黃大仙龍蟠苑外一條行人天橋,與其他不知名人士非法及惡意對一名休班警員A的身體加以嚴重傷害
由於警方認人程序,未能認出被告,控申押後等指模、DNA化驗
押後至7月24下午九龍城1庭
✅准保釋✅
現金3000
不離港
---
13:46:24
法庭文字直播台
\#觀塘裁判法院第八庭
\#鍾明新裁判官
\#審訊
譚b
控罪:襲擊在正當執行職務的警員 (違反侵害人身罪條例 第36條)
案情:被指於8月4日在黃大仙便衣警警車上襲擊警員 8235
控方傳召2名證人(受襲警員、拍攝警員),並額外傳召2名證人(調查探員、醫生證人)供辯方盤問
事實爭議:被告人有否曾襲擊警員
承認事實:
1\. 合法拘捕—正當地押解被告人
2\. 不爭議是否合法自衛
3\. 受襲警員的傷勢報告
控方依賴影片證實被告犯事前的情緒激動
片段1:
影片錄得被告人被截查時,裸露半身,警方命令被告將雙手放背後,被告聽從,言論間不滿警方作為,其後表示會「瞓街」並實行
片段2:警方叫被告人執拾隨身物品,被告人與搜查警員鬥咀,警方以「涉嫌非法集結」拘捕被告人
PW1(聲稱受襲的警員) 作供
控方指稱被告人於便衣警車輛內襲警
7人私家車 車內位置:
頭排:司機
中排:警長58602、女警員4965、男警員13556
後排:被告(左)、PW1(右)
PW1指在車內聽到被告人喃喃自語,聽到被告說「死黑警、死全家」,突然聽到被告方向傳來「咔」一聲,然後望向被告人,被告人即時用頭撞向PW1左額、左眼角,PW1隨即嘗試制服被告人並扣上手扣,被告人試圖爭扎,PW1隨後感受痛楚,PW1勸喻被告人冷靜下來,被告人下車後隨即跪下、雙手合十並唸經
辯方律師則指被告在車內要求穿衣被拒,當被告人偷偷地使用電話時,PW1搶手機不果後聯同PW2對被告人施襲,辯方律師形容「警方一路打、一路搶手機、一路講『你做咩襲警』」被告人面部紅腫,鼻樑骨折並流鼻血,警員對被告受襲及跪下作出嘲諷
爭拗點:
1\. 被告是施襲者還是受襲者
2\. 警員是否用水淋被告意圖洗走被告血跡,方便於警署內為被告拍照
3\. 警員在於06時分抄寫的記事簿是依照同日17時分錄取的口供所補抄
PW1已接受辯方大部分提問
1430再續
---
14:45:17
法庭文字直播台
\#沙田裁判法院第一庭
\#鄧少雄署理主任裁判官
\#轉介文件
\#1112中大
D1:陳 (21)
D2:李 (24)
D3:張 (18)
D4:鄧 (23)
控罪:
(1)D1-4:暴動
(2)D1:在身處非法集結時使用蒙面物品
(3)D3:在身處非法集結時使用蒙面物品
(4)D4:在身處非法集結時使用蒙面物品
(5)D3:在公眾地方管有攻擊性武器
詳情:
(1)D1-4同被控於19年11月12日在中大二號橋及環迴東路一帶,與黃女子及其他不知名人士參與暴動
(2)-(4)D1/3/4各被控於同日同地在身處非法集結時使用蒙面物品
(5)D3被控於同日同地攜有攻擊性武器,即一個能發出雷射光束的裝置
保釋相關事宜:
維持原有條件保釋
案件轉介至6月18日1430時區域法院應訊。
---
14:45:44
\#觀塘裁判法院第一庭
\#徐綺薇署理主任裁判官
\#20200523牛頭角
D3: \* 15歲少年
控罪:串謀有意圖而傷人 (違反侵害人身罪 第17條 及 刑事罪行條例 第159A 條)
被指於20年5月23日牛頭角下邨行人天橋有意圖而串謀非法及惡意傷害女子何麗英,令她身體受嚴重傷害。
D1及D2當場被捕;D3逃逸後於稍後時間被捕,其後帶領警員於案發地點附近的垃圾桶找尋涉案菜刀,3名被控人於警誡下承認使用菜刀用作恐嚇控方證人,據稱事發原因為政見爭拗
❌拒絕保釋❌
保留8日保釋覆核權利
於6月5日 1430 觀塘裁判法院第一庭作覆核保釋聆訊
案件押後至7月3日1430 觀塘裁判法院第一庭再訊
---
15:01:27
法庭文字直播台
\#東區裁判法院第一庭
\#錢禮主任裁判官
\#1102香港仔 \#提堂
招 (22)
控罪:在公眾地方造成阻礙
案情:被吿於2019年11月2日,於香港仔南豐道黃竹坑道交界,站立在行車道上張開手,被指嚴重阻礙交通,被警員26799以公眾地方行為不檢拘捕。
涉案現場沒有閉路電視,證物不足以起訴,控方以證據不足撤回控罪。
撤回控罪
⁃ 守行為 12個月
⁃ 自簽 $1000
⁃ 需繳交$500 堂費 (從擔保金扣除)
\*證物充公:一個黑色口罩
---
15:03:11
\#沙田裁判法院第一庭
\#鄧少雄署理主任裁判官
\#1007馬鞍山
楊(39)
控罪:
阻礙公職人員
詳情:
被控於19年10月7日在馬鞍山新港城中心四期2樓,阻礙警員9031邵良智執行公務
保釋相關事宜:
沒有新申請
案件押後至6月18日1430時沙田裁判法院第一庭應訊,以待辯方檢視文件及涉案閉路電視片段,維持原有條件保釋。
---
15:04:58
\#觀塘裁判法院第一庭
\#徐綺薇署理主任裁判官
\#新案件 \#1114將軍澳
劉(20)
控罪修訂:
阻撓在正當執行職務的警務人員
(212A)
11月14將軍澳港鐵A出口外阻差人X
✅保釋條件✅
1000現金
居報址
每周一次報到
12-6宵禁
押後7月24日1430觀塘一庭
---
15:07:10
\#西九龍裁判法院第三庭
\#羅德泉主任裁判官
李,李,李(17-20) \#提堂(\#1006旺角 管有攻擊性武器、公眾地方造成阻礙、抗拒警務人員)
新增控罪4:無牌管有無線電器具(d2)管有無線電,編號[159086171](tel:159086171)
新增控罪4,辯方要求毋須答辯,押後 23/7。辯方指未有收妥文件,亦會考慮向律政司商討以其他方式。
d1 申請宵禁令由2100-0700改為 2300-0700 獲批⭕️
辯方指工作時間不彈性,現時工作時數不足。
d2 更改報到警署 獲批⭕️
d3 申請宵禁令由2100-0700 改為 2300-0700 獲批⭕️
禁足令豁免:轉乘交通工具、來往律師樓
案件押後至 23/7 0930 西九龍法院第三庭
---
15:22:42
法庭文字直播台
\#觀塘裁判法院第一庭
\#徐綺薇署理主任裁判官
\#新案件 \#1113將軍澳
孫(17)
控:
管有物品意圖損壞財產
被控11月13將軍澳寶順路翠嶺交界管有一支噴漆,意圖損壞他人財產。
押後至2020年7月6日下午觀塘一庭再提訊
保釋照舊
---
15:32:47
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#1225尖沙嘴 \#提堂
王(48)
控罪:襲警
案情:被控於2019年12月25日,在尖沙咀海港城3樓T0317舖外,襲擊警員X
背景:2019年12月26日首次提堂時指在沒有律師陪同下落口供,口供稱是人與人之間相撞,不小心以左手撞到警員右手,並非故意襲警。
‼️答辯意向————❌不認罪
被告申請減少報到次數至一星期一次獲批✅
案件押後至7月16日九龍城裁判法院答辯,期間被告以現有條件保釋。
---
15:36:38
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#20200204油麻地 / \#20200205油麻地 \#提堂
D1 陳(23)已還押逾3個月
D2 陳(26)已還押逾3個月
D3 林(19)
D4 劉(16)
D5 莫(16)
控罪:1. 縱火(D1-D5)
2.管有攻擊性武器(D1-D5)
案情:被控於2月4日在油麻地彌敦道及甘肅街交界,無合理辯解而燒毀該處馬路;並於2月5日在油麻地某酒店內,管有10瓶盛載易燃液體的玻璃樽、3罐液化石油氣、2把鐵鎚、4罐噴漆油、2罐電油、1瓶腐蝕液體、1把士巴拿和1把剪鉗。
‼️控方表示眾被告的手機化驗完成,**於**D2**的手機中找到部分相片,其中攝有被綁著的石油氣罐,相信其可用作非法用途。
**控方申請押後以完成化驗、法證等工作以及向律政司索取法律意見
辯方質疑控方有意拖延案件,指控方於上次上庭時表示化驗工作、法律意見可於今次上庭前完成,卻於今日食言並申請押後案件。表示對7月27日能否完成答辯感到不樂觀。又指出眾被告最後若然被定罪,其刑罰或被案件押後期間的案例影響。
**‼️D1、D2保釋申請被拒,需繼續還押;並放棄8日保釋覆核權利‼️
**✅D4 D5 更改保釋條件獲批
案件押後至7月27日1430九龍城裁判法院再訊,期間眾被告以現有條件保釋。
---
15:55:59
法庭文字直播台
\#區域法院第卅六庭
\#沈小民法官 \#審訊 \[2/3\]
林(19) \#1001黃大仙
[PART 1](https://t.me/youarenotalonehk_live/5427)
[PART 2](https://t.me/youarenotalonehk_live/5447)
[PART 3](https://t.me/youarenotalonehk_live/5456)
傳召辯方證人,被告的中學同學作供
1) 案發當日16:30左右,在telegram被捕人士關注組看到涉案的片段,通過辨析片中人的身形及髮型,認出片中人就是被告
2) 於2013年認識被告,案發前2星期曾與被告見面
3) 沒有對影片進行剪輯或修改,但自己不是片主,不能確定影片之前有無被修改
4) 當日下載相關影片到手提電話,至今年5月26日將片段燒錄至光碟交予被告的代表律師
辯方證人作供完畢
\----------------------
控方反對影片呈堂,質疑其表面真確性,因為辯方證人不是片主,在下載片段前,內容有可能已被删改,但無具體指出邊部份被删改……純粹話警員作供與影片不符
法官質問控方律師:
你提出反對,咁你都要俾個理由我點解你反對先得㗎,同警員證供有矛盾呢個只係比重問題(邊個證供較可信),唔係反對呈堂理由……你仲有無實質反對理由?乜野會影響表面真確性?
控答:作供警員無確認片中嘅消防車有出現……
法官:你唔好咁同我講野喇,照你講法我一定要信咗警察先?即使同警察口供有衝突都唔代表唔可以呈堂㗎喎
後來控方律師再重覆講,影片開頭的消防車可能係key上去,因為警員作供時無確認有消防車……
辯方:現階段裁定PW1,2是否可信係好危險,片中警車編號與證人口供吻合,已經可以證明片段表面真確,而且PW2係片中認得自己。再者,辯方證人當日16:30,即被告被捕後半小時已經下載片段,造假幾乎無可能
法官對控方講:你話有做手腳,咁你有無證據證明先?法庭唔想係現階段處理證人可信性
控答:手頭上無資料,反對理由因為辯方無法證明片段表面真確
法官:咁洗唔洗開個voir dire俾你?一係你書面陳詞講你反對理由。如果你提出反對呈堂,咁你俾個理由我喇,又講唔到邊到造假,依家唔係起身提出反對就完㗎,我仲有野要做㗎……
\-----------------
休庭至15:00,下午開庭時控方撤回反對,同意影片呈堂
[下一節](https://t.me/youarenotalonehk_live/5482)
---
16:08:32
\#觀塘裁判法院第八庭
\#鍾明新裁判官
\#審訊
譚b
辯方案情:被告人於車內要求穿衣,PW1: 「頭先叫你著又唔著,依家唔俾你著」,PW1勒住被告雙手係咁chok,然後問:「頭先出面係咪好好玩」,PW1其後鬆開手,被告從左邊褲袋偷偷拿取手機,PW1見狀大叫:「佢攞電話喇」並想搶被告電話,被告狐低身想保護電話,PW1用力拉被告人,並將被告人拉到自己大脾上躺臥,手㬹不斷襲擊被告人。PW1對被告不斷講「你做咩襲警」到被告人鬆開電話方止
被告被帶到北角警署臨時羈押所,跪下並雙手合十唸經,有警員嘲諷「跪下度就啱喇,依家成面血夠哂靚仔」,坐低後,PW1向被告淋水洗面,並以紙巾為被告擦去血跡,被告要求取回染血紙巾被拒,PW1勒令被告除下外褲及內褲,指示被告穿黑衣影相,PW2曾買魚柳包及雞翅,並稱「雞翅唔岩你,都係我食啦」,PW1向被告閒話家常,指自己父親是紅,弟弟是黃,弟弟經常外出工作不負責任,後期向被告表示「你又唔係攻擊我地,又無搞破壞」被告表示「我唔係搞事既人就唔好告我」,PW1其後表示「你一陣驗完傷咪告我」,被告隨後稱「點都好,我原諒你,下次唔好咁重手」,最後握手言和
PW1 聲稱受襲的警員:
堅稱被告人的傷勢是爭扎時造成,但不知道是如何造成,估計自己的左手手指及被告傷勢是糾纏時造成,指被告人有凶暴性及情緒,不敢上前慰問
PW2 拍攝警員:
指稱自己在車內整理拍攝錄像時,突然聽到後方傳來PW1「啊」一聲,發現PW1與被告糾纏,嘗試幫手制服,將被告人身體壓向PW1大脾,並為其鎖上手扣,最後與58602一同合力才能將被告制服,對被告人在北角警署內的情況大部分都不知道
PW3 調查警員:
PW3呈交的案情為—PW1為被告人配戴安全帶時受襲,PW3指自己沒有印象有親身聽取PW1講述案情,不知道案情被修訂才呈交法庭
作供完畢
6月1日 0930 續審 將傳召PW4 醫生證人
---
16:22:19
\#粉嶺裁判法院第二庭
\#蘇文隆署理主任裁判官
\#0727元朗
\#審訊 \#續審
[Part 1](https://t.me/youarenotalonehk_live/5448)
劉 (23)
控罪:阻差辦公
案情:被指阻礙PC15871陳沛然執行職務 (推進防線)
傳召PW4。
控方詢問事發前和事發的情況。
PW4:約7pm到達安樂路東堤街交界設立防線準備驅散道路上示威人士。他在隊伍中無特定位置,而是在隊伍四周行走觀察環境。PW2發出(警員片段內) 第一次警告後小隊向前推進,前方有記者和數名黑衫人士,其中一位為被告。被告緩慢後退,PW2用大聲公叫他行開,被告沒有離開。因小隊推進速度比被告後退速度快,被告距離漸近。被告到了PW3前方,他高舉雙手,小隊繼續前行,於是被告陷入防線內。PW4對於被告接觸前線的情況不太有印象,但指他的確有影響隊伍推進及造成阻礙和混亂。被告進入防線後手舞足蹈並呼叫,致PW4把他壓低和按在地上作拘捕。PW4指主要是他一個人制服被告。
辯方盤問PW4。
盤問大綱
1:事發至今已多月,PW4在7月就此案件落口供,口供的準確度會比現在的作供高。事發當時的情況應有寫在記事冊,口供如和記事冊有出入應以記事冊為準。PW4指原則上同意,口供和記事冊沒記錄的細節可在今天的作供補充。
2:被告裝備,PW4同意事項:
\-被告裝備較少,手上沒武器或防具,只有證件,他被拘捕時手上仍然持有。
辯方把被告的社工證給PW4看 ,詢問PW4當時是否看到這張證件。PW4看了P12片段,同意片段中的證件和實物相似。
3:再播放P12 (警員錄影片段)
辯方用片段向PW4指出小隊在進行了一次驅散後,示威者後退,被告才出現在防線前。PW4亦不知道被告在過程中是否一直在原地,也不知道他什麼時候出現及停留了多久。
4:在場所有警員均沒有向被告提出問題 (他的目的和身份),PW4沒留意被告在防線說話。
5:詢問PW4在隊伍的位置。PW4在事發時和PW3有一段距離,視野被其他小隊成員阻礙,所以不知道以下情況:
\-被告有否接觸PW3的長盾
\-是否有警員從側面疑似推撞被告,致被告倒向PW3
6:詢問被告進入防線後的情況。被告陷入防線一刻,PW4並不是最接近他,PW4前進作出制服。PW4同意:
\-當時有急切需要制服被告
\-其他警員亦會有這想法
PW4不同意:
\-被告進入防線後被多名警員推撞
PW4不肯定:
\-被告被制服後是否想舉高雙手
\-被告陷入防線後是否有其他警員接觸他
播放B2,PW4同意是事發時的片段。
辯方最後向PW4總結案情。括號內為PW4答覆。
\-被告曾向防線高舉雙手,口中唸唸有詞 (不肯定)
\-被告在防線前的動機不明 (不肯定,但判斷他是刻意阻礙推進)
\-被告一直後退,沒有停下 (不肯定)
\-被告被側面持圓盾的警員推進防線 (不肯定)
\-被告沒有主動推警方防線 (不肯定)
\-被告陷入防線後想高舉雙手 (不肯定)
\-被告一直手持社工證 (作拘捕後才發現)
\-不止一名警員想制服被告 (不確定其他警員是否有此意圖)
PW4作供完畢。
休庭待辯方結案陳詞。
辯方陳詞撮要:
\-警員誤會被告在防線前的意圖
\-多個片段顯示被告是被其他警員推進防線而不是主動推撞
\-PW2 PW4 身處位置皆看不到被告陷入防線一刻的情況,他們的供詞亦同意此事實
\-PW3被身上的裝備阻擋視野以致看不見被告是被身邊警員推進他的長盾,PW3供詞亦承認他看不到此接觸
\-PW4稱被告進入防線後製造混亂,PW2 PW3先前供詞已承認沒留意被告在防線內的行為,因已交給PW4處理。
\-B2片段顯示被告進入防線後被多名警員推撞,混亂情況並非由被告主動造成
\-所以,被告可構成控罪要素的行為只有「PW2叫他散開但他沒有離開」
\-片段中由被告出現在馬路中心到被捕只有一分三十秒,這時間是否已超出阻礙的門檻?
\-被告行為和裝備顯示他不是和身後的示威人士為一夥,亦沒衝擊意圖,證人供詞亦同意他行為較和平
\-被告主動表露身份,無意犯罪,他想和警方溝通,而警方沒給予機會
\-控方提供終院案例,辯方指出判詞內提及某些情況並不構成阻差辦公,其中3.「說服警方有關他的錯誤」及5.「緊急事態」適用於本案
\-P12片段顯示警方在第一輪推進後致多人跌倒,兩名示威者倒地被小隊包圍。因此被告想提醒警方減慢推進速度以免前方人群受傷或導致人踩人。辯方認為此舉符合上述的3.和5.,因此被告行為不構成阻差辦公。
押後至6月17日1430粉嶺裁判法院第二庭判決。
---
16:22:33
\#區域法院第卅六庭
\#沈小民法官 \#審訊 \[2/3\]
林(19) \#1001黃大仙
[PART 1](https://t.me/youarenotalonehk_live/5427)
[PART 2](https://t.me/youarenotalonehk_live/5447)
[PART 3](https://t.me/youarenotalonehk_live/5456)
[PART 4](https://t.me/youarenotalonehk_live/5479)
辯方提交餘下證據,當中包括2份被告的醫學報告,撮要如下:
1) 於2019年10月1日19:05,在伊利沙伯醫院鄧醫生撰寫的報告。當時被告完全清醒,腦腔無出血,無骨折,頸部觸痛,右頭頂有3cm裂傷,右前臂15cm擦傷,雙腳膝頭擦傷……
2) 腦神經科報告:被告報稱被硬物打後腦,兩邊膊頭腫痛,視力受影響。醫生診斷GCS 15分,頭頂裂傷增至4cm,視覺神經正常,縫針處理後於10月2日出院
\---------------
辯方提交6位品格證人的書面陳述,他們大部份是認識被告多年的老師。指出被告有禮貌、心地善良、尊敬師長、受同學愛戴
6月1日10:00繼續
被告以原有條件保釋至審訊完結
下一節,[控方陳詞](https://t.me/youarenotalonehk_live/5511)
---
17:03:11
法庭文字直播台
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#1201尖沙嘴 \#提堂 \#新案件
D1林(22)
D2 黃(23)
控罪: 1. 管有物品意圖損壞或摧毀財產(D1)
2.適合用作非法用途的工具(D2)
3.管有物品意圖損壞或摧毀財產(D2)
案情:於2019年12月1日在尖沙嘴梳士巴利道管有1罐噴漆,意圖損壞或摧毀他人財產。(控罪1)
於2019年12月1日在尖沙嘴梳士巴利道管有適合作非法用途的工具,即1包索帶。(控罪2)
於2019年12月1日在尖沙嘴梳士巴利道管有1個打火機和兩罐白電油,意圖損壞或摧毀他人的財產。(控罪3)
D1、D2申請保釋獲批✅
條件如下:
現金1000
交出旅遊證件及不得離港
在報稱地址居住
宵禁令0000-0600
一星期報到一次
案件押後至7月27日1430九龍城裁判法院再訊。
---
17:50:19
法庭文字直播台
\#九龍城裁判法院第一庭
\#嚴舜儀署理主任裁判官
\#20200528旺角 \#提堂 \#新案件
D1 麥
D2李
控罪: 1.在公眾地方造成阻礙(D1)
2.非法集結(D2)
3.襲警(D2)(即警員X)
案情:於2020年5月28日在亞皆老街與彌敦道交界地上放置1個金屬罐、1塊磚頭和2個垃圾袋,這些物品或東西對在公眾地方的人或車輛造成阻礙、不便或危害。(控罪1)
於2020年5月28日在亞皆老街與彌敦道交界與其他不知名人士參與非法集結。(控罪2)
於2020年5月28日在亞皆老街與彌敦道交界襲擊正在正當執行職務的警務人員,即警員X。(控罪3)
D1、D2保釋申請獲批✅
條件如下:
現金2000元(D1) 現金10000元(D2)
在獲釋前交出所有旅遊證件
在報稱地址上居住
一星期報到一次
宵禁 0100-0700(D1) 2200-0600(D2)
須遵守禁足令
‼️**D2表示會就第三控罪的匿名令作出反對
**‼️**D2作出兩項投訴:
1.警員在扣留被告時指示被告趕快認罪,指這樣可以「快啲離開警署」。
2.口供上的問答並無真正發生。被告在「抄寫」口供紙時,警員讀出一個捏造的對話,並讓被告默寫出來(即被告與警員並無發生該對話)。**
控方申請押後案件以進行鑑定、擷取閉路電視片段及其他調查工作
案件押後至7月24日1430九龍城裁判法院再訊。
---
22:24:46
法庭文字直播台
\#東區裁判法院第五庭
\#香淑嫻裁判官 \#企圖襲警
\#1112中環
澳洲籍手足(32) \#判刑
辯方律師開始時指出被告母親病情嚴重,曾於瑪麗醫院留院。有兩份醫生證明其病情不輕。
及後,辯方律師向法官解釋根據戒毒所報告及被告驗尿報告,證明被告已經有三年沒有接觸藥物,證明案情並非受藥物影響。
辯方求情是指,被告與其母親關係良好,被告亦希望可盡早照顧病母,因此向法庭求情。此外,事件已經對被告及其家人影響超過半年,當中令他們感到焦慮。因此可見案件己給予被告教訓。
除外,辯方亦指根據《警隊條例》下,最高刑罰為監禁六個月,但案情中,被告並非干犯最嚴重的罪行,故懇請法官判刑時不要以最高刑罰為量刑起點。
法官休庭5分鐘考慮。
法官宣布判詞前,先引述條例內容。
判詞:
1\. 針對被告背景以及過往犯罪記錄,認為與今次案情關係輕微。
2.針對今次案情是否屬於嚴重:法官同意今次案情嚴重,被告從高處投擲轉頭,同時橋下有大量警員。故今次案件屬於潛在傷害很大。
3\. 其他刑罰不適用於本案。
4\. 法官以4.5月監禁為量刑起點,考慮被告認罪,扣減三份一刑期
❗️❗️判處被告三個月即時監禁❗️❗️
(被告父母今天都有聽取裁決,被告被帶走時依依不捨(喊住)回看觀眾席及家人)
(admin 按:感激臨時直播員幫手記錄♂️)
---
| 18.241958 | 279 | 0.726443 | yue_Hant | 0.834721 |