var convert = require('./convert'),
func = convert('mean', require('../mean'), require('./_falseOptions'));
func.placeholder = require('./placeholder');
module.exports = func;
|
{
"pile_set_name": "Github"
}
|
LTC4151 High Voltage I2C Current and Voltage Monitor
Required properties:
- compatible: Must be "lltc,ltc4151"
- reg: I2C address
Optional properties:
- shunt-resistor-micro-ohms
Shunt resistor value in micro-Ohms
Defaults to <1000> if unset.
Example:
ltc4151@6e {
compatible = "lltc,ltc4151";
reg = <0x6e>;
shunt-resistor-micro-ohms = <1500>;
};
|
{
"pile_set_name": "Github"
}
|
The latest release of `pdf-lib` (`v1.0.0`) includes several breaking API changes. If you have code written for older versions of `pdf-lib` (`v0.x.x`), you can use the following instructions to help migrate your code to v1.0.0.
Note that many of the API methods are now asynchronous and return promises, so you'll need to `await` on them (or use promise chaining: `.then(res => ...)`).
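The two styles are interchangeable; here is a minimal sketch using a stand-in `fakeLoad` function (not a real `pdf-lib` call) to illustrate awaiting vs. chaining:
```js
// `fakeLoad` stands in for any now-async pdf-lib method (e.g. PDFDocument.load).
const fakeLoad = (bytes) => Promise.resolve({ pageCount: bytes.length });

// Style 1: async/await (requires an enclosing async function)
async function withAwait() {
  const doc = await fakeLoad([1, 2, 3]);
  return doc.pageCount;
}

// Style 2: promise chaining
function withThen() {
  return fakeLoad([1, 2, 3]).then((doc) => doc.pageCount);
}
```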
- Rename _`PDFDocumentFactory`_ to **`PDFDocument`**. `PDFDocument.create` and `PDFDocument.load` are now async (they return promises), so you'll need to `await` on them.
* To create a new PDF document:
```js
const pdfDoc = await PDFDocument.create();
```
* To retrieve and load a PDF where `pdfUrl` points to the PDF to be loaded:
```js
const pdfBuffer = await fetch(pdfUrl).then((res) => res.arrayBuffer());
const pdfDoc = await PDFDocument.load(pdfBuffer);
```
- The purpose of making these methods asynchronous is to avoid blocking the event loop (especially for browser-based usage). If you aren't running this code client-side and are not concerned about blocking the event loop, you can speed up parsing times with:
```js
PDFDocument.load(..., { parseSpeed: ParseSpeeds.Fastest })
```
You can do a similar thing for save:
```js
pdfDoc.save({ objectsPerTick: Infinity });
```
- To draw content on a page in old versions of `pdf-lib`, you needed to create a content stream, invoke some operators, register the content stream, and add it to the document. Something like the following:
```js
const contentStream = pdfDoc.createContentStream(
drawText(
timesRomanFont.encodeText('Creating PDFs in JavaScript is awesome!'),
{
x: 50,
y: 450,
size: 15,
font: 'TimesRoman',
colorRgb: [0, 0.53, 0.71],
},
),
);
page.addContentStreams(pdfDoc.register(contentStream));
```
However, in new versions of `pdf-lib`, this is much simpler. You simply invoke drawing methods on the page, such as [`PDFPage.drawText`](https://pdf-lib.js.org/docs/api/classes/pdfpage#drawtext), [`PDFPage.drawImage`](https://pdf-lib.js.org/docs/api/classes/pdfpage#drawimage), [`PDFPage.drawRectangle`](https://pdf-lib.js.org/docs/api/classes/pdfpage#drawrectangle), or [`PDFPage.drawSvgPath`](https://pdf-lib.js.org/docs/api/classes/pdfpage#drawsvgpath). So the above example becomes:
```js
page.drawText('Creating PDFs in JavaScript is awesome!', {
x: 50,
y: 450,
size: 15,
font: timesRomanFont,
color: rgb(0, 0.53, 0.71),
});
```
Please see the [Usage Examples](#usage-examples) for more in depth examples of drawing content on a page in the new versions of `pdf-lib`. You may also find the [Complete Examples](#complete-examples) to be a useful reference.
- Change _`getMaybe`_ function calls to **`get`** calls. If a property doesn't exist, then `undefined` will be returned. Note, however, that PDF name strings will need to be wrapped in `PDFName.of(...)`. For example, to look up the AcroForm object you'll need to change _`pdfDoc.catalog.getMaybe('AcroForm')`_ to **`pdfDoc.catalog.get(PDFName.of('AcroForm'))`**.
```js
const acroForm = await pdfDoc.context.lookup(
pdfDoc.catalog.get(PDFName.of('AcroForm')),
);
```
> v0.x.x converted the strings passed to `get` and `getMaybe` to `PDFName` objects, but v1.0.0 does not do this conversion for you. So you must always pass actual `PDFName` objects instead of strings.
- Finding the AcroForm field references now becomes:
```js
const acroFieldRefs = await pdfDoc.context.lookup(
acroForm.get(PDFName.of('Fields')),
);
```
- To add a new page replace _`pdfDoc.createPage([width, height])`_ with **`pdfDoc.addPage([width, height])`**
```js
const page = pdfDoc.addPage([500, 750]);
```
or simply:
```js
const page = pdfDoc.addPage();
```
* To get the size of the page:
```js
const { width, height } = page.getSize();
page.getWidth();
page.getHeight();
```
* To add images replace _`pdfDoc.embedPNG`_ with **`pdfDoc.embedPng`** and _`pdfDoc.embedJPG`_ with **`pdfDoc.embedJpg`**
* The `pdfDoc.embedPng` and `pdfDoc.embedJpg` methods now return `PDFImage` objects which have the width and height of the image as properties. You can also scale down the width and height by a constant factor using the `PDFImage.scale` method:
```js
const aBigImage = await pdfDoc.embedPng(aBigImageBytes);
const { width, height } = aBigImage.scale(0.25);
```
So, `const [image, dims] = pdfDoc.embedJPG(mediaBuffer)` becomes:
```js
const image = await pdfDoc.embedJpg(mediaBuffer);
// image.width, image.height can be used instead of the dims object.
```
* To save the PDF replace _`PDFDocumentWriter.saveToBytes(pdfDoc)`_ with **`pdfDoc.save()`**
```js
const pdfDocBytes = await pdfDoc.save();
```
* Displaying the saved PDF now becomes:
```js
const pdfUrl = URL.createObjectURL(
new Blob([await pdfDoc.save()], { type: 'application/pdf' }),
);
window.open(pdfUrl, '_blank');
```
(note: `URL.revokeObjectURL` should be called later to free up memory)
* To get the PDF page count:
```js
pdfDoc.getPages().length;
```
* To copy pages from one document to another you must now call **`destPdf.copyPages(srcPdf, srcPageIndexesArray)`** to copy pages. You can see an example of this in the [Copy Pages](#copy-pages) usage example. Admittedly, this API is slightly less ergonomic than what exists in v0.x.x, but it has two key benefits:
1. It avoids making `PDFDocument.addPage` and `PDFDocument.insertPage` async.
2. When copying multiple pages from a source document, the resulting merged document should have a smaller file size. The page copying API in v0.x.x was intended for copying just one or two pages; when copying large numbers of pages, it could create redundant objects. The new API eliminates that redundancy.
```js
const fs = require('fs');

async function mergePdfs(pdfsToMerge) {
const mergedPdf = await PDFDocument.create();
for (const pdfCopyDoc of pdfsToMerge) {
const pdfBytes = fs.readFileSync(pdfCopyDoc);
const pdf = await PDFDocument.load(pdfBytes);
const copiedPages = await mergedPdf.copyPages(pdf, pdf.getPageIndices());
copiedPages.forEach((page) => {
mergedPdf.addPage(page);
});
}
const mergedPdfFile = await mergedPdf.save();
return mergedPdfFile;
}
```
* If required, you can retrieve the CropBox or MediaBox of a page like so:
```js
const cropBox = page.node.CropBox() || page.node.MediaBox();
```
|
{
"pile_set_name": "Github"
}
|
/* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef DPU_DBG_H_
#define DPU_DBG_H_
#include <stdarg.h>
#include <linux/debugfs.h>
#include <linux/list.h>
enum dpu_dbg_dump_flag {
DPU_DBG_DUMP_IN_LOG = BIT(0),
DPU_DBG_DUMP_IN_MEM = BIT(1),
};
#if defined(CONFIG_DEBUG_FS)
/**
* dpu_dbg_init_dbg_buses - initialize debug bus dumping support for the chipset
* @hwversion: Chipset revision
*/
void dpu_dbg_init_dbg_buses(u32 hwversion);
/**
* dpu_dbg_init - initialize global dpu debug facilities: regdump
* @dev: device handle
* Returns: 0 or -ERROR
*/
int dpu_dbg_init(struct device *dev);
/**
* dpu_dbg_debugfs_register - register entries at the given debugfs dir
* @debugfs_root: debugfs root in which to create dpu debug entries
* Returns: 0 or -ERROR
*/
int dpu_dbg_debugfs_register(struct dentry *debugfs_root);
/**
* dpu_dbg_destroy - destroy the global dpu debug facilities
* Returns: none
*/
void dpu_dbg_destroy(void);
/**
* dpu_dbg_dump - trigger dumping of all dpu_dbg facilities
* @queue_work: whether to queue the dumping work to the work_struct
* @name: string indicating origin of dump
* @dump_dbgbus: dump the dpu debug bus
* @dump_vbif_rt: dump the vbif rt bus
* Returns: none
*/
void dpu_dbg_dump(bool queue_work, const char *name, bool dump_dbgbus_dpu,
bool dump_dbgbus_vbif_rt);
/**
* dpu_dbg_set_dpu_top_offset - set the target specific offset from mdss base
* address of the top registers. Used for accessing debug bus controls.
* @blk_off: offset from mdss base of the top block
*/
void dpu_dbg_set_dpu_top_offset(u32 blk_off);
#else
static inline void dpu_dbg_init_dbg_buses(u32 hwversion)
{
}
static inline int dpu_dbg_init(struct device *dev)
{
return 0;
}
static inline int dpu_dbg_debugfs_register(struct dentry *debugfs_root)
{
return 0;
}
static inline void dpu_dbg_destroy(void)
{
}
static inline void dpu_dbg_dump(bool queue_work, const char *name,
bool dump_dbgbus_dpu, bool dump_dbgbus_vbif_rt)
{
}
static inline void dpu_dbg_set_dpu_top_offset(u32 blk_off)
{
}
#endif /* defined(CONFIG_DEBUG_FS) */
#endif /* DPU_DBG_H_ */
|
{
"pile_set_name": "Github"
}
|
local keywordHandler = KeywordHandler:new()
local npcHandler = NpcHandler:new(keywordHandler)
NpcSystem.parseParameters(npcHandler)
function onCreatureAppear(cid)
npcHandler:onCreatureAppear(cid)
end
function onCreatureDisappear(cid)
npcHandler:onCreatureDisappear(cid)
end
function onCreatureSay(cid, type, msg)
npcHandler:onCreatureSay(cid, type, msg)
end
function onThink()
npcHandler:onThink()
end
local vocations = {
['sorcerer'] = 0,
['druid'] = 1,
['paladin'] = 2,
['knight'] = {
['club'] = 3,
['axe'] = 4,
['sword'] = 5,
}
}
local knightChoice = {}
local function greetCallback(cid)
knightChoice[cid] = nil
return true
end
local voices = {
{ text = "Not enough purple nightshade ... not enough liquid silver. *sigh*" },
{ text = "You think the full moon is a romantic affair? Think again!" },
{ text = "This place isn't safe. You should leave this island." }
}
npcHandler:addModule(VoiceModule:new(voices))
function creatureSayCallback(cid, type, msg)
if not npcHandler:isFocused(cid) then
if msg == "hi" or msg == "hello" then
npcHandler:say("Greetings, visitor. I wonder what may lead you to this {dangerous} place.", cid)
npcHandler:addFocus(cid)
else
return false
end
end
local player = Player(cid)
if not player then
return false
end
if msgcontains(msg, 'tokens') then
elseif isInArray({'dangerous', 'beasts'}, msg:lower()) then
npcHandler:say("So you don't know it yet. This island, Grimvale, is affected by were-sickness. Many {pitiful}, who are stricken with the curse, dwell in the {tunnels} and caverns underneath the village and the nearby hurst.", cid)
elseif msgcontains(msg, 'pitiful') then
npcHandler:say("Yes, pitiful. For they are savage beasts now who regularly come up from below to attack the village. But once they were inhabitants of Grimvale, before they {changed}.", cid)
elseif msgcontains(msg, 'changed') then
npcHandler:say("Through a bite or even a scratch, you may be infected with the were-sickness. If that happens, there is little {hope} - until the next full moon you'll change into a were-creature, depending on the animal that hurt you.", cid)
elseif msgcontains(msg, 'hope') then
npcHandler:say("There is a plant, the purple nightshade. It blossoms exclusively in the light of the full moon and only underground, where the full moon's light is falling through fissures in the surface. Only this plant's blossoms are able to defeat the {were-sickness}.", cid)
elseif isInArray({'were-sickness', 'curse'}, msg:lower()) then
npcHandler:say({"It transforms peaceful villagers into savage beasts. We're not sure how this curse found the way into our small village. But one day it began. At first it befell just a few people. ...",
"In a full moon night they changed into bears and wolves, and tore apart their unsuspecting relatives while they were asleep. ...",
"Those merely wounded, first thought they were lucky. But then we realised they were changing, too. Later, others assumed the forms of badgers and boars also. ...",
"But that does not mean they were any less wild or dangerous than the others."}, cid)
elseif msgcontains(msg, 'tunnels') then
npcHandler:say({"We are not sure what they are doing down there. We're glad if they stay in the caverns and leave us alone. Only at full moon do they come up and threaten the island's surface and village. ...",
"I, however, have a {hunch} as to why they dwell so deep under the earth."}, cid)
elseif msgcontains(msg, 'hunch') then
npcHandler:say({"There are old legends about a subterranean temple that was once built in this area. Supposedly many {artefacts} are still hidden down there. ...",
"I don't have the time to tell you the entire tale, but there is a book downstairs in which you may read the whole story."}, cid)
elseif msgcontains(msg, 'artefacts') then
npcHandler:say("Yes, the story goes that there are ancient artefacts still hidden in the temple ruins, such as helmets in the form of wolven heads, for example. It is said that moonlight crystals are needed to enchant these artefacts.", cid)
elseif msgcontains(msg, 'moon') then
npcHandler:say({"Every month around the 13th, the single Tibian moon will be fully visible to us. That's when the curse hits us hardest. ...",
"The two days around the 13th, the 12th and the 14th, are considered 'Harvest Moon', those are the best to gather {nightshade}. However, only after it has reached its apex on the 13th, the curse strengthens. ...",
"We do not know what happens down there in those tunnels around that time but there is a presence there, we all feel - yet cannot quite fathom. ...",
"At full moon, humans transform into wild beasts: wolves, boars, bears and others. Some call it the {curse} of the Full Moon, others think it is a kind of sickness. ...",
"During this time, we try to not leave the house, we shut the windows and hope it will pass. The curse will weaken a bit after that but it returns. Every month."}, cid)
elseif msgcontains(msg, 'nightshade') then
npcHandler:say("Three of these blossoms should suffice to heal some afflicted persons. But if you bring more I'd be grateful, of course.", cid)
elseif msgcontains(msg, 'name') then
npcHandler:say("My name is Maeryn.", cid)
elseif msgcontains(msg, 'maeryn') then
npcHandler:say("Yes, that's me.", cid)
elseif msgcontains(msg, 'time') then
npcHandler:say("It's exactly " .. getFormattedWorldTime() .. ".", cid)
elseif msgcontains(msg, 'job') then
npcHandler:say("I'm the protector of this little village. A bit of a self-proclaimed function, I admit, but someone has to watch over {Grimvale}. It is a {dangerous} place.", cid)
elseif msgcontains(msg, 'grimvale') then
npcHandler:say("The small island you are standing on. For a long time it was a peaceful and placid place. But lately it has become more {dangerous}.", cid)
elseif msgcontains(msg, 'owin') then
npcHandler:say("He's an experienced hunter and knows much about the woods, the animals that dwell there, and about the {werewolves}. He's devoted himself to finding out everything there is to know about the {Curse}.", cid)
elseif msgcontains(msg, 'werewolves') then
npcHandler:say("Yes, my friend, werewolves. They dwell here on {Grimvale}, threatening our life. The were-sickness transforms peaceful villagers into savage beasts. We're not sure how this curse found its way into our small village. But undoubtedly it did.", cid)
elseif msgcontains(msg, 'gladys') then
npcHandler:say("She's an old druid. She's been living here on {Grimvale} since she was a little girl, just like me. She's very interested in were-creature body parts. If you find any, I'm sure she will love to trade with you.", cid)
elseif msgcontains(msg, 'cornell') then
npcHandler:say("He's basically a ferryman nowadays, but I remember when he was our village's leading fisherman. He offers a ferry service between Grimvale and Edron. You must have met him - he sailed you here.", cid)
elseif msgcontains(msg, 'werewolf helmet') then
npcHandler:say("You brought the wolven helmet, as I see. Do you want to change something?", cid)
npcHandler.topic[cid] = 1
elseif msgcontains(msg, 'yes') then
if npcHandler.topic[cid] == 1 then
npcHandler:say("So, which profession would you give preference to when enchanting the helmet: {knight}, {sorcerer}, {druid} or {paladin}?", cid)
npcHandler.topic[cid] = 2
end
elseif isInArray({'knight', 'sorcerer', 'druid', 'paladin'}, msg:lower()) and npcHandler.topic[cid] == 2 then
local helmet = msg:lower()
if not vocations[helmet] then
return false
end
if msg:lower() == 'knight' then
npcHandler:say("And what would be your preferred weapon? {Club}, {axe} or {sword}", cid)
knightChoice[cid] = helmet
npcHandler.topic[cid] = 3
end
if npcHandler.topic[cid] == 2 then
--if (Set storage if player can enchant helmet(need Grim Vale quest)) then
player:setStorageValue(Storage.Grimvale.WereHelmetEnchant, vocations[helmet])
npcHandler:say("So this is your choice. If you want to change it, you will have to come to me again.", cid)
--else
--npcHandler:say("Message when player do not have quest.", cid)
--end
npcHandler.topic[cid] = 0
end
elseif isInArray({'axe', 'club', 'sword'}, msg:lower()) and npcHandler.topic[cid] == 3 then
local weapontype = msg:lower()
if not vocations[knightChoice[cid]][weapontype] then
return false
else
--if (Set storage if player can enchant helmet(need Grim Vale quest)) then
player:setStorageValue(Storage.Grimvale.WereHelmetEnchant, vocations[knightChoice[cid]][weapontype])
npcHandler:say("So this is your choice. If you want to change it, you will have to come to me again.", cid)
--else
--npcHandler:say("Message when player do not have quest.", cid)
--end
knightChoice[cid] = nil
npcHandler.topic[cid] = 0
end
elseif msgcontains(msg, 'bye') then
npcHandler:say("Farewell, then.", cid)
npcHandler:releaseFocus(cid)
end
end
npcHandler:setCallback(CALLBACK_GREET, greetCallback)
npcHandler:setCallback(CALLBACK_MESSAGE_DEFAULT, creatureSayCallback)
|
{
"pile_set_name": "Github"
}
|
/*
Copyright The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
"context"
v1beta1 "k8s.io/api/extensions/v1beta1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
testing "k8s.io/client-go/testing"
)
// FakeNetworkPolicies implements NetworkPolicyInterface
type FakeNetworkPolicies struct {
Fake *FakeExtensionsV1beta1
ns string
}
var networkpoliciesResource = schema.GroupVersionResource{Group: "extensions", Version: "v1beta1", Resource: "networkpolicies"}
var networkpoliciesKind = schema.GroupVersionKind{Group: "extensions", Version: "v1beta1", Kind: "NetworkPolicy"}
// Get takes name of the networkPolicy, and returns the corresponding networkPolicy object, and an error if there is any.
func (c *FakeNetworkPolicies) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.NetworkPolicy, err error) {
obj, err := c.Fake.
Invokes(testing.NewGetAction(networkpoliciesResource, c.ns, name), &v1beta1.NetworkPolicy{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.NetworkPolicy), err
}
// List takes label and field selectors, and returns the list of NetworkPolicies that match those selectors.
func (c *FakeNetworkPolicies) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.NetworkPolicyList, err error) {
obj, err := c.Fake.
Invokes(testing.NewListAction(networkpoliciesResource, networkpoliciesKind, c.ns, opts), &v1beta1.NetworkPolicyList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1beta1.NetworkPolicyList{ListMeta: obj.(*v1beta1.NetworkPolicyList).ListMeta}
for _, item := range obj.(*v1beta1.NetworkPolicyList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested networkPolicies.
func (c *FakeNetworkPolicies) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewWatchAction(networkpoliciesResource, c.ns, opts))
}
// Create takes the representation of a networkPolicy and creates it. Returns the server's representation of the networkPolicy, and an error, if there is any.
func (c *FakeNetworkPolicies) Create(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.CreateOptions) (result *v1beta1.NetworkPolicy, err error) {
obj, err := c.Fake.
Invokes(testing.NewCreateAction(networkpoliciesResource, c.ns, networkPolicy), &v1beta1.NetworkPolicy{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.NetworkPolicy), err
}
// Update takes the representation of a networkPolicy and updates it. Returns the server's representation of the networkPolicy, and an error, if there is any.
func (c *FakeNetworkPolicies) Update(ctx context.Context, networkPolicy *v1beta1.NetworkPolicy, opts v1.UpdateOptions) (result *v1beta1.NetworkPolicy, err error) {
obj, err := c.Fake.
Invokes(testing.NewUpdateAction(networkpoliciesResource, c.ns, networkPolicy), &v1beta1.NetworkPolicy{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.NetworkPolicy), err
}
// Delete takes name of the networkPolicy and deletes it. Returns an error if one occurs.
func (c *FakeNetworkPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewDeleteAction(networkpoliciesResource, c.ns, name), &v1beta1.NetworkPolicy{})
return err
}
// DeleteCollection deletes a collection of objects.
func (c *FakeNetworkPolicies) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
action := testing.NewDeleteCollectionAction(networkpoliciesResource, c.ns, listOpts)
_, err := c.Fake.Invokes(action, &v1beta1.NetworkPolicyList{})
return err
}
// Patch applies the patch and returns the patched networkPolicy.
func (c *FakeNetworkPolicies) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.NetworkPolicy, err error) {
obj, err := c.Fake.
Invokes(testing.NewPatchSubresourceAction(networkpoliciesResource, c.ns, name, pt, data, subresources...), &v1beta1.NetworkPolicy{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.NetworkPolicy), err
}
|
{
"pile_set_name": "Github"
}
|
<?php
/* Icinga Web 2 | (c) 2013 Icinga Development Team | GPLv2+ */
namespace Icinga\Module\Monitoring\Backend\Ido\Query;
use Zend_Db_Select;
use Icinga\Data\Filter\Filter;
/**
* Query for contacts
*/
class ContactQuery extends IdoQuery
{
protected $columnMap = [
'contacts' => [
'contact_id' => 'c.contact_id',
'contact' => 'c.contact',
'contact_name' => 'c.contact_name',
'contact_alias' => 'c.contact_alias',
'contact_email' => 'c.contact_email',
'contact_pager' => 'c.contact_pager',
'contact_object_id' => 'c.contact_object_id',
'contact_has_host_notfications' => 'c.contact_has_host_notfications',
'contact_has_service_notfications' => 'c.contact_has_service_notfications',
'contact_can_submit_commands' => 'c.contact_can_submit_commands',
'contact_notify_service_recovery' => 'c.contact_notify_service_recovery',
'contact_notify_service_warning' => 'c.contact_notify_service_warning',
'contact_notify_service_critical' => 'c.contact_notify_service_critical',
'contact_notify_service_unknown' => 'c.contact_notify_service_unknown',
'contact_notify_service_flapping' => 'c.contact_notify_service_flapping',
'contact_notify_service_downtime' => 'c.contact_notify_service_downtime',
'contact_notify_host_recovery' => 'c.contact_notify_host_recovery',
'contact_notify_host_down' => 'c.contact_notify_host_down',
'contact_notify_host_unreachable' => 'c.contact_notify_host_unreachable',
'contact_notify_host_flapping' => 'c.contact_notify_host_flapping',
'contact_notify_host_downtime' => 'c.contact_notify_host_downtime',
'contact_notify_host_timeperiod' => 'c.contact_notify_host_timeperiod',
'contact_notify_service_timeperiod' => 'c.contact_notify_service_timeperiod'
]
];
/** @var Zend_Db_Select The union */
protected $contactQuery;
/** @var IdoQuery[] Subqueries used for the contact query */
protected $subQueries = [];
public function allowsCustomVars()
{
foreach ($this->subQueries as $query) {
if (! $query->allowsCustomVars()) {
return false;
}
}
return true;
}
public function addFilter(Filter $filter)
{
$strangers = array_diff(
$filter->listFilteredColumns(),
array_keys($this->columnMap['contacts'])
);
if (! empty($strangers)) {
$this->transformToUnion();
}
foreach ($this->subQueries as $sub) {
$sub->applyFilter(clone $filter);
}
return $this;
}
protected function joinBaseTables()
{
$this->contactQuery = $this->createSubQuery('Hostcontact', array_keys($this->columnMap['contacts']));
$this->contactQuery->setIsSubQuery();
$this->subQueries[] = $this->contactQuery;
$this->select->from(
['c' => $this->contactQuery],
[]
);
$this->joinedVirtualTables['contacts'] = true;
}
public function order($columnOrAlias, $dir = null)
{
foreach ($this->subQueries as $sub) {
$sub->requireColumn($columnOrAlias);
}
return parent::order($columnOrAlias, $dir);
}
public function where($condition, $value = null)
{
$this->requireColumn($condition);
foreach ($this->subQueries as $sub) {
$sub->where($condition, $value);
}
return $this;
}
public function transformToUnion()
{
$this->contactQuery = $this->db->select();
$this->select->reset();
$this->subQueries = [];
$this->select->distinct()->from(
['c' => $this->contactQuery],
[]
);
$hosts = $this->createSubQuery('Hostcontact', array_keys($this->columnMap['contacts']));
$this->subQueries[] = $hosts;
$this->contactQuery->union([$hosts], Zend_Db_Select::SQL_UNION_ALL);
$services = $this->createSubQuery('Servicecontact', array_keys($this->columnMap['contacts']));
$this->subQueries[] = $services;
$this->contactQuery->union([$services], Zend_Db_Select::SQL_UNION_ALL);
}
}
|
{
"pile_set_name": "Github"
}
|
using FluentAssertions;
using NSubstitute;
using TestStack.Seleno.PageObjects.Controls;
namespace TestStack.Seleno.Tests.PageObjects.Actions.Controls
{
class When_getting_whether_checkBox_is_ticked : HtmlControlSpecificationFor<CheckBox, bool>
{
private readonly bool _result;
public void Given_the_checkbox_is_not_ticked() { }
public When_getting_whether_checkBox_is_ticked() : base(x => x.Exists)
{
_result = SUT.Checked;
}
public void Then_control_should_execute_relevant_script_to_verify_existence_of_checked_attribute()
{
Executor
.Received()
.ScriptAndReturn<object>("$('#Exists').attr('checked')");
}
public void AndThen_it_should_return_false()
{
_result.Should().BeFalse();
}
}
}
|
{
"pile_set_name": "Github"
}
|
const createAtom = require('tiny-atom')
module.exports = function scopedAtom(options) {
options = Object.assign({}, options, { evolve })
const atom = createAtom({}, {}, options)
const fuse = atom.fuse
atom.fuse = function(namespace, initialState, actions) {
actions = Object.keys(actions).reduce((acc, action) => {
acc[`${namespace}.${action}`] = actions[action]
return acc
}, {})
fuse({ [namespace]: initialState }, actions)
}
function evolve({ get, set, dispatch }, action, actions) {
const namespace = action.type
.split('.')
.slice(0, -1)
.join('.')
const top = { get, set, dispatch }
get = () => namespace.split('.').reduce((ref, segment) => ref[segment], top.get())
set = update => top.set({ [namespace]: update })
dispatch = (type, payload) => top.dispatch(`${namespace}.${type}`, payload)
actions[action.type]({ get, set, dispatch, top }, action.payload)
}
return atom
}
|
{
"pile_set_name": "Github"
}
|
@charset "UTF-8";
/// CSS cubic-bezier timing functions. Timing functions courtesy of jquery.easie (github.com/jaukia/easie)
///
/// Timing functions are the same as demoed here: http://jqueryui.com/resources/demos/effect/easing.html
///
/// @type cubic-bezier
$ease-in-quad: cubic-bezier(0.550, 0.085, 0.680, 0.530);
$ease-in-cubic: cubic-bezier(0.550, 0.055, 0.675, 0.190);
$ease-in-quart: cubic-bezier(0.895, 0.030, 0.685, 0.220);
$ease-in-quint: cubic-bezier(0.755, 0.050, 0.855, 0.060);
$ease-in-sine: cubic-bezier(0.470, 0.000, 0.745, 0.715);
$ease-in-expo: cubic-bezier(0.950, 0.050, 0.795, 0.035);
$ease-in-circ: cubic-bezier(0.600, 0.040, 0.980, 0.335);
$ease-in-back: cubic-bezier(0.600, -0.280, 0.735, 0.045);
$ease-out-quad: cubic-bezier(0.250, 0.460, 0.450, 0.940);
$ease-out-cubic: cubic-bezier(0.215, 0.610, 0.355, 1.000);
$ease-out-quart: cubic-bezier(0.165, 0.840, 0.440, 1.000);
$ease-out-quint: cubic-bezier(0.230, 1.000, 0.320, 1.000);
$ease-out-sine: cubic-bezier(0.390, 0.575, 0.565, 1.000);
$ease-out-expo: cubic-bezier(0.190, 1.000, 0.220, 1.000);
$ease-out-circ: cubic-bezier(0.075, 0.820, 0.165, 1.000);
$ease-out-back: cubic-bezier(0.175, 0.885, 0.320, 1.275);
$ease-in-out-quad: cubic-bezier(0.455, 0.030, 0.515, 0.955);
$ease-in-out-cubic: cubic-bezier(0.645, 0.045, 0.355, 1.000);
$ease-in-out-quart: cubic-bezier(0.770, 0.000, 0.175, 1.000);
$ease-in-out-quint: cubic-bezier(0.860, 0.000, 0.070, 1.000);
$ease-in-out-sine: cubic-bezier(0.445, 0.050, 0.550, 0.950);
$ease-in-out-expo: cubic-bezier(1.000, 0.000, 0.000, 1.000);
$ease-in-out-circ: cubic-bezier(0.785, 0.135, 0.150, 0.860);
$ease-in-out-back: cubic-bezier(0.680, -0.550, 0.265, 1.550);
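As a minimal usage sketch (the `.panel` selector and timing values are illustrative, not part of this file), one of these variables would typically be dropped into a `transition` shorthand:
```scss
.panel {
  // Overshoots slightly at the end of the movement, per the -back curve.
  transition: transform 0.3s $ease-out-back;
}
```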
|
{
"pile_set_name": "Github"
}
|
object Test extends App {
val ms = s"""This is a long multiline interpolation
with \u000d\u000a CRLF embedded."""
assert(ms.linesIterator.size == 3, s"lines.size ${ms.linesIterator.size}")
assert(ms contains "\r\n CRLF", "no CRLF")
}
|
{
"pile_set_name": "Github"
}
|
<?pi ?>
<doc></doc>
|
{
"pile_set_name": "Github"
}
|
// Copyright (c) 2006 Tel-Aviv University (Israel).
// All rights reserved.
//
// This file is part of CGAL (www.cgal.org).
// You can redistribute it and/or modify it under the terms of the GNU
// General Public License as published by the Free Software Foundation,
// either version 3 of the License, or (at your option) any later version.
//
// Licensees holding a valid commercial license may use this file in
// accordance with the commercial license agreement provided with the software.
//
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
//
// $URL$
// $Id$
// SPDX-License-Identifier: GPL-3.0+
//
// Author(s) : Ron Wein <wein_r@yahoo.com>
// Efi Fogel <efifogel@gmail.com>
#ifndef CGAL_POLYGON_DECOMPOSITION_STRATEGY_ADAPTER_H
#define CGAL_POLYGON_DECOMPOSITION_STRATEGY_ADAPTER_H
#include <CGAL/license/Minkowski_sum_2.h>
#include <CGAL/basic.h>
#include <CGAL/Polygon_2.h>
#include <CGAL/Partition_traits_2.h>
#include <CGAL/partition_2.h>
namespace CGAL {
struct Tag_optimal_convex_parition { bool dummy; };
struct Tag_approx_convex_parition { bool dummy; };
struct Tag_Greene_convex_parition { bool dummy; };
/*!
* \class
* An adapter of the global planar polygonal partitioning functions
* to a decomposition strategy-class.
*/
template <typename Kernel_, typename Container_, typename StrategyTag_>
class Polygon_decomposition_strategy_adapter {
public:
typedef Kernel_ Kernel;
typedef CGAL::Polygon_2<Kernel, Container_> Polygon_2;
typedef typename Kernel::Point_2 Point_2;
typedef StrategyTag_ Strategy_tag;
protected:
typedef Partition_traits_2<Kernel> Traits_2;
typedef typename Traits_2::Polygon_2 Traits_polygon_2;
// Data members:
const Traits_2* m_traits;
bool m_own_traits; // indicates whether the traits object should be freed up.
public:
/*! Default constructor. */
Polygon_decomposition_strategy_adapter() :
m_traits(NULL),
m_own_traits(false)
{ init(); }
/*! Constructor. */
Polygon_decomposition_strategy_adapter(const Traits_2& traits) :
m_traits(&traits),
m_own_traits(false)
{ init(); }
/*! Destructor */
~Polygon_decomposition_strategy_adapter()
{
if (m_own_traits) {
if (m_traits != NULL) {
delete m_traits;
m_traits = NULL;
}
m_own_traits = false;
}
}
//! Initialize
void init()
{
// Allocate the traits if not provided.
if (m_traits == NULL) {
m_traits = new Traits_2;
m_own_traits = true;
}
}
/*!
* Obtain the traits
* \return the traits
*/
const Traits_2* traits() const { return m_traits; }
/*!
* Decompose a simple polygon to convex sub-polygons.
* \param pgn The input polygon.
* \param oi An output iterator of convex polygons.
* \return A past-the-end iterator for the sub-polygons.
*/
template <typename OutputIterator>
OutputIterator operator()(const Polygon_2& pgn, OutputIterator oi) const
{
std::list<Traits_polygon_2> pgns;
typename std::list<Traits_polygon_2>::const_iterator pgn_it;
if (pgn.orientation() == CLOCKWISE) {
// Make a local copy of the polygon, and reverse the order of its
// vertices to make it counterclockwise oriented.
Polygon_2 my_pgn = pgn;
my_pgn.reverse_orientation();
// Perform the decomposition.
_decompose (my_pgn, Strategy_tag(), std::back_inserter(pgns));
}
else {
// Perform the decomposition on the original polygon.
_decompose (pgn, Strategy_tag(), std::back_inserter(pgns));
}
// Copy the polygons to the output iterator.
for (pgn_it = pgns.begin(); pgn_it != pgns.end(); ++pgn_it)
*oi++ = Polygon_2(pgn_it->vertices_begin(), pgn_it->vertices_end());
return (oi);
}
private:
/*!
* Decompose the given counterclockwise-oriented polygon using the optimal
* convex-partition method.
*/
template <typename OutputIterator>
OutputIterator _decompose(const Polygon_2& pgn,
Tag_optimal_convex_parition ,
OutputIterator oi) const
{
return (optimal_convex_partition_2(pgn.vertices_begin(),
pgn.vertices_end(),
oi, *m_traits));
}
/*!
* Decompose the given counterclockwise-oriented polygon using the
* approximated convex-partition method.
*/
template <typename OutputIterator>
OutputIterator _decompose(const Polygon_2& pgn,
Tag_approx_convex_parition ,
OutputIterator oi) const
{
return (approx_convex_partition_2(pgn.vertices_begin(),
pgn.vertices_end(),
oi, *m_traits));
}
/*!
* Decompose the given counterclockwise-oriented polygon using Greene's
* approximated convex-partition method.
*/
template <typename OutputIterator>
OutputIterator _decompose(const Polygon_2& pgn,
Tag_Greene_convex_parition ,
OutputIterator oi) const
{
return (greene_approx_convex_partition_2(pgn.vertices_begin(),
pgn.vertices_end(),
oi, *m_traits));
}
};
} //namespace CGAL
#endif
|
{
"pile_set_name": "Github"
}
|
/*
Bullet Continuous Collision Detection and Physics Library
Copyright (c) 2003-2006 Erwin Coumans http://continuousphysics.com/Bullet/
This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it freely,
subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
*/
#include <cstdio>
#define WAVEFRONT_SIZE 32
#define WAVEFRONT_BLOCK_MULTIPLIER 2
#define GROUP_SIZE (WAVEFRONT_SIZE*WAVEFRONT_BLOCK_MULTIPLIER)
#define LINKS_PER_SIMD_LANE 16
#define STRINGIFY( S ) STRINGIFY2( S )
#define STRINGIFY2( S ) #S
#include "BulletCollision/CollisionShapes/btTriangleIndexVertexArray.h"
#include "vectormath/vmInclude.h"
#include "btSoftBodySolverLinkData_DX11SIMDAware.h"
#include "btSoftBodySolver_DX11SIMDAware.h"
#include "btSoftBodySolverVertexBuffer_DX11.h"
#include "BulletSoftBody/btSoftBody.h"
#include "BulletCollision/CollisionShapes/btCapsuleShape.h"
#define MSTRINGIFY(A) #A
static char* UpdatePositionsFromVelocitiesHLSLString =
#include "HLSL/UpdatePositionsFromVelocities.hlsl"
static char* SolvePositionsSIMDBatchedHLSLString =
#include "HLSL/SolvePositionsSIMDBatched.hlsl"
static char* UpdateNodesHLSLString =
#include "HLSL/UpdateNodes.hlsl"
static char* UpdatePositionsHLSLString =
#include "HLSL/UpdatePositions.hlsl"
static char* UpdateConstantsHLSLString =
#include "HLSL/UpdateConstants.hlsl"
static char* IntegrateHLSLString =
#include "HLSL/Integrate.hlsl"
static char* ApplyForcesHLSLString =
#include "HLSL/ApplyForces.hlsl"
static char* UpdateNormalsHLSLString =
#include "HLSL/UpdateNormals.hlsl"
static char* OutputToVertexArrayHLSLString =
#include "HLSL/OutputToVertexArray.hlsl"
static char* VSolveLinksHLSLString =
#include "HLSL/VSolveLinks.hlsl"
static char* ComputeBoundsHLSLString =
#include "HLSL/ComputeBounds.hlsl"
static char* SolveCollisionsAndUpdateVelocitiesHLSLString =
#include "HLSL/solveCollisionsAndUpdateVelocitiesSIMDBatched.hlsl"
btSoftBodyLinkDataDX11SIMDAware::btSoftBodyLinkDataDX11SIMDAware( ID3D11Device *d3dDevice, ID3D11DeviceContext *d3dDeviceContext ) :
m_d3dDevice( d3dDevice ),
m_d3dDeviceContext( d3dDeviceContext ),
m_wavefrontSize( WAVEFRONT_SIZE ),
m_linksPerWorkItem( LINKS_PER_SIMD_LANE ),
m_maxBatchesWithinWave( 0 ),
m_maxLinksPerWavefront( m_wavefrontSize * m_linksPerWorkItem ),
m_numWavefronts( 0 ),
m_maxVertex( 0 ),
m_dx11NumBatchesAndVerticesWithinWaves( d3dDevice, d3dDeviceContext, &m_numBatchesAndVerticesWithinWaves, true ),
m_dx11WavefrontVerticesGlobalAddresses( d3dDevice, d3dDeviceContext, &m_wavefrontVerticesGlobalAddresses, true ),
m_dx11LinkVerticesLocalAddresses( d3dDevice, d3dDeviceContext, &m_linkVerticesLocalAddresses, true ),
m_dx11LinkStrength( d3dDevice, d3dDeviceContext, &m_linkStrength, true ),
m_dx11LinksMassLSC( d3dDevice, d3dDeviceContext, &m_linksMassLSC, true ),
m_dx11LinksRestLengthSquared( d3dDevice, d3dDeviceContext, &m_linksRestLengthSquared, true ),
m_dx11LinksRestLength( d3dDevice, d3dDeviceContext, &m_linksRestLength, true ),
m_dx11LinksMaterialLinearStiffnessCoefficient( d3dDevice, d3dDeviceContext, &m_linksMaterialLinearStiffnessCoefficient, true )
{
m_d3dDevice = d3dDevice;
m_d3dDeviceContext = d3dDeviceContext;
}
btSoftBodyLinkDataDX11SIMDAware::~btSoftBodyLinkDataDX11SIMDAware()
{
}
static Vectormath::Aos::Vector3 toVector3( const btVector3 &vec )
{
Vectormath::Aos::Vector3 outVec( vec.getX(), vec.getY(), vec.getZ() );
return outVec;
}
void btSoftBodyLinkDataDX11SIMDAware::createLinks( int numLinks )
{
int previousSize = m_links.size();
int newSize = previousSize + numLinks;
btSoftBodyLinkData::createLinks( numLinks );
// Resize the link addresses array as well
m_linkAddresses.resize( newSize );
}
void btSoftBodyLinkDataDX11SIMDAware::setLinkAt( const btSoftBodyLinkData::LinkDescription &link, int linkIndex )
{
btSoftBodyLinkData::setLinkAt( link, linkIndex );
if( link.getVertex0() > m_maxVertex )
m_maxVertex = link.getVertex0();
if( link.getVertex1() > m_maxVertex )
m_maxVertex = link.getVertex1();
// Set the link index correctly for initialisation
m_linkAddresses[linkIndex] = linkIndex;
}
bool btSoftBodyLinkDataDX11SIMDAware::onAccelerator()
{
return m_onGPU;
}
bool btSoftBodyLinkDataDX11SIMDAware::moveToAccelerator()
{
bool success = true;
success = success && m_dx11NumBatchesAndVerticesWithinWaves.moveToGPU();
success = success && m_dx11WavefrontVerticesGlobalAddresses.moveToGPU();
success = success && m_dx11LinkVerticesLocalAddresses.moveToGPU();
success = success && m_dx11LinkStrength.moveToGPU();
success = success && m_dx11LinksMassLSC.moveToGPU();
success = success && m_dx11LinksRestLengthSquared.moveToGPU();
success = success && m_dx11LinksRestLength.moveToGPU();
success = success && m_dx11LinksMaterialLinearStiffnessCoefficient.moveToGPU();
if( success )
m_onGPU = true;
return success;
}
bool btSoftBodyLinkDataDX11SIMDAware::moveFromAccelerator()
{
bool success = true;
success = success && m_dx11NumBatchesAndVerticesWithinWaves.moveFromGPU();
success = success && m_dx11WavefrontVerticesGlobalAddresses.moveFromGPU();
success = success && m_dx11LinkVerticesLocalAddresses.moveFromGPU();
success = success && m_dx11LinkStrength.moveFromGPU();
success = success && m_dx11LinksMassLSC.moveFromGPU();
success = success && m_dx11LinksRestLengthSquared.moveFromGPU();
success = success && m_dx11LinksRestLength.moveFromGPU();
success = success && m_dx11LinksMaterialLinearStiffnessCoefficient.moveFromGPU();
if( success )
m_onGPU = false;
return success;
}
btDX11SIMDAwareSoftBodySolver::btDX11SIMDAwareSoftBodySolver(ID3D11Device * dx11Device, ID3D11DeviceContext* dx11Context, DXFunctions::CompileFromMemoryFunc dx11CompileFromMemory) :
btDX11SoftBodySolver( dx11Device, dx11Context, dx11CompileFromMemory ),
m_linkData(m_dx11Device, m_dx11Context)
{
// Initially we will need to update the solver constants
// For now this is global for the cloths linked with this solver - we should probably make this body specific
// for performance in future once we understand more clearly when constants need to be updated
m_updateSolverConstants = true;
m_shadersInitialized = false;
}
btDX11SIMDAwareSoftBodySolver::~btDX11SIMDAwareSoftBodySolver()
{
releaseKernels();
}
btSoftBodyLinkData &btDX11SIMDAwareSoftBodySolver::getLinkData()
{
// TODO: Consider setting link data to "changed" here
return m_linkData;
}
void btDX11SIMDAwareSoftBodySolver::optimize( btAlignedObjectArray< btSoftBody * > &softBodies , bool forceUpdate)
{
if(forceUpdate || m_softBodySet.size() != softBodies.size() )
{
// Have a change in the soft body set so update, reloading all the data
getVertexData().clear();
getTriangleData().clear();
getLinkData().clear();
m_softBodySet.resize(0);
for( int softBodyIndex = 0; softBodyIndex < softBodies.size(); ++softBodyIndex )
{
btSoftBody *softBody = softBodies[ softBodyIndex ];
using Vectormath::Aos::Matrix3;
using Vectormath::Aos::Point3;
// Create SoftBody that will store the information within the solver
btAcceleratedSoftBodyInterface *newSoftBody = new btAcceleratedSoftBodyInterface( softBody );
m_softBodySet.push_back( newSoftBody );
m_perClothAcceleration.push_back( toVector3(softBody->getWorldInfo()->m_gravity) );
m_perClothDampingFactor.push_back(softBody->m_cfg.kDP);
m_perClothVelocityCorrectionCoefficient.push_back( softBody->m_cfg.kVCF );
m_perClothLiftFactor.push_back( softBody->m_cfg.kLF );
m_perClothDragFactor.push_back( softBody->m_cfg.kDG );
m_perClothMediumDensity.push_back(softBody->getWorldInfo()->air_density);
// Simple init values. Actually we'll put 0 and -1 into them at the appropriate time
m_perClothMinBounds.push_back( UIntVector3( 0, 0, 0 ) );
m_perClothMaxBounds.push_back( UIntVector3( UINT_MAX, UINT_MAX, UINT_MAX ) );
m_perClothFriction.push_back( softBody->getFriction() );
m_perClothCollisionObjects.push_back( CollisionObjectIndices(-1, -1) );
// Add space for new vertices and triangles in the default solver for now
// TODO: Include space here for tearing too later
int firstVertex = getVertexData().getNumVertices();
int numVertices = softBody->m_nodes.size();
// Round maxVertices to a multiple of the workgroup size so we know we're safe to run over in a given group
// maxVertices can be increased to allow tearing, but should be used sparingly because these extra verts will always be processed
int maxVertices = GROUP_SIZE*((numVertices+GROUP_SIZE)/GROUP_SIZE);
// Allocate space for new vertices in all the vertex arrays
getVertexData().createVertices( numVertices, softBodyIndex, maxVertices );
int firstTriangle = getTriangleData().getNumTriangles();
int numTriangles = softBody->m_faces.size();
int maxTriangles = numTriangles;
getTriangleData().createTriangles( maxTriangles );
// Copy vertices from softbody into the solver
for( int vertex = 0; vertex < numVertices; ++vertex )
{
Point3 multPoint(softBody->m_nodes[vertex].m_x.getX(), softBody->m_nodes[vertex].m_x.getY(), softBody->m_nodes[vertex].m_x.getZ());
btSoftBodyVertexData::VertexDescription desc;
// TODO: Position in the softbody might be pre-transformed
// or we may need to adapt for the pose.
//desc.setPosition( cloth.getMeshTransform()*multPoint );
desc.setPosition( multPoint );
float vertexInverseMass = softBody->m_nodes[vertex].m_im;
desc.setInverseMass(vertexInverseMass);
getVertexData().setVertexAt( desc, firstVertex + vertex );
}
// Copy triangles similarly
// We're assuming here that vertex indices are based on the firstVertex rather than the entire scene
for( int triangle = 0; triangle < numTriangles; ++triangle )
{
// Note that large array storage is relative to the array not to the cloth
// So we need to add firstVertex to each value
int vertexIndex0 = (softBody->m_faces[triangle].m_n[0] - &(softBody->m_nodes[0]));
int vertexIndex1 = (softBody->m_faces[triangle].m_n[1] - &(softBody->m_nodes[0]));
int vertexIndex2 = (softBody->m_faces[triangle].m_n[2] - &(softBody->m_nodes[0]));
btSoftBodyTriangleData::TriangleDescription newTriangle(vertexIndex0 + firstVertex, vertexIndex1 + firstVertex, vertexIndex2 + firstVertex);
getTriangleData().setTriangleAt( newTriangle, firstTriangle + triangle );
// Increase vertex triangle counts for this triangle
getVertexData().getTriangleCount(newTriangle.getVertexSet().vertex0)++;
getVertexData().getTriangleCount(newTriangle.getVertexSet().vertex1)++;
getVertexData().getTriangleCount(newTriangle.getVertexSet().vertex2)++;
}
int firstLink = getLinkData().getNumLinks();
int numLinks = softBody->m_links.size();
int maxLinks = numLinks;
// Allocate space for the links
getLinkData().createLinks( numLinks );
// Add the links
for( int link = 0; link < numLinks; ++link )
{
int vertexIndex0 = softBody->m_links[link].m_n[0] - &(softBody->m_nodes[0]);
int vertexIndex1 = softBody->m_links[link].m_n[1] - &(softBody->m_nodes[0]);
btSoftBodyLinkData::LinkDescription newLink(vertexIndex0 + firstVertex, vertexIndex1 + firstVertex, softBody->m_links[link].m_material->m_kLST);
newLink.setLinkStrength(1.f);
getLinkData().setLinkAt(newLink, firstLink + link);
}
newSoftBody->setFirstVertex( firstVertex );
newSoftBody->setFirstTriangle( firstTriangle );
newSoftBody->setNumVertices( numVertices );
newSoftBody->setMaxVertices( maxVertices );
newSoftBody->setNumTriangles( numTriangles );
newSoftBody->setMaxTriangles( maxTriangles );
newSoftBody->setFirstLink( firstLink );
newSoftBody->setNumLinks( numLinks );
}
updateConstants(0.f);
m_linkData.generateBatches();
m_triangleData.generateBatches();
// Build the shaders to match the batching parameters
buildShaders();
}
}
void btDX11SIMDAwareSoftBodySolver::solveConstraints( float solverdt )
{
//std::cerr << "'GPU' solve constraints\n";
using Vectormath::Aos::Vector3;
using Vectormath::Aos::Point3;
using Vectormath::Aos::lengthSqr;
using Vectormath::Aos::dot;
// Prepare links
int numLinks = m_linkData.getNumLinks();
int numVertices = m_vertexData.getNumVertices();
float kst = 1.f;
float ti = 0.f;
m_dx11PerClothDampingFactor.moveToGPU();
m_dx11PerClothVelocityCorrectionCoefficient.moveToGPU();
// Ensure data is on accelerator
m_linkData.moveToAccelerator();
m_vertexData.moveToAccelerator();
prepareCollisionConstraints();
// Solve drift
for( int iteration = 0; iteration < m_numberOfPositionIterations ; ++iteration )
{
for( int i = 0; i < m_linkData.m_wavefrontBatchStartLengths.size(); ++i )
{
int startWave = m_linkData.m_wavefrontBatchStartLengths[i].start;
int numWaves = m_linkData.m_wavefrontBatchStartLengths[i].length;
solveLinksForPosition( startWave, numWaves, kst, ti );
}
} // for( int iteration = 0; iteration < m_numberOfPositionIterations ; ++iteration )
// At this point assume that the force array is blank - we will overwrite it
solveCollisionsAndUpdateVelocities( 1.f/solverdt );
} // btDX11SIMDAwareSoftBodySolver::solveConstraints
void btDX11SIMDAwareSoftBodySolver::updateConstants( float timeStep )
{
using namespace Vectormath::Aos;
if( m_updateSolverConstants )
{
m_updateSolverConstants = false;
// Will have to redo this if we change the structure (tear, maybe) or various other possible changes
// Initialise link constants
const int numLinks = m_linkData.getNumLinks();
for( int linkIndex = 0; linkIndex < numLinks; ++linkIndex )
{
btSoftBodyLinkData::LinkNodePair &vertices( m_linkData.getVertexPair(linkIndex) );
m_linkData.getRestLength(linkIndex) = length((m_vertexData.getPosition( vertices.vertex0 ) - m_vertexData.getPosition( vertices.vertex1 )));
float invMass0 = m_vertexData.getInverseMass(vertices.vertex0);
float invMass1 = m_vertexData.getInverseMass(vertices.vertex1);
float linearStiffness = m_linkData.getLinearStiffnessCoefficient(linkIndex);
float massLSC = (invMass0 + invMass1)/linearStiffness;
m_linkData.getMassLSC(linkIndex) = massLSC;
float restLength = m_linkData.getRestLength(linkIndex);
float restLengthSquared = restLength*restLength;
m_linkData.getRestLengthSquared(linkIndex) = restLengthSquared;
}
}
} // btDX11SIMDAwareSoftBodySolver::updateConstants
//////////////////////////////////////
// Kernel dispatches
void btDX11SIMDAwareSoftBodySolver::solveLinksForPosition( int startWave, int numWaves, float kst, float ti )
{
m_vertexData.moveToAccelerator();
m_linkData.moveToAccelerator();
// Copy kernel parameters to GPU
SolvePositionsFromLinksKernelCB constBuffer;
// Set the first wave of the batch and the number of waves
constBuffer.startWave = startWave;
constBuffer.numWaves = numWaves;
constBuffer.kst = kst;
constBuffer.ti = ti;
D3D11_MAPPED_SUBRESOURCE MappedResource = {0};
m_dx11Context->Map( solvePositionsFromLinksKernel.constBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource );
memcpy( MappedResource.pData, &constBuffer, sizeof(SolvePositionsFromLinksKernelCB) );
m_dx11Context->Unmap( solvePositionsFromLinksKernel.constBuffer, 0 );
m_dx11Context->CSSetConstantBuffers( 0, 1, &solvePositionsFromLinksKernel.constBuffer );
// Set resources and dispatch
m_dx11Context->CSSetShaderResources( 0, 1, &(m_linkData.m_dx11NumBatchesAndVerticesWithinWaves.getSRV()) );
m_dx11Context->CSSetShaderResources( 1, 1, &(m_linkData.m_dx11WavefrontVerticesGlobalAddresses.getSRV()) );
m_dx11Context->CSSetShaderResources( 2, 1, &(m_vertexData.m_dx11VertexInverseMass.getSRV()) );
m_dx11Context->CSSetShaderResources( 3, 1, &(m_linkData.m_dx11LinkVerticesLocalAddresses.getSRV()) );
m_dx11Context->CSSetShaderResources( 4, 1, &(m_linkData.m_dx11LinksMassLSC.getSRV()) );
m_dx11Context->CSSetShaderResources( 5, 1, &(m_linkData.m_dx11LinksRestLengthSquared.getSRV()) );
m_dx11Context->CSSetUnorderedAccessViews( 0, 1, &(m_vertexData.m_dx11VertexPosition.getUAV()), NULL );
// Execute the kernel
m_dx11Context->CSSetShader( solvePositionsFromLinksKernel.kernel, NULL, 0 );
int numBlocks = ((constBuffer.numWaves + WAVEFRONT_BLOCK_MULTIPLIER - 1) / WAVEFRONT_BLOCK_MULTIPLIER );
m_dx11Context->Dispatch(numBlocks , 1, 1 );
{
// Tidy up
ID3D11ShaderResourceView* pViewNULL = NULL;
m_dx11Context->CSSetShaderResources( 0, 1, &pViewNULL );
m_dx11Context->CSSetShaderResources( 1, 1, &pViewNULL );
m_dx11Context->CSSetShaderResources( 2, 1, &pViewNULL );
m_dx11Context->CSSetShaderResources( 3, 1, &pViewNULL );
m_dx11Context->CSSetShaderResources( 4, 1, &pViewNULL );
m_dx11Context->CSSetShaderResources( 5, 1, &pViewNULL );
ID3D11UnorderedAccessView* pUAViewNULL = NULL;
m_dx11Context->CSSetUnorderedAccessViews( 0, 1, &pUAViewNULL, NULL );
ID3D11Buffer *pBufferNull = NULL;
m_dx11Context->CSSetConstantBuffers( 0, 1, &pBufferNull );
}
} // btDX11SIMDAwareSoftBodySolver::solveLinksForPosition
// End kernel dispatches
/////////////////////////////////////
bool btDX11SIMDAwareSoftBodySolver::buildShaders()
{
// Ensure current kernels are released first
releaseKernels();
bool returnVal = true;
if( m_shadersInitialized )
return true;
updatePositionsFromVelocitiesKernel = dxFunctions.compileComputeShaderFromString( UpdatePositionsFromVelocitiesHLSLString, "UpdatePositionsFromVelocitiesKernel", sizeof(UpdatePositionsFromVelocitiesCB) );
if( !updatePositionsFromVelocitiesKernel.constBuffer )
returnVal = false;
char maxVerticesPerWavefront[20];
char maxBatchesPerWavefront[20];
char waveFrontSize[20];
char waveFrontBlockMultiplier[20];
char blockSize[20];
sprintf(maxVerticesPerWavefront, "%d", m_linkData.getMaxVerticesPerWavefront());
sprintf(maxBatchesPerWavefront, "%d", m_linkData.getMaxBatchesPerWavefront());
sprintf(waveFrontSize, "%d", m_linkData.getWavefrontSize());
sprintf(waveFrontBlockMultiplier, "%d", WAVEFRONT_BLOCK_MULTIPLIER);
sprintf(blockSize, "%d", WAVEFRONT_BLOCK_MULTIPLIER*m_linkData.getWavefrontSize());
D3D10_SHADER_MACRO solvePositionsMacros[6] = { "MAX_NUM_VERTICES_PER_WAVE", maxVerticesPerWavefront, "MAX_BATCHES_PER_WAVE", maxBatchesPerWavefront, "WAVEFRONT_SIZE", waveFrontSize, "WAVEFRONT_BLOCK_MULTIPLIER", waveFrontBlockMultiplier, "BLOCK_SIZE", blockSize, 0, 0 };
solvePositionsFromLinksKernel = dxFunctions.compileComputeShaderFromString( SolvePositionsSIMDBatchedHLSLString, "SolvePositionsFromLinksKernel", sizeof(SolvePositionsFromLinksKernelCB), solvePositionsMacros );
if( !solvePositionsFromLinksKernel.constBuffer )
returnVal = false;
updateVelocitiesFromPositionsWithVelocitiesKernel = dxFunctions.compileComputeShaderFromString( UpdateNodesHLSLString, "updateVelocitiesFromPositionsWithVelocitiesKernel", sizeof(UpdateVelocitiesFromPositionsWithVelocitiesCB) );
if( !updateVelocitiesFromPositionsWithVelocitiesKernel.constBuffer )
returnVal = false;
updateVelocitiesFromPositionsWithoutVelocitiesKernel = dxFunctions.compileComputeShaderFromString( UpdatePositionsHLSLString, "updateVelocitiesFromPositionsWithoutVelocitiesKernel", sizeof(UpdateVelocitiesFromPositionsWithoutVelocitiesCB));
if( !updateVelocitiesFromPositionsWithoutVelocitiesKernel.constBuffer )
returnVal = false;
integrateKernel = dxFunctions.compileComputeShaderFromString( IntegrateHLSLString, "IntegrateKernel", sizeof(IntegrateCB) );
if( !integrateKernel.constBuffer )
returnVal = false;
applyForcesKernel = dxFunctions.compileComputeShaderFromString( ApplyForcesHLSLString, "ApplyForcesKernel", sizeof(ApplyForcesCB) );
if( !applyForcesKernel.constBuffer )
returnVal = false;
solveCollisionsAndUpdateVelocitiesKernel = dxFunctions.compileComputeShaderFromString( SolveCollisionsAndUpdateVelocitiesHLSLString, "SolveCollisionsAndUpdateVelocitiesKernel", sizeof(SolveCollisionsAndUpdateVelocitiesCB) );
if( !solveCollisionsAndUpdateVelocitiesKernel.constBuffer )
returnVal = false;
resetNormalsAndAreasKernel = dxFunctions.compileComputeShaderFromString( UpdateNormalsHLSLString, "ResetNormalsAndAreasKernel", sizeof(UpdateSoftBodiesCB) );
if( !resetNormalsAndAreasKernel.constBuffer )
returnVal = false;
normalizeNormalsAndAreasKernel = dxFunctions.compileComputeShaderFromString( UpdateNormalsHLSLString, "NormalizeNormalsAndAreasKernel", sizeof(UpdateSoftBodiesCB) );
if( !normalizeNormalsAndAreasKernel.constBuffer )
returnVal = false;
updateSoftBodiesKernel = dxFunctions.compileComputeShaderFromString( UpdateNormalsHLSLString, "UpdateSoftBodiesKernel", sizeof(UpdateSoftBodiesCB) );
if( !updateSoftBodiesKernel.constBuffer )
returnVal = false;
computeBoundsKernel = dxFunctions.compileComputeShaderFromString( ComputeBoundsHLSLString, "ComputeBoundsKernel", sizeof(ComputeBoundsCB) );
if( !computeBoundsKernel.constBuffer )
returnVal = false;
if( returnVal )
m_shadersInitialized = true;
return returnVal;
} // btDX11SIMDAwareSoftBodySolver::buildShaders
static Vectormath::Aos::Transform3 toTransform3( const btTransform &transform )
{
Vectormath::Aos::Transform3 outTransform;
outTransform.setCol(0, toVector3(transform.getBasis().getColumn(0)));
outTransform.setCol(1, toVector3(transform.getBasis().getColumn(1)));
outTransform.setCol(2, toVector3(transform.getBasis().getColumn(2)));
outTransform.setCol(3, toVector3(transform.getOrigin()));
return outTransform;
}
static void generateBatchesOfWavefronts( btAlignedObjectArray < btAlignedObjectArray <int> > &linksForWavefronts, btSoftBodyLinkData &linkData, int numVertices, btAlignedObjectArray < btAlignedObjectArray <int> > &wavefrontBatches )
{
// A per-batch map of truth values stating whether a given vertex is in that batch
// This allows us to significantly optimize the batching
btAlignedObjectArray <btAlignedObjectArray<bool> > mapOfVerticesInBatches;
for( int waveIndex = 0; waveIndex < linksForWavefronts.size(); ++waveIndex )
{
btAlignedObjectArray <int> &wavefront( linksForWavefronts[waveIndex] );
int batch = 0;
bool placed = false;
while( batch < wavefrontBatches.size() && !placed )
{
// Test the current batch, see if this wave shares any vertex with the waves in the batch
bool foundSharedVertex = false;
for( int link = 0; link < wavefront.size(); ++link )
{
btSoftBodyLinkData::LinkNodePair vertices = linkData.getVertexPair( wavefront[link] );
if( (mapOfVerticesInBatches[batch])[vertices.vertex0] || (mapOfVerticesInBatches[batch])[vertices.vertex1] )
{
foundSharedVertex = true;
}
}
if( !foundSharedVertex )
{
wavefrontBatches[batch].push_back( waveIndex );
// Insert vertices into this batch too
for( int link = 0; link < wavefront.size(); ++link )
{
btSoftBodyLinkData::LinkNodePair vertices = linkData.getVertexPair( wavefront[link] );
(mapOfVerticesInBatches[batch])[vertices.vertex0] = true;
(mapOfVerticesInBatches[batch])[vertices.vertex1] = true;
}
placed = true;
}
batch++;
}
if( batch == wavefrontBatches.size() && !placed )
{
wavefrontBatches.resize( batch + 1 );
wavefrontBatches[batch].push_back( waveIndex );
// And resize map as well
mapOfVerticesInBatches.resize( batch + 1 );
// Resize maps with total number of vertices
mapOfVerticesInBatches[batch].resize( numVertices+1, false );
// Insert vertices into this batch too
for( int link = 0; link < wavefront.size(); ++link )
{
btSoftBodyLinkData::LinkNodePair vertices = linkData.getVertexPair( wavefront[link] );
(mapOfVerticesInBatches[batch])[vertices.vertex0] = true;
(mapOfVerticesInBatches[batch])[vertices.vertex1] = true;
}
}
}
mapOfVerticesInBatches.clear();
}
// Function to remove an object from a vector maintaining correct ordering of the vector
template< typename T > static void removeFromVector( btAlignedObjectArray< T > &vectorToUpdate, int indexToRemove )
{
int currentSize = vectorToUpdate.size();
for( int i = indexToRemove; i < (currentSize-1); ++i )
{
vectorToUpdate[i] = vectorToUpdate[i+1];
}
if( currentSize > 0 )
vectorToUpdate.resize( currentSize - 1 );
}
/**
* Insert element into vectorToUpdate at index index.
*/
template< typename T > static void insertAtIndex( btAlignedObjectArray< T > &vectorToUpdate, int index, T element )
{
vectorToUpdate.resize( vectorToUpdate.size() + 1 );
for( int i = (vectorToUpdate.size() - 1); i > index; --i )
{
vectorToUpdate[i] = vectorToUpdate[i-1];
}
vectorToUpdate[index] = element;
}
/**
* Insert into btAlignedObjectArray assuming the array is ordered and maintaining both ordering and uniqueness.
* ie it treats vectorToUpdate as an ordered set.
*/
template< typename T > static void insertUniqueAndOrderedIntoVector( btAlignedObjectArray<T> &vectorToUpdate, T element )
{
int index = 0;
while( index < vectorToUpdate.size() && vectorToUpdate[index] < element )
{
index++;
}
if( index == vectorToUpdate.size() || vectorToUpdate[index] != element )
insertAtIndex( vectorToUpdate, index, element );
}
static void generateLinksPerVertex( int numVertices, btSoftBodyLinkData &linkData, btAlignedObjectArray< int > &listOfLinksPerVertex, btAlignedObjectArray <int> &numLinksPerVertex, int &maxLinks )
{
for( int linkIndex = 0; linkIndex < linkData.getNumLinks(); ++linkIndex )
{
btSoftBodyLinkData::LinkNodePair nodes( linkData.getVertexPair(linkIndex) );
numLinksPerVertex[nodes.vertex0]++;
numLinksPerVertex[nodes.vertex1]++;
}
int maxLinksPerVertex = 0;
for( int vertexIndex = 0; vertexIndex < numVertices; ++vertexIndex )
{
maxLinksPerVertex = btMax(numLinksPerVertex[vertexIndex], maxLinksPerVertex);
}
maxLinks = maxLinksPerVertex;
btAlignedObjectArray< int > linksFoundPerVertex;
linksFoundPerVertex.resize( numVertices, 0 );
listOfLinksPerVertex.resize( maxLinksPerVertex * numVertices );
for( int linkIndex = 0; linkIndex < linkData.getNumLinks(); ++linkIndex )
{
btSoftBodyLinkData::LinkNodePair nodes( linkData.getVertexPair(linkIndex) );
{
// Do vertex 0
int vertexIndex = nodes.vertex0;
int linkForVertex = linksFoundPerVertex[nodes.vertex0];
int linkAddress = vertexIndex * maxLinksPerVertex + linkForVertex;
listOfLinksPerVertex[linkAddress] = linkIndex;
linksFoundPerVertex[nodes.vertex0] = linkForVertex + 1;
}
{
// Do vertex 1
int vertexIndex = nodes.vertex1;
int linkForVertex = linksFoundPerVertex[nodes.vertex1];
int linkAddress = vertexIndex * maxLinksPerVertex + linkForVertex;
listOfLinksPerVertex[linkAddress] = linkIndex;
linksFoundPerVertex[nodes.vertex1] = linkForVertex + 1;
}
}
}
static void computeBatchingIntoWavefronts(
btSoftBodyLinkData &linkData,
int wavefrontSize,
int linksPerWorkItem,
int maxLinksPerWavefront,
btAlignedObjectArray < btAlignedObjectArray <int> > &linksForWavefronts,
btAlignedObjectArray< btAlignedObjectArray < btAlignedObjectArray <int> > > &batchesWithinWaves, /* wave, batch, links in batch */
btAlignedObjectArray< btAlignedObjectArray< int > > &verticesForWavefronts /* wavefront, vertex */
)
{
// Attempt generation of larger batches of links.
btAlignedObjectArray< bool > processedLink;
processedLink.resize( linkData.getNumLinks() );
btAlignedObjectArray< int > listOfLinksPerVertex;
int maxLinksPerVertex = 0;
// Count num vertices
int numVertices = 0;
for( int linkIndex = 0; linkIndex < linkData.getNumLinks(); ++linkIndex )
{
btSoftBodyLinkData::LinkNodePair nodes( linkData.getVertexPair(linkIndex) );
numVertices = btMax( numVertices, nodes.vertex0 + 1 );
numVertices = btMax( numVertices, nodes.vertex1 + 1 );
}
// Need list of links per vertex
// Compute valence of each vertex
btAlignedObjectArray <int> numLinksPerVertex;
numLinksPerVertex.resize(0);
numLinksPerVertex.resize( numVertices, 0 );
generateLinksPerVertex( numVertices, linkData, listOfLinksPerVertex, numLinksPerVertex, maxLinksPerVertex );
// At this point we know what links we have for each vertex so we can start batching
// We want a vertex to start with, let's go with 0
int currentVertex = 0;
int linksProcessed = 0;
btAlignedObjectArray <int> verticesToProcess;
while( linksProcessed < linkData.getNumLinks() )
{
// Next wavefront
int nextWavefront = linksForWavefronts.size();
linksForWavefronts.resize( nextWavefront + 1 );
btAlignedObjectArray <int> &linksForWavefront(linksForWavefronts[nextWavefront]);
verticesForWavefronts.resize( nextWavefront + 1 );
btAlignedObjectArray<int> &vertexSet( verticesForWavefronts[nextWavefront] );
linksForWavefront.resize(0);
// Loop to find enough links to fill the wavefront
// Stopping if we either run out of links, or fill it
while( linksProcessed < linkData.getNumLinks() && linksForWavefront.size() < maxLinksPerWavefront )
{
// Go through the links for the current vertex
for( int link = 0; link < numLinksPerVertex[currentVertex] && linksForWavefront.size() < maxLinksPerWavefront; ++link )
{
int linkAddress = currentVertex * maxLinksPerVertex + link;
int linkIndex = listOfLinksPerVertex[linkAddress];
// If we have not already processed this link, add it to the wavefront
// Claim it as another processed link
// Add the vertex at the far end to the list of vertices to process.
if( !processedLink[linkIndex] )
{
linksForWavefront.push_back( linkIndex );
linksProcessed++;
processedLink[linkIndex] = true;
int v0 = linkData.getVertexPair(linkIndex).vertex0;
int v1 = linkData.getVertexPair(linkIndex).vertex1;
if( v0 == currentVertex )
verticesToProcess.push_back( v1 );
else
verticesToProcess.push_back( v0 );
}
}
if( verticesToProcess.size() > 0 )
{
// Get the element on the front of the queue and remove it
currentVertex = verticesToProcess[0];
removeFromVector( verticesToProcess, 0 );
} else {
// If we've not yet processed all the links, find the first unprocessed one
// and select one of its vertices as the current vertex
if( linksProcessed < linkData.getNumLinks() )
{
int searchLink = 0;
while( processedLink[searchLink] )
searchLink++;
currentVertex = linkData.getVertexPair(searchLink).vertex0;
}
}
}
// We have either finished or filled a wavefront
for( int link = 0; link < linksForWavefront.size(); ++link )
{
int v0 = linkData.getVertexPair( linksForWavefront[link] ).vertex0;
int v1 = linkData.getVertexPair( linksForWavefront[link] ).vertex1;
insertUniqueAndOrderedIntoVector( vertexSet, v0 );
insertUniqueAndOrderedIntoVector( vertexSet, v1 );
}
// Iterate over links mapped to the wave and batch those
// We can run a batch on each cycle trivially
batchesWithinWaves.resize( batchesWithinWaves.size() + 1 );
btAlignedObjectArray < btAlignedObjectArray <int> > &batchesWithinWave( batchesWithinWaves[batchesWithinWaves.size()-1] );
for( int link = 0; link < linksForWavefront.size(); ++link )
{
int linkIndex = linksForWavefront[link];
btSoftBodyLinkData::LinkNodePair vertices = linkData.getVertexPair( linkIndex );
int batch = 0;
bool placed = false;
while( batch < batchesWithinWave.size() && !placed )
{
bool foundSharedVertex = false;
if( batchesWithinWave[batch].size() >= wavefrontSize )
{
// If we have already filled this batch, move on to another
foundSharedVertex = true;
} else {
for( int link2 = 0; link2 < batchesWithinWave[batch].size(); ++link2 )
{
btSoftBodyLinkData::LinkNodePair vertices2 = linkData.getVertexPair( (batchesWithinWave[batch])[link2] );
if( vertices.vertex0 == vertices2.vertex0 ||
vertices.vertex1 == vertices2.vertex0 ||
vertices.vertex0 == vertices2.vertex1 ||
vertices.vertex1 == vertices2.vertex1 )
{
foundSharedVertex = true;
break;
}
}
}
if( !foundSharedVertex )
{
batchesWithinWave[batch].push_back( linkIndex );
placed = true;
} else {
++batch;
}
}
if( batch == batchesWithinWave.size() && !placed )
{
batchesWithinWave.resize( batch + 1 );
batchesWithinWave[batch].push_back( linkIndex );
}
}
}
}
void btSoftBodyLinkDataDX11SIMDAware::generateBatches()
{
btAlignedObjectArray < btAlignedObjectArray <int> > linksForWavefronts;
btAlignedObjectArray < btAlignedObjectArray <int> > wavefrontBatches;
btAlignedObjectArray< btAlignedObjectArray < btAlignedObjectArray <int> > > batchesWithinWaves;
btAlignedObjectArray< btAlignedObjectArray< int > > verticesForWavefronts; // wavefronts, vertices in wavefront as an ordered set
// Group the links into wavefronts
computeBatchingIntoWavefronts( *this, m_wavefrontSize, m_linksPerWorkItem, m_maxLinksPerWavefront, linksForWavefronts, batchesWithinWaves, verticesForWavefronts );
// Batch the wavefronts
generateBatchesOfWavefronts( linksForWavefronts, *this, m_maxVertex, wavefrontBatches );
m_numWavefronts = linksForWavefronts.size();
// At this point we have a description of which links we need to process in each wavefront
// First correctly fill the batch ranges vector
int numBatches = wavefrontBatches.size();
m_wavefrontBatchStartLengths.resize(0);
int prefixSum = 0;
for( int batchIndex = 0; batchIndex < numBatches; ++batchIndex )
{
int wavesInBatch = wavefrontBatches[batchIndex].size();
int nextPrefixSum = prefixSum + wavesInBatch;
m_wavefrontBatchStartLengths.push_back( BatchPair( prefixSum, nextPrefixSum - prefixSum ) );
prefixSum += wavesInBatch;
}
// Also find max number of batches within a wave
m_maxBatchesWithinWave = 0;
m_maxVerticesWithinWave = 0;
m_numBatchesAndVerticesWithinWaves.resize( m_numWavefronts );
for( int waveIndex = 0; waveIndex < m_numWavefronts; ++waveIndex )
{
// See if the number of batches in this wave is greater than the current maximum
int batchesInCurrentWave = batchesWithinWaves[waveIndex].size();
int verticesInCurrentWave = verticesForWavefronts[waveIndex].size();
m_maxBatchesWithinWave = btMax( batchesInCurrentWave, m_maxBatchesWithinWave );
m_maxVerticesWithinWave = btMax( verticesInCurrentWave, m_maxVerticesWithinWave );
}
// Add padding values both for alignment and as dud addresses within LDS to compute junk rather than branch around
m_maxVerticesWithinWave = 16*((m_maxVerticesWithinWave/16)+2);
// Now we know the maximum number of vertices per-wave we can resize the global vertices array
m_wavefrontVerticesGlobalAddresses.resize( m_maxVerticesWithinWave * m_numWavefronts );
// Grab backup copies of all the link data arrays for the sorting process
btAlignedObjectArray<btSoftBodyLinkData::LinkNodePair> m_links_Backup(m_links);
btAlignedObjectArray<float> m_linkStrength_Backup(m_linkStrength);
btAlignedObjectArray<float> m_linksMassLSC_Backup(m_linksMassLSC);
btAlignedObjectArray<float> m_linksRestLengthSquared_Backup(m_linksRestLengthSquared);
//btAlignedObjectArray<Vectormath::Aos::Vector3> m_linksCLength_Backup(m_linksCLength);
//btAlignedObjectArray<float> m_linksLengthRatio_Backup(m_linksLengthRatio);
btAlignedObjectArray<float> m_linksRestLength_Backup(m_linksRestLength);
btAlignedObjectArray<float> m_linksMaterialLinearStiffnessCoefficient_Backup(m_linksMaterialLinearStiffnessCoefficient);
// Resize to a wavefront sized batch per batch per wave so we get perfectly coherent memory accesses.
m_links.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
m_linkVerticesLocalAddresses.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
m_linkStrength.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
m_linksMassLSC.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
m_linksRestLengthSquared.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
m_linksRestLength.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
m_linksMaterialLinearStiffnessCoefficient.resize( m_maxBatchesWithinWave * m_wavefrontSize * m_numWavefronts );
// Then re-order links into wavefront blocks
// Total number of wavefronts moved. This will decide the ordering of sorted wavefronts.
int wavefrontCount = 0;
// Iterate over batches of wavefronts, then wavefronts in the batch
for( int batchIndex = 0; batchIndex < numBatches; ++batchIndex )
{
btAlignedObjectArray <int> &batch( wavefrontBatches[batchIndex] );
int wavefrontsInBatch = batch.size();
for( int wavefrontIndex = 0; wavefrontIndex < wavefrontsInBatch; ++wavefrontIndex )
{
int originalWavefrontIndex = batch[wavefrontIndex];
btAlignedObjectArray< int > &wavefrontVertices( verticesForWavefronts[originalWavefrontIndex] );
int verticesUsedByWavefront = wavefrontVertices.size();
// Copy the set of vertices into the correctly structured array for use on the device
// Fill the non-vertices with -1s
// so we can mask out those reads
for( int vertex = 0; vertex < verticesUsedByWavefront; ++vertex )
{
m_wavefrontVerticesGlobalAddresses[m_maxVerticesWithinWave * wavefrontCount + vertex] = wavefrontVertices[vertex];
}
for( int vertex = verticesUsedByWavefront; vertex < m_maxVerticesWithinWave; ++vertex )
{
m_wavefrontVerticesGlobalAddresses[m_maxVerticesWithinWave * wavefrontCount + vertex] = -1;
}
// Obtain the set of batches within the current wavefront
btAlignedObjectArray < btAlignedObjectArray <int> > &batchesWithinWavefront( batchesWithinWaves[originalWavefrontIndex] );
// Set the size of the batches for use in the solver, correctly ordered
NumBatchesVerticesPair batchesAndVertices;
batchesAndVertices.numBatches = batchesWithinWavefront.size();
batchesAndVertices.numVertices = verticesUsedByWavefront;
m_numBatchesAndVerticesWithinWaves[wavefrontCount] = batchesAndVertices;
// Now iterate over batches within the wavefront to structure the links correctly
for( int wavefrontBatch = 0; wavefrontBatch < batchesWithinWavefront.size(); ++wavefrontBatch )
{
btAlignedObjectArray <int> &linksInBatch( batchesWithinWavefront[wavefrontBatch] );
int wavefrontBatchSize = linksInBatch.size();
int batchAddressInTarget = m_maxBatchesWithinWave * m_wavefrontSize * wavefrontCount + m_wavefrontSize * wavefrontBatch;
for( int linkIndex = 0; linkIndex < wavefrontBatchSize; ++linkIndex )
{
int originalLinkAddress = linksInBatch[linkIndex];
// Reorder simple arrays trivially
m_links[batchAddressInTarget + linkIndex] = m_links_Backup[originalLinkAddress];
m_linkStrength[batchAddressInTarget + linkIndex] = m_linkStrength_Backup[originalLinkAddress];
m_linksMassLSC[batchAddressInTarget + linkIndex] = m_linksMassLSC_Backup[originalLinkAddress];
m_linksRestLengthSquared[batchAddressInTarget + linkIndex] = m_linksRestLengthSquared_Backup[originalLinkAddress];
m_linksRestLength[batchAddressInTarget + linkIndex] = m_linksRestLength_Backup[originalLinkAddress];
m_linksMaterialLinearStiffnessCoefficient[batchAddressInTarget + linkIndex] = m_linksMaterialLinearStiffnessCoefficient_Backup[originalLinkAddress];
// The local address is more complicated. We need to work out where a given vertex will end up
// by searching the set of vertices for this link and using the index as the local address
btSoftBodyLinkData::LinkNodePair localPair;
btSoftBodyLinkData::LinkNodePair globalPair = m_links[batchAddressInTarget + linkIndex];
localPair.vertex0 = wavefrontVertices.findLinearSearch( globalPair.vertex0 );
localPair.vertex1 = wavefrontVertices.findLinearSearch( globalPair.vertex1 );
m_linkVerticesLocalAddresses[batchAddressInTarget + linkIndex] = localPair;
}
for( int linkIndex = wavefrontBatchSize; linkIndex < m_wavefrontSize; ++linkIndex )
{
// Put 0s into these arrays for padding for cleanliness
m_links[batchAddressInTarget + linkIndex] = btSoftBodyLinkData::LinkNodePair(0, 0);
m_linkStrength[batchAddressInTarget + linkIndex] = 0.f;
m_linksMassLSC[batchAddressInTarget + linkIndex] = 0.f;
m_linksRestLengthSquared[batchAddressInTarget + linkIndex] = 0.f;
m_linksRestLength[batchAddressInTarget + linkIndex] = 0.f;
m_linksMaterialLinearStiffnessCoefficient[batchAddressInTarget + linkIndex] = 0.f;
// For local addresses of junk data, choose a set of addresses just above the range of valid ones,
// cycling through % 16 so that we don't have bank conflicts between the dud addresses.
// The valid addresses will do scatter and gather in the valid range; the junk ones should happily work
// off the end of that range, so we need no control flow
btSoftBodyLinkData::LinkNodePair localPair;
localPair.vertex0 = verticesUsedByWavefront + (linkIndex % 16);
localPair.vertex1 = verticesUsedByWavefront + (linkIndex % 16);
m_linkVerticesLocalAddresses[batchAddressInTarget + linkIndex] = localPair;
}
}
wavefrontCount++;
}
}
} // void btSoftBodyLinkDataDX11SIMDAware::generateBatches()
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright 2018 Google LLC
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
class riscv_rand_instr_test extends riscv_instr_base_test;
`uvm_component_utils(riscv_rand_instr_test)
`uvm_component_new
virtual function void randomize_cfg();
cfg.instr_cnt = 10000;
cfg.num_of_sub_program = 5;
`DV_CHECK_RANDOMIZE_FATAL(cfg)
`uvm_info(`gfn, $sformatf("riscv_instr_gen_config is randomized:\n%0s",
cfg.sprint()), UVM_LOW)
endfunction
virtual function void apply_directed_instr();
// Mix below directed instruction streams with the random instructions
asm_gen.add_directed_instr_stream("riscv_load_store_rand_instr_stream", 4);
asm_gen.add_directed_instr_stream("riscv_loop_instr", 3);
asm_gen.add_directed_instr_stream("riscv_jal_instr", 4);
asm_gen.add_directed_instr_stream("riscv_hazard_instr_stream", 4);
asm_gen.add_directed_instr_stream("riscv_load_store_hazard_instr_stream", 4);
asm_gen.add_directed_instr_stream("riscv_multi_page_load_store_instr_stream", 4);
asm_gen.add_directed_instr_stream("riscv_mem_region_stress_test", 4);
endfunction
endclass
class riscv_ml_test extends riscv_instr_base_test;
`uvm_component_utils(riscv_ml_test)
`uvm_component_new
virtual function void randomize_cfg();
cfg.addr_translaction_rnd_order_c.constraint_mode(0);
`DV_CHECK_RANDOMIZE_FATAL(cfg)
cfg.addr_translaction_rnd_order_c.constraint_mode(1);
`uvm_info(`gfn, $sformatf("riscv_instr_gen_config is randomized:\n%0s",
cfg.sprint()), UVM_LOW)
endfunction
endclass
|
{
"pile_set_name": "Github"
}
|
<style type="text/css">
%css-style%
</style>
<table width="100%" ><tr><td align="left">
<table class='bubble-red' cellspacing='0' cellpadding='0' style='float:left;'>
<tr>
<td><img src="%style-dir%/img/bubble-red/bubble_TL.png"></td>
<td class='bubble-redTC'></td>
<td><img src="%style-dir%/img/bubble-red/bubble_TR.png"></td>
</tr>
<tr>
<td class='bubble-redCL'></td>
<td class='bubble-redCC'>
%message%
</td>
<td class='bubble-redCR'></td>
</tr>
<tr>
<td><img src="%style-dir%/img/bubble-red/bubble_BL.png"></td>
<td class='bubble-redBC'><img src="%style-dir%/img/bubble-red/bubble_tick.png"></td>
<td><img src="%style-dir%/img/bubble-red/bubble_BR.png"></td>
</tr>
</table>
<table class='bubbleFooter' width='100%'>
<tr>
<td align='left'>
<span class='name'> %name% - </span>
<span width='130' align='right' class='time'>%time%</span>
</td>
</tr>
</table>
</td></tr></table>
|
{
"pile_set_name": "Github"
}
|
/****************************************************************************
Copyright (c) 2012-2013 cocos2d-x.org
http://www.cocos2d-x.org
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
****************************************************************************/
package org.cocos2dx.plugin;
import java.util.Hashtable;
public interface InterfaceAnalytics {
public final int PluginType = 2;
public void startSession(String appKey);
public void stopSession();
public void setSessionContinueMillis(int millis);
public void setCaptureUncaughtException(boolean isEnabled);
public void setDebugMode(boolean isDebugMode);
public void logError(String errorId, String message);
public void logEvent(String eventId);
public void logEvent(String eventId, Hashtable<String, String> paramMap);
public void logTimedEventBegin(String eventId);
public void logTimedEventEnd(String eventId);
public String getSDKVersion();
public String getPluginVersion();
}
|
{
"pile_set_name": "Github"
}
|
/*
* This file is part of the KubeVirt project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* Copyright 2019 Red Hat, Inc.
*
*/
package webhooks
import (
"crypto/x509"
"fmt"
"sync"
k8sv1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/cert"
"kubevirt.io/kubevirt/pkg/virt-operator/creation/components"
"kubevirt.io/kubevirt/pkg/util"
)
type ClientCAManager interface {
GetCurrent() (*x509.CertPool, error)
}
type manager struct {
store cache.Store
lock *sync.Mutex
lastRevision string
namespace string
name string
secretKey string
lastPool *x509.CertPool
}
func NewKubernetesClientCAManager(configMapCache cache.Store) ClientCAManager {
return &manager{
store: configMapCache,
lock: &sync.Mutex{},
namespace: metav1.NamespaceSystem,
name: util.ExtensionAPIServerAuthenticationConfigMap,
secretKey: util.RequestHeaderClientCAFileKey,
}
}
func NewCAManager(configMapCache cache.Store, namespace string, configMapName string) ClientCAManager {
return &manager{
store: configMapCache,
lock: &sync.Mutex{},
namespace: namespace,
name: configMapName,
secretKey: components.CABundleKey,
}
}
func (m *manager) GetCurrent() (*x509.CertPool, error) {
m.lock.Lock()
defer m.lock.Unlock()
obj, exists, err := m.store.GetByKey(m.namespace + "/" + m.name)
if err != nil {
return nil, err
} else if !exists {
if m.lastPool != nil {
return m.lastPool, nil
}
return nil, fmt.Errorf("configmap %s not found. Unable to detect request header CA", m.name)
}
configMap := obj.(*k8sv1.ConfigMap)
// no change detected.
if m.lastRevision == configMap.ResourceVersion {
return m.lastPool, nil
}
requestHeaderClientCA, ok := configMap.Data[m.secretKey]
if !ok {
return nil, fmt.Errorf("requestheader-client-ca-file not found in extension-apiserver-authentication ConfigMap")
}
certs, err := cert.ParseCertsPEM([]byte(requestHeaderClientCA))
if err != nil {
return nil, err
}
pool := x509.NewCertPool()
for _, cert := range certs {
pool.AddCert(cert)
}
m.lastRevision = configMap.ResourceVersion
m.lastPool = pool
return pool, nil
}
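
The `GetCurrent` method above re-parses the CA bundle only when the ConfigMap's `ResourceVersion` changes, otherwise returning the cached pool. Here is a standalone sketch of that revision-gated caching pattern; the type and names are illustrative, not part of KubeVirt's API.

```go
package main

import "fmt"

// revCache re-derives an expensive value only when the source object's
// revision changes; otherwise it returns the previously computed result.
type revCache struct {
	lastRev  string
	lastPool []string // stands in for the parsed *x509.CertPool
}

func (c *revCache) get(rev string, pemData []string) []string {
	if c.lastRev == rev {
		return c.lastPool // no change detected: reuse the parsed result
	}
	parsed := append([]string(nil), pemData...) // stands in for cert.ParseCertsPEM + pool build
	c.lastRev = rev
	c.lastPool = parsed
	return parsed
}

func main() {
	c := &revCache{}
	first := c.get("1", []string{"certA"})
	cached := c.get("1", []string{"certB"}) // same revision, so the new input is ignored
	fmt.Println(len(first), cached[0])      // 1 certA
}
```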
|
{
"pile_set_name": "Github"
}
|
package test;
import java.util.ArrayList;
import junit.framework.TestCase;
import org.junit.After;
import org.junit.Before;
import stringCalculator.StringCalculator;
import stringCalculator.StringCalculator.NegativeException;
public class TestStringCalculator extends TestCase{
StringCalculator sCalculator;
@Before
public void setUp() throws Exception {
sCalculator = new StringCalculator();
}
@After
public void tearDown() throws Exception {
}
public void test_addNoneNumbers() throws NegativeException{
String input = "";
int expectedSum = 0;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_addOneNumber() throws NegativeException{
String input = "1";
int expectedSum = 1;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_addTwoNumbers() throws NegativeException{
String input = "1,3";
int expectedSum = 4;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_addFiveNumbers() throws NegativeException{
String input = "1,3,6,3,5";
int expectedSum = 18;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_lineAsSeparator() throws NegativeException{
String input = "1\n3,6\n5";
int expectedSum = 15;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_CustomSeparator() throws NegativeException{
String input = "//;\n3;6;5;1";
int expectedSum = 15;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_OneNegativeException(){
String input = "//;\n3;6;5;-1";
try{
sCalculator.Add(input);
fail();
}catch(NegativeException ex){
//Test Passed
}
}
public void test_MultipleNegativeException(){
String input = "3,6,-5,-1";
try{
sCalculator.Add(input);
fail();
}catch(NegativeException ex){
//Test Passed
}
}
public void test_AddMoreThanMax() throws NegativeException{
String input = "//;\n3;6;5;1003";
int expectedSum = 14;
assertEquals(expectedSum,sCalculator.Add(input));
}
public void test_AnyLengthDelimiter() throws NegativeException{
String input = "//[;;;]\n3;;;6;;;5;;;1003";
int expectedSum = 14;
assertEquals(expectedSum,sCalculator.Add(input));
}
}
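
The tests above pin down the expected behavior: default comma/newline separators, a `//`-prefixed custom-delimiter header (optionally bracketed for multi-character delimiters), an exception raised once all negatives are collected, and numbers above 1000 being ignored. Below is a hypothetical sketch of an implementation satisfying these tests; the real class lives in the `stringCalculator` package and may differ.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch consistent with the tests above; not the original implementation.
public class StringCalculator {

    public static class NegativeException extends Exception {
        public NegativeException(String message) { super(message); }
    }

    public int Add(String input) throws NegativeException {
        if (input.isEmpty()) return 0;
        String delimiter = ",|\n";                  // default separators: comma and newline
        String numbers = input;
        if (input.startsWith("//")) {               // custom delimiter header: "//<delim>\n<numbers>"
            int newline = input.indexOf('\n');
            String delim = input.substring(2, newline);
            if (delim.startsWith("[") && delim.endsWith("]"))
                delim = delim.substring(1, delim.length() - 1);  // any-length form: "//[;;;]\n..."
            delimiter = Pattern.quote(delim);
            numbers = input.substring(newline + 1);
        }
        int sum = 0;
        List<Integer> negatives = new ArrayList<>();
        for (String token : numbers.split(delimiter)) {
            int n = Integer.parseInt(token);
            if (n < 0) negatives.add(n);            // collect all negatives before failing
            else if (n <= 1000) sum += n;           // numbers above 1000 are ignored
        }
        if (!negatives.isEmpty())
            throw new NegativeException("negatives not allowed: " + negatives);
        return sum;
    }
}
```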
|
{
"pile_set_name": "Github"
}
|
## CPU and GPU Workflow
### Introduction

The CPU has many jobs: besides logic computation it also handles memory management and display output, so its effective performance for graphics takes a big hit. In the era before GPUs it was impossible to display complex graphics, and CPU speed fell far short of what today's complex 3D games demand. Even a CPU clocked above 2 GHz contributes little to drawing performance. This is the gap the GPU was designed to fill.
### CPU/GPU Architecture Analysis

**Reading the CPU/GPU diagram:**
1. The yellow Control block is the controller, which coordinates the whole CPU: fetching instructions, driving the other units, and so on;
2. The green ALUs (Arithmetic Logic Units) perform arithmetic and logic operations;
3. The orange Cache and DRAM blocks are the cache and RAM, used to store data;
**Summary**
As the structure diagrams show, the CPU has a complex controller but relatively few ALUs. The CPU is therefore good at complex control logic, but poor at bulk data processing, floating-point math in particular.
### Simplified Execution Flow

**Rasterization:** converting an image described as vector graphics into a bitmap that can be handed to the display.
## Where the 60 Hz Refresh Rate Comes From
**12 fps:** because of the physiology of the human eye, frame rates above roughly 10-12 fps are perceived as continuous motion;
**24 fps:** sound film is shot and projected at 24 fps, which most people find acceptable;
**30 fps:** early fast-paced video games felt choppy below 30 fps, because the absence of motion blur reduces perceived smoothness;
**60 fps:** in touch interaction on a phone, the eye can tell when the rate drops below 60 fps, while rates above 60 fps are imperceptible. Below 60 fps the screen appears to stutter.
The Android system emits a VSYNC signal every 16 ms (1000 ms / 60 = 16.66 ms) to trigger UI rendering. If every frame renders in time, the display achieves the 60 fps needed for smooth animation; in other words, the bulk of the computation and rendering for a frame must finish within 16 ms.
## Why Jank Happens
### Introduction

When a frame takes more than 16 ms to render, the vertical-sync mechanism makes the display hardware wait for the GPU to finish rasterization, so that frame stays on screen for an extra 16 ms or more, which the user perceives as a stutter.
**The 16 ms budget is mostly spent on two things:**
1. Converting UI objects into polygons and textures.
2. The CPU handing the processed data to the GPU. So clearly we need to shorten both parts, which means minimizing the number of object conversions and the number of data uploads.
**How to cut the cost of these two steps so rendering finishes within 16 ms:**
1. On the CPU side, reduce the time spent converting XML into objects.
2. On the GPU side, reduce time wasted on repeated drawing.
## Overdraw Optimization (Mainly Reduces GPU Work)
### Overview

The GPU draws the way you paint a wall: layer by layer, refreshed every 16 ms. This causes layers to cover one another, meaning useless layers are still drawn underneath, which is pure waste.
### Common Causes of GPU Overdraw
1. A custom view's onDraw method does too much repeated drawing.
2. The layout hierarchy is too deep and too heavily overlapped. Regions the user cannot see are still rendered, increasing frame time.
### Inspecting Overdraw

**True color:** no overdraw
**Light blue:** 1x overdraw
**Light green:** 2x overdraw
**Pink:** 3x overdraw
**Dark red:** 4x overdraw
**Viewing with the tool**

### Optimizations
1. Reduce redundant backgrounds (don't set a background unless the design actually needs it)
	1. Remove the theme background of a single activity: call getWindow().setBackgroundDrawable(null); before setContentView
	2. Or remove the attribute from all activity themes:
```xml
<item name="android:windowBackground">@null</item>
```
2. Use clipping to reduce the overlapping area between views (e.g. a fanned hand of playing cards)
Since Android 7.0 the system has optimized this: invalidate() no longer triggers the measure and layout passes.
## Layout Optimization (Mainly Reduces CPU Work)
### Common Tools
1. UI Automator Viewer (Android/SDK/tools/bin/uiautomator.bat)

uiautomatorviewer ships with the Android SDK. It takes a screenshot and parses the XML layout file so you can inspect widget information. The tool lives in the tools\bin subdirectory of the SDK and, as you can see from the path, is launched through a .bat file.
2. monitor.bat (Android/sdk/tools/monitor.bat).
[Official usage guide](<https://developer.android.com/studio/profile/hierarchy-viewer.html?tdsourcetag=s_pcqq_aiomsg>)
Hierarchy View in the Device Monitor window shows three dots per view, representing its Measure, Layout, and Draw phases.
Green: this view performs that phase faster than at least 50% of the views in the tree; e.g. a green Measure dot means this view measures faster than half of the view objects in the tree.
Yellow: this view performs that phase slower than at least 50% of the views in the tree;
Red: this view is the slowest in the tree for that phase;
## Summary
1. In a custom view, if some content is covered or occluded, clip drawing according to the position of the layer above.
2. For XML hierarchy depth: content that can be shown in a single plane should use only one container.
3. Merge identical nested containers with merge wherever possible.
4. Reuse layout code with include, which reduces repeated GPU work.
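
The merge/include advice above can be sketched with a small layout pair; the file and id names here are illustrative:

```xml
<!-- res/layout/titlebar.xml (illustrative name): the root is <merge>, so when
     this file is inflated into a parent container, no extra ViewGroup level
     is added to the hierarchy -->
<merge xmlns:android="http://schemas.android.com/apk/res/android">
    <TextView
        android:id="@+id/title"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />
</merge>

<!-- any layout that needs the title bar then reuses it with:
     <include layout="@layout/titlebar" /> -->
```

Together, include removes duplicated layout XML, and merge prevents the reuse from deepening the view tree.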
|
{
"pile_set_name": "Github"
}
|
# -*- coding: utf-8 -*-
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('teachers_digital_platform', '0016_auto_20180829_2323'),
]
operations = [
migrations.RenameModel(
old_name='ActivitySpecialPopulation',
new_name='ActivityStudentCharacteristics',
),
migrations.RenameField(
model_name='activitypage',
old_name='special_population',
new_name='student_characteristics',
),
]
|
{
"pile_set_name": "Github"
}
|
;
; Maciej 'YTM/Elysium' Witkowiak
;
; 22.12.99, 29.07.2000
.import popax
.importzp ptr3, ptr4
.export DoubleSPop
.export SetPtrXY
.include "geossym.inc"
DoubleSPop:
sta ptr4
stx ptr4+1
jsr popax
sta ptr3
stx ptr3+1
; rts
;
; SetPtrXY can sometimes be executed twice, but even this way it is a few cycles
; faster...
SetPtrXY:
ldx #ptr4
ldy #ptr3
rts
|
{
"pile_set_name": "Github"
}
|
//
// KRTournamentViewDataSource.swift
// KRTournamentView
//
// Copyright © 2018 Krimpedance. All rights reserved.
//
/// This protocol represents the data model object. As such, it supplies no information about appearance (including the entries and matches)
public protocol KRTournamentViewDataSource: class {
/// Structure of tournament bracket.
///
/// - Parameter tournamentView: The tournament view.
/// - Returns: Bracket you need.
func structure(of tournamentView: KRTournamentView) -> Bracket
/// Entry display.
///
/// - Parameters:
/// - tournamentView: The tournament view.
/// - index: Entry index.
/// - Returns: KRTournamentViewEntry instance.
func tournamentView(_ tournamentView: KRTournamentView, entryAt index: Int) -> KRTournamentViewEntry
/// Match display.
///
/// - Parameters:
/// - tournamentView: The tournament view.
/// - matchPath: layer and number of the match.
/// - Returns: KRTournamentViewMatch instance.
func tournamentView(_ tournamentView: KRTournamentView, matchAt matchPath: MatchPath) -> KRTournamentViewMatch
}
|
{
"pile_set_name": "Github"
}
|
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This installment of our Python Programlamaya Giriş (Introduction to Python Programming) series covers the details of how Python matches names to objects, and how those details can sometimes surprise us. To reach all the articles in the series, see our <a href=\"http://www.veridefteri.com/category/python-giris/\"><em>Python Programlamaya Giriş</em></a> category. The articles in this series are also available as Jupyter notebooks in our <a href=\"https://github.com/sibirbil/VeriDefteri/tree/master/Python_Programlama\">GitHub repo</a>.\n",
"\n",
"Objects and references\n",
"===\n",
"Suppose we have a list named `a`. Say we want to copy this list and create a second, identical list named `b`. Let's do the first thing that comes to mind and try a simple assignment."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[1, 2, 3]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a = [1,2,3]\n",
"b = a\n",
"b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's set `a` aside and work with `b`. For instance, let's change the second element of `b`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[1, 'abc', 3]"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"b[1] = \"abc\"\n",
"b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"However, this change is not confined to `b`; it affects `a` as well."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[1, 'abc', 3]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To understand this behavior, which is not seen in many other programming languages, we need to look at how Python performs assignments.\n",
"\n",
"Let's start with a simple assignment. When the interpreter sees an assignment statement such as `a = 42`, it does two things: it creates an integer object holding the value 42, and it creates a name `a` bound to that object. The name and the object it points to are independent entities."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Image"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAARIAAABWCAIAAAH77L4sAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAR\n50lEQVR42u1dC1gU1xUeAQWNCTE2kkQTRRIjaGM0xtQoGmNtqBrj137VNo+qaW3SNtG2tk1qku/j\nIcsCgojA8hRcHyAYpPhGtMJC5LXAIkFAHsLK8lpewkKEZbdnZnY3+2Yfs8s+7vnGdZjdO3fO/Oe/\n98ydc8/FxOYSzNw1YdgLX2S0NEcuW4jhYsCJoOzlX818IBQXCeCvEQx70WXbVTU1rZqKhXAE8Gtv\nI2pqZawgyz4B/609i60+NXeR1+TdPRB/f/+KdgG7bdC0NUEd8psBJ1I6g+rZ1Nd0JjPb2Jp4ndpq\n2h/K1FctmfnICpYRn3BcJ50aGxtNdffkf5qQdNyEOJnDxIFf8rfb8qvpF4uFB9gjo1IzmS49p6Sa\npS8/T4k2mFwb1pDyW4efHjA7NvLWQqcHm6ntyb/bSqFBkzaNqf2RjmcvLi5WrYZAx3GJH0d9NSc/\nf+ncAQ+9qmGxWBs3blSohlvjKtfeKFTj5+dnZIOt001T+h21jY1CNTYjtqsP2YgG//23j205A9YP\nzhL86U00UiZtwFWdNPgEP03YEiN/3NvdpU0oBp+NvFQM87QzfGxQn0Zum9p243Zjl3muRnvzNaEn\npaBPSFi49jIZGRnWpI/819Hbn4bPEx/NMf4xQnsfSfpFU6dO1aKPlwMm33Fi8/6utz7pX65YtvrN\nLR97yx9kxMVPFj6kPpiKO6BNH+1wJyWnWBl/ZMJgMLKu50sMjxHL5XLN1jqZRB/U+ViSMuBZyB0U\ndooQMhaEjKAQdjj07SQyMBYIf3p8VSEerROa84KIemEYknR8YRBg79P4ztubf0e6pzBgCWMFD/Br\nEhYJNJuZaKSdOAWhDOG6RjSNQclRcyqjzmXmXDsh8/FVfW21yogwzNlylIFBExi5hXGTofIAtxeX\nYNPXyZRRGl5BTTNSZjKVuVFYnJ1XIvMUIhkJ1qqMpiccGi3IDJfSMTRqpHv2ozJ+xEsXTVt+fr41\nKaO7c2rpytCC6OZX5vTp07ooc2b/ImaToKSQdqFpaJ6GBzVtyjzzaQF8Ovtkyx8sKyuj2nORSGJi\nohZlomoGQRlyP23fgomVCTgUqDg+/sTVewMO3hnyB2tra02hTGdnpxZkMOwx+CSVKa1I/+vljomV\niWHEKQ32O8C/BV9YCGdAGXZVKrNukJoGoLiBZ02t2SFFS1PaHj16ZE3K4F2Nn/qu5s6dO1bWacok\nICCA/EVlay+EL5jNGTGJMkjQQwASPbEZqEgOYlar+6WI5usrQDdsErHBQ9yIV7krnWDEZs7Lzz8O\nzk3VMD5UhXtTIvwN67wdH5PBLfAL5g43bMoM3AlaGWyTNwj0ff6D/aDgTCdc4Rt9+JBvW9afYf+F\nlxfD55J9l+DIENsf9j08l8Cn0+uBuhc0AJvxBYRru8zn4/L2H4hvfsQGWyWJ+oJ372EN5OBaP3w7\naqPYYG8kk/stxM0RcuPxOysVMEyfi/38nE/x++Xg9lVMtkifgnpj86gGrOAN8sijal8Me1MBG2L4\nkcSGGPy0dWyk+koMV8QHZeskLvj4LAw70jC2CMMY98lbIX4Jw051iHQsiHwB5KchQdggbBRkXCQ+\nmXqWHhwCQaC+vr4w8hYRGdXS0oru4ORgMzou8vXzK+X2TTjGwDjOzDh3zpbuy8jYeN+I0MybTtgM\njwrDj0UbMBAUFh5uG9gYMxpmfJyTRmySmGeMqQDemiBsTIJNVe094+uAPsn+sMHf98lel926Fv/c\nE46Yw8w9xwopw0Z1vsePW2st/uTrOu8ZV/wtFb3ioZZqRkZGLPnWywJ8nZ2dU1JSjMWmuRjD5rpK\nsfnHYswrtF7yzuvmFxi2mhpsQsMjtBQub27w
p/n/4q3XcZAU34eqvKm0Plm/fj2Px9MXm+yAnz2+\n9TQZ1izjDbmx716G0ybXD1GDDfjHmkq+iGF/uNFL7r83G6xBc7fU2mvhTRbw5p133uHz+Ua2acGr\nlAH+8BofjudlfQn7h273UNnfjI2N3brThPobA3wBGW9WOinhtYwyP62lpSW/ptWegbFcH3pip0DD\ndp1VZJ64AnvHhpTjx49HKsYSqW5F9VwAEhpDWxoXsAJskKBxaCQIG4QNElNgIwkKUBVZ/A2SycJG\nYxCa5rg1JKhNQ20a2aapRsJJ27QxOPIGhA264DGFb0Xmwefj+KRQ7O4jm7w/uL6bnoTxalxf7PlP\nyIOLp8Afru6zYTzeoYN4XakSaqlrQb2xgYnIkJ+MODIC05FHFbH5hoN/9QPnIOyTP6J7Yq/FcW0V\nm68qhon9Pth/KBZf/2DW9F8wya95Z36JPfcZTCiHYE3yCEwrh1SVuhY0ABvVKEV5bNrGiUM9GRi2\nlCyWvdHBk15nq9jwxhVuzkoIo13gKQnO9HwBkryoC4PVsaChfhpMe4c8UjgYQgVs+CIZNq/YAzYS\nfaU3Zzm8V6xTE8MK6QsgHxj8BlKC6VVQP2xUI0gRNrKbc/7dGVNWSkJWui8AGK+qC1HWqaBBvkB5\nAOxAhgU8Ye70dUptmp1jA13tPLwBm7FoLj6B4gYfWi4R5CmGfp9MFAoJi3UuiHxo9HyDBGGDsEGC\nsEGCsLFdbGruNYYfjQw4dAgC2CA6ACZ7pJ5NR7dvMrERikSBtKBjcUlqYw+ybt6GRB6VlZXoPpob\nm8QTp+J1mFNQ3NxjGzFp8mL+yTci3bGBmWlw03WP36GHHNYSwmp1oiX22EQb1KgTNkHBIQac/fCR\noyKRCGFjQmyyLl0rrOfZc8Ct5WITFBJqTB3nz59H2JgEm8SUk0bWERwSirAxCTYGhKgrbaXcAdkM\nI/vBpvDiPmzOn6R/Pty9BZ8+9uzy9/IeUIdN6oUc400gNNTSqbN9+/aBgQHqsOnG38tIsfHEsJf+\nmAo7mdG74MV+KSXYcNu7tBQ+9K4bHjHi4YFfh+t7elVjcY91Utm6dWtfX5+R2PzFC9v6720SbLh3\n4LRs6VfwMi2oeogCbK7/L19zYR5UWSRnJgWaqwmk0awFG5n4+PjIP5/pjk1xfiA2+/3v/vuRXJsm\n23rgzKn3qeBNemaW9vJnz6Xu3btr/iw8yoql+Weff77P6ibiOjo6RkVF6Y9NnyOG5T4QqMXmQ2hi\nFuyjpr8pLivXWJjXCQp4bd8fnV0GF6QdG0gtay28AUhgpSKD27Sv1zivicSnq6tiswPmLjuvLqfK\nT4OcyRq9r++CMMxdMpO99gJolWfN/Q2sG5aQkGCsL8DjqtBvLvHV4FII7Zi7k2IfOiohWUNhPh4n\nOsfL68WnMScviBNJb9FYDcxEtDcfWp43Pk9hU1/7xhKfbwqr6m1gVM1gbNic40pUOlj2kBpsqqu/\nv93YaefJBSx3PM0Y6hTXNHV0dCBsTIWNQCCITTllwNkr8cV+aGKbEMt9f8Nms7PzS+32BYFFY0PC\nw0hK0fG8nA6BOZfIsHdsQCAHGlChoPaB9pNGMxhMJlNsW2Lp2JDS3NwMCEUz4ktbFJKqXrrJ8g8I\niIuLE9uiWAc2SJAgQbRBggTRBgkSRBskSKydNkJ82jrI6lOdur6NMaAIEiSot0GCBPU2il2HaPD7\ntK92vunxpCPxFtxljueG3YFZ9UMiDUXG8EUUMWyRf3XfvfO+O1c+RyzW+th8772xZf2QLWSs81bY\n7rXzZxJZRF5YvSuikC9EqFiPSPD1+KZqkJcbstvb3RU3DKfZi3/+p4hbHUoplUSCexdouzcsnj0N\nh3vqU4ve2kW70CBQ8EomNDDKazQ1bYTcpDX4kjcv7b/aLrk8YW9RkDeev8tl87fdIs20wWXh7xNK\nuyGrlGj4
HvPXT0nDT9x3JXP6gCmiobpT7z+DH3H79DZasNrKaIPLvN9EFrTjaSfHH95N3UuEy2Or\nYlvIVvBRXdgbuHk7/vSz9DqSA6KR+xe/eB23Z+cN8c14ujGdDIzSGs1Bm7HGiFcJ0i77JDb3bs+o\nLh2UVMn5/y6XW0BK1HVmDRH9uL9kWK5sz7l1+FEvvTK/IbEE2iz4slJhgbDR+mAv/Lh3Oh73P8Ta\nMxt3J967pDxVQ8T/dhOe1sz9IOcH3QyM0hrN5aQJe0qS/rHtlacUAhZdFu8Iv907ro02a9O65XvG\nngxv/Ogr0hXCSemH1Hhw1BPRxupos/YsX6TRckbrSIvWJqS162BglNdocidN0HmPnXfrTq9QwXus\njlyH54V2WJ/WJdJMG0UlEW3siTbih7k78Zyizj6ZfOXHirGuspt5ZbVtA2M6GhilNZqDNo9qg4kL\nnvEWjdUpMezxwTtx24mmYeXRhlExog2ijVo/Zbjia0+ijZ//fnx5n1Bivyz6RiLX/JM7srrGdTQw\nSms020jacEM2DE14urkQl+Qwc96Kd/8akcv9YYJnG0Qb+6YNSYGB6gzfD9cvIse1sKk/WbxhNwxs\nDcuXnNDAKK+RetogQYIE0QYJEkQbJEgQbZAgQbRBggTRxmAREYLuNRJEGyViiL8rKT/GiIcp7DC7\n/XDE0YSTaWkXr19gld2608TmDsD2v6qm7LwSSMyawEwNDY+An8ESIHEJiZyqKgQDEjuiTSmnmh4S\nCkuqJKdlstsGDctvAIxKPJ2Oky0srL6+HkGCxDZp0zcogO4C1lu5yWmgNkXIdfZdSLl6LCoKkoQi\nbCxQOoZGfe1DqKRNB7/PP+CQpsW8KNyORscGBgZCPj1kqRZIGzPnWzL/RhltxkXiyJg4WhDdYGdM\n/4U/+gIDacnJychYEW2skjYDghE/f//M6wXmVyD9ci64bWNjY8hkEW2siTZdPf3wvH7lNmeydIDx\nNyAt5AhFVotoYx20EY6LYEwZDHdy1fg25xY86iCrRbSxDtoknzx9OCLSEjQJCz9iG8uDItpo2xrz\n9yzEyNUDmU3yX/VeCN8mnfL5hJv7wjkzJX/M2Z50qdWSaANdDZQvbuYbUvcD3tmYg1vWev5khnRi\nq9uS1R/SGexewzRh1bTa3oruZhYWiyXBwsVl27ZtJ0+e7O/vtxzaFOUe9JBNWVagDf/U3ufwyTev\nHbnIVVjQLvM/nmROijP3LYY27Kpq8ND0r3XwRsJmPK+Twyt70hpKpBrmZn6z3JnMnVBRYpAysOIE\n5NhH1m88bVRl2rRpmzdvhvXWent7J4M2Pef+g08+m/3BxavnP3JVog2v+0rO5Yi4lNjyfuUlla59\n7kbMTdNxnVRz0AYW9ITXmga++2/pYLUoacJP3e9OzAk/xzJssePDYTk5Ocj6TUEbVXFycoIVZzkc\njslp01p5YAWkZprlk9LCJtamc1XjpKnZSr4LW47noME8viwvsRwn7fLVnKCQUEMq5jYx/vnOM1MU\nYHjM/WerPIlOyDvDMNrQ6cEFBQXI+k1EG1hndsOGDTExMe3t7WbrbUqLjqwA05+5hV7xULak44S0\nYd+9+vFSJ/yip6/7Vx7fsoYEmu63GrAmYfnds2txZ8xpefD3isu/dyXtmAVfTFmbbhht4GJ6enqQ\n9VNCG+hMNm3aBEtHdXd3T9KzzcCV8LfBUqZ5R1/mKqyEqpk2Q4U3wtfPJscFNh7I7Sy3zJE0eLaB\nKGb9aq3LfJsYBnh2T/bNBxJti0r/+8lq6djHawm5bXprci3/NhqDNlJgAMDIdodS2nBDifZVqyzc\nldNDmlDBla+XEB2M49K/MaofWvQAdH5hIYRs6l1xW8fZiM/WLH6a8Dyx6c++uu4PEUkVfezKRNyJ\nxTz3FfTre06IFaisrESmb/PvbdT0Njxe7E6ii5n5S19Wa15tm/JW11nCs7
AogaORx2ISUyb3pU1C\nYlJsbCyyWvukjeSINlmm40r2Zo1JOxIRERV/fLI4w4iLi46ORiZrJ7SxqQjouPj44MPhZr56WB08\nNDT0xIkTyF4RbaySNiBtbW0wlpV2Kdc8l551+Sq83+zq6kLGimhjxbQhJTc3FyfPxesmuuLKdsH5\ni1egisLCQmSmiDY2QhtSeDwejUYLCT0MGTaoutbimqbgkJCgoCDUwyDa2CZtfryPHR3wvA6dQ0x8\nYk5Zjb4dS0FlTWx8PBSHgTLEFkQbe6GNvEAytKqqqpSUFOiIoGI8pVPAIYiIORwWHn4kIohODyDS\nQcFX8AMmk1ldXY0M0bpkeHQcgEUpOP4PzATTqGfD8K4AAAAASUVORK5CYII=\n",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Image(\"img/atamalar1.png\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because names and objects are kept separate, in Python you do not need to declare the types of variables (integer, string, etc.). The same name can be rebound to objects of entirely different types. For example, if we next issue the command `a = \"merhaba\"`, the interpreter creates an object holding the string `\"merhaba\"` and binds the name `a` to this new string."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAT0AAACaCAIAAAEjkDi+AAAACXBIWXMAAA7EAAAOxAGVKw4bAAAb\neUlEQVR42u1dCVgUV7a+gERR3McdFyQuiEtcExdckpkkEyfqSyaaZ8yoyZiYmURf4iQmOsmwbwoC\nwyIosqggigoiBBFQAUVkaVCEtKDI1oiyyNIa9neqqmka6G56gV7P/9Wn1ZequnXuX/+tu9U5pF1J\nIErOmJAp+88VFbovmE4oyHAhODf6A8PSlvbbXPj1kpBXB22I6T3jZfrEKZsLJ5vLkXGx9yLm3GHw\n36pQsvzUpJlzVLWolZYxq5zL32S7kIVYCM/YL/icYMay5d3tCr1ekPDPMaAfqz7JeDghRzOfE7Jc\nooxJH2V8aNVAuI7ZANK7xWfPdSnqsPBIBRU14OHDh8wR1tbWsj1cMmasfTqWH+KLuuem5hnfv39f\nUJN/vfwM9iMt5kTZzJtGyM4zBXfKuRnl3Phj7y0cSPrFYjrfucYHs/gZx7osTijlxpVxM8u5bw7u\nrCrUn+OyuiapNqXKCZoKdOuHqG/Gz9vbW/ZlvGziteOIgehr8jKeO2tyn1hMBJpsBQEf687bh20u\nwYybW9v48rK1s1Ncm0v+Bpf4CkR4xq1t3c9xcXWTJ+OwnxZIlLHAOQ1ijpM8Y3iqt8dWSZWxYoua\n+S+18Jk8ucqeMba5+vl9DIWvzhlPmTKFn3E6/Mt5Ok8PNDXwi/+e33f+Aau8Pu7Cz/S7Q+9Of1ts\nLNANY7EvfLnckE4YoilFLUObqy8zVhdoscFMS9Px24+HrA+GJx+Gy+CnOd1uU2S7l8kXRupairwE\n082NB5W1tMOoHXOrhJgiw2iwJAZ7Hztx7FQo1KQX4pItLS012WAxrxFHRyeF3ZC0YwriG5MiDe71\ncm5ubso1mGlSjCJkrCGJKpbP4DJOueCZ3v96R3DomNl8/fyVaLADq254x/18NneAIhhW+iPNGEx4\nWCCvwQCoooTmlF5QpshKRUEaxvcwGowGq5qGFd0v7VuD6ap4sPi7h6kVUa/rLltxVsajOJUzGAZO\n4F7Ly8t73j3MH1HDGQezOt5JBCaS+AbDdBJz5BDjN5I6TvnSmLBKHsH+MlMD9WBY6x5pLTIY30l9\nbS0MpggktlS0IbcaxS33JuxkO2xiuIUVdfDT5CCrvYndosgbovOFxXzMaCFML+8aQ+28+d7/MmN6\nsOwPZqFLqXtquc2V40lue1lO50FbSw8Iuj5qhks3KdJaYQOR2VcC+SOnPUcwZbO2DeZDVMdamL6H\nJZMwg9+QaT3uVTNisJpvbXvXiX7ULVqL1mqTtRaWltG3WLDYwNXT++rVOI21VugAHZit4BuykBVS\nWCtqLFLBg6/ydIMktbbnqqRumyKnjvrdWitra1UfW3+YGPQIdqoJGSZkXEYqa1MeVgicVg8reWFh\nyR1Vsnb0J5fB2l9jo6mq5Mo3clnbc1CLGeyR01rZFgH0NGORPkUmzS19ewar5bLWLySs8zT2BVgs\nDWt3YL20iljLH4WDNYd6c/9PXt2eDg4RL9qE2+lKtJbZgNvhnWbLYS0go6xejLWtra1Kt7bP6mTx\n2Rw5ckTWfrkKW9uzjRGbmCIbq+phbV+PuahclwOtRWvRWrRWeXWyQr9iQ2vRWkmsFVhHIO7WlwX1\nSKy9zhHaqRisWtaamZn14LaWTN1qvX7k+66po+mewIzPTkaf/C6dNuCflnFU94BTZWqba9CxiI46\nZczGKP+v0uCn7nxWec20H1kK4lbaolm7dm03a68BUXmnYN/VXDftpg3TAVrkW0KWh/C5JQuPCqzc\nrE1lusTHSsfrUCm65udU0VphuqUfyw5r4d8JH3gkRFgnlHG7Wct0fRlrdWbuPu/6MXxCPWHnpdtp\nEWTxMXWxVq1qKbQWre0Kab+1V48P8xHq3S9A
qBy7tSx/+6AcYUe22VlYcLHA1Jpdyv8Z/ZXnkgGw\nqmjsrMlDoV129wUktMAOrC+CxTxGmz9jnIPAEUGbxxEdyumM7hJHjSwgsHfyJ3vBQMMBlMHxNdTS\nyLLwr2B/yqzZlD+tPVGQ0pBhBfsmpmbw74CltpKfqBR2W6cx3yK9+1lm+e/0XzrZJctOMAfDh73O\nBcwSMnBVRpo0lF3yuj+zX0QXTkuJL8VNB+DRfvfy88rY3XTfa9xBr0tt0pyoBHYbc+FJfJ1Jacyx\nIGRFF3bpZXoMu/QiQU1nt8Ne3qPfVgnGshuZtNaRhBwpaJ5JiPdjpijaZxBy6kmbhCdiqwqB7CKQ\nXWQXoX3s/t7UhOWlIey6uP3Xyy9Q/KCtja3dtWvXsRDVhl1wm2nvdEjasXmN9ADEoF/nNOTx9SE1\nu46HnWW+p+s5j4uLi5FdFWXXxdVdzttKKaior69HdlWR3dAo0YtcSjnrV5nCOMugcWbeGdVi7szK\nykqb2TWhh2+ZDyRO7Zqou/gIk37hJ1Myaa/S2M1l54vJL6PoScc+NZaWJPrIEydPaym7xeCTZWQG\n7TOM//kLf1uoT0x+zFSmdm/kPBaaWWZeKEx7pPF+PhXP7uHDzgpqDSpqCa0kfPzq8uYr5p58j3AC\n7DasgaVvw97KVHrNDBcVlV+o69f6MOU34TU/Vs0iPbIn+bmoIwMDA7WQ3W7+/hh2j24ZTQz/fOO3\nMt7GrlBym9nJ2UWe5oAiX7oqy67q9ogAwJAMN5SWXxodHa3QfjqyK/NYlZ2d3c0HHElu5ez5izEx\nMUoYhUF25R9nhv6ru7s7CNrZ5cjxwJM+x/ycnA5B9sHBwUoeY1MUu8W1jQreFMeu6o6gEpzdQnaR\nXWQX2UV2kV2NY9dCnYHsqlyPCP1MI7vIbt+y+ygeMk1lw5wKyZCv3MGlvzCHAT23WsirpxeB7puI\nG0N2pdJuLR2A4RkZvIkp98RbZ8YMIgZGK5PKuMcttkHK/M1O/Nmhb7ashpSpKz5JoemBUA3GB9IX\njtEdYfohzW7gt1tWwgFLPzksehKdyiU6MWD4ADLK9J24jihToV4H/kDHC1++zaHHjSG7osHE1mBg\namp67949MaoKeUzPkOb4EzKRNwH6BpkO8dsfXhUMQfKGPvk8vhrYJRO+7tQu+LZg9m3nkyV+IibR\nqVyiSjomiXXIZ3EU60lFvAjGIXuNddeEYXwHSbFt27YnT55I8N4VqDPzTjFBVRhfFxBahZUfBV9H\n8ud0g8OuJJdQ2mVcX3SrmSHkCrjBEDGJTuXCd1Y5j5Cvk2pWDSQLHe8zKX6bR+qsOovs9nmrSiy7\ntJPY3R/CR3RkxKurIwsbmJpZDLsiJtGpXK7GuEM1PG7hh7eY7MqerJw9BtJXf+6akXUcqhhkF9vM\nCGQX2UV2ETjIjEBqEUgtQji1PHcnPcH3boRQU2pF+hgT7ZYMgRUyQhUq5J5O0joq5GZIeR08yg2i\n3M2tdb8B/w6lYlqSvEaNLB/K3j+NgBFiyl4y+UsmcTbl4Xm48WhdcDX2hA7M2sMLn6QnKppaCOS6\nL+MlnfISwrk2daX252zqT79nH4B95iAHU7LYp0RTqT3IekHv18B+XXv71U9GGrwdxPyZE/xnMvFr\nCMgLbgCZFAjLu/9ckaQnKp7ans7vBKktYwLxVJ2DIXLmtEtv6Zo6sDWVWk5rl8JZAv4dp5nyfP6Z\nwrSgvjDvihKeqKQWMsQkNjceRHHZ0oXayjY+tfO1gVqevR2Fs5AQB7YQBz8QW9rx24+pmHnrg6U6\nUaHU9nRqiNTyC+fi+4N1lrgwSc8igcvXhHnNlOhEZTSjMq1hB2JrU1EvDVZ3q5C1nFpoZhjRsU9m\nTqLWtsRXQrXbtkyf8lU8d9Zk+M8pmyvxidj5QSC1CKQWqUUgtQikFoHUIpBahLzUNjRgtChNoTb7\nPtvaxhYc
jIVGxSfmUl4CY9Pun7pw2dra5oirW3NzMxaf+lH7sLjMwtIytbBSzErXsKuJ1jY2WILq\nRK2vf1B4QoqES5nB0Xa3z6E0CUpxY9Bf1Doddk4rqpJqofrpi9EFBQWaSq2CV/3fffKiX6gNi/w1\nMbdYhhty9fBuaWlBalWU2ta29l5jjmhbnAoNodbW3kHOAAZsNhupVUVqfQJPq44XUaS2z6hNTk2X\n/7b8T4dqObU7pxMy9gtmP9JlA7VSxRiSyNhNfkqj1s7BUUx+PfygiNzy8/O1ltrbcdTq3Q5qKecj\nl3k+R6jFqsGPlUStGBvij71HdOffoW9x4UDKl4qYO4uIiNBWaquAv5iLn/Ko5Txz9Qng/3UcIfY5\nDcqhVrw3ewn928Dm4+OjndTuW6T3bkDRrYhP+RUyf7tzyxlYv6OsChm6LiLzK3k0XofnXGmZqYGu\n+Tkxd3bixAktpDbt9hFiuB52elKbkRcD5fb9jUqWClIrud8iav4gNlb7qK2Fj56i6ddqN2pvxrsA\nr/viKpTZQnY+4ioqszcHkwk7L1HNhLQISrmLj4k6Mqusrrq6WvuoLSHdMR3Sk3/9N+x559QpufNT\nWVWdVlQjPD+J/VF5eHlreeenU7UcDpSYRVIxP/zTHY7yhiwOubjK2a/VvICLMlNL7XTFgfQ6pVHr\nHxAoD6+xSamNjY1aTq3qzvzY2tnLfE+2trY4PaC61EIjKCD0IlbFGkgtICkpKfjSFanuxt7BobW1\nFalVdWoBHA7H8dAhie6j4oVGTtNqLLUMjhw5cvxkiBhSHR0do6Ki2jUamkktAxhdAl3a29t7eB2F\ncOOubu7QXHJwcNCwGR4x1Co4tmJpbaOCqEWoL5BapBaB1CKQWgRSi0BqEUgtUotAajUVGJNak6lV\n0xBOec9eIrVILVKL1CK1SC1Si9SqBrUfjCYTv7kN8QPMXArkK3eBEMdiN8FoxmI2oTeG1EpB7d6Z\nxPQQe9Mo8uaZCpWiVuiNIbVSUBtpafZDam3AJ3/wzqcKfeoP6bs3LgKSPnBKZpU8XmhkSAwmB+Ty\n1pGn3AqbNvIVoj/ym6B7dMpzODI6aA8hrxy8+Qz2YzOjjIbq6Awz8btXK+ojZpragO1/NAXX9x/b\nxfLupJSzfhWkkEHjzLwzqrvdGFIregQHvuYQcNsg6l1LFTohv8QUZxbepb8NmJZYyg333AwfTlNf\nXd60J0Qv/GEDi1Nlpk+WHsplqCVj3vX0c0/kUKol4zemcrjRAV/BbrqIj5iZXA5cLYfrmOiQeVZ3\nIRw9hM/aeYaqeOMu/Czm206kVgi1fPzyyy/iqDX6ltl/jZCtvzLfWJbCWWnl3I0jydpTHOavmbnB\nQDxD7enCzgr5bFHnfhxH+EfMVC5mzryPObP96OtwM4qedNwG9eF9ElIrFbV6enoeHh69qHYZ7zsa\nqJS/T2Uq1XKG2uk9vtxjqL0h5F1bD/tXy4R/xAy5jP/qFi/T4nT6Olzvf73DHDbE+A2kVgp4eXlJ\n0vkRbOD0pPadoWTFiZKOD8+LgpMLGGoTRVMr9CNmKpelvE9eMzK9CJmZmRcKQQ7TeNd5itT2fb9W\nPLW34/bDTggbmlS1ywzJ0A2hvVIr9CNm5l3rklrF4lSMJ8TcM5/FvgApCaVwYsOXyw2pyrwMqVUg\ntdR309dPTYUWMtHf8FMYv4UshlqhHzFDLmM+v/bX5RCvz+DLE9m8hrTr1xAtymDCa36smkV6ZE/y\nc6QWR6MQSC1Si9QitUgtUovUIrVIbR9QW1bXpI7b04ZmpBaBUMMRcSwCBAJ1i0AgULcIBEJK3bYU\ney+iF9udqmiT8IIynIJAIPB9i0Dg+7bry7Ot/v6Zg1tWmIzQo1dPDhprum6HbfiDhjYRpzQXui+A\nXzOtcmryL1psWTLRgF50OdV819H05+B1t7niuvOOVVOp1Vxk8JTl211vVr
YgK+oDHr8mP9+t58Q5\n7TA3Hk49GANGz/7jF67XnzR1PbiNmx9pt2Pd7NGvUHTrj5q5drtdZAG3S7us1wesz3PUeN22lPit\n1IVfM/bGlPPKp6X6tr35IKp83zv/rE20bul4KX87lvasGcryRX7Qh6M61mIbb/fPrgGptjWwT20d\nT6WM253CRUGol24pGH3knlxOeYltrcsL2WXCLJ4/WsRUw41s59cpfenN+/osmxFh28vHl/cvpQQ1\ncJ1vYXO7hA9Yn+aoFbptfuj6Gl1tLfjyaFxeVZMkr+iOUp76Q+ZLgWrwafBKKnXS3juCnp6rwlZT\nqXMc2E2oCPXS7bQfs14Kpjc9cJxDpZufrYRfDUk7R1MNqo1RNd3Ob6s8/yd9qvo+kP27ZA9Yn+ao\nNe3klqo7ft9tmD+qy/dLg2ZvdkmpbhWn21Vnngk2TqrOmVOp810fCdZ6zy+9RVW3pqhbtdPtqtDK\nNpFPThObkZQ4MHKT4AHr8xw1v53MrcjPuHH9XnVLlx5EjvtqCA9JdNecedomWrddSxl1q026ba+L\n2zKMap2+e6Gye9ey+Wl6wo3038pqmyV8wPo0R63QbeNvjnSJDV5rl1TBU1Zr/T2fTXTluMStoKkd\ndYu6FdpSe8H6tyn9lpu61TezpoUnoCSHt4ZSiSM2hz9tlfAB69Mctaad3Pai4BIM0JmOG0SXia6h\n0aL3/+kaV/J7L/1b1K1265bRYG3OOYtta2Yyo7tE/w+z1+2A4d0Xgmf2+oD1eY4aqFsEAoG6RSAQ\nqFsEAnWLQCBQtwgEAnWLQCBQtwgE6haBQKiLblvb2guKy4LPnndxc7dzcLSytra0tOQHW7WysrK1\ns3M67OznH3AnLb21tRVLHIFQgm5hpUnm/QcgRdCnm5fP+atJGWX1EroGjWc9OHbyjI2dPej50uUo\nlDEC0b+6bWtrj4iJo7Tq7Xszv7xPPPwGR8RYWVm7ubk3NjYiE+qIF02tNS9bNHt7/rJFLXVbx315\nyPmItY3ttbuP+slFt+exAEtLq7t376IS1Avq6xVe8u3ukxeCETfVQLfQd/U+fsLG1i6tpEYBBeR/\n5jx0j2tra1EPqFvUrYy6zXlQYGFpeTkpQ5FlBF1lOwcHf39/lATqFnUrtW6v306HrmxqYZVSSsrT\n18/d3R1VgbpF3Uqh2xu3UmG8N6OkVomFdSzgpIuLCwoDdYu6lUi3jS1t8Ka9mpGn9PJycHS8cuUK\nakOLddtw8/IeI/jcfewXQY+6/anu+hXfHeuXThymx3xdP2Hhxp3/vXmjVFt1e/SYn5uXryrUc9ey\n8qEGwTlebdXts+C9M3ke3Lrq9s5NO9oBjd6Mv4dEPmqgEjkVFzy3G9EKNrPMTtNC3UIL+UrafanN\nK/7N5v1xTCHrDjeabGIyfrgur9CHb3Rg1cn4ynVwTE5ORnlonW4LU/9BuWOc9JcfNgzvptuSe9/N\npp31HnqQ0fWs1IT9tHSX2+c0aJdua+oaZGKC47lpDBTY+N3Jt7tWmYGfjqWd6l1KlqnIvHx8T58+\njfLQKt2mJtrOgTp/9FaP3PpbEZ8OF95O7rlVndkzjXaG/m3IYy1732bn3IcZVJktzCwsCA0LsbKz\n2rVr+9trl04dqcN7A5ufS5LpgkEhobq6ukSzgLoVvdVcsn4DOqxD/3I6ju6pSqjbjLzobUy0gml7\n/B80aF3/tqWlBW5R8vXGvI1TcfKbGXSxDZuzae/eQ0Gel9Jj8mE4uiZsn4k8unX39Dp37pwmPeio\nW5FbSe6/V4Kb5CEr3R+kdyT2qts01tnNr9KP3sDl/4x+kqm148nWNjaXEtOkMiztlj1ddMafx1d3\nef3+FrlxNF2my4NvyFRkdnZ26enpqFvN1y2nxHFZry2VSduuVHacUn8j/Me5+kzyFpuUKm2fBzof\nHuHgdFhK2yrP/riIac7qjp2zYPmKOa
+OoX4OmLP+M/PBVOtl/9kiqcsrKiERdKthHULUrVSb0Pdt\nxt2Qd2lH6PqLfw5i1+P8LQ+2dvah0QnKnQTKfsKFke3c3FzULepWULcZ2SeWDBD/Wl5wIL1OG3Vb\nWVmp9KUXhw4fDg0N1bwBWNQtrpfqL90CuFwuSPfyzQylFJO9vX1ERIRGTpygblG3/ahbACxUgmUP\nrp5HFbpGKu0u1BdZWVntGgrULeq2f3XLgM1mg5AuJtzq9wIqrnZwcDh+/Hi7RgN1i7pVhG4ZsFgs\nSysrr+MB/fKOvZMFQ1B+fn7asBQZdYu6VZxuGYAXKF9fX3j9evsFphZWylkc0QlJMM1ja2ubk5PT\nrjVA3aJuFa1bQdy7d8/Z2RnscXB08j8dCo4ae7G/tDo8Jt7dwwNerTY2NmFhYc3Nze3aB9Qt6laZ\nuu05iFVQUJCSkhIVFXXmzBl4Lfv4+AQGBoaHhyckJIDDt/r6+naExulW0Fe2ZkMzdYvQTt0iULeo\nWwQCdYu6RaBuEahbBOoWdYtAoG5RtwjULQJ1i0DdIlC3CNQt6lY1oA1xNFVtq/u9BXWLupUL2rDO\nUdW2vGcvodhRt6hb1C3qFoG6xQ11i7pF3eKGukXdom5RtwjUbc9n61H8B7S/+4nf3E5lhy6hnaOa\nuRRkKPNxr42ymUfdx7Kg6xzVu6A0JYa6Rd0qTrdzUbdS6nYu6hZ1q9h2csXRvxgSMmL9haessvxf\nXgOjZ++9+ZyFuhW5SVFiqFvUbS/w8PAQHzJCVP82g50VfCUrkYqp15CanR6c8CCF0/msj9vwt3mD\nmGBchgN5QQYGLrG5Yr2BirpKdAYPeYWXqrvA5mJxp1Ri/XdN48V4HDbeZOakkbzgNiZ/vxDHizT/\nPNKCipxr9NH2GTodAQwGvuN8/zlPZmPNTQ3pxFeGDzPgBx763DuvXspIzh26HbVkGh2myHDSDKMx\nTMAiMvQt94gi6UJDiygx1C3qVnpMmTJFMIDHnDlz/vOf/wj68ZNyXKrjWScmu+I7I3Glxn9Px5sm\nRntTUjv98mbtoWM8mvLiU5cf/WAEpcE1x6+UCl6z7qrX2wPp4Mt0LBKebgmZuiO2UljWM76IFwwC\nVu79P8MoFb8TnixdJGf+BafvjHkmcOTzSOvFVI0x/h8nC/slNDTqVgnQ0dHRgBC+EJR4zZo18Db+\n6aefZNHtwqOxZQLpeacW0dXCdymCLcNyV3PqvWR8MCsNXkeZHowcB0+eZTLLtMs2YzL98tZfcbK8\nU7eLfOPKhGW9xC+uy3usPtZlMXNLV8ukiuTMu+CIrXEp3cwsTNo6nopqtyKovD9CQ6NuERK9b/X0\n9NatW+fp6VleXi7fPJCIPiFPt3O/T60Vqdt019nUMbP29NJP7tDt8pBEibqjXXUrRSRn3gVHbYvv\nodsbH1O6HbQmpKI/QkOjbhH9NC7V97qFALkub1MhWXUWOIQXdxHetaCPqJYumf+vlFo5dStNJOeO\nC+ou29+lmVB19ge6hpmyL/Rxv4SGRt0i1Ee31FYd6fYRb4SHDB47ffbU8YN5rU3TPT659X3wvpUi\nkjPvgkMWvTWZbqWPmjbDiNf61Zux+/K10v4KDY26RShYt7jheikE6hZ1i0CgblG3CNQtbqhbBOoW\nN9QtAnWLukWgbnFD3SLUBNoTR1PV8P/sbo0DXrYS6AAAAABJRU5ErkJggg==\n",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Image(\"img/atamalar2.png\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What happens to 42 then? If some other name still refers to it, it stays where it is; otherwise it is deleted. Alongside every object, Python keeps a count of how many references point to it. When this count drops to zero, the \"garbage collector\" removes the object from memory.\n",
"\n",
"After binding the name `a` to `\"merhaba\"`, an assignment such as `b = a` binds the name `b` to `\"merhaba\"` as well. The same object now has two different names."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a = \"merhaba\"\n",
"b = a\n",
"b is a"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The fact that the `is` operation used here returns `True` shows that the two names point to the same object. That is, the values in `a` and `b` are not merely equal; they are one and the same."
]
},
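{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, the same identity check can be made with the built-in `id()` function, which returns a number unique to each live object; `a is b` gives the same answer as `id(a) == id(b)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = \"merhaba\"\n",
"b = a\n",
"id(a) == id(b)  # same object, so the ids are equal"
]
},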
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAATMAAACtCAIAAAEkJj8YAAAACXBIWXMAAA7EAAAOxQGMMD9aAAAf\nSUlEQVR42u1dCVQUV7ouSIxLNC4RJREjBhfAxERjzGhiNmeSTPIS85LJMkneZHsumUniyTiTzIvR\nIGuzNYvs+76rLCKyKosCQtOAiKIgsjUou4iIQPP+qmqgabqb7obeiv879/Sprq6uW/d+97v31l3+\nnxhWBwh1xrpbj3j97b8ODw/cK/uVZaLIo9S6PnPqg/mNg8MFvdR9KUwS650SC2LuyxDrcHts4k5d\nxWKt99xMx/TU+pWEAPM0L4e5zb2jQUWxCkepmojHYqV5UHWsukC9orFOzC3pQQyvtra2KooVEBYX\nn5bPVSzH4EY5jb3L3/ubxXvL4HjFYl2BcFZ8q7vQAM4Y6M2yfHc5sXBXYVnxuFinAkEKtob8u7C7\nSKiUjBYU6szq0a/TGasiOTwVlN+8K1dQa92knlht7OzpHC+sbVNRrGZHjqizRlyw05Xp9fCR8Tlc\nqppYAZaWlnSUBZfrmKkcMkpiWxh8QpdHDVEeKuvrm9LduoaHBwniiV9iSYLSP1/Mi/izxCiXr9kA\nUbbxp5wCgtg6i7Aro4vhPbF9UjVxKSyVnMobaugRKj3KhiaeSJTe/oEqSqXpGj36wMXFRal9hunJ\nWEX6KD4+PvDFzMxM3sj27NlDR5nP651NtRCGOkRGI9VaPLSQW3+FPFiyJaGud6WREaeqVBmdsWad\nZ1gEsZH+uoYgdj1KELpbiScOTHNPTNM7f0qsfSS9gasvSmVUPdKiVFKFN3mU81espQ9sWCxVp9LG\nxoZxLYnyio9mvRvNlHdAdaaT7tGWpQaDpqiu9ADZz2yPhZOKjUIqBhi7pDvWg3Uewudtf/r04Xci\naL3D5yaCQD4nS2f5lWv5NTeFayRzCwtGpTMsOk5KvyUo+jhD0jlpF02B7q6mpxNU7p5YLDwMCsHJ\n2YUJ6QyMOi6dTxU8irxDwAq+MvD5fLFXWFlZqSbL6ege/6HgKXb1JA89vqyNjsyLnPngUWLL9I7G\nT7ELDBgcHBz30Ju84POf+V3wRgmTCvNWrjdab0IQs+gUCn+aW5vTB4s+y6BOPgifu3d/KZIXGtR+\nqqjcYjo1sv6cCf2+kpKSBQsWTDWdAaHh5ubmTi6uzOQz70qjpMLd39/PkHRO2jrl5uZqfTrLKi5N\nTOdn+uNSGxweRWgYlNKPv3DhAkPeP7PKJPa2oEvIwPGhtLQ0WOoSHByM42CYTg1Pp8hcOT32h3xq\nOZ8HOLAUoe8+xefoSCl8PufdoMIOwBr4nPPeaTrqufDZe45s5Fnvw5DyQe7d4eHO23D+jRDy4se/\nV7Dc7lg9p2lQOJ0b4dOEVaW6dL4UPSzIaJOxsY6+Zup5qKHz4cGb/GFW1X3F+YR7PUIQw8Makc7q\noE9XrDPVffoAJJMgZouk04Ag1q2Yl9k2hPrEdIosd4tLy2FAj080nZI6ty6ursxJJ6epR8rLytDQ\nEBPSCe+W0l/KEs/kMyGdvmHRk75/MiGd45PEKyA/W/OYl878mltjSeLdNH1/P7e5M5d56TS3sBRK\nUpvuMlPiQdOYOpWmE+ZXlTcYDzeXaXwI2lWGpBMGpiVdVKqSQquidEpKalljl2r0Q6dTeMZScug+\nyxM9mWz59MQrR6dSx6VztOuXxa0qaerxDQhMTk5WdvJgUR/MfI6mExblwtYseGhqgHY2oTNP9xmo\nO7qp9yhdbnPXWh3C8VKXyXxidApUf6Eui3t75C+E8OYQ2AyWz+sVn04VA9IJz7R3717hckuTs3nk\noU0JAiZ5jdauhHRmNJF8ZpBP35Pe1FtSW/3Gq8/r7ogV8FmbC5/RcZGrFuvkSiq36oWUdLLfmAcH\nZ0I+gnTmjJVbMp3fZnaQ12yLoP8S87Nx0Xkb
OCi5kpSt+elUbj2E6cR0Tms65d3MKXvQoHTKhYsX\nL86I8SGohA8dOsT8dKZTwPE+GdJ5b2DQzsHR0tIqOSWFmeks5Fao5Y1MpemMPJYgqfE56uXLnHRK\nb2f9/PwYMW4y7s1TzNsdQ8aHUgrKhdNpOIuYuFSKeeOa3fnUwfaQZpHFU9q7SEqQzlPnucLp/IVc\nxdwefYNxfFpYWkqvh0qZkc5J69vQ0FCGpDMnN3dGzAsCeDwe8/tDwgC7Ao6OjhwOh8n9eATSidBU\nOru5gTYhFWIvlfITQkPpJI10Uitv73DITXhGJhvg88HnrYR+GoAzL8AiyjkL4OBV12zSXhplm+Zy\nPwMzCDb/r/x8P6Ru/oNkGjM7yVVjTfHfwfET643hc8OPginFLXDB7GXrV5LZUg5rSqmM+tMiOEme\nIVbupa4aMNaBLwtXPwqWHHVbBlVFZ1vaPvIhdJcf9Ejkj/uJfMometFUeyxBPEX/EewrqHJJpyrp\nJF4IpI/rqBwYbPAhWRwBmCV96yTMhA8ZUi+Lz7z1TUnzPZo5+MoTrC4D82gELDGFGez5hiaCf5o8\nAbtZVUTnOoLwvDFAn1xLEGEtfGE6BdbYSDo3Mp9OajXwWObw2yAHqgRV0dBignCqHuivhMrsBfpU\nf4UZQWwfl1EjdJ54d57OFjZ9qjXpU4J4FrtCCKQT6SQrhK4etrPrkSPmtvaO7n5BLp4+1ixbmBYO\njYjEvNMaOi2trXMl79IWDv4RsWlyzrQiVEenlOWaUkLO5frU1FTMTc2i09aBPZU1SFAUMEM1hc5p\nWW3GgNkXhtCZXXFDEkmk3wkY1jAyIt+TF+6SQqe9gwPmqfrpvHL1mnTZwfJrMLMFK7CB0FzJlwWG\nRmCeaoQ6o5MzxDIES8zBPwt9DAvNgc5sRjefZtqMMTqdXVyn2HCW1bd1d3czgE6lWppT3XJ4O3sH\nhe/FqW6qq6tjQH3FHDoBsLfLnu0k740YswhFhE7YxQdb72AaZMNkhkwn3ekHjZTY5aATt/9NNH86\nMYh9MGmbOJydnf1CI6WvfGOxbJk3eiBMp9d/zX/n+K3DzxL7z3VpFJ1iH0y+PTldXV0DAwMMI4/e\nhElj9+7dkEBJlS21ndSIPgbLOAb788nj+lL4Iwfy94NFs1/xo39N93iD0N8HOxnhp6/S2kbp3JPZ\nTvuDgWMwXeD+vp7+vjyBKcT/WTb7rcSRWJ6kTyZZPEfo/x2sHcD1BYLHaKX/q+lbydRI53fffUdv\nH5bedgrrBrahgqPGUW7AuDKsS/hRVMEknTli1NkDx7BPNfQHmEd+ZL99yOlr3XEHjGBjLh3Lki8y\nBXeohZUec8CoA7mv9/39YAcdTDtIeVFEOuXoCkmnk9vcsZzylrtKH5YnEN6VPZPSGfMf0tnuM9u2\nwzKTd77ZQRj+Qsfy8OadsErBgHzFf+BMI2W7AlaFLDMFJ1xgwQLuLmLEAumcMT1bBNKJdCKdMxKH\nDx9W0ZgtQtkwNjZWYJsf0qmJAPsVCm/cRDo1FKhOpHOa6PT0DbCxtcsqrxEzm52cCcPu5eUXkSFN\np9PLPzAx+4Ls/WM7RyfY64tUaRydjTdbIxJTFXvpAdsDyJYG0VlysbKwtnUqkztunt5ImKbQyXZx\nm+JcHS611RQ6b/felT71+vVpWrhdMM4fWitZoB5eyJn66bzX3y+dzvyRr5/qE69E3pRMpydyps7i\nMnpkz3aWQudKglhiCNOtxNp9J6VUtpbYIdIQOuvr6/OqmqYyou961A0zVFPoBPT19flHxuF+I4bQ\nSSMpKSkyKU12Im1YrJ6eHsxKDaVzFAkJCTA4kHAmfyKFYdFxMMiHg0HaRCcCuUQglwjkEiGWS4FB\nKnGQ8hNCE7nkW5uZSTCaLeUnBNaxCNXUsRMsdwp+AuuCBh9/A8dzKd99IR8vB/d9pBXNLbZMzB9Z\n7ZVONA0r
u5FUJXPZew5ipE+Vsd7/JbZOmEuCWEP/9CQ8/UEueXS/Cn4dZCiXB7l3qWNyl9ZtcB31\n+eK5b4TQP/Mi/kw8/j0c7NYjDnD6qHN9r7/91/vD4zLq1Afz57x3GvZOwh0aBdk0CGIo6FWFLiea\nYBXicmsAffUOgnCsvk8dkk95n6FcymJ4dqKNX5mt3aqkH8vva7b9CayqEg+/EzGOyxELrsCl8/UB\nxnMpYnV2E0GwqsSntSw1eMfqOaSB5kExpm4H6zwIwkTV7aU4W7rIpTQLwhONNctotlgFuuRvJV1T\nzX5qPcxJE3ZlvcilUObcM6B23a5bQXb6MtvIKvhOiQUcL1+z4RGyW/jysFgT1MPD1UFkPbdinSlZ\nHz99AN9JEMglcolALhEaxmX8qTRYQmBpbeN01MMzIASW7plbWMLSkJrrtZh3WsOlvaOTV1CYlMU+\nVjaslJTTmIMazWVNfbONnf0MNKrHNC5Tz+aFx6fItazSytqaz8eJTQ3jktfaERB1TIFVskdwlaym\ncQl9HIUXsFtbW2NuagqX3gHBU9mMEJee09/fjxmqEVzCXvYpWouysLDADFU/lw0tbUV1nVPk0tXd\nAzNU/Vw6sJ2mbsUNSkNHRwfmqZq5hNdEiSTVXyEnaBYa6C+E5S0Ei3tbCp3p6A1Ok7kUa49aogub\nwEDMUzVzKd2DX0lttbm1+RuvPk8KlDJGLSl4eyvXyoiyjQAwgUspplbFGhaXFGAHJ3KpZi6tbViS\n6FlDEN9mdtDHux4liG0RUrxlVFVVaTWXfQNDWu9Uqqi4RBJDYi3Ei1dwZIzSi57yudRSU9AQxrLG\nJyRi6palkUuN4NJ6CoOxEIprb126dAm51AguySHZ4HCFb6SaiUzkUlYuk04mg6NwBe7i7eOrmoF1\n5FJWLgHgfriooVuuWySnZ1VWVqqo261KLq9nQnSFVdG026+p5LKMnttkNS8p4cHEZE1AQMDJPI6M\nT+ni4trQ0KC6VyjkUi4uAeDjFNq/ogZpMyfp54tUP8ml2joW3Kot4jZdIwhjrkZxKeHBpGVNeXk5\nsOXs6paYdb6wlnQamFdZl3A6E5YQODo6guU1NQxtTDeXenp6wvMBIu1lRGopt/lORNZVOpefhi1c\ns+fDwRbLVHL720PktpIT9eSVaYG74VjfaB25o/Z/j5NDYGamBh99SW04fpPkchnsviEeoXYge17u\nIc1Cvks6g1tpZESeWrhrlEvDWcT8FaRVyQU7XSXNbQg9mGxciqClpUX9w1TTzeWog5GlS5empaVJ\n7vuQubyb8oJZmPlvsoqjzv+4ljCxv0o760ttpK+8PZsgQq6TXBLEKiEHnGvpY8//fuShN+MluMYU\nY79X9rmNmT68OcrlsmXLMjIypHOZ1kQdXw4Dj5b0eecduqsPlnJK3Eg5rjehA6h3e2gzyeVmnzEu\nt/jTx2ns54hNXgKXE3GRu3d/uYr0rkj7xRxnvPczfWJ7SLPscxszncvly5dnZmbK8E4i1JKRXD41\njstiZ4JYLzrNAFxui5zYXgq4FO/jVIwhZtnnNnDaQcb3S2lccnkN8Gt8vcDTKWzB/Fd+t3Qui86D\ng+TVginFK0nw9+yRWH7Jpz3ktsNx9A055jaQy+ngElTo8hFZUT9JGgjRNflxcl2Kd1YrxqK27HMb\nyCVzx32QS+QSuUQukUvkErlELpHLmcRl+c27WhqQS9Xh999/V+54JGaxKsd+YeoJuWQIl8bGxsil\n1uPw4cP0hAxyyQRR0igrK0MutRvpI+BwOMglArlELhEM5PLsuQIwKmFlbQOrLFl29t5+/m3t7Zhx\n2sRlS0eXmdkRv7AYcTYm2tmu7ixbW8w+LeAStryn5JdNOpjr4R8cHh6BmaihXA4O8aXZFJkQYA20\nlZUV5qMmcindCIWkwGKxMCs1i8sjChEJIb/6prKtTiDk4JJ7+VpeVZPCE6
Fsl6OYm5rCpaRmUuY9\nSr1eXl6YoerncmCIfzKXM0UurazQhKwGcBkQGjblvYO9KflczFD1c2luYSGNyyXgJ5Aw0CNdfCXU\nSe0E5edjnqqZSyk21Ki9g08K9i5ZPEfo/10Kl1FRUZinatelpRQuF32WIfhamwsudKVwqaurSyAk\nQ/26XPJF5giX2QQxRwqX0dHRqA8169JCenupu1VgG+9nY+KJA9KG9AoLMU/VzGVEdIwULh/evBO8\nIhuQW7EfONMokcjU3AuYoRoxVhCflT/FNfCWOMiu4eM+sgcw8oQZqhFcgsmXrLJqhYl0ZLMxNzWF\nS+k9IOnhwtXG7OxszE0N4lKxmra85S4bRamBXMpLZ1F1k729PeajhnIJsLGxiUnOnJRINzf306fR\nNa1mcwkYHByE9SLu3r6cph4RCs9X1oIW3dzcMPu0g8tR1NfXw4i5i4sLiNXDwyM+Ph5oxozTSi4R\nCAQqE4FAoDIRCFQmAoGYLmUO1nuSxoWJbWE3+bLfU7F/IRAIbDMRCMa1mfyeS1EHP9lutOgBainv\nnGUmr31lFX/1Dl/CvwZqXZ+Bb+vMKzqvnTD7ZMvjlE+kh1ft2O1V3DUEv9886/jVS6tIP0vEvCe2\nfel8rg3H8bUFAnKNDpX38DLsvtqxeiFZKh581PiPe5zPttwffzG/91qS9VevGT9KOdKatWTdq19a\nJ1X3inasJi1gyohU25U52OD/IrnnYO3+082CHBjsKLDZMYfMwbePtfIlK5PEk3/zLWodgNy6ey3k\nwyUjy/RXfxlY1gli5N+pCgMvSIDl+/J7sdRrkTJJGHzkmtdMurcdun05cjflf47Y6lVH17L9VY4v\nkPJ54OnvY6pojfH7bpz85XlSL7Nf86kdGJa9gE13pExQ5kCN87NU1fPMXq+My+33ZfnXSD6u+rlE\nyOsi/1bEi+TZFfsv3BX6b3vcy+RZU1bVfSz2WqRMw/+UjnOpef+qrSl5fkdMG3y7k/v1o2SPaFdy\np2j72HbsT+TW0NW/lt2TuYBNd6RM6c0Otl/w/+d7G5eM2542x/hjdn7HkDRlvhTVKtyFaI8l3X0S\nG52vC9dcXYk7ySrTBJWpXcp8KbqNL7Hk3K+iFSMNtJpkK2DKiFTre7O9N69xss9e7Bgc15OvcH15\nNul07pWoW3zJyhyfj6jMGaPM4dsZnzxCdiDfOt4m+n43cKs4K7v4SlP3gOwFbLojZYIy+6/YUnky\n71Xr3JsC7Qz1XPR+n6rgtrhU3x9GZaIyJ5acu9zfTKhmatVnPiWdgwJ95LJ2LiBPLvo4/taQ7AVs\nuiNlzNjs3epEGOwyWT6HSrXufIPN7/7DOaPh3iTvmajMGaxMWmLdFbFmX7yyjh4mJWYtNX7tKxgn\nvSvSpE1awJQRqZYpE4FAoDIRCAQqE4FgvjIHB4euVF8HrzXHT54KDo/08PZ19/LxDwqOPR6flnm2\nvKKitxdXDiAQSlbmEH+4sPSinYMj2L+0Ztn6hESkc65M5uikLSLhtIOzK5gxsbC0jDt+YmBgAHMf\ngZgGZXb39h319AZpufkG5tfcmqKBvZiULAswmGhtXVlZiTQgEIooM6+ISwrSJ2CKahQbknKLwatR\nYFAQn497xhAI2ZTJqbgCmjyWnqsMTYp0d6H9DAwMREoQCGnK7B8YhHdIJzcPZWtSOJw8x4GKoKKi\nAolBoDLFoLWrBxQiybObsgM0nsnJycgNApU5Ds232syOHEkruqQWWQpcHNvaxcTEID0IVKYAMCNi\naW0TdTJdjbKkA1ihxmFbBCpTgNj4JJadg9plCSHudBbYEUeGEKhMEjDHmJhTpIiWGnnRHr++85LJ\n0nkje12Xb9j2BcuT06H4C6eVNYfDQZIQM12ZMB4Ly3o4Dd1ySqgn0/dt0uyW7savo6ovCE52Zhw/\ntGk2bRSCe0EhZbocdYuLi0OSEDNdmW
UVl8wVddHHqWvJrbsz/mRb5P7V1E70uFyF7hkUHsUAJ+RY\nyBBTVWb37R4pruClhYbrnv96U19nXIl8ePUftppQTemOWMWU6eHlHR4ert35i8pETMt7JoyIyjtf\nUnI5+iWy1/rgJttLReN+uuX/8WL4QeelmFwF505s8/LyUJkIVOawj5+/i6ePfBKqOv46Nerz2NeJ\nWY30yTsFRQl7t80XtJ7P+WY0yS3L8xU1sNpB2707arsy+waGFOxGYZhCgDyfMJ85NAR6yOBWyXev\nppZo5+9fNNabRSlx7mPPvvytsz+3k1Pqt5k0xWvyY16XvA9nZ2eXmpqq9TUfKhPDtCgTkF9QYG5u\nMdHRtCpDSHgkm81mQp8ElYlhupQJKCrmwM6sorpOtTyWX0AQuCtnyNsCKhPDNCoTUF1dDd3aU+e5\nqnygUt4dWzu7gIAA5rzHozIxTK8yafj6+lnbsIrkXnugSDiWcNLCwqK1tZVRI2yoTAzKUCbJTV+f\ng4ODlbVNzuV6ZTxEWQu8VUZA+8zIhXiMVeb1zA9Ifz7E4z8UFFZFb6FG/jawqznqLNDdyZZPU87A\nQs7yNO+G8uSYTMocBYyUgn48/abBCBDZcW3pTcnKtrS0hJEeBlvWY6wy60v3ryPLlol9Fed6BuXc\nYN7rUTe5qExJQZ4ck0+Zo7h06ZK9vT0MEXn5BYIVH9kfLr+qIfpYPEyHwHqGEydmhPk8DVfmrVu3\npM8YS+7Ndicd2UAQT/1cCG86rUGfLyXmve95baw0r/q5OOd81L5dm/UoHwhzDV78wC4vF6a1G274\nmX2xyYCa6567cuPHdkGVt8ff+U7++bgfPnnZcDHti3bxqu2f/xByMX9MG11JZqaUk73iUyE/btKD\n9ZsPLTL58OC51hEhBaeVJP/0yYsGC8hVaTqPGD3/uYP/xW75N2CMKjPoVE7Ql380Wfgg6cJoicmb\nn1qnZdTLu6NDUo5NnzJFUFNTExkZ6eTkBJu2QHJmFKB1pT+hVQQp+vr6wmqe/v7+GbeSQ7OVCZSN\nrqZcunTpnj170tPThWtM+d8zR0oz5ejn8On6YlglVltu8c7ikXgM33UuzIEVKbxb8e4fU/27ZX85\n2Ur//cI5m6cpX7RrvwmNr6GWYfPaT4X+cwM5UT77efvKYiFlktB7a8/RY+7+rv84kpHDE4paf9eB\nY1cLedTfg74zpM6ZWNF/l30DhtAN9Xb9mt7MoZ4nOfA7Ix3KYa55eZFydnRMjzIRUqCjo6O9a/FN\nTExyzp1XUJkGP0XeEFrCWRFIeax9/LOUNqGLGx3+QPkiP1hKFvGa9F2LKT9gYTzRFaCVEX8gxWn4\nbWbHmDIf+z68VlzUhv+JqRt/3moj5UrMP4Mn1waMkRtucExuGL9/o8x/E0ms4TcZHcrY0YHKnOkQ\nbjOXLVu2b9++zMxM4f6t4m3m1uAzwu9ml8MoN11P/btQuFfZ7LyD3Eu0mlJm0TnLJyerLDb7NIwp\nc1tktkyvhT1p7OfI85u80pvk2oAhuKH+d+cLRV8ai/eupp7Ht1EZOzpQmTMdMEel6HumnKMmMiiT\ney35TdKz5eztAQ0lopuZ6iLiUiPyqvMaeoWVmSO/MuXZgDFyw+dFF35zSjyoH9Z9n9upjB0dqEzE\ntM9nTkGZzb0FGb9QTRHx2AdukVX0yFB3VoLFVmrAaMF70VlNU1WmPBswxt4zddbtYxe2k/UF7+Yx\n50/1qUZxh/s1jnJ2dKAyEZqlTFpIeWfD9n24Y9VigS/aRWtefu//4pJq74iMzSqoTDk2YAhuqPft\nmfTTrn/ZZkgJcO7yTR/uDSg7z1Pijg5UJmLalYlBVWuAEKhMlAoqE4HKxIDKRKAyUZkIVCYGVCYC\nlYnKRDAa9BJorcPfKZhpLVCZCGbC1NTU2NhYe58flYlgIH7//Xd6+c2hQ4dQmQiERuDixYvCK8vL\nys
pQmQiE+gGdWGFlrl+/HpWJQKgZ0H2duHfst99+Q2UiEBpTuLXZCBMqE4HKRGUiEKhMVCYClTkT\nldnedTsuIdme7QQW8iwsrZyOengHh4fEJR5Pz0u9UJFVXpOYfSEiMdU/PNbNJ8DG1g6WNYA56aCQ\n0KtXr2KhQaAyp1OZPXfvBYSS9tTt2c5ye/IbCUUNnb5h0XATRzYbLGJiAUKgMhVUJp8/fDw5DbQU\nHJswvSt3oWkl21tnZ/DRgCUJgcqUVZlD/GF3b18Qz9mKG0pdXO/hF8Q8d0MIVKZSlHkmvwjayZT8\nMpVtfmHZ2bu5uWORQqAyxT86f3jYydWN7eKm+p1pJ3M5UB10dnZiwUKgMkVh7+jk7hukrm2jZ8qv\ngzhbWlqwbCFQmWM46unl6umrVu9opDhhlmUGeihCoDLFK5NbUXXkiLm6/RaSwScwxMvLC4sXApVJ\ngmVrF37ilKLKlMHNoDwB+rR1dXVYwhAzXZkwbwk+L/OuNCqozEncDModHNjsU6dOYQlDzHRlXq2p\nBU/SijvNlsHNoFzBLzjUx8cHSxhipiuztq4eOpAKK3NyN4PyKjMw2M/PD0sYYiZWKyLfQZk5l+sV\nbDOluhlUpDfryE5NTUWSEKjMYTt7B/kXx8rmZlDOUNp8B6oJHo+HJCFQmcM3qA4tp6lHAWVO7mZQ\nnuDt549dWQQqcwy+/gFObh7qtR5/vvwKVBDSHZUjEDNLmYCjbu7O7l7qkmUOtxKXziJQmeLh5+9v\n58hWvSzTc/JBlrhdE4HKlIjr18mV5ccz8lSjyVJelzWYIwkKQlYQiMmnYuPi4szNzVMKypWoyabb\nR93crKysent7kRIEYlh2O0AlJSWw0t3Tf5q3hmUXlYLs/f39h4aGkAwEQm5l0uDz+WlpaWAWxNrG\nJuFMvoKCrG3x8vGFfrKvr29HRwdygEBMVZnCgFaOw+FAc2dpaQkygyUKsLHTPzgsPC7+RNqZk2fO\nxSalBIdHefv6u7i60tc4OzunpKR0dXVhviMQ0vH/b08No/SBCWAAAAAASUVORK5CYII=\n",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Image(\"img/atamalar3.png\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can understand the behavior above better. Since our assignments leave `a` and `b` pointing to the same list, a change made through the name `b` is also reflected in the name `a`."
]
},
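{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustration, changing an element through `b` below is visible through `a` as well, since both names refer to the same list object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = [1,2,3]\n",
"b = a\n",
"b[0] = \"abc\"  # modify the shared list through b\n",
"a             # the change shows up under the name a too"
]
},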
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(True, True)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a = [1,2,3]\n",
"b = a\n",
"a is b, a[0] is b[0]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import IFrame"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see this more clearly, we can use a small widget from the [Python Tutor](http://pythontutor.com/) site, which visualizes the execution steps of Python code (requires an internet connection)."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
" <iframe\n",
" width=\"800\"\n",
" height=\"500\"\n",
" src=\"http://pythontutor.com/iframe-embed.html#code=a%20%3D%20%5B1,2,3%5D%0Ab%20%3D%20a%0Ab%5B0%5D%20%3D%20%22abc%22&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false\"\n",
" frameborder=\"0\"\n",
" allowfullscreen\n",
" ></iframe>\n",
" "
],
"text/plain": [
"<IPython.lib.display.IFrame at 0x7fa43c2a1d30>"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"IFrame(src=\"http://pythontutor.com/iframe-embed.html#code=a%20%3D%20%5B1,2,3%5D%0Ab%20%3D%20a%0Ab%5B0%5D%20%3D%20%22abc%22&codeDivHeight=400&codeDivWidth=350&cumulative=false&curInstr=0&heapPrimitives=false&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false\",\n",
" width=800, height=500) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same behavior appears with functions. If a function takes a list as a parameter and modifies that list inside its body, the original list changes as well."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['merhaba', 2, 3]"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def f(x,L):\n",
"    L[0] = x # assign x to the first element of L.\n",
"\n",
"a = [1,2,3]\n",
"f(\"merhaba\",a)\n",
"a"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Shallow and deep copying\n",
"====\n",
"So what should we do? Instead of a direct assignment, we need to use the list's `copy()` method. When we assign the copy obtained with this method to a new name, the two names no longer point to the same object, and a change made to one is not carried over to the other."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a is b? False\n",
"a == b? True\n"
]
}
],
"source": [
"a = [1,2,3]\n",
"b = a.copy()\n",
"print(\"a is b?\", a is b)\n",
"print(\"a == b?\", a == b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here `a` and `b` have become two separate lists carrying the same values. Changing one will no longer cause the other to change."
]
},
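{
"cell_type": "markdown",
"metadata": {},
"source": [
"For illustration, modifying the copy below leaves the original list untouched."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"a = [1,2,3]\n",
"b = a.copy()\n",
"b[0] = \"abc\"  # only the copy changes\n",
"a, b"
]
},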
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAATMAAACtCAIAAAEkJj8YAAAACXBIWXMAAA7EAAAOxQGMMD9aAAAi\n8UlEQVR42u19CVwUR9p3B1FUXE2870g0RBRRozGrG80m7mbduGrevN9n7qg5vmQ32SQm38a8mvyC\nnMN93/cNgsh9e3Ao54AgCsglICPILaAiMLxPTw84DDM9Mw0zMMPz/9UPZnqqq7rq3//q6qeqniKG\nJgPEZOZKEGtPhdcNDfU/Lj7N0mFyKbX2WxPenXd3YCinF749IogNsw8nSch110zCvBii9w+1hcfs\nV2OWa73LywRBJjgf/r0WRuwOJIi5U6+Gi+71jgQGqQieLk0gRJ6juFwJHmTNVfAsCEl3e3PTTj5N\nfTX/85nDy3cKRHuaqxpQL3uuQmUVzPXp1XAaijj3hHMVqiIzMzPGuQqWW7hWx+YKCIyISskuYnZP\nMuF1mrWImKu8czU1t6DurtzaVgap9D4ZlCmQueqfPTuZLeIf9tszaBGF2qaR1kBthw2x6nve57bC\n8guiW0Rm7bC/v7+4FrHoXo9g+YRzPTu6hq+Prx0WV6si2mEjIyPqe05ZnWq2iGSW0LsZuSkUneVv\nxY8ejSu1zqGhgZ/Yj57wvqR+9Bwn+O9is9R9aQ1k2coddwl4GP72WGTNTRKXglLJuHVH3jetiB6h\n3LNsaOQIHXLz8pEyp7q6OmCrtLR0+NzurBQvGUq5acMS6oOdnR3TUnaLzCOuode/Rl4V2y2+cC3C\nrY+7uzt80dfXV0AvcJJEglmqVJbjUYisfb7JaPBGTpi36kXqgymLJX2DV15eLrEpiG2Q1KybmppK\nnyVN60PM3z/y+dOUNvk2eHs866kP7NJgsa3POG8fJXjnnS7vgJNZTqpHa3byfc2DwbwOYT/Zz2wL\nh4PMrJDMALZL+Avmy4E6Z8Hje7VmNw4MgSmTutTtMnb2p+V9W1JemV3dLChfA0NDlSpnYFgETWPl\nGxapgEuR9SWZSXsrMR7j7i79c7CsrEx8ObspW4TmGBstQcwo4jRvPlsseFD3h3iDd1cImSnoygmN\nkFNMgZAF2MbWTh7lhFwCAgLoyylodJdgoOZw3Ctpy+kTGknP52Tct91CtqWV32YLx2mspDc7iegn\ncLlckSU0NjZWTFOhCH1ifwjLieXEck6lcnoHBBkYGNjY2atmObPK74prlPv6+lSw3yeyq5GZman0\n5SwuvTm2nB8uH1Vav6BQQp7w9fWVqd8HYf0vhaOPtFJXLm3/VmTIy8ubrH7fSDnPhNfwL3re39PL\nG9Mrmt3NDUbia38V4fbzvgKJ/b5LxVXiCgldQhXs96WkpMBUFz8/PwU3ifryBPYTVLKcYF8TODjQ\nzEU+lZ7P3qvks5T1DsUnNaZP/d3h1qC4CyI2wF+Y5UplPYcgvlxCfnjz7Q8o0y7MhyWPv0XOpSFW\nfsvkvuU+usdLXbCcevBXh1WhuHIOW6IJQmfkYHGyH1zPiAkb/rIqnjDWJ5cgNKZOOat831+lvUlt\ny089hYbLNmwm5uwTLOdqgtBeNfdi6yDqE8spNN0tIiVDMT0+hZZTXM/Qzl4RL9zQNZNf5/Zpv4/d\nSDPzpndwcFAVygnvlvTxYi5nq0I5PQLDJEaVR9m2bNlCU06+WaOxidB8d9Sb8FWT2Jqe1aNfMrep\nk1+JZV/A35PXOkWXc/RRTg5vlliW/Ms5YkzYvHmzuHLmwYfR5aRC6HfrRLxkav1AV87s6vsCQ07N\nm96Bac8dmQopp4aGhgQ+RZUzv+jcNwlN4gbRxJbTwNBI0MqitnQTob7pXJ1Cx8ukLye7JMS/oltk\nZAnllGizgOfqJJaTZghQxEghfTnBMC2ukNcVMvgpspxvhjbLbAciCLpyiitq8d1OhRmHFNFPEOz6\nXSqqKGzs9vD2iY+PV6QRTKHlxPcVLCeWE8uJ
5cRyYjmxnNO9nI/7B8wtrYyMjOMTE1WznLlFpZPy\nRkZB1nXdMoVB7nA5Q85Hi+sEO7h6qE4/nj6ep6enKpRz9Jtn1xXOJMwzVkQ5E3NKBMu5biYxdqqU\nnOxgenp69HaTU0fWCdmHAr9cCX8j/0dHlH1onvR2za5saoWY/z2hhOSHWbNmyWTvg7B9pjATe3Zq\ns3Lb6cqZcK1IsJynsiFeW9gdufM5YtSU1a75+iKiUNT9qUVv7zM0MqK/v69PAbvmyNQv1/cWkZPB\nyhsFZ4itkea+ldjejiy9mES7psgFliPLL6W1a2ZkZk6LcUEAh8OZxP7QJNj7wK+AlZUVm81WZP8T\nKhTn9yHw9RPpBHQV+Zj6l4qKyTXR1+/FClMuOkknnbxV9zvVYUh96Utr/gCdkpKHcGAAPvAm5fbD\nh1dhEuVs8qc/26eT/tLISYFEWZ8KVhDMKFzz0fdQunnqZBkvdpBV0Bj1T/i89qWN5MyK78ghxR62\nAXxer7MZ/qq/YizuRKi9jc/AtwVai8AFplrTgILoHFzHe1vaeuCzwnuPeb+MorORmjTVFk4QutSJ\n4F9BkVM6FUkn8aoP9bmOVzkDDe7U/BgK4Jb0QFxna8rXZH2pLTvjHMMVcyIpEmB3nQ7/TJ21BDFT\nEXT23YJ77VXqSF+pPkHsEaKT742NpFNP9enkzQZ+eq9zyTVPFfymaPA5grCp6tcmCJc7/dShFwki\nsIkr4sShoQuH5j6z05o62BL7PkFsw64QAulEOskGobPb2tb+7FkDMwsrJ09fOxd3E5YZvK4GBIdg\n3SkNnUYmJpniV2kLBq/g8JTUVFWqEX1lhjCdNNM1aUJGWX1ycrLK0CnX9cSKM2aaWVqPJzm4FZDO\nqULnhBRDAaMvU4DObmoI8+kU74YykQOWguFKsvvK+TMItXknHK7SxnzwwX5dSHz3J6a8ccAWKq+x\nA0US6EwvvSMuntGhZZDimvXryYQXHKFJ0cLSUik4o7yuUdDV1b1165asdFJuKzKjvoPP//XjUXo6\nf9xIbLK4zd9a4dIpIEt0TE7TH7fpsanP9SWQMnvYyZlsdJbfrqSPGhYR8uWXx55/jrRQZYqP5hMQ\nrHR0UoDhdGrHBJno5DOUdlKiOvnjtmUJcK7P7R66aHcrqEs64XmjiBmdgLD4NJGR8q+ZEoQW9bmw\nPBaySVf+x2djY2NgYOB4GltZ6UyP+gXOMspuo085IjHjqefBZ4i3zzczpNPWzn6cD87i+taurq5p\n8uykoRNcNS35PGOUS0h1obZgq8hoEC6wDsDPq154Hv6+/J805uoEmFtYMuaSXdU44vZ7utEpFM7s\nmxdY1SOxxqSMxpxOctxkYMDC2kZWLhU2CQVfVBgumrO1tfUMCKGf+cZimamM9UDF6RyLzs7O/v5+\ntIviiAoC6UQgnUgnAulEIJ0IpBOBdCKdCKQToTx0unh4m5qZXyqpFjGaHX8RzO4lJTewBqc6na5e\nPjHpedJbfs2tbGCtr8rUiOqY4O82twTHJDNLC3wPIJ1TiM7CG7dya1tExeuC0VqRrluEgqOLG9I5\nVei0tnMUE09aOpVorlBLS0tFRQVTOsnZCGu1dU4k8O/+sF9flTRXqH0hOadk/tJ55L/4erEx4x3e\nJRPfAH5YiC+iwV1Fm/ZLOmqyzkZ40PtQfDySzhNJ1KV3wlS+gFrxAnV2VQo6x87kAwfu1PRMmSaX\npFjvWL7/dI6kqV+JKQmuhXw+cpP/LXZipkAorIiiXM4wmVzyuK+Pns7s4a/vLydeD2kWT6eL0tEJ\nUzIF9yeT68TMz3TVYY8YCdE4rZl17b8f3wqr2gsYzxWysLaloRNcFC1cB+tQiRe/jqNJ0Uj5O0Ty\nobNnszoxQ/cHCSlXp4J7sBEnWLAA/ouL7QzprK+vz6poHM+j2N7BcXp0hWSemLmAkHZiJojnw7/q\nQYyNb/0r
b5wz+R49euQVEjGd1xsx6woJhZSYBGlqTLpojLpCgoiNjQ2JTZGeSFMWq7u7G80IU30m\nX3R0NBgHoi+L2NEjMCwCjHyqZAxSfToROJyCQC4RyCVCJJcj7ryEMeKrCqEsXIr1gynedSYC21iE\nYtrYsZ47R9pYcBK4+uhncHwOz1Ltf3QZ8cxc0i3kTjNVrB/SdeRfn4WCko5diTVfUQfH+isd4xpW\n5IkinKTKncsvlxA/sR/xjjx68+0PnozmkiA2ULFfAL7PFJGfnpBeGAZUlMszRQ95nzvg84OhodSP\nnpvzlj/1Myf478TKb4d6rwJB1JFi1junwutEnghrJ+HDXX41DYAYcnrlz+VYF6yjuNzlTcXeSxBW\nVU+Ghq/yiYpyyRkcVTmiHM+O9fEr4kSR3m4V1I8tTvbbqzWbdDc8MJrLYUeswKVtTb/Kc8l3zztc\nOdsJglUhoqzcR/fMToITWkLzYLDIEwfqnAlCR9HPy7G+dJHLkcoZ60FYlLNmESeKdFss/75PoSF8\nWLZh83yyk7NPuI2d3lzCvIzVZIM6V3sV2em72AotKXcXuZOXhu5LMIRPmBf3ijlxqMqXFO4q7U3k\nE2zLT/hOgkAukUsEcomYYlxGJaTAFAIjE1MbB2cXb3+YumdgaARTQ6prarHulIZLCysbV99AmkkJ\nxqasxMQkrMEpzWV1/T1Tc4tp6FSPfNvoH1Qd9/zJV7KCohJlmjVkbGLC5XJVg8tH/YNKOneL3IZW\nsCSclnbv0PMMUjmrKrNkVYdL6OMwTsjExAS5nCpcunn7jSehiNSMvr4+5HJKcAlr2ceZlqGhIXI5\n+Vw2NLXm13WMMy17J+fpwGWawy7eYOXckSO/7ZnlVkl/1oPjB1+Bc1ZsP5J+l3aPlATHRbOIWYs3\nOuWRnvv9P13Ky0tHBi4trW3Gf1/A3dDe3q4cr2IEIXJVk7Rc6jmMfA3+/U1IjZ5LHVj6+AXpbDvS\n6RgMkOSLq8AcG1jCl1bfW1hdCGl6VfJ8u1eEycYlvCaKjVRfTg7QLFi9fAFMbyFYRQ9okktVkt3g\nBNfSCb4iy8hlz0uwqbO67hJ6LhtujOxHAwHGy0xLRTvg9z/z/ht2FSMb3+iYV0wwl07vLFn+dRb1\n2e+TpRoHYmiS8/HxUTouRwCv27JyaRxDOkhaIUmXgovxIKOQOxKiFdzKhGgB1b1MuKTfwa+wtsrA\nxOCtP5MtvtrecJqYbm5uSsrloUOHOjo6GLSxMnH5MWzQte47+ji/kstqidkvHMjhMOKSZq1awL9h\nlsj87y38kyq7In5aT88lrOBULi4PHz4suOGOXLk8ugGmHuwulK7z4X5sJfHcR0y4NDFliYsBF/A5\nbxE9hCOLwC9GMM1uGeIcsEw1HDlyROSeSXLjslsXJpGseo8+5dyy8tiqbv7X8hCYgsuEy/yCQnEx\nzv3yMjlbcPce6Pkc/Gwvse6UWAWHnJsu7yQycnlgITFzx28SU7Z+a67aTv4LheeJlcTyr5lwCXD3\nDx7/0lzksqiW7LbkCO61Vewt9Gw+XfBgbDReaCU9VqgvfpbcUEwzvbGXIZcm4zDGkl2v2vs3b96c\nnlwKhd+3zcqVosakjMaES9Ik6xfEmEvVGMicAC7rS9V1fpZcY1JGY8xlbFw8bBTOgEg3dw8VMKwz\ntuHJLzCx4T2dGmJpmd/QJVN+8amXBHf/Rdv6lBi/pODt7R2XxZbyfDs7+4aGBpWZI6JqXA7xnAHC\n8y+/gW7kJPVavgoMcqk+lxRKSkqALVt7x5hL13JrySGYrFt10UkXYQqBlZUVeF4bUjmoLJdCaGpq\nGkJMZcMkVgFyiUAuEcglArlELhHIJQK5RCCXCOQSuUQglwjkEoFcIpcI5BKBXCKQSwRyiVwiVIjL\nK1dzwKmEsYkpzLJkmVu4eXq1trVhxSkTl03tnfr6Zz0Dz4nyMdFmbe/EMj
PD6lMCLmHJe2J2scQJ\nmc5efkFBwapUHYPcod4ng0oahLkcGOTS+RQZu5S3ts3Y2FhluFSpuc70TijEBRaLhVxOLS7PMiIS\nQnZVs7J4nZgWXBaVVWZVNDJOyNrOAbmcKlyKe0zGG20hdvlLk5arqytyOflc9g9y4zLZ4+TS2NgE\nuZx8Lr0DxLrJl57LxOwiZeHMyMiIMZe8Ne6L1+vs5R/h3J8pySdMrPVhck8tLdhkklj6jpf4mB2L\nSJdbK54F/zszt4MLvfBf31q/YYWMPtQMDem4XAj7BBKrl5B7VkXX0XaCsrOV47V6GELLgWX1PcG+\ndZlKh5ZLcku2uAY+W/A5WIw/vNBv1hJbLanPf5tP7A9tmmAfaiSXxAv8+8twB7H8XzTJhYaGKheX\nI6BckMrIZTecaJ/fJsFXE6fF1s135Osy8X4qBQNw+RdmXMIOMjRcPvthmoAjInWa5NTU1AilxcGD\nB+/db5Wrb8O8a1aQUR5tnHO/fwAem2Zt+pyhTxh6XS78+OIwl+nQnNMkFxYWpnS6/Mc//gEeKpm1\nsTJxyS5Lguz+ky7hdknLq8rIS4Pn2T67ciZcGtI/L9V28W+ZnzcSa3+iM+nl5ioLl+BktLOzk2Hf\nR3Yur160hkx/SmuW1kn2VdiCdA0TLoPDztFwqfnyfuhXrX4Odi+fcVm8u/DkzLzp8E7CgMusxF+B\nSJfSB/Qpv7OQ2O3B95WVlwn722ox9LsVdSl7nO83RspvZJ8ALhtqvvnFoGBU34dDOo3OrE8vb6RC\nHkdUNHgLSPoR1JJwp6eovlKTIPY63GbIpUzDIyIDOHlCLsc6oLwW/YnUfip7L8c5Lobt1ueuPBNd\nw9wfHrh8uVRcxZhIK2vraWKPlein8nXNOdI4b5YyGkPfhjQ9IAld7dt309PTpxGXgnYf4dAZWdkt\nRaVJFY2J3Wc8LW1J00NrlRDlkOr53ZKJzvyqRgsLC5xXMHV9qJmamp6LvyjxZEdHp6QkldqaVjX9\n4Q0MDICJ0snNg90o3Kxfu1ULWnR0dBxSOai4b8P6+nqwmNvZ2YFYnZ2do6KigOYh1YXq7DGMQCCm\nhJkZqwCBQGUiEAhUJgKhosocqHchN04kdgc2c6VMkMEpCAQCn5kIhCo+M7ndN0PPvLdn/bMzeLNY\nZi/VeeO4cdTtHq74U/pr7bfCAW2D0o7KC/rv7Vw5h/xd8/m9X7oWdA7C781XrI6/9vw83p6Ea3cf\ns73aOoDEKAn45K7/raSbk2Z+fK/WAvLGUF+08S//z/ZK05PRkbm9lbEmx9/YuGgWyfXMhdp/PmYS\nW9U7qm8l8QaTKUcpM1V2ZQ40eP2JXHDw4vdJ9/g1MNCeY7p3NlmDb59v4dIqk8QLn3rkt/RDbT2s\n9P/vhcOT1LSO+RR3gBi5PRWBHy4njyz7OrsX73olUiaJ1f/XPuseub3t4IOykC/X847tcq2jWtm+\nCqtXSQXN2PLtuQpKZtxHd+JOvULqReMN99r+ISlvMGlzlDZTVVBmf7XtNl67s/Ur17SytidSPmaH\nq/L5nwsFdl3k3g/+E3l01fd5DwVOb4vYRx7dxKp4gre9Eilz3S/XR22p+eS22Sby+N5zrfCtJ/ME\nudB07pH4DuEnWuv5v5JLQ7VOFz+W7gaTLkfpM1WV3uxAW57Xj4f1Fo6amT1741Hr7PZBCcp8LbRF\nsAvRFr6XPKpnWyPYcnXG7CdbTR1UpnIp87WwVq7YO+dJBSUaOgwLSuINJlWOpFJlyFTpe7O9zZXs\n9Cs32gdGdeNL7fdpQBy110Pvc2mVOboqUZnTRplDD9Lem092IA9Etgq/3/XfL7iUXlDe2NUv5Q0m\nrTKlzVQVlNlXbsark7l/Nsls5gtnsPuG2zu8Bm6nXdWTIVQmKlOkTh4W/arDe0w9/6F7YccAXx+Z\nrP1/IA8+ezTq/qCUN5jUypQyU5WxzT
6sigFLl86y2bwiq81b/fKhb2zTGh5Lfs9EZU5jZVIq6yoN\n1//4dW3KTErMXLzxjeNgJ30oeKbEG0ymHKXMVPmUiUAgUJkIBAKViUCovjIHBgbLq2pg15rIuAS/\noBBnNw8nV3cvX7/wyKiUi1dKSkt7e3HmAAIhZ2XCli2512+YW1rBymsTlpm7f3Aqu1zSRietwdFJ\nlrb24MbE0MgoIvJCf38/1j4CMQHK7Op95ODiBtJy9PDJrr4/TscY5xIvGYLDRBOTW7duIQ0IBBNl\nZuUXkYJ095aH75rYzALY1cjH15fLxWVjUw7gFonGMzgGOQXJfoDYpeWgyfOpmfK+FOjuwvPTx8cH\nxYDKxECnzL7+AXiHtHF0VuQFxV1lQ0NQWlqKkkBlojJFoKWzGxQibmc3eQd4eMbHx6MqUJmozFGA\nXZP0z55Nyb85iVfGMjM/d+4cCgOVicp8OiJiZGIaGpc66RcHXqjRbIvKRGXyER4VyzK3nAoXF5F0\nCfyIozYmEDDfo7y8fDKU2c3bBgaweK22DmwGcyKhZfSmovfDfn2VXMys+a6U+1WKCu2wY/Dwws75\nsG/w0nn8L7B7cHw9gwQ74h3eXcRfIrpi7QatZ6k9RGdu/yKawyYjtMGWM9ov6cCuM7xfdE5e65SX\nMmGMMSYjn0lydzlhzqcPvqazeO7wctdlm3d/zHJhtzN/4TQ2YbPZqKiJQl1dneC64S1btoA1gb5j\nMsHK1HNIGr23I2zsfOrIuqfXxFyZrYFfriQXp+ywGd4Cmq+uyP/RoZxniNsOWkxoI7eJBmy1jBVI\nsLD8wt/IZZ/z+LtGD+/ttVOuygR7LBxiN3TJWu8XPd4m3W6p6Z0IrRredrcjLfK37RqUX4iiPEbX\nZ+fgGBERgYqSkzLHQldXF26AmzdvKkSZ3SnWO+DY8v2nYafunLSTK8ajTE5LYkoC7ObtWiisjdzk\nfy/jLQ+TZnNviaGwIopS5l8Uqczi0psGTLfoY9c1ZdYJlbw15Hst3mL0iExGafoGhSr1JuRKDQ0N\nDdiBOz3rqlyfmU/1M05lit+kfTvZSybW/1KYxzydVtjIfc9O7QW8moHt3Fm57UL7YcpXmV0PuhnS\n0FDj8v//tvyZUdRqav1xlw7vUbo3nJkynV3dgoKC8FmngGcm6PDw4cP+/v5CO6nLuzcrP2Wyy5I+\n01UnM52z7z/preNKjdOalleVWdeekZf2+/GtPKX/YZ9deYHClAkAi6is4yWFZWGvkb1W9e1mN/NH\n/XTf6+hz8MMzr53LZDh2YpaVlYWKmnBlzp49+8iRI4GBgV1dXQq1AClCmT1XL1q/Ttlt5u//Ka25\ncKJtk/lXDcmuILHm05Q2xSnT3dPLzsVdtoQqIt/kWX1WnIi5xK/3npz86K92D5vGdnikNcp8cddK\nq8E+odq7O06bURPFKLMnK/HXzbzH5AzdH1xKH4zrmqtTee6HZu32qBfSdl6mPs9mpfXFxXbFKXNw\ncBD0kFZUIVtajU1htt/+aeMS3oOemLNi277Pbb2KOtjXPV8mXfHqfJcl8xWbm5snJyejNlCZ8K7k\nbm7wzS8GZ8JrCsR2ODmu7/EelPP+rp9Zn17eKBwqmvM4UqfGC9lJP67hyVz7q4iEOzwbSn2l28/7\nNHnZ7HW4rdDeLCA7J8fAwHDsRtOKDP5BIdbW1igMVCYZajMp1/1LPs/IEdfDiv5kgQST1tbTBQ+k\nTE0gdF2Oc/zwr3qLKYdhc1dufOtfZ6JrhI1JilEmIL+ADSuz8us6JkWWnt6+sF05qmJaKVOCOac0\n+HXNOXs86yfkpXFiU1OoMgFVVVXQrU24VqRITV7n9JiZm3t7e6MkVFOZ4uYASQidKTEJkZUT1Ymb\nwNQUOAdICB4eniamrHyZ5x4wCeej4wwNDVtaWlAPKqdMDBOtTJKbR48sLS2NTUwzyurlcRHFTfBW\nGQ
zPZ5yIh8rEIIMyRwCWUtCPi+cEOAEiO65NvYmX0o2MjMDSg571UJkYmCtzBDCv0sLCAkxErp4+\n4MVH+vyyKxrCzkfBcAjMZ7hwAd3noTIxTKgyhVBdXR0SEmJjYwOLtkBy+jzA05X6C09FkKKHhwfM\n5unr68N7XbkATtMoKhEKBvpoRyCmIlCZCAQqE4FAoDIRCFQmAoFAZSIQqEwEAoHKRCAQqEwEApWJ\nQCBQmQgEKhOBQKAyEQhUJgKBQGUiEAhUJgKBykQgEKhMBAKViUAgUJkIBCoTgUAoozLbOh9ERMdb\nWNuAhzxDI2MbB2c3vyD/iJjI1KzkvNJLJdUx6XnBMcleQeGO7t6mZubgDgzcSfv6B9y+fRvrHYGY\nSGV2P3zsHUD6U7ewtpV5J7+RjUEbOjwCwyARK2tr8IiJHCAQDJXJ5Q5FxqeAlvzCoyfW4y08Wsnn\nra0t7NGAZExBUJ6gEVPO3+wgd8jJzQPEc6X0jlydUjt7+uJ2Q1NWmeg0fWr5aL+cnQ/PycTsYoVd\nEMvcwtHRCfWAykRlilYm+My3sXe0tnNU/DXFZbKhOejo6EBVoDJRmcKwsLJx8vCdrMu6XFID4mxq\nakJhoDJRmU/h4OJq7+IhS0Jd8UZbyC13d/lf4UyYOOHicIciVCYqk4+i0oqzZw1kTGjilQnB3cff\n1dUVtaESyhzZDZ7C3DdDm4XjNJT9tmcWofmuW+V4MnpwJdn9+MFXVs6fQeajNm/F9iMnHK6m3x1H\nggmOH+zXXTSLTG/W4o27PzF1ymsr5P/a4v/pUoFyyXM3eJaZedCFBKbK9E3I8D32F50F6vBFY6HO\n3943SUmrZ3590Ketq6tDeUwUoDKpO2jr1q1gCS8vL1e0MvUcksaKpLEp+Pc3NakrG4cy866a6JBJ\nzHjxi5DYmh7yIKc50unYap5GN58tzpd14D3HZht5J8/S/SGeuo0LqwsN3l1BprfsC6/KnqeRK8J2\nylWZMG4Je15mld9lqEzAkiOnU++xyUppi/f55/pnyJraYlCSz+j6LK2tExISUFETrsyx0NPTg71P\ny8rKFKjMnsyo717S4GWvrvtfPx5dMh5lNtz4cSOZ0iaL2+zRP+VeOsUT527T0h4ZEuQ0+Z95/4/b\n9N6wqxiVYH0JlZGOucBxeSvzdnUt7CQte0LDytxsFd8w6id2sdd2UpzrPktrZ3B9nn4B7u7uqCgF\nKHMsdHV1oc9ScqtMfsrMSvEyjqnOo/STdnLF+J6ZYkJb6HfryKxXnwy5M97UCm5lnjm8nExN64eA\n6l7FPTNr6+qBDMbKXP7Pa7lCP9UXfKVFXvLLHneZKNPHz9PTExWlMGVqaGgcPHjQ19d3ZNRKEb1Z\nuSmTXZbw8XpewdZ953O7h3lSdyt+1XtaS7NfOHDC80YOR4HKHOJt/Z1RVs/wmfmKR1rj6KopdOb9\noP1tZgeT3qyVdXJyMipKTsoEHR46dMjf359m9FhJlZlfdO7oBqqQu79JaCqcwIcwh+N+bCWZ8nMf\nuVcqUJnmFpayT459+p75jPbX1rk8yxWn+bzt+7wHv+Zep0q27Bd3/V4PNBMcDgcVNVEABQYEBHR1\ndU2abVbuyuxOj/pFdybvXlz1nlF223iSyi0rj0jMiK3qFv6pPGQn+Y624O3zzYpT5h1eh5bd2M1A\nmUs+v5yaZP9/dq+bS17lnGXb//sr7+JrTMdR3Dy9sCuraqMm8lQmuyTkwEIyk5k7fvOv6B73s7HB\n+i3yRlbbaRMnaD3h3PM8wXtmLv/av0aBz0yAh5e3jaPz5I60XisphwZiYGAAtYHKLKrN/JDX+1ry\neUaOOFkWe+9Up3+J3nq64IGUqQ2H1gusAwuos9UXr3rh+Wf5WWi+/J+09EbFvmfypwE5Otk6uU6W\nLDOKbuHU2emmTAmhsfL3bbNWfpudOyH32MSmpkhlAjy9vMytrBUv
y9SMbJAlLtdEZQpY+EvP7Jun\nrvNzYFXPBNxjE5ua4pUJqKkhZ5ZHpmUpRpPXOZ0m4I7E1xf1oLLKpJmdp5RBgbPzxiIiIgJmhyTm\nlMhRk40PHBwdjY2Ne3t7UQyqqEwMclAmhcLCQpjp7uI1wUvD0vOvg+y9vLwGBwdRBqhMDDIrkwKX\ny01JSYHJ0CamptGXsxlmXNvk6u4B/WQPD4/29na8+1GZGMarTEHAU47NZsPjzsjICGQGUxRgYaeX\nX2BQRNSFlMtxl6+Gxyb6BYW6eXjZ2dtTcWxtbRMTEzs7O/GOVxb0DXDhLqlqe4xBkQHq/H8B3dDw\nxEwWYGYAAAAASUVORK5CYII=\n",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Image(\"img/atamalar4.png\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['abc', 2, 3]"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"b[0] = \"abc\"\n",
"b"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[1, 2, 3]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "The `copy()` method exists not only on lists but also on dictionaries and sets. Tuples and strings do not have it, but since those are immutable objects anyway, their elements cannot be assigned to.\n",
    "\n",
    "For sequence objects (lists, tuples, strings, etc.), slicing can also be used to produce a copy. Thus the expression `a[:]` can be used to copy an object `a`."
]
},
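  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (an extra example, in the same spirit as the ones below), the same `copy()` behavior can be seen on a dictionary:\n",
    "\n",
    "```python\n",
    "d = {\"x\": 1, \"y\": 2}\n",
    "e = d.copy()   # shallow copy of the dictionary\n",
    "e[\"x\"] = 100   # does not affect d\n",
    "d              # {'x': 1, 'y': 2}\n",
    "```"
   ]
  },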
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"False"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a = [1,2,3]\n",
"b = a[:]\n",
"a is b"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "However, both the `copy()` method and the `a[:]` operation produce a _shallow copy_. In other words, they copy the list's elements one by one. But what if an element of the list is itself a list? Then the same problem reappears at a deeper level."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a = [5, [4, 9, 3], 7.1]\n",
"b = [8, [4, 9, 3], 7.1]\n"
]
}
],
"source": [
"a = [5, [4,9,3], 7.1]\n",
"b = a.copy()\n",
    "b[0] = 8  # does not change a\n",
"print(\"a =\",a)\n",
"print(\"b =\",b)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a = [5, ['merhaba', 9, 3], 7.1]\n",
"b = [8, ['merhaba', 9, 3], 7.1]\n"
]
}
],
"source": [
    "b[1][0] = \"merhaba\"  # changes a as well\n",
"print(\"a =\",a)\n",
"print(\"b =\",b)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "The reason we run into the same problem is that the second element of `a` holds a reference. What `copy()` copies is this reference, not the inner list itself."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAaYAAAEICAIAAAHsAvdcAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAy\nFUlEQVR42u19CVxU5f7+kcA9LXc0TcVU1KzMLDXas+3edtvura51b3bL7Hq9/7SsX8CwI/sOIojK\nIgiCgICA7IIwoIYIsoOgbAEBmsryf88cGIdh5jAbw5zheT7vB4/vnOU93/c5z7t/X6pPs0FxI30U\ntWhPaHWl8yNLKRoK3IhcG/vu1Kvdfdld5H83KWrZxDfjVJa+DXqUzYUu8gwjJdJX47GOuXYa+efp\nEGrjEYqaPDbyt+BalzAocBfRy2UJ8qVPmeulpc/E1FTF6RMQj5L7epGrFs29n91s+uTMkhAF06dD\nyCx/+ljyN+1yDXPAv+CjVPrEHsDj8RRIH3k7WZjHGFuR7zc2Mz+vulWxr2xkv48xWr5R8gD20+70\nubi4aET6bnX3yvWlj1r6El02iKbjme0ezMHM8VTAZ3M0In33DBRXBdeuF9RcYo5/Xktplv2yB5f+\nlO4sjUifhvKPq/oirQiB/TQzfYo1ArmcPivb/a7eBzU0faKS4+Pjo3z9maIo1bfflGlfkvKAX+DB\n3CQ+95K0tO7KalM8fduXKNK+1NfXZ9In2tYUDTNUkj4l2+ckfQE/PPXE/2Ux5emBo8eElgus6iqo\nzlM8fZrev6Hp+kw6wlSoCJqcvra+vu7d/Ju3Bf85/bf76wNfU0361qxYqBL7DW7M/qn8PbnAPxMT\nE+GXlXyhbKS/X7nTp6T4SUyfscg7K5W+uoZGset9Dx2Ri3C1tbVEnA0oKqt+cBKNjZmD6QKVNo8q\nV8p+q5bNZg5sbGzktZ9Y4Waqos5dleWvWGNbxfwjsLW1pTu1TUzU0D+J9oeqOyfRP4n0qQny1mRH\nLX1kPCczeidzvO31dcJ4t3MtFGU4+ukjISvyU+bA/0rnc0ENzPHEpa9qUvoq0wc3QJspaoqmpA/9\nkyrCggULoH9Inwan70xmTtjpNFs7O01M30i030aw/eHp6alA+gSzP+728/HrOsSnrdRelndyBJ2+\nisoqsfQFBIXIVU8WvmFmdROL2XISd6nGfh0dHYrl76KHljHlKWn52ju7Ddywky5qlUmfuYWlMHHn\nr2se/wCtlWfOmI/p47Xe9dGUNwKJZpDZmuS/RgLxUGfHNPNcMlG0u9pdNN5oycS67j4yaZRJ6mOa\n0VcO9qnOfNXXGpIvlovqqJmZOWw0vPlCI6NZCqODgaEa2OocufJUbvMN+3iFO33ZQXr9Rc3H9GCf\nsn40tbgutbhevG5WKbXmllxYTcKM94LFh8MFE4XVaj7yYLeoPLEhcBc39xF5PEXt2bNHzHxMWM06\ndUE4qiIMz/rVik8CV5v5AiPjRoX8Yh/v0PGTHNZU5Qz72arHfAQ3b95U52erVdoHwHwwH8wH841R\nWFlZydu613Lz3enpNR5JaLn55F0eoW3DlSo3n7Bm/pcf3hz0U1VGnuDgcKXIyMC+j8jf/64UrJMR\nGYQe2+arrx2YBLRgqEXcr4i0ai4X0wfFQTBfl8RFbAU1hUONomv4g6TlAzAftG9EYWBggIoLqs0w\nH8ynecnVsLU8MJ/qzCexbPaQf86QYpCpjUlRijVOR9Z8Dq4emjDWIaMLG83qrO8dMtLGp//eIxrp\n4OQ0Qh+jrq6uqPmYVsEPq9jM5BsUNjTyvbD+gc30Ezvf+e8HoitO1T1QmU//naKkWyTF2MeYL7/2\nd/L3/dl3jWjn5HJ37oO5hZjtxj8fKHG2nprM5+TswpLb2ZfK1PbxytImZWmZigU1mY+Bn5+f2OPV\nQDq5tA8DlagzA0LbkYmIIpHdDb2wDHinJt51ZZKDC1ZvM7wj/lzpPq99BX23S7rV2hSjn0tcyTJt\nskkU9a/Z9MELr3/MzNUlTmfp+C0B9Mnzd2jK
N9t785ogxQLbCaYNO1bcIQ4obqszQQPTlUlvuDDy\nQvwh4fxqJqlWJbc1iXd9vRQ1QXNsV+b/0YLlq3Qe3t2Zz5u7bDU16RlR2z1AUcsXTE5q7oHeoawY\ny7YzNh7kz4TMK4WBZLKdtAaNm5uberrzRq5NNkI9ev22y61uYXl2T08PbCfZdkdDwtifHZWSrQm2\nI172THk8CytrDbKdT0DQqHRFbN68Wcx21Fq6Q0zvuQCyYEMsAYdCo4THltY2g1xGFNeQ1RpfrLjb\n6xf46wtepXenTIyg7Qansl7gB7ApY+RtJ/TBs2nTJlHbWR51Wf7sZ+KkO1fImrsduwc67JiFwWqy\nXU6liNjVN6x6+/uCa63p6rLdUN4xYcuxhkEL1/0Os9hu8wTx/no12c7M3Fzkqc06c1ZRuquOVat1\n6Oeu7erKTtXSWRhUJdVSQ92bUdTi0bEdKUbZxU4NHchDeUfCr4+OZ0kV+6+iU8RGto4ymHqjMN7Y\nbztFRxQlBuI+Xh22k2a+C9fVYThu140Z3Llzx9LKinleeHTcwYMH+9QFztsOgO1gO9hu5FojIwHw\nDryD7WA7ALYbrhSC7WA7FTX1sFxZQWC9HmynGbYTdhFSc74SG/GgKB2xk1NiXcfPWimtY3Hs2U7Q\nNf3NKkrMdox17htso+edSshfQ5sSoYsq2K418WqXRNsN5ReJya7vgu36bSds+Q/9dengyJNlHQLz\nTYftukSHkxjeCS3ot2P1kiVzrfLaiCfo7IGTyQZI9+lSqXXg3WDbSV3mInFNDGyHOgpsp3nIyMhA\nexZ9AaNhOz6fD9upxdYwAWwH28F2sB0A28F2sJ1W244snPH0PxIYFc8zM29qaoKBZLKdiaRN2U9l\nX1RPOoxHGCNoOxPpu9nza9vVYzuuzjceld1/tMF2xoNI155SPwpLLIS2E45XEWSJOkUbcDonbuXK\ndObkV0LvrqGiprxLlsQJxyJG0HZpRTWitqNmvxXj9+9cda0n8/b2HmQ7wXjCi8HXRZ/u6uPHzqzF\nIuNb9E1p23WtV4PtBqejndmLaJ3PVbWtY0xJSRlsu9ZlH9r8+MnqbBHREE1M2uUaMdvtzGgd1Buu\nNtsN3m+qfdzyr487fsQfed6lpaVJ+GZFxrEe966VRc68350uPpKgNtu5uQ/jv/J8VYP69E5gO+HY\ns1CzcmvbnT18LK1thTInZOU44RQT4YHabDdsOVtfX69m2w3vJrCmcFg+qsl2LOZTzz5nnF9P5jTY\n+ezFhhtyjX2MadsBAHqhgDHGvPYCP8uAQkln9loYG3fBYMAIMY/4NiSeD8nBel3ii3HOioX3kir7\nxRskopscEK+MxAXiAx98QY4n0fX/CQEfzKXGTSZHOuuttdJA5H0X/u178oJTdekXTmqlHfHWnfg3\nOV60YiX5u3pnDInp5JuSYwPD1bTP+ifMpV1IPDOsJI0havqSmTrEZte7wTxx5vUsFrQsH3n1i/xr\nfwp+ucs8akO/HwsjirIrY5yCtpFfb2sp86gn/ZjjaoFxumu9acINgHx2r0a3NSd8TdtLZ+4+96he\nKRfS3zMh4mLD/isNF1GUHpg3iHm3isgX/CQTc6uQ7GSyaRDzBE5hGeYJXNJqO/MG3rf/s+xtJi9b\ncouJ67mfohzK7iynKI8qxhR9D1HUkeu9Ei7s64v46+Rx6+2ZyKaTH1HUo2AeAIB5AJh383a3q6c3\n8SFtYWnl6OZJdiIiGxCRrsXQ8BOwHaB65hG3uIkFJbJ0cHv4HRYdPNQCGHMZHGYemYqnwABLym8V\nSUlJWsO8EV3Vx7lhLnUwTwEH6mLOiME8ME9u5qnE4ocOHdIa5pF9IwV9mpPpSbEFHqJuG7OGs4OV\njW1MZsHgmP2RZ7JZLvlknphvyKkST/tl03jGlbjAqzPFOHbmNvOSz5dKM4rZX+eSV1xoYEC/6fS3\nWMy3396e
M28+ALJJRG1trWTmDUw5Irvmik20ZAl3p3FJKBZ4w16enbiXoubli8d3riDbe+mumU1R\nDPMY3xCcZ15RSSm7OULCgv71r88fvJ8e+kmXflpgaDjnmCfEDz/8IIV5reRXMk01s7qJzFSlvfiw\n2iqtqIqlQGS3c+aJbRS1iuUEfS1jHkFYXIrEV83NsqSoJcxxfvFJYvdU6XaxsLDgypvv3bt3+NJ2\nQPOYra6ZQDa8Zp/qy+OZSf56Y5Pisi+yMq+Bnptd3zW2mGdn76BkJe/i9U7RYksb6nkDzAv44Sl6\nisBDy8jfJ/4vS7juI5t1wY+jq8epsxdCYpPJsdeho2JrRoZem+JtRM3fKRqzfQk1bWu8ljOPbtta\nWipMu9/q2tU2BVf9zJM4nVyWDelVf+1gH2Bawjx60OLmTWcPb3lNoJ758uhV0dr+PFGYm5sfi01m\nf1V3D09/f38tG9UB80aZeUO1MC8vr7i4uA8A1Mk8AADzADAPAMA8AMwDADAPAPMAAMwDhqPFAJyc\nnMA8QN3k09HRgeYB6oa19ci6KwHzOIxb3b0cHWLG/mZgHpgHgHmANjFvyPxW2lX+kP3KB4U1OtS9\nz9IrlTz/u5G69y/STsvNdqCo8Yk1Xfnl+eSWvqWdQyergnlgHn2ck2ZOzfwkK/JTNuaVhFGUvvC/\nKyjqo9hm9kfkFdGT+w+Xd4F5YJ4E5v28ecJm5yvkYBjmkQ2LPD8Rrvn9aCH1/smm4bc1+Hw+df/f\nwDwwj6W0HZZ5tHfAVMH6t/ySeOIbli/lTPstk3XW968aO7BtPjXvazAPzGNjnlj49dHxOTIQRcbT\nwDwwr2uoj46RDkMdboB56FVBrwoA5oF5AAsWLFgwokv3wTxAAtLT+/dsBfMAtWLrABj/WmAeoD0A\n8wAwDwDzAADM47JxOQ4wj8PMgxHAPDAPzAPzADAPzNM45oVHx5mzOuw+W95gbGLS2NgII4J5KmAe\n2QndxNRUrkkH/LoOMzMzrbGICv0kE+bBT7JMzPPw8VX4taNSz7W3t4N5YJ7czPM8cFDKK7UPuzEN\nE85crLh16xaYR28DZGZO9gi2c3QhBciBo8fAPDbmHQqNZGFebJr/dF1qhuErZH2ldmw0qqur6+Pj\nMyzzRNcxnLJ+VO+5gNTiOkGol7zdnpSdWq3320m3W3tyYXV/4JNFN1Ti1SHn1F2fQn6Y8q4wZj1F\n7cpq4zzzKqpZNvKimRdT2//fx8ZRXyT+Lu1kvyOBHKr+i4Fw0dvbm4V5ZK9Ly6Mui+bev/zZz7Lq\nJe9Iw7ab3pX6YTaoPvIBRa0bGp+R4HuOrJZN3KWFzCPIKL7KwrzEuv7/PkxRO9JbpdnO0cmFW8zT\n19dPTU2VUfNEA9nyccuxBrFIJ3cvaZY5HB7NTrv8S4FknU6+9BO0lnkmJqYszDsd5zyZouY+9l4W\na4Vv5Bz9jVY97y7z6sqIHU4x2l9Pb8oYVCXBAmEJaX7B4XJpIRPWUmyFiTYzj27bHgxQpoarHbuf\nyah5ci9EVe6SoUGrmCdQPhMFrHDhepednZ1W9uepc72qzKGJqSpoFfPk5V9+ZcNIO5fkbn8edtyT\nm3lChIWFkffxOuAXkZBKmiDpF0vDouMdnZx5PN65c+e0clSHvO/FhhtcDNq51yMAgHkAmAcAYB7A\naeZ1k7Z6Q6+E82o81lEbj8BewAgxr9fC2LhL0nntBX6WAYWwF4DSFtDG0raTb0oODAxX0zM4njAX\nKW3vkJgnJ1HUxHvJwXPOqeTvvRPo/vTLt7TSPvT7vnwfRU2g35dauJ2JXDmO/Gf6kpk6ZG+w6910\n1Hpdcs6cFQvp0y7ekHZhX92Jf5P/LVqxkvxdvTMGzBNn3r9mU7v5NwUxN194/ePbg5n3ywX6pz8v\n/ESOmZOsDKnHvWq1lXn7Cm4IjlvJ8R99faf/dv+kLQHMz/WBr1Hzd/R1ZR
I6MTEXrN7eE1ot8cK+\nvjZycLW739Tk+83uAvMGM6854Wv6S9WZu889qndQC4M2aF2PIKollKLWMJdFvahjaFWircyr7xlk\nHDJUP3Wx4WoGhosoSq+vr2exYCD1kVe/yL/2p7QLu2u9aakbAJn782p0G5gnoW17If6Q0ZKJNNW6\nBzGvuVfIvLVjgXn97ztgnMcoyqrk9tBTe29es971ET1x+I1AiRd2V7sT79Wo4bExbzlFeVTdYaIe\noqgj13vBPKFxIv46edx6eyaq6SSh2qO3iki1+Ekm5lahMUVtknhhXy+91URJf4W4536Kcii7A+YN\nbmHk88jB3GWrp5F/Jj0jVtqOceaRKu4DgnlTyxeQApNKaiZlau8GPXI4Yc2KheQfmwtdUi7sK/On\nRXHB8lV0Xebh3dA8AADzADAPAMA8YIwxLyn9rCmPR4KNnYOzh4+ljS2ZKG+7366xqQm2A0aEeU5u\nHk4e3lL3c65tNTM3T01LgwUBlTHvenObmbmFTF6katu1Y6WjED299DoMjoLbzEs/ly+vBxrivaa3\nt1c7mNfdy+E97DjMvOa2DsUWexP/BGAemKc483g8M4Vf29zcHMwD8xRhnt+RYGVeO+x02p07d8A8\nME9u5pEeEyXfnEPO89TAPHtnN2NjE9IhRf4GRSWAeZKZ19jafra8Uck3d3H30CLmdTC+S14Ipl2V\nrRFxszdtazy7HTJL6kirSzQmrahqGIchlelizvxeCW2QcFrt5QF3UhJcq3CSeQ5Ozsp/c7m17a2t\nrZx4bR0dHUdHx2GZF9fvu/MP2mtqv6vQuozKP4bVfslVYQvLEXUbyknmsX2RNcX0lJ7pD8ybTpYd\nUFYFbHaPieHG2gKhtIhRUDLzai+RDJfx83M/4C/VuaB0v46igUxvfi9M3Lto+omdJD3v/PeDMcQ8\nt7dnz/s6gzk+9OmcCa9GsViNeHvlFvOEGDdunIODg0Tm5WZZUNPe+fS1DdNmLdrhd14xwWPCaX7R\nyDms5STzTE15rK7KykwtTLc89wQtEkahLGc6OztzlHlLlizJzc2VyLyIvWSOtp5bSsWZjBiyxGzp\nrix295Usvx47dYadeTMoamdGq2JuQznJPBancYe/I/Php31vGxBX2h6224CdeceOHeMW85YuXZqX\nlzdcPU9kwJDvQhYAsViAbEYg7aeQ2KQRdZXMSeZZSN9oahlFfZnU77/3rZkUtTGQbSSXz+fEaxsY\nGEhMqjTm5QgpWJFMSmZ2Ann6H5HmWZH9Qu93p49/PrBgTDEvLTNTqoztXUev59u4ibQv3vjCiFq8\nR9qZoVGxWtarwjCP99Q9uk+5M/EB3yyhFv6HnUC2dg6nsi+Kl8Iy7OmlpJNurvYk+waGKtmrogXz\nViRrXv112o2C3txZ9D9TMwXe8T+ZR83+Mk2aKSKSMok1SAdy/LlCRzdPMbWTcm0n8VwQKOpvvjqP\nPC9L65nHMzNTinn1HSkpKdrJPImhrnT+jrMKbn0m27XyuofnKvMI/ELCx/LQmVzM2/fM1CNlnYox\nT8ZrxxDzDh85mlPZrIAJQiMiuTJ6odjomeZt96gto2dCmFtYyGuFzPOXtaCcHcI87m00ym3mEZCu\n/JTfKmR824Cjgdq0PQGYN5rMIyCdq2aDZ1tIaOGX1pHm282bN/u0CGDeKDOPQVxcHOkO8A04mlFS\nJ3y9rEvlTi4uZCeW6urqPq0DmKcRzBMF4RkpVZubm/sAuexLYS29cswDwDwwD8wD8wAwD8wD88A8\nAMwD88A8MG9sgizvoDgLMA/QQrUG8wAwDwDzAADMA8A8AMwD8wAwDwDzwDxAFSgrK1NDZzKYB0gW\nPAIjIyMwD1Ar1q9fT+9JXl8P5gFqRUVFBep5wOjgmWeeAfMAyRjRvaz27ds3ovcH87jNPI5u3QHm\ngXlgHgDmAWAemAeAeQAnmTfUC2DnAx
QVUMF2VUaS35xJ1NT5Dx+62Mp68z8+fpHelmvjp5b5Ulz6\ngXljmnmink8Dvyd7e7Ax71ymFTnB97f2gpryCaxbPj2qS635TwztLvZdfWruPyW6MQXzwDxmX78c\nilownZV5n86nVlr0b1SUFfk5Ne9rKW6xrz/16Fp+//5kF8lT+GAemCeNeat0KNeiDnbmbdSlPort\n91icd86OovSHpdG+N+dRS/4DzQPzJDMvivfUvX85Sg7Ymef13n3Uop39lNo4nqImsd3/aglTt9t2\n4DcwD8yTxLzaIrJdZZ4gcvowLYw2/YHJeRv2eFHUHBk2pKgnJ3uXgnlg3hDmWW8Q9x/w9/jhNwHI\nzTSjJr8l7c5hp+5uJrN+HPX68QYwD8zrYtnVg13zAr9aMOWvEczxh/qUkU+VFJ2rJXeOrmWOrwnb\ny2AemCcT8yTsg1V/jXiC0ZuzjFTxqAUfs2yXFWH1Kn3K0gfJ33X/LxH1PDBvGOYpsg+WzNtlgXlg\nnir30JJ9uywwD8xT5R5aCm+IBeaNWeZhxgAA5oF5YB6YB4B5ADAYP/7444jeH8wDJKCnpwcrvYFR\nwM6dO8E8YBTA9Mzt3bsXzAPUXdTCixmgbgQHB2/dunX58uXkL5gHjEKBi3oeAOYBYB6YB4B5AJgH\n5gFgHgDmgXkAmAeAeWCeGjKP0wDzAG38LGECAMwDwDwAAPMAMA8AwDwAzAMAFTPvdne3q6e3iYmJ\no5vnoWMnwk9nHDhyzN7JlSzTjYiMgu0A1TOvu7fXlMc7kczmMs3Fy9fd3QMWBFTGvJTsPDcffxnd\nFJia8rq7u7XGHOSTM+YsuM28Q4HHTmVflMtHhqOrR3t7u9Ywj6OeSkjgMPMKikqPn05X4J0tLK3A\nPDBPceZZ29op/NqkIQLmgXmKMM/RxU2Z104quNLY2AjmgXlyM89BCvNizB6mNgTI6BQNzKO3lIiM\nIyWAvbOble1+E1NTfl0HmCeVecUVVdIMJDvzvPwCtI55gzxcrxGZNTlta7y0WkdOZcsgsxw6GpaQ\nxmI3stvEYEyVeNovm8Z7CXZ4CvhsjuA0Q21gHksNT8A8/89fMqSoCR9ZJLCTr7y8nBuvTVFbtmxp\nbm6WhXkDXv3/IMexxXWpgpBRKWHjVzcfP4lmcXL3YjFaVnFNcmE1E75YQc14L1iCjv76Ank6wzw6\nlIRoCfPIl8rGPIr66fS1gvoWg3HUw6ZsfS6RkZFcYZ4QL7/8clNT0/DMq71ENq1j//AcXNyl/SRL\n6y07cS9FzcsXj+9cQbY41l0zeywyb3W/1fgXfClqMYvtPD09Occ8UQpeb2yUxrzcLAtq2jufvrZh\n2qxFO/zOS3z95POlLJXgYfeuIPtC7ZawmUSneVQ5OdDXTuaZmrIwb96/s/r/W5NHcoLFfG5ubhxl\n3o4dO4jfOBbNi9hL9mHXc0upOJMRo0tRS3dlDX391MIqll4nduZlnthGUatYTtBO5hkbs2reEz79\nmpfvTlHLWawTERHBLeYxhJOtnnc38PkuEtsBLl4HpHa2W1mzM2/zBOpZv9oxxzwrGxv2ep59TktB\nfQNphBm5SS1Qzl/rKioq4sRrf/fdd6KEk5F5OUIKViRT1LihFtjv6CTRMj6Hg4brW2kgD0qp7xpz\nzCsuKWFh3uwvz7y/cTHZWHz7wQsspjkcdEwre1UY5vGeukf3qf4GRMA3S6iF/5FSnxMvPcLiUg4G\nH2cXPP4FH/YKtNYyj4DMelJ+ew0tZl5B/XXSxKT05s6i/5maWS9141dXHz9zSysyl9HV+yCxSVbp\ndbHeu6GXpHgbUfN3isZsXyLeZai1zNtvZ68M7c6VXSsrK9Nm5im38atSlwwN2sQ8egDN1QMzBkZi\nf1glL9F+5mVnZyfkFSlgCGcXl97eXm1lnrr2h5U1aNXomRAeHp4ZJXVyGSI0/ER1dXWfVgBzVUaN\neQ
TR0dFBJ+NlfFtrG5vW1tY+bQGYN5rMI+jo6CD1trTLNSzvGR4dZ2Vl1addAPNGmXkMyExPGxsb\nKyvro8cjE7Iv8Gt+j009eyT4mKmpqbe3t3ZU7IYy72LDDY4G7WEeAACAlgGSBwAAJA8AAACSBwAA\noMWS113jsY4el954pEHWLmMFLgEAAEAtDwAAYLRreb0dl4L3fbjJ4L57BNMSJ84xfP4f5ieudPZK\nueROpfMj5H/LTQtbSyOMP1w/fxL945QHjf7lmddGln3daUix+8fTD06lYycv2vi5Y2ZzN3KFO+jP\nX4NfLnbUJ9r8w2jJdJoYujNXvvSVY8r124NP7u0qPWnxj+dXzhxPZ7fejOXPfW5xsqxrUGtgWILJ\n9UQZHwpA8iTqV3et72Yd8r+Hvo+71k+t7t+zLY0m0tR8/XhTr3TJo7H0M5/cpjuEgzdKA96bMbDG\nesnnfhdaicr1dpYcYbwpzf36bBcyhluSR+OBrc4Z126RuJ4/Lgf9y0AQt8GzminBbpXYPUlL0z0P\n7zhWwuhX782q6D1P0EI04Xnvyjt9MhJM1ifK+lAAkidV8u6UOz4qKCkf2e6ZeLnltiwVwwGCPvhD\n/k2RkrcxcDMdu+D7c6LzGFvCnqFjV1mV3EbOcEvyFu89f1M0/vYV61V0vNEx2kFcZ/q2mXQ1/q0Y\n8QUTvc3HX9ajS76fLvwpG8Fke6LsDwUgeawN2+6Wc77/fXPtjEHulyau/MD+7O89bJL3dHCTaFOi\nJdSIjl3rWCFa0LZFvUgX8oaQPM5J3tMhzb1SmXO7hFEjNgwo1bAEk+mJtATK8VAAkie5YdvVUMpP\nTfnt9+5BXSWFzs/Q/kV0ng1u7JUueYMJCskbS5LX90fih9PotuSr4c3ifWh3GvOSU/OK69rvyEgw\nWSVP1ocCkDypNLpVbC0g2+TnLNIb+kWpp+M3r7cFRfJ6p7LbfZA8SJ5EAbpR8LOhoGL14Cfe+a3d\n/cKTbvXivXTkfR+caOyRkWAyS56MDwUgeaw06r1RFkXGvwznThRwSWfqA+v++q1jYu2fw/TlQfLG\ntuQx8tVeGGr892eXM4OnlN6slc//g4ye3hC9cliCyfVEGR8KQPIAAAAgeQAAAJA8AACAMSZ515ta\n8i9eSjiTFhoR5Rdw9ID/ocCQYzHxp7Nycmtqa7u7saYCAABuSl6vYMtvRxc3YxMTnpk52SsoKvUc\nu8fe9OKr/iERZEtXsheThaVl7Kk4rfRtCwCA9kjen3e6fQ8dIZpF9qlOK6pR0m35kYgYUx7Pytq6\noqIC2QAAgAZJ3oXLpUSe7J1dc2vbVb5lQ0RSJtlo42hgIDJDA0H2ZJBh03sEFQct2COVq5JXUlFD\n61FE7Ejn8dnyBjMz84CAAGQJJA8BkjcKktfT22tjZ29r76jOnI5MySEKW1RUhIyB5EHyAPVJXnvX\nTbKBXtjptFHJb3MLi5iYGOQNJA+SB6hD8lr/6CRVrehM/ihmuZW1TVhYGLIHkscS0i5VH09ICz11\n5jT/MiQPUFDyyMwRC7Jl8olTo57rpJpZWFiIHILkiQafw8GkPHZwdRcrkpMKrrj7HiI/WdnuJ/3C\nkDxAVsk7ERtHJE8Tcj08Ic3CwgI5pFpcvXpVacnrSHTZIFiiP2vRckMDQ6NtsU3MT1mRn90nxTPd\ntK3xWcrxwcbOwXq/HV+GOQPuvv5E+xQUvsp0xje3dMx6J6pJplvVN4b8/CTtinTKu16lwviW0J+3\nLF9haLBMn3ahQRnuymqD5I2m5JmZm0ckn1WEK1frQ9x/euNpw1mTB/w6zl298e9WHvzfFWY5mbSc\nlZWFTFJlfg9AR0dn8+bNjo6OtbW1CkreWpe4q6LxDQfemy5wV1xwTqWFX1Z5o4mJ6fHT6bJfklpY\nRZIdFJWgojQ0Be5eLXDjvd2npHPY8/lFZ/a8tfiuSA6SvIFQErIe
kjfqktfdQ/M7t7ZVTkJ0JPm8\nTm/jo7N2W3DZAN1bE8N/eWwCpcw34OrhGRQUhEwaCckbinHjxm3atMnBwUFUBOWQvNKYLfSuTat3\nn1Xx5E1La1uydEfeqxLyikjK+XUdyiag9vLPTwnqaq8HJdUN/y0k2D9OTp734k/OuS3Zibv0IXma\nLHm/FZeYmvIUYwa/+np6tVgB2Bz0/RKBO9uwdIXueTg4TE9PjwJGCUuXLg2PiJBR8jKC3pwo5T4L\nPvSPqVGmY8sko/iqYj1iIdGJSuldzfmdAi+j930Yk14v9+U5kDwNl7zWtnYF+6prKzz+98q8cYOI\nPmXJUxsMBZU/o1DFJM/noL+vry8yST21PCJwe/bsycvLU6hh23bKz+STjz942yQtQ1Qa6q/5frmQ\nPnnCK/sLFaxwkZU/kSnZcpfBdR0k5alFVYrrXf1Vx9foiqvu076nrypyB0geB/ryyDgpaRHIla/5\nl0Oephuwuo9ZX8od9FOj7wf30y2mp48pJnn77ezi4+ORSSMheRIFTrm+PCmh4sz7s+nTH3au4CtE\nA69DR3k8M3mbqGYWFuaWlkpU8TpT/F+lPSlPftO5uFOxm0DyOCB57p5erj5+8mVtSfgLgiEL/W1R\nyf3fQGd2buT2jVP7P6/HfRLr5G8pVzaSjO/o6EAmjSJklbzypPcXkkq+7gav6nyxfMxzFuw0tvCz\n0y0KCxAZuyDN24TcSzIJjYA5Dq4eSjVpq7O/WCRQavtSvqI3geRxQPLu3LlDuJXyW4V8uVt3PcRx\nx+aVs5mOt0n6jz7zpaNvQSv//IF19N7Jhjsz2uSli6OTE2Yjc0byrnWmhnw2U7C5xKodJ+KqmY6w\nKt9fXqYHcakZr/hV8Qf6QLxtTL/da7ovtCJP/nkqJDHuB/ylTVU5GBhKTjDlmWWW1A3te5HruWdP\nfkm/jt5LjiXSpijU+tvxvt1rsufolVxIHncljyApKYl4wRvdSXkRMXGYlMcpyesfpo8Ptd363OqZ\ngpH6cZP1V2z5Zl9UdZ6kiW+zv0zLVpQe3oeDeGZmpGw2HgDpkCE+zXIqW4adcCfbc9ujTdfQZ69x\niJXWeK/O275kmCmHkDxuSB6teslnSJ+xCgb4FQohxyOsrKyQNxyUPNm6LAoDn50yadOBmnxVsIUk\nZFSeq2yA5GmU5BHk5+eTKezJF8vVyYPzZOq8uweZGoaM4YDkDVl9IVtoS4iKDS9VWWkqs+Sp+LmK\nBqy+0FTJY0Bm5xMXUuphQ1JWLhHZkpIS5IpmS55mBdlreVhjC/TJ4hW5ubnZ3NycOM7LrW4ZoQyO\nTUqlfbdERyM/IHmQPEjeKEseA7I7z+HDh4kwHQmPVlW+ZhdV2NnbEz1lX+YJQPIgeYC6JU9U+xIS\nEng8HnHkeTT8ZFbpdbm66pKy8jw8vYh0enh4NDY2IgMgeZA8SJ5GS54Yurq6kpOTXV1dzczMiJAx\n0wUsBOCJxNjY2AQHB2MzM86B+E80Nrk7F0QDQSTPWBsB7mmo5AHAKDOYAocBSB4AyQMASB4AyQMg\neQAAyQMgeQAAyQMgeQAAyQMgeQAAyQMgeQAAoJiB5AEAAMmD5AEAAMmD5AEAAMmD5AEAAMmD5AEA\nAMmD5AEAAMmD5AEAAMkDAAAYBLLpIDUERkZGkDwAALQT69evF9U7HR2d+vp6SB4AANqJ8vJyUclz\ndnZGwxYAgDHRvOVikxaSBwCAIs1bjjZpIXkAAMgNsmmXi4sLd9MPyQOAUcCtbk3fLVMrA70lHsgH\nAJA8SB4AAJA8SB4AAJA8SB4AAJA8SB4AAJA8SB4AQPJUI3kdiS4bBFODZy1abmhgaLQttkn8nPK0\nbUsFp8z5KqBCsae0nfL++/z+NRcTZy5aPHM8c7jhq8g6viI3bI1xeXdm//30Fy1bcp+O4FjvsX9G\n1gtu2BL685blKwwNlukLfjHc
ldUGyQMASN6A5K11ibsq4YTsxJ8MhAvEFJS8hgMfCdTpgR3+pZ13\n4+uq7N6YRqLnfZWeLd8NW4K/XUTf8JH9J2vvxucXR7xC32/qi8HX755cErIekgcAkDwZJK8l7MdH\nyC8z/xYdF/HpdIUl70r4Zj1y8ez3T4rXH/l5joYC6fop7w/lxSi/5AQjeS9B8gAAkief5NWc373u\nHoq6/1X/atJOzIpUQvIq0z6eJ3iCfalYGzYjeOtU+pepr4Q2KJr+5mO/frxp/fLpgmro+FVfWuX8\nPugESB4AQPLYJS8322EdqZdNfcOqoL/ypZTkXevKSTMzHEc/ZMKj3/BiSjJqWlIyIne9sYDE6Aok\nb8sxRSWvvjnxXFl69e9p5xJ//ccjdG2SuvcZp+I8SB4AQPJkkLz2U/YvTCDVJSO3WJFuMiUlj3lQ\nRkrAV+8aLV0wZ8b8FWvf+M+vJ6vyyuJeo5uiK/+TqRo9ys3kLaFfZuFnCS2QPACA5A0rebW2T0+g\nhsHSz4WCIlNoT4wKNLVxsklpyB8cH23+KO0o9CnfxDp50lx++u0ZdCt2o0/N4Bt2nUs3XkyncMk/\nk36H5AEAJE/uEdthanm1Fd42pt/uNd0XWpHH0rBN/N8D9DMefOdAUU69oDp2MWHns3QFb9zDpser\n5bsbCWfj/ruQvuE9y7eHxVYJRoFrSr1+eGaKoJls5HIFDVsAgOSNgORVpn8iGJqY/WXaMBNN6hvC\n3He/8LA+MyFv0oLHX/reL7S0Q8G7CSqJZ6JdP3l57ayJgjtOnr9yyzf7IivOiZ0GyQMASJ68kscS\n+IWBz06ZtOmAeBtTsaDau0HyAACSJ//qC9ZlFQlRseFilTXFgwrvhtUXAADJQ4DkAQAkDwGSBwCQ\nPARIHgBA8iB5IB8AqB89vX3GxibGHMSvv/76/fffG3MWkDwAAOQA0btly5ZxN/2QPAAAZEVGRgaz\nbO3HH3+E5AEAoNWN8Z6eBQsWCBfr8vl8SB4AAFqLnTt3ivonMDAwgOQBAKCdSEtLG+qVZc+ePZA8\nAAC0HETsOJx45B8AAJA8AAAASB4AAJA8SB4AAJA8SB4AAJA8SB4AAJA8SB4AAJA8SB4AAJA8SB4A\nAJA8SB6Arw4AIHkAJA8AIHkAJA+A5AEAJA+A5AEAJA+A5AEAJA+A5AEAJA+A5AEAJA/guOT19vWV\nVFQej4rxPXTY2d3D1t7B3MLS3MLCysbWwcnF68DBo8HHcnJzb926BXMDkDyAe5LXcfNWYGi4iakp\nz8zM2dMnMiWHX9cx7D7haZdr/ILDrWz3k91zHRydqmtqYH0AkgdoruSVVl01t7TimZmHxCQNK3DD\nhvTiqw6u7iYmJiciI5ENmozunl4NDETyNDNhKgk9veDd6EkesX7Q8UiiTYFR8cor3dCQWXrNwsra\nytq6tbUV+aFxetfbS2rlI5HvSgYieRqYKlUFYnNwb3QkLzYplYhdUsEVNWSzg4ubra1td3c3cgWS\nB8kD99QteW2dN0x5vANHj6kzpzOKr5qamsbHxyNjIHmQPEB9kldWc83YWE2Vu6HBzsHJz88PeQPJ\ng+QB6pC8kopqYxMTMrwgcw61x5g9TO9dviEgpV41We7k5uHl5YXsgeSJBVIMB0bG+QQE7frfDwFh\nUXE5v0HyAKUkj1CcjMmGJ2bIk0OqlzwSLCws09PTkUOQvIySOhs7e9LjERx9euh0KDJHyszCgkwF\nPc2/DMkD5Ja8w0HHbO0d5cyhEZG8U2cLCMuRQ6qFj49Pc3Oz6iSvI9FlA5311KxFyw0NDI22xTYx\nP2VFfnYfJRnTtsZnycYBfm07KYB9DgfJcnJYPD3UJk/rRCRUpn8yj2LFrHeimmS6VX1jyM9P6pEr\nprzrVSqMbwn9ecvyFYYGy/R16LsZ7spqg+RphOSR6cXRmXwFJW/G+sV0VlNTFzz0wGw9hin3vu
gc\nWa2g6pGZK6mpqcgkVea3CHR1dbds2cIugrJK3lqXuKui8Q0H3ptOohfvLTinRBuWPDr5Yrnsl+RW\nt5AZ8qFxZ1RU7jYF7l5Nv92C7T4lncMLdNGZPW8tvmvfQZI3EEpC1kPyNEfybt66TWwty1IKyZJH\nLd0WJ1oStp3kPT6ORM/75nClIpxz9/IJCAhAJo2Q5A0FEcGXX37Z29u7qalJKckrjdkylcSu3n22\nXWHFMbe0DIlNlveqtKIa1bTEay///JSgrvZ6UFLdsOd3JNg/TjP9xZ+cc1uyE3fpQ/I4IXm/FV0m\nE1Pk50e/5N33SeJZye0F3U0B1xSgXWDYCT09PQoYDcybN+/bb79NTUtTQPIygt6cKOW2Cz70j6mR\ntT8rp7JZAdqQil5iQYlSeldzfqchndr7PoxJl7+vJgeSxxXJu3HjpkIlZL/kzfh70hDJS/2IlryJ\nzwY1KMA84pLA398fmaSGWp6+vv6OHTtIN0JPT4/SDdu2U34mn3z8wdsmaRmielF/zffLhfTJE17Z\nXzh8S8LM3OL46XR5OXO2vEHZWl79VcfX6Dqq7tO+p68qcgdIHrf68ninsi8q2LDV2bDnrGhGthz7\nYSUdv2h3SJUi1LGxtU1OTkYmqVzy5s+f/91336WlpYkJnOr68qSEijPvz6ZPf9i5gj/cyaRPmQxH\nEAmTizNkuINMXlFC8jpT/F8dT5I4+U3n4k7FbgLJ45Lk+R06bO/sppjkTVn3oqAQ15ux+KEH7h8n\n+Ljueejr6DMKFZVZheWE8cN+k8CIQhHJK096fyHJfd0NXtX5Yh38ec6r6LMXfna6RaYqW1mD7MMR\nifklhDDxuZeUquJVZ3+xSCDK9qV8RW8CyeOS5NEVPVPT2KyCUZ+aZG1jg5VnnJS8a52pIZ/NFBR+\nq3aciGPG62uqfH95ebpgXP8Vv6p+Namt8LYx/Xav6b7QijzW6h5xU2ZpbZN8oUxC0Vh63c7J1cTE\nNCQmUcpAhExP6RfZk1/SKdd7ybFEyjlXa/3teN/uNdlz9EouJE87JK+oqIiUljmVLaOod36HApyc\nnJA93JQ8JrTGh9pufW71zAn07+Mm66/Y8s2+qOo8SbPhZn+Zli1L7amymbjzcTvgv9/R2c7Rhfhq\nPBIeTSYqyzjnToantEebrqFPXeMQK61pUp23fckwswsheRyTPIKSK1eI6hG3TqOid17ePq6ursgb\njkuebJONCwOfnTJp04Ga/JFklHqeIlOA5Gmm5BE0NjYS1VPdrE7ZQl2bpaXl4cOHkTEck7whqy9k\nC20JUbHhpR0jzCv1PIU9YPWFxkseg7DwcB6PR7pL1ECLo0EhZmZmXV1dyBXuSB4C1thql+TRpO/u\n9vbxIWMa8k9ekSmcv9ZJvKYQYb1y5QryA5KHAMkbZckToqCggDR1iTM7lfjRO1/3R8jxCKKkxEkU\nPCFD8hAgeRoneUKQBZienp4kV4j8HY2IIZOnZKvNdZ0+m3/goB9pvZqbm5N5sLA+JA8BkscByRMD\n6X1LSko6ePCgo6OjlZUVaaKSyiDJM1KDs7Cw2L9/v5ubW2hoaFlZGczNRTC5CagZIN4I4f8De51x\nWQg4ywEAAAAASUVORK5CYII=\n",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Image(\"img/atamalar5.png\")"
]
},
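  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can confirm this sharing directly with the `is` operator (a small extra check, not part of the original flow):\n",
    "\n",
    "```python\n",
    "a = [5, [4, 9, 3], 7.1]\n",
    "b = a.copy()\n",
    "a[1] is b[1]  # True: both hold a reference to the same inner list\n",
    "```"
   ]
  },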
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "To make a flawless copy at all levels of nesting, we need to use the `deepcopy()` function from the `copy` module."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a = [5, [4, 9, 3], 7.1]\n",
"b = [5, ['merhaba', 9, 3], 7.1]\n"
]
}
],
"source": [
"import copy\n",
"\n",
"a = [5, [4,9,3], 7.1]\n",
"b = copy.deepcopy(a)\n",
"\n",
"b[1][0] = \"merhaba\"\n",
"print(\"a =\",a)\n",
"print(\"b =\",b)"
]
},
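  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A detail worth knowing: `deepcopy()` remembers objects it has already copied, so sharing *within* the copied structure is preserved (a side example, shown here as a sketch):\n",
    "\n",
    "```python\n",
    "import copy\n",
    "x = [1, 2]\n",
    "a = [x, x]            # both elements reference the same list\n",
    "b = copy.deepcopy(a)\n",
    "b[0] is b[1]          # True: internal sharing is preserved in the copy\n",
    "```"
   ]
  },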
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "This time, because we made a deep copy, no change we make to `b` is reflected in `a` anymore.\n",
    "\n",
    "Multiplying a list by a number copies references\n",
    "===\n",
    "As a final example, let's look at a list of lists created by multiplication."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"L = [[1,2,3]]*4\n",
"L"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "As we know, multiplying a list by a number creates a new list in which the elements of the original list are repeated that many times. Here we have a list consisting of four lists. Let's assign to one of the elements inside the first element of this list."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[['abc', 2, 3], ['abc', 2, 3], ['abc', 2, 3], ['abc', 2, 3]]"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"L[0][0] = \"abc\"\n",
"L"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "The reason for this behavior is that it is the references to the list elements, not the elements themselves, that are copied into the new list. Indeed, when we check with the `is` operator, we see that the elements are the same object."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"L[0] is L[1]"
]
},
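  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To get truly independent sublists, a list comprehension that builds a fresh list on each iteration can be used instead of multiplication (a common idiom, shown here as a sketch):\n",
    "\n",
    "```python\n",
    "L = [[1, 2, 3] for _ in range(4)]\n",
    "L[0][0] = \"abc\"   # only the first sublist changes\n",
    "L[0] is L[1]      # False: each sublist is a separate object\n",
    "```"
   ]
  },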
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAASUAAACGCAYAAAHsn6wVAAAACXBIWXMAAA7EAAAOxQGMMD9aAAAw\nS0lEQVR4nO2dCTyU2/vA31ksEUL2pSxFWmghS90ULUIUpVQkdZMKKdr/rUqkJJU2JVKKLiWSFhKF\nUqmUVBIqJDtZxvznzP2NO42ZMasZOt/P5+VdzvLMmWee96zPweLxeISEyTSLwsz7yVoIj8GCP0+/\nIrvA/6NRyZdJ5xMU//3PCKQ4HBWKExgvOOGaddX9HDifqSu3MeVlxSFwXvAiV1FzxOhKQSHhDlJY\nGyN1z4THn46C84jjB02c3H2zUCgUPrMdETMRQBqwxqYz34dEp0RTZqKlpV34/n3h8J6EiYiIcNYx\ndyKepybEjJxu4/CGJBBAR0//K+m8o6MDjcViO0kCAZzXbMqkTBOblZYynFrxFxa+Y1i3SPGBQC3N\nTQKXzwYbLvfYlgHuudlPdZ741/RPLh5bM4BA9XU1wl5LZi8Ov/mYWKr2f2mv3XTgxC19k2nFXUIx\nmnFPkL66ASKi7SSBAGGxDyLIw4lLSP4iCQSIffgulHQOvrouoUhKPW/evOsE5jErEDM/Ckb4raRY\nEYgbcOzr4yR/tlDADhFAMRK214RiVCAAx4Vi5ZVD+evlakktDSmY/i5oycTckrx9je+uDR6oPf8H\nuG+irbClreG7YG45fje1eFwVKtJDJ1U/CJkIzkkCAdoaKgRz8k4eoRWP+zrV8RxzxU7IIehJmzaC\n1e/ILcnxExw4uJ1eFK4LRfqKFpLdyyys3E8vDpZUybNzdElHOjs64q5EmnFPRMboquRtOXT+Aemc\nlxU8fSXUTiwpUf0hWttzSwr3sSIYiRW2k5afjX8UTut5TPgxA4fl63Io73ficCg0BtNVL6epU8AC\nMyII+c+aJJCP6zyHwHPXYyjDkgTyWGKxOCQq+VJjQ53QQDGJVnKBfhOKVEokGLXA5F8fqPqCmiY1\ngQIft+msGF73WVJaphkIhK86P3CgjEsjtTSx4Gui1AtW60fkVV8SQEdAafoYCRYgiEzXfRQNgYhC\nkYTw9fUNwOFwmKCgoA3MCMLAB9jJzIckfEO7ur6+gIAAX2aE4SZcN57M1A5IwEoerOSRw9eVvOkq\nqO2ppfgug2yiLb8l8933A5VZJ1Rkjd1L6cXtta/vzHq9VPAfCESoTKEECN8oT4QC1hzBjsblluTv\nc96TYrnhjpCGwfIdz9Utt7+nF49rQpF/dQBa9XFqYMl78iJvPzs1Ysz4b5SBit8XyMw31VnDlpR0\n4HSdjNMQ63jgJDr1RdjwkXrfqRUSQG24ThX4MF+/FA+aY6jm1atS8glYW4elGduPRN4jv0leMSZH\nUVWtdubMmSkEZrKaIehssrOziysrK1NWUlIqp3xO3nNKybfSz4MUVIbWkq6jwg4ZL3HbmEW6bm9v\nwwgICOJo5R195oih48r1T0jXC6aOdL/64M0JamFJva+ka6wAFtPS46cjA4PB0BSEUUDPLbVC6gk5\nRZU6BF+D6kRJImiC5X2SfkeDvKCAiQC/jPN3y0a6mCu/oYw/YvS/v5i8s17jxq0Izqv5USlK/hy8\nnGk1TLDXLl2YAVogegaTvpBuUtMmwM/Sgp9JSUmzmf2A5IA+VXrPaWkT4N8WD7GQiIRevhMJ/pPa\nAKCQwDW1QgKMNfyrBPwHhQT+p76qCiR/Tl5I5NoEINoo0H4kZhx9+4Kh6azPlBl8zs9osp81ORBR\n1KH3GVlGTx4JYDOJnRxIgybtOHzAb6/itY6zllELOEFxMrdkIIJFI838kAbd9LmZeH+iXxQUKw1d\nZukXBUUOySBzuvD6XUGFhIR4EAjhdLp8X1DMNm+M7NYhuXbrdjMbr6fePL4vKH6hTxYUqRPnVU2n\n+HK9CR5gtI70LOJeqY6zmUrBxEm+HtmPAog/
wddfcZKjFDE1RrP8VmdFSF9CybnVM5tnnywoEqMl\n0d0+MP6k3hjErLqAVEiA9y2IyCgEqel4EyKDIAz3rPxGny2omKzKYQ7GskXk2gRYFlt9hTLsPA0M\nsV2ZW1qxJ3TbLMu1fm63mM2vTxYUZQccM6z1u810IQF+67gDJN175LTK1fkw4RQffu6sy3SzqYms\nCtVf6NKmiZPNio7H3LskO2IS8k/WR9KUqgmEV+yEv+cY2OU9zRnNDQFA5ZCZ7lheQCwkaRm5RvIZ\nfJScvpETp6Co/OPb17LBvSca/4D9VlWrRa+ASNx8WhZaXl6uxEonW18He+byzQOz7Zfmd91piBJB\nxJZQ7aJIePhqt/sipRXcEqb9gesQgannSkjXJR8LpYdoaFW3tf7Ckk+WJbHBxWZh0PkE4luNfIIt\nifhLZ8bZLl6ZB9p5wPai0ejfDPDdm1dHmlsvIHbq2RpreMRnfSRWIdzVUBtPFOO70sL+VkA9YDBl\nVhmjYWkBbJCJiUnmo0ePJtELdyv2oq6lvdNLcA4K6FdLs4DwAJHfJi6RCggACujju9eyGtqjKsH1\n33ZTlp2OS79AypNwdMuDVEAAUECng3aZ/r1hVxplOKarAYxOaqFHZmamCUhHWFj4F60wpAIiQVlA\nT9JTNAynzPxIfu/Y/s3mwRcTibOtT8Wm/TbflxG+fHovTe0+04XEbhfFgAEDWggMIF3TKnTflXYL\nAs7EXSVdZz9MVQczrwO2ec729TuaRFlAAP9TV6+RBgdm6clvIP/5dXS0o7FYgU5wTmuAwdRi7ltq\nsmBJv9uuOzTsEeBlWjx+gqMtrccMQV5A9AAFtMredBlBIy4c3esz3XNHIHF8+1tc/DDE79+p8u9e\n5SmoDRtRJSQ8oCNop9esDbuDb4+10SUOM4MCspustTYuozD0QdL1EVNnz+sqgB+issQvpq6megAY\nvhIZKNYGpreTzyYnB7vF3XHR2zrEhxHBbWZMOs5IOFYhN9oAUECk0RPSvSPvSroWE2iPHtc1GAsK\nCPwnn1kFCgj8Jy8gwL+zsRBEQlK6axiOvIDIjTYAKyoq2mShJueZ/KLiKEIHq3Hybt+/f9/Vw+fs\nlxBtUlVlhaTDwoVXfQ5fKaAWaIerxURCAclzQ4B2HF60hyBcHV5ihC7DHXPlygLwv7CwUMvZ2TkC\nGOjw8PDlI0eOfJOcnMw9ARgYNuL20FKP+VPe0NLSKnzy5IkhL4ThV/pkd0lvAwuJAfi+kHpjcLIn\n+L6Q+AFYSAzQrwqJvB3IyZ9pvyok0gw3CQmJOk6m268KiURtbe0gTqbXLwuJ0/B1IbEykgLCc2Lh\nBTl8XUj8AiwkBuiThUTqiMsJnm1k4JX0mPyZo4G0p97KK898V05/RLo3f9yg9RPXXMuRjrVUDytd\nIp/9Ojywe6q06ZOFlFSK348gLSgD99XPyO+XPwhVi86pPtqW6qSGINOJ934WZkpfy6s98ivBbphw\ncltm2KjlDPXCktMnC+lfBnQbQFCaupbofmXyxkr77JcIUVuktEyqlw9B+eKmri2PsEGKWMmpDxcS\ndRZsuGqX/fJ2188p2sPWMrwEH4C0pwqymmafLSRcwU5Zw+l7VoNzYJ8WHXo5yyJNX7r4ebum/hWH\nUaR7l0Pib80ZPXDjeKewFzt9kLus5NUnC+m8/+apvlv97+SW7+6qQ13eqHsb2diGOJGFI94jcONV\nI3H049n5bWNZya9PFhIoIFbijXfxe57tgjxnNl6fLKTepttMNxIV1fWao0eOyFu9ad99aweXF7QS\nuH390ujgPd7mhW8L1KWlpau5JSg36QsTyfidrmW8JMAsnsFy8vUpL74TbJ04kpRXfrinRGbNW/wK\nHMWtyLrirwhiOV7Jo+JbuRT3xIbwK13KpG8y9cPJaw+i/lUk1rn1rDwEdFgsn6Vrn5//chT7IkL6\nCkRlGiyn
0HD7+bcgTiYcfvtlrJS0dM3P6mpJTqbbm4DJnMZLwpchiACSXd62B03h/KO5sUEwcIeH\nRV1NtQi49tgekDpUU/sH1cRo5dHehgnevWHG96+lEsA/6Nqt/ndVhmr+ZDR+5MlA48x7ScMkB8s2\nAQen5POSGOWfqNPjH6beHC4+SPIXWF4/bMSYit8C4N5i9FV1toPT4M/4w5QrwElgE24/cO1RkRqi\nRPRH7fWmtZSeFndeVR+NiYlxINDN71Zv8fTp0wnKyspl8vLy31mJL2ByvIxyGf+mv+0XHDwdexXM\nltt55HwCZRxqs4IpCdi2drbPvmPJYNYd+N8tDT35jbTeEsCnAphMKyOn2LB0tU8WOMifLzIbs/ry\nvfyT9PJ/m/9MAfiYBco/d8nfz8BB/txptv7Ki0m5Z4gXmBE4UKcEM5TppYlds8o1IP7xp2P0ArHD\n8uXLw3mpTF++fFHV19fPpbwPZk0DRWd2Lv+LnEeqQJHohQGKtH2No92+49Fx1J4XF70d7OsXmkQ3\nDYIieS6dvfhoZNIlymdKQ9RrgCLRigsUyURNeHtm8S+aP/6OjnYMQZFoWjGgSKbaEpvT3tX505OT\nHCwaje5kNDArcGImOCcBs8lPnTq1ysnJ6SIr8YHnlc7OThTlegNq4Wg9k5aRa+rJlQ2AlsJUfSsX\n6+lVKDZIkq43G0qPL5wAG3kxYhHhP911DURwnzFgAvR/NzBIzMM3xzF0nIgBIiMjl7IrJDsATzSc\nnFGhoDykLv9plgrhFdeqqT26kvI5mGnvv3m11fZDZ28Qb3R+weiraGx/Ut6+h1RW4oOkWsBrBqzv\noFbHAfJ6L5uz8EjEzcvgGjjDTTDKqSP1LA4fpfcd1HMoX00kZo9T8qasulCmMcFkWvGVcyETF7p6\nZFNLA7yqmbFKAOyUySZ3lVSGRCdkl1B14EREbElzbtkSpvthLHRl1lRVVcn0HJJ/wZUkDArdX26+\nZqvfXZJGjplg3OUeEKysMlEfsA28lqQIFgcs2ehSJABalVDfaO9WduTe1Joa6oXAKwUsDgKvMGDN\nSYoEWBjXGkPuxBg4mSUpUuuvFuzTrAdqUtKyTSN0JxDdsFLr0qFMQ0R0YBtJkcCmAdkPUzUkCEpO\ncsX0W50PX4UOPRA0rQyH0B1kIbbmyktLZK2sbVJ2nUp4TC8woxB+cHjP+ZOm93VFAis8srMRuo0T\n0hI9oEis5iMqJt4K/gNFYjYuWHJjMm02S0ONJEBF3HSW7TuaAVAyxFYmOOil09XPlHgzgegmT0ZW\nrjr5RQVLFXKCEnXO1pNbU1FRIffo0SPYo/yH0W3Mqaqygri8sL29XUBPT++lhd3S1wvdNlNduUMw\n+53JUcGDwsNCbAsKCnRA5ZagSHu4LTSn4UCdiinn2VxMg2f85hCcEgEBgfY3b9706GJwvK8Xst3X\ny5uzokH6InA0HMIxoDJBOAZUJg4wZcqUdE50zvK6g5fduiNUJg6QlpZmymsZ+AGoTBCOAZWJj6H2\n2hMXF6+vq6uT4IU8PQGViY8JDg72IhBMfo9fFQkAlYmP8fT0PEquTPzgqYMeUJn4HG65SuAGUJnY\noKMTEenEIxhu5xNy7IT3qtWrz7bhEDFu5yWIoT4llxGgMrGBAAbV1BtLpIzmrUbyKxCmNuNlFXbG\nB6EyQTgGVCYIx4DK1EuAPYGiS/AHZdBIO4LgUEtHCHpLr/AuyAy5N55yRxcSnV/CBk002eJx62ON\nv6xgR/vqUUKebWsL8s+5ad0nD9eU7jZ0qmuWY1pRvr8IqgMPwrW4vXp9Ya3OXXxFmLihWc4qZr29\nsAJUJp6AwUe+xQUhbTcF9QnKRCsUWtWtNrfU7X/zw7DIFG2Rb/cx2G4dmaJTwj7nfED2g/POplLB\nb434AYtmDivklvS0gMrUB7i4WGn+uQpr2fS7DccX9hAWLarWFh629JqFnY
uDQ34UW6uzmQUqE58B\n9pd4OXRjK9g/oilt1dBp2wRssjPLjzrRCVd+wUx3rt8Py7T3LwmvOXznmcPXJwtr+zO8KphTQGXi\nJYLWbbkl1r/Vl8A+HPc+yKqCc1HTU5+zMxGqu0KQh1Nadu9lzjLkfzvIoJBNdxrOb+Ku5FSBytRL\nMLozINivxEcGoTrnnqVwcm712a/duF75BkBlgnAMms6+SDQ0tw7eH3jk4O1bN01LPhXJgfBD1DS+\nW1pZJ2/f5O0L9qjqJVkhfAxNq7R5d8DF+6kpkw6ejbtqt3JzKeGIpAzztLxp+5ZVlvOnmOjf9Nuz\nq1e6+jlN7S9E+8NPpKcGEqQHuilSUcm3ifPmzo07n/jkrP1K30/0IoMVoMGRt4hb/xlMNvtwJeLM\nDHV1dbpxIP2T3xQpLDJuPzJAShEoEbMJnYi5F5X58unmO3dSn7u5raLrDwjS/+hSpNhbD9ZLKGhK\nDtPR/cxqYsBRQvkgKe24uDg7AlR9EUH6J0RFauvoHPgg47GVi8fWDHYTBI4WEjLvLbFqbU0UEhJq\nZV9ESF+AqEiKSkrfOOm70sZx5UslJZlvP35UQe+6fwjY+qZf8iev3md6j+yeiLqbH15dXS3dV/1/\nA8Bofm0ngtHe9CYr0kMnlfI52IH+6vlQg1+/WrDmVvMLnNx9M5nN43FaisaVc0cNiX6YLOa+XbJq\nA8Oui0o+FkqfPbJnSt3PapEx+salrl47HjK7WLPqe7lYWOD/TW1pbhIEzk+pvZXOWwguPZHfro6W\ndGmmNcMAa2O34E5g+I3r9DLboI5aP/R6e+I6PSzDfn4Gyyo0WlkZJT5+/NiI0Tj8CNij5t9pIf/x\ns6pCdO/GFXOAwy1z6wVde6xHnQoyUlZVryEoBG1fRv8DeOL1WGKx5Gz8o3Aj0//2eb90+rARcFEI\nFJNefIuxihsSc78c2Rt6qeu7u5t4Tafw9XOFNZv332PkswHvbwlPio/uCDrX5XwMpPHmeY4SaZt1\ngEtyW+Sy/01VoZUWtrqqimtukyWkZdt7DsVdjh8/voYAR7d0Pxu8dwq51zYSwJoA14FlJR8llYdo\n0HXKtdvbxRYoEeX9xX97P/7w7pVs8fsCGbXhOlXU4lrpq65Pfv61W1UEKJ+x6awPN66Ej52zcDnd\n/TqInnqpePsFaUw2t3pPz/0gNbDznNyeMhqYWeYtdWN68xFOs3bt2lBwAAevhw8f9gbLgthJb72T\nleORi4nRtJ4Dt4H03CkDdq93saHniRf4wLQ2GOJ1M6ckmPLZg6TrI4AlohUXuIl+9jhtKD1Fep6d\noZqc110RSQAvc+C1Ses5NbDjJv7rl5AbjJwwpZ5baTNLZ2cnGqwrAwc7SlVfVyPcUxivHYfo7vwD\nnKb2lIbv/uNU3TUH7904Y+rseW/pxd0WcPpmJw6HQmMwVOtLR3Z7z+zy200D4G+8saFOCPjE7ElW\nAFZCenAzIwFZQWiACI7XnjqoQVKqu3fvmh8/G7WH8BJmOC65o1JaqKoPp9vAIDkVpcdQDS2qOxcM\nkuz5+wJ+M8tLPknS8nkpOlC8R+UArqZLiz9IMfJ5AdjaH1WioqJibYwEZpaO1qZ2Xq8opVRkKyur\nxEuXLi0G6+/BNRhrq2RiShiojG7cG9JtdwBysh4ka44ca0DT8fz9xFid7YFnbtJLA+wooKI2rJtk\nPfkSB3wr/TyInuNUAQGBHv24N9TXCmuO6O5mmhZY8D5lxVsrI7zJeSBiqGbFjaSZAihPdHS0o5iY\nGMsLBUkw0sS/k3Bl1ErvnengnHxmI+m5B1mLiBbnQ/ZPnr9sTS6Cr0EFbN9lsdHvaDLYg2VPaNR1\n8CWLiQ/6RSvumoXTl17PLPrPWS1FGoRWWgJoedLz9OvhOGsJM0Nl2NiIExN6quED4leNsk4XQros\nl8LCKwXH3PXu04tz9cIJQ1cH3ioSpy
2i48r1T+hVpn1X2i24lv62q5X4PvnksDtGLnXkimTruCKP\n3h4pB7e4WyY+Lf23Qo1vQF+7cEJ/A0EJwCXYXQA46I99+C6UWtys+8maJ69R9AtSpCEjr9TgaK7r\nFn33ZRi1NApe5CpuO3TmBrVntMBqaqj3uAoh6BOeZiuBHm2/WtCsxON3gBItmDrS3d13331Sn1Hh\n6+fyaxfNWHonv/I35XAIC7tV+mGEarc0CEoE0li37eBd0NwG90C9ZvkcI1egRF2vMLQqbpKYYi35\nTg5AicC+KDNsFr62tHciTrcFjt1X2Zsu8w+7ek1OUeX3Rg6VNIASeS21dJxpu+i1hd2SfHAP19GB\n9nKydAQyUds9gR7YuJhLM2Pu54Wysi0UPZoqP9ckJCTYcDJNXgA2qB1i6l3sYCz7W2fs1QdviLsx\ngGYy2GFAa9TY76mvqrr1+tKbGktKA/QbgZ7tMROMy6hZqSPvSrq1Lkmb6xQVvJRznK7nll5Yf4Be\nS4xaGqQpQK+ePVZePseYuDPXseiUKMpwYLPdxx+eKSKIIq3kESxwrbx5xVzb+OwSjk79WDBz4taK\nigq2+mx4DSPzrYdoaLE9BMTsr5+cYTq6xD3dQP8Rq2mMHm9UBv5Ly8o3UnsONiQejyDP1/oht2il\nQRy0LSstkXNy93nksT2Qpb2wKUmKCBT9+vUrbfWF9Du65iN5LHfwepHzyF3PYNIXdhKsLnldY2Iw\nLgODwdDdrgrSv+hSpAkTJjytq7t36f6t2JXTLO17HHSkxsenqXWYtrpye3t7hgYNIf2H36bampmZ\n3Zswoe7puAlar2LSC7sNKNIC7HWyfLaeXeqdO9NlZWVZft/zAgkhpEhPHgngtRxsspMDn4GtNLpN\n/gcu6T4WFapGREQ4HwwM2n7gzD+JSkM0qLqpa/xR2uCzwm7RcheXcy9fvNBlVQhegkIhOCwK4dow\nUW+BRbP/GdhJg+ZyJGdn5whwgPPw8PDliYmJVkVFRcPAOJWmpuYHS0vLWytXrjyTm5PD0V3BIX0T\nhlbbgg2VwcFtYSB9F7hsG8IRoCJBOAJUJAhHgIoE4QhQkSAcASoSB+CH6cTsygA2QWQnPlQkDsDt\n6cRASXg9ZbknoCJBOAJUJAhHgIoE4QhQkSAcASoSnzJ48OAfwJsL6ZrUKuPXSjdUJD7lx48fg/mh\nW4FRoCL1ITZs2MC3U3agIvEx4PUGLBPp+tChQxt5KQ89oCLxMVVVVTJ95fUGFamP4OPj0yv7irAK\nVCQ+R0ZGpgpYpoCAAF9ey0IPqEh8TmVlpWxfeL1BRWKDNhwi1hv5bNqy5VBv5AWWlQlgEJqubugB\nFYkN8iuQXtnIx37t/qbeyEtfCbWT1Q5PqEgQjgAVCcIRoCJBOAJUJAhHgIoE4QhQkXqB9geuQ4yX\nhC9DEAEku7xtD/Asi3R8xVgPU/a1TW6/7KqN+Uw9Jh456zXDOiI+XVdmlEXl8esJEQqCCBUf2W2o\nI3+bzYtNeaIjr2v1PfTaP5EK2Lft+qo628HT4M/4wyYCCNsefekBFamXEDA5XpZ11f0cOP+a6Dbc\ndm3yAkMh+n02a7Ux61uck99lfE7dV5PjpzxDXWRzdlnzv4pIxhxV4S0TQwruZJ7WjvuZvU95pobw\npuyyX3tyy/G73dVQvTLQCxWJB1Q0jhHM+Ry2b6c22oN2KBxqwopNz502ziQuE5I02FY2ENne+aYD\nGTAaS+5+pg21I+LmVf2p2h/AldTE7WWC+B34EhwiqIahZr24A1QkHjB2ofvrnkNh8Ms2HkjrusR9\nwDQhA9AjsUjL7+EE8fpTLT+QrnJP2+t3SJr/6k0lAkBF6gvg61EWasM3b7jbfJ7ytUai8/MhqYkm\nPusEJNRavc9m9Pp+wlCR+Bx83RNho5HGm/yyfoWZqQpW0AqHHrrxZ275xt1Az6xV0Vvl3uBPmooh\nXN
kahBpQkfiZpqdChiOnbfqnuNNfUYDGq6r1seC6pccdjl2Nivz3BgoxkkbXZOa1KZtOEYSK9EdC\nsfnMnBH6G0LfdgZ3UyLycEIT2/OzjNUOJGwfv8VG+1nzp2tS8ZV42X8mC/a4NQgngYrEAxYNQW3+\n0IEIES/MsM5gZ5kVqe2Rq7QbSro2n8G9x3zDIQLu2mgv8rhr0jvOL1NvKP9vkxo0Pr20du/eJVMX\nGq17ZSmnY1YZ8br9sBIa4crWabSAisQDLpfg/ak/UUW6Np/BDMeBfqAewwFQEvgdl/Iu7+CSvIwA\nFamXwJUkDArdX26+ZqvfXXoTfqhtPsNSOHwVOvRA0LQyHCLIjJysAhWpFxCYeq4kOxvp3TVpKJnO\ntVv974KjN7KDigThCFCRIBwBKhKEI2DxeNZWujQ0t8pei7+18nL0JfustDs6Q4eN+DHWYHKJutbI\nKhW1YdWy8koNIgPFWrFYbCcBVGN9nfDPHxWiJZ/eS38ueivzPDtDFex+aDzZNG/pYsfz8+1so0RE\nRPr8niB9kdpfiJb5NNPbzx6nD+W1LJA/m/FGU4oZfru14/Aix85c3Htg15aV+pPMisHO5WNM51US\njgTC44Se4ktISreAneFJO19SoJjyvMg/MuyQ8cPb8cN27Nixb4272xGCQetg5gNBWAYFDBLtrisI\npHcAq+J6NEpxSWmebi6Oe+2cVz9dvm5rRvIc58PcEIZQu/q59eCpRMIBfJMN3OIfeuN8yD6jc2dO\nu9jY2MRzI08IBMJ/0DRKpy7G+u3w9Vx3NCrpUvLzr1wxRLQA3hMclq/LAUdZycepqmoapzb5bNiz\nxt39eG/KAYFAep9uRqmwuNxokqF+yt7jl64n5ZX3qjGihvIQjZrrmR9Pvn/zYvRgGdmfjzIeGmtr\na7O0IzgEAuF/fjNKB4+dO3Y58rzNrWdlR9BoNF/5eho+Uu/77ZeVIWtdbaLM/jKM27JlywFeywSB\nQDhPl1HauGN/dOH7ohFn4x/x9f7a/ucSEk8d8J3s5eUlFxwc7MVreSAQCGchGqVTEVd3ZWakGxyL\nToliOaWGKJENuktXPcQNF3VKeBOzTg9bxDEpKVi1JSDnwAanyceOHVtH4Bi38oHQBnjLmO0UvrRe\ncUaLnc3YVyIIplPG1LNkgbFsEb35xe3tbZjLZ4INL50KMtLRMyifbG71Xm24TpW0jFxTU0O9UOX3\ncrHsh6kaacn/aI8ab1i2ZvP+e0M0tKo5LT+YpnLjSvjYc8F7/1JQHlI3ebp1oYbWqEppWfnGXy3N\nAlXfy8VzH91Xe3jnhtboCUal7pv87g/V1P7Bibzf5j9TOOG/1ay46K0MyFdX36RURl6xAYPBdv6o\n+Cr25kWuYnpKgra4hGTLmi3774HRbk7kS0lpcZFUWOD/Tc3JuKtuNHXWh3ET/yqRllNoFBIU6vhR\n+X3gu9d5CuDzD5Ic3NyjHPgq9LMLwbqPv+GkkaYXQnciU3QrxF3aDz8PD2PWqwm2rrFFznuN65bk\nvK98u/8FNXwPhmdajJXbM3/+/Gvy8vLfeS3PnwpmiE2ty2b3BzJopJ1eOGBwXOYYuZrNtnv798bd\naWBKCbVwI3QnIFNm2hT6+oUmget3r/IUZo9X9t584MStv2bMYXsNHzCK7gvMnIAhBHnYOq7IoxXW\n3HrBmy0HwxLBOZhTZ20wxMtzR2AquM9K3hdC/Sfd/ufS6CMXbl6mVwEA6YN8wHldTfUAN/upzmrD\nRvzYRCgDVvKlJPFahO7pQzunBl24cdnvxGWay/5n2y/N9951JAWc19fVCK9ZOH2prIJy/c4j57tP\nAULJdI538Xs+nnCKrwgTL4hOGVFBdKDMPFi/g4cP2ju754oMFOvVtXbsgsFiO93Wb72xf//+rSEh\nIXQ8x/Rvrl+/Ps/Ozi4OjUZ3GhkZPQZGGhyKiopfeS0biedPHg7Z
u8F1ztX0guNYrEAnM3G1R4/7\nlvSs7DD4QafEXx5F70fUE8XvC2RWO5g5xT0sDBUVE2fKkcswHd2KmzklwVfPhxqsd7ZedCTi5mVG\n43bicKiF5mNWr995OGXZ2s2PmMkXzO8Li30Q8fHda9lZYxU2XM94f4yd3+rfdlOW2SxyfX4j+3Mw\nM/EItbZfx6+kRpaXfJKcMUbW51r621AgG6ty0AObkpw4xfP/gu5zI3FuY2xu+93Dcfr8P9kokSA0\nR9CZmZkm4PDy8gom3Sc3Vvb29rFKSkrlvSlXc1Oj4Ppl1ouSnpYdZtYgkQN+zPs2rphz5VzIxIWu\nHtnMxgeGYZW96bKolLxTzBokcha4rM0BP8xzR/f95eq5/SEjcXatX2a7yNUz28h05kdW89XQHlW5\n8/D5BC9nK8fTcekXWEnjyC7vmYZTZny0tHd6yaocYAJ00PmEy6sXmDlHp74IYzUdemBLP3+UActC\nuJE4t5FTUf9VWlqqgsPhMARwvJaHH2HEWM2wtn/BrfyzHiRrGk2Z+ZETNXELuyX5Z4J2T2HFKD17\nkj5UU3t0BWh+sCuH5Xynl7vXu9gwapTu34rT2R545ia7+YJ+H2Dgqiu/DwR9X8zGT70RMzIiOfcM\nu3KAVRntba2Y4qK3g0Gzkt30KMGCiYr8urtzT6BRCK6vys4LhIWFf5mbm98FhsjW1jZeXFyc+AOt\n/YVocyvPgWISrY0NdUKcSKuhrlZYTGLQL1biioqKtTY1NXBEDtA/xkxtS3iASDuoMQoKCbO9bOpX\nc5PAANGBLBl4IAeIz64MRDlamgUGiIjS7UdkFexQdc2vXz69l5YaLMvSdl68pLK0CKemplYMa0m/\nQ8v48ALQXAjes2HGq7wnyqPHGVJb98gwYKTMe3fwbVbi6ujpfxUQEMQ9Sb+jAWRiR47wEL/Jjiu8\nnjAaHtTsIkL9J3n+36E77OQbE37MAHSCi7BolEDT8+LJQJNtAafZqrXdSbgyCvSxySup1rGTDi2w\nVtZzbj1Ium6kZzDpCzcy4CZZd/5RsLa2Zrta3JfR1NT8EBMT42BpaXlLVFSUL18sF5Nyz9j/pb12\ni39YImiCMBu/hfB2d7E2XLFqw+60sRMn/66nnV8wV+YPsw960qmt7pmdE+077jaGhrPpU7FpFxym\njXIH0wymzp73llk5cB0d6NULpjnPsF30ytRi7n+rCnqQYcX6/0s/sMnNKmDb2tmkUUVmuXgiwOTZ\n47ShRyOTLtEM1IMcC1d4Zh/d6zMdND2pjqAxwK3Yi7pXzh6deDH5KdvNQFpgt/mu3ywlPbjO1Wv7\nQ/FBUuz3pne8x1y0FHC8SDMABlFxy8i7usMoEUtDeRgB1dnWdCzogNfHjx81WE2jPzBmzJh8cPBa\nDnqAZgsY7bmbeE1npq7cRt/9x5PMLO0LeopXVfFVbJv7IjsJgl5G3n52GtR0ugVCq+IWXnsT22Gr\n5/BQSqqFXlsejNjGPnwXmp6SoAXk8Nl3LJmR4f2G+lph0Mn+s6pCNPjirehuTTcGZABTC0CLZI6h\nmqfF3MWv/t6wKw3IQy9f0DURd/HkhLOEGmLguX9iaE2jYEYOMNXg65fiQbbGGh7TCZ/dzWfvg57k\nANy7FasTsHXN7B2HwxPAd9FTeHbACgoKtkVGRi3zcZ3rfyouPYLllMSWNAd9WnKEg7LRBIVC8PvX\nL5kUGBjoIy0t3Sc76f9EzK3mF4ADjIQlxUaOSb4eNSb/aZaKjLxSA5jAWE340Vd9LxcDL0fTWbbv\nwJu955GmZtQr/7nGpxo9JE85D42ntSUHOWAeFDiAHGDOTtK1i7oF+U8VQXNETkG5vqmxQajia6l4\nR3s7ZvIM68LFf3s/Png69iq7MqiqD6++8aSY6HQ2N/O+Wsy5kIlPsx4MVVRRq5WVV6rvwHVg6muq\nB3wpLpIaNXZiud1St6d2Tquf
gik7PX0mZuRQVFWrjc/6GEKU49E9tUunDhu9yH2kqqY54gfoQBcS\nHtAOXghgGoLKUM2fYC6XtYPLC0ZeJJyAOKN7ru2cmI+fPmluWTV/0YFT12J7I2NWAQYpbLe7hvbw\nYa9XrlzJtSokhDHaM9coz1ZZsxXsg6i96UVWhIfO3Z4MAxqDwYOJeeAg3QN+dFjz5ySCH731VUbG\nViSD2ZhADqv5zi/BwZ4czMugbzKtGBy/3WO5DNiQY5JZMeVMbZbkwL3FnLfSdTyR365OuoUWRVhy\n2vjf2jdvLz8VZaXiJTP0Ai8k5Z5jZ04Jt8Cg8C3ejmaz58+3j3F3dz/Ba3n+ZIDH8dTSc/t4LQeE\nT8CMwLkkt0W6cCCp37wEOCyYHz3d3Oz2GD3NN147g9N/68zjMV/yH9ascXHYnJOTY6CiolLKa3kg\nEAh36OZPSUpK6mfZlxKF1NTU6RZ6cjH7T8XGdxvx6EWqPr2sWbnQ0ufQoUMbv337psArOSAQSO9A\n0/Pk9OnTU6sqK6RevHihN2+SRrzhtNlf1m4LfMiJCWA9gcF3NF45vnvIpYvn50ZHRzuWlZUpczvP\nP51xRlNKQF8CL2Xgdf4kWJVj1DjDstd5T9jS1b5eBuzC0MYBenp6Lz59+jgUnCcnJ1t4enmFyCmq\nNixx35I33mQaW5PhSKBRSMeXN09azoUcMM9/+UIH1IoO7t97iHCs40T6EPoMEkbePctKG8pLGfhl\nZQEv5YBl8C9M7dVlYWGRTDiGgXMg9KNHGZMuX768KD4+3haDFcBPNPnrlYbWyArlocPq5RRVGgQH\niCLCwiLt7a0t+NaWJvTPynJsaXHRoE/vC+SyszL06uvrxaysrBIXLVp02XratPu25gl9yn0KBALh\nPCxvIAis6eTJkzPAceLECXfGYumzmh0EAvlDgLuaQiAQvgIaJQgEwldAowSBQPgKaJQgEAhfAY0S\nBALhK6BRgkAgfAU0ShC+gB8mDQLYkYPdSYf9oQw4ATRKEAiEr4BGCQKB8BXQKEEgEL4CGiUIBMJX\nQKMEgUD4CmiUIBAmOXr0qCf5bsPkgBE48uv169cfOXz4sHevCNZPgEYJAmEST0/Po3v37t1RXV0t\nTS8c2AQUGiTmgUYJAmGBHz9+DKasFVFSV1cn0Vvy9CegUYJAWCQkJMSDQAi1Zz4+PoG9LU9/ARol\nCIRF1q1bdww046qqqmTI70tISNQFBAT48kquvg40ShAIG1RWVspSNuNqa2sH8UicfgE0ShAIm5A3\n4zZv3uzPa3n6OtAoQSBsAppx+/bt297a2ip04MCBLbyWp68DjRKEJxiYmBbmZqUP57UcnKanEbm+\nxATjKR9yM9OG9Xa+0ChBeAIwSLnl+N28lgNCG15tSAmNEgQC4SugUYJAIHwFNEoQCISvgEYJAoHw\nFdAoQSAQvgIaJUi/of2B65DZTuFL6xVntNjZjH0lgmA6ZUw9SxYYyxaRPOG3lcQPOrl+leXlnEpN\nHBi8x47FuaXkRrlqYz4zl1sH6tvNlcPXeUXYlvxChAXEVVpV1BVrhBqKsZ8+VUq3IiiUhJF/aehF\nnxjtAUhTj8nha1Evj1gZeh/JNKvHYzAi8iMa1ZXEajt+fhjwqbhKqg2PRokb7is7dnFLjI5IVfOz\nC8G6j7/hpJGmF0J3IlN0K8Rd2g8/Dw8zEUAamC03fgMaJUi/AzPEptZls/sDGTTS/t/dRtRtV5k5\ne9/MUHcLeZ2YpfckdvcYm1VJLYg48zngUB+O6E9ceujlTLRB0NfrsesvqmCQVvLnn44aGCwO8J21\nfBFm8bXr3heU0Egb7fQaUPfXqM3enNA4QdUtKy95h1GiIMFM/fe8A/UhWN9gaeDWWSucBy68fnXd\nxfEufs/HE57gK8LEC6JTRlQgCF/shMIJoFGC/CEMxM8615Iwi3TZhgiynhYGr7n++ZPH65En1J
+j\n8VLy8g1oFILHozF4dI/pieGnnai5lXMCuUX9eQfS3NAihEfQiJiyWoMEmnCjHwONEgTCUZpQ745Z\n6K8OeDQT0fKtPhnleVmBbi2pO51l4RLbnQPmv22oGfizsmpgq/AQ3EizZR/9M3xOmaqLfOeW5PwC\nNEoQCCdoL8be2zF36p6ofKM2BetG75T6i/N1BpawkhRaeXnd/nvLz3Z2NGJqv+RL5CUeHXE6dLeJ\nb+IZrSkB6Xf9HTSeYH5r3vUvoFGCQNgAX5s2IHTRnPmRr1rUBk89WHzy7frDOmKoRk6kjcYOxEmp\nG/809zDONF/tmuunb7Ei3mea+RHVD7UbjQTeciIPfgQaJQiEFfBV6HRv/em+18oN5Wyj3t1IdDgg\nj2GumdZFRz729Kzxi8+8Qw3V3vjs8Xmv0alYippQx7uHkq9qOsURtFKnvAKGI0aPX4FGCfJH0Fl8\nSnLHujM2pbj/dXDj61BfmxFxpLMQE+c+0Spd6H8GBTMMZ3ss6vY8NUw50vkFc2X+MPugJ53a6p7Z\nOdG+424Tm034b+j4JcPm7k9rHoUdPKxF4mOghK9V4DKqGWN0cAtOXrhlpYL+TjM97JiOv28XRQ3x\nnj5jz6ExhkaHMEYiclpNaiqSNUh9iWDxh/LBzTgELaBsX/9/V6JjrIeiv/ZWufECaJQgfwRotVU1\nfomrLjAXSRW38Nqb2A5bPYeHUlItXWPuKIVO20uNcbYIEseR9ADYobiZIUXJM0OQZKbS7IdAowSB\n0KQZ9cp/rvGpRg/JU85D49Fsdy5zOr3+CTRKkH5He+Ya5dkqa7YiiACivelFVoSHzl3WDIAIfvTW\nVxkZW5EMzkjGofRwbzHnrXQdT+S3q5NuoUWRZrbF4xOgUYL0GwSmnitJLT23j9dycB3MCJxLcluk\nC6/l4BLQKEEgEL4CGiUIBMJXQKME4Qn6xlMKeeUDmltgsQK4jo52DK/l4BSE76iIF/n+P0QGcN+A\n6KPTAAAAAElFTkSuQmCC\n",
"text/plain": [
"<IPython.core.display.Image object>"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Image(\"img/atamalar6.png\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In lists replicated by multiplication, it is the references, not the elements themselves, that get copied. This is a feature that generally improves efficiency: imagine replicating the same objects not four times but a million times; copying them all would needlessly take up memory. Still, if we do want a true copy, we can use the `deepcopy()` function together with a list comprehension, as follows."
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"L = [copy.deepcopy(i) for i in [[1,2,3]]*4]\n",
"L"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This way the elements cease to be the same object, and an assignment made to one of them does not affect the others."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"False"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"L[0] is L[1]"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[['abc', 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"L[0][0]=\"abc\"\n",
"L"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
|
{
"pile_set_name": "Github"
}
|
package com.wanjian.sak.system.traversals;
import android.view.View;
public interface ViewTraversalsListener {

    void onBeforeTraversal(View rootView);

    void onAfterTraversal(View rootView);
}
|
{
"pile_set_name": "Github"
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotData.R
\name{plotDataScatter}
\alias{plotDataScatter}
\title{Scatterplots of feature values against latent factors}
\usage{
plotDataScatter(object, view, factor, features = 10, color_by = NULL,
name_color = "", shape_by = NULL, name_shape = "",
showMissing = TRUE)
}
\arguments{
\item{object}{a \code{\link{MOFAmodel}} object.}
\item{view}{character vector with a view name, or numeric vector with the index of the view.}
\item{factor}{character vector with a factor name, or numeric vector with the index of the factor.}
\item{features}{if an integer, the total number of features to plot (10 by default).
If a character vector, a set of manually-defined features.}
\item{color_by}{specifies groups or values used to color the samples.
This can be either:
(a) a character giving the name of a feature,
(b) a character giving the name of a covariate (only if using MultiAssayExperiment as input), or
(c) a vector of the same length as the number of samples
specifying discrete groups or continuous numeric values.}
\item{name_color}{name for the color legend}
\item{shape_by}{specifies groups or values used to shape the samples.
This can be either:
(a) a character giving the name of a feature present in the training data,
(b) a character giving the name of a covariate (only if using MultiAssayExperiment as input), or
(c) a vector of the same length as the number of samples specifying discrete groups.}
\item{name_shape}{name for the shape legend}
\item{showMissing}{logical indicating whether to show samples
with missing values for the color or the shape.
Default is TRUE.}
}
\value{
a scatterplot of features against a factor
}
\description{
Function to do a scatterplot of the feature(s) values against the latent factor values.
}
\details{
One of the first steps for the annotation of a given factor
is to visualise the corresponding loadings,
using for example \code{\link{plotWeights}} or \code{\link{plotTopWeights}}.
These functions display the top features that are driving the heterogeneity captured by a factor. \cr
However, one might also be interested in visualising the coordinated heterogeneity in the input data,
rather than looking at "abstract" weights. \cr
This function generates scatterplots of features against factors (each dot is a sample),
so that you can observe the association between them. \cr
A similar function for doing heatmaps rather than scatterplots is \code{\link{plotDataHeatmap}}.
}
\examples{
# Load CLL data
filepath <- system.file("extdata", "CLL_model.hdf5", package = "MOFAdata")
MOFA_CLL <- loadModel(filepath)
# plot scatter for top 5 features on factor 1 in the view mRNA:
plotDataScatter(MOFA_CLL, view="mRNA", factor=1, features=5)
# coloring by the IGHV status (features in Mutations view), not showing samples with missing IGHV:
plotDataScatter(MOFA_CLL, view="mRNA", factor=1, features=5, color_by="IGHV", showMissing=FALSE)
}
|
{
"pile_set_name": "Github"
}
|
<?php
/*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* This software consists of voluntary contributions made by many individuals
* and is licensed under the MIT license. For more information, see
* <http://www.doctrine-project.org>.
*/
namespace Doctrine\DBAL;
/**
* Class to store and retrieve the version of Doctrine.
*
* @link www.doctrine-project.org
* @since 2.0
* @author Benjamin Eberlei <kontakt@beberlei.de>
* @author Guilherme Blanco <guilhermeblanco@hotmail.com>
* @author Jonathan Wage <jonwage@gmail.com>
* @author Roman Borschel <roman@code-factory.org>
*/
class Version
{
    /**
     * Current Doctrine Version.
     */
    const VERSION = '2.5.12';

    /**
     * Compares a Doctrine version with the current one.
     *
     * @param string $version The Doctrine version to compare to.
     *
     * @return integer -1 if older, 0 if it is the same, 1 if version passed as argument is newer.
     */
    public static function compare($version)
    {
        $currentVersion = str_replace(' ', '', strtolower(self::VERSION));
        $version = str_replace(' ', '', $version);

        return version_compare($version, $currentVersion);
    }
}
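// Illustrative usage, assuming the VERSION constant above ('2.5.12'):
//
//     Version::compare('2.0.0');  // -1: the version passed is older than 2.5.12
//     Version::compare('2.5.12'); //  0: same version
//     Version::compare('2.6.0');  //  1: the version passed is newer than 2.5.12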
|
{
"pile_set_name": "Github"
}
|
library(h2o)
h2o.init(nthreads = -1)
## If possible download from the s3 link and change the path to the dataset.
small_test <- "http://h2o-public-test-data.s3.amazonaws.com/bigdata/laptop/lending-club/LoanStats3a.csv"
## Task 1: Import Data
loanStats <- h2o.importFile(path = small_test, parse = F)
## Parse with user imposed schema which changes the column types of column:
## 'int_rate', 'revol_util', 'emp_length', 'verification_status' to String instead of Enum
col_types <- c('Numeric', 'Numeric', 'Numeric', 'Numeric', 'Numeric', 'Enum', 'String', 'Numeric',
'Enum', 'Enum', 'Enum', 'String', 'Enum', 'Numeric', 'String', 'Time', 'Enum', 'Enum',
'String', 'Enum', 'Enum', 'Enum', 'Enum', 'Enum', 'Numeric', 'Numeric', 'Time', 'Numeric',
'Enum', 'Enum', 'Numeric', 'Numeric', 'Numeric', 'String', 'Numeric', 'Enum', 'Numeric',
'Numeric', 'Numeric', 'Numeric', 'Numeric', 'Numeric', 'Numeric', 'Numeric', 'Numeric',
'Enum', 'Numeric', 'Enum', 'Time', 'Numeric', 'Enum', 'Numeric')
loanStats <- h2o.parseRaw(data = loanStats, destination_frame = "loanStats", col.types = col_types)
## Task 2: Look at the levels in the response column loan_status
## Hint: Use h2o.table function on the response column, use as.data.frame to return the table to R
## Task 3: Filter out all loans that are completed, aka subset data
## Hint: "Current", "In Grace Period", "Late (16-30 days)", "Late (31-120 days)" are ongoing loans
## Task 4: Bin the response variable to good/bad loans only, use your best judgment for what is a good/bad loan
## Create new column called bad_loan which should be a binary variable
## Hint: You can turn the bad_loan column into factor using as.factor
## Task 5: String munging to clean string columns before converting to numeric
## Hint: Columns that need munging includes "int_rate", "revol_util", "emp_length"
## Example for int_rate using h2o.strsplit, trim, as.numeric
loanStats$int_rate <- h2o.strsplit(loanStats$int_rate, split = "%")
loanStats$int_rate <- h2o.trim(loanStats$int_rate)
loanStats$int_rate <- as.numeric(loanStats$int_rate)
## Now try for revol_util yourself
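## One possible answer, mirroring the int_rate steps above (a sketch; assumes
## revol_util values look like "12.3%"):
loanStats$revol_util <- h2o.strsplit(loanStats$revol_util, split = "%")
loanStats$revol_util <- h2o.trim(loanStats$revol_util)
loanStats$revol_util <- as.numeric(loanStats$revol_util)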
## Now we're going to clean up emp_length.
## Use h2o.sub to remove " year" and " years", also translate n/a to ""
loanStats$emp_length <- h2o.sub(x = loanStats$emp_length, pattern = "([ ]*+[a-zA-Z].*)|(n/a)", replacement = "")
## Use h2o.trim to remove any trailing spaces
loanStats$emp_length <- h2o.trim(loanStats$emp_length)
## Use h2o.sub to convert < 1 to 0 years and do the same for 10 + to 10, then convert to numeric
## Hint: Be mindful of spaces between characters
## Task 6: Create new feature called "credit_length_in_years"
## Hint: Use the columns "earliest_cr_line" and "issue_d"
## Task 7: Use h2o.sub to create two levels for column "verification_status" ie "verified" and "not verified"
## Hint: Use h2o.table to examine levels within "verification_status"
## Task 8: Define your response and predictor variables
myY <- "bad_loan"
myX <- c()
## Task 9: Do a test-train split (80-20)
## Hint: Use h2o.splitFrame ONLY once
## Hint: Use h2o.table to see if the ratio of the response class is maintained
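## One possible 80-20 split, following the hints above (a sketch; the seed is
## arbitrary and only makes the split reproducible):
splits <- h2o.splitFrame(data = loanStats, ratios = 0.8, seed = 1234)
train <- splits[[1]]
valid <- splits[[2]]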
## Task 10: Build model predicting good/bad loan
## Note: Use any of the classification methods available including GLM, GBM, Random Forest, and Deep Learning
## Task 11: Plot the scoring history to make sure you're not overfitting
## Hint: Use plot function on the model object
## Task 12: Plot the ROC curve for the binomial models and get auc using h2o.auc
## Hint: Use h2o.performance and plot to grab the modelmetrics and then plotting the modelmetrics
## Task 13: Check the variable importance and generate confusion matrix for max F1 threshold
## Hint: Use h2o.varimp for non-GLM model and use h2o.confusionMatrix
## Task 14: Score the entire data set using the model
## Hint: Use h2o.predict.
## Extra: Calculate the money gain/loss if model is implemented
## Calculate the total amount of money earned or lost per loan
loanStats$earned <- loanStats$total_pymnt - loanStats$loan_amnt
## Calculate how much money will be lost to false negatives, vs how much will be saved due to true positives
## (`pred` here is the prediction frame returned by h2o.predict in Task 14)
loanStats$pred <- pred[,1]
net <- as.data.frame(h2o.group_by(data = loanStats, by = c("bad_loan", "pred"), gb.control = list(na.methods = "ignore"), sum("earned")))
n1 <- net[ net$bad_loan == 0 & net$pred == 0, 3]
n2 <- net[ net$bad_loan == 0 & net$pred == 1, 3]
n3 <- net[ net$bad_loan == 1 & net$pred == 1, 3]
n4 <- net[ net$bad_loan == 1 & net$pred == 0, 3]
## Function defined to pretty print numerics as dollars
printMoney <- function(x) {
  x <- round(abs(x), 2)
  format(x, digits = 10, nsmall = 2, decimal.mark = ".", big.mark = ",")
}
## Calculate the amount of earned
print(paste0("Total amount of profit still earned using the model : $", printMoney(n1) , ""))
print(paste0("Total amount of profit forfeited using the model : $", printMoney(n2) , ""))
print(paste0("Total amount of loss that could have been prevented : $", printMoney(n3) , ""))
print(paste0("Total amount of loss that still would've accrued : $", printMoney(n4) , ""))
## Calculate Net
print(paste0("Total profit by implementing model : $", printMoney( n1 - n2 + n3 - n4)))
|
{
"pile_set_name": "Github"
}
|
name: "CaffeNet"
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TRAIN
}
transform_param {
mirror: true
crop_size: 227
mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
}
# mean pixel / channel-wise mean instead of mean image
# transform_param {
# crop_size: 227
# mean_value: 104
# mean_value: 117
# mean_value: 123
# mirror: true
# }
data_param {
source: "examples/imagenet/ilsvrc12_train_lmdb"
batch_size: 256
backend: LMDB
}
}
layer {
name: "data"
type: "Data"
top: "data"
top: "label"
include {
phase: TEST
}
transform_param {
mirror: false
crop_size: 227
mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
}
# mean pixel / channel-wise mean instead of mean image
# transform_param {
# crop_size: 227
# mean_value: 104
# mean_value: 117
# mean_value: 123
# mirror: false
# }
data_param {
source: "examples/imagenet/ilsvrc12_val_lmdb"
batch_size: 50
backend: LMDB
}
}
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 96
kernel_size: 11
stride: 4
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "norm1"
type: "LRN"
bottom: "pool1"
top: "norm1"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "norm1"
top: "conv2"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 2
kernel_size: 5
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "norm2"
type: "LRN"
bottom: "pool2"
top: "norm2"
lrn_param {
local_size: 5
alpha: 0.0001
beta: 0.75
}
}
layer {
name: "conv3"
type: "Convolution"
bottom: "norm2"
top: "conv3"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}
layer {
name: "conv4"
type: "Convolution"
bottom: "conv3"
top: "conv4"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 384
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
}
layer {
name: "conv5"
type: "Convolution"
bottom: "conv4"
top: "conv5"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
group: 2
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu5"
type: "ReLU"
bottom: "conv5"
top: "conv5"
}
layer {
name: "pool5"
type: "Pooling"
bottom: "conv5"
top: "pool5"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
}
}
layer {
name: "fc6"
type: "InnerProduct"
bottom: "pool5"
top: "fc6"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 4096
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}
layer {
name: "drop6"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc7"
type: "InnerProduct"
bottom: "fc6"
top: "fc7"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 4096
weight_filler {
type: "gaussian"
std: 0.005
}
bias_filler {
type: "constant"
value: 1
}
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "fc7"
top: "fc7"
}
layer {
name: "drop7"
type: "Dropout"
bottom: "fc7"
top: "fc7"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "fc8"
type: "InnerProduct"
bottom: "fc7"
top: "fc8"
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
inner_product_param {
num_output: 1000
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0
}
}
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "fc8"
bottom: "label"
top: "accuracy"
include {
phase: TEST
}
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "fc8"
bottom: "label"
top: "loss"
}
|
{
"pile_set_name": "Github"
}
|
/* @flow */
import type { L10nsStrings } from '../formatters/buildFormatter'
// Latvian
const strings: L10nsStrings = {
prefixAgo: 'pirms',
prefixFromNow: null,
suffixAgo: null,
suffixFromNow: 'no šī brīža',
seconds: '%d sek.',
minute: 'min.',
minutes: '%d min.',
hour: 'st.',
hours: '%d st.',
day: '1 d.',
days: '%d d.',
month: 'mēnesis.',
months: '%d mēnesis.',
year: 'gads',
years: '%d gads',
wordSeparator: ' ',
}
export default strings
|
{
"pile_set_name": "Github"
}
|
{
"index_name": "reason-node",
"start_urls": [
"https://kennetpostigo.github.io/reason-node/docs/en/whatWhy.html",
"https://kennetpostigo.github.io/reason-node/docs/"
],
"stop_urls": [],
"selectors": {
"lvl0": {
"selector": "//*[contains(@class, 'navItemActive')]/../../preceding::h3[1]",
"type": "xpath",
"global": true,
"default_value": "Documentation"
},
"lvl1": ".post h1",
"lvl2": ".post h2",
"lvl3": ".post h3",
"lvl4": ".post h4",
"text": ".post article p, .post article li"
},
"min_indexed_level": 1,
"conversation_id": [
"496613272"
],
"nb_hits": 583
}
|
{
"pile_set_name": "Github"
}
|
/*
* DeleteReservationModal.js
*
* The DeleteReservationModal component allows a user to delete a
* reservation.
*
* Initially, the modal is hidden. To show the modal, the properties
* of the DeleteReservationModal component should be set, then the
* "show()" method can be called. The modal will hide itself
* automatically when the user submits a command or closes it
* manually. If necessary, the "hide()" method also closes it.
*
* A DeleteReservationModal emits a "deleted" event when a reservation
* is deleted. There is no associated payload.
*
*/
(function() {
const template = `
<div>
<!-- Delete reservation modal -->
<div
aria-hidden="true"
aria-labelledby="Delete Reservation?"
class="modal fade"
ref="modal"
role="dialog"
tabindex="-1"
>
<div class="modal-dialog modal-sm modal-dialog-centered" role="document">
<div class="modal-content">
<div
class="modal-header m-3"
style="padding-bottom: 0px; margin-bottom: 5px !important; border: none;"
>
<h5 class="modal-title text-center col-12">
<b>Delete reservation "{{reservation.Name}}"?</b>
</h5>
<button
aria-label="Close"
class="close"
data-dismiss="modal"
style="position: absolute; right: 15px; top: 10px;"
type="button"
>
<span aria-hidden="true">×</span>
</button>
</div>
<!-- Buttons at bottom of modal -->
<div
class="modal-footer m-3"
style="padding-top: 20px; margin-top: 20px;"
>
<!-- Cancel, exits modal, only shows on main reservation page -->
<button
class="modalbtn igorbtn btn btn-secondary mr-auto cancel"
data-dismiss="modal"
type="button"
>Cancel</button>
<!-- Delete, sends a igor del command to the server -->
<button
class="modalbtn gobtn igorbtn btn btn-primary modalcommand"
style="background-color: #a975d6; border-color: #a975d6;"
type="button"
v-on:click="deleteReservation()"
>
<span>Delete</span>
</button>
</div>
</div>
</div>
</div>
<loading-modal
body="This may take some time..."
header="Deleting Reservation"
ref="loadingModal"
></loading-modal>
</div>
`;
window.DeleteReservationModal = {
template: template,
components: {
LoadingModal,
},
data() {
return {
reservation: {},
};
},
methods: {
show() {
this.reservation = this.$store.state.selectedReservation;
$(this.$refs['modal']).modal('show');
},
hide() {
$(this.$refs['modal']).modal('hide');
},
showLoading() {
this.$refs['loadingModal'].show();
},
hideLoading() {
this.$refs['loadingModal'].hide();
},
deleteReservation() {
this.hide();
this.showLoading();
$.get(
'run/',
{run: `igor del ${this.reservation.Name}`},
(data) => {
const response = JSON.parse(data);
let msg = response.Message;
if (msg == '\n') {
msg = `Successfully deleted ${this.reservation.Name}`;
}
this.$store.commit('updateReservations', response.Extra);
this.$store.commit('setAlert', msg);
setTimeout(() => {
this.hideLoading();
this.$emit('deleted');
}, 500);
}
);
},
},
};
})();
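The response handling in `deleteReservation()` treats a bare `'\n'` message as silent success. That defaulting rule can be factored into a small standalone helper; this is a sketch only, and `normalizeMessage` is a hypothetical name, not part of the component:

```js
// Hypothetical helper mirroring the response handling in deleteReservation():
// the server returns a Message of '\n' when `igor del` succeeds silently.
function normalizeMessage(response, reservationName) {
  if (response.Message === '\n') {
    return `Successfully deleted ${reservationName}`;
  }
  return response.Message;
}
```

Keeping this logic in a plain function makes it unit-testable without mounting the component or mocking `$.get`.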
|
{
"pile_set_name": "Github"
}
|
klepto module documentation
===========================
archives module
---------------
.. automodule:: klepto.archives
:members:
:undoc-members:
:private-members:
:special-members:
:show-inheritance:
:imported-members:
.. :exclude-members:
crypto module
-------------
.. automodule:: klepto.crypto
:members:
:undoc-members:
:private-members:
:special-members:
:show-inheritance:
:imported-members:
.. :exclude-members:
keymaps module
--------------
.. automodule:: klepto.keymaps
:members:
:undoc-members:
:private-members:
:special-members:
:show-inheritance:
:imported-members:
.. :exclude-members:
rounding module
---------------
.. automodule:: klepto.rounding
:members:
:undoc-members:
:private-members:
:special-members:
:show-inheritance:
:imported-members:
.. :exclude-members:
safe module
-----------
.. automodule:: klepto.safe
:members:
:undoc-members:
:private-members:
:special-members:
:show-inheritance:
:imported-members:
.. :exclude-members:
tools module
------------
.. automodule:: klepto.tools
:members:
:undoc-members:
:private-members:
:special-members:
:show-inheritance:
:imported-members:
.. :exclude-members:
|
{
"pile_set_name": "Github"
}
|
/* crc32.c -- compute the CRC-32 of a data stream
* Copyright (C) 1995-2006, 2010 Mark Adler
* For conditions of distribution and use, see copyright notice in zlib.h
*
* Thanks to Rodney Brown <rbrown64@csc.com.au> for his contribution of faster
* CRC methods: exclusive-oring 32 bits of data at a time, and pre-computing
* tables for updating the shift register in one step with three exclusive-ors
* instead of four steps with four exclusive-ors. This results in about a
* factor of two increase in speed on a Power PC G4 (PPC7455) using gcc -O3.
*/
/* @(#) $Id$ */
/*
Note on the use of DYNAMIC_CRC_TABLE: there is no mutex or semaphore
protection on the static variables used to control the first-use generation
of the crc tables. Therefore, if you #define DYNAMIC_CRC_TABLE, you should
first call get_crc_table() to initialize the tables before allowing more than
one thread to use crc32().
*/
#ifdef MAKECRCH
# include <stdio.h>
# ifndef DYNAMIC_CRC_TABLE
# define DYNAMIC_CRC_TABLE
# endif /* !DYNAMIC_CRC_TABLE */
#endif /* MAKECRCH */
#include "zutil.h" /* for STDC and FAR definitions */
#define local static
/* Find a four-byte integer type for crc32_little() and crc32_big(). */
#ifndef NOBYFOUR
# ifdef STDC /* need ANSI C limits.h to determine sizes */
# include <limits.h>
# define BYFOUR
# if (UINT_MAX == 0xffffffffUL)
typedef unsigned int u4;
# else
# if (ULONG_MAX == 0xffffffffUL)
typedef unsigned long u4;
# else
# if (USHRT_MAX == 0xffffffffUL)
typedef unsigned short u4;
# else
# undef BYFOUR /* can't find a four-byte integer type! */
# endif
# endif
# endif
# endif /* STDC */
#endif /* !NOBYFOUR */
/* Definitions for doing the crc four data bytes at a time. */
#ifdef BYFOUR
# define REV(w) ((((w)>>24)&0xff)+(((w)>>8)&0xff00)+ \
(((w)&0xff00)<<8)+(((w)&0xff)<<24))
local unsigned long crc32_little OF((unsigned long,
const unsigned char FAR *, unsigned));
local unsigned long crc32_big OF((unsigned long,
const unsigned char FAR *, unsigned));
# define TBLS 8
#else
# define TBLS 1
#endif /* BYFOUR */
/* Local functions for crc concatenation */
local unsigned long gf2_matrix_times OF((unsigned long *mat,
unsigned long vec));
local void gf2_matrix_square OF((unsigned long *square, unsigned long *mat));
local uLong crc32_combine_(uLong crc1, uLong crc2, z_off64_t len2);
#ifdef DYNAMIC_CRC_TABLE
local volatile int crc_table_empty = 1;
local unsigned long FAR crc_table[TBLS][256];
local void make_crc_table OF((void));
#ifdef MAKECRCH
local void write_table OF((FILE *, const unsigned long FAR *));
#endif /* MAKECRCH */
/*
Generate tables for a byte-wise 32-bit CRC calculation on the polynomial:
x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1.
Polynomials over GF(2) are represented in binary, one bit per coefficient,
with the lowest powers in the most significant bit. Then adding polynomials
is just exclusive-or, and multiplying a polynomial by x is a right shift by
one. If we call the above polynomial p, and represent a byte as the
polynomial q, also with the lowest power in the most significant bit (so the
byte 0xb1 is the polynomial x^7+x^3+x+1), then the CRC is (q*x^32) mod p,
where a mod b means the remainder after dividing a by b.
This calculation is done using the shift-register method of multiplying and
taking the remainder. The register is initialized to zero, and for each
incoming bit, x^32 is added mod p to the register if the bit is a one (where
x^32 mod p is p+x^32 = x^26+...+1), and the register is multiplied mod p by
x (which is shifting right by one and adding x^32 mod p if the bit shifted
out is a one). We start with the highest power (least significant bit) of
q and repeat for all eight bits of q.
The first table is simply the CRC of all possible eight bit values. This is
all the information needed to generate CRCs on data a byte at a time for all
combinations of CRC register values and incoming bytes. The remaining tables
allow for word-at-a-time CRC calculation for both big-endian and little-
endian machines, where a word is four bytes.
*/
local void make_crc_table()
{
unsigned long c;
int n, k;
unsigned long poly; /* polynomial exclusive-or pattern */
/* terms of polynomial defining this crc (except x^32): */
static volatile int first = 1; /* flag to limit concurrent making */
static const unsigned char p[] = {0,1,2,4,5,7,8,10,11,12,16,22,23,26};
/* See if another task is already doing this (not thread-safe, but better
than nothing -- significantly reduces duration of vulnerability in
case the advice about DYNAMIC_CRC_TABLE is ignored) */
if (first) {
first = 0;
/* make exclusive-or pattern from polynomial (0xedb88320UL) */
poly = 0UL;
for (n = 0; n < sizeof(p)/sizeof(unsigned char); n++)
poly |= 1UL << (31 - p[n]);
/* generate a crc for every 8-bit value */
for (n = 0; n < 256; n++) {
c = (unsigned long)n;
for (k = 0; k < 8; k++)
c = c & 1 ? poly ^ (c >> 1) : c >> 1;
crc_table[0][n] = c;
}
#ifdef BYFOUR
/* generate crc for each value followed by one, two, and three zeros,
and then the byte reversal of those as well as the first table */
for (n = 0; n < 256; n++) {
c = crc_table[0][n];
crc_table[4][n] = REV(c);
for (k = 1; k < 4; k++) {
c = crc_table[0][c & 0xff] ^ (c >> 8);
crc_table[k][n] = c;
crc_table[k + 4][n] = REV(c);
}
}
#endif /* BYFOUR */
crc_table_empty = 0;
}
else { /* not first */
/* wait for the other guy to finish (not efficient, but rare) */
while (crc_table_empty)
;
}
#ifdef MAKECRCH
/* write out CRC tables to crc32.h */
{
FILE *out;
out = fopen("crc32.h", "w");
if (out == NULL) return;
fprintf(out, "/* crc32.h -- tables for rapid CRC calculation\n");
fprintf(out, " * Generated automatically by crc32.c\n */\n\n");
fprintf(out, "local const unsigned long FAR ");
fprintf(out, "crc_table[TBLS][256] =\n{\n {\n");
write_table(out, crc_table[0]);
# ifdef BYFOUR
fprintf(out, "#ifdef BYFOUR\n");
for (k = 1; k < 8; k++) {
fprintf(out, " },\n {\n");
write_table(out, crc_table[k]);
}
fprintf(out, "#endif\n");
# endif /* BYFOUR */
fprintf(out, " }\n};\n");
fclose(out);
}
#endif /* MAKECRCH */
}
#ifdef MAKECRCH
local void write_table(out, table)
FILE *out;
const unsigned long FAR *table;
{
int n;
for (n = 0; n < 256; n++)
fprintf(out, "%s0x%08lxUL%s", n % 5 ? "" : " ", table[n],
n == 255 ? "\n" : (n % 5 == 4 ? ",\n" : ", "));
}
#endif /* MAKECRCH */
#else /* !DYNAMIC_CRC_TABLE */
/* ========================================================================
* Tables of CRC-32s of all single-byte values, made by make_crc_table().
*/
#include "crc32.h"
#endif /* DYNAMIC_CRC_TABLE */
/* =========================================================================
* This function can be used by asm versions of crc32()
*/
const unsigned long FAR * ZEXPORT get_crc_table()
{
#ifdef DYNAMIC_CRC_TABLE
if (crc_table_empty)
make_crc_table();
#endif /* DYNAMIC_CRC_TABLE */
return (const unsigned long FAR *)crc_table;
}
/* ========================================================================= */
#define DO1 crc = crc_table[0][((int)crc ^ (*buf++)) & 0xff] ^ (crc >> 8)
#define DO8 DO1; DO1; DO1; DO1; DO1; DO1; DO1; DO1
/* ========================================================================= */
unsigned long ZEXPORT crc32(crc, buf, len)
unsigned long crc;
const unsigned char FAR *buf;
uInt len;
{
if (buf == Z_NULL) return 0UL;
#ifdef DYNAMIC_CRC_TABLE
if (crc_table_empty)
make_crc_table();
#endif /* DYNAMIC_CRC_TABLE */
#ifdef BYFOUR
if (sizeof(void *) == sizeof(ptrdiff_t)) {
u4 endian;
endian = 1;
if (*((unsigned char *)(&endian)))
return crc32_little(crc, buf, len);
else
return crc32_big(crc, buf, len);
}
#endif /* BYFOUR */
crc = crc ^ 0xffffffffUL;
while (len >= 8) {
DO8;
len -= 8;
}
if (len) do {
DO1;
} while (--len);
return crc ^ 0xffffffffUL;
}
#ifdef BYFOUR
/* ========================================================================= */
#define DOLIT4 c ^= *buf4++; \
c = crc_table[3][c & 0xff] ^ crc_table[2][(c >> 8) & 0xff] ^ \
crc_table[1][(c >> 16) & 0xff] ^ crc_table[0][c >> 24]
#define DOLIT32 DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4
/* ========================================================================= */
local unsigned long crc32_little(crc, buf, len)
unsigned long crc;
const unsigned char FAR *buf;
unsigned len;
{
register u4 c;
register const u4 FAR *buf4;
c = (u4)crc;
c = ~c;
while (len && ((ptrdiff_t)buf & 3)) {
c = crc_table[0][(c ^ *buf++) & 0xff] ^ (c >> 8);
len--;
}
buf4 = (const u4 FAR *)(const void FAR *)buf;
while (len >= 32) {
DOLIT32;
len -= 32;
}
while (len >= 4) {
DOLIT4;
len -= 4;
}
buf = (const unsigned char FAR *)buf4;
if (len) do {
c = crc_table[0][(c ^ *buf++) & 0xff] ^ (c >> 8);
} while (--len);
c = ~c;
return (unsigned long)c;
}
/* ========================================================================= */
#define DOBIG4 c ^= *buf4++; \
c = crc_table[4][c & 0xff] ^ crc_table[5][(c >> 8) & 0xff] ^ \
crc_table[6][(c >> 16) & 0xff] ^ crc_table[7][c >> 24]
#define DOBIG32 DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4
/* ========================================================================= */
local unsigned long crc32_big(crc, buf, len)
unsigned long crc;
const unsigned char FAR *buf;
unsigned len;
{
register u4 c;
register const u4 FAR *buf4;
c = REV((u4)crc);
c = ~c;
while (len && ((ptrdiff_t)buf & 3)) {
c = crc_table[4][(c >> 24) ^ *buf++] ^ (c << 8);
len--;
}
buf4 = (const u4 FAR *)(const void FAR *)buf;
while (len >= 32) {
DOBIG32;
len -= 32;
}
while (len >= 4) {
DOBIG4;
len -= 4;
}
buf = (const unsigned char FAR *)buf4;
if (len) do {
c = crc_table[4][(c >> 24) ^ *buf++] ^ (c << 8);
} while (--len);
c = ~c;
return (unsigned long)(REV(c));
}
#endif /* BYFOUR */
#define GF2_DIM 32 /* dimension of GF(2) vectors (length of CRC) */
/* ========================================================================= */
local unsigned long gf2_matrix_times(mat, vec)
unsigned long *mat;
unsigned long vec;
{
unsigned long sum;
sum = 0;
while (vec) {
if (vec & 1)
sum ^= *mat;
vec >>= 1;
mat++;
}
return sum;
}
/* ========================================================================= */
local void gf2_matrix_square(square, mat)
unsigned long *square;
unsigned long *mat;
{
int n;
for (n = 0; n < GF2_DIM; n++)
square[n] = gf2_matrix_times(mat, mat[n]);
}
/* ========================================================================= */
local uLong crc32_combine_(crc1, crc2, len2)
uLong crc1;
uLong crc2;
z_off64_t len2;
{
int n;
unsigned long row;
unsigned long even[GF2_DIM]; /* even-power-of-two zeros operator */
unsigned long odd[GF2_DIM]; /* odd-power-of-two zeros operator */
/* degenerate case (also disallow negative lengths) */
if (len2 <= 0)
return crc1;
/* put operator for one zero bit in odd */
odd[0] = 0xedb88320UL; /* CRC-32 polynomial */
row = 1;
for (n = 1; n < GF2_DIM; n++) {
odd[n] = row;
row <<= 1;
}
/* put operator for two zero bits in even */
gf2_matrix_square(even, odd);
/* put operator for four zero bits in odd */
gf2_matrix_square(odd, even);
/* apply len2 zeros to crc1 (first square will put the operator for one
zero byte, eight zero bits, in even) */
do {
/* apply zeros operator for this bit of len2 */
gf2_matrix_square(even, odd);
if (len2 & 1)
crc1 = gf2_matrix_times(even, crc1);
len2 >>= 1;
/* if no more bits set, then done */
if (len2 == 0)
break;
/* another iteration of the loop with odd and even swapped */
gf2_matrix_square(odd, even);
if (len2 & 1)
crc1 = gf2_matrix_times(odd, crc1);
len2 >>= 1;
/* if no more bits set, then done */
} while (len2 != 0);
/* return combined crc */
crc1 ^= crc2;
return crc1;
}
/* ========================================================================= */
uLong ZEXPORT crc32_combine(crc1, crc2, len2)
uLong crc1;
uLong crc2;
z_off_t len2;
{
return crc32_combine_(crc1, crc2, len2);
}
uLong ZEXPORT crc32_combine64(crc1, crc2, len2)
uLong crc1;
uLong crc2;
z_off64_t len2;
{
return crc32_combine_(crc1, crc2, len2);
}
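The byte-wise table lookup (the `DO1` macro) and the GF(2) matrix trick behind `crc32_combine()` port directly to other languages. A JavaScript sketch of both, for illustration only (not the zlib API):

```js
// CRC-32 table for the bit-reflected polynomial 0xEDB88320,
// built exactly as make_crc_table() builds crc_table[0].
function makeCrcTable() {
  const table = new Uint32Array(256);
  for (let n = 0; n < 256; n++) {
    let c = n;
    for (let k = 0; k < 8; k++) {
      c = c & 1 ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
    }
    table[n] = c >>> 0;
  }
  return table;
}

const CRC_TABLE = makeCrcTable();

// Byte-at-a-time update, equivalent to the DO1 macro.
// The standard check value: crc32 of the bytes of "123456789" is 0xcbf43926.
function crc32(bytes, crc = 0) {
  crc = (crc ^ 0xffffffff) >>> 0;
  for (const b of bytes) {
    crc = (CRC_TABLE[(crc ^ b) & 0xff] ^ (crc >>> 8)) >>> 0;
  }
  return (crc ^ 0xffffffff) >>> 0;
}

// GF(2) matrix-vector product and matrix squaring,
// as in gf2_matrix_times() and gf2_matrix_square().
function gf2MatrixTimes(mat, vec) {
  let sum = 0;
  for (let i = 0; vec; i++, vec >>>= 1) {
    if (vec & 1) sum = (sum ^ mat[i]) >>> 0;
  }
  return sum;
}

function gf2MatrixSquare(square, mat) {
  for (let n = 0; n < 32; n++) square[n] = gf2MatrixTimes(mat, mat[n]);
}

// Combine two CRCs as if their inputs were concatenated (crc32_combine_).
function crc32Combine(crc1, crc2, len2) {
  if (len2 <= 0) return crc1 >>> 0;
  const even = new Uint32Array(32); // even-power-of-two zeros operator
  const odd = new Uint32Array(32);  // odd-power-of-two zeros operator
  odd[0] = 0xedb88320;              // operator for one zero bit
  let row = 1;
  for (let n = 1; n < 32; n++) {
    odd[n] = row;
    row = (row << 1) >>> 0;
  }
  gf2MatrixSquare(even, odd); // operator for two zero bits
  gf2MatrixSquare(odd, even); // operator for four zero bits
  do {
    gf2MatrixSquare(even, odd);
    if (len2 & 1) crc1 = gf2MatrixTimes(even, crc1);
    len2 = Math.floor(len2 / 2);
    if (len2 === 0) break;
    gf2MatrixSquare(odd, even);
    if (len2 & 1) crc1 = gf2MatrixTimes(odd, crc1);
    len2 = Math.floor(len2 / 2);
  } while (len2 !== 0);
  return (crc1 ^ crc2) >>> 0;
}
```

As in the C code, combining runs in O(log len2) matrix applications rather than feeding len2 zero bytes through the shift register.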
|
{
"pile_set_name": "Github"
}
|
---
id: 5900f4f21000cf542c510005
challengeType: 5
title: 'Problem 390: Triangles with non rational sides and integral area'
videoUrl: ''
localeTitle: 'Problem 390: Triangles with non rational sides and integral area'
---
## Description
<section id="description">Consider the triangle with sides √5, √65 and √68. It can be shown that this triangle has area 9. <p>S(n) is the sum of the areas of all triangles with sides √(1 + b²), √(1 + c²) and √(b² + c²) (for positive integers b and c) that have an integral area not exceeding n.</p><p>The example triangle has b = 2 and c = 8.</p><p>S(10⁶) = 18018206.</p><p>Find S(10¹⁰).</p></section>
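As a quick numeric check of the example (not part of the challenge seed): with squared side lengths x = 1 + b², y = 1 + c², z = b² + c², Heron's formula gives 16A² = 2(xy + yz + zx) − (x² + y² + z²), and b = 2, c = 8 indeed yields area 9:

```js
// Area of a triangle from its squared side lengths (Heron's formula).
function areaFromSquaredSides(x, y, z) {
  const sixteenASquared = 2 * (x * y + y * z + z * x) - (x * x + y * y + z * z);
  return Math.sqrt(sixteenASquared) / 4;
}

// Example triangle: b = 2, c = 8 → sides √5, √65, √68.
const area = areaFromSquaredSides(5, 65, 68); // → 9
```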
## Instructions
<section id="instructions">
</section>
## Tests
<section id='tests'>
```yml
tests:
  - text: <code>euler390()</code> should return 2919133642971.
testString: assert.strictEqual(euler390(), 2919133642971);
```
</section>
## Challenge Seed
<section id='challengeSeed'>
<div id='js-seed'>
```js
function euler390() {
// Good luck!
return true;
}
euler390();
```
</div>
</section>
## Solution
<section id='solution'>
```js
// solution required
```
</section>
|
{
"pile_set_name": "Github"
}
|
<?xml version="1.0"?>
<ZopeData>
<record id="1" aka="AAAAAAAAAAE=">
<pickle>
<global name="Category" module="erp5.portal_type"/>
</pickle>
<pickle>
<dictionary>
<item>
<key> <string>_Add_portal_content_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Add_portal_folders_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Copy_or_Move_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Delete_objects_Permission</string> </key>
<value>
<tuple>
<string>Assignor</string>
<string>Manager</string>
</tuple>
</value>
</item>
<item>
<key> <string>_Modify_portal_content_Permission</string> </key>
<value>
<tuple>
<string>Assignee</string>
<string>Assignor</string>
<string>Manager</string>
<string>Owner</string>
</tuple>
</value>
</item>
<item>
<key> <string>default_reference</string> </key>
<value> <string>6857</string> </value>
</item>
<item>
<key> <string>description</string> </key>
<value> <string>inländische Kap.Ges.</string> </value>
</item>
<item>
<key> <string>id</string> </key>
<value> <string>19</string> </value>
</item>
<item>
<key> <string>int_index</string> </key>
<value> <int>19</int> </value>
</item>
<item>
<key> <string>portal_type</string> </key>
<value> <string>Category</string> </value>
</item>
<item>
<key> <string>title</string> </key>
<value> <string>Aufwendungen aus der Veräußerung von Anteilen an Kapitalgesellschaften 100 %/50 % nicht abzugsfähig</string> </value>
</item>
</dictionary>
</pickle>
</record>
</ZopeData>
|
{
"pile_set_name": "Github"
}
|
// (C) Copyright 2005 Matthias Troyer
// (C) Copyright 2006 Douglas Gregor <doug.gregor -at- gmail.com>
// Use, modification and distribution is subject to the Boost Software
// License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// Authors: Matthias Troyer
// Douglas Gregor
/** @file packed_oarchive.hpp
 *
 *  This header provides the facilities for packing Serializable
 *  data types into a buffer using @c MPI_Pack. The buffers can then
 *  be transmitted via MPI and unpacked either via the facilities
 *  in @c packed_iarchive.hpp or @c MPI_Unpack.
 */
#ifndef BOOST_MPI_PACKED_OARCHIVE_HPP
#define BOOST_MPI_PACKED_OARCHIVE_HPP
#include <boost/mpi/datatype.hpp>
#include <boost/archive/basic_archive.hpp>
#include <boost/archive/detail/auto_link_archive.hpp>
#include <boost/archive/detail/common_oarchive.hpp>
#include <boost/mpi/detail/packed_oprimitive.hpp>
#include <boost/mpi/detail/binary_buffer_oprimitive.hpp>
#include <boost/serialization/string.hpp>
#include <boost/serialization/collection_size_type.hpp>
#include <boost/serialization/item_version_type.hpp>
namespace boost { namespace mpi {
#ifdef BOOST_MPI_HOMOGENEOUS
typedef binary_buffer_oprimitive oprimitive;
#else
typedef packed_oprimitive oprimitive;
#endif
/** @brief An archive that packs binary data into an MPI buffer.
*
 * The @c packed_oarchive class is an Archiver (as in the
* Boost.Serialization library) that packs binary data into a buffer
* for transmission via MPI. It can operate on any Serializable data
* type and will use the @c MPI_Pack function of the underlying MPI
* implementation to perform serialization.
*/
class BOOST_MPI_DECL packed_oarchive
: public oprimitive
, public archive::detail::common_oarchive<packed_oarchive>
{
public:
/**
* Construct a @c packed_oarchive for transmission over the given
* MPI communicator and with an initial buffer.
*
* @param comm The communicator over which this archive will be
* sent.
*
* @param b A user-defined buffer that will be filled with the
* binary representation of serialized objects.
*
* @param flags Control the serialization of the data types. Refer
* to the Boost.Serialization documentation before changing the
* default flags.
*/
packed_oarchive( MPI_Comm const & comm, buffer_type & b, unsigned int flags = boost::archive::no_header)
: oprimitive(b,comm),
archive::detail::common_oarchive<packed_oarchive>(flags)
{}
/**
* Construct a @c packed_oarchive for transmission over the given
* MPI communicator.
*
* @param comm The communicator over which this archive will be
* sent.
*
* @param flags Control the serialization of the data types. Refer
* to the Boost.Serialization documentation before changing the
* default flags.
*/
packed_oarchive ( MPI_Comm const & comm, unsigned int flags = boost::archive::no_header)
: oprimitive(internal_buffer_,comm),
archive::detail::common_oarchive<packed_oarchive>(flags)
{}
// Save everything else in the usual way, forwarding on to the Base class
template<class T>
void save_override(T const& x, mpl::false_)
{
archive::detail::common_oarchive<packed_oarchive>::save_override(x);
}
// Save it directly using the primitives
template<class T>
void save_override(T const& x, mpl::true_)
{
oprimitive::save(x);
}
// Save all supported datatypes directly
template<class T>
void save_override(T const& x)
{
typedef typename mpl::apply1<use_array_optimization,T>::type use_optimized;
save_override(x, use_optimized());
}
// output archives need to ignore the optional information
void save_override(const archive::class_id_optional_type & ){}
// explicitly convert to char * to avoid compile ambiguities
void save_override(const archive::class_name_type & t){
const std::string s(t);
* this->This() << s;
}
void save_override(const archive::class_id_type & t){
const boost::int_least16_t x = t;
* this->This() << x;
}
void save_override(const archive::version_type & t){
const boost::int_least8_t x = t;
* this->This() << x;
}
private:
/// An internal buffer to be used when the user does not supply his
/// own buffer.
buffer_type internal_buffer_;
};
} } // end namespace boost::mpi
// required by export
BOOST_SERIALIZATION_REGISTER_ARCHIVE(boost::mpi::packed_oarchive)
BOOST_SERIALIZATION_USE_ARRAY_OPTIMIZATION(boost::mpi::packed_oarchive)
#endif // BOOST_MPI_PACKED_OARCHIVE_HPP
|
{
"pile_set_name": "Github"
}
|
#!/usr/bin/env jake
'use strict';
/* eslint-disable no-undef */
/* eslint-disable no-console */
const migrateTests = require('./jakelib/migrate-test').taskList;
const tests = [
...migrateTests
];
task('default', tests, () => {
console.log('All done.');
});
|
{
"pile_set_name": "Github"
}
|
/*!
* Ext JS Library 3.1.1
* Copyright(c) 2006-2010 Ext JS, LLC
* licensing@extjs.com
* http://www.extjs.com/license
*/
.x-panel {
border-color: #d0d0d0;
}
.x-panel-header {
color:#333;
font-weight:bold;
font-size: 11px;
font-family: tahoma,arial,verdana,sans-serif;
border-color:#d0d0d0;
background-image: url(../images/gray/panel/white-top-bottom.gif);
}
.x-panel-body {
border-color:#d0d0d0;
background-color:#fff;
}
.x-panel-bbar .x-toolbar, .x-panel-tbar .x-toolbar {
border-color:#d0d0d0;
}
.x-panel-tbar-noheader .x-toolbar, .x-panel-mc .x-panel-tbar .x-toolbar {
border-top-color:#d0d0d0;
}
.x-panel-body-noheader, .x-panel-mc .x-panel-body {
border-top-color:#d0d0d0;
}
.x-panel-tl .x-panel-header {
color:#333;
font:bold 11px tahoma,arial,verdana,sans-serif;
}
.x-panel-tc {
background-image: url(../images/gray/panel/top-bottom.gif);
}
.x-panel-tl, .x-panel-tr, .x-panel-bl, .x-panel-br{
background-image: url(../images/gray/panel/corners-sprite.gif);
border-bottom-color:#d0d0d0;
}
.x-panel-bc {
background-image: url(../images/gray/panel/top-bottom.gif);
}
.x-panel-mc {
font: normal 11px tahoma,arial,helvetica,sans-serif;
background-color:#f1f1f1;
}
.x-panel-ml {
background-color: #fff;
background-image:url(../images/gray/panel/left-right.gif);
}
.x-panel-mr {
background-image: url(../images/gray/panel/left-right.gif);
}
.x-tool {
background-image:url(../images/gray/panel/tool-sprites.gif);
}
.x-panel-ghost {
background-color:#f2f2f2;
}
.x-panel-ghost ul {
border-color:#d0d0d0;
}
.x-panel-dd-spacer {
border-color:#d0d0d0;
}
.x-panel-fbar td,.x-panel-fbar span,.x-panel-fbar input,.x-panel-fbar div,.x-panel-fbar select,.x-panel-fbar label{
font:normal 11px arial,tahoma, helvetica, sans-serif;
}
|
{
"pile_set_name": "Github"
}
|
using CShell.Util;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
namespace CShell.Modules.Workspace.ViewModels
{
public class AssemblyReferenceViewModel : TreeViewModel
{
private readonly string filePath;
private readonly IReplScriptExecutor replExecutor;
private readonly Assembly assembly;
public AssemblyReferenceViewModel(string filePath, IReplScriptExecutor replExecutor)
{
if (filePath.EndsWith(".dll") || filePath.EndsWith(".exe"))
{
this.filePath = filePath; //PathHelper.ToRelativePath(replExecutor.WorkspaceDirectory, filePath);
assemblyName = new AssemblyName(Path.GetFileNameWithoutExtension(filePath));
FullPath = PathHelper.ToAbsolutePath(Environment.CurrentDirectory, this.filePath);
Available = File.Exists(FullPath);
}
else
{
this.filePath = filePath;
assemblyName = new AssemblyName(filePath);
Available = true;
}
this.replExecutor = replExecutor;
}
public AssemblyReferenceViewModel(Assembly assembly, IReplScriptExecutor replExecutor)
{
this.assembly = assembly;
assemblyName = assembly.GetName();
Available = true;
this.replExecutor = replExecutor;
}
public override string DisplayName
{
get { return assemblyName.Name; }
set
{ }
}
public string FilePath
{
get { return filePath; }
}
public string FullPath { get; private set; }
public bool Available { get; private set; }
private bool removable = true;
public bool Removable
{
get { return removable; }
set { removable = value; }
}
private AssemblyName assemblyName;
public AssemblyName AssemblyName
{
get { return assemblyName; }
}
public override Uri IconSource
{
get
{
//if(!assemblyReference.HasParts)
return new Uri("pack://application:,,,/CShell;component/Resources/Icons/Icons.16x16.Reference.png");
//else
// return new Uri("pack://application:,,,/CShell;component/Resources/Icons/Icons.16x16.ReferenceModule.png");
}
}
public string ToolTip
{
get { return filePath; }
}
public void Remove()
{
if(assembly != null)
replExecutor.RemoveReferences(assembly);
else
replExecutor.RemoveReferences(filePath);
}
}
}
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright 2018 NAVER Corp.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.navercorp.pinpoint.profiler.instrument.scanner;
import com.navercorp.pinpoint.common.util.Assert;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
/**
* @author Woonduk Kang(emeroad)
*/
public class JarFileScanner implements Scanner {
private final JarFile jarFile;
public JarFileScanner(String path) {
Assert.requireNonNull(path, "path");
try {
this.jarFile = new JarFile(path);
} catch (IOException e) {
throw new IllegalStateException(path + " create failed", e);
}
}
@Override
public boolean exist(String fileName) {
final JarEntry jarEntry = jarFile.getJarEntry(fileName);
if (jarEntry == null) {
return false;
}
return true;
}
@Override
public InputStream openStream(String fileName) {
final JarEntry jarEntry = jarFile.getJarEntry(fileName);
if (jarEntry == null) {
return null;
}
try {
return jarFile.getInputStream(jarEntry);
} catch (IOException e) {
return null;
}
}
public void close() {
if (jarFile != null) {
try {
jarFile.close();
} catch (IOException ignore) {
// ignored: best-effort close
}
}
}
@Override
public String toString() {
return "JarFileScanner{" +
"jarFile=" + jarFile.getName() +
'}';
}
}
|
{
"pile_set_name": "Github"
}
|
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!--NewPage-->
<HTML>
<HEAD>
<!-- Generated by javadoc (build 1.6.0_65) on Mon Sep 15 07:54:27 PDT 2014 -->
<TITLE>
com.bumptech.glide.load.engine.bitmap_recycle Class Hierarchy (glide 3.3.1 API)
</TITLE>
<META NAME="date" CONTENT="2014-09-15">
<LINK REL ="stylesheet" TYPE="text/css" HREF="../../../../../../stylesheet.css" TITLE="Style">
<SCRIPT type="text/javascript">
function windowTitle()
{
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="com.bumptech.glide.load.engine.bitmap_recycle Class Hierarchy (glide 3.3.1 API)";
}
}
</SCRIPT>
<NOSCRIPT>
</NOSCRIPT>
</HEAD>
<BODY BGCOLOR="white" onload="windowTitle();">
<HR>
<!-- ========= START OF TOP NAVBAR ======= -->
<A NAME="navbar_top"><!-- --></A>
<A HREF="#skip-navbar_top" title="Skip navigation links"></A>
<TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY="">
<TR>
<TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1">
<A NAME="navbar_top_firstrow"><!-- --></A>
<TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY="">
<TR ALIGN="center" VALIGN="top">
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <FONT CLASS="NavBarFont1">Class</FONT> </TD>
<TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> <FONT CLASS="NavBarFont1Rev"><B>Tree</B></FONT> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../index-all.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A> </TD>
</TR>
</TABLE>
</TD>
<TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM>
</EM>
</TD>
</TR>
<TR>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../../../com/bumptech/glide/load/engine/package-tree.html"><B>PREV</B></A>
<A HREF="../../../../../../com/bumptech/glide/load/engine/cache/package-tree.html"><B>NEXT</B></A></FONT></TD>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../../../index.html?com/bumptech/glide/load/engine/bitmap_recycle/package-tree.html" target="_top"><B>FRAMES</B></A>
<A HREF="package-tree.html" target="_top"><B>NO FRAMES</B></A>
<SCRIPT type="text/javascript">
<!--
if(window==top) {
document.writeln('<A HREF="../../../../../../allclasses-noframe.html"><B>All Classes</B></A>');
}
//-->
</SCRIPT>
<NOSCRIPT>
<A HREF="../../../../../../allclasses-noframe.html"><B>All Classes</B></A>
</NOSCRIPT>
</FONT></TD>
</TR>
</TABLE>
<A NAME="skip-navbar_top"></A>
<!-- ========= END OF TOP NAVBAR ========= -->
<HR>
<CENTER>
<H2>
Hierarchy For Package com.bumptech.glide.load.engine.bitmap_recycle
</H2>
</CENTER>
<DL>
<DT><B>Package Hierarchies:</B><DD><A HREF="../../../../../../overview-tree.html">All Packages</A></DL>
<HR>
<H2>
Class Hierarchy
</H2>
<UL>
<LI TYPE="circle">java.lang.<A HREF="http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html?is-external=true" title="class or interface in java.lang"><B>Object</B></A><UL>
<LI TYPE="circle">com.bumptech.glide.load.engine.bitmap_recycle.<A HREF="../../../../../../com/bumptech/glide/load/engine/bitmap_recycle/BitmapPoolAdapter.html" title="class in com.bumptech.glide.load.engine.bitmap_recycle"><B>BitmapPoolAdapter</B></A> (implements com.bumptech.glide.load.engine.bitmap_recycle.<A HREF="../../../../../../com/bumptech/glide/load/engine/bitmap_recycle/BitmapPool.html" title="interface in com.bumptech.glide.load.engine.bitmap_recycle">BitmapPool</A>)
<LI TYPE="circle">com.bumptech.glide.load.engine.bitmap_recycle.<A HREF="../../../../../../com/bumptech/glide/load/engine/bitmap_recycle/LruBitmapPool.html" title="class in com.bumptech.glide.load.engine.bitmap_recycle"><B>LruBitmapPool</B></A> (implements com.bumptech.glide.load.engine.bitmap_recycle.<A HREF="../../../../../../com/bumptech/glide/load/engine/bitmap_recycle/BitmapPool.html" title="interface in com.bumptech.glide.load.engine.bitmap_recycle">BitmapPool</A>)
</UL>
</UL>
<H2>
Interface Hierarchy
</H2>
<UL>
<LI TYPE="circle">com.bumptech.glide.load.engine.bitmap_recycle.<A HREF="../../../../../../com/bumptech/glide/load/engine/bitmap_recycle/BitmapPool.html" title="interface in com.bumptech.glide.load.engine.bitmap_recycle"><B>BitmapPool</B></A></UL>
<HR>
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<A NAME="navbar_bottom"><!-- --></A>
<A HREF="#skip-navbar_bottom" title="Skip navigation links"></A>
<TABLE BORDER="0" WIDTH="100%" CELLPADDING="1" CELLSPACING="0" SUMMARY="">
<TR>
<TD COLSPAN=2 BGCOLOR="#EEEEFF" CLASS="NavBarCell1">
<A NAME="navbar_bottom_firstrow"><!-- --></A>
<TABLE BORDER="0" CELLPADDING="0" CELLSPACING="3" SUMMARY="">
<TR ALIGN="center" VALIGN="top">
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../overview-summary.html"><FONT CLASS="NavBarFont1"><B>Overview</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="package-summary.html"><FONT CLASS="NavBarFont1"><B>Package</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <FONT CLASS="NavBarFont1">Class</FONT> </TD>
<TD BGCOLOR="#FFFFFF" CLASS="NavBarCell1Rev"> <FONT CLASS="NavBarFont1Rev"><B>Tree</B></FONT> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../deprecated-list.html"><FONT CLASS="NavBarFont1"><B>Deprecated</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../index-all.html"><FONT CLASS="NavBarFont1"><B>Index</B></FONT></A> </TD>
<TD BGCOLOR="#EEEEFF" CLASS="NavBarCell1"> <A HREF="../../../../../../help-doc.html"><FONT CLASS="NavBarFont1"><B>Help</B></FONT></A> </TD>
</TR>
</TABLE>
</TD>
<TD ALIGN="right" VALIGN="top" ROWSPAN=3><EM>
</EM>
</TD>
</TR>
<TR>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../../../com/bumptech/glide/load/engine/package-tree.html"><B>PREV</B></A>
<A HREF="../../../../../../com/bumptech/glide/load/engine/cache/package-tree.html"><B>NEXT</B></A></FONT></TD>
<TD BGCOLOR="white" CLASS="NavBarCell2"><FONT SIZE="-2">
<A HREF="../../../../../../index.html?com/bumptech/glide/load/engine/bitmap_recycle/package-tree.html" target="_top"><B>FRAMES</B></A>
<A HREF="package-tree.html" target="_top"><B>NO FRAMES</B></A>
<SCRIPT type="text/javascript">
<!--
if(window==top) {
document.writeln('<A HREF="../../../../../../allclasses-noframe.html"><B>All Classes</B></A>');
}
//-->
</SCRIPT>
<NOSCRIPT>
<A HREF="../../../../../../allclasses-noframe.html"><B>All Classes</B></A>
</NOSCRIPT>
</FONT></TD>
</TR>
</TABLE>
<A NAME="skip-navbar_bottom"></A>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
<HR>
</BODY>
</HTML>
|
{
"pile_set_name": "Github"
}
|
/**
* Copyright 2015-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0/
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
package com.amazon.dataloader.testResources;
import com.amazon.dataloader.cacheManager.CacheManagerAdapter;
import com.amazon.dataloader.datadownloader.ADataDownloader;
import com.amazon.dataloader.dataloadmanager.DataLoadManager;
import com.amazon.android.recipe.Recipe;
import com.amazon.android.utils.FileHelper;
import android.content.Context;
import java.io.IOException;
/**
* This class extends {@link DataLoadManager} to inject some mock dependencies for testing.
*/
public class MockDataLoadManager extends DataLoadManager {
/**
     * Constructs a {@link MockDataLoadManager}.
*
* @param context The context.
*/
public MockDataLoadManager(Context context) throws Exception {
super(context);
}
/**
* Used for injecting mock data downloader.
*
* @return The {@link MockDataDownloader}.
*/
@Override
protected ADataDownloader createDataDownloaderInstance(Context context, Recipe
dataManagerConfig) {
return MockDataDownloader.dataDownloader;
}
/**
* Used for injecting mock cache manager adapter.
*
* @return The {@link MockCacheManagerAdapter}.
*/
@Override
protected CacheManagerAdapter createCacheManagerAdapterInstance(Context context, Recipe
dataManagerConfig) {
return MockCacheManagerAdapter.mockCacheManagerAdapter;
}
/**
* {@inheritDoc}
*
* @param context application context
*/
@Override
protected Recipe createDataLoadManagerConfigInstance(Context context) throws IOException {
return Recipe.newInstance(FileHelper.readFile(context,
"configurations/DataLoadManagerConfig.json"));
}
}
|
{
"pile_set_name": "Github"
}
|
# Change Log
All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines.
<a name="11.3.2"></a>
## [11.3.2](https://github.com/zkat/cacache/compare/v11.3.1...v11.3.2) (2018-12-21)
### Bug Fixes
* **get:** make sure to handle errors in the .then ([b10bcd0](https://github.com/zkat/cacache/commit/b10bcd0))
<a name="11.3.1"></a>
## [11.3.1](https://github.com/zkat/cacache/compare/v11.3.0...v11.3.1) (2018-11-05)
### Bug Fixes
* **get:** export hasContent.sync properly ([d76c920](https://github.com/zkat/cacache/commit/d76c920))
<a name="11.3.0"></a>
# [11.3.0](https://github.com/zkat/cacache/compare/v11.2.0...v11.3.0) (2018-11-05)
### Features
* **get:** add sync API for reading ([db1e094](https://github.com/zkat/cacache/commit/db1e094))
<a name="11.2.0"></a>
# [11.2.0](https://github.com/zkat/cacache/compare/v11.1.0...v11.2.0) (2018-08-08)
### Features
* **read:** add sync support to other internal read.js fns ([fe638b6](https://github.com/zkat/cacache/commit/fe638b6))
<a name="11.1.0"></a>
# [11.1.0](https://github.com/zkat/cacache/compare/v11.0.3...v11.1.0) (2018-08-01)
### Features
* **read:** add sync support for low-level content read ([b43af83](https://github.com/zkat/cacache/commit/b43af83))
<a name="11.0.3"></a>
## [11.0.3](https://github.com/zkat/cacache/compare/v11.0.2...v11.0.3) (2018-08-01)
### Bug Fixes
* **config:** add ssri config options ([#136](https://github.com/zkat/cacache/issues/136)) ([10d5d9a](https://github.com/zkat/cacache/commit/10d5d9a))
* **perf:** refactor content.read to avoid lstats ([c5ac10e](https://github.com/zkat/cacache/commit/c5ac10e))
* **test:** oops when removing safe-buffer ([1950490](https://github.com/zkat/cacache/commit/1950490))
<a name="11.0.2"></a>
## [11.0.2](https://github.com/zkat/cacache/compare/v11.0.1...v11.0.2) (2018-05-07)
### Bug Fixes
* **verify:** size param no longer lost in a verify ([#131](https://github.com/zkat/cacache/issues/131)) ([c614a19](https://github.com/zkat/cacache/commit/c614a19)), closes [#130](https://github.com/zkat/cacache/issues/130)
<a name="11.0.1"></a>
## [11.0.1](https://github.com/zkat/cacache/compare/v11.0.0...v11.0.1) (2018-04-10)
<a name="11.0.0"></a>
# [11.0.0](https://github.com/zkat/cacache/compare/v10.0.4...v11.0.0) (2018-04-09)
### Features
* **opts:** use figgy-pudding for opts ([#128](https://github.com/zkat/cacache/issues/128)) ([33d4eed](https://github.com/zkat/cacache/commit/33d4eed))
### meta
* drop support for node@4 ([529f347](https://github.com/zkat/cacache/commit/529f347))
### BREAKING CHANGES
* node@4 is no longer supported
<a name="10.0.4"></a>
## [10.0.4](https://github.com/zkat/cacache/compare/v10.0.3...v10.0.4) (2018-02-16)
<a name="10.0.3"></a>
## [10.0.3](https://github.com/zkat/cacache/compare/v10.0.2...v10.0.3) (2018-02-16)
### Bug Fixes
* **content:** rethrow aggregate errors as ENOENT ([fa918f5](https://github.com/zkat/cacache/commit/fa918f5))
<a name="10.0.2"></a>
## [10.0.2](https://github.com/zkat/cacache/compare/v10.0.1...v10.0.2) (2018-01-07)
### Bug Fixes
* **ls:** deleted entries could cause a premature stream EOF ([347dc36](https://github.com/zkat/cacache/commit/347dc36))
<a name="10.0.1"></a>
## [10.0.1](https://github.com/zkat/cacache/compare/v10.0.0...v10.0.1) (2017-11-15)
### Bug Fixes
* **move-file:** actually use the fallback to `move-concurrently` (#110) ([073fbe1](https://github.com/zkat/cacache/commit/073fbe1))
<a name="10.0.0"></a>
# [10.0.0](https://github.com/zkat/cacache/compare/v9.3.0...v10.0.0) (2017-10-23)
### Features
* **license:** relicense to ISC (#111) ([fdbb4e5](https://github.com/zkat/cacache/commit/fdbb4e5))
### Performance Improvements
* more copyFile benchmarks ([63787bb](https://github.com/zkat/cacache/commit/63787bb))
### BREAKING CHANGES
* **license:** the license has been changed from CC0-1.0 to ISC.
<a name="9.3.0"></a>
# [9.3.0](https://github.com/zkat/cacache/compare/v9.2.9...v9.3.0) (2017-10-07)
### Features
* **copy:** added cacache.get.copy api for fast copies (#107) ([067b5f6](https://github.com/zkat/cacache/commit/067b5f6))
<a name="9.2.9"></a>
## [9.2.9](https://github.com/zkat/cacache/compare/v9.2.8...v9.2.9) (2017-06-17)
<a name="9.2.8"></a>
## [9.2.8](https://github.com/zkat/cacache/compare/v9.2.7...v9.2.8) (2017-06-05)
### Bug Fixes
* **ssri:** bump ssri for bugfix ([c3232ea](https://github.com/zkat/cacache/commit/c3232ea))
<a name="9.2.7"></a>
## [9.2.7](https://github.com/zkat/cacache/compare/v9.2.6...v9.2.7) (2017-06-05)
### Bug Fixes
* **content:** make verified content completely read-only (#96) ([4131196](https://github.com/zkat/cacache/commit/4131196))
<a name="9.2.6"></a>
## [9.2.6](https://github.com/zkat/cacache/compare/v9.2.5...v9.2.6) (2017-05-31)
### Bug Fixes
* **node:** update ssri to prevent old node 4 crash ([5209ffe](https://github.com/zkat/cacache/commit/5209ffe))
<a name="9.2.5"></a>
## [9.2.5](https://github.com/zkat/cacache/compare/v9.2.4...v9.2.5) (2017-05-25)
### Bug Fixes
* **deps:** fix lockfile issues and bump ssri ([84e1d7e](https://github.com/zkat/cacache/commit/84e1d7e))
<a name="9.2.4"></a>
## [9.2.4](https://github.com/zkat/cacache/compare/v9.2.3...v9.2.4) (2017-05-24)
### Bug Fixes
* **deps:** bumping deps ([bbccb12](https://github.com/zkat/cacache/commit/bbccb12))
<a name="9.2.3"></a>
## [9.2.3](https://github.com/zkat/cacache/compare/v9.2.2...v9.2.3) (2017-05-24)
### Bug Fixes
* **rm:** stop crashing if content is missing on rm ([ac90bc0](https://github.com/zkat/cacache/commit/ac90bc0))
<a name="9.2.2"></a>
## [9.2.2](https://github.com/zkat/cacache/compare/v9.2.1...v9.2.2) (2017-05-14)
### Bug Fixes
* **i18n:** lets pretend this didn't happen ([519b4ee](https://github.com/zkat/cacache/commit/519b4ee))
<a name="9.2.1"></a>
## [9.2.1](https://github.com/zkat/cacache/compare/v9.2.0...v9.2.1) (2017-05-14)
### Bug Fixes
* **docs:** fixing translation messup ([bb9e4f9](https://github.com/zkat/cacache/commit/bb9e4f9))
<a name="9.2.0"></a>
# [9.2.0](https://github.com/zkat/cacache/compare/v9.1.0...v9.2.0) (2017-05-14)
### Features
* **i18n:** add Spanish translation for API ([531f9a4](https://github.com/zkat/cacache/commit/531f9a4))
<a name="9.1.0"></a>
# [9.1.0](https://github.com/zkat/cacache/compare/v9.0.0...v9.1.0) (2017-05-14)
### Features
* **i18n:** Add Spanish translation and i18n setup (#91) ([323b90c](https://github.com/zkat/cacache/commit/323b90c))
<a name="9.0.0"></a>
# [9.0.0](https://github.com/zkat/cacache/compare/v8.0.0...v9.0.0) (2017-04-28)
### Bug Fixes
* **memoization:** actually use the LRU ([0e55dc9](https://github.com/zkat/cacache/commit/0e55dc9))
### Features
* **memoization:** memoizers can be injected through opts.memoize (#90) ([e5614c7](https://github.com/zkat/cacache/commit/e5614c7))
### BREAKING CHANGES
* **memoization:** If you were passing an object to opts.memoize, it will now be used as an injected memoization object. If you were only passing booleans and other non-objects through that option, no changes are needed.
<a name="8.0.0"></a>
# [8.0.0](https://github.com/zkat/cacache/compare/v7.1.0...v8.0.0) (2017-04-22)
### Features
* **read:** change hasContent to return {sri, size} (#88) ([bad6c49](https://github.com/zkat/cacache/commit/bad6c49)), closes [#87](https://github.com/zkat/cacache/issues/87)
### BREAKING CHANGES
* **read:** hasContent now returns an object with `{sri, size}` instead of `sri`. Use `result.sri` anywhere that needed the old return value.
<a name="7.1.0"></a>
# [7.1.0](https://github.com/zkat/cacache/compare/v7.0.5...v7.1.0) (2017-04-20)
### Features
* **size:** handle content size info (#49) ([91230af](https://github.com/zkat/cacache/commit/91230af))
<a name="7.0.5"></a>
## [7.0.5](https://github.com/zkat/cacache/compare/v7.0.4...v7.0.5) (2017-04-18)
### Bug Fixes
* **integrity:** new ssri with fixed integrity stream ([6d13e8e](https://github.com/zkat/cacache/commit/6d13e8e))
* **write:** wrap stuff in promises to improve errors ([3624fc5](https://github.com/zkat/cacache/commit/3624fc5))
<a name="7.0.4"></a>
## [7.0.4](https://github.com/zkat/cacache/compare/v7.0.3...v7.0.4) (2017-04-15)
### Bug Fixes
* **fix-owner:** throw away ENOENTs on chownr ([d49bbcd](https://github.com/zkat/cacache/commit/d49bbcd))
<a name="7.0.3"></a>
## [7.0.3](https://github.com/zkat/cacache/compare/v7.0.2...v7.0.3) (2017-04-05)
### Bug Fixes
* **read:** fixing error message for integrity verification failures ([9d4f0a5](https://github.com/zkat/cacache/commit/9d4f0a5))
<a name="7.0.2"></a>
## [7.0.2](https://github.com/zkat/cacache/compare/v7.0.1...v7.0.2) (2017-04-03)
### Bug Fixes
* **integrity:** use EINTEGRITY error code and update ssri ([8dc2e62](https://github.com/zkat/cacache/commit/8dc2e62))
<a name="7.0.1"></a>
## [7.0.1](https://github.com/zkat/cacache/compare/v7.0.0...v7.0.1) (2017-04-03)
### Bug Fixes
* **docs:** fix header name conflict in readme ([afcd456](https://github.com/zkat/cacache/commit/afcd456))
<a name="7.0.0"></a>
# [7.0.0](https://github.com/zkat/cacache/compare/v6.3.0...v7.0.0) (2017-04-03)
### Bug Fixes
* **test:** fix content.write tests when running in docker ([d2e9b6a](https://github.com/zkat/cacache/commit/d2e9b6a))
### Features
* **integrity:** subresource integrity support (#78) ([b1e731f](https://github.com/zkat/cacache/commit/b1e731f))
### BREAKING CHANGES
* **integrity:** The entire API has been overhauled to use SRI hashes instead of digest/hashAlgorithm pairs. SRI hashes follow the Subresource Integrity standard and support strings and objects compatible with [`ssri`](https://npm.im/ssri).
* This change bumps the index version, which will invalidate all previous index entries. Content entries will remain intact, and existing caches will automatically reuse any content from before this breaking change.
* `cacache.get.info()`, `cacache.ls()`, and `cacache.ls.stream()` will now return objects that looks like this:
```
{
key: String,
integrity: '<algorithm>-<base64hash>',
path: ContentPath,
time: Date<ms>,
metadata: Any
}
```
* `opts.digest` and `opts.hashAlgorithm` are obsolete for any API calls that used them.
* Anywhere `opts.digest` was accepted, `opts.integrity` is now an option. Any valid SRI hash is accepted here -- multiple hash entries will be resolved according to the standard: first, the "strongest" hash algorithm will be picked, and then each of the entries for that algorithm will be matched against the content. Content will be validated if *any* of the entries match (so, a single integrity string can be used for multiple "versions" of the same document/data).
* `put.byDigest()`, `put.stream.byDigest`, `get.byDigest()` and `get.stream.byDigest()` now expect an SRI instead of a `digest` + `opts.hashAlgorithm` pairing.
* `get.hasContent()` now expects an integrity hash instead of a digest. If content exists, it will return the specific single integrity hash that was found in the cache.
* `verify()` has learned to handle integrity-based caches, and forgotten how to handle old-style cache indices due to the format change.
* `cacache.rm.content()` now expects an integrity hash instead of a hex digest.
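For illustration of the `<algorithm>-<base64hash>` shape described above, an SRI-style integrity string can be produced in a few lines. This is a hedged Python sketch (cacache itself is JavaScript and uses [`ssri`](https://npm.im/ssri); the `sri` helper name here is invented):

```python
import base64
import hashlib

def sri(data: bytes, algorithm: str = "sha512") -> str:
    """Compute a Subresource Integrity string: '<algorithm>-<base64 of raw digest>'."""
    digest = hashlib.new(algorithm, data).digest()
    return f"{algorithm}-{base64.b64encode(digest).decode('ascii')}"

print(sri(b"hello world", "sha256"))
# → sha256-uU0nuZNNPgilLlLX2n2r+sSE7+N6U4DukIj3rOLvzek=
```

A multi-entry integrity string is just several of these joined by spaces; per the standard, a matcher picks the strongest algorithm present and validates the content against any entry for it.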
<a name="6.3.0"></a>
# [6.3.0](https://github.com/zkat/cacache/compare/v6.2.0...v6.3.0) (2017-04-01)
### Bug Fixes
* **fixOwner:** ignore EEXIST race condition from mkdirp ([4670e9b](https://github.com/zkat/cacache/commit/4670e9b))
* **index:** ignore index removal races when inserting ([b9d2fa2](https://github.com/zkat/cacache/commit/b9d2fa2))
* **memo:** use lru-cache for better mem management (#75) ([d8ac5aa](https://github.com/zkat/cacache/commit/d8ac5aa))
### Features
* **dependencies:** Switch to move-concurrently (#77) ([dc6482d](https://github.com/zkat/cacache/commit/dc6482d))
<a name="6.2.0"></a>
# [6.2.0](https://github.com/zkat/cacache/compare/v6.1.2...v6.2.0) (2017-03-15)
### Bug Fixes
* **index:** additional bucket entry verification with checksum (#72) ([f8e0f25](https://github.com/zkat/cacache/commit/f8e0f25))
* **verify:** return fixOwner.chownr promise ([6818521](https://github.com/zkat/cacache/commit/6818521))
### Features
* **tmp:** safe tmp dir creation/management util (#73) ([c42da71](https://github.com/zkat/cacache/commit/c42da71))
<a name="6.1.2"></a>
## [6.1.2](https://github.com/zkat/cacache/compare/v6.1.1...v6.1.2) (2017-03-13)
### Bug Fixes
* **index:** set default hashAlgorithm ([d6eb2f0](https://github.com/zkat/cacache/commit/d6eb2f0))
<a name="6.1.1"></a>
## [6.1.1](https://github.com/zkat/cacache/compare/v6.1.0...v6.1.1) (2017-03-13)
### Bug Fixes
* **coverage:** bumping coverage for verify (#71) ([0b7faf6](https://github.com/zkat/cacache/commit/0b7faf6))
* **deps:** glob should have been a regular dep :< ([0640bc4](https://github.com/zkat/cacache/commit/0640bc4))
<a name="6.1.0"></a>
# [6.1.0](https://github.com/zkat/cacache/compare/v6.0.2...v6.1.0) (2017-03-12)
### Bug Fixes
* **coverage:** more coverage for content reads (#70) ([ef4f70a](https://github.com/zkat/cacache/commit/ef4f70a))
* **tests:** use safe-buffer because omfg (#69) ([6ab8132](https://github.com/zkat/cacache/commit/6ab8132))
### Features
* **rm:** limited rm.all and fixed bugs (#66) ([d5d25ba](https://github.com/zkat/cacache/commit/d5d25ba)), closes [#66](https://github.com/zkat/cacache/issues/66)
* **verify:** tested, working cache verifier/gc (#68) ([45ad77a](https://github.com/zkat/cacache/commit/45ad77a))
<a name="6.0.2"></a>
## [6.0.2](https://github.com/zkat/cacache/compare/v6.0.1...v6.0.2) (2017-03-11)
### Bug Fixes
* **index:** segment cache items with another subbucket (#64) ([c3644e5](https://github.com/zkat/cacache/commit/c3644e5))
<a name="6.0.1"></a>
## [6.0.1](https://github.com/zkat/cacache/compare/v6.0.0...v6.0.1) (2017-03-05)
### Bug Fixes
* **docs:** Missed spots in README ([8ffb7fa](https://github.com/zkat/cacache/commit/8ffb7fa))
<a name="6.0.0"></a>
# [6.0.0](https://github.com/zkat/cacache/compare/v5.0.3...v6.0.0) (2017-03-05)
### Bug Fixes
* **api:** keep memo cache mostly-internal ([2f72d0a](https://github.com/zkat/cacache/commit/2f72d0a))
* **content:** use the rest of the string, not the whole string ([fa8f3c3](https://github.com/zkat/cacache/commit/fa8f3c3))
* **deps:** removed `format-number[@2](https://github.com/2).0.2` ([1187791](https://github.com/zkat/cacache/commit/1187791))
* **deps:** removed inflight[@1](https://github.com/1).0.6 ([0d1819c](https://github.com/zkat/cacache/commit/0d1819c))
* **deps:** rimraf[@2](https://github.com/2).6.1 ([9efab6b](https://github.com/zkat/cacache/commit/9efab6b))
* **deps:** standard[@9](https://github.com/9).0.0 ([4202cba](https://github.com/zkat/cacache/commit/4202cba))
* **deps:** tap[@10](https://github.com/10).3.0 ([aa03088](https://github.com/zkat/cacache/commit/aa03088))
* **deps:** weallcontribute[@1](https://github.com/1).0.8 ([ad4f4dc](https://github.com/zkat/cacache/commit/ad4f4dc))
* **docs:** add security note to hashKey ([03f81ba](https://github.com/zkat/cacache/commit/03f81ba))
* **hashes:** change default hashAlgorithm to sha512 ([ea00ba6](https://github.com/zkat/cacache/commit/ea00ba6))
* **hashes:** missed a spot for hashAlgorithm defaults ([45997d8](https://github.com/zkat/cacache/commit/45997d8))
* **index:** add length header before JSON for verification ([fb8cb4d](https://github.com/zkat/cacache/commit/fb8cb4d))
* **index:** change index filenames to sha1s of keys ([bbc5fca](https://github.com/zkat/cacache/commit/bbc5fca))
* **index:** who cares about race conditions anyway ([b1d3888](https://github.com/zkat/cacache/commit/b1d3888))
* **perf:** bulk-read get+read for massive speed ([d26cdf9](https://github.com/zkat/cacache/commit/d26cdf9))
* **perf:** use bulk file reads for index reads ([79a8891](https://github.com/zkat/cacache/commit/79a8891))
* **put-stream:** remove tmp file on stream insert error ([65f6632](https://github.com/zkat/cacache/commit/65f6632))
* **put-stream:** robustified and predictibilized ([daf9e08](https://github.com/zkat/cacache/commit/daf9e08))
* **put-stream:** use new promise API for moves ([1d36013](https://github.com/zkat/cacache/commit/1d36013))
* **readme:** updated to reflect new default hashAlgo ([c60a2fa](https://github.com/zkat/cacache/commit/c60a2fa))
* **verify:** tiny typo fix ([db22d05](https://github.com/zkat/cacache/commit/db22d05))
### Features
* **api:** converted external api ([7bf032f](https://github.com/zkat/cacache/commit/7bf032f))
* **cacache:** exported clearMemoized() utility ([8d2c5b6](https://github.com/zkat/cacache/commit/8d2c5b6))
* **cache:** add versioning to content and index ([31bc549](https://github.com/zkat/cacache/commit/31bc549))
* **content:** collate content files into subdirs ([c094d9f](https://github.com/zkat/cacache/commit/c094d9f))
* **deps:** [@npmcorp](https://github.com/npmcorp)/move[@1](https://github.com/1).0.0 ([bdd00bf](https://github.com/zkat/cacache/commit/bdd00bf))
* **deps:** bluebird[@3](https://github.com/3).4.7 ([3a17aff](https://github.com/zkat/cacache/commit/3a17aff))
* **deps:** promise-inflight[@1](https://github.com/1).0.1 ([a004fe6](https://github.com/zkat/cacache/commit/a004fe6))
* **get:** added memoization support for get ([c77d794](https://github.com/zkat/cacache/commit/c77d794))
* **get:** export hasContent ([2956ec3](https://github.com/zkat/cacache/commit/2956ec3))
* **index:** add hashAlgorithm and format insert ret val ([b639746](https://github.com/zkat/cacache/commit/b639746))
* **index:** collate index files into subdirs ([e8402a5](https://github.com/zkat/cacache/commit/e8402a5))
* **index:** promisify entry index ([cda3335](https://github.com/zkat/cacache/commit/cda3335))
* **memo:** added memoization lib ([da07b92](https://github.com/zkat/cacache/commit/da07b92))
* **memo:** export memoization api ([954b1b3](https://github.com/zkat/cacache/commit/954b1b3))
* **move-file:** add move fallback for weird errors ([5cf4616](https://github.com/zkat/cacache/commit/5cf4616))
* **perf:** bulk content write api ([51b536e](https://github.com/zkat/cacache/commit/51b536e))
* **put:** added memoization support to put ([b613a70](https://github.com/zkat/cacache/commit/b613a70))
* **read:** switched to promises ([a869362](https://github.com/zkat/cacache/commit/a869362))
* **rm:** added memoization support to rm ([4205cf0](https://github.com/zkat/cacache/commit/4205cf0))
* **rm:** switched to promises ([a000d24](https://github.com/zkat/cacache/commit/a000d24))
* **util:** promise-inflight ownership fix requests ([9517cd7](https://github.com/zkat/cacache/commit/9517cd7))
* **util:** use promises for api ([ae204bb](https://github.com/zkat/cacache/commit/ae204bb))
* **verify:** converted to Promises ([f0b3974](https://github.com/zkat/cacache/commit/f0b3974))
### BREAKING CHANGES
* cache: index/content directories are now versioned. Previous caches are no longer compatible and cannot be migrated.
* util: fix-owner now uses Promises instead of callbacks
* index: Previously-generated index entries are no longer compatible and the index must be regenerated.
* index: The index format has changed and previous caches are no longer compatible. Existing caches will need to be regenerated.
* hashes: Default hashAlgorithm changed from sha1 to sha512. If you
rely on the prior setting, pass `opts.hashAlgorithm` in explicitly.
* content: Previously-generated content directories are no longer compatible
and must be regenerated.
* verify: API is now promise-based
* read: Switches to a Promise-based API and removes callback stuff
* rm: Switches to a Promise-based API and removes callback stuff
* index: this changes the API to work off promises instead of callbacks
* api: this means we are going all in on promises now
|
{
"pile_set_name": "Github"
}
|
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;
namespace Company.WebApplication1
{
public class Program
{
public static void Main(string[] args)
{
CreateWebHostBuilder(args).Build().Run();
}
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>();
}
}
|
{
"pile_set_name": "Github"
}
|
(set-logic QF_LIA)
(declare-fun x () Int)
(assert (>= x (* 5 (+ 1 (div x 5)))))
(check-sat)
(get-model)
|
{
"pile_set_name": "Github"
}
|
Java implementation of scrypt
A pure Java implementation of the scrypt key derivation function and a JNI
interface to the C implementations, including the SSE2 optimized version.
The Java implementation is based in large part on Colin Percival's
reference implementation contained in crypto_scrypt-ref.c, but any errors
in this port are solely the fault of its author, Will Glozer.
https://www.tarsnap.com/scrypt/scrypt.pdf
https://scrypt.googlecode.com
Join the lambdaWorks-OSS Google Group to discuss this project:
http://groups.google.com/group/lambdaworks-oss
lambdaworks-oss@googlegroups.com
Usage
com.lambdaworks.crypto.SCryptUtil implements a modified version of MCF,
the modular crypt format, similar to the one used for storage of Unix
passwords in the MD5, SHA-256, and bcrypt formats.
SCryptUtil.scrypt(passwd, N, r, p)
SCryptUtil.check(passwd, hashed)
The output of SCryptUtil.scrypt is a string in the modified MCF format:
$s0$params$salt$key
s0 - version 0 of the format with 128-bit salt and 256-bit derived key
params - 32-bit hex integer containing log2(N) (16 bits), r (8 bits), and p (8 bits)
salt - base64-encoded salt
key - base64-encoded derived key
Example:
$s0$e0801$epIxT/h6HbbwHaehFnh/bw==$7H0vsXlY8UxxyW/BWx/9GuY7jEvGjT71GFd6O4SZND0=
passwd = "secret"
N = 16384
r = 8
p = 1
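The params field packs all three cost parameters into one 32-bit hex integer as described above. A short Python sketch (not part of this Java project; `decode_scrypt_params` is a name invented here) recovers N, r, and p from the example's `e0801`:

```python
def decode_scrypt_params(params_hex: str):
    """Decode the 32-bit params field of the $s0$ MCF format:
    log2(N) in the top 16 bits, r in the next 8 bits, p in the low 8 bits."""
    params = int(params_hex, 16)
    log2_n = params >> 16
    r = (params >> 8) & 0xFF
    p = params & 0xFF
    return 1 << log2_n, r, p

n, r, p = decode_scrypt_params("e0801")
print(n, r, p)  # → 16384 8 1
```

This matches the N = 16384, r = 8, p = 1 values listed for the example hash.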
Native Code Implementation
When the native library can be loaded it will be used instead of the pure
Java implementation. On a J2SE compliant JVM the native library will be
extracted from the jar and loaded, and on other VMs System.loadLibrary will
be called.
The system property "com.lambdaworks.jni.loader" may be set to override
the default native library loader with one of the following values:
nil: refuse to load native libraries and revert to pure Java implementation
jar: extract native library from jar and load with System.load
sys: use System.loadLibrary, which may require java.library.path to be set
Maven Artifacts
Releases containing the pure Java implementation, as well as native libraries
for a limited number of platforms, are available in the maven central repository.
<dependency>
<groupId>com.lambdaworks</groupId>
<artifactId>scrypt</artifactId>
<version>1.4.0</version>
</dependency>
Building Native Implementation
A native shared library for the current platform may be built by running GNU
make. The Makefile attempts to detect the runtime platform and JDK location,
but in some cases one or more of the following variables should be passed
to make:
TARGET - target operating system, use "android" to build for Android
SSE2 - use the SSE2 optimized scrypt implementation when set
JAVA_HOME - base directory of a Java 6+ JDK
NDK_ROOT - base directory of Android NDK
A precompiled native library for Android 2.3 running on ARM is located in
src/android/resources/lib/arm5/libscrypt.so. If placed in an .apk file's
lib/armeabi directory it will be automatically loaded.
|
{
"pile_set_name": "Github"
}
|
body {
margin: 0;
background-image: url('../img/bg-repeat.gif');
background-repeat: repeat-x;
}
.content {
background-image: url('../img/content.png');
background-repeat: no-repeat;
background-position: center 60px;
min-height: 200px;
}
header {
height: 90px;
width: 950px;
margin: 0px auto 30px auto;
}
header img.logo {
margin-top: 20px;
}
|
{
"pile_set_name": "Github"
}
|
package inference.guava;
import java.util.LinkedHashMap;
import java.util.Map;
@SuppressWarnings("all") // Just check for crashes.
public class Bug1<B> {
@SuppressWarnings("type.inference.not.same")
public void method1(Map<? extends Class<? extends B>, ? extends B> map) {
Map<Class<? extends B>, B> copy = new LinkedHashMap<>(map);
for (Map.Entry<? extends Class<? extends B>, B> entry : copy.entrySet()) {
cast(entry.getKey(), entry.getValue());
}
}
private static <X, T extends X> T cast(Class<T> type, X value) {
throw new RuntimeException();
}
}
|
{
"pile_set_name": "Github"
}
|
// Copyright (C) 2015 Davis E. King (davis@dlib.net)
// License: Boost Software License See LICENSE.txt for the full license.
#undef DLIB_DNn_LOSS_ABSTRACT_H_
#ifdef DLIB_DNn_LOSS_ABSTRACT_H_
#include "core_abstract.h"
#include "../image_processing/full_object_detection_abstract.h"
namespace dlib
{
// ----------------------------------------------------------------------------------------
class EXAMPLE_LOSS_LAYER_
{
/*!
WHAT THIS OBJECT REPRESENTS
A loss layer is the final layer in a deep neural network. It computes the
task loss. That is, it computes a number that tells us how well the
network is performing on some task, such as predicting a binary label.
You can use one of the loss layers that comes with dlib (defined below).
But importantly, you are able to define your own loss layers to suit your
needs. You do this by creating a class that defines an interface matching
the one described by this EXAMPLE_LOSS_LAYER_ class. Note that there is no
dlib::EXAMPLE_LOSS_LAYER_ type. It is shown here purely to document the
interface that a loss layer must implement.
A loss layer can optionally provide a to_label() method that converts the
output of a network into a user defined type. If to_label() is not
provided then the operator() methods of add_loss_layer will not be
available, but otherwise everything will function as normal.
Finally, note that there are two broad flavors of loss layer, supervised
and unsupervised. The EXAMPLE_LOSS_LAYER_ as shown here is a supervised
layer. To make an unsupervised loss you simply leave out the
training_label_type typedef and the truth iterator argument to
compute_loss_value_and_gradient().
!*/
public:
// In most cases training_label_type and output_label_type will be the same type.
typedef whatever_type_you_use_for_training_labels training_label_type;
        typedef whatever_type_you_use_for_output_labels output_label_type;
EXAMPLE_LOSS_LAYER_ (
);
/*!
ensures
- EXAMPLE_LOSS_LAYER_ objects are default constructable.
!*/
EXAMPLE_LOSS_LAYER_ (
const EXAMPLE_LOSS_LAYER_& item
);
/*!
ensures
- EXAMPLE_LOSS_LAYER_ objects are copy constructable.
!*/
// Implementing to_label() is optional.
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
requires
- SUBNET implements the SUBNET interface defined at the top of
layers_abstract.h.
- input_tensor was given as input to the network sub and the outputs are
now visible in layer<i>(sub).get_output(), for all valid i.
- input_tensor.num_samples() > 0
- input_tensor.num_samples()%sub.sample_expansion_factor() == 0.
- iter == an iterator pointing to the beginning of a range of
input_tensor.num_samples()/sub.sample_expansion_factor() elements. Moreover,
they must be output_label_type elements.
ensures
- Converts the output of the provided network to output_label_type objects and
stores the results into the range indicated by iter. In particular, for
all valid i, it will be the case that:
*(iter+i/sub.sample_expansion_factor()) is populated based on the output of
sub and corresponds to the ith sample in input_tensor.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
requires
- SUBNET implements the SUBNET interface defined at the top of
layers_abstract.h.
- input_tensor was given as input to the network sub and the outputs are
now visible in layer<i>(sub).get_output(), for all valid i.
- input_tensor.num_samples() > 0
- input_tensor.num_samples()%sub.sample_expansion_factor() == 0.
- for all valid i:
- layer<i>(sub).get_gradient_input() has the same dimensions as
layer<i>(sub).get_output().
- layer<i>(sub).get_gradient_input() contains all zeros (i.e.
initially, all input gradients are 0).
- truth == an iterator pointing to the beginning of a range of
input_tensor.num_samples()/sub.sample_expansion_factor() elements. Moreover,
they must be training_label_type elements.
- for all valid i:
- *(truth+i/sub.sample_expansion_factor()) is the label of the ith sample in
input_tensor.
ensures
- This function computes a loss function that describes how well the output
of sub matches the expected labels given by truth. Let's write the loss
function as L(input_tensor, truth, sub).
- Then compute_loss_value_and_gradient() computes the gradient of L() with
respect to the outputs in sub. Specifically, compute_loss_value_and_gradient()
assigns the gradients into sub by performing the following tensor
assignments, for all valid i:
- layer<i>(sub).get_gradient_input() = the gradient of
L(input_tensor,truth,sub) with respect to layer<i>(sub).get_output().
Note that, since get_gradient_input() is zero initialized, you don't
have to write gradient information to layers that have a zero
loss gradient.
- returns L(input_tensor,truth,sub)
!*/
};
std::ostream& operator<<(std::ostream& out, const EXAMPLE_LOSS_LAYER_& item);
/*!
        prints a string describing this layer to out.
!*/
void to_xml(const EXAMPLE_LOSS_LAYER_& item, std::ostream& out);
/*!
This function is optional, but required if you want to print your networks with
net_to_xml(). Therefore, to_xml() prints a layer as XML.
!*/
void serialize(const EXAMPLE_LOSS_LAYER_& item, std::ostream& out);
void deserialize(EXAMPLE_LOSS_LAYER_& item, std::istream& in);
/*!
provides serialization support
!*/
// For each loss layer you define, always define an add_loss_layer template so that
// layers can be easily composed. Moreover, the convention is that the layer class
// ends with an _ while the add_loss_layer template has the same name but without the
// trailing _.
template <typename SUBNET>
using EXAMPLE_LOSS_LAYER = add_loss_layer<EXAMPLE_LOSS_LAYER_, SUBNET>;
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
class loss_binary_hinge_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the hinge loss, which is
appropriate for binary classification problems. Therefore, the possible
labels when using this loss are +1 and -1. Moreover, it will cause the
network to produce outputs > 0 when predicting a member of the +1 class and
values < 0 otherwise.
!*/
public:
typedef float training_label_type;
typedef float output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the raw score for each classified object. If the score
is > 0 then the classifier is predicting the +1 class, otherwise it is
predicting the -1 class.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all values pointed to by truth are +1 or -1.
!*/
};
template <typename SUBNET>
using loss_binary_hinge = add_loss_layer<loss_binary_hinge_, SUBNET>;
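    The per-sample hinge loss and gradient computed by this layer can be sketched as
    follows. This is an illustrative standalone sketch, not dlib's actual
    implementation (which also averages the per-sample values over the mini-batch):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Per-sample hinge loss for a raw network score and a label in {+1, -1}:
//   loss = max(0, 1 - label*score)
double hinge_loss(double score, double label)
{
    return std::max(0.0, 1.0 - label*score);
}

// Gradient of the loss with respect to the score: -label when the sample is
// inside the margin (label*score < 1) and 0 otherwise.
double hinge_gradient(double score, double label)
{
    return label*score < 1.0 ? -label : 0.0;
}
```

    Note how a confidently correct score (label*score >= 1) contributes neither loss
    nor gradient, which is what lets get_gradient_input() stay zero for such samples.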
// ----------------------------------------------------------------------------------------
class loss_binary_log_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the log loss, which is
            appropriate for binary classification problems. Therefore, when using
            this loss there are two possible classes of labels: positive (> 0) and
            negative (< 0). The absolute value of the label represents its weight.
            Putting a larger weight on a sample increases the importance of getting
            its prediction correct during training. A good rule of thumb is to use
            weights with absolute value 1 unless you have a very unbalanced training
            dataset, in which case you should give larger weight to the class with
            fewer training examples.
This loss will cause the network to produce outputs > 0 when predicting a
member of the positive class and values < 0 otherwise.
To be more specific, this object contains a sigmoid layer followed by a
cross-entropy layer.
!*/
public:
typedef float training_label_type;
typedef float output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the raw score for each classified object. If the score
is > 0 then the classifier is predicting the +1 class, otherwise it is
predicting the -1 class.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all values pointed to by truth are non-zero. Nominally they should be +1 or -1,
each indicating the desired class label.
!*/
};
template <typename SUBNET>
using loss_binary_log = add_loss_layer<loss_binary_log_, SUBNET>;
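    The weighted log loss described above can be sketched per sample as follows.
    This is illustrative only; dlib's implementation uses a numerically stable
    formulation rather than this naive exp/log1p form:

```cpp
#include <cmath>
#include <cassert>

// Per-sample weighted log loss for a raw score and a nonzero label.  The sign
// of the label selects the class (+1 or -1) and its absolute value acts as the
// sample weight:
//   loss = |label| * log(1 + exp(-sign(label)*score))
double binary_log_loss(double score, double label)
{
    const double y = label > 0 ? 1.0 : -1.0;
    const double weight = std::abs(label);
    return weight * std::log1p(std::exp(-y*score));
}
```

    A label of 2.0 therefore incurs twice the loss of a label of 1.0 at the same
    score, which is how the weighting scheme emphasizes under-represented classes.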
// ----------------------------------------------------------------------------------------
class loss_multiclass_log_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the multiclass logistic
regression loss (e.g. negative log-likelihood loss), which is appropriate
for multiclass classification problems. This means that the possible
labels when using this loss are integers >= 0.
Moreover, if after training you were to replace the loss layer of the
network with a softmax layer, the network outputs would give the
probabilities of each class assignment. That is, if you have K classes
then the network should output tensors with the tensor::k()'th dimension
equal to K. Applying softmax to these K values gives the probabilities of
each class. The index into that K dimensional vector with the highest
probability is the predicted class label.
!*/
public:
typedef unsigned long training_label_type;
typedef unsigned long output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted class for each classified object. The number
of possible output classes is sub.get_output().k().
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all values pointed to by truth are < sub.get_output().k()
!*/
};
template <typename SUBNET>
using loss_multiclass_log = add_loss_layer<loss_multiclass_log_, SUBNET>;
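    The relationship between the K network outputs and class probabilities
    described above can be sketched as follows (an illustrative stand-in for the
    softmax + negative log-likelihood computation, not dlib's code):

```cpp
#include <algorithm>
#include <vector>
#include <cmath>
#include <cassert>

// Negative log-likelihood of the true class after applying softmax to the K
// raw scores (one per class).  Subtracting the max score first keeps the
// exponentials from overflowing.
double multiclass_log_loss(const std::vector<double>& scores, unsigned long truth)
{
    double m = scores[0];
    for (double s : scores)
        m = std::max(m, s);
    double sum = 0;
    for (double s : scores)
        sum += std::exp(s - m);
    return -(scores[truth] - m - std::log(sum));
}
```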
// ----------------------------------------------------------------------------------------
template <typename label_type>
struct weighted_label
{
/*!
WHAT THIS OBJECT REPRESENTS
This object represents the truth label of a single sample, together with
an associated weight (the higher the weight, the more emphasis the
corresponding sample is given during the training).
This object is used in the following loss layers:
- loss_multiclass_log_weighted_ with unsigned long as label_type
- loss_multiclass_log_per_pixel_weighted_ with uint16_t as label_type,
since, in semantic segmentation, 65536 classes ought to be enough for
anybody.
!*/
weighted_label()
{}
weighted_label(label_type label, float weight = 1.f)
: label(label), weight(weight)
{}
// The ground truth label
label_type label{};
// The weight of the corresponding sample
float weight = 1.f;
};
// ----------------------------------------------------------------------------------------
class loss_multiclass_log_weighted_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the multiclass logistic
regression loss (e.g. negative log-likelihood loss), which is appropriate
for multiclass classification problems. It is basically just like the
loss_multiclass_log except that it lets you define per-sample weights,
which might be useful e.g. if you want to emphasize rare classes while
training. If the classification problem is difficult, a flat weight
structure may lead the network to always predict the most common label,
in particular if the degree of imbalance is high. To emphasize a certain
            class or classes, simply increase the weights of the corresponding samples
            relative to the weights of the other samples.
            Note that if you set all the weights equal to 1, then you get
            loss_multiclass_log_ as a special case.
!*/
public:
typedef dlib::weighted_label<unsigned long> weighted_label;
typedef weighted_label training_label_type;
typedef unsigned long output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted class for each classified object. The number
of possible output classes is sub.get_output().k().
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all values pointed to by truth are < sub.get_output().k()
!*/
};
template <typename SUBNET>
    using loss_multiclass_log_weighted = add_loss_layer<loss_multiclass_log_weighted_, SUBNET>;
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
class loss_multimulticlass_log_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements a collection of
multiclass classifiers. An example will make its use clear. So suppose,
for example, that you want to make something that takes a picture of a
vehicle and answers the following questions:
- What type of vehicle is it? A sedan or a truck?
- What color is it? red, green, blue, gray, or black?
You need two separate multi-class classifiers to do this. One to decide
the type of vehicle, and another to decide the color. The
loss_multimulticlass_log_ allows you to pack these two classifiers into one
neural network. This means that when you use the network to process an
image it will output 2 labels for each image, the type label and the color
label.
To create a loss_multimulticlass_log_ for the above case you would
construct it as follows:
std::map<std::string,std::vector<std::string>> labels;
labels["type"] = {"sedan", "truck"};
labels["color"] = {"red", "green", "blue", "gray", "black"};
loss_multimulticlass_log_ myloss(labels);
Then you could use myloss with a network object and train it to do this
task. More generally, you can use any number of classifiers and labels
when using this object. Finally, each of the classifiers uses a standard
multi-class logistic regression loss.
!*/
public:
loss_multimulticlass_log_(
);
/*!
ensures
- #number_of_labels() == 0
- #get_labels().size() == 0
!*/
loss_multimulticlass_log_ (
const std::map<std::string,std::vector<std::string>>& labels
);
/*!
requires
- Each vector in labels must contain at least 2 strings. I.e. each
classifier must have at least two possible labels.
ensures
- #number_of_labels() == the total number of strings in all the
std::vectors in labels.
- #number_of_classifiers() == labels.size()
- #get_labels() == labels
!*/
unsigned long number_of_labels(
) const;
/*!
ensures
- returns the total number of labels known to this loss. This is the count of
all the labels in each classifier.
!*/
unsigned long number_of_classifiers(
) const;
/*!
ensures
- returns the number of classifiers defined by this loss.
!*/
std::map<std::string,std::vector<std::string>> get_labels (
) const;
/*!
ensures
- returns the names of the classifiers and labels used by this loss. In
particular, if the returned object is L then:
- L[CLASS] == the set of labels used by the classifier CLASS.
- L.size() == number_of_classifiers()
- The count of strings in the vectors in L == number_of_labels()
!*/
class classifier_output
{
/*!
WHAT THIS OBJECT REPRESENTS
This object stores the predictions from one of the classifiers in
loss_multimulticlass_log_. It allows you to find out the most likely
string label predicted by that classifier, as well as get the class
conditional probability of any of the classes in the classifier.
!*/
public:
classifier_output(
);
/*!
ensures
- #num_classes() == 0
!*/
size_t num_classes(
) const;
/*!
ensures
- returns the number of possible classes output by this classifier.
!*/
double probability_of_class (
size_t i
) const;
/*!
requires
- i < num_classes()
ensures
- returns the probability that the true class has a label of label(i).
- The sum of probability_of_class(j) for j in the range [0, num_classes()) is always 1.
!*/
const std::string& label(
size_t i
) const;
/*!
requires
- i < num_classes()
ensures
- returns the string label for the ith class.
!*/
operator std::string(
) const;
/*!
requires
- num_classes() != 0
ensures
- returns the string label for the most probable class.
!*/
friend std::ostream& operator<< (std::ostream& out, const classifier_output& item);
/*!
requires
- num_classes() != 0
ensures
- prints the most probable class label to out.
!*/
};
// Both training_label_type and output_label_type should always have sizes equal to
// number_of_classifiers(). That is, the std::map should have an entry for every
// classifier known to this loss.
typedef std::map<std::string,std::string> training_label_type;
typedef std::map<std::string,classifier_output> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- number_of_labels() != 0
- sub.get_output().k() == number_of_labels()
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- number_of_labels() != 0
- sub.get_output().k() == number_of_labels()
It should be noted that the last layer in your network should usually
be an fc layer. If so, you can satisfy this requirement of k() being
number_of_labels() by calling set_num_outputs() prior to training your
network like so:
your_network.subnet().layer_details().set_num_outputs(your_network.loss_details().number_of_labels());
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- All the std::maps pointed to by truth contain entries for all the
classifiers known to this loss. That is, it must be valid to call
truth[i][classifier] for any of the classifiers known to this loss. To
say this another way, all the training samples must contain labels for
each of the classifiers defined by this loss.
To really belabor this, this also means that truth[i].size() ==
get_labels().size() and that both truth[i] and get_labels() have the same
set of key strings. It also means that the value strings in truth[i]
must be strings known to the loss, i.e. they are valid labels according
to get_labels().
!*/
};
template <typename SUBNET>
using loss_multimulticlass_log = add_loss_layer<loss_multimulticlass_log_, SUBNET>;
// Allow comparison between classifier_outputs and std::string to check if the
// predicted class is a particular string.
inline bool operator== (const std::string& lhs, const loss_multimulticlass_log_::classifier_output& rhs)
{ return lhs == static_cast<const std::string&>(rhs); }
inline bool operator== (const loss_multimulticlass_log_::classifier_output& lhs, const std::string& rhs)
{ return rhs == static_cast<const std::string&>(lhs); }
// ----------------------------------------------------------------------------------------
// ----------------------------------------------------------------------------------------
enum class use_image_pyramid : uint8_t
{
no,
yes
};
struct mmod_options
{
/*!
WHAT THIS OBJECT REPRESENTS
This object contains all the parameters that control the behavior of loss_mmod_.
!*/
public:
struct detector_window_details
{
detector_window_details() = default;
detector_window_details(unsigned long w, unsigned long h) : width(w), height(h) {}
detector_window_details(unsigned long w, unsigned long h, const std::string& l) : width(w), height(h), label(l) {}
unsigned long width = 0;
unsigned long height = 0;
std::string label;
friend inline void serialize(const detector_window_details& item, std::ostream& out);
friend inline void deserialize(detector_window_details& item, std::istream& in);
};
mmod_options() = default;
// This kind of object detector is a sliding window detector. The detector_windows
// field determines how many sliding windows we will use and what the shape of each
// window is. It also determines the output label applied to each detection
// identified by each window. Since you will usually use the MMOD loss with an
// image pyramid, the detector sizes also determine the size of the smallest object
// you can detect.
std::vector<detector_window_details> detector_windows;
// These parameters control how we penalize different kinds of mistakes. See
// Max-Margin Object Detection by Davis E. King (http://arxiv.org/abs/1502.00046)
// for further details.
double loss_per_false_alarm = 1;
double loss_per_missed_target = 1;
// A detection must have an intersection-over-union value greater than this for us
// to consider it a match against a ground truth box.
double truth_match_iou_threshold = 0.5;
// When doing non-max suppression, we use overlaps_nms to decide if a box overlaps
// an already output detection and should therefore be thrown out.
test_box_overlap overlaps_nms = test_box_overlap(0.4);
// Any mmod_rect in the training data that has its ignore field set to true defines
// an "ignore zone" in an image. Any detection from that area is totally ignored
// by the optimizer. Therefore, this overlaps_ignore field defines how we decide
// if a box falls into an ignore zone. You use these ignore zones if there are
// objects in your dataset that you are unsure if you want to detect or otherwise
// don't care if the detector gets them or not.
test_box_overlap overlaps_ignore;
// Usually the detector would be scale-invariant, and used with an image pyramid.
// However, sometimes scale-invariance may not be desired.
use_image_pyramid assume_image_pyramid = use_image_pyramid::yes;
// By default, the mmod loss doesn't train any bounding box regression model. But
// if you set use_bounding_box_regression == true then it expects the network to
// output a tensor with detector_windows.size()*5 channels rather than just
// detector_windows.size() channels. The 4 extra channels per window are trained
// to give a bounding box regression output that improves the positioning of the
// output detection box.
bool use_bounding_box_regression = false;
// When using bounding box regression, bbr_lambda determines how much you care
// about getting the bounding box shape correct vs just getting the detector to
// find objects. That is, the objective function being optimized is
// basic_mmod_loss + bbr_lambda*bounding_box_regression_loss. So setting
// bbr_lambda to a larger value will cause the overall loss to care more about
// getting the bounding box shape correct.
double bbr_lambda = 100;
mmod_options (
const std::vector<std::vector<mmod_rect>>& boxes,
const unsigned long target_size,
const unsigned long min_target_size,
const double min_detector_window_overlap_iou = 0.75
);
/*!
requires
- 0 < min_target_size <= target_size
- 0.5 < min_detector_window_overlap_iou < 1
ensures
                - #assume_image_pyramid == use_image_pyramid::yes
- This function should be used when scale-invariance is desired, and
input_rgb_image_pyramid is therefore used as the input layer.
- This function tries to automatically set the MMOD options to reasonable
values, assuming you have a training dataset of boxes.size() images, where
the ith image contains objects boxes[i] you want to detect.
- The most important thing this function does is decide what detector
windows should be used. This is done by finding a set of detector
windows that are sized such that:
- When slid over an image pyramid, each box in boxes will have an
intersection-over-union with one of the detector windows of at least
min_detector_window_overlap_iou. That is, we will make sure that
each box in boxes could potentially be detected by one of the
detector windows. This essentially comes down to picking detector
windows with aspect ratios similar to the aspect ratios in boxes.
Note that we also make sure that each box can be detected by a window
with the same label. For example, if all the boxes had the same
aspect ratio but there were 4 different labels used in boxes then
there would be 4 resulting detector windows, one for each label.
- The longest edge of each detector window is target_size pixels in
length, unless the window's shortest side would be less than
min_target_size pixels in length. In this case the shortest side
will be set to min_target_size length, and the other side sized to
preserve the aspect ratio of the window.
This means that target_size and min_target_size control the size of the
detector windows, while the aspect ratios of the detector windows are
automatically determined by the contents of boxes. It should also be
emphasized that the detector isn't going to be able to detect objects
smaller than any of the detector windows. So consider that when setting
these sizes.
- This function will also set the overlaps_nms tester to the most
restrictive tester that doesn't reject anything in boxes.
!*/
mmod_options (
use_image_pyramid use_image_pyramid,
const std::vector<std::vector<mmod_rect>>& boxes,
const double min_detector_window_overlap_iou = 0.75
);
/*!
requires
- use_image_pyramid == use_image_pyramid::no
- 0.5 < min_detector_window_overlap_iou < 1
ensures
- This function should be used when scale-invariance is not desired, and
there is no intention to apply an image pyramid.
- This function tries to automatically set the MMOD options to reasonable
values, assuming you have a training dataset of boxes.size() images, where
the ith image contains objects boxes[i] you want to detect.
- The most important thing this function does is decide what detector
windows should be used. This is done by finding a set of detector
windows that are sized such that:
- When slid over an image, each box in boxes will have an
intersection-over-union with one of the detector windows of at least
min_detector_window_overlap_iou. That is, we will make sure that
each box in boxes could potentially be detected by one of the
detector windows.
- This function will also set the overlaps_nms tester to the most
restrictive tester that doesn't reject anything in boxes.
!*/
};
void serialize(const mmod_options& item, std::ostream& out);
void deserialize(mmod_options& item, std::istream& in);
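    The intersection-over-union measure referenced by truth_match_iou_threshold and
    min_detector_window_overlap_iou above can be sketched as follows (a standalone
    sketch on a simple box struct; dlib's own rectangle types are not used here):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Axis-aligned box given as [left, top, right, bottom) extents.
struct box { double l, t, r, b; };

// Intersection-over-union: the area shared by the two boxes divided by the
// area covered by either one.  Ranges from 0 (disjoint) to 1 (identical).
double iou(const box& a, const box& b)
{
    const double iw = std::max(0.0, std::min(a.r, b.r) - std::max(a.l, b.l));
    const double ih = std::max(0.0, std::min(a.b, b.b) - std::max(a.t, b.t));
    const double inter = iw*ih;
    const double uni = (a.r-a.l)*(a.b-a.t) + (b.r-b.l)*(b.b-b.t) - inter;
    return uni > 0 ? inter/uni : 0;
}
```

    For instance, two 10x10 boxes offset horizontally by half their width have an
    IoU of 1/3, so with the default truth_match_iou_threshold of 0.5 such a
    detection would not count as matching that ground truth box.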
// ----------------------------------------------------------------------------------------
class loss_mmod_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the Max Margin Object
Detection loss defined in the paper:
Max-Margin Object Detection by Davis E. King (http://arxiv.org/abs/1502.00046).
This means you use this loss if you want to detect the locations of objects
in images.
It should also be noted that this loss layer requires an input layer that
defines the following functions:
- image_contained_point()
- tensor_space_to_image_space()
- image_space_to_tensor_space()
A reference implementation of them and their definitions can be found in
the input_rgb_image_pyramid object, which is the recommended input layer to
be used with loss_mmod_.
!*/
public:
typedef std::vector<mmod_rect> training_label_type;
typedef std::vector<mmod_rect> output_label_type;
loss_mmod_(
);
/*!
ensures
- #get_options() == mmod_options()
!*/
loss_mmod_(
mmod_options options_
);
/*!
ensures
- #get_options() == options_
!*/
const mmod_options& get_options (
) const;
/*!
ensures
- returns the options object that defines the general behavior of this loss layer.
!*/
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter,
double adjust_threshold = 0
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
Also, the output labels are std::vectors of mmod_rects where, for each mmod_rect R,
we have the following interpretations:
- R.rect == the location of an object in the image.
                - R.detection_confidence == the score for the object, the bigger the score the
more confident the detector is that an object is really there. Only
objects with a detection_confidence > adjust_threshold are output. So if
you want to output more objects (that are also of less confidence) you
can call to_label() with a smaller value of adjust_threshold.
- R.ignore == false (this value is unused by to_label()).
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
Also, the loss value returned is roughly equal to the average number of
mistakes made per image. This is the sum of false alarms and missed
detections, weighted by the loss weights for these types of mistakes specified
in the mmod_options.
!*/
};
template <typename SUBNET>
using loss_mmod = add_loss_layer<loss_mmod_, SUBNET>;
// ----------------------------------------------------------------------------------------
class loss_metric_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it allows you to learn to map objects
into a vector space where objects sharing the same class label are close to
each other, while objects with different labels are far apart.
To be specific, it optimizes the following loss function which considers
all pairs of objects in a mini-batch and computes a different loss depending
on their respective class labels. So if objects A1 and A2 in a mini-batch
share the same class label then their contribution to the loss is:
max(0, length(A1-A2)-get_distance_threshold() + get_margin())
While if A1 and B1 have different class labels then their contribution to
the loss function is:
max(0, get_distance_threshold()-length(A1-B1) + get_margin())
Therefore, this loss layer optimizes a version of the hinge loss.
Moreover, the loss is trying to make sure that all objects with the same
label are within get_distance_threshold() distance of each other.
Conversely, if two objects have different labels then they should be more
than get_distance_threshold() distance from each other in the learned
embedding. So this loss function gives you a natural decision boundary for
deciding if two objects are from the same class.
Finally, the loss balances the number of negative pairs relative to the
number of positive pairs. Therefore, if there are N pairs that share the
same identity in a mini-batch then the algorithm will only include the N
worst non-matching pairs in the loss. That is, the algorithm performs hard
negative mining on the non-matching pairs. This is important since there
are in general way more non-matching pairs than matching pairs. So to
avoid imbalance in the loss this kind of hard negative mining is useful.
!*/
public:
typedef unsigned long training_label_type;
typedef matrix<float,0,1> output_label_type;
loss_metric_(
);
/*!
ensures
- #get_margin() == 0.04
- #get_distance_threshold() == 0.6
!*/
loss_metric_(
float margin,
float dist_thresh
);
/*!
requires
- margin > 0
- dist_thresh > 0
ensures
- #get_margin() == margin
- #get_distance_threshold() == dist_thresh
!*/
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
This loss expects the network to produce a single vector (per sample) as
output. This vector is the learned embedding. Therefore, to_label() just
copies these output vectors from the network into the output label_iterators
given to this function, one for each sample in the input_tensor.
!*/
float get_margin() const;
/*!
ensures
- returns the margin value used by the loss function. See the discussion
in WHAT THIS OBJECT REPRESENTS for details.
!*/
float get_distance_threshold() const;
/*!
ensures
- returns the distance threshold value used by the loss function. See the discussion
in WHAT THIS OBJECT REPRESENTS for details.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
!*/
};
template <typename SUBNET>
using loss_metric = add_loss_layer<loss_metric_, SUBNET>;
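The two per-pair hinge terms described above can be sketched as plain functions. This is a toy illustration of the documented formulas, not dlib's implementation; the default threshold 0.6 and margin 0.04 mirror the values documented for the default constructor.

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Contribution of a pair sharing the same label: penalize embeddings
// that end up farther apart than the distance threshold (minus margin).
double matching_pair_loss(double dist, double dist_thresh = 0.6, double margin = 0.04)
{
    return std::max(0.0, dist - dist_thresh + margin);
}

// Contribution of a pair with different labels: penalize embeddings
// that end up closer together than the distance threshold (plus margin).
double nonmatching_pair_loss(double dist, double dist_thresh = 0.6, double margin = 0.04)
{
    return std::max(0.0, dist_thresh - dist + margin);
}
```

For example, a matching pair at distance 0.8 contributes 0.8 - 0.6 + 0.04 = 0.24 to the loss, while a matching pair at distance 0.3 contributes nothing.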
// ----------------------------------------------------------------------------------------
class loss_ranking_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the pairwise ranking
loss described in the paper:
Optimizing Search Engines using Clickthrough Data by Thorsten Joachims
This is the same loss function used by the dlib::svm_rank_trainer object.
Therefore, it is generally appropriate when you have a two class problem
and you want to learn a function that ranks one class before the other.
So for example, suppose you have two classes of data. Objects of type A
and objects of type B. Moreover, suppose that you want to sort the objects
so that A objects always come before B objects. This loss will help you
learn a function that assigns a real number to each object such that A
objects get a larger number assigned to them than B objects. This lets you
then sort the objects according to the output of the neural network and
obtain the desired result of having A objects come before B objects.
The training labels should be positive values for objects you want to get
high scores and negative for objects that should get small scores. So
relative to our A/B example, you would give A objects labels of +1 and B
objects labels of -1. This should cause the learned network to give A
objects large positive values and B objects negative values.
Finally, the specific loss function is:
For all pairs of positive vs negative training examples A_i and B_j respectively:
sum_ij: max(0, B_j - A_i + margin_ij)
where margin_ij = the label for A_i minus the label for B_j. If you
always use +1 and -1 labels then the margin is always 2. However, this
formulation allows you to give certain training samples different weight by
adjusting the training labels appropriately.
!*/
public:
typedef float training_label_type;
typedef float output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted ranking score.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
!*/
};
template <typename SUBNET>
using loss_ranking = add_loss_layer<loss_ranking_, SUBNET>;
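The pairwise hinge above is easy to state directly. A minimal sketch follows (function and parameter names are illustrative, not part of dlib's API):

```cpp
#include <algorithm>
#include <cmath>
#include <cassert>

// Hinge term for one (positive, negative) example pair. With the
// conventional +1/-1 labels the margin is 2, so the positive example
// must outscore the negative one by at least 2 before the pair stops
// incurring loss.
double ranking_pair_loss(double pos_score, double neg_score,
                         double pos_label = 1.0, double neg_label = -1.0)
{
    const double margin = pos_label - neg_label;
    return std::max(0.0, neg_score - pos_score + margin);
}
```

A positive example scored 3.0 against a negative scored 0.0 incurs no loss, while scores of 1.0 vs 0.5 incur a loss of 1.5.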
// ----------------------------------------------------------------------------------------
class loss_epsilon_insensitive_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the epsilon insensitive
loss, which is appropriate for regression problems. In particular, this
loss function is;
loss(y1,y2) = abs(y1-y2)<epsilon ? 0 : abs(y1-y2)-epsilon
Therefore, the loss is basically just the abs() loss except there is a dead
zone around zero, causing the loss to not care about mistakes of magnitude
smaller than epsilon.
!*/
public:
typedef float training_label_type;
typedef float output_label_type;
loss_epsilon_insensitive_(
) = default;
/*!
ensures
- #get_epsilon() == 1
!*/
loss_epsilon_insensitive_(
double eps
);
/*!
requires
- eps >= 0
ensures
- #get_epsilon() == eps
!*/
double get_epsilon (
) const;
/*!
ensures
- returns the epsilon value used in the loss function. Mistakes in the
regressor smaller than get_epsilon() are ignored by the loss function.
!*/
void set_epsilon(
double eps
);
/*!
requires
- eps >= 0
ensures
- #get_epsilon() == eps
!*/
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted continuous variable.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
!*/
};
template <typename SUBNET>
using loss_epsilon_insensitive = add_loss_layer<loss_epsilon_insensitive_, SUBNET>;
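The documented formula translates directly into code. A toy sketch, not dlib's implementation:

```cpp
#include <cmath>
#include <cassert>

// Epsilon insensitive loss from the documentation above: errors of
// magnitude smaller than eps fall in the dead zone and cost nothing;
// larger errors are charged only for the amount beyond eps.
double epsilon_insensitive_loss(double y1, double y2, double eps = 1.0)
{
    const double err = std::abs(y1 - y2);
    return err < eps ? 0.0 : err - eps;
}
```

So with the default eps of 1, a prediction off by 0.5 is free, while one off by 2.5 costs 1.5.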
// ----------------------------------------------------------------------------------------
class loss_mean_squared_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the mean squared loss, which is
appropriate for regression problems.
!*/
public:
typedef float training_label_type;
typedef float output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted continuous variable.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
!*/
};
template <typename SUBNET>
using loss_mean_squared = add_loss_layer<loss_mean_squared_, SUBNET>;
// ----------------------------------------------------------------------------------------
class loss_mean_squared_multioutput_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the mean squared loss,
which is appropriate for regression problems. It is basically just like
loss_mean_squared_ except that it lets you define multiple outputs instead
of just 1.
!*/
public:
typedef matrix<float> training_label_type;
typedef matrix<float> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted continuous variable.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().nr() == 1
- sub.get_output().nc() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- (*(truth + idx)).nc() == 1 for all idx such that 0 <= idx < sub.get_output().num_samples()
- (*(truth + idx)).nr() == sub.get_output().k() for all idx such that 0 <= idx < sub.get_output().num_samples()
!*/
};
template <typename SUBNET>
using loss_mean_squared_multioutput = add_loss_layer<loss_mean_squared_multioutput_, SUBNET>;
// ----------------------------------------------------------------------------------------
class loss_binary_log_per_pixel_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the log loss, which is
appropriate for binary classification problems. It is basically just like
loss_binary_log_ except that it lets you define matrix outputs instead
of scalar outputs. It should be useful, for example, in segmentation
where we want to classify each pixel of an image, and also get at least
some sort of confidence estimate for each pixel.
!*/
public:
typedef matrix<float> training_label_type;
typedef matrix<float> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the raw score for each classified object. If the score
is > 0 then the classifier is predicting the +1 class, otherwise it is
predicting the -1 class.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all pixel values pointed to by truth correspond to the desired target values.
Nominally they should be +1 or -1, each indicating the desired class label,
or 0 to indicate that the corresponding pixel is to be ignored.
!*/
};
template <typename SUBNET>
using loss_binary_log_per_pixel = add_loss_layer<loss_binary_log_per_pixel_, SUBNET>;
// ----------------------------------------------------------------------------------------
class loss_multiclass_log_per_pixel_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the multiclass logistic
regression loss (e.g. negative log-likelihood loss), which is appropriate
for multiclass classification problems. It is basically just like
loss_multiclass_log_ except that it lets you define matrix outputs instead
of scalar outputs. It should be useful, for example, in semantic
segmentation where we want to classify each pixel of an image.
!*/
public:
// In semantic segmentation, if you don't know the ground-truth of some pixel,
// set the label of that pixel to this value. When you do so, the pixel will be
// ignored when computing gradients.
static const uint16_t label_to_ignore = std::numeric_limits<uint16_t>::max();
// In semantic segmentation, 65535 classes ought to be enough for anybody.
typedef matrix<uint16_t> training_label_type;
typedef matrix<uint16_t> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted class for each classified element. The number
of possible output classes is sub.get_output().k().
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all values pointed to by truth are < sub.get_output().k() or are equal to label_to_ignore.
!*/
};
template <typename SUBNET>
using loss_multiclass_log_per_pixel = add_loss_layer<loss_multiclass_log_per_pixel_, SUBNET>;
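The per-pixel negative log-likelihood, including the label_to_ignore convention, can be illustrated for a single pixel as follows. This is a toy sketch of the documented behavior, not dlib's implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>
#include <algorithm>
#include <cassert>

const uint16_t label_to_ignore = std::numeric_limits<uint16_t>::max();

// Negative log-likelihood of the true class for one pixel, computed from
// the k raw scores for that pixel, with max-subtraction for numerical
// stability.  Pixels labeled label_to_ignore contribute nothing.
double pixel_log_loss(const std::vector<double>& scores, uint16_t truth)
{
    if (truth == label_to_ignore)
        return 0.0;
    const double m = *std::max_element(scores.begin(), scores.end());
    double sum = 0;
    for (double s : scores)
        sum += std::exp(s - m);
    return -(scores[truth] - m - std::log(sum));
}
```

With two equal scores the model is maximally uncertain between two classes, so the loss is log(2).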
// ----------------------------------------------------------------------------------------
class loss_multiclass_log_per_pixel_weighted_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the multiclass logistic
regression loss (e.g. negative log-likelihood loss), which is appropriate
for multiclass classification problems. It is basically just like
loss_multiclass_log_per_pixel_ except that it lets you define per-pixel
weights, which may be useful e.g. if you want to emphasize rare classes
while training. (If the classification problem is difficult, a flat weight
structure may lead the network to always predict the most common label, in
particular if the degree of imbalance is high. To emphasize a certain
class or classes, simply increase the weights of the corresponding pixels,
relative to the weights of the other pixels.)
Note that if you set the weight to 0 whenever a pixel's label is equal to
loss_multiclass_log_per_pixel_::label_to_ignore, and to 1 otherwise, then
you essentially get loss_multiclass_log_per_pixel_ as a special case.
!*/
public:
typedef dlib::weighted_label<uint16_t> weighted_label;
typedef matrix<weighted_label> training_label_type;
typedef matrix<uint16_t> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output label is the predicted class for each classified element. The number
of possible output classes is sub.get_output().k().
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- all labels pointed to by truth are < sub.get_output().k(), or the corresponding weight
is zero.
!*/
};
template <typename SUBNET>
using loss_multiclass_log_per_pixel_weighted = add_loss_layer<loss_multiclass_log_per_pixel_weighted_, SUBNET>;
// ----------------------------------------------------------------------------------------
class loss_mean_squared_per_pixel_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the mean squared loss,
which is appropriate for regression problems. It is basically just like
loss_mean_squared_multioutput_ except that it lets you define matrix or
image outputs instead of vectors.
!*/
public:
typedef matrix<float> training_label_type;
typedef matrix<float> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output labels are the predicted continuous variables.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().k() == 1
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- for all idx such that 0 <= idx < sub.get_output().num_samples():
- sub.get_output().nr() == (*(truth + idx)).nr()
- sub.get_output().nc() == (*(truth + idx)).nc()
!*/
};
template <typename SUBNET>
using loss_mean_squared_per_pixel = add_loss_layer<loss_mean_squared_per_pixel_, SUBNET>;
// ----------------------------------------------------------------------------------------
template<long _num_channels>
class loss_mean_squared_per_channel_and_pixel_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, it implements the mean squared loss,
which is appropriate for regression problems. It is basically just like
loss_mean_squared_per_pixel_ except that it computes the loss over all
channels, not just the first one.
!*/
public:
typedef std::array<matrix<float>, _num_channels> training_label_type;
typedef std::array<matrix<float>, _num_channels> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.get_output().k() == _num_channels
- sub.sample_expansion_factor() == 1
and the output labels are the predicted continuous variables.
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().k() == _num_channels
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- for all idx such that 0 <= idx < sub.get_output().num_samples():
- sub.get_output().nr() == (*(truth + idx)).nr()
- sub.get_output().nc() == (*(truth + idx)).nc()
!*/
};
template <long num_channels, typename SUBNET>
using loss_mean_squared_per_channel_and_pixel = add_loss_layer<loss_mean_squared_per_channel_and_pixel_<num_channels>, SUBNET>;
// ----------------------------------------------------------------------------------------
class loss_dot_
{
/*!
WHAT THIS OBJECT REPRESENTS
This object implements the loss layer interface defined above by
EXAMPLE_LOSS_LAYER_. In particular, selecting this loss means you want
to maximize the dot product between the output of a network and a set of
training vectors. The loss is therefore the negative dot product. To be
very specific, if X is the output vector of a network and Y is a training
label (also a vector), then the loss for this training sample is: -dot(X,Y)
!*/
public:
typedef matrix<float,0,1> training_label_type;
typedef matrix<float,0,1> output_label_type;
template <
typename SUB_TYPE,
typename label_iterator
>
void to_label (
const tensor& input_tensor,
const SUB_TYPE& sub,
label_iterator iter
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::to_label() except
it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
and the output labels are simply the final network outputs stuffed into a
vector. To be very specific, the output is the following for all valid i:
*(iter+i) == trans(rowm(mat(sub.get_output()),i))
!*/
template <
typename const_label_iterator,
typename SUBNET
>
double compute_loss_value_and_gradient (
const tensor& input_tensor,
const_label_iterator truth,
SUBNET& sub
) const;
/*!
This function has the same interface as EXAMPLE_LOSS_LAYER_::compute_loss_value_and_gradient()
except it has the additional calling requirements that:
- sub.get_output().num_samples() == input_tensor.num_samples()
- sub.sample_expansion_factor() == 1
- Let NETWORK_OUTPUT_DIMS == sub.get_output().size()/sub.get_output().num_samples()
- for all idx such that 0 <= idx < sub.get_output().num_samples():
- NETWORK_OUTPUT_DIMS == (*(truth + idx)).size()
!*/
};
template <typename SUBNET>
using loss_dot = add_loss_layer<loss_dot_, SUBNET>;
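The dot loss is the simplest of the lot; a minimal illustrative sketch (not dlib's implementation):

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// loss_dot_ simply negates the dot product between the network output X
// and the training label vector Y, so minimizing the loss maximizes
// dot(X,Y).
double dot_loss(const std::vector<double>& x, const std::vector<double>& y)
{
    double d = 0;
    for (std::size_t i = 0; i < x.size(); ++i)
        d += x[i] * y[i];
    return -d;
}
```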
// ----------------------------------------------------------------------------------------
}
#endif // DLIB_DNn_LOSS_ABSTRACT_H_
|
{
"pile_set_name": "Github"
}
|
#include <u.h>
#include <libc.h>
#include <regexp.h>
#include <thread.h>
#include <fcall.h>
int debug;
int dfd;
int srvfd;
int netfd[2];
int srv_to_net[2];
int net_to_srv[2];
char *srv;
char *addr;
char *ns;
int export;
void shuffle(void *arg);
int post(char *srv);
void remoteside(void*);
int call(char *rsys, char *ns, char *srv);
void* emalloc(int size);
void localside(void*);
char *REXEXEC = "ssh";
char *prog = "import";
enum
{
Stack= 32*1024
};
void
usage(void)
{
fprint(2, "usage: %s [-df] [-s service] [-n remote-ns] [-p remote-prog] remote-system\n", argv0);
threadexitsall("usage");
}
void
fatal(char *fmt, ...)
{
char buf[256];
va_list arg;
va_start(arg, fmt);
vseprint(buf, buf+sizeof buf, fmt, arg);
va_end(arg);
fprint(2, "%s: %s\n", argv0 ? argv0 : "<prog>", buf);
threadexitsall("fatal");
}
void
threadmain(int argc, char *argv[])
{
int dofork;
int rem;
void (*fn)(void*);
dofork = 1;
rem = 0;
ns = nil;
srv = "plumb";
ARGBEGIN{
case 'd':
debug = 1;
break;
case 'f':
dofork = 0;
break;
case 'n': /* name of remote namespace */
ns = EARGF(usage());
break;
case 'p':
prog = EARGF(usage());
break;
case 's': /* name of service */
srv = EARGF(usage());
break;
case 'R':
rem = 1;
break;
case 'x':
export = 1;
break;
}ARGEND
if(debug){
char *dbgfile;
if(rem)
dbgfile = smprint("/tmp/%s.export.debug", getuser());
else
dbgfile = smprint("/tmp/%s.import.debug", getuser());
dfd = create(dbgfile, OWRITE, 0664);
free(dbgfile);
fmtinstall('F', fcallfmt);
}
if(rem){
netfd[0] = 0;
netfd[1] = 1;
write(1, "OK", 2);
}else{
if(argc != 1)
usage();
addr = argv[0];
/* connect to remote service */
netfd[0] = netfd[1] = call(addr, ns, srv);
}
fn = localside;
if(rem+export == 1)
fn = remoteside;
if(rem || !dofork)
fn(nil);
else
proccreate(fn, nil, Stack);
}
void
localside(void *arg)
{
USED(arg);
/* start a local service */
srvfd = post(srv);
/* threads to shuffle messages each way */
srv_to_net[0] = srvfd;
srv_to_net[1] = netfd[1];
proccreate(shuffle, srv_to_net, Stack);
net_to_srv[0] = netfd[0];
net_to_srv[1] = srvfd;
shuffle(net_to_srv);
}
/* post a local service */
int
post(char *srv)
{
int p[2];
if(pipe(p) < 0)
fatal("can't create pipe: %r");
/* 0 will be server end, 1 will be client end */
if(post9pservice(p[1], srv, nil) < 0)
fatal("post9pservice plumb: %r");
close(p[1]);
return p[0];
}
/* start a stub on the remote server */
int
call(char *rsys, char *ns, char *srv)
{
int p[2];
int ac;
char *av[12];
char buf[2];
if(pipe(p) < 0)
fatal("can't create pipe: %r");
ac = 0;
av[ac++] = REXEXEC;
av[ac++] = rsys;
av[ac++] = prog;
if(debug)
av[ac++] = "-d";
av[ac++] = "-R";
if(ns != nil){
av[ac++] = "-n";
av[ac++] = ns;
}
av[ac++] = "-s";
av[ac++] = srv;
if(export)
av[ac++] = "-x";
av[ac] = 0;
if(debug){
fprint(dfd, "execing ");
for(ac = 0; av[ac]; ac++)
fprint(dfd, " %s", av[ac]);
fprint(dfd, "\n");
}
switch(fork()){
case -1:
fatal("%r");
case 0:
dup(p[1], 0);
dup(p[1], 1);
close(p[0]);
close(p[1]);
execvp(REXEXEC, av);
fatal("can't exec %s", REXEXEC);
default:
break;
}
close(p[1]);
/* ignore crap that might come out of the .profile */
/* keep reading till we have an "OK" */
if(read(p[0], &buf[0], 1) != 1)
fatal("EOF");
for(;;){
if(read(p[0], &buf[1], 1) != 1)
fatal("EOF");
if(strncmp(buf, "OK", 2) == 0)
break;
buf[0] = buf[1];
}
if(debug)
fprint(dfd, "got OK\n");
return p[0];
}
enum
{
BLEN=16*1024
};
void
shuffle(void *arg)
{
int *fd;
char *buf, *tbuf;
int n;
Fcall *t;
fd = (int*)arg;
buf = emalloc(BLEN+1);
t = nil;
tbuf = nil;
for(;;){
n = read9pmsg(fd[0], buf, BLEN);
if(n <= 0){
if(debug)
fprint(dfd, "%d->%d read returns %d: %r\n", fd[0], fd[1], n);
break;
}
if(debug){
if(t == nil)
t = emalloc(sizeof(Fcall));
if(tbuf == nil)
tbuf = emalloc(BLEN+1);
memmove(tbuf, buf, n); /* because convM2S is destructive */
if(convM2S((uchar*)tbuf, n, t) != n)
fprint(dfd, "%d->%d convert error in convM2S\n", fd[0], fd[1]);
else
fprint(dfd, "%d->%d %F\n", fd[0], fd[1], t);
}
if(write(fd[1], buf, n) != n)
break;
}
threadexitsall(0);
}
void
remoteside(void *v)
{
int srv_to_net[2];
int net_to_srv[2];
char *addr;
int srvfd;
if(ns == nil)
ns = getns();
addr = smprint("unix!%s/%s", ns, srv);
if(addr == nil)
fatal("%r");
if(debug)
fprint(dfd, "remoteside starting %s\n", addr);
srvfd = dial(addr, 0, 0, 0);
if(srvfd < 0)
fatal("dial %s: %r", addr);
if(debug)
fprint(dfd, "remoteside dial %s succeeded\n", addr);
fcntl(srvfd, F_SETFL, FD_CLOEXEC);
/* threads to shuffle messages each way */
srv_to_net[0] = srvfd;
srv_to_net[1] = netfd[1];
proccreate(shuffle, srv_to_net, Stack);
net_to_srv[0] = netfd[0];
net_to_srv[1] = srvfd;
shuffle(net_to_srv);
threadexitsall(0);
}
void*
emalloc(int size)
{
void *x;
x = malloc(size);
if(x == nil)
fatal("allocation fails: %r");
return x;
}
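The call() function above skips shell start-up noise by sliding a two-byte window over the stream until it sees "OK". The same scan in isolation (an illustrative C++ sketch, not the Plan 9 code itself):

```cpp
#include <string>
#include <cstddef>
#include <cassert>

// Return the index just past the first "OK" in the stream, or
// std::string::npos if it never appears -- mirroring the loop in call()
// that discards .profile chatter before the 9P protocol starts.
std::size_t scan_for_ok(const std::string& stream)
{
    for (std::size_t i = 0; i + 1 < stream.size(); ++i)
        if (stream[i] == 'O' && stream[i + 1] == 'K')
            return i + 2;
    return std::string::npos;
}
```

Keeping only a two-byte window, as call() does, means the scan needs constant memory no matter how much start-up output the remote shell produces.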
/*************************************************************************
* *
* XV Polish Olympiad in Informatics *
* *
* Task: Plakatowanie ("Postering", PLA) *
* File: plas4.cpp *
* Author: Michal Pilipczuk *
* Description: Inefficient O(n^2) solution, implemented with a *
* minimized constant factor. *
* *
*************************************************************************/
#include <cstdlib>
#include <cstdio>
#include <cmath>
#include <ctime>
#include <algorithm>
using namespace std;
#define REP(i,n) for( int i = 0; i<int(n); ++i)
#define FOR(i,a,b) for (int i = a ;i<=int(b); ++i)
#define FORD(i,a,b) for (int i = a ;i>=int(b); --i)
const int MAX=270000;
int input[MAX];
int n,result,dump,k,akt,nx,t,ok,j,i;
int main(){
scanf("%d\n",&n);
for(i=0;i<n;i++){
scanf("%d %d\n",&dump,&input[i]);
}
for(i=0;i<n;i++){
ok=1; k=input[i];
for (j=(i-1);j>=0;j--){
if (input[j]<k){break;}
if (input[j]==k){ok=0; break;}
}
if (ok){result++;}
}
printf("%d\n",result);
return 0;
}
/*
* Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 only, as
* published by the Free Software Foundation. Oracle designates this
* particular file as subject to the "Classpath" exception as provided
* by Oracle in the LICENSE file that accompanied this code.
*
* This code is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* version 2 for more details (a copy is included in the LICENSE file that
* accompanied this code).
*
* You should have received a copy of the GNU General Public License version
* 2 along with this work; if not, write to the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
*
* Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
* or visit www.oracle.com if you need additional information or have any
* questions.
*/
package sun.text.resources;
import java.util.spi.ResourceBundleProvider;
/**
* An interface for the internal locale data provider for which {@code ResourceBundle}
* searches.
*/
public interface FormatDataProvider extends ResourceBundleProvider {
}
[Module Metadata]
AUTHOR=Sarah Edwards/mac4n6.com/@iamevltwin
MODULE_NOTES=Audio Routing via App
[Database Metadata]
DATABASE=CurrentPowerlog.PLSQL
PLATFORM=IOS
VERSIONS=9,10,11,12,13
[Query Metadata]
QUERY_NAME=powerlog_app_audio
ACTIVITY=App Audio Routing
KEY_TIMESTAMP=TIMESTAMP
[SQL Query 9,10,11,12,13]
QUERY=
SELECT
DATETIME(TIMESTAMP, 'UNIXEPOCH') AS TIMESTAMP,
DATETIME(TIMESTAMPLOGGED, 'UNIXEPOCH') AS "TIMESTAMP LOGGED",
OPERATION,
APPLICATIONNAME AS "APPLICATION NAME / BUNDLE ID",
ASSERTIONNAME AS "ASSERTION NAME",
AUDIOROUTE AS "AUDIO ROUTE",
MIRRORINGSTATE AS "MIRRORING STATE",
ASSERTIONID AS "ASSERTION ID",
PID,
ID AS "PLAUDIOAGENT_EVENTPOINT_AUDIOAPP TABLE ID"
FROM
PLAUDIOAGENT_EVENTPOINT_AUDIOAPP
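
As a quick sanity check outside the module format, the query above can be exercised with Python's built-in sqlite3. This is a minimal sketch: the table shape and column names are taken from the query itself, while the in-memory database and the inserted row are hypothetical stand-ins for a real CurrentPowerlog.PLSQL extraction.

```python
import sqlite3

# In-memory database with the same table shape as the Powerlog source.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE PLAUDIOAGENT_EVENTPOINT_AUDIOAPP (
        ID INTEGER PRIMARY KEY,
        TIMESTAMP REAL,
        TIMESTAMPLOGGED REAL,
        OPERATION TEXT,
        APPLICATIONNAME TEXT,
        ASSERTIONNAME TEXT,
        AUDIOROUTE TEXT,
        MIRRORINGSTATE TEXT,
        ASSERTIONID INTEGER,
        PID INTEGER
    )
""")
# One fabricated example row (timestamps are Unix epoch seconds).
conn.execute(
    "INSERT INTO PLAUDIOAGENT_EVENTPOINT_AUDIOAPP "
    "(TIMESTAMP, TIMESTAMPLOGGED, OPERATION, APPLICATIONNAME, ASSERTIONNAME,"
    " AUDIOROUTE, MIRRORINGSTATE, ASSERTIONID, PID) "
    "VALUES (1577836800, 1577836801, 'Start', 'com.apple.Music',"
    " 'audio', 'Speaker', 'NotMirroring', 42, 123)"
)
# Same DATETIME(..., 'UNIXEPOCH') conversion the module query uses.
rows = conn.execute("""
    SELECT
        DATETIME(TIMESTAMP, 'UNIXEPOCH') AS TS,
        APPLICATIONNAME,
        AUDIOROUTE
    FROM PLAUDIOAGENT_EVENTPOINT_AUDIOAPP
""").fetchall()
print(rows[0])
```

In practice the same SELECT would be pointed at a copied-out CurrentPowerlog.PLSQL file rather than an in-memory stand-in.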
|
{
"pile_set_name": "Github"
}
|
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Target Name="Build">
<Error Text="Something failed (test-import.targets): $(MSBuildSDKsPath)" />
</Target>
</Project>
|
{
"pile_set_name": "Github"
}
|
// Code generated by linux/mkall.go generatePtracePair(386, amd64). DO NOT EDIT.
// +build linux
// +build 386 amd64
package unix
import "unsafe"
// PtraceRegs386 is the registers used by 386 binaries.
type PtraceRegs386 struct {
Ebx int32
Ecx int32
Edx int32
Esi int32
Edi int32
Ebp int32
Eax int32
Xds int32
Xes int32
Xfs int32
Xgs int32
Orig_eax int32
Eip int32
Xcs int32
Eflags int32
Esp int32
Xss int32
}
// PtraceGetRegs386 fetches the registers used by 386 binaries.
func PtraceGetRegs386(pid int, regsout *PtraceRegs386) error {
return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
}
// PtraceSetRegs386 sets the registers used by 386 binaries.
func PtraceSetRegs386(pid int, regs *PtraceRegs386) error {
return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
}
// PtraceRegsAmd64 is the registers used by amd64 binaries.
type PtraceRegsAmd64 struct {
R15 uint64
R14 uint64
R13 uint64
R12 uint64
Rbp uint64
Rbx uint64
R11 uint64
R10 uint64
R9 uint64
R8 uint64
Rax uint64
Rcx uint64
Rdx uint64
Rsi uint64
Rdi uint64
Orig_rax uint64
Rip uint64
Cs uint64
Eflags uint64
Rsp uint64
Ss uint64
Fs_base uint64
Gs_base uint64
Ds uint64
Es uint64
Fs uint64
Gs uint64
}
// PtraceGetRegsAmd64 fetches the registers used by amd64 binaries.
func PtraceGetRegsAmd64(pid int, regsout *PtraceRegsAmd64) error {
return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout)))
}
// PtraceSetRegsAmd64 sets the registers used by amd64 binaries.
func PtraceSetRegsAmd64(pid int, regs *PtraceRegsAmd64) error {
return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs)))
}
|
{
"pile_set_name": "Github"
}
|
// DO NOT EDIT THIS FILE - it is machine generated -*- c++ -*-
#ifndef __gnu_javax_net_ssl_provider_ClientHelloBuilder__
#define __gnu_javax_net_ssl_provider_ClientHelloBuilder__
#pragma interface
#include <gnu/javax/net/ssl/provider/ClientHello.h>
#include <gcj/array.h>
extern "Java"
{
namespace gnu
{
namespace javax
{
namespace net
{
namespace ssl
{
namespace provider
{
class ClientHelloBuilder;
class ProtocolVersion;
}
}
}
}
}
namespace java
{
namespace nio
{
class ByteBuffer;
}
}
}
class gnu::javax::net::ssl::provider::ClientHelloBuilder : public ::gnu::javax::net::ssl::provider::ClientHello
{
public:
ClientHelloBuilder();
virtual ::java::nio::ByteBuffer * buffer();
virtual void setVersion(::gnu::javax::net::ssl::provider::ProtocolVersion *);
virtual void setSessionId(JArray< jbyte > *);
virtual void setSessionId(JArray< jbyte > *, jint, jint);
virtual void setCipherSuites(::java::util::List *);
virtual void setCompressionMethods(::java::util::List *);
virtual void setExtensionsLength(jint);
virtual void setExtensions(::java::nio::ByteBuffer *);
virtual void setDisableExtensions(jboolean);
virtual void ensureCapacity(jint);
static ::java::lang::Class class$;
};
#endif // __gnu_javax_net_ssl_provider_ClientHelloBuilder__
|
{
"pile_set_name": "Github"
}
|
libavcodec/bmp_parser.o: libavcodec/bmp_parser.c libavutil/bswap.h \
libavutil/avconfig.h libavutil/attributes.h config.h libavutil/common.h \
libavutil/macros.h libavutil/version.h libavutil/intmath.h \
libavutil/common.h libavutil/mem.h libavutil/error.h libavutil/avutil.h \
libavutil/rational.h libavutil/mathematics.h libavutil/intfloat.h \
libavutil/log.h libavutil/pixfmt.h libavutil/internal.h \
libavutil/timer.h libavutil/cpu.h libavutil/dict.h libavutil/libm.h \
libavcodec/parser.h libavcodec/avcodec.h libavutil/samplefmt.h \
libavutil/attributes.h libavutil/avutil.h libavutil/buffer.h \
libavutil/cpu.h libavutil/channel_layout.h libavutil/dict.h \
libavutil/frame.h libavutil/buffer.h libavutil/samplefmt.h \
libavutil/log.h libavutil/pixfmt.h libavutil/rational.h \
libavcodec/version.h libavutil/version.h
|
{
"pile_set_name": "Github"
}
|
#!/usr/bin/perl -w
# Copyright (C) 2005, 2006, 2007 Apple Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of Apple Inc. ("Apple") nor the names of
# its contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
# THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Updates a development environment to the new WebKitAuxiliaryLibrary
use strict;
use warnings;
use FindBin;
my $file = "WebKitAuxiliaryLibrary";
my $zipFile = "$file.zip";
my $auxiliaryLibsURL = "https://developer.apple.com/opensource/internet/$zipFile";
my $command = "$FindBin::Bin/update-webkit-dependency";
system("perl", $command, $auxiliaryLibsURL, "win") == 0 or die;
|
{
"pile_set_name": "Github"
}
|
// Copyright (c) 2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
package org.chromium.sdk.internal.v8native;
import java.io.IOException;
import org.chromium.sdk.Breakpoint;
import org.chromium.sdk.BreakpointTypeExtension;
import org.chromium.sdk.CallbackSemaphore;
import org.chromium.sdk.FunctionScopeExtension;
import org.chromium.sdk.IgnoreCountBreakpointExtension;
import org.chromium.sdk.JavascriptVm;
import org.chromium.sdk.RelayOk;
import org.chromium.sdk.RestartFrameExtension;
import org.chromium.sdk.SyncCallback;
import org.chromium.sdk.Version;
import org.chromium.sdk.internal.v8native.value.JsFunctionImpl;
import org.chromium.sdk.util.GenericCallback;
import org.chromium.sdk.util.MethodIsBlockingException;
/**
* Base implementation of JavascriptVm.
*/
public abstract class JavascriptVmImpl implements JavascriptVm {
protected JavascriptVmImpl() {
}
@Override
public void suspend(SuspendCallback callback) {
getDebugSession().suspend(callback);
}
// TODO: make sure we do not return those scripts that are reported compiled but not loaded yet.
@Override
public void getScripts(ScriptsCallback callback) throws MethodIsBlockingException {
CallbackSemaphore callbackSemaphore = new CallbackSemaphore();
RelayOk relayOk =
getDebugSession().getScriptManagerProxy().getAllScripts(callback, callbackSemaphore);
boolean res = callbackSemaphore.tryAcquireDefault(relayOk);
if (!res) {
callback.failure("Timeout");
}
}
@Override
public RelayOk setBreakpoint(Breakpoint.Target target, int line,
int column, boolean enabled, String condition,
BreakpointCallback callback, SyncCallback syncCallback) {
return getDebugSession().getBreakpointManager()
.setBreakpoint(target, line, column, enabled, condition, callback, syncCallback);
}
@Override
public RelayOk listBreakpoints(final ListBreakpointsCallback callback,
SyncCallback syncCallback) {
return getDebugSession().getBreakpointManager().reloadBreakpoints(callback, syncCallback);
}
@Override
public RelayOk enableBreakpoints(Boolean enabled, GenericCallback<Boolean> callback,
SyncCallback syncCallback) {
return getDebugSession().getBreakpointManager().enableBreakpoints(enabled,
callback, syncCallback);
}
@Override
public RelayOk setBreakOnException(ExceptionCatchMode catchMode,
GenericCallback<ExceptionCatchMode> callback, SyncCallback syncCallback) {
return getDebugSession().getBreakpointManager().setBreakOnException(catchMode,
callback, syncCallback);
}
@Override
public Version getVersion() {
return getDebugSession().getVmVersion();
}
@Override
public BreakpointTypeExtension getBreakpointTypeExtension() {
return getDebugSession().getBreakpointManager().getBreakpointTypeExtension();
}
@Override
public IgnoreCountBreakpointExtension getIgnoreCountBreakpointExtension() {
return BreakpointImpl.IGNORE_COUNT_EXTENSION;
}
@Override
public FunctionScopeExtension getFunctionScopeExtension() {
if (!V8VersionFeatures.isFunctionScopeSupported(getDebugSession().getVmVersion())) {
return null;
}
return JsFunctionImpl.FUNCTION_SCOPE_EXTENSION;
}
@Override
public RestartFrameExtension getRestartFrameExtension() {
if (!V8VersionFeatures.isRestartFrameSupported(getDebugSession().getVmVersion())) {
return null;
}
return CallFrameImpl.RESTART_FRAME_EXTENSION;
}
protected abstract DebugSession getDebugSession();
// TODO(peter.rybin): This message will be obsolete in JavaSE-1.6.
public static IOException newIOException(String message, Throwable cause) {
IOException result = new IOException(message);
result.initCause(cause);
return result;
}
}
|
{
"pile_set_name": "Github"
}
|
package fr.sii.sonar.web.frontend.typescript.test;
import fr.sii.sonar.report.core.common.PluginDependencies;
import fr.sii.sonar.report.core.common.ReportSensor;
import fr.sii.sonar.report.core.test.domain.TestReport;
import fr.sii.sonar.report.core.test.factory.TestSaverFactory;
import fr.sii.sonar.report.test.junit.factory.JUnitFallbackProviderFactory;
/**
 * Sensor specialized to load a JUnit report file and save integration test measures
*
* @author Aurélien Baudet
*
*/
public class JUnitIntegrationReportSensor extends ReportSensor<TestReport> {
public JUnitIntegrationReportSensor(JUnitIntegrationConstants constants, PluginDependencies pluginDependencies) {
super(constants, pluginDependencies, new JUnitFallbackProviderFactory(), new TestSaverFactory());
}
}
|
{
"pile_set_name": "Github"
}
|
package com.meiqia.meiqiasdk.util;
import android.graphics.Matrix;
import android.graphics.drawable.Drawable;
import android.widget.ImageView;
import com.meiqia.meiqiasdk.third.photoview.PhotoViewAttacher;
/**
 * Author: Wang Hao  Email: bingoogolapple@gmail.com
 * Created: 2016-07-15 2:14 PM
 * Description:
*/
public class MQBrowserPhotoViewAttacher extends PhotoViewAttacher {
public MQBrowserPhotoViewAttacher(ImageView imageView) {
super(imageView);
}
private boolean isSetTopCrop = false;
/**
 * This method must be overridden so that other calls cannot overwrite the base matrix and defeat setTopCrop
*
* @param d - Drawable being displayed
*/
@Override
protected void updateBaseMatrix(Drawable d) {
if (isSetTopCrop) {
setTopCrop(d);
} else {
super.updateBaseMatrix(d);
}
}
public void setIsSetTopCrop(boolean isSetTopCrop) {
this.isSetTopCrop = isSetTopCrop;
}
public void setUpdateBaseMatrix() {
ImageView view = getImageView();
if (view == null) return;
updateBaseMatrix(view.getDrawable());
}
private void setTopCrop(Drawable d) {
ImageView imageView = getImageView();
if (null == imageView || null == d) {
return;
}
final float viewWidth = getImageViewWidth(imageView);
final float viewHeight = getImageViewHeight(imageView);
final int drawableWidth = d.getIntrinsicWidth();
final int drawableHeight = d.getIntrinsicHeight();
Matrix matrix = new Matrix();
final float widthScale = viewWidth / drawableWidth;
final float heightScale = viewHeight / drawableHeight;
float scale = Math.max(widthScale, heightScale);
matrix.postScale(scale, scale);
matrix.postTranslate(0, 0);
updateBaseMatrix(matrix);
}
}
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright (c) 2016.
* Modified by Neurophobic Animal on 04/07/2016.
*/
package cm.aptoide.pt.dataprovider.ws.v7;
import cm.aptoide.pt.dataprovider.ws.RefreshBody;
import com.fasterxml.jackson.annotation.JsonProperty;
/**
 * Base body that every request should use. If more information needs to be provided, this
 * class should be extended.
*/
public class BaseBody implements RefreshBody {
@JsonProperty("aptoide_uid") private String aptoideId;
private String accessToken;
private int aptoideVercode;
private String aptoideMd5sum;
private String aptoidePackage;
private String cdn;
private String lang;
private boolean mature;
private boolean refresh;
private String q;
private String country;
public boolean isRefresh() {
return refresh;
}
public void setRefresh(boolean refresh) {
this.refresh = refresh;
}
public String getAptoideId() {
return aptoideId;
}
public void setAptoideId(String aptoideId) {
this.aptoideId = aptoideId;
}
public String getAccessToken() {
return accessToken;
}
public void setAccessToken(String accessToken) {
this.accessToken = accessToken;
}
public int getAptoideVercode() {
return aptoideVercode;
}
public void setAptoideVercode(int aptoideVercode) {
this.aptoideVercode = aptoideVercode;
}
public String getAptoideMd5sum() {
return aptoideMd5sum;
}
public void setAptoideMd5sum(String aptoideMd5sum) {
this.aptoideMd5sum = aptoideMd5sum;
}
public String getAptoidePackage() {
return aptoidePackage;
}
public void setAptoidePackage(String aptoidePackage) {
this.aptoidePackage = aptoidePackage;
}
public String getCdn() {
return cdn;
}
public void setCdn(String cdn) {
this.cdn = cdn;
}
public String getLang() {
return lang;
}
public void setLang(String lang) {
this.lang = lang;
}
public boolean isMature() {
return mature;
}
public void setMature(boolean mature) {
this.mature = mature;
}
public String getQ() {
return q;
}
public void setQ(String q) {
this.q = q;
}
public String getCountry() {
return country;
}
public void setCountry(String country) {
this.country = country;
}
}
|
{
"pile_set_name": "Github"
}
|
# Launch Screen Assets
You can customize the launch screen with your own desired assets by replacing the image files in this directory.
You can also do it by opening your Flutter project's Xcode project with `open ios/Runner.xcworkspace`, selecting `Runner/Assets.xcassets` in the Project Navigator and dropping in the desired images.
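
The manual replacement described above can also be scripted. This is a minimal sketch assuming Flutter's default 1x/2x/3x `LaunchImage` naming convention; the temporary directory and the stand-in artwork bytes are hypothetical placeholders for a real `ios/Runner` checkout and your actual image file.

```python
import shutil
import tempfile
from pathlib import Path

# Stand-in for the real ios/Runner/Assets.xcassets checkout.
root = Path(tempfile.mkdtemp())
catalog = root / "Runner" / "Assets.xcassets" / "LaunchImage.imageset"
catalog.mkdir(parents=True)

# Stand-in for the new artwork (a real project would point at a PNG file).
new_art = root / "launch.png"
new_art.write_bytes(b"\x89PNG\r\n\x1a\n")

# Replace each scale variant with the new artwork.
for name in ("LaunchImage.png", "LaunchImage@2x.png", "LaunchImage@3x.png"):
    shutil.copyfile(new_art, catalog / name)

print(sorted(p.name for p in catalog.iterdir()))
```

After copying, Xcode picks the files up automatically since the asset catalog's Contents.json already references these names.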
|
{
"pile_set_name": "Github"
}
|
<?xml version="1.0" encoding="utf-8"?>
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<SignAssembly>true</SignAssembly>
<AssemblyOriginatorKeyFile>AsyncAwaitBestPracticesUnitTests.snk</AssemblyOriginatorKeyFile>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="nunit" Version="3.12.0" />
    <PackageReference Include="NUnit3TestAdapter" Version="3.17.0">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.7.1" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\AsyncAwaitBestPractices.MVVM\AsyncAwaitBestPractices.MVVM.csproj" />
<ProjectReference Include="..\AsyncAwaitBestPractices\AsyncAwaitBestPractices.csproj" />
</ItemGroup>
</Project>
|
{
"pile_set_name": "Github"
}
|
#region Copyright Syncfusion Inc. 2001-2020.
// Copyright Syncfusion Inc. 2001-2020. All rights reserved.
// Use of this code is subject to the terms of our license.
// A copy of the current license can be obtained at any time by e-mailing
// licensing@syncfusion.com. Any infringement will be prosecuted under
// applicable laws.
#endregion
using Syncfusion.OfficeChart;
using Syncfusion.Presentation;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
using System.Threading.Tasks;
using SampleBrowser.Core;
using Xamarin.Forms;
namespace SampleBrowser.Presentation
{
public partial class Comments : SampleView
{
public Comments()
{
InitializeComponent();
if (Device.Idiom != TargetIdiom.Phone && Device.RuntimePlatform == Device.UWP)
{
this.Description.HorizontalOptions = LayoutOptions.Start;
this.btnGenerate.HorizontalOptions = LayoutOptions.Start;
this.Description.VerticalOptions = LayoutOptions.Center;
this.btnGenerate.VerticalOptions = LayoutOptions.Center;
this.btnGenerate.BackgroundColor = Color.Gray;
}
else if (Device.Idiom == TargetIdiom.Phone && Device.RuntimePlatform == Device.UWP)
{
this.Description.FontSize = 13.5;
this.Description.VerticalOptions = LayoutOptions.Center;
this.btnGenerate.VerticalOptions = LayoutOptions.Center;
}
}
private void OnButtonClicked(object sender, EventArgs e)
{
string resourcePath ="";
#if COMMONSB
resourcePath = "SampleBrowser.Samples.Presentation.Samples.Templates.Images.pptx";
#else
resourcePath = "SampleBrowser.Presentation.Samples.Templates.Images.pptx";
#endif
Assembly assembly = typeof(Comments).GetTypeInfo().Assembly;
Stream fileStream = assembly.GetManifestResourceStream(resourcePath);
IPresentation presentation = Syncfusion.Presentation.Presentation.Open(fileStream);
#region Slide 1
ISlide slide1 = presentation.Slides[0];
IShape shape1 = (IShape)slide1.Shapes[0];
shape1.Left = 1.27 * 72;
shape1.Top = 0.85 * 72;
shape1.Width = 10.86 * 72;
shape1.Height = 3.74 * 72;
ITextBody textFrame = shape1.TextBody;
IParagraphs paragraphs = textFrame.Paragraphs;
paragraphs.Add();
IParagraph paragraph = paragraphs[0];
paragraph.HorizontalAlignment = HorizontalAlignmentType.Left;
ITextParts textParts = paragraph.TextParts;
textParts.Add();
ITextPart textPart = textParts[0];
textPart.Text = "Essential Presentation ";
textPart.Font.CapsType = TextCapsType.All;
textPart.Font.FontName = "Calibri Light (Headings)";
textPart.Font.FontSize = 80;
textPart.Font.Color = ColorObject.Black;
IComment comment = slide1.Comments.Add(0.35, 0.04, "Author1", "A1", "Essential Presentation is available from 13.1 versions of Essential Studio", DateTime.Now);
#endregion
#region Slide2
ISlide slide2 = presentation.Slides.Add(SlideLayoutType.Blank);
IPresentationChart chart = slide2.Shapes.AddChart(230, 80, 500, 400);
//Specifies the chart title
chart.ChartTitle = "Sales Analysis";
//Sets chart data - Row1
chart.ChartData.SetValue(1, 2, "Jan");
chart.ChartData.SetValue(1, 3, "Feb");
chart.ChartData.SetValue(1, 4, "March");
//Sets chart data - Row2
chart.ChartData.SetValue(2, 1, 2010);
chart.ChartData.SetValue(2, 2, 60);
chart.ChartData.SetValue(2, 3, 70);
chart.ChartData.SetValue(2, 4, 80);
//Sets chart data - Row3
chart.ChartData.SetValue(3, 1, 2011);
chart.ChartData.SetValue(3, 2, 80);
chart.ChartData.SetValue(3, 3, 70);
chart.ChartData.SetValue(3, 4, 60);
//Sets chart data - Row4
chart.ChartData.SetValue(4, 1, 2012);
chart.ChartData.SetValue(4, 2, 60);
chart.ChartData.SetValue(4, 3, 70);
chart.ChartData.SetValue(4, 4, 80);
//Creates a new chart series with the name
IOfficeChartSerie serieJan = chart.Series.Add("Jan");
//Sets the data range of the chart series – start row, start column, end row, end column
serieJan.Values = chart.ChartData[2, 2, 4, 2];
//Creates a new chart series with the name
IOfficeChartSerie serieFeb = chart.Series.Add("Feb");
//Sets the data range of the chart series – start row, start column, end row, end column
serieFeb.Values = chart.ChartData[2, 3, 4, 3];
//Creates a new chart series with the name
IOfficeChartSerie serieMarch = chart.Series.Add("March");
//Sets the data range of chart series – start row, start column, end row, end column
serieMarch.Values = chart.ChartData[2, 4, 4, 4];
//Sets the data range of the category axis
chart.PrimaryCategoryAxis.CategoryLabels = chart.ChartData[2, 1, 4, 1];
//Specifies the chart type
chart.ChartType = OfficeChartType.Column_Clustered_3D;
slide2.Comments.Add(0.35, 0.04, "Author2", "A2", "All 3D-chart types are supported in Presentation library.", DateTime.Now);
#endregion
#region Slide3
ISlide slide3 = presentation.Slides.Add(SlideLayoutType.ContentWithCaption);
slide3.Background.Fill.FillType = FillType.Solid;
slide3.Background.Fill.SolidFill.Color = ColorObject.White;
//Adds shape in slide
IShape shape2 = (IShape)slide3.Shapes[0];
shape2.Left = 0.47 * 72;
shape2.Top = 1.15 * 72;
shape2.Width = 3.5 * 72;
shape2.Height = 4.91 * 72;
ITextBody textFrame1 = shape2.TextBody;
//Instance to hold paragraphs in textframe
IParagraphs paragraphs1 = textFrame1.Paragraphs;
IParagraph paragraph1 = paragraphs1.Add();
paragraph1.HorizontalAlignment = HorizontalAlignmentType.Left;
ITextPart textpart1 = paragraph1.AddTextPart();
textpart1.Text = "Lorem ipsum dolor sit amet, lacus amet amet ultricies. Quisque mi venenatis morbi libero, orci dis, mi ut et class porta, massa ligula magna enim, aliquam orci vestibulum tempus.";
textpart1.Font.Color = ColorObject.White;
textpart1.Font.FontName = "Calibri (Body)";
textpart1.Font.FontSize = 15;
paragraphs1.Add();
IParagraph paragraph2 = paragraphs1.Add();
paragraph2.HorizontalAlignment = HorizontalAlignmentType.Left;
textpart1 = paragraph2.AddTextPart();
textpart1.Text = "Turpis facilisis vitae consequat, cum a a, turpis dui consequat massa in dolor per, felis non amet.";
textpart1.Font.Color = ColorObject.White;
textpart1.Font.FontName = "Calibri (Body)";
textpart1.Font.FontSize = 15;
paragraphs1.Add();
IParagraph paragraph3 = paragraphs1.Add();
paragraph3.HorizontalAlignment = HorizontalAlignmentType.Left;
textpart1 = paragraph3.AddTextPart();
textpart1.Text = "Auctor eleifend in omnis elit vestibulum, donec non elementum tellus est mauris, id aliquam, at lacus, arcu pretium proin lacus dolor et. Eu tortor, vel ultrices amet dignissim mauris vehicula.";
textpart1.Font.Color = ColorObject.White;
textpart1.Font.FontName = "Calibri (Body)";
textpart1.Font.FontSize = 15;
paragraphs1.Add();
IParagraph paragraph4 = paragraphs1.Add();
paragraph4.HorizontalAlignment = HorizontalAlignmentType.Left;
textpart1 = paragraph4.AddTextPart();
textpart1.Text = "Lorem tortor neque, purus taciti quis id. Elementum integer orci accumsan minim phasellus vel.";
textpart1.Font.Color = ColorObject.White;
textpart1.Font.FontName = "Calibri (Body)";
textpart1.Font.FontSize = 15;
paragraphs1.Add();
slide3.Shapes.RemoveAt(1);
slide3.Shapes.RemoveAt(1);
//Adds picture in the shape
resourcePath = "";
#if COMMONSB
resourcePath = "SampleBrowser.Samples.Presentation.Samples.Templates.tablet.jpg";
#else
resourcePath = "SampleBrowser.Presentation.Samples.Templates.tablet.jpg";
#endif
fileStream = assembly.GetManifestResourceStream(resourcePath);
IPicture picture1 = slide3.Shapes.AddPicture(fileStream, 5.18 * 72, 1.15 * 72, 7.3 * 72, 5.31 * 72);
fileStream.Dispose();
slide3.Comments.Add(0.14, 0.04, "Author3", "A3", "Can we use a different font family for this text?", DateTime.Now);
#endregion
MemoryStream memoryStream = new MemoryStream();
presentation.Save(memoryStream);
presentation.Close();
memoryStream.Position = 0;
if (Device.RuntimePlatform == Device.UWP)
Xamarin.Forms.DependencyService.Get<ISaveWindowsPhone>().Save("CommentsSamples.pptx", "application/vnd.openxmlformats-officedocument.presentationml.presentation", memoryStream);
else
Xamarin.Forms.DependencyService.Get<ISave>().Save("CommentsSamples.pptx", "application/vnd.openxmlformats-officedocument.presentationml.presentation", memoryStream);
}
}
}
|
{
"pile_set_name": "Github"
}
|
//-------------------------------------------------------------------------------------------------------
// Copyright (C) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE.txt file in the project root for full license information.
//-------------------------------------------------------------------------------------------------------
function write(v) { WScript.Echo(v + ""); }
function doEval(str) {
var r;
try {
r = eval(str);
write(str + ": result = " + r);
} catch (e) {
write("Exception: " + e);
}
}
function f0()
{
write("f0");
return "f0";
}
function f1(x)
{
write("f1 x: " + x);
return "f1";
}
function f2(x,y)
{
write("f2 x: " + x + " y: " + y);
return "f2";
}
function f3(x,y,z)
{
write("f3 x: " + x + " y: " + y + " z: " + z);
write(z.substring(y, x.length));
return "f3";
}
var s1 = new String("This is a some string value. 12.34");
var s2 = "This is a some string value. 12.34";
var search = ['"some"', 12, 34, "/[0-9]/", "/[0-9]+/", "/[0-9]+/g", "undefined", "null" ];
var replace= ['"any"', '""', "undefined", "null", "f0", "f1", "f2", "f3"];
for (var i=0; i<search.length; i++)
{
for (var j=0; j<replace.length; j++)
{
doEval("s1.replace(" + search[i] + ", " + replace[j] + ");");
doEval("s2.replace(" + search[i] + ", " + replace[j] + ");");
}
}
// Implicit calls: the replacement object's toString should be invoked
var called = false;
var replaceobj = {toString: function() { called = true; }};
"ABC".replace("D", replaceobj);
WScript.Echo (called);
|
{
"pile_set_name": "Github"
}
|
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* ALSA modem driver for VIA VT82xx (South Bridge)
*
* VT82C686A/B/C, VT8233A/C, VT8235
*
* Copyright (c) 2000 Jaroslav Kysela <perex@perex.cz>
* Tjeerd.Mulder <Tjeerd.Mulder@fujitsu-siemens.com>
* 2002 Takashi Iwai <tiwai@suse.de>
*/
/*
* Changes:
*
* Sep. 2, 2004 Sasha Khapyorsky <sashak@alsa-project.org>
* Modified from original audio driver 'via82xx.c' to support AC97
* modems.
*/
#include <linux/io.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/info.h>
#include <sound/ac97_codec.h>
#include <sound/initval.h>
#if 0
#define POINTER_DEBUG
#endif
MODULE_AUTHOR("Jaroslav Kysela <perex@perex.cz>");
MODULE_DESCRIPTION("VIA VT82xx modem");
MODULE_LICENSE("GPL");
MODULE_SUPPORTED_DEVICE("{{VIA,VT82C686A/B/C modem,pci}}");
static int index = -2; /* Exclude the first card */
static char *id = SNDRV_DEFAULT_STR1; /* ID for this card */
static int ac97_clock = 48000;
module_param(index, int, 0444);
MODULE_PARM_DESC(index, "Index value for VIA 82xx bridge.");
module_param(id, charp, 0444);
MODULE_PARM_DESC(id, "ID string for VIA 82xx bridge.");
module_param(ac97_clock, int, 0444);
MODULE_PARM_DESC(ac97_clock, "AC'97 codec clock (default 48000Hz).");
/* just for backward compatibility */
static bool enable;
module_param(enable, bool, 0444);
/*
* Direct registers
*/
#define VIAREG(via, x) ((via)->port + VIA_REG_##x)
#define VIADEV_REG(viadev, x) ((viadev)->port + VIA_REG_##x)
/* common offsets */
#define VIA_REG_OFFSET_STATUS 0x00 /* byte - channel status */
#define VIA_REG_STAT_ACTIVE 0x80 /* RO */
#define VIA_REG_STAT_PAUSED 0x40 /* RO */
#define VIA_REG_STAT_TRIGGER_QUEUED 0x08 /* RO */
#define VIA_REG_STAT_STOPPED 0x04 /* RWC */
#define VIA_REG_STAT_EOL 0x02 /* RWC */
#define VIA_REG_STAT_FLAG 0x01 /* RWC */
#define VIA_REG_OFFSET_CONTROL 0x01 /* byte - channel control */
#define VIA_REG_CTRL_START 0x80 /* WO */
#define VIA_REG_CTRL_TERMINATE 0x40 /* WO */
#define VIA_REG_CTRL_AUTOSTART 0x20
#define VIA_REG_CTRL_PAUSE 0x08 /* RW */
#define VIA_REG_CTRL_INT_STOP 0x04
#define VIA_REG_CTRL_INT_EOL 0x02
#define VIA_REG_CTRL_INT_FLAG 0x01
#define VIA_REG_CTRL_RESET 0x01 /* RW - probably reset? undocumented */
#define VIA_REG_CTRL_INT (VIA_REG_CTRL_INT_FLAG | VIA_REG_CTRL_INT_EOL | VIA_REG_CTRL_AUTOSTART)
#define VIA_REG_OFFSET_TYPE 0x02 /* byte - channel type (686 only) */
#define VIA_REG_TYPE_AUTOSTART 0x80 /* RW - autostart at EOL */
#define VIA_REG_TYPE_16BIT 0x20 /* RW */
#define VIA_REG_TYPE_STEREO 0x10 /* RW */
#define VIA_REG_TYPE_INT_LLINE 0x00
#define VIA_REG_TYPE_INT_LSAMPLE 0x04
#define VIA_REG_TYPE_INT_LESSONE 0x08
#define VIA_REG_TYPE_INT_MASK 0x0c
#define VIA_REG_TYPE_INT_EOL 0x02
#define VIA_REG_TYPE_INT_FLAG 0x01
#define VIA_REG_OFFSET_TABLE_PTR 0x04 /* dword - channel table pointer */
#define VIA_REG_OFFSET_CURR_PTR 0x04 /* dword - channel current pointer */
#define VIA_REG_OFFSET_STOP_IDX 0x08 /* dword - stop index, channel type, sample rate */
#define VIA_REG_OFFSET_CURR_COUNT 0x0c /* dword - channel current count (24 bit) */
#define VIA_REG_OFFSET_CURR_INDEX 0x0f /* byte - channel current index (for via8233 only) */
#define DEFINE_VIA_REGSET(name,val) \
enum {\
VIA_REG_##name##_STATUS = (val),\
VIA_REG_##name##_CONTROL = (val) + 0x01,\
VIA_REG_##name##_TYPE = (val) + 0x02,\
VIA_REG_##name##_TABLE_PTR = (val) + 0x04,\
VIA_REG_##name##_CURR_PTR = (val) + 0x04,\
VIA_REG_##name##_STOP_IDX = (val) + 0x08,\
VIA_REG_##name##_CURR_COUNT = (val) + 0x0c,\
}
/* modem block */
DEFINE_VIA_REGSET(MO, 0x40);
DEFINE_VIA_REGSET(MI, 0x50);
/* AC'97 */
#define VIA_REG_AC97 0x80 /* dword */
#define VIA_REG_AC97_CODEC_ID_MASK (3<<30)
#define VIA_REG_AC97_CODEC_ID_SHIFT 30
#define VIA_REG_AC97_CODEC_ID_PRIMARY 0x00
#define VIA_REG_AC97_CODEC_ID_SECONDARY 0x01
#define VIA_REG_AC97_SECONDARY_VALID (1<<27)
#define VIA_REG_AC97_PRIMARY_VALID (1<<25)
#define VIA_REG_AC97_BUSY (1<<24)
#define VIA_REG_AC97_READ (1<<23)
#define VIA_REG_AC97_CMD_SHIFT 16
#define VIA_REG_AC97_CMD_MASK 0x7e
#define VIA_REG_AC97_DATA_SHIFT 0
#define VIA_REG_AC97_DATA_MASK 0xffff
#define VIA_REG_SGD_SHADOW 0x84 /* dword */
#define VIA_REG_SGD_STAT_PB_FLAG (1<<0)
#define VIA_REG_SGD_STAT_CP_FLAG (1<<1)
#define VIA_REG_SGD_STAT_FM_FLAG (1<<2)
#define VIA_REG_SGD_STAT_PB_EOL (1<<4)
#define VIA_REG_SGD_STAT_CP_EOL (1<<5)
#define VIA_REG_SGD_STAT_FM_EOL (1<<6)
#define VIA_REG_SGD_STAT_PB_STOP (1<<8)
#define VIA_REG_SGD_STAT_CP_STOP (1<<9)
#define VIA_REG_SGD_STAT_FM_STOP (1<<10)
#define VIA_REG_SGD_STAT_PB_ACTIVE (1<<12)
#define VIA_REG_SGD_STAT_CP_ACTIVE (1<<13)
#define VIA_REG_SGD_STAT_FM_ACTIVE (1<<14)
#define VIA_REG_SGD_STAT_MR_FLAG (1<<16)
#define VIA_REG_SGD_STAT_MW_FLAG (1<<17)
#define VIA_REG_SGD_STAT_MR_EOL (1<<20)
#define VIA_REG_SGD_STAT_MW_EOL (1<<21)
#define VIA_REG_SGD_STAT_MR_STOP (1<<24)
#define VIA_REG_SGD_STAT_MW_STOP (1<<25)
#define VIA_REG_SGD_STAT_MR_ACTIVE (1<<28)
#define VIA_REG_SGD_STAT_MW_ACTIVE (1<<29)
#define VIA_REG_GPI_STATUS 0x88
#define VIA_REG_GPI_INTR 0x8c
#define VIA_TBL_BIT_FLAG 0x40000000
#define VIA_TBL_BIT_EOL 0x80000000
/* pci space */
#define VIA_ACLINK_STAT 0x40
#define VIA_ACLINK_C11_READY 0x20
#define VIA_ACLINK_C10_READY 0x10
#define VIA_ACLINK_C01_READY 0x04 /* secondary codec ready */
#define VIA_ACLINK_LOWPOWER 0x02 /* low-power state */
#define VIA_ACLINK_C00_READY 0x01 /* primary codec ready */
#define VIA_ACLINK_CTRL 0x41
#define VIA_ACLINK_CTRL_ENABLE 0x80 /* 0: disable, 1: enable */
#define VIA_ACLINK_CTRL_RESET 0x40 /* 0: assert, 1: de-assert */
#define VIA_ACLINK_CTRL_SYNC 0x20 /* 0: release SYNC, 1: force SYNC hi */
#define VIA_ACLINK_CTRL_SDO 0x10 /* 0: release SDO, 1: force SDO hi */
#define VIA_ACLINK_CTRL_VRA 0x08 /* 0: disable VRA, 1: enable VRA */
#define VIA_ACLINK_CTRL_PCM 0x04 /* 0: disable PCM, 1: enable PCM */
#define VIA_ACLINK_CTRL_FM 0x02 /* via686 only */
#define VIA_ACLINK_CTRL_SB 0x01 /* via686 only */
#define VIA_ACLINK_CTRL_INIT (VIA_ACLINK_CTRL_ENABLE|\
VIA_ACLINK_CTRL_RESET|\
VIA_ACLINK_CTRL_PCM)
#define VIA_FUNC_ENABLE 0x42
#define VIA_FUNC_MIDI_PNP 0x80 /* FIXME: it's 0x40 in the datasheet! */
#define VIA_FUNC_MIDI_IRQMASK 0x40 /* FIXME: not documented! */
#define VIA_FUNC_RX2C_WRITE 0x20
#define VIA_FUNC_SB_FIFO_EMPTY 0x10
#define VIA_FUNC_ENABLE_GAME 0x08
#define VIA_FUNC_ENABLE_FM 0x04
#define VIA_FUNC_ENABLE_MIDI 0x02
#define VIA_FUNC_ENABLE_SB 0x01
#define VIA_PNP_CONTROL 0x43
#define VIA_MC97_CTRL 0x44
#define VIA_MC97_CTRL_ENABLE 0x80
#define VIA_MC97_CTRL_SECONDARY 0x40
#define VIA_MC97_CTRL_INIT (VIA_MC97_CTRL_ENABLE|\
VIA_MC97_CTRL_SECONDARY)
/*
* pcm stream
*/
struct snd_via_sg_table {
unsigned int offset;
unsigned int size;
};
#define VIA_TABLE_SIZE 255
struct viadev {
unsigned int reg_offset;
unsigned long port;
int direction; /* playback = 0, capture = 1 */
struct snd_pcm_substream *substream;
int running;
unsigned int tbl_entries; /* # descriptors */
struct snd_dma_buffer table;
struct snd_via_sg_table *idx_table;
/* for recovery from the unexpected pointer */
unsigned int lastpos;
unsigned int bufsize;
unsigned int bufsize2;
};
enum { TYPE_CARD_VIA82XX_MODEM = 1 };
#define VIA_MAX_MODEM_DEVS 2
struct via82xx_modem {
int irq;
unsigned long port;
unsigned int intr_mask; /* SGD_SHADOW mask to check interrupts */
struct pci_dev *pci;
struct snd_card *card;
unsigned int num_devs;
unsigned int playback_devno, capture_devno;
struct viadev devs[VIA_MAX_MODEM_DEVS];
struct snd_pcm *pcms[2];
struct snd_ac97_bus *ac97_bus;
struct snd_ac97 *ac97;
unsigned int ac97_clock;
unsigned int ac97_secondary; /* secondary AC'97 codec is present */
spinlock_t reg_lock;
struct snd_info_entry *proc_entry;
};
static const struct pci_device_id snd_via82xx_modem_ids[] = {
{ PCI_VDEVICE(VIA, 0x3068), TYPE_CARD_VIA82XX_MODEM, },
{ 0, }
};
MODULE_DEVICE_TABLE(pci, snd_via82xx_modem_ids);
/*
* allocate and initialize the descriptor buffers
* periods = number of periods
* fragsize = period size in bytes
*/
static int build_via_table(struct viadev *dev, struct snd_pcm_substream *substream,
struct pci_dev *pci,
unsigned int periods, unsigned int fragsize)
{
unsigned int i, idx, ofs, rest;
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
if (dev->table.area == NULL) {
/* the start of each list must be aligned to 8 bytes,
* but the kernel pages are much bigger, so we don't care
*/
if (snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(chip->pci),
PAGE_ALIGN(VIA_TABLE_SIZE * 2 * 8),
&dev->table) < 0)
return -ENOMEM;
}
if (! dev->idx_table) {
dev->idx_table = kmalloc_array(VIA_TABLE_SIZE,
sizeof(*dev->idx_table),
GFP_KERNEL);
if (! dev->idx_table)
return -ENOMEM;
}
/* fill the entries */
idx = 0;
ofs = 0;
for (i = 0; i < periods; i++) {
rest = fragsize;
/* fill descriptors for one period.
 * a period may be split into several descriptors if it
 * crosses a page boundary.
 */
do {
unsigned int r;
unsigned int flag;
unsigned int addr;
if (idx >= VIA_TABLE_SIZE) {
dev_err(&pci->dev, "too many table entries!\n");
return -EINVAL;
}
addr = snd_pcm_sgbuf_get_addr(substream, ofs);
((u32 *)dev->table.area)[idx << 1] = cpu_to_le32(addr);
r = PAGE_SIZE - (ofs % PAGE_SIZE);
if (rest < r)
r = rest;
rest -= r;
if (! rest) {
if (i == periods - 1)
flag = VIA_TBL_BIT_EOL; /* buffer boundary */
else
flag = VIA_TBL_BIT_FLAG; /* period boundary */
} else
flag = 0; /* period continues to the next */
/*
dev_dbg(&pci->dev,
"tbl %d: at %d size %d (rest %d)\n",
idx, ofs, r, rest);
*/
((u32 *)dev->table.area)[(idx<<1) + 1] = cpu_to_le32(r | flag);
dev->idx_table[idx].offset = ofs;
dev->idx_table[idx].size = r;
ofs += r;
idx++;
} while (rest > 0);
}
dev->tbl_entries = idx;
dev->bufsize = periods * fragsize;
dev->bufsize2 = dev->bufsize / 2;
return 0;
}
static int clean_via_table(struct viadev *dev, struct snd_pcm_substream *substream,
struct pci_dev *pci)
{
if (dev->table.area) {
snd_dma_free_pages(&dev->table);
dev->table.area = NULL;
}
kfree(dev->idx_table);
dev->idx_table = NULL;
return 0;
}
/*
* Basic I/O
*/
static inline unsigned int snd_via82xx_codec_xread(struct via82xx_modem *chip)
{
return inl(VIAREG(chip, AC97));
}
static inline void snd_via82xx_codec_xwrite(struct via82xx_modem *chip, unsigned int val)
{
outl(val, VIAREG(chip, AC97));
}
static int snd_via82xx_codec_ready(struct via82xx_modem *chip, int secondary)
{
unsigned int timeout = 1000; /* 1ms */
unsigned int val;
while (timeout-- > 0) {
udelay(1);
if (!((val = snd_via82xx_codec_xread(chip)) & VIA_REG_AC97_BUSY))
return val & 0xffff;
}
dev_err(chip->card->dev, "codec_ready: codec %i is not ready [0x%x]\n",
secondary, snd_via82xx_codec_xread(chip));
return -EIO;
}
static int snd_via82xx_codec_valid(struct via82xx_modem *chip, int secondary)
{
unsigned int timeout = 1000; /* 1ms */
unsigned int val, val1;
unsigned int stat = !secondary ? VIA_REG_AC97_PRIMARY_VALID :
VIA_REG_AC97_SECONDARY_VALID;
while (timeout-- > 0) {
val = snd_via82xx_codec_xread(chip);
val1 = val & (VIA_REG_AC97_BUSY | stat);
if (val1 == stat)
return val & 0xffff;
udelay(1);
}
return -EIO;
}
static void snd_via82xx_codec_wait(struct snd_ac97 *ac97)
{
struct via82xx_modem *chip = ac97->private_data;
snd_via82xx_codec_ready(chip, ac97->num);
/* here we need to wait for a fairly long time.. */
msleep(500);
}
static void snd_via82xx_codec_write(struct snd_ac97 *ac97,
unsigned short reg,
unsigned short val)
{
struct via82xx_modem *chip = ac97->private_data;
unsigned int xval;
if (reg == AC97_GPIO_STATUS) {
outl(val, VIAREG(chip, GPI_STATUS));
return;
}
xval = !ac97->num ? VIA_REG_AC97_CODEC_ID_PRIMARY : VIA_REG_AC97_CODEC_ID_SECONDARY;
xval <<= VIA_REG_AC97_CODEC_ID_SHIFT;
xval |= reg << VIA_REG_AC97_CMD_SHIFT;
xval |= val << VIA_REG_AC97_DATA_SHIFT;
snd_via82xx_codec_xwrite(chip, xval);
snd_via82xx_codec_ready(chip, ac97->num);
}
static unsigned short snd_via82xx_codec_read(struct snd_ac97 *ac97, unsigned short reg)
{
struct via82xx_modem *chip = ac97->private_data;
unsigned int xval, val = 0xffff;
int again = 0;
xval = ac97->num << VIA_REG_AC97_CODEC_ID_SHIFT;
xval |= ac97->num ? VIA_REG_AC97_SECONDARY_VALID : VIA_REG_AC97_PRIMARY_VALID;
xval |= VIA_REG_AC97_READ;
xval |= (reg & 0x7f) << VIA_REG_AC97_CMD_SHIFT;
while (1) {
if (again++ > 3) {
dev_err(chip->card->dev,
"codec_read: codec %i is not valid [0x%x]\n",
ac97->num, snd_via82xx_codec_xread(chip));
return 0xffff;
}
snd_via82xx_codec_xwrite(chip, xval);
udelay(20);
if (snd_via82xx_codec_valid(chip, ac97->num) >= 0) {
udelay(25);
val = snd_via82xx_codec_xread(chip);
break;
}
}
return val & 0xffff;
}
static void snd_via82xx_channel_reset(struct via82xx_modem *chip, struct viadev *viadev)
{
outb(VIA_REG_CTRL_PAUSE | VIA_REG_CTRL_TERMINATE | VIA_REG_CTRL_RESET,
VIADEV_REG(viadev, OFFSET_CONTROL));
inb(VIADEV_REG(viadev, OFFSET_CONTROL));
udelay(50);
/* disable interrupts */
outb(0x00, VIADEV_REG(viadev, OFFSET_CONTROL));
/* clear interrupts */
outb(0x03, VIADEV_REG(viadev, OFFSET_STATUS));
outb(0x00, VIADEV_REG(viadev, OFFSET_TYPE)); /* for via686 */
// outl(0, VIADEV_REG(viadev, OFFSET_CURR_PTR));
viadev->lastpos = 0;
}
/*
* Interrupt handler
*/
static irqreturn_t snd_via82xx_interrupt(int irq, void *dev_id)
{
struct via82xx_modem *chip = dev_id;
unsigned int status;
unsigned int i;
status = inl(VIAREG(chip, SGD_SHADOW));
if (! (status & chip->intr_mask)) {
return IRQ_NONE;
}
// _skip_sgd:
/* check status for each stream */
spin_lock(&chip->reg_lock);
for (i = 0; i < chip->num_devs; i++) {
struct viadev *viadev = &chip->devs[i];
unsigned char c_status = inb(VIADEV_REG(viadev, OFFSET_STATUS));
c_status &= (VIA_REG_STAT_EOL|VIA_REG_STAT_FLAG|VIA_REG_STAT_STOPPED);
if (! c_status)
continue;
if (viadev->substream && viadev->running) {
spin_unlock(&chip->reg_lock);
snd_pcm_period_elapsed(viadev->substream);
spin_lock(&chip->reg_lock);
}
outb(c_status, VIADEV_REG(viadev, OFFSET_STATUS)); /* ack */
}
spin_unlock(&chip->reg_lock);
return IRQ_HANDLED;
}
/*
* PCM callbacks
*/
/*
* trigger callback
*/
static int snd_via82xx_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = substream->runtime->private_data;
unsigned char val = 0;
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_SUSPEND:
val |= VIA_REG_CTRL_START;
viadev->running = 1;
break;
case SNDRV_PCM_TRIGGER_STOP:
val = VIA_REG_CTRL_TERMINATE;
viadev->running = 0;
break;
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
val |= VIA_REG_CTRL_PAUSE;
viadev->running = 0;
break;
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
viadev->running = 1;
break;
default:
return -EINVAL;
}
outb(val, VIADEV_REG(viadev, OFFSET_CONTROL));
if (cmd == SNDRV_PCM_TRIGGER_STOP)
snd_via82xx_channel_reset(chip, viadev);
return 0;
}
/*
* pointer callbacks
*/
/*
* calculate the linear position at the given sg-buffer index and the rest count
*/
#define check_invalid_pos(viadev,pos) \
((pos) < viadev->lastpos && ((pos) >= viadev->bufsize2 ||\
viadev->lastpos < viadev->bufsize2))
static inline unsigned int calc_linear_pos(struct via82xx_modem *chip,
struct viadev *viadev,
unsigned int idx,
unsigned int count)
{
unsigned int size, res;
size = viadev->idx_table[idx].size;
res = viadev->idx_table[idx].offset + size - count;
/* check the validity of the calculated position */
if (size < count) {
dev_err(chip->card->dev,
"invalid via82xx_cur_ptr (size = %d, count = %d)\n",
(int)size, (int)count);
res = viadev->lastpos;
} else if (check_invalid_pos(viadev, res)) {
#ifdef POINTER_DEBUG
dev_dbg(chip->card->dev,
"fail: idx = %i/%i, lastpos = 0x%x, bufsize2 = 0x%x, offsize = 0x%x, size = 0x%x, count = 0x%x\n",
idx, viadev->tbl_entries, viadev->lastpos,
viadev->bufsize2, viadev->idx_table[idx].offset,
viadev->idx_table[idx].size, count);
#endif
if (count && size < count) {
dev_dbg(chip->card->dev,
"invalid via82xx_cur_ptr, using last valid pointer\n");
res = viadev->lastpos;
} else {
if (! count)
/* bogus count 0 on the DMA boundary? */
res = viadev->idx_table[idx].offset;
else
/* count register returns full size
* when end of buffer is reached
*/
res = viadev->idx_table[idx].offset + size;
if (check_invalid_pos(viadev, res)) {
dev_dbg(chip->card->dev,
"invalid via82xx_cur_ptr (2), using last valid pointer\n");
res = viadev->lastpos;
}
}
}
viadev->lastpos = res; /* remember the last position */
if (res >= viadev->bufsize)
res -= viadev->bufsize;
return res;
}
/*
* get the current pointer on via686
*/
static snd_pcm_uframes_t snd_via686_pcm_pointer(struct snd_pcm_substream *substream)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = substream->runtime->private_data;
unsigned int idx, ptr, count, res;
if (snd_BUG_ON(!viadev->tbl_entries))
return 0;
if (!(inb(VIADEV_REG(viadev, OFFSET_STATUS)) & VIA_REG_STAT_ACTIVE))
return 0;
spin_lock(&chip->reg_lock);
count = inl(VIADEV_REG(viadev, OFFSET_CURR_COUNT)) & 0xffffff;
/* The via686a does not have a current index register,
* so we need to calculate the index from CURR_PTR.
*/
ptr = inl(VIADEV_REG(viadev, OFFSET_CURR_PTR));
if (ptr <= (unsigned int)viadev->table.addr)
idx = 0;
else /* CURR_PTR holds the address + 8 */
idx = ((ptr - (unsigned int)viadev->table.addr) / 8 - 1) %
viadev->tbl_entries;
res = calc_linear_pos(chip, viadev, idx, count);
spin_unlock(&chip->reg_lock);
return bytes_to_frames(substream->runtime, res);
}
/*
* hw_params callback:
* allocate the buffer and build up the buffer description table
*/
static int snd_via82xx_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *hw_params)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = substream->runtime->private_data;
int err;
err = snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(hw_params));
if (err < 0)
return err;
err = build_via_table(viadev, substream, chip->pci,
params_periods(hw_params),
params_period_bytes(hw_params));
if (err < 0)
return err;
snd_ac97_write(chip->ac97, AC97_LINE1_RATE, params_rate(hw_params));
snd_ac97_write(chip->ac97, AC97_LINE1_LEVEL, 0);
return 0;
}
/*
* hw_free callback:
* clean up the buffer description table and release the buffer
*/
static int snd_via82xx_hw_free(struct snd_pcm_substream *substream)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = substream->runtime->private_data;
clean_via_table(viadev, substream, chip->pci);
snd_pcm_lib_free_pages(substream);
return 0;
}
/*
* set up the table pointer
*/
static void snd_via82xx_set_table_ptr(struct via82xx_modem *chip, struct viadev *viadev)
{
snd_via82xx_codec_ready(chip, chip->ac97_secondary);
outl((u32)viadev->table.addr, VIADEV_REG(viadev, OFFSET_TABLE_PTR));
udelay(20);
snd_via82xx_codec_ready(chip, chip->ac97_secondary);
}
/*
* prepare callback for playback and capture
*/
static int snd_via82xx_pcm_prepare(struct snd_pcm_substream *substream)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = substream->runtime->private_data;
snd_via82xx_channel_reset(chip, viadev);
/* this must be set after channel_reset */
snd_via82xx_set_table_ptr(chip, viadev);
outb(VIA_REG_TYPE_AUTOSTART|VIA_REG_TYPE_INT_EOL|VIA_REG_TYPE_INT_FLAG,
VIADEV_REG(viadev, OFFSET_TYPE));
return 0;
}
/*
* pcm hardware definition, identical for both playback and capture
*/
static const struct snd_pcm_hardware snd_via82xx_hw =
{
.info = (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_INTERLEAVED |
SNDRV_PCM_INFO_BLOCK_TRANSFER |
SNDRV_PCM_INFO_MMAP_VALID |
/* SNDRV_PCM_INFO_RESUME | */
SNDRV_PCM_INFO_PAUSE),
.formats = SNDRV_PCM_FMTBIT_U8 | SNDRV_PCM_FMTBIT_S16_LE,
.rates = SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000 | SNDRV_PCM_RATE_KNOT,
.rate_min = 8000,
.rate_max = 16000,
.channels_min = 1,
.channels_max = 1,
.buffer_bytes_max = 128 * 1024,
.period_bytes_min = 32,
.period_bytes_max = 128 * 1024,
.periods_min = 2,
.periods_max = VIA_TABLE_SIZE / 2,
.fifo_size = 0,
};
/*
* open callback skeleton
*/
static int snd_via82xx_modem_pcm_open(struct via82xx_modem *chip, struct viadev *viadev,
struct snd_pcm_substream *substream)
{
struct snd_pcm_runtime *runtime = substream->runtime;
int err;
static const unsigned int rates[] = { 8000, 9600, 12000, 16000 };
static const struct snd_pcm_hw_constraint_list hw_constraints_rates = {
.count = ARRAY_SIZE(rates),
.list = rates,
.mask = 0,
};
runtime->hw = snd_via82xx_hw;
if ((err = snd_pcm_hw_constraint_list(runtime, 0, SNDRV_PCM_HW_PARAM_RATE,
&hw_constraints_rates)) < 0)
return err;
/* we may remove the following constraint when we modify table entries
   in the interrupt handler */
if ((err = snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS)) < 0)
return err;
runtime->private_data = viadev;
viadev->substream = substream;
return 0;
}
/*
* open callback for playback
*/
static int snd_via82xx_playback_open(struct snd_pcm_substream *substream)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = &chip->devs[chip->playback_devno + substream->number];
return snd_via82xx_modem_pcm_open(chip, viadev, substream);
}
/*
* open callback for capture
*/
static int snd_via82xx_capture_open(struct snd_pcm_substream *substream)
{
struct via82xx_modem *chip = snd_pcm_substream_chip(substream);
struct viadev *viadev = &chip->devs[chip->capture_devno + substream->pcm->device];
return snd_via82xx_modem_pcm_open(chip, viadev, substream);
}
/*
* close callback
*/
static int snd_via82xx_pcm_close(struct snd_pcm_substream *substream)
{
struct viadev *viadev = substream->runtime->private_data;
viadev->substream = NULL;
return 0;
}
/* via686 playback callbacks */
static const struct snd_pcm_ops snd_via686_playback_ops = {
.open = snd_via82xx_playback_open,
.close = snd_via82xx_pcm_close,
.ioctl = snd_pcm_lib_ioctl,
.hw_params = snd_via82xx_hw_params,
.hw_free = snd_via82xx_hw_free,
.prepare = snd_via82xx_pcm_prepare,
.trigger = snd_via82xx_pcm_trigger,
.pointer = snd_via686_pcm_pointer,
.page = snd_pcm_sgbuf_ops_page,
};
/* via686 capture callbacks */
static const struct snd_pcm_ops snd_via686_capture_ops = {
.open = snd_via82xx_capture_open,
.close = snd_via82xx_pcm_close,
.ioctl = snd_pcm_lib_ioctl,
.hw_params = snd_via82xx_hw_params,
.hw_free = snd_via82xx_hw_free,
.prepare = snd_via82xx_pcm_prepare,
.trigger = snd_via82xx_pcm_trigger,
.pointer = snd_via686_pcm_pointer,
.page = snd_pcm_sgbuf_ops_page,
};
static void init_viadev(struct via82xx_modem *chip, int idx, unsigned int reg_offset,
int direction)
{
chip->devs[idx].reg_offset = reg_offset;
chip->devs[idx].direction = direction;
chip->devs[idx].port = chip->port + reg_offset;
}
/*
* create a pcm instance for via686a/b
*/
static int snd_via686_pcm_new(struct via82xx_modem *chip)
{
struct snd_pcm *pcm;
int err;
chip->playback_devno = 0;
chip->capture_devno = 1;
chip->num_devs = 2;
chip->intr_mask = 0x330000; /* FLAGS | EOL for MR, MW */
err = snd_pcm_new(chip->card, chip->card->shortname, 0, 1, 1, &pcm);
if (err < 0)
return err;
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_via686_playback_ops);
snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_via686_capture_ops);
pcm->dev_class = SNDRV_PCM_CLASS_MODEM;
pcm->private_data = chip;
strcpy(pcm->name, chip->card->shortname);
chip->pcms[0] = pcm;
init_viadev(chip, 0, VIA_REG_MO_STATUS, 0);
init_viadev(chip, 1, VIA_REG_MI_STATUS, 1);
snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV_SG,
snd_dma_pci_data(chip->pci),
64*1024, 128*1024);
return 0;
}
/*
* Mixer part
*/
static void snd_via82xx_mixer_free_ac97_bus(struct snd_ac97_bus *bus)
{
struct via82xx_modem *chip = bus->private_data;
chip->ac97_bus = NULL;
}
static void snd_via82xx_mixer_free_ac97(struct snd_ac97 *ac97)
{
struct via82xx_modem *chip = ac97->private_data;
chip->ac97 = NULL;
}
static int snd_via82xx_mixer_new(struct via82xx_modem *chip)
{
struct snd_ac97_template ac97;
int err;
static struct snd_ac97_bus_ops ops = {
.write = snd_via82xx_codec_write,
.read = snd_via82xx_codec_read,
.wait = snd_via82xx_codec_wait,
};
if ((err = snd_ac97_bus(chip->card, 0, &ops, chip, &chip->ac97_bus)) < 0)
return err;
chip->ac97_bus->private_free = snd_via82xx_mixer_free_ac97_bus;
chip->ac97_bus->clock = chip->ac97_clock;
memset(&ac97, 0, sizeof(ac97));
ac97.private_data = chip;
ac97.private_free = snd_via82xx_mixer_free_ac97;
ac97.pci = chip->pci;
ac97.scaps = AC97_SCAP_SKIP_AUDIO | AC97_SCAP_POWER_SAVE;
ac97.num = chip->ac97_secondary;
if ((err = snd_ac97_mixer(chip->ac97_bus, &ac97, &chip->ac97)) < 0)
return err;
return 0;
}
/*
* proc interface
*/
static void snd_via82xx_proc_read(struct snd_info_entry *entry, struct snd_info_buffer *buffer)
{
struct via82xx_modem *chip = entry->private_data;
int i;
snd_iprintf(buffer, "%s\n\n", chip->card->longname);
for (i = 0; i < 0xa0; i += 4) {
snd_iprintf(buffer, "%02x: %08x\n", i, inl(chip->port + i));
}
}
static void snd_via82xx_proc_init(struct via82xx_modem *chip)
{
snd_card_ro_proc_new(chip->card, "via82xx", chip,
snd_via82xx_proc_read);
}
/*
*
*/
static int snd_via82xx_chip_init(struct via82xx_modem *chip)
{
unsigned int val;
unsigned long end_time;
unsigned char pval;
pci_read_config_byte(chip->pci, VIA_MC97_CTRL, &pval);
if ((pval & VIA_MC97_CTRL_INIT) != VIA_MC97_CTRL_INIT) {
pci_write_config_byte(chip->pci, VIA_MC97_CTRL, pval|VIA_MC97_CTRL_INIT);
udelay(100);
}
pci_read_config_byte(chip->pci, VIA_ACLINK_STAT, &pval);
if (! (pval & VIA_ACLINK_C00_READY)) { /* codec not ready? */
/* deassert ACLink reset, force SYNC */
pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL,
VIA_ACLINK_CTRL_ENABLE |
VIA_ACLINK_CTRL_RESET |
VIA_ACLINK_CTRL_SYNC);
udelay(100);
#if 1 /* FIXME: should we do full reset here for all chip models? */
pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL, 0x00);
udelay(100);
#else
/* deassert ACLink reset, force SYNC (warm AC'97 reset) */
pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL,
VIA_ACLINK_CTRL_RESET|VIA_ACLINK_CTRL_SYNC);
udelay(2);
#endif
/* ACLink on, deassert ACLink reset, VSR, SGD data out */
pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL, VIA_ACLINK_CTRL_INIT);
udelay(100);
}
pci_read_config_byte(chip->pci, VIA_ACLINK_CTRL, &pval);
if ((pval & VIA_ACLINK_CTRL_INIT) != VIA_ACLINK_CTRL_INIT) {
/* ACLink on, deassert ACLink reset, VSR, SGD data out */
pci_write_config_byte(chip->pci, VIA_ACLINK_CTRL, VIA_ACLINK_CTRL_INIT);
udelay(100);
}
/* wait until codec ready */
end_time = jiffies + msecs_to_jiffies(750);
do {
pci_read_config_byte(chip->pci, VIA_ACLINK_STAT, &pval);
if (pval & VIA_ACLINK_C00_READY) /* primary codec ready */
break;
schedule_timeout_uninterruptible(1);
} while (time_before(jiffies, end_time));
if ((val = snd_via82xx_codec_xread(chip)) & VIA_REG_AC97_BUSY)
dev_err(chip->card->dev,
"AC'97 codec is not ready [0x%x]\n", val);
snd_via82xx_codec_xwrite(chip, VIA_REG_AC97_READ |
VIA_REG_AC97_SECONDARY_VALID |
(VIA_REG_AC97_CODEC_ID_SECONDARY << VIA_REG_AC97_CODEC_ID_SHIFT));
end_time = jiffies + msecs_to_jiffies(750);
snd_via82xx_codec_xwrite(chip, VIA_REG_AC97_READ |
VIA_REG_AC97_SECONDARY_VALID |
(VIA_REG_AC97_CODEC_ID_SECONDARY << VIA_REG_AC97_CODEC_ID_SHIFT));
do {
if ((val = snd_via82xx_codec_xread(chip)) & VIA_REG_AC97_SECONDARY_VALID) {
chip->ac97_secondary = 1;
goto __ac97_ok2;
}
schedule_timeout_uninterruptible(1);
} while (time_before(jiffies, end_time));
/* This is OK; most motherboards have only one codec */
__ac97_ok2:
/* route FM trap to IRQ, disable FM trap */
// pci_write_config_byte(chip->pci, VIA_FM_NMI_CTRL, 0);
/* disable all GPI interrupts */
outl(0, VIAREG(chip, GPI_INTR));
return 0;
}
#ifdef CONFIG_PM_SLEEP
/*
* power management
*/
static int snd_via82xx_suspend(struct device *dev)
{
struct snd_card *card = dev_get_drvdata(dev);
struct via82xx_modem *chip = card->private_data;
int i;
snd_power_change_state(card, SNDRV_CTL_POWER_D3hot);
for (i = 0; i < chip->num_devs; i++)
snd_via82xx_channel_reset(chip, &chip->devs[i]);
synchronize_irq(chip->irq);
snd_ac97_suspend(chip->ac97);
return 0;
}
static int snd_via82xx_resume(struct device *dev)
{
struct snd_card *card = dev_get_drvdata(dev);
struct via82xx_modem *chip = card->private_data;
int i;
snd_via82xx_chip_init(chip);
snd_ac97_resume(chip->ac97);
for (i = 0; i < chip->num_devs; i++)
snd_via82xx_channel_reset(chip, &chip->devs[i]);
snd_power_change_state(card, SNDRV_CTL_POWER_D0);
return 0;
}
static SIMPLE_DEV_PM_OPS(snd_via82xx_pm, snd_via82xx_suspend, snd_via82xx_resume);
#define SND_VIA82XX_PM_OPS &snd_via82xx_pm
#else
#define SND_VIA82XX_PM_OPS NULL
#endif /* CONFIG_PM_SLEEP */
static int snd_via82xx_free(struct via82xx_modem *chip)
{
unsigned int i;
if (chip->irq < 0)
goto __end_hw;
/* disable interrupts */
for (i = 0; i < chip->num_devs; i++)
snd_via82xx_channel_reset(chip, &chip->devs[i]);
__end_hw:
if (chip->irq >= 0)
free_irq(chip->irq, chip);
pci_release_regions(chip->pci);
pci_disable_device(chip->pci);
kfree(chip);
return 0;
}
static int snd_via82xx_dev_free(struct snd_device *device)
{
struct via82xx_modem *chip = device->device_data;
return snd_via82xx_free(chip);
}
static int snd_via82xx_create(struct snd_card *card,
struct pci_dev *pci,
int chip_type,
int revision,
unsigned int ac97_clock,
struct via82xx_modem **r_via)
{
struct via82xx_modem *chip;
int err;
static struct snd_device_ops ops = {
.dev_free = snd_via82xx_dev_free,
};
if ((err = pci_enable_device(pci)) < 0)
return err;
if ((chip = kzalloc(sizeof(*chip), GFP_KERNEL)) == NULL) {
pci_disable_device(pci);
return -ENOMEM;
}
spin_lock_init(&chip->reg_lock);
chip->card = card;
chip->pci = pci;
chip->irq = -1;
if ((err = pci_request_regions(pci, card->driver)) < 0) {
kfree(chip);
pci_disable_device(pci);
return err;
}
chip->port = pci_resource_start(pci, 0);
if (request_irq(pci->irq, snd_via82xx_interrupt, IRQF_SHARED,
KBUILD_MODNAME, chip)) {
dev_err(card->dev, "unable to grab IRQ %d\n", pci->irq);
snd_via82xx_free(chip);
return -EBUSY;
}
chip->irq = pci->irq;
if (ac97_clock >= 8000 && ac97_clock <= 48000)
chip->ac97_clock = ac97_clock;
synchronize_irq(chip->irq);
if ((err = snd_via82xx_chip_init(chip)) < 0) {
snd_via82xx_free(chip);
return err;
}
if ((err = snd_device_new(card, SNDRV_DEV_LOWLEVEL, chip, &ops)) < 0) {
snd_via82xx_free(chip);
return err;
}
/* The 8233 ac97 controller does not implement the master bit
* in the pci command register. IMHO this is a violation of the PCI spec.
* We call pci_set_master here because it does not hurt. */
pci_set_master(pci);
*r_via = chip;
return 0;
}
static int snd_via82xx_probe(struct pci_dev *pci,
const struct pci_device_id *pci_id)
{
struct snd_card *card;
struct via82xx_modem *chip;
int chip_type = 0, card_type;
unsigned int i;
int err;
err = snd_card_new(&pci->dev, index, id, THIS_MODULE, 0, &card);
if (err < 0)
return err;
card_type = pci_id->driver_data;
switch (card_type) {
case TYPE_CARD_VIA82XX_MODEM:
strcpy(card->driver, "VIA82XX-MODEM");
sprintf(card->shortname, "VIA 82XX modem");
break;
default:
dev_err(card->dev, "invalid card type %d\n", card_type);
err = -EINVAL;
goto __error;
}
if ((err = snd_via82xx_create(card, pci, chip_type, pci->revision,
ac97_clock, &chip)) < 0)
goto __error;
card->private_data = chip;
if ((err = snd_via82xx_mixer_new(chip)) < 0)
goto __error;
if ((err = snd_via686_pcm_new(chip)) < 0)
goto __error;
/* disable interrupts */
for (i = 0; i < chip->num_devs; i++)
snd_via82xx_channel_reset(chip, &chip->devs[i]);
sprintf(card->longname, "%s at 0x%lx, irq %d",
card->shortname, chip->port, chip->irq);
snd_via82xx_proc_init(chip);
if ((err = snd_card_register(card)) < 0) {
snd_card_free(card);
return err;
}
pci_set_drvdata(pci, card);
return 0;
__error:
snd_card_free(card);
return err;
}
static void snd_via82xx_remove(struct pci_dev *pci)
{
snd_card_free(pci_get_drvdata(pci));
}
static struct pci_driver via82xx_modem_driver = {
.name = KBUILD_MODNAME,
.id_table = snd_via82xx_modem_ids,
.probe = snd_via82xx_probe,
.remove = snd_via82xx_remove,
.driver = {
.pm = SND_VIA82XX_PM_OPS,
},
};
module_pci_driver(via82xx_modem_driver);
|
{
"pile_set_name": "Github"
}
|
//--------------------------------------------------------------------------------------
// File: Keyboard.h
//
// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
// PARTICULAR PURPOSE.
//
// Copyright (c) Microsoft Corporation. All rights reserved.
//
// http://go.microsoft.com/fwlink/?LinkId=248929
//--------------------------------------------------------------------------------------
#pragma once
// VS 2010/2012 do not support =default =delete
#ifndef DIRECTX_CTOR_DEFAULT
#if defined(_MSC_VER) && (_MSC_VER < 1800)
#define DIRECTX_CTOR_DEFAULT {}
#define DIRECTX_CTOR_DELETE ;
#else
#define DIRECTX_CTOR_DEFAULT =default;
#define DIRECTX_CTOR_DELETE =delete;
#endif
#endif
#pragma warning(push)
#pragma warning(disable : 4005)
#include <stdint.h>
#pragma warning(pop)
#include <memory>
#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_APP)
namespace ABI { namespace Windows { namespace UI { namespace Core { struct ICoreWindow; } } } }
#endif
namespace DirectX
{
class Keyboard
{
public:
Keyboard();
Keyboard(Keyboard&& moveFrom);
Keyboard& operator= (Keyboard&& moveFrom);
virtual ~Keyboard();
enum Keys
{
None = 0,
Back = 0x8,
Tab = 0x9,
Enter = 0xd,
Pause = 0x13,
CapsLock = 0x14,
Kana = 0x15,
Kanji = 0x19,
Escape = 0x1b,
ImeConvert = 0x1c,
ImeNoConvert = 0x1d,
Space = 0x20,
PageUp = 0x21,
PageDown = 0x22,
End = 0x23,
Home = 0x24,
Left = 0x25,
Up = 0x26,
Right = 0x27,
Down = 0x28,
Select = 0x29,
Print = 0x2a,
Execute = 0x2b,
PrintScreen = 0x2c,
Insert = 0x2d,
Delete = 0x2e,
Help = 0x2f,
D0 = 0x30,
D1 = 0x31,
D2 = 0x32,
D3 = 0x33,
D4 = 0x34,
D5 = 0x35,
D6 = 0x36,
D7 = 0x37,
D8 = 0x38,
D9 = 0x39,
A = 0x41,
B = 0x42,
C = 0x43,
D = 0x44,
E = 0x45,
F = 0x46,
G = 0x47,
H = 0x48,
I = 0x49,
J = 0x4a,
K = 0x4b,
L = 0x4c,
M = 0x4d,
N = 0x4e,
O = 0x4f,
P = 0x50,
Q = 0x51,
R = 0x52,
S = 0x53,
T = 0x54,
U = 0x55,
V = 0x56,
W = 0x57,
X = 0x58,
Y = 0x59,
Z = 0x5a,
LeftWindows = 0x5b,
RightWindows = 0x5c,
Apps = 0x5d,
Sleep = 0x5f,
NumPad0 = 0x60,
NumPad1 = 0x61,
NumPad2 = 0x62,
NumPad3 = 0x63,
NumPad4 = 0x64,
NumPad5 = 0x65,
NumPad6 = 0x66,
NumPad7 = 0x67,
NumPad8 = 0x68,
NumPad9 = 0x69,
Multiply = 0x6a,
Add = 0x6b,
Separator = 0x6c,
Subtract = 0x6d,
Decimal = 0x6e,
Divide = 0x6f,
F1 = 0x70,
F2 = 0x71,
F3 = 0x72,
F4 = 0x73,
F5 = 0x74,
F6 = 0x75,
F7 = 0x76,
F8 = 0x77,
F9 = 0x78,
F10 = 0x79,
F11 = 0x7a,
F12 = 0x7b,
F13 = 0x7c,
F14 = 0x7d,
F15 = 0x7e,
F16 = 0x7f,
F17 = 0x80,
F18 = 0x81,
F19 = 0x82,
F20 = 0x83,
F21 = 0x84,
F22 = 0x85,
F23 = 0x86,
F24 = 0x87,
NumLock = 0x90,
Scroll = 0x91,
LeftShift = 0xa0,
RightShift = 0xa1,
LeftControl = 0xa2,
RightControl = 0xa3,
LeftAlt = 0xa4,
RightAlt = 0xa5,
BrowserBack = 0xa6,
BrowserForward = 0xa7,
BrowserRefresh = 0xa8,
BrowserStop = 0xa9,
BrowserSearch = 0xaa,
BrowserFavorites = 0xab,
BrowserHome = 0xac,
VolumeMute = 0xad,
VolumeDown = 0xae,
VolumeUp = 0xaf,
MediaNextTrack = 0xb0,
MediaPreviousTrack = 0xb1,
MediaStop = 0xb2,
MediaPlayPause = 0xb3,
LaunchMail = 0xb4,
SelectMedia = 0xb5,
LaunchApplication1 = 0xb6,
LaunchApplication2 = 0xb7,
OemSemicolon = 0xba,
OemPlus = 0xbb,
OemComma = 0xbc,
OemMinus = 0xbd,
OemPeriod = 0xbe,
OemQuestion = 0xbf,
OemTilde = 0xc0,
OemOpenBrackets = 0xdb,
OemPipe = 0xdc,
OemCloseBrackets = 0xdd,
OemQuotes = 0xde,
Oem8 = 0xdf,
OemBackslash = 0xe2,
ProcessKey = 0xe5,
OemCopy = 0xf2,
OemAuto = 0xf3,
OemEnlW = 0xf4,
Attn = 0xf6,
Crsel = 0xf7,
Exsel = 0xf8,
EraseEof = 0xf9,
Play = 0xfa,
Zoom = 0xfb,
Pa1 = 0xfd,
OemClear = 0xfe,
};
struct State
{
bool Reserved0 : 8;
bool Back : 1; // VK_BACK, 0x8
bool Tab : 1; // VK_TAB, 0x9
bool Reserved1 : 3;
bool Enter : 1; // VK_RETURN, 0xD
bool Reserved2 : 2;
bool Reserved3 : 3;
bool Pause : 1; // VK_PAUSE, 0x13
bool CapsLock : 1; // VK_CAPITAL, 0x14
bool Kana : 1; // VK_KANA, 0x15
bool Reserved4 : 2;
bool Reserved5 : 1;
bool Kanji : 1; // VK_KANJI, 0x19
bool Reserved6 : 1;
bool Escape : 1; // VK_ESCAPE, 0x1B
bool ImeConvert : 1; // VK_CONVERT, 0x1C
bool ImeNoConvert : 1; // VK_NONCONVERT, 0x1D
bool Reserved7 : 2;
bool Space : 1; // VK_SPACE, 0x20
bool PageUp : 1; // VK_PRIOR, 0x21
bool PageDown : 1; // VK_NEXT, 0x22
bool End : 1; // VK_END, 0x23
bool Home : 1; // VK_HOME, 0x24
bool Left : 1; // VK_LEFT, 0x25
bool Up : 1; // VK_UP, 0x26
bool Right : 1; // VK_RIGHT, 0x27
bool Down : 1; // VK_DOWN, 0x28
bool Select : 1; // VK_SELECT, 0x29
bool Print : 1; // VK_PRINT, 0x2A
bool Execute : 1; // VK_EXECUTE, 0x2B
bool PrintScreen : 1; // VK_SNAPSHOT, 0x2C
bool Insert : 1; // VK_INSERT, 0x2D
bool Delete : 1; // VK_DELETE, 0x2E
bool Help : 1; // VK_HELP, 0x2F
bool D0 : 1; // 0x30
bool D1 : 1; // 0x31
bool D2 : 1; // 0x32
bool D3 : 1; // 0x33
bool D4 : 1; // 0x34
bool D5 : 1; // 0x35
bool D6 : 1; // 0x36
bool D7 : 1; // 0x37
bool D8 : 1; // 0x38
bool D9 : 1; // 0x39
bool Reserved8 : 6;
bool Reserved9 : 1;
bool A : 1; // 0x41
bool B : 1; // 0x42
bool C : 1; // 0x43
bool D : 1; // 0x44
bool E : 1; // 0x45
bool F : 1; // 0x46
bool G : 1; // 0x47
bool H : 1; // 0x48
bool I : 1; // 0x49
bool J : 1; // 0x4A
bool K : 1; // 0x4B
bool L : 1; // 0x4C
bool M : 1; // 0x4D
bool N : 1; // 0x4E
bool O : 1; // 0x4F
bool P : 1; // 0x50
bool Q : 1; // 0x51
bool R : 1; // 0x52
bool S : 1; // 0x53
bool T : 1; // 0x54
bool U : 1; // 0x55
bool V : 1; // 0x56
bool W : 1; // 0x57
bool X : 1; // 0x58
bool Y : 1; // 0x59
bool Z : 1; // 0x5A
bool LeftWindows : 1; // VK_LWIN, 0x5B
bool RightWindows : 1; // VK_RWIN, 0x5C
bool Apps : 1; // VK_APPS, 0x5D
bool Reserved10 : 1;
bool Sleep : 1; // VK_SLEEP, 0x5F
bool NumPad0 : 1; // VK_NUMPAD0, 0x60
bool NumPad1 : 1; // VK_NUMPAD1, 0x61
bool NumPad2 : 1; // VK_NUMPAD2, 0x62
bool NumPad3 : 1; // VK_NUMPAD3, 0x63
bool NumPad4 : 1; // VK_NUMPAD4, 0x64
bool NumPad5 : 1; // VK_NUMPAD5, 0x65
bool NumPad6 : 1; // VK_NUMPAD6, 0x66
bool NumPad7 : 1; // VK_NUMPAD7, 0x67
bool NumPad8 : 1; // VK_NUMPAD8, 0x68
bool NumPad9 : 1; // VK_NUMPAD9, 0x69
bool Multiply : 1; // VK_MULTIPLY, 0x6A
bool Add : 1; // VK_ADD, 0x6B
bool Separator : 1; // VK_SEPARATOR, 0x6C
bool Subtract : 1; // VK_SUBTRACT, 0x6D
bool Decimal : 1; // VK_DECIMAL, 0x6E
bool Divide : 1; // VK_DIVIDE, 0x6F
bool F1 : 1; // VK_F1, 0x70
bool F2 : 1; // VK_F2, 0x71
bool F3 : 1; // VK_F3, 0x72
bool F4 : 1; // VK_F4, 0x73
bool F5 : 1; // VK_F5, 0x74
bool F6 : 1; // VK_F6, 0x75
bool F7 : 1; // VK_F7, 0x76
bool F8 : 1; // VK_F8, 0x77
bool F9 : 1; // VK_F9, 0x78
bool F10 : 1; // VK_F10, 0x79
bool F11 : 1; // VK_F11, 0x7A
bool F12 : 1; // VK_F12, 0x7B
bool F13 : 1; // VK_F13, 0x7C
bool F14 : 1; // VK_F14, 0x7D
bool F15 : 1; // VK_F15, 0x7E
bool F16 : 1; // VK_F16, 0x7F
bool F17 : 1; // VK_F17, 0x80
bool F18 : 1; // VK_F18, 0x81
bool F19 : 1; // VK_F19, 0x82
bool F20 : 1; // VK_F20, 0x83
bool F21 : 1; // VK_F21, 0x84
bool F22 : 1; // VK_F22, 0x85
bool F23 : 1; // VK_F23, 0x86
bool F24 : 1; // VK_F24, 0x87
bool Reserved11 : 8;
bool NumLock : 1; // VK_NUMLOCK, 0x90
bool Scroll : 1; // VK_SCROLL, 0x91
bool Reserved12 : 6;
bool Reserved13 : 8;
bool LeftShift : 1; // VK_LSHIFT, 0xA0
bool RightShift : 1; // VK_RSHIFT, 0xA1
bool LeftControl : 1; // VK_LCONTROL, 0xA2
bool RightControl : 1; // VK_RCONTROL, 0xA3
bool LeftAlt : 1; // VK_LMENU, 0xA4
bool RightAlt : 1; // VK_RMENU, 0xA5
bool BrowserBack : 1; // VK_BROWSER_BACK, 0xA6
bool BrowserForward : 1; // VK_BROWSER_FORWARD, 0xA7
bool BrowserRefresh : 1; // VK_BROWSER_REFRESH, 0xA8
bool BrowserStop : 1; // VK_BROWSER_STOP, 0xA9
bool BrowserSearch : 1; // VK_BROWSER_SEARCH, 0xAA
bool BrowserFavorites : 1; // VK_BROWSER_FAVORITES, 0xAB
bool BrowserHome : 1; // VK_BROWSER_HOME, 0xAC
bool VolumeMute : 1; // VK_VOLUME_MUTE, 0xAD
bool VolumeDown : 1; // VK_VOLUME_DOWN, 0xAE
bool VolumeUp : 1; // VK_VOLUME_UP, 0xAF
bool MediaNextTrack : 1; // VK_MEDIA_NEXT_TRACK, 0xB0
bool MediaPreviousTrack : 1;// VK_MEDIA_PREV_TRACK, 0xB1
bool MediaStop : 1; // VK_MEDIA_STOP, 0xB2
bool MediaPlayPause : 1; // VK_MEDIA_PLAY_PAUSE, 0xB3
bool LaunchMail : 1; // VK_LAUNCH_MAIL, 0xB4
bool SelectMedia : 1; // VK_LAUNCH_MEDIA_SELECT, 0xB5
bool LaunchApplication1 : 1;// VK_LAUNCH_APP1, 0xB6
bool LaunchApplication2 : 1;// VK_LAUNCH_APP2, 0xB7
bool Reserved14 : 2;
bool OemSemicolon : 1; // VK_OEM_1, 0xBA
bool OemPlus : 1; // VK_OEM_PLUS, 0xBB
bool OemComma : 1; // VK_OEM_COMMA, 0xBC
bool OemMinus : 1; // VK_OEM_MINUS, 0xBD
bool OemPeriod : 1; // VK_OEM_PERIOD, 0xBE
bool OemQuestion : 1; // VK_OEM_2, 0xBF
bool OemTilde : 1; // VK_OEM_3, 0xC0
bool Reserved15 : 7;
bool Reserved16 : 8;
bool Reserved17 : 8;
bool Reserved18 : 3;
bool OemOpenBrackets : 1; // VK_OEM_4, 0xDB
bool OemPipe : 1; // VK_OEM_5, 0xDC
bool OemCloseBrackets : 1; // VK_OEM_6, 0xDD
bool OemQuotes : 1; // VK_OEM_7, 0xDE
bool Oem8 : 1; // VK_OEM_8, 0xDF
bool Reserved19 : 2;
bool OemBackslash : 1; // VK_OEM_102, 0xE2
bool Reserved20 : 2;
bool ProcessKey : 1; // VK_PROCESSKEY, 0xE5
bool Reserved21 : 2;
bool Reserved22 : 8;
bool Reserved23 : 2;
bool OemCopy : 1; // 0XF2
bool OemAuto : 1; // 0xF3
bool OemEnlW : 1; // 0xF4
bool Reserved24 : 1;
bool Attn : 1; // VK_ATTN, 0xF6
bool Crsel : 1; // VK_CRSEL, 0xF7
bool Exsel : 1; // VK_EXSEL, 0xF8
bool EraseEof : 1; // VK_EREOF, 0xF9
bool Play : 1; // VK_PLAY, 0xFA
bool Zoom : 1; // VK_ZOOM, 0xFB
bool Reserved25 : 1;
bool Pa1 : 1; // VK_PA1, 0xFD
bool OemClear : 1; // VK_OEM_CLEAR, 0xFE
bool Reserved26: 1;
bool __cdecl IsKeyDown(Keys key) const
{
if (key >= 0 && key <= 0xfe)
{
auto ptr = reinterpret_cast<const uint32_t*>(this);
unsigned int bf = 1u << (key & 0x1f);
return (ptr[(key >> 5)] & bf) != 0;
}
return false;
}
bool __cdecl IsKeyUp(Keys key) const
{
if (key >= 0 && key <= 0xfe)
{
auto ptr = reinterpret_cast<const uint32_t*>(this);
unsigned int bf = 1u << (key & 0x1f);
return (ptr[(key >> 5)] & bf) == 0;
}
return false;
}
};
class KeyboardStateTracker
{
public:
State released;
State pressed;
KeyboardStateTracker() { Reset(); }
void __cdecl Update(const State& state);
void __cdecl Reset();
bool __cdecl IsKeyPressed(Keys key) const { return pressed.IsKeyDown(key); }
bool __cdecl IsKeyReleased(Keys key) const { return released.IsKeyDown(key); }
public:
State lastState;
};
// Retrieve the current state of the keyboard
State __cdecl GetState() const;
// Reset the keyboard state
void __cdecl Reset();
#if !defined(WINAPI_FAMILY) || (WINAPI_FAMILY == WINAPI_FAMILY_DESKTOP_APP) && defined(WM_USER)
static void __cdecl ProcessMessage(UINT message, WPARAM wParam, LPARAM lParam);
#endif
#if defined(WINAPI_FAMILY) && (WINAPI_FAMILY == WINAPI_FAMILY_APP)
void __cdecl SetWindow(ABI::Windows::UI::Core::ICoreWindow* window);
#ifdef __cplusplus_winrt
void __cdecl SetWindow(Windows::UI::Core::CoreWindow^ window)
{
// See https://msdn.microsoft.com/en-us/library/hh755802.aspx
SetWindow(reinterpret_cast<ABI::Windows::UI::Core::ICoreWindow*>(window));
}
#endif
#endif
// Singleton
static Keyboard& __cdecl Get();
private:
// Private implementation.
class Impl;
std::unique_ptr<Impl> pImpl;
// Prevent copying.
Keyboard(Keyboard const&) DIRECTX_CTOR_DELETE
Keyboard& operator=(Keyboard const&) DIRECTX_CTOR_DELETE
};
}
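The `State` bitfield above is just 256 key flags packed into eight 32-bit words, and `IsKeyDown` indexes it with `ptr[key >> 5] & (1u << (key & 0x1f))`. A standalone Python sketch of that indexing and of the `KeyboardStateTracker`-style edge detection (a sketch of the technique, not DirectXTK code; `KeyState` and `pressed` are hypothetical names):

```python
class KeyState:
    """256 virtual-key flags packed into eight 32-bit words, as in Keyboard::State."""

    def __init__(self):
        self.words = [0] * 8  # 8 x 32 bits = 256 key flags

    def set_down(self, key, down=True):
        word, bit = key >> 5, 1 << (key & 0x1F)
        if down:
            self.words[word] |= bit
        else:
            self.words[word] &= ~bit

    def is_key_down(self, key):
        # Same lookup as State::IsKeyDown: word index is key / 32, bit is key % 32.
        if 0 <= key <= 0xFE:
            return (self.words[key >> 5] & (1 << (key & 0x1F))) != 0
        return False


def pressed(last, now, key):
    """Edge detection in the spirit of KeyboardStateTracker::IsKeyPressed:
    the key is down now but was up in the previous snapshot."""
    return now.is_key_down(key) and not last.is_key_down(key)
```

The tracker in the header works the same way: it diffs the current `State` against `lastState` each `Update` to produce the `pressed` and `released` bitfields.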
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved.
* Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
*
* This copyrighted material is made available to anyone wishing to use,
* modify, copy, or redistribute it subject to the terms and conditions
* of the GNU General Public License version 2.
*/
#ifndef __DIR_DOT_H__
#define __DIR_DOT_H__
#include <linux/dcache.h>
#include <linux/crc32.h>
struct inode;
struct gfs2_inode;
struct gfs2_inum;
struct buffer_head;
struct gfs2_dirent;
struct gfs2_diradd {
unsigned nr_blocks;
struct gfs2_dirent *dent;
struct buffer_head *bh;
int save_loc;
};
extern struct inode *gfs2_dir_search(struct inode *dir,
const struct qstr *filename,
bool fail_on_exist);
extern int gfs2_dir_check(struct inode *dir, const struct qstr *filename,
const struct gfs2_inode *ip);
extern int gfs2_dir_add(struct inode *inode, const struct qstr *filename,
const struct gfs2_inode *ip, struct gfs2_diradd *da);
static inline void gfs2_dir_no_add(struct gfs2_diradd *da)
{
if (da->bh)
brelse(da->bh);
da->bh = NULL;
}
extern int gfs2_dir_del(struct gfs2_inode *dip, const struct dentry *dentry);
extern int gfs2_dir_read(struct inode *inode, struct dir_context *ctx,
struct file_ra_state *f_ra);
extern int gfs2_dir_mvino(struct gfs2_inode *dip, const struct qstr *filename,
const struct gfs2_inode *nip, unsigned int new_type);
extern int gfs2_dir_exhash_dealloc(struct gfs2_inode *dip);
extern int gfs2_diradd_alloc_required(struct inode *dir,
const struct qstr *filename,
struct gfs2_diradd *da);
extern int gfs2_dir_get_new_buffer(struct gfs2_inode *ip, u64 block,
struct buffer_head **bhp);
extern void gfs2_dir_hash_inval(struct gfs2_inode *ip);
static inline u32 gfs2_disk_hash(const char *data, int len)
{
return crc32_le((u32)~0, data, len) ^ (u32)~0;
}
static inline void gfs2_str2qstr(struct qstr *name, const char *fname)
{
name->name = fname;
name->len = strlen(fname);
name->hash = gfs2_disk_hash(name->name, name->len);
}
/* N.B. This probably ought to take inum & type as args as well */
static inline void gfs2_qstr2dirent(const struct qstr *name, u16 reclen, struct gfs2_dirent *dent)
{
dent->de_inum.no_addr = cpu_to_be64(0);
dent->de_inum.no_formal_ino = cpu_to_be64(0);
dent->de_hash = cpu_to_be32(name->hash);
dent->de_rec_len = cpu_to_be16(reclen);
dent->de_name_len = cpu_to_be16(name->len);
dent->de_type = cpu_to_be16(0);
memset(dent->__pad, 0, sizeof(dent->__pad));
memcpy(dent + 1, name->name, name->len);
}
extern struct qstr gfs2_qdot;
extern struct qstr gfs2_qdotdot;
#endif /* __DIR_DOT_H__ */
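`gfs2_disk_hash` above is `crc32_le` seeded with `~0` and XORed with `~0` at the end, which is exactly the standard reflected CRC-32 (polynomial `0xEDB88320`). A bit-level Python sketch of the same computation, cross-checked against `zlib.crc32`, which implements the identical CRC:

```python
import zlib


def gfs2_disk_hash(data: bytes) -> int:
    # Reflected CRC-32, poly 0xEDB88320, init ~0, final xor ~0 --
    # the same computation as crc32_le((u32)~0, data, len) ^ (u32)~0.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


# Sanity check: this is the standard CRC-32 as implemented by zlib.
assert gfs2_disk_hash(b"filename") == zlib.crc32(b"filename")
```

This is why GFS2 directory hashes can be reproduced by any standard CRC-32 implementation when inspecting on-disk directory leaf blocks.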
|
{
"pile_set_name": "Github"
}
|
```tut:book
"File " + 2 + "b"
```
|
{
"pile_set_name": "Github"
}
|
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.heibaiying</groupId>
<artifactId>spring-rabbitmq</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<spring-base-version>5.1.3.RELEASE</spring-base-version>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring-base-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>${spring-base-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring-base-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>${spring-base-version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
<version>${spring-base-version}</version>
</dependency>
<!-- Spring RabbitMQ integration dependency -->
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
<version>2.1.2.RELEASE</version>
</dependency>
<!-- RabbitMQ message object serialization depends on this package -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.9.8</version>
</dependency>
<!-- Unit test related dependencies -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${spring-base-version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<version>1.18.4</version>
<scope>provided</scope>
</dependency>
</dependencies>
<build>
<finalName>spring-rabbitmq</finalName>
<resources>
<resource>
<directory>src/main/resources</directory>
</resource>
<resource>
<directory>src/main/java</directory>
</resource>
</resources>
</build>
</project>
|
{
"pile_set_name": "Github"
}
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package stream
import (
"context"
metrics "github.com/rcrowley/go-metrics"
"mosn.io/api"
"mosn.io/mosn/pkg/log"
"mosn.io/mosn/pkg/types"
"mosn.io/pkg/buffer"
)
// stream.Client
// types.ReadFilter
// types.StreamConnectionEventListener
type client struct {
Protocol types.ProtocolName
Connection types.ClientConnection
Host types.Host
ClientStreamConnection types.ClientStreamConnection
StreamConnectionEventListener types.StreamConnectionEventListener
ConnectedFlag bool
}
// NewStreamClient
// Create a codec client used to send/receive streams over a connection
func NewStreamClient(ctx context.Context, prot api.Protocol, connection types.ClientConnection, host types.Host) Client {
client := &client{
Protocol: prot,
Connection: connection,
Host: host,
}
if factory, ok := streamFactories[prot]; ok {
client.ClientStreamConnection = factory.CreateClientStream(ctx, connection, client, client)
} else {
return nil
}
connection.AddConnectionEventListener(client)
connection.FilterManager().AddReadFilter(client)
connection.SetNoDelay(true)
return client
}
// NewBiDirectStreamClient
// Create a bidirectional client for two-way stream communication
func NewBiDirectStreamClient(ctx context.Context, prot api.Protocol, connection types.ClientConnection, host types.Host,
serverCallbacks types.ServerStreamConnectionEventListener) Client {
client := &client{
Protocol: prot,
Connection: connection,
Host: host,
}
if factory, ok := streamFactories[prot]; ok {
client.ClientStreamConnection = factory.CreateBiDirectStream(ctx, connection, client, serverCallbacks)
} else {
return nil
}
connection.AddConnectionEventListener(client)
connection.FilterManager().AddReadFilter(client)
connection.SetNoDelay(true)
return client
}
// Client
func (c *client) ConnID() uint64 {
return c.Connection.ID()
}
func (c *client) Connect() error {
return c.Connection.Connect()
}
func (c *client) AddConnectionEventListener(listener api.ConnectionEventListener) {
c.Connection.AddConnectionEventListener(listener)
}
func (c *client) ActiveRequestsNum() int {
return c.ClientStreamConnection.ActiveStreamsNum()
}
func (c *client) SetConnectionCollector(read, write metrics.Counter) {
c.Connection.SetCollector(read, write)
}
func (c *client) SetStreamConnectionEventListener(listener types.StreamConnectionEventListener) {
c.StreamConnectionEventListener = listener
}
func (c *client) NewStream(context context.Context, respReceiver types.StreamReceiveListener) types.StreamSender {
// oneway
if respReceiver == nil {
log.DefaultLogger.Debugf("oneway client NewStream")
return c.ClientStreamConnection.NewStream(context, nil)
}
wrapper := &clientStreamReceiverWrapper{
streamReceiver: respReceiver,
}
streamSender := c.ClientStreamConnection.NewStream(context, wrapper)
wrapper.stream = streamSender.GetStream()
return streamSender
}
func (c *client) Close() {
c.Connection.Close(api.NoFlush, api.LocalClose)
}
// types.StreamConnectionEventListener
func (c *client) OnGoAway() {
c.StreamConnectionEventListener.OnGoAway()
}
// types.ConnectionEventListener
// conn callbacks
func (c *client) OnEvent(event api.ConnectionEvent) {
log.DefaultLogger.Debugf("client OnEvent %v, connected %v", event, c.ConnectedFlag)
switch event {
case api.Connected:
c.ConnectedFlag = true
}
if reason, ok := c.ClientStreamConnection.CheckReasonError(c.ConnectedFlag, event); !ok {
c.ClientStreamConnection.Reset(reason)
}
}
// types.ReadFilter
// read filter, recv upstream data
func (c *client) OnData(buffer buffer.IoBuffer) api.FilterStatus {
c.ClientStreamConnection.Dispatch(buffer)
return api.Stop
}
func (c *client) OnNewConnection() api.FilterStatus {
return api.Continue
}
func (c *client) InitializeReadFilterCallbacks(cb api.ReadFilterCallbacks) {}
// uniform wrapper to destroy stream at client side
type clientStreamReceiverWrapper struct {
stream types.Stream
streamReceiver types.StreamReceiveListener
}
func (w *clientStreamReceiverWrapper) OnReceive(ctx context.Context, headers types.HeaderMap, data types.IoBuffer, trailers types.HeaderMap) {
w.stream.DestroyStream()
w.streamReceiver.OnReceive(ctx, headers, data, trailers)
}
func (w *clientStreamReceiverWrapper) OnDecodeError(ctx context.Context, err error, headers types.HeaderMap) {
w.streamReceiver.OnDecodeError(ctx, err, headers)
}
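The `clientStreamReceiverWrapper` above exists to guarantee the stream is destroyed before the user's receive callback runs. A minimal Python sketch of that wrapper/delegation pattern (a sketch of the idea, not MOSN code; all names are hypothetical):

```python
class ReceiverWrapper:
    """Destroy the stream first, then forward the callback to the real
    receiver -- mirroring clientStreamReceiverWrapper.OnReceive."""

    def __init__(self, receiver):
        self.receiver = receiver
        self.stream = None  # set after the stream is created, as in the Go code

    def on_receive(self, headers, data, trailers):
        self.stream.destroy()  # release stream resources before user code runs
        self.receiver.on_receive(headers, data, trailers)
```

Ordering matters here: if the user callback raised or blocked before the stream was destroyed, the stream's resources could leak, which is why the destroy call comes first.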
|
{
"pile_set_name": "Github"
}
|
Subject: january - meter 2186 clear lake city gate
i have flow without a nom each day at meter 2186 1st = 1375
2 nd = 28
3 rd = 2532
4 th = 5952
i assume that this is entex meter . is this volume captured at meter 2000 ? i
will need a nom to support this flow . please advise . thank you .
|
{
"pile_set_name": "Github"
}
|
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.
// +build integration_cli
package base
import (
"fmt"
"io/ioutil"
"math/rand"
"os"
"os/exec"
"path/filepath"
"regexp"
"strings"
"time"
"github.com/stretchr/testify/suite"
"github.com/talos-systems/go-retry/retry"
"github.com/talos-systems/talos/pkg/cluster"
"github.com/talos-systems/talos/pkg/cmd"
"github.com/talos-systems/talos/pkg/machinery/config/types/v1alpha1/machine"
"github.com/talos-systems/talos/pkg/machinery/constants"
)
// CLISuite is a base suite for CLI tests.
type CLISuite struct {
suite.Suite
TalosSuite
}
// DiscoverNodes provides list of Talos nodes in the cluster.
//
// As there's no way to provide this functionality via Talos CLI, it relies on cluster info.
func (cliSuite *CLISuite) DiscoverNodes() cluster.Info {
discoveredNodes := cliSuite.TalosSuite.DiscoverNodes()
if discoveredNodes != nil {
return discoveredNodes
}
discoveredNodes = cliSuite.discoverKubectl()
if discoveredNodes != nil {
return discoveredNodes
}
// still no nodes, skip the test
cliSuite.T().Skip("no nodes were discovered")
return nil
}
// RandomDiscoveredNode returns a random node of the specified type (or any type if no types are specified).
func (cliSuite *CLISuite) RandomDiscoveredNode(types ...machine.Type) string {
nodeInfo := cliSuite.DiscoverNodes()
var nodes []string
if len(types) == 0 {
nodes = nodeInfo.Nodes()
} else {
for _, t := range types {
nodes = append(nodes, nodeInfo.NodesByType(t)...)
}
}
cliSuite.Require().NotEmpty(nodes)
return nodes[rand.Intn(len(nodes))]
}
func (cliSuite *CLISuite) discoverKubectl() cluster.Info {
// pull down kubeconfig into temporary directory
tempDir, err := ioutil.TempDir("", "talos")
cliSuite.Require().NoError(err)
defer os.RemoveAll(tempDir) //nolint: errcheck
// rely on `nodes:` being set in talosconfig
cliSuite.RunCLI([]string{"kubeconfig", tempDir}, StdoutEmpty())
masterNodes, err := cmd.Run(cliSuite.KubectlPath, "--kubeconfig", filepath.Join(tempDir, "kubeconfig"), "get", "nodes",
"-o", "jsonpath={.items[*].status.addresses[?(@.type==\"InternalIP\")].address}", fmt.Sprintf("--selector=%s", constants.LabelNodeRoleMaster))
cliSuite.Require().NoError(err)
workerNodes, err := cmd.Run(cliSuite.KubectlPath, "--kubeconfig", filepath.Join(tempDir, "kubeconfig"), "get", "nodes",
"-o", "jsonpath={.items[*].status.addresses[?(@.type==\"InternalIP\")].address}", fmt.Sprintf("--selector=!%s", constants.LabelNodeRoleMaster))
cliSuite.Require().NoError(err)
return &infoWrapper{
masterNodes: strings.Fields(strings.TrimSpace(masterNodes)),
workerNodes: strings.Fields(strings.TrimSpace(workerNodes)),
}
}
func (cliSuite *CLISuite) buildCLICmd(args []string) *exec.Cmd {
// TODO: add support for calling `talosctl config endpoint` before running talosctl
args = append([]string{"--talosconfig", cliSuite.TalosConfig}, args...)
return exec.Command(cliSuite.TalosctlPath, args...)
}
// RunCLI runs talosctl binary with the options provided.
func (cliSuite *CLISuite) RunCLI(args []string, options ...RunOption) {
Run(&cliSuite.Suite, cliSuite.buildCLICmd(args), options...)
}
func (cliSuite *CLISuite) RunAndWaitForMatch(args []string, regex *regexp.Regexp, duration time.Duration, options ...retry.Option) {
cliSuite.Assert().NoError(retry.Constant(duration, options...).Retry(func() error {
stdout, _, err := RunAndWait(&cliSuite.Suite, cliSuite.buildCLICmd(args))
if err != nil {
return retry.UnexpectedError(err)
}
if !regex.MatchString(stdout.String()) {
return retry.ExpectedError(fmt.Errorf("stdout doesn't match: %q", stdout))
}
return nil
}))
}
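`RunAndWaitForMatch` above re-runs the CLI command until its stdout matches a regex or the retry budget is exhausted. The same retry-until-match loop, sketched in Python (a sketch of the pattern, not the Go API; `run`, `interval`, and the function name are stand-ins):

```python
import re
import time


def run_and_wait_for_match(run, pattern, timeout, interval=0.01):
    """Retry `run` (a callable returning stdout as a string) until the output
    matches `pattern` or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    regex = re.compile(pattern)
    while True:
        out = run()
        if regex.search(out):  # unanchored, like Go's regexp.MatchString
            return out
        if time.monotonic() >= deadline:
            raise TimeoutError(f"stdout never matched: {out!r}")
        time.sleep(interval)
```

As in the Go version, a non-matching output is an *expected* failure that triggers another attempt, while only the deadline turns it into a hard error.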
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright (C) 2014-2018 Paul Davis <paul@linuxaudiosystems.com>
* Copyright (C) 2014-2018 Robin Gareus <robin@gareus.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef __libbackend_dummy_audiobackend_h__
#define __libbackend_dummy_audiobackend_h__
#include <string>
#include <vector>
#include <map>
#include <set>
#include <stdint.h>
#include <pthread.h>
#include <ltc.h>
#include <boost/shared_ptr.hpp>
#include "pbd/natsort.h"
#include "pbd/ringbuffer.h"
#include "ardour/types.h"
#include "ardour/audio_backend.h"
#include "ardour/dsp_load_calculator.h"
#include "ardour/port_engine_shared.h"
namespace ARDOUR {
class DummyAudioBackend;
namespace DummyMidiData {
typedef struct _MIDISequence {
float beat_time;
uint8_t size;
uint8_t event[3];
} MIDISequence;
};
class DummyMidiEvent {
public:
DummyMidiEvent (const pframes_t timestamp, const uint8_t* data, size_t size);
DummyMidiEvent (const DummyMidiEvent& other);
~DummyMidiEvent ();
size_t size () const { return _size; };
pframes_t timestamp () const { return _timestamp; };
const unsigned char* const_data () const { return _data; };
unsigned char* data () { return _data; };
bool operator< (const DummyMidiEvent &other) const { return timestamp () < other.timestamp (); };
private:
size_t _size;
pframes_t _timestamp;
uint8_t *_data;
};
typedef std::vector<boost::shared_ptr<DummyMidiEvent> > DummyMidiBuffer;
class DummyPort : public BackendPort {
protected:
DummyPort (DummyAudioBackend &b, const std::string&, PortFlags);
public:
virtual ~DummyPort ();
void next_period () { _gen_cycle = false; }
protected:
/* random number generator */
void setup_random_number_generator ();
inline float randf ();
inline uint32_t randi ();
uint32_t _rseed;
/* engine time */
pframes_t pulse_position () const;
// signal generator
volatile bool _gen_cycle;
Glib::Threads::Mutex generator_lock;
private:
AudioBackend& _engine;
}; // class DummyPort
class DummyAudioPort : public DummyPort {
public:
DummyAudioPort (DummyAudioBackend &b, const std::string&, PortFlags);
~DummyAudioPort ();
DataType type () const { return DataType::AUDIO; };
Sample* buffer () { return _buffer; }
const Sample* const_buffer () const { return _buffer; }
void* get_buffer (pframes_t nframes);
enum GeneratorType {
Silence,
DC05,
Demolition,
UniformWhiteNoise,
GaussianWhiteNoise,
PinkNoise,
PonyNoise,
SineWave,
SineWaveOctaves,
SquareWave,
KronekerDelta,
SineSweep,
SineSweepSwell,
SquareSweep,
SquareSweepSwell,
OneHz,
LTC,
Loopback,
};
std::string setup_generator (GeneratorType const, float const, int, int);
void fill_wavetable (const float* d, size_t n_samples) { assert(_wavetable != 0); memcpy(_wavetable, d, n_samples * sizeof(float)); }
void midi_to_wavetable (DummyMidiBuffer const * const src, size_t n_samples);
private:
Sample _buffer[8192];
// signal generator ('fake' physical inputs)
void generate (const pframes_t n_samples);
GeneratorType _gen_type;
// generator buffers
// pink-noise filters
float _b0, _b1, _b2, _b3, _b4, _b5, _b6;
// generated sinf() samples
Sample * _wavetable;
uint32_t _gen_period;
uint32_t _gen_offset;
uint32_t _gen_perio2;
uint32_t _gen_count2;
// gaussian noise generator
float grandf ();
bool _pass;
float _rn1;
// LTC generator
LTCEncoder* _ltc;
PBD::RingBuffer<Sample>* _ltcbuf;
float _ltc_spd;
float _ltc_rand;
}; // class DummyAudioPort
class DummyMidiPort : public DummyPort {
public:
DummyMidiPort (DummyAudioBackend &b, const std::string&, PortFlags);
~DummyMidiPort ();
DataType type () const { return DataType::MIDI; };
void* get_buffer (pframes_t nframes);
const DummyMidiBuffer * const_buffer () const { return &_buffer; }
std::string setup_generator (int, float const);
void set_loopback (DummyMidiBuffer const * const src);
private:
DummyMidiBuffer _buffer;
DummyMidiBuffer _loopback;
// midi event generator ('fake' physical inputs)
void midi_generate (const pframes_t n_samples);
float _midi_seq_spb; // samples per beat
int64_t _midi_seq_time;
uint32_t _midi_seq_pos;
DummyMidiData::MIDISequence const * _midi_seq_dat;
}; // class DummyMidiPort
class DummyAudioBackend : public AudioBackend, public PortEngineSharedImpl {
public:
DummyAudioBackend (AudioEngine& e, AudioBackendInfo& info);
~DummyAudioBackend ();
bool is_running () const { return _running; }
/* AUDIOBACKEND API */
std::string name () const;
bool is_realtime () const;
bool requires_driver_selection() const { return true; }
std::string driver_name () const;
std::vector<std::string> enumerate_drivers () const;
int set_driver (const std::string&);
std::vector<DeviceStatus> enumerate_devices () const;
std::vector<float> available_sample_rates (const std::string& device) const;
std::vector<uint32_t> available_buffer_sizes (const std::string& device) const;
uint32_t available_input_channel_count (const std::string& device) const;
uint32_t available_output_channel_count (const std::string& device) const;
bool can_change_sample_rate_when_running () const;
bool can_change_buffer_size_when_running () const;
bool can_measure_systemic_latency () const { return true; }
int set_device_name (const std::string&);
int set_sample_rate (float);
int set_buffer_size (uint32_t);
int set_interleaved (bool yn);
int set_input_channels (uint32_t);
int set_output_channels (uint32_t);
int set_systemic_input_latency (uint32_t);
int set_systemic_output_latency (uint32_t);
int set_systemic_midi_input_latency (std::string const, uint32_t) { return 0; }
int set_systemic_midi_output_latency (std::string const, uint32_t) { return 0; }
int reset_device () { return 0; };
/* Retrieving parameters */
std::string device_name () const;
float sample_rate () const;
uint32_t buffer_size () const;
bool interleaved () const;
uint32_t input_channels () const;
uint32_t output_channels () const;
uint32_t systemic_input_latency () const;
uint32_t systemic_output_latency () const;
uint32_t systemic_midi_input_latency (std::string const) const { return 0; }
uint32_t systemic_midi_output_latency (std::string const) const { return 0; }
/* External control app */
std::string control_app_name () const { return std::string (); }
void launch_control_app () {}
/* MIDI */
std::vector<std::string> enumerate_midi_options () const;
int set_midi_option (const std::string&);
std::string midi_option () const;
std::vector<DeviceStatus> enumerate_midi_devices () const {
return std::vector<AudioBackend::DeviceStatus> ();
}
int set_midi_device_enabled (std::string const, bool) {
return 0;
}
bool midi_device_enabled (std::string const) const {
return true;
}
bool can_set_systemic_midi_latencies () const {
return false;
}
/* State Control */
protected:
int _start (bool for_latency_measurement);
public:
int stop ();
int freewheel (bool);
float dsp_load () const;
size_t raw_buffer_size (DataType t);
/* Process time */
samplepos_t sample_time ();
samplepos_t sample_time_at_cycle_start ();
pframes_t samples_since_cycle_start ();
int create_process_thread (boost::function<void()> func);
int join_process_threads ();
bool in_process_thread ();
uint32_t process_thread_count ();
void update_latencies ();
/* PORTENGINE API */
void* private_handle () const;
const std::string& my_name () const;
/* PortEngine API - forwarded to PortEngineSharedImpl */
bool port_is_physical (PortEngine::PortHandle ph) const { return PortEngineSharedImpl::port_is_physical (ph); }
void get_physical_outputs (DataType type, std::vector<std::string>& results) { PortEngineSharedImpl::get_physical_outputs (type, results); }
void get_physical_inputs (DataType type, std::vector<std::string>& results) { PortEngineSharedImpl::get_physical_inputs (type, results); }
ChanCount n_physical_outputs () const { return PortEngineSharedImpl::n_physical_outputs (); }
ChanCount n_physical_inputs () const { return PortEngineSharedImpl::n_physical_inputs (); }
uint32_t port_name_size () const { return PortEngineSharedImpl::port_name_size(); }
int set_port_name (PortEngine::PortHandle ph, const std::string& name) { return PortEngineSharedImpl::set_port_name (ph, name); }
std::string get_port_name (PortEngine::PortHandle ph) const { return PortEngineSharedImpl::get_port_name (ph); }
PortFlags get_port_flags (PortEngine::PortHandle ph) const { return PortEngineSharedImpl::get_port_flags (ph); }
PortEngine::PortPtr get_port_by_name (std::string const & name) const { return PortEngineSharedImpl::get_port_by_name (name); }
int get_port_property (PortEngine::PortHandle ph, const std::string& key, std::string& value, std::string& type) const { return PortEngineSharedImpl::get_port_property (ph, key, value, type); }
int set_port_property (PortEngine::PortHandle ph, const std::string& key, const std::string& value, const std::string& type) { return PortEngineSharedImpl::set_port_property (ph, key, value, type); }
int get_ports (const std::string& port_name_pattern, DataType type, PortFlags flags, std::vector<std::string>& results) const { return PortEngineSharedImpl::get_ports (port_name_pattern, type, flags, results); }
DataType port_data_type (PortEngine::PortHandle ph) const { return PortEngineSharedImpl::port_data_type (ph); }
PortEngine::PortPtr register_port (const std::string& shortname, ARDOUR::DataType type, ARDOUR::PortFlags flags) { return PortEngineSharedImpl::register_port (shortname, type, flags); }
void unregister_port (PortHandle ph) { if (!_running) return; PortEngineSharedImpl::unregister_port (ph); }
int connect (const std::string& src, const std::string& dst) { return PortEngineSharedImpl::connect (src, dst); }
int disconnect (const std::string& src, const std::string& dst) { return PortEngineSharedImpl::disconnect (src, dst); }
int connect (PortEngine::PortHandle ph, const std::string& other) { return PortEngineSharedImpl::connect (ph, other); }
int disconnect (PortEngine::PortHandle ph, const std::string& other) { return PortEngineSharedImpl::disconnect (ph, other); }
int disconnect_all (PortEngine::PortHandle ph) { return PortEngineSharedImpl::disconnect_all (ph); }
bool connected (PortEngine::PortHandle ph, bool process_callback_safe) { return PortEngineSharedImpl::connected (ph, process_callback_safe); }
bool connected_to (PortEngine::PortHandle ph, const std::string& other, bool process_callback_safe) { return PortEngineSharedImpl::connected_to (ph, other, process_callback_safe); }
bool physically_connected (PortEngine::PortHandle ph, bool process_callback_safe) { return PortEngineSharedImpl::physically_connected (ph, process_callback_safe); }
int get_connections (PortEngine::PortHandle ph, std::vector<std::string>& results, bool process_callback_safe) { return PortEngineSharedImpl::get_connections (ph, results, process_callback_safe); }
/* MIDI */
int midi_event_get (pframes_t& timestamp, size_t& size, uint8_t const** buf, void* port_buffer, uint32_t event_index);
int midi_event_put (void* port_buffer, pframes_t timestamp, const uint8_t* buffer, size_t size);
uint32_t get_midi_event_count (void* port_buffer);
void midi_clear (void* port_buffer);
/* Monitoring */
bool can_monitor_input () const;
int request_input_monitoring (PortHandle, bool);
int ensure_input_monitoring (PortHandle, bool);
bool monitoring_input (PortHandle);
/* Latency management */
void set_latency_range (PortHandle, bool for_playback, LatencyRange);
LatencyRange get_latency_range (PortHandle, bool for_playback);
/* Getting access to the data buffer for a port */
void* get_buffer (PortHandle, pframes_t);
void* main_process_thread ();
static size_t max_buffer_size() {return _max_buffer_size;}
private:
enum MidiPortMode {
MidiNoEvents,
MidiGenerator,
MidiOneHz,
MidiLoopback,
MidiToAudio,
};
struct DriverSpeed {
std::string name;
float speedup;
DriverSpeed (const std::string& n, float s) : name (n), speedup (s) {}
};
std::string _instance_name;
static std::vector<std::string> _midi_options;
static std::vector<AudioBackend::DeviceStatus> _device_status;
static std::vector<DummyAudioBackend::DriverSpeed> _driver_speed;
bool _running;
bool _freewheel;
bool _freewheeling;
float _speedup;
std::string _device;
float _samplerate;
size_t _samples_per_period;
float _dsp_load;
DSPLoadCalculator _dsp_load_calc;
static size_t _max_buffer_size;
uint32_t _n_inputs;
uint32_t _n_outputs;
uint32_t _n_midi_inputs;
uint32_t _n_midi_outputs;
MidiPortMode _midi_mode;
uint32_t _systemic_input_latency;
uint32_t _systemic_output_latency;
samplecnt_t _processed_samples;
pthread_t _main_thread;
/* process threads */
static void* dummy_process_thread (void *);
std::vector<pthread_t> _threads;
struct ThreadData {
DummyAudioBackend* engine;
boost::function<void ()> f;
size_t stacksize;
ThreadData (DummyAudioBackend* e, boost::function<void ()> fp, size_t stacksz)
: engine (e) , f (fp) , stacksize (stacksz) {}
};
/* port engine */
int register_system_ports ();
BackendPort* port_factory (std::string const & name, ARDOUR::DataType type, ARDOUR::PortFlags);
}; // class DummyAudioBackend
} // namespace
#endif /* __libbackend_dummy_audiobackend_h__ */
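The `grandf()` / `_pass` / `_rn1` members in `DummyAudioPort` suggest a polar Box-Muller Gaussian generator that caches the second variate between calls. That is an assumption about the implementation, not Ardour's actual code; a self-contained sketch of the standard technique:

```python
import math
import random


class Gaussian:
    """Polar Box-Muller with one cached variate, mirroring the _pass/_rn1
    pattern (a sketch under assumptions, not Ardour's exact generator)."""

    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._pass = False
        self._rn1 = 0.0

    def grandf(self):
        if self._pass:
            self._pass = False
            return self._rn1  # return the cached second variate
        while True:
            u = 2.0 * self._rng.random() - 1.0
            v = 2.0 * self._rng.random() - 1.0
            s = u * u + v * v
            if 0.0 < s < 1.0:
                break
        f = math.sqrt(-2.0 * math.log(s) / s)
        self._rn1 = v * f  # cache the second variate for the next call
        self._pass = True
        return u * f
```

Each rejection-sampling pass yields two independent standard-normal samples, so caching one halves the number of passes, which matters in a per-sample audio generator.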
|
{
"pile_set_name": "Github"
}
|
// Copyright (c) Facebook, Inc. and its affiliates.
// All rights reserved.
//
// Copyright 2019 Google LLC
//
// This source code is licensed under the BSD-style license found in the
// LICENSE file in the root directory of this source tree.
#include <assert.h>
#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <fp16.h>
#include <xnnpack.h>
#include <xnnpack/allocator.h>
#include <xnnpack/log.h>
#include <xnnpack/operator.h>
#include <xnnpack/params-init.h>
#include <xnnpack/params.h>
static enum xnn_status create_global_average_pooling_nwc(
size_t channels,
size_t input_stride,
size_t output_stride,
uint32_t flags,
uint32_t log2_element_size,
size_t params_offset,
const void* params,
size_t params_size,
uint32_t datatype_init_flags,
enum xnn_operator_type operator_type,
xnn_operator_t* global_average_pooling_op_out)
{
xnn_operator_t global_average_pooling_op = NULL;
enum xnn_status status = xnn_status_uninitialized;
if ((xnn_params.init_flags & XNN_INIT_FLAG_XNNPACK) == 0) {
xnn_log_error("failed to create %s operator: XNNPACK is not initialized",
xnn_operator_type_to_string(operator_type));
goto error;
}
status = xnn_status_unsupported_hardware;
if ((xnn_params.init_flags & datatype_init_flags) == 0) {
xnn_log_error("failed to create %s operator: operations on data type are not supported",
xnn_operator_type_to_string(operator_type));
goto error;
}
status = xnn_status_invalid_parameter;
if (channels == 0) {
xnn_log_error(
"failed to create %s operator with %zu channels: number of channels must be non-zero",
xnn_operator_type_to_string(operator_type), channels);
goto error;
}
if (input_stride < channels) {
xnn_log_error(
"failed to create %s operator with input element stride of %zu: "
"stride must be at least as large as the number of channels (%zu)",
xnn_operator_type_to_string(operator_type), input_stride, channels);
goto error;
}
if (output_stride < channels) {
xnn_log_error(
"failed to create %s operator with output element stride of %zu: "
"stride must be at least as large as the number of channels (%zu)",
xnn_operator_type_to_string(operator_type), output_stride, channels);
goto error;
}
status = xnn_status_out_of_memory;
global_average_pooling_op = xnn_allocate_zero_simd_memory(sizeof(struct xnn_operator));
if (global_average_pooling_op == NULL) {
xnn_log_error(
"failed to allocate %zu bytes for %s operator descriptor",
sizeof(struct xnn_operator), xnn_operator_type_to_string(operator_type));
goto error;
}
const size_t zero_size = (channels << log2_element_size) + XNN_EXTRA_BYTES;
void* zero_buffer = xnn_allocate_zero_simd_memory(zero_size);
if (zero_buffer == NULL) {
xnn_log_error(
"failed to allocate %zu bytes for %s operator zero padding",
zero_size, xnn_operator_type_to_string(operator_type));
goto error;
}
global_average_pooling_op->zero_buffer = zero_buffer;
global_average_pooling_op->channels = channels;
global_average_pooling_op->input_pixel_stride = input_stride;
global_average_pooling_op->output_pixel_stride = output_stride;
memcpy((void*) ((uintptr_t) global_average_pooling_op + params_offset), params, params_size);
global_average_pooling_op->type = operator_type;
global_average_pooling_op->ukernel.type = xnn_ukernel_type_global_average_pooling;
global_average_pooling_op->state = xnn_run_state_invalid;
*global_average_pooling_op_out = global_average_pooling_op;
return xnn_status_success;
error:
xnn_delete_operator(global_average_pooling_op);
return status;
}
static enum xnn_status setup_global_average_pooling_nwc(
xnn_operator_t global_average_pooling_op,
size_t batch_size,
size_t width,
const void* input,
void* output,
size_t log2_element_size,
const struct gavgpool_parameters gavgpool[restrict XNN_MIN_ELEMENTS(1)],
uint32_t datatype_init_flags,
enum xnn_operator_type expected_operator_type,
const void* params,
size_t params_size,
void (*update_params)(xnn_operator_t, size_t),
pthreadpool_t threadpool)
{
if (global_average_pooling_op->type != expected_operator_type) {
xnn_log_error("failed to setup operator: operator type mismatch (expected %s, got %s)",
xnn_operator_type_to_string(expected_operator_type),
xnn_operator_type_to_string(global_average_pooling_op->type));
return xnn_status_invalid_parameter;
}
global_average_pooling_op->state = xnn_run_state_invalid;
if ((xnn_params.init_flags & XNN_INIT_FLAG_XNNPACK) == 0) {
xnn_log_error("failed to setup %s operator: XNNPACK is not initialized",
xnn_operator_type_to_string(global_average_pooling_op->type));
return xnn_status_uninitialized;
}
if ((xnn_params.init_flags & datatype_init_flags) == 0) {
xnn_log_error("failed to setup %s operator: operations on data type are not supported",
xnn_operator_type_to_string(global_average_pooling_op->type));
return xnn_status_unsupported_hardware;
}
if (width == 0) {
xnn_log_error("failed to setup %s operator with width %zu: width must be non-zero",
xnn_operator_type_to_string(global_average_pooling_op->type), width);
return xnn_status_invalid_parameter;
}
if (batch_size == 0) {
global_average_pooling_op->state = xnn_run_state_skip;
return xnn_status_success;
}
global_average_pooling_op->batch_size = batch_size;
global_average_pooling_op->input_width = width;
global_average_pooling_op->input = input;
global_average_pooling_op->output = output;
update_params(global_average_pooling_op, width);
assert(gavgpool->mr != 0);
const size_t input_stride_in_bytes = global_average_pooling_op->input_pixel_stride << log2_element_size;
const size_t channels = global_average_pooling_op->channels;
global_average_pooling_op->context.global_average_pooling_nwc = (struct global_average_pooling_nwc_context) {
.input = input,
.zero = global_average_pooling_op->zero_buffer,
.input_pixel_stride = input_stride_in_bytes,
.input_batch_stride = input_stride_in_bytes * width,
.input_elements = width,
.channels = channels,
.output = output,
.output_batch_stride = (global_average_pooling_op->output_pixel_stride << log2_element_size),
};
memcpy(&global_average_pooling_op->context.global_average_pooling_nwc.params, params, params_size);
global_average_pooling_op->compute.type = xnn_parallelization_type_1d;
global_average_pooling_op->compute.range[0] = batch_size;
if (width <= gavgpool->mr) {
global_average_pooling_op->compute.task_1d = (pthreadpool_task_1d_t) xnn_compute_global_average_pooling_nwc_unipass;
global_average_pooling_op->context.global_average_pooling_nwc.unipass_ukernel = gavgpool->up;
} else {
global_average_pooling_op->compute.task_1d = (pthreadpool_task_1d_t) xnn_compute_global_average_pooling_nwc_multipass;
global_average_pooling_op->context.global_average_pooling_nwc.multipass_ukernel = gavgpool->mp;
}
global_average_pooling_op->state = xnn_run_state_ready;
return xnn_status_success;
}
enum xnn_status xnn_create_global_average_pooling_nwc_qu8(
size_t channels,
size_t input_stride,
size_t output_stride,
uint8_t input_zero_point,
float input_scale,
uint8_t output_zero_point,
float output_scale,
uint8_t output_min,
uint8_t output_max,
uint32_t flags,
xnn_operator_t* global_average_pooling_op_out)
{
if (input_scale <= 0.0f || !isnormal(input_scale)) {
xnn_log_error(
"failed to create %s operator with %.7g input scale: scale must be finite, normalized, and positive",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qu8), input_scale);
return xnn_status_invalid_parameter;
}
if (output_scale <= 0.0f || !isnormal(output_scale)) {
xnn_log_error(
"failed to create %s operator with %.7g output scale: scale must be finite, normalized, and positive",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qu8), output_scale);
return xnn_status_invalid_parameter;
}
if (output_min >= output_max) {
xnn_log_error(
"failed to create %s operator with [%" PRIu8 ", %" PRIu8 "] output range: range min must be below range max",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qu8), output_min, output_max);
return xnn_status_invalid_parameter;
}
const float input_output_scale = input_scale / output_scale;
if (input_output_scale < 0x1.0p-8f || input_output_scale >= 0x1.0p+8f) {
xnn_log_error(
"failed to create %s operator with %.7g input-to-output scale ratio: scale ratio must be in [2**-8, 2**8) range",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qu8), input_output_scale);
return xnn_status_unsupported_parameter;
}
const union xnn_qu8_avgpool_params params =
xnn_init_qu8_avgpool_params(
0 /* bias */, 1.0f /* scale */,
output_zero_point, output_min, output_max);
const enum xnn_status status = create_global_average_pooling_nwc(
channels, input_stride, output_stride, flags,
0 /* log2(sizeof(uint8_t)) */,
offsetof(struct xnn_operator, params.qu8_gavgpool),
&params, sizeof(params),
XNN_INIT_FLAG_QU8,
xnn_operator_type_global_average_pooling_nwc_qu8,
global_average_pooling_op_out);
if (status == xnn_status_success) {
xnn_operator_t global_average_pooling_op = *global_average_pooling_op_out;
global_average_pooling_op->input_zero_point = (int32_t) (uint32_t) input_zero_point;
global_average_pooling_op->input_scale = input_scale;
global_average_pooling_op->output_scale = output_scale;
}
return status;
}
enum xnn_status xnn_create_global_average_pooling_nwc_qs8(
size_t channels,
size_t input_stride,
size_t output_stride,
int8_t input_zero_point,
float input_scale,
int8_t output_zero_point,
float output_scale,
int8_t output_min,
int8_t output_max,
uint32_t flags,
xnn_operator_t* global_average_pooling_op_out)
{
if (input_scale <= 0.0f || !isnormal(input_scale)) {
xnn_log_error(
"failed to create %s operator with %.7g input scale: scale must be finite, normalized, and positive",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qs8), input_scale);
return xnn_status_invalid_parameter;
}
if (output_scale <= 0.0f || !isnormal(output_scale)) {
xnn_log_error(
"failed to create %s operator with %.7g output scale: scale must be finite, normalized, and positive",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qs8), output_scale);
return xnn_status_invalid_parameter;
}
if (output_min >= output_max) {
xnn_log_error(
"failed to create %s operator with [%" PRId8 ", %" PRId8 "] output range: range min must be below range max",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qs8), output_min, output_max);
return xnn_status_invalid_parameter;
}
const float input_output_scale = input_scale / output_scale;
if (input_output_scale < 0x1.0p-8f || input_output_scale >= 0x1.0p+8f) {
xnn_log_error(
"failed to create %s operator with %.7g input-to-output scale ratio: scale ratio must be in [2**-8, 2**8) range",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_qs8), input_output_scale);
return xnn_status_unsupported_parameter;
}
const union xnn_qs8_avgpool_params params =
xnn_init_qs8_avgpool_params(
0 /* bias */, 1.0f /* scale */,
output_zero_point, output_min, output_max);
const enum xnn_status status = create_global_average_pooling_nwc(
channels, input_stride, output_stride, flags,
0 /* log2(sizeof(int8_t)) */,
offsetof(struct xnn_operator, params.qs8_gavgpool),
&params, sizeof(params),
XNN_INIT_FLAG_QS8,
xnn_operator_type_global_average_pooling_nwc_qs8,
global_average_pooling_op_out);
if (status == xnn_status_success) {
xnn_operator_t global_average_pooling_op = *global_average_pooling_op_out;
global_average_pooling_op->input_zero_point = (int32_t) input_zero_point;
global_average_pooling_op->input_scale = input_scale;
global_average_pooling_op->output_scale = output_scale;
}
return status;
}
enum xnn_status xnn_create_global_average_pooling_nwc_f16(
size_t channels,
size_t input_stride,
size_t output_stride,
float output_min,
float output_max,
uint32_t flags,
xnn_operator_t* global_average_pooling_op_out)
{
if (isnan(output_min)) {
xnn_log_error(
"failed to create %s operator with NaN output lower bound: lower bound must be non-NaN",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_f16));
return xnn_status_invalid_parameter;
}
if (isnan(output_max)) {
xnn_log_error(
"failed to create %s operator with NaN output upper bound: upper bound must be non-NaN",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_f16));
return xnn_status_invalid_parameter;
}
if (fp16_ieee_to_fp32_value(fp16_ieee_from_fp32_value(output_min)) >= fp16_ieee_to_fp32_value(fp16_ieee_from_fp32_value(output_max))) {
xnn_log_error(
"failed to create %s operator with [%.7g, %.7g] output range: lower bound must be below upper bound",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_f16),
fp16_ieee_to_fp32_value(fp16_ieee_from_fp32_value(output_min)),
fp16_ieee_to_fp32_value(fp16_ieee_from_fp32_value(output_max)));
return xnn_status_invalid_parameter;
}
const struct xnn_f16_scaleminmax_params params =
xnn_init_f16_scaleminmax_params(
UINT16_C(0x7E00) /* NaN */,
fp16_ieee_from_fp32_value(output_min),
fp16_ieee_from_fp32_value(output_max));
return create_global_average_pooling_nwc(
channels, input_stride, output_stride, flags,
1 /* log2(sizeof(uint16_t)) */,
offsetof(struct xnn_operator, params.f16_scaleminmax),
&params, sizeof(params),
XNN_INIT_FLAG_F16,
xnn_operator_type_global_average_pooling_nwc_f16,
global_average_pooling_op_out);
}
enum xnn_status xnn_create_global_average_pooling_nwc_f32(
size_t channels,
size_t input_stride,
size_t output_stride,
float output_min,
float output_max,
uint32_t flags,
xnn_operator_t* global_average_pooling_op_out)
{
if (isnan(output_min)) {
xnn_log_error(
"failed to create %s operator with NaN output lower bound: lower bound must be non-NaN",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_f32));
return xnn_status_invalid_parameter;
}
if (isnan(output_max)) {
xnn_log_error(
"failed to create %s operator with NaN output upper bound: upper bound must be non-NaN",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_f32));
return xnn_status_invalid_parameter;
}
if (output_min >= output_max) {
xnn_log_error(
"failed to create %s operator with [%.7g, %.7g] output range: lower bound must be below upper bound",
xnn_operator_type_to_string(xnn_operator_type_global_average_pooling_nwc_f32), output_min, output_max);
return xnn_status_invalid_parameter;
}
const union xnn_f32_scaleminmax_params params =
xnn_init_f32_scaleminmax_params(
0.0f /* scale */, output_min, output_max);
return create_global_average_pooling_nwc(
channels, input_stride, output_stride, flags,
2 /* log2(sizeof(float)) */,
offsetof(struct xnn_operator, params.f32_scaleminmax),
&params, sizeof(params),
XNN_INIT_FLAG_F32,
xnn_operator_type_global_average_pooling_nwc_f32,
global_average_pooling_op_out);
}
static void update_params_qu8(
xnn_operator_t global_average_pooling_op,
size_t width)
{
const int32_t bias = -((int32_t) width * global_average_pooling_op->input_zero_point);
const float scale = global_average_pooling_op->input_scale / (global_average_pooling_op->output_scale * (float) width);
xnn_update_qu8_avgpool_params(&global_average_pooling_op->params.qu8_gavgpool, bias, scale);
}
enum xnn_status xnn_setup_global_average_pooling_nwc_qu8(
xnn_operator_t global_average_pooling_op,
size_t batch_size,
size_t width,
const uint8_t* input,
uint8_t* output,
pthreadpool_t threadpool)
{
return setup_global_average_pooling_nwc(
global_average_pooling_op,
batch_size, width,
input, output,
0 /* log2(sizeof(uint8_t)) */,
&xnn_params.qu8.gavgpool,
XNN_INIT_FLAG_QU8,
xnn_operator_type_global_average_pooling_nwc_qu8,
&global_average_pooling_op->params.qu8_gavgpool,
sizeof(global_average_pooling_op->params.qu8_gavgpool),
update_params_qu8,
threadpool);
}
static void update_params_qs8(
xnn_operator_t global_average_pooling_op,
size_t width)
{
const int32_t bias = -((int32_t) width * global_average_pooling_op->input_zero_point);
const float scale = global_average_pooling_op->input_scale / (global_average_pooling_op->output_scale * (float) width);
xnn_update_qs8_avgpool_params(&global_average_pooling_op->params.qs8_gavgpool, bias, scale);
}
enum xnn_status xnn_setup_global_average_pooling_nwc_qs8(
xnn_operator_t global_average_pooling_op,
size_t batch_size,
size_t width,
const int8_t* input,
int8_t* output,
pthreadpool_t threadpool)
{
return setup_global_average_pooling_nwc(
global_average_pooling_op,
batch_size, width,
input, output,
0 /* log2(sizeof(int8_t)) */,
&xnn_params.qs8.gavgpool,
XNN_INIT_FLAG_QS8,
xnn_operator_type_global_average_pooling_nwc_qs8,
&global_average_pooling_op->params.qs8_gavgpool,
sizeof(global_average_pooling_op->params.qs8_gavgpool),
update_params_qs8,
threadpool);
}
static void update_params_f16(
xnn_operator_t global_average_pooling_op,
size_t width)
{
xnn_update_f16_scaleminmax_params(
&global_average_pooling_op->params.f16_scaleminmax,
fp16_ieee_from_fp32_value(1.0f / (float) width));
}
enum xnn_status xnn_setup_global_average_pooling_nwc_f16(
xnn_operator_t global_average_pooling_op,
size_t batch_size,
size_t width,
const void* input,
void* output,
pthreadpool_t threadpool)
{
return setup_global_average_pooling_nwc(
global_average_pooling_op,
batch_size, width,
input, output,
1 /* log2(sizeof(uint16_t)) */,
&xnn_params.f16.gavgpool,
XNN_INIT_FLAG_F16,
xnn_operator_type_global_average_pooling_nwc_f16,
&global_average_pooling_op->params.f16_scaleminmax,
sizeof(global_average_pooling_op->params.f16_scaleminmax),
update_params_f16,
threadpool);
}
static void update_params_f32(
xnn_operator_t global_average_pooling_op,
size_t width)
{
xnn_update_f32_scaleminmax_params(&global_average_pooling_op->params.f32_scaleminmax, 1.0f / (float) width);
}
enum xnn_status xnn_setup_global_average_pooling_nwc_f32(
xnn_operator_t global_average_pooling_op,
size_t batch_size,
size_t width,
const float* input,
float* output,
pthreadpool_t threadpool)
{
return setup_global_average_pooling_nwc(
global_average_pooling_op,
batch_size, width,
input, output,
2 /* log2(sizeof(float)) */,
&xnn_params.f32.gavgpool,
XNN_INIT_FLAG_F32,
xnn_operator_type_global_average_pooling_nwc_f32,
&global_average_pooling_op->params.f32_scaleminmax,
sizeof(global_average_pooling_op->params.f32_scaleminmax),
update_params_f32,
threadpool);
}
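/* The setup functions above dispatch to XNNPACK's optimized unipass/multipass
 * ukernels, but the operator's semantics are simple: for each batch element and
 * each channel of an NWC tensor, average the values over the width dimension.
 * The following is a minimal self-contained reference sketch of those semantics
 * in plain C (not XNNPACK's actual ukernel code), ignoring strides, padding,
 * and output clamping for clarity. */

```c
#include <assert.h>
#include <stddef.h>

/* Reference NWC global average pooling: input is laid out as
 * input[batch][width][channel]; output as output[batch][channel]. */
static void global_average_pooling_nwc_f32_ref(
    size_t batch_size, size_t width, size_t channels,
    const float* input, float* output)
{
  for (size_t b = 0; b < batch_size; b++) {
    for (size_t c = 0; c < channels; c++) {
      float sum = 0.0f;
      for (size_t w = 0; w < width; w++) {
        sum += input[(b * width + w) * channels + c];
      }
      output[b * channels + c] = sum / (float) width;
    }
  }
}
```

For example, with 1 batch, width 4 and 2 interleaved channels, the channel-0 values {1, 2, 3, 4} average to 2.5 and the channel-1 values {10, 20, 30, 40} average to 25.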
|
{
"pile_set_name": "Github"
}
|
@inject HttpClient HttpClient
<div class="dialog-container">
<div class="dialog">
<div class="dialog-title">
<h2>@Pizza.Special.Name</h2>
@Pizza.Special.Description
</div>
<form class="dialog-body">
<div>
<label>Size:</label>
<input type="range" min="@Pizza.MinimumSize" max="@Pizza.MaximumSize" step="1" @bind="Pizza.Size" @bind:event="oninput" />
<span class="size-label">
@(Pizza.Size)" (£@(Pizza.GetFormattedTotalPrice()))
</span>
</div>
<div>
<label>Extra Toppings:</label>
@if (toppings == null)
{
<select class="custom-select" disabled>
<option>(loading...)</option>
</select>
}
else if (Pizza.Toppings.Count >= 6)
{
<div>(maximum reached)</div>
}
else
{
<select class="custom-select" @onchange="ToppingSelected">
<option value="-1" disabled selected>(select)</option>
@for (var i = 0; i < toppings.Count; i++)
{
<option value="@i">@toppings[i].Name - (£@(toppings[i].GetFormattedPrice()))</option>
}
</select>
}
</div>
<div class="toppings">
@foreach (var topping in Pizza.Toppings)
{
<div class="topping">
@topping.Topping.Name
<span class="topping-price">@topping.Topping.GetFormattedPrice()</span>
<button type="button" class="delete-topping" @onclick="@(() => RemoveTopping(topping.Topping))">x</button>
</div>
}
</div>
</form>
<div class="dialog-buttons">
<button class="btn btn-secondary mr-auto" @onclick="OnCancel">Cancel</button>
<span class="mr-center">
Price: <span class="price">@(Pizza.GetFormattedTotalPrice())</span>
</span>
<button class="btn btn-success ml-auto" @onclick="OnConfirm">Order ></button>
</div>
</div>
</div>
@code {
List<Topping> toppings;
[Parameter] public Pizza Pizza { get; set; }
[Parameter] public EventCallback OnCancel { get; set; }
[Parameter] public EventCallback OnConfirm { get; set; }
protected async override Task OnInitializedAsync()
{
toppings = await HttpClient.GetFromJsonAsync<List<Topping>>("toppings");
}
void ToppingSelected(ChangeEventArgs e)
{
if (int.TryParse((string)e.Value, out var index) && index >= 0)
{
AddTopping(toppings[index]);
}
}
void AddTopping(Topping topping)
{
if (Pizza.Toppings.Find(pt => pt.Topping == topping) == null)
{
Pizza.Toppings.Add(new PizzaTopping() { Topping = topping });
}
}
void RemoveTopping(Topping topping)
{
Pizza.Toppings.RemoveAll(pt => pt.Topping == topping);
}
}
|
{
"pile_set_name": "Github"
}
|
//
// Generated by class-dump 3.5 (64 bit) (Debug version compiled Sep 17 2017 16:24:48).
//
// class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard.
//
#import "NSObject-Protocol.h"
@class QryCancelECardDescRes, WCPayECardDetailViewController;
@protocol WCPayECardDetailViewControllerDelegate <NSObject>
- (void)ecardDetailVC:(WCPayECardDetailViewController *)arg1 didClickCloseWith:(QryCancelECardDescRes *)arg2;
@end
|
{
"pile_set_name": "Github"
}
|
/**
* External imports
*/
import {
createAndKeyEntitiesByPrimaryKeyValue,
keyEntitiesByPrimaryKeyValue,
singularModelName,
pluralModelName,
stripBaseRouteFromUrl,
getPrimaryKeyQueryString,
getPrimaryKey,
getEndpoint,
modelNameForQueryString,
} from '@eventespresso/model';
import {
isModelEntityFactoryOfModel,
isModelEntity,
} from '@eventespresso/validators';
import { InvalidModelEntity } from '@eventespresso/eejs';
import warning from 'warning';
import { isEmpty, isUndefined, isArray } from 'lodash';
import { Map as ImmutableMap } from 'immutable';
import { sprintf } from '@eventespresso/i18n';
/**
* Internal Imports
*/
import {
fetch,
dispatch,
select,
resolveSelect,
resolveGetEntityByIdForIds,
resolveGetRelatedEntities,
} from '../../base-controls';
import {
receiveEntityRecords,
receiveRelatedEntities,
} from './../actions';
import { keepExistingEntitiesInObject } from '../../base-model';
import { REDUCER_KEY as CORE_REDUCER_KEY } from '../constants';
import { REDUCER_KEY as SCHEMA_REDUCER_KEY } from '../../schema/constants';
import { appendCalculatedFieldsToPath } from './utils';
const DEFAULT_EMPTY_ARRAY = [];
/**
* A resolver for getting relation entities for the given model name and entity
* for that model.
*
* @param {BaseEntity} entity
* @param {string} relationModelName
* @param {Array} calculatedFields
* @return {[]|Array<BaseEntity>} If there are relations, returns an array of
* BaseEntity instances for the relations, otherwise an empty array.
*/
export function* getRelatedEntities(
entity,
relationModelName,
calculatedFields = []
) {
if ( ! isModelEntity( entity ) ) {
throw new InvalidModelEntity( '', entity );
}
// if entity is new then there won't be any relations for it on the server
// yet, so let's just return early.
if ( entity.isNew ) {
return DEFAULT_EMPTY_ARRAY;
}
relationModelName = singularModelName( relationModelName );
const pluralRelationName = pluralModelName( relationModelName );
const modelName = entity.modelName.toLowerCase();
const relationResourceProperty = pluralRelationName + 'Resource';
const relationEndpoint = entity[ relationResourceProperty ] ?
stripBaseRouteFromUrl(
entity[ relationResourceProperty ].resourceLink
) :
'';
if ( relationEndpoint === '' ) {
warning(
false,
sprintf(
'There is no relation resource for the given model (%s) and requested relation (%s)',
modelName,
pluralRelationName
)
);
return DEFAULT_EMPTY_ARRAY;
}
yield dispatch(
SCHEMA_REDUCER_KEY,
'receiveRelationEndpointForModelEntity',
modelName,
entity.id,
relationModelName,
relationEndpoint
);
yield dispatch(
'core/data',
'finishResolution',
SCHEMA_REDUCER_KEY,
'receiveRelationEndpointForModelEntity',
[ modelName, entity.id, relationModelName, relationEndpoint ]
);
// add calculatedFields to endpoint?
const path = appendCalculatedFieldsToPath(
relationEndpoint,
calculatedFields
);
let relationEntities = yield fetch( { path } );
relationEntities = ! isEmpty( relationEntities ) ?
relationEntities :
DEFAULT_EMPTY_ARRAY;
relationEntities = ! isArray( relationEntities ) ?
[ relationEntities ] :
relationEntities;
if ( ! relationEntities.length ) {
return relationEntities;
}
const factory = yield resolveSelect(
SCHEMA_REDUCER_KEY,
'getFactoryForModel',
relationModelName
);
if ( ! isModelEntityFactoryOfModel(
factory,
relationModelName
) ) {
return DEFAULT_EMPTY_ARRAY;
}
let fullEntities = keyEntitiesByPrimaryKeyValue(
relationModelName,
relationEntities
);
fullEntities = createAndKeyEntitiesByPrimaryKeyValue(
factory,
fullEntities,
);
const entityIds = Array.from( fullEntities.keys() );
// are there already entities for the ids in the store? If so...we use
// those.
const existingEntities = yield select(
CORE_REDUCER_KEY,
'getEntitiesByIds',
relationModelName,
entityIds
);
if ( ! isEmpty( existingEntities ) ) {
fullEntities = keepExistingEntitiesInObject(
existingEntities.reduce(
( entitiesObject, entityObj ) => {
entitiesObject[ entityObj.id ] = entityObj;
return entitiesObject;
},
{}
),
fullEntities,
);
}
// if fullEntities is not a map, then we need to make it a map
const entityArray = fullEntities instanceof Map ?
Array.from( fullEntities.values() ) :
fullEntities;
yield receiveEntityRecords(
relationModelName,
entityArray
);
yield receiveRelatedEntities(
modelName,
entity.id,
relationModelName,
entityIds,
);
yield resolveGetRelatedEntities(
entity,
fullEntities,
entityIds,
);
yield resolveGetEntityByIdForIds(
relationModelName,
entityIds
);
return entityArray;
}
/**
* Resolver for the getRelatedEntitiesForIds selector
*
* @param {string} modelName
* @param {Array<number>} entityIds
* @param {string} relationName
* @param {Array} calculatedFields This will retrieve any named calculated
* fields for the related entities.
*
* @return {Array|undefined} If there is no schema for the relation, an
* empty array is returned.
*/
export function* getRelatedEntitiesForIds(
modelName,
entityIds,
relationName,
calculatedFields = []
) {
modelName = singularModelName( modelName );
relationName = singularModelName( relationName );
const hasJoinTable = yield resolveSelect(
SCHEMA_REDUCER_KEY,
'hasJoinTableRelation',
modelName,
relationName,
);
const relationSchema = yield resolveSelect(
SCHEMA_REDUCER_KEY,
'getRelationSchema',
modelName,
relationName,
);
if ( relationSchema === null ) {
return DEFAULT_EMPTY_ARRAY;
}
const relationType = relationSchema.relation_type;
const factory = yield resolveSelect(
SCHEMA_REDUCER_KEY,
'getFactoryForModel',
relationName
);
const response = yield fetch( {
path: getRelationRequestUrl(
modelName,
entityIds,
relationName,
relationSchema,
relationType,
hasJoinTable,
calculatedFields,
),
} );
if ( ! response.length ) {
return DEFAULT_EMPTY_ARRAY;
}
const relationPrimaryKey = getPrimaryKey( relationName );
const modelPrimaryKey = getPrimaryKey( modelName );
const pluralRelationName = pluralModelName( relationName );
let hasSetMap = ImmutableMap();
if ( hasJoinTable ) {
while ( response.length > 0 ) {
const record = response.pop();
let relationRecords = record[ pluralRelationName ] || null;
relationRecords = relationRecords === null &&
! isUndefined( record[ relationName ] ) ?
record[ relationName ] :
relationRecords;
relationRecords = relationRecords !== null &&
! isArray( relationRecords ) ?
[ relationRecords ] :
relationRecords;
if ( relationRecords !== null ) {
while ( relationRecords.length > 0 ) {
const modelId = record[ modelPrimaryKey ];
const relationId = record[ relationPrimaryKey ];
const relationRecord = relationRecords.pop();
if ( relationRecord !== null &&
! hasSetMap.hasIn( [ modelId, relationId ] )
) {
const relationEntity = factory.fromExisting(
relationRecord );
yield dispatch(
CORE_REDUCER_KEY,
'resolveRelationRecordForRelation',
relationEntity,
modelName,
modelId,
);
hasSetMap = hasSetMap.setIn(
[ modelId, relationId ],
true
);
}
}
}
}
} else {
while ( response.length > 0 ) {
const record = response.pop();
const modelId = isBelongsToRelation( relationType ) ?
record[ modelPrimaryKey ] :
record[ modelName ].id;
const relationId = record[ relationPrimaryKey ];
if ( ! hasSetMap.hasIn( [ modelId, relationId ] ) ) {
const relationEntity = factory.fromExisting(
record[ relationName ]
);
yield dispatch(
CORE_REDUCER_KEY,
'resolveRelationRecordForRelation',
relationEntity,
modelName,
modelId,
);
hasSetMap = hasSetMap.setIn(
[ modelId, relationId ],
true
);
}
}
}
}
/**
* Constructs and returns the url for a relation entity request using the given
* arguments
*
* @param {string} modelName
* @param {Array} entityIds
* @param {string} relationName
* @param {Object} relationSchema
* @param {string} relationType
* @param {boolean} hasJoinTable
* @param {Array} calculatedFields
* @return {string} A path to use for a relation request.
*/
const getRelationRequestUrl = (
modelName,
entityIds,
relationName,
relationSchema,
relationType,
hasJoinTable,
calculatedFields,
) => {
let path;
modelName = singularModelName( modelName );
relationName = singularModelName( relationName );
switch ( true ) {
case hasJoinTable:
path = getEndpoint(
singularModelName( relationSchema.joining_model_name )
.toLowerCase()
);
path += '/?where' + getPrimaryKeyQueryString(
modelName,
entityIds
);
path += `&include=${ modelNameForQueryString( relationName ) }.*`;
path = appendCalculatedFieldsToPath(
path,
calculatedFields,
relationName
);
break;
case isBelongsToRelation( relationType ):
path = getEndpoint( modelName );
path += `/?where${ getPrimaryKeyQueryString( modelName, entityIds ) }`;
path += `&include=${ modelNameForQueryString( relationName ) }.*`;
path = appendCalculatedFieldsToPath(
path,
calculatedFields,
relationName
);
break;
default:
// we do the reverse endpoint so that we are getting the belongs to
// relation responses back and including the relation entities we
// want in the response (belongs to). So for instance if the
// incoming arguments are:
// `getRelatedEntitiesForEntityIds(
// 'attendee',
// [ 10, 20],
// 'registration'
// )
// then the query would be:
// /registrations/?where[ATT_ID][IN]=10,20&include=Attendee.*
// basically the goal here is to get one to one relations returned
// in the query for easier parsing/dispatching.
// @todo, currently this will NOT account for paging.
path = getEndpoint( relationName );
path += `/?where${ getPrimaryKeyQueryString( modelName, entityIds ) }`;
path += `&include=${ modelNameForQueryString( modelName ) }.*`;
path = appendCalculatedFieldsToPath(
path,
calculatedFields,
);
break;
}
return path;
};
/**
* Returns whether the given relationType is equal to `EE_Belongs_To_Relation`
*
* @param {string} relationType
* @return {boolean} True means the given relationType is `EE_Belongs_To_Relation`
*/
const isBelongsToRelation = ( relationType ) => {
return relationType === 'EE_Belongs_To_Relation';
};
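The comment in the default branch of `getRelationRequestUrl` gives the intended reverse-endpoint query shape. A self-contained sketch, using hypothetical stand-ins for the `@eventespresso/model` helpers (the real `getEndpoint`, `getPrimaryKeyQueryString`, and `modelNameForQueryString` are driven by the site's schema and more general), reproduces the example from that comment:

```javascript
// Hypothetical stand-ins for the @eventespresso/model helpers used above.
const endpoints = { registration: '/registrations' };
const primaryKeys = { attendee: 'ATT_ID' };
const getEndpoint = ( modelName ) => endpoints[ modelName ];
const getPrimaryKeyQueryString = ( modelName, ids ) =>
	`[${ primaryKeys[ modelName ] }][IN]=${ ids.join( ',' ) }`;
const modelNameForQueryString = ( modelName ) =>
	modelName.charAt( 0 ).toUpperCase() + modelName.slice( 1 );

// Reverse-endpoint path for
// getRelatedEntitiesForIds( 'attendee', [ 10, 20 ], 'registration' ):
// query the relation's endpoint, filter by the model's primary key,
// and include the model so one-to-one relations come back in one response.
const path = getEndpoint( 'registration' ) +
	`/?where${ getPrimaryKeyQueryString( 'attendee', [ 10, 20 ] ) }` +
	`&include=${ modelNameForQueryString( 'attendee' ) }.*`;
console.log( path );
// /registrations/?where[ATT_ID][IN]=10,20&include=Attendee.*
```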
|
{
"pile_set_name": "Github"
}
|
extern crate dynet;
extern crate rand;
use std::fs::File;
use std::io::{BufReader, Read, Seek, SeekFrom};
use std::path::Path;
use dynet::*;
use rand::{seq::SliceRandom, thread_rng};
#[derive(Copy, Clone, Debug)]
pub enum Activation {
Sigmoid,
Tanh,
Relu,
Softmax,
Linear,
}
impl Activation {
fn forward<E: AsRef<Expression>>(&self, x: E) -> Expression {
let x = x.as_ref();
match *self {
Activation::Sigmoid => logistic(x),
Activation::Tanh => tanh(x),
Activation::Relu => rectify(x),
Activation::Softmax => softmax(x, 0),
Activation::Linear => x.clone(),
}
}
}
#[derive(Debug)]
pub struct Layer {
pw: Parameter,
pb: Parameter,
activation: Activation,
dropout_rate: f32,
}
impl Layer {
pub fn new(
in_size: u32,
out_size: u32,
activation: Activation,
dropout_rate: f32,
model: &mut ParameterCollection,
) -> Self {
Self::with_initializer(
in_size,
out_size,
activation,
dropout_rate,
model,
&ParameterInitGlorot::default(),
)
}
pub fn with_initializer<I: ParameterInit>(
in_size: u32,
out_size: u32,
activation: Activation,
dropout_rate: f32,
model: &mut ParameterCollection,
initializer: &I,
) -> Layer {
Layer {
pw: model.add_parameters([out_size, in_size], initializer),
pb: model.add_parameters([out_size], &ParameterInitConst::new(0.)),
activation,
dropout_rate,
}
}
pub fn forward<E: AsRef<Expression>>(
&mut self,
x: E,
cg: &mut ComputationGraph,
train: bool,
) -> Expression {
let w = parameter(cg, &mut self.pw);
let b = parameter(cg, &mut self.pb);
let mut y = self
.activation
.forward(affine_transform([&b, &w, x.as_ref()]));
if train && self.dropout_rate > 0. {
y = dropout(y, self.dropout_rate)
}
y
}
}
#[derive(Debug)]
pub struct MLP {
layers: Vec<Layer>,
}
impl MLP {
pub fn new(
units: &[u32],
out_size: u32,
activation: Activation,
dropout_rate: f32,
model: &mut ParameterCollection,
) -> Self {
let n_layers = units.len();
if n_layers < 1 {
panic!("number of layers must be greater than 0.");
}
MLP {
layers: units
.iter()
.enumerate()
.map(|(i, &u)| {
if i < n_layers - 1 {
Layer::new(u, units[i + 1], activation, dropout_rate, model)
} else {
Layer::new(u, out_size, Activation::Linear, 0.0, model)
}
})
.collect(),
}
}
pub fn forward<E: AsRef<Expression>>(
&mut self,
x: E,
cg: &mut ComputationGraph,
train: bool,
) -> Expression {
let mut y = x.as_ref().clone();
for layer in &mut self.layers {
y = layer.forward(y, cg, train);
}
y
}
}
const NUM_TRAIN_SAMPLES: u32 = 60000;
const NUM_TEST_SAMPLES: u32 = 10000;
const NUM_INPUT_UNITS: u32 = 28 * 28;
const NUM_HIDDEN_UNITS: u32 = 512;
const NUM_OUTPUT_UNITS: u32 = 10;
const BATCH_SIZE: u32 = 200;
const NUM_TRAIN_BATCHES: u32 = NUM_TRAIN_SAMPLES / BATCH_SIZE;
const NUM_TEST_BATCHES: u32 = NUM_TEST_SAMPLES / BATCH_SIZE;
const MAX_EPOCH: u32 = 100;
// Load `n` MNIST images from an IDX3 file: skip the 16-byte header and
// normalize each pixel byte to [0, 1].
fn load_images<P: AsRef<Path>>(filename: P, n: u32) -> Vec<f32> {
let mut reader = BufReader::new(File::open(filename.as_ref()).unwrap());
reader.seek(SeekFrom::Start(16)).unwrap();
let size = (n * NUM_INPUT_UNITS) as usize;
let mut buf: Vec<u8> = Vec::with_capacity(size);
reader.read_to_end(&mut buf).unwrap();
let mut ret: Vec<f32> = Vec::with_capacity(size);
for i in 0..size {
ret.push(buf[i] as f32 / 255.0);
}
ret
}
// Load `n` MNIST labels from an IDX1 file, skipping the 8-byte header.
fn load_labels<P: AsRef<Path>>(filename: P, n: u32) -> Vec<u8> {
let mut reader = BufReader::new(File::open(filename.as_ref()).unwrap());
reader.seek(SeekFrom::Start(8)).unwrap();
let mut ret: Vec<u8> = Vec::with_capacity(n as usize);
reader.read_to_end(&mut ret).unwrap();
ret
}
fn main() {
dynet::initialize(&mut DynetParams::from_args(false));
let train_images = load_images("data/train-images-idx3-ubyte", NUM_TRAIN_SAMPLES);
let train_labels = load_labels("data/train-labels-idx1-ubyte", NUM_TRAIN_SAMPLES);
let test_images = load_images("data/t10k-images-idx3-ubyte", NUM_TEST_SAMPLES);
let test_labels = load_labels("data/t10k-labels-idx1-ubyte", NUM_TEST_SAMPLES);
let mut m = ParameterCollection::new();
let mut trainer = AdamTrainer::default(&mut m);
let mut nn = MLP::new(
&[NUM_INPUT_UNITS, NUM_HIDDEN_UNITS],
NUM_OUTPUT_UNITS,
Activation::Relu,
0.2,
&mut m,
);
let mut rng = thread_rng();
let mut ids: Vec<usize> = (0usize..NUM_TRAIN_SAMPLES as usize).collect();
let mut cg = ComputationGraph::new();
for epoch in 0..MAX_EPOCH {
ids.shuffle(&mut rng);
let mut loss = 0.;
for batch in 0..NUM_TRAIN_BATCHES {
print!("\rTraining... {} / {}", batch + 1, NUM_TRAIN_BATCHES);
let mut inputs: Vec<f32> = Vec::with_capacity((BATCH_SIZE * NUM_INPUT_UNITS) as usize);
let mut labels: Vec<u32> = vec![0; BATCH_SIZE as usize];
for i in 0..BATCH_SIZE {
let id = ids[(i + batch * BATCH_SIZE) as usize];
let from = id * NUM_INPUT_UNITS as usize;
let to = (id + 1) * NUM_INPUT_UNITS as usize;
inputs.extend_from_slice(&train_images[from..to]);
labels[i as usize] = train_labels[id] as u32;
}
cg.clear();
let x = input(&mut cg, ([NUM_INPUT_UNITS], BATCH_SIZE), &inputs);
let y = nn.forward(x, &mut cg, true);
let loss_expr = sum_batches(pickneglogsoftmax(y, &labels));
loss += cg.forward(&loss_expr).as_scalar();
cg.backward(&loss_expr);
trainer.update();
}
println!(", E = {}", loss);
let mut correct = 0;
for batch in 0..NUM_TEST_BATCHES {
print!("\rTesting... {} / {}", batch + 1, NUM_TEST_BATCHES);
let mut inputs: Vec<f32> = Vec::with_capacity((BATCH_SIZE * NUM_INPUT_UNITS) as usize);
let from = (batch * BATCH_SIZE * NUM_INPUT_UNITS) as usize;
let to = ((batch + 1) * BATCH_SIZE * NUM_INPUT_UNITS) as usize;
inputs.extend_from_slice(&test_images[from..to]);
cg.clear();
let x = input(&mut cg, ([NUM_INPUT_UNITS], BATCH_SIZE), &inputs);
let y = nn.forward(x, &mut cg, false);
let y_val = cg.forward(&y).as_vector();
for i in 0..BATCH_SIZE {
let mut maxval = -1e10;
let mut argmax: i32 = -1;
for j in 0..NUM_OUTPUT_UNITS {
let v = y_val[(j + i * NUM_OUTPUT_UNITS) as usize];
if v > maxval {
maxval = v;
argmax = j as i32;
}
}
if argmax == test_labels[(i + batch * BATCH_SIZE) as usize] as i32 {
correct += 1;
}
}
}
let accuracy = 100.0 * correct as f32 / NUM_TEST_SAMPLES as f32;
println!("\nepoch {}: accuracy: {:.2}%", epoch, accuracy);
}
}
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright (c) 2008-2020, Hazelcast, Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.hazelcast.query.impl.predicates;
import com.hazelcast.internal.json.JsonValue;
import com.hazelcast.internal.json.NonTerminalJsonValue;
import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.internal.serialization.BinaryInterface;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;
import com.hazelcast.query.Predicate;
import com.hazelcast.query.QueryException;
import com.hazelcast.query.impl.AttributeType;
import com.hazelcast.query.impl.Extractable;
import com.hazelcast.query.impl.QueryableEntry;
import com.hazelcast.query.impl.getters.AbstractJsonGetter;
import com.hazelcast.query.impl.getters.MultiResult;
import java.io.IOException;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import static com.hazelcast.internal.serialization.impl.FactoryIdHelper.PREDICATE_DS_FACTORY_ID;
import static com.hazelcast.query.impl.IndexUtils.canonicalizeAttribute;
import static com.hazelcast.query.impl.predicates.PredicateUtils.isNull;
/**
* Provides base features for predicates, such as extraction and conversion of the attribute's value.
* It also handles apply() on MultiResult.
*/
@BinaryInterface
public abstract class AbstractPredicate<K, V> implements Predicate<K, V>, IdentifiedDataSerializable {
String attributeName;
private transient volatile AttributeType attributeType;
protected AbstractPredicate() {
}
protected AbstractPredicate(String attributeName) {
this.attributeName = canonicalizeAttribute(attributeName);
}
@Override
public boolean apply(Map.Entry<K, V> mapEntry) {
Object attributeValue = readAttributeValue(mapEntry);
if (attributeValue instanceof MultiResult) {
return applyForMultiResult((MultiResult) attributeValue);
} else if (attributeValue instanceof Collection || attributeValue instanceof Object[]) {
throw new IllegalArgumentException(String.format("Cannot use %s predicate with an array or a collection attribute",
getClass().getSimpleName()));
}
return convertAndApplyForSingleAttributeValue(attributeValue);
}
private boolean applyForMultiResult(MultiResult result) {
List results = result.getResults();
for (Object o : results) {
Comparable entryValue = (Comparable) o;
// it's enough if there's only one result in the MultiResult that satisfies the predicate
boolean satisfied = convertAndApplyForSingleAttributeValue(entryValue);
if (satisfied) {
return true;
}
}
return false;
}
private boolean convertAndApplyForSingleAttributeValue(Object attributeValue) {
if (attributeValue instanceof JsonValue) {
if (attributeValue == NonTerminalJsonValue.INSTANCE) {
return false;
}
attributeValue = AbstractJsonGetter.convertFromJsonValue((JsonValue) attributeValue);
}
return applyForSingleAttributeValue((Comparable) attributeValue);
}
protected abstract boolean applyForSingleAttributeValue(Comparable attributeValue);
/**
 * Converts givenAttributeValue to the type of entryAttributeValue.
 * Good practice: do not invoke this method if entryAttributeValue == null.
*
* @param entryAttributeValue attribute value extracted from the entry
* @param givenAttributeValue given attribute value to be converted
* @return converted givenAttributeValue
*/
protected Comparable convert(Comparable entryAttributeValue, Comparable givenAttributeValue) {
if (isNull(givenAttributeValue)) {
return givenAttributeValue;
}
AttributeType type = attributeType;
if (type == null) {
if (entryAttributeValue == null) {
// we can't convert since we cannot infer the entry's type from a null attribute value.
// Returning unconverted value is an optimization since the given value will be compared with null.
return givenAttributeValue;
}
type = QueryableEntry.extractAttributeType(entryAttributeValue);
attributeType = type;
}
return convert(type, entryAttributeValue, givenAttributeValue);
}
private Comparable convert(AttributeType entryAttributeType, Comparable entryAttributeValue, Comparable givenAttributeValue) {
Class<?> entryAttributeClass = entryAttributeValue != null ? entryAttributeValue.getClass() : null;
if (entryAttributeType == AttributeType.ENUM) {
// if attribute type is enum, convert given attribute to enum string
return entryAttributeType.getConverter().convert(givenAttributeValue);
} else {
// if given attribute value is already in expected type then there's no need for conversion.
if (entryAttributeClass != null && entryAttributeClass.isAssignableFrom(givenAttributeValue.getClass())) {
return givenAttributeValue;
} else if (entryAttributeType != null) {
return entryAttributeType.getConverter().convert(givenAttributeValue);
} else {
throw new QueryException("Unknown attribute type: " + givenAttributeValue.getClass().getName()
+ " for attribute: " + attributeName);
}
}
}
private Object readAttributeValue(Map.Entry entry) {
Extractable extractable = (Extractable) entry;
return extractable.getAttributeValue(attributeName);
}
Object convertEnumValue(Object attributeValue) {
if (attributeValue != null && attributeValue.getClass().isEnum()) {
attributeValue = attributeValue.toString();
}
return attributeValue;
}
@Override
public void writeData(ObjectDataOutput out) throws IOException {
out.writeUTF(attributeName);
}
@Override
public void readData(ObjectDataInput in) throws IOException {
attributeName = in.readUTF();
}
@Override
public int getFactoryId() {
return PREDICATE_DS_FACTORY_ID;
}
@Override
public boolean equals(Object o) {
if (!(o instanceof AbstractPredicate)) {
return false;
}
AbstractPredicate<?, ?> that = (AbstractPredicate<?, ?>) o;
if (!that.canEqual(this)) {
return false;
}
return attributeName != null ? attributeName.equals(that.attributeName) : that.attributeName == null;
}
@SuppressWarnings("BooleanMethodIsAlwaysInverted")
public boolean canEqual(Object other) {
return (other instanceof AbstractPredicate);
}
@Override
public int hashCode() {
return attributeName != null ? attributeName.hashCode() : 0;
}
}
|
{
"pile_set_name": "Github"
}
|
<?php
class Options_Framework {
const VERSION = '1.9.1';
/**
 * Gets the option name used to store settings in the database.
 *
 * @return string
 */
function get_option_name() {
$name = '';
if ( function_exists( 'optionsframework_option_name' ) ) {
$name = optionsframework_option_name();
}
if ( '' == $name ) {
$name = get_option( 'stylesheet' );
$name = preg_replace( "/\W/", "_", strtolower( $name ) );
}
return apply_filters( 'options_framework_option_name', $name );
}
/**
 * Returns the options array, loaded from options.php or the
 * 'optionsframework_options' function, filtered through 'of_options'.
 *
 * @return array
 */
static function &_optionsframework_options() {
static $options = null;
if ( !$options ) {
$location = apply_filters( 'options_framework_location', array( 'options.php' ) );
if ( $optionsfile = locate_template( $location ) ) {
$maybe_options = load_template( $optionsfile );
if ( is_array( $maybe_options ) ) {
$options = $maybe_options;
} else if ( function_exists( 'optionsframework_options' ) ) {
$options = optionsframework_options();
}
}
$options = apply_filters( 'of_options', $options );
}
return $options;
}
}
|
{
"pile_set_name": "Github"
}
|
/*! X-editable - v1.4.1
* In-place editing with Twitter Bootstrap, jQuery UI or pure jQuery
* http://github.com/vitalets/x-editable
* Copyright (c) 2013 Vitaliy Potapov; Licensed MIT */
.editableform {
margin-bottom: 0; /* overwrites bootstrap margin */
}
.editableform .control-group {
margin-bottom: 0; /* overwrites bootstrap margin */
white-space: nowrap; /* prevent wrapping buttons on new line */
}
.editable-buttons {
display: inline-block; /* should be inline to take effect of parent's white-space: nowrap */
vertical-align: top;
margin-left: 7px;
/* inline-block emulation for IE7*/
zoom: 1;
*display: inline;
}
.editable-input {
vertical-align: top;
display: inline-block; /* should be inline to take effect of parent's white-space: nowrap */
width: auto; /* bootstrap-responsive has width: 100% that breaks layout */
white-space: normal; /* reset white-space declared in parent */
/* display-inline emulation for IE7*/
zoom: 1;
*display: inline;
}
.editable-buttons .editable-cancel {
margin-left: 7px;
}
/*for jquery-ui buttons need set height to look more pretty*/
.editable-buttons button.ui-button-icon-only {
height: 24px;
width: 30px;
}
.editableform-loading {
background: url('../img/loading.gif') center center no-repeat;
height: 25px;
width: auto;
min-width: 25px;
}
.editable-inline .editableform-loading {
background-position: left 5px;
}
.editable-error-block {
max-width: 300px;
margin: 5px 0 0 0;
width: auto;
white-space: normal;
}
/*add padding for jquery ui*/
.editable-error-block.ui-state-error {
padding: 3px;
}
.editable-error {
color: red;
}
.editableform .editable-date {
padding: 0;
margin: 0;
float: left;
}
/* checklist vertical alignment */
.editable-checklist label input[type="checkbox"],
.editable-checklist label span {
vertical-align: middle;
margin: 0;
}
.editable-checklist label {
white-space: nowrap;
}
/* set exact width of textarea to fit buttons toolbar */
.editable-wysihtml5 {
width: 566px;
height: 250px;
}
/* clear button shown as link in date inputs */
.editable-clear {
clear: both;
font-size: 0.9em;
text-decoration: none;
text-align: right;
}
/* IOS-style clear button for text inputs */
.editable-clear-x {
background: url('../img/clear.png') center center no-repeat;
display: block;
width: 13px;
height: 13px;
position: absolute;
opacity: 0.6;
z-index: 100;
}
.editable-clear-x:hover {
opacity: 1;
}
.editable-container {
max-width: none !important; /* without this rule poshytip/tooltip does not stretch */
}
.editable-container.popover {
/* width: 300px;*/ /* debug */
width: auto; /* without this rule popover does not stretch */
}
.editable-container.editable-inline {
display: inline-block;
vertical-align: middle;
width: auto;
/* inline-block emulation for IE7*/
zoom: 1;
*display: inline;
}
.editable-container.ui-widget {
font-size: inherit; /* jqueryui widget font 1.1em too big, overwrite it */
}
.editable-click,
a.editable-click,
a.editable-click:hover {
text-decoration: none;
border: solid 1px #ddd;
display: block;
padding: 5px 10px;
}
.editable-click.editable-disabled,
a.editable-click.editable-disabled,
a.editable-click.editable-disabled:hover {
color: #585858;
cursor: default;
border-bottom: none;
}
.editable-empty, .editable-empty:hover{
font-style: italic;
color: #DD1144;
border-bottom: none;
text-decoration: none;
}
.editable-unsaved {
font-weight: bold;
}
.editable-unsaved:after {
/* content: '*'*/
}
/*!
* Datepicker for Bootstrap
*
* Copyright 2012 Stefan Petre
* Improvements by Andrew Rowls
* Licensed under the Apache License v2.0
* http://www.apache.org/licenses/LICENSE-2.0
*
*/
.datepicker {
padding: 4px;
margin-top: 1px;
-webkit-border-radius: 4px;
-moz-border-radius: 4px;
border-radius: 4px;
direction: ltr;
/*.dow {
border-top: 1px solid #ddd !important;
}*/
}
.datepicker-inline {
width: 220px;
}
.datepicker.datepicker-rtl {
direction: rtl;
}
.datepicker.datepicker-rtl table tr td span {
float: right;
}
.datepicker-dropdown {
top: 0;
left: 0;
}
.datepicker-dropdown:before {
content: '';
display: inline-block;
border-left: 7px solid transparent;
border-right: 7px solid transparent;
border-bottom: 7px solid #ccc;
border-bottom-color: rgba(0, 0, 0, 0.2);
position: absolute;
top: -7px;
left: 6px;
}
.datepicker-dropdown:after {
content: '';
display: inline-block;
border-left: 6px solid transparent;
border-right: 6px solid transparent;
border-bottom: 6px solid #ffffff;
position: absolute;
top: -6px;
left: 7px;
}
.datepicker > div {
display: none;
}
.datepicker.days div.datepicker-days {
display: block;
}
.datepicker.months div.datepicker-months {
display: block;
}
.datepicker.years div.datepicker-years {
display: block;
}
.datepicker table {
margin: 0;
}
.datepicker td,
.datepicker th {
text-align: center;
width: 20px;
height: 20px;
-webkit-border-radius: 4px;
-moz-border-radius: 4px;
border-radius: 4px;
border: none;
}
.table-striped .datepicker table tr td,
.table-striped .datepicker table tr th {
background-color: transparent;
}
.datepicker table tr td.day:hover {
background: #eeeeee;
cursor: pointer;
}
.datepicker table tr td.old,
.datepicker table tr td.new {
color: #999999;
}
.datepicker table tr td.disabled,
.datepicker table tr td.disabled:hover {
background: none;
color: #999999;
cursor: default;
}
.datepicker table tr td.today,
.datepicker table tr td.today:hover,
.datepicker table tr td.today.disabled,
.datepicker table tr td.today.disabled:hover {
background-color: #fde19a;
background-image: -moz-linear-gradient(top, #fdd49a, #fdf59a);
background-image: -ms-linear-gradient(top, #fdd49a, #fdf59a);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#fdd49a), to(#fdf59a));
background-image: -webkit-linear-gradient(top, #fdd49a, #fdf59a);
background-image: -o-linear-gradient(top, #fdd49a, #fdf59a);
background-image: linear-gradient(top, #fdd49a, #fdf59a);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#fdd49a', endColorstr='#fdf59a', GradientType=0);
border-color: #fdf59a #fdf59a #fbed50;
border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25);
filter: progid:DXImageTransform.Microsoft.gradient(enabled=false);
}
.datepicker table tr td.today:hover,
.datepicker table tr td.today:hover:hover,
.datepicker table tr td.today.disabled:hover,
.datepicker table tr td.today.disabled:hover:hover,
.datepicker table tr td.today:active,
.datepicker table tr td.today:hover:active,
.datepicker table tr td.today.disabled:active,
.datepicker table tr td.today.disabled:hover:active,
.datepicker table tr td.today.active,
.datepicker table tr td.today:hover.active,
.datepicker table tr td.today.disabled.active,
.datepicker table tr td.today.disabled:hover.active,
.datepicker table tr td.today.disabled,
.datepicker table tr td.today:hover.disabled,
.datepicker table tr td.today.disabled.disabled,
.datepicker table tr td.today.disabled:hover.disabled,
.datepicker table tr td.today[disabled],
.datepicker table tr td.today:hover[disabled],
.datepicker table tr td.today.disabled[disabled],
.datepicker table tr td.today.disabled:hover[disabled] {
background-color: #fdf59a;
}
.datepicker table tr td.today:active,
.datepicker table tr td.today:hover:active,
.datepicker table tr td.today.disabled:active,
.datepicker table tr td.today.disabled:hover:active,
.datepicker table tr td.today.active,
.datepicker table tr td.today:hover.active,
.datepicker table tr td.today.disabled.active,
.datepicker table tr td.today.disabled:hover.active {
background-color: #fbf069 \9;
}
.datepicker table tr td.active,
.datepicker table tr td.active:hover,
.datepicker table tr td.active.disabled,
.datepicker table tr td.active.disabled:hover {
background-color: #006dcc;
background-image: -moz-linear-gradient(top, #0088cc, #0044cc);
background-image: -ms-linear-gradient(top, #0088cc, #0044cc);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#0088cc), to(#0044cc));
background-image: -webkit-linear-gradient(top, #0088cc, #0044cc);
background-image: -o-linear-gradient(top, #0088cc, #0044cc);
background-image: linear-gradient(top, #0088cc, #0044cc);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#0088cc', endColorstr='#0044cc', GradientType=0);
border-color: #0044cc #0044cc #002a80;
border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25);
filter: progid:DXImageTransform.Microsoft.gradient(enabled=false);
color: #fff;
text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);
}
.datepicker table tr td.active:hover,
.datepicker table tr td.active:hover:hover,
.datepicker table tr td.active.disabled:hover,
.datepicker table tr td.active.disabled:hover:hover,
.datepicker table tr td.active:active,
.datepicker table tr td.active:hover:active,
.datepicker table tr td.active.disabled:active,
.datepicker table tr td.active.disabled:hover:active,
.datepicker table tr td.active.active,
.datepicker table tr td.active:hover.active,
.datepicker table tr td.active.disabled.active,
.datepicker table tr td.active.disabled:hover.active,
.datepicker table tr td.active.disabled,
.datepicker table tr td.active:hover.disabled,
.datepicker table tr td.active.disabled.disabled,
.datepicker table tr td.active.disabled:hover.disabled,
.datepicker table tr td.active[disabled],
.datepicker table tr td.active:hover[disabled],
.datepicker table tr td.active.disabled[disabled],
.datepicker table tr td.active.disabled:hover[disabled] {
background-color: #0044cc;
}
.datepicker table tr td.active:active,
.datepicker table tr td.active:hover:active,
.datepicker table tr td.active.disabled:active,
.datepicker table tr td.active.disabled:hover:active,
.datepicker table tr td.active.active,
.datepicker table tr td.active:hover.active,
.datepicker table tr td.active.disabled.active,
.datepicker table tr td.active.disabled:hover.active {
background-color: #003399 \9;
}
.datepicker table tr td span {
display: block;
width: 23%;
height: 54px;
line-height: 54px;
float: left;
margin: 1%;
cursor: pointer;
-webkit-border-radius: 4px;
-moz-border-radius: 4px;
border-radius: 4px;
}
.datepicker table tr td span:hover {
background: #eeeeee;
}
.datepicker table tr td span.disabled,
.datepicker table tr td span.disabled:hover {
background: none;
color: #999999;
cursor: default;
}
.datepicker table tr td span.active,
.datepicker table tr td span.active:hover,
.datepicker table tr td span.active.disabled,
.datepicker table tr td span.active.disabled:hover {
background-color: #006dcc;
background-image: -moz-linear-gradient(top, #0088cc, #0044cc);
background-image: -ms-linear-gradient(top, #0088cc, #0044cc);
background-image: -webkit-gradient(linear, 0 0, 0 100%, from(#0088cc), to(#0044cc));
background-image: -webkit-linear-gradient(top, #0088cc, #0044cc);
background-image: -o-linear-gradient(top, #0088cc, #0044cc);
background-image: linear-gradient(top, #0088cc, #0044cc);
background-repeat: repeat-x;
filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#0088cc', endColorstr='#0044cc', GradientType=0);
border-color: #0044cc #0044cc #002a80;
border-color: rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.1) rgba(0, 0, 0, 0.25);
filter: progid:DXImageTransform.Microsoft.gradient(enabled=false);
color: #fff;
text-shadow: 0 -1px 0 rgba(0, 0, 0, 0.25);
}
.datepicker table tr td span.active:hover,
.datepicker table tr td span.active:hover:hover,
.datepicker table tr td span.active.disabled:hover,
.datepicker table tr td span.active.disabled:hover:hover,
.datepicker table tr td span.active:active,
.datepicker table tr td span.active:hover:active,
.datepicker table tr td span.active.disabled:active,
.datepicker table tr td span.active.disabled:hover:active,
.datepicker table tr td span.active.active,
.datepicker table tr td span.active:hover.active,
.datepicker table tr td span.active.disabled.active,
.datepicker table tr td span.active.disabled:hover.active,
.datepicker table tr td span.active.disabled,
.datepicker table tr td span.active:hover.disabled,
.datepicker table tr td span.active.disabled.disabled,
.datepicker table tr td span.active.disabled:hover.disabled,
.datepicker table tr td span.active[disabled],
.datepicker table tr td span.active:hover[disabled],
.datepicker table tr td span.active.disabled[disabled],
.datepicker table tr td span.active.disabled:hover[disabled] {
background-color: #0044cc;
}
.datepicker table tr td span.active:active,
.datepicker table tr td span.active:hover:active,
.datepicker table tr td span.active.disabled:active,
.datepicker table tr td span.active.disabled:hover:active,
.datepicker table tr td span.active.active,
.datepicker table tr td span.active:hover.active,
.datepicker table tr td span.active.disabled.active,
.datepicker table tr td span.active.disabled:hover.active {
background-color: #003399 \9;
}
.datepicker table tr td span.old {
color: #999999;
}
.datepicker th.switch {
width: 145px;
}
.datepicker thead tr:first-child th,
.datepicker tfoot tr:first-child th {
cursor: pointer;
}
.datepicker thead tr:first-child th:hover,
.datepicker tfoot tr:first-child th:hover {
background: #eeeeee;
}
.input-append.date .add-on i,
.input-prepend.date .add-on i {
display: block;
cursor: pointer;
width: 16px;
height: 16px;
}
|
{
"pile_set_name": "Github"
}
|
# Tenko parser autogenerated test case
- From: tests/testcases/regexes/assertions_and_quantifiers/autogen.md
- Path: tests/testcases/regexes/assertions_and_quantifiers/gen/start_of_input_u-flag/2b.md
> :: regexes : assertions and quantifiers : gen : start of input u-flag
>
> ::> 2b
## Input
`````js
/^+foo/u
`````
## Output
_Note: the whole output block is auto-generated. Manual changes will be overwritten!_
Below follow outputs in five parsing modes: sloppy, sloppy+annexb, strict script, module, module+annexb.
Note that the output parts are auto-generated by the test runner to reflect actual result.
### Sloppy mode
Parsed with script goal and as if the code did not start with strict mode header.
`````
throws: Lexer error!
Regex: Encountered unescaped quantifier (ord=43) without a value to quantify
start@1:0, error@1:0
╔══╦════════════════
1 ║ /^+foo/u
║ ^^^^^^^------- error
╚══╩════════════════
`````
### Strict mode
Parsed with script goal but as if it was starting with `"use strict"` at the top.
_Output same as sloppy mode._
### Module goal
Parsed with the module goal.
_Output same as sloppy mode._
### Sloppy mode with AnnexB
Parsed with script goal with AnnexB rules enabled and as if the code did not start with strict mode header.
_Output same as sloppy mode._
### Module goal with AnnexB
Parsed with the module goal with AnnexB rules enabled.
_Output same as sloppy mode._
|
{
"pile_set_name": "Github"
}
|
<timeinfo>
<indefinite>1</indefinite>
<duration>60.000000000</duration>
<introDuration>0.000000000</introDuration>
<outroDuration>0.000000000</outroDuration>
</timeinfo>
|
{
"pile_set_name": "Github"
}
|
<?xml version="1.0" encoding="UTF-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:state_pressed="true" android:drawable="@drawable/remote_xbox_tv_down" />
<item android:state_pressed="false" android:drawable="@drawable/remote_xbox_tv_up" />
</selector>
|
{
"pile_set_name": "Github"
}
|
**Added:**
* Testing PyNE build without PyMOAB using Python 2 & 3
* Testing PyNE with PyMOAB (and DAGMC) using Python 2
* Added FindDAGMC.cmake file
* Dockerfile to build many variations of PyNE docker image, with python script CLI
**Changed:**
* PyNE can be built with PyMOAB without DAGMC, with limited capabilities
* "--dagmc" flag added to `setup.py` in order to build PyNE against DAGMC
* pyne.mesh now takes advantage of PyMOAB instead of PyTAPS:
- IMeshTag changed to NativeMeshTag, with the corresponding tag type name change
from 'imesh' to 'nat_mesh'
- write_hdf5(self, filename) -> write_hdf5(self, filename, write_mats)
- new save(self, filename, write_mats) (alias for write_hdf5)
- new class MeshSetIterator()
- new get_tag(self, tag_name) and delete_tag(self, tag_name) methods
- when tagging the root set of a mesh, a new syntax is available:
- `mymesh.mytag[mesh.mesh.getRootSet()] = val` can now be written as `mymesh.mytag[mymesh] = val`
- direct call to the mesh entities change accordingly for example:
- getEntSets() -> get_entities_by_type( , )
- getTagHandle('XXX') -> tag_get_handle(types.XXXXX)
- iterate() -> mesh_iterate()
- getAllTags(xx) -> tag_get_tags_on_entity(xx)
- mesh.destroyTag(self, boolean) -> mesh.delete_tag(self)
- ... (see the respective PyTAPS and PyMOAB documentation)
- those changes have been propagated in mcnp.py, alara.py, ccc.py, dagmc.pyx,
r2s.py, variancereduction.py, expand_tags.py, and their respective tests...
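The root-set shorthand above amounts to the tag accessor resolving a mesh argument to the mesh's root set. A toy stand-alone sketch of that pattern (an illustrative mock, not PyNE code; `Tag`, `Mesh`, and `root_set` here are invented stand-ins):

```python
class Tag:
    """Toy tag that resolves a Mesh argument to its root set (mock only)."""
    def __init__(self):
        self._data = {}

    def _resolve(self, key):
        # Accept either a raw entity handle or a Mesh; a Mesh maps to its root set.
        return key.root_set if isinstance(key, Mesh) else key

    def __setitem__(self, key, val):
        self._data[self._resolve(key)] = val

    def __getitem__(self, key):
        return self._data[self._resolve(key)]


class Mesh:
    def __init__(self):
        self.root_set = object()  # stand-in for the mesh's root entity set
        self.mytag = Tag()


m = Mesh()
m.mytag[m] = 42             # new-style: tag the root set via the mesh itself
print(m.mytag[m.root_set])  # old-style access resolves to the same entry
```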
* test_fluka:
- added a test to check the data tag names of the different tally parts
and errors.
* dagmc_bridge: added a static DagMC instance
* utils.py: updated the download timeout time to 120sec (from 30sec)
* updated CI to use CircleCI 2.1 workflows: now build separately from tests with state saved between runs
**Deprecated:** None
**Removed:** None
**Fixed:** None
**Security:** None
|
{
"pile_set_name": "Github"
}
|
<!ENTITY title "Dizinleri Eşitle">
<!ENTITY file.label "Dosya">
<!ENTITY reason.label "Neden">
<!ENTITY action.label "İşlem">
<!ENTITY download.label "İndir">
<!ENTITY upload.label "Karşıya Yükle">
<!ENTITY delete.label "Delete">
<!ENTITY nothing.label "Hiçbir Şey Yapma">
<!ENTITY defaults.label "Seçimi Değiştir">
<!ENTITY local.label "Eksik yerel dosyalar için:">
<!ENTITY remote.label "Eksik uzak dosyalar için:">
<!ENTITY diff.label "Farklı boyuttaki dosyalar için:">
<!ENTITY newer.label "For newer local files:">
<!ENTITY older.label "For newer remote files:">
<!ENTITY lDownload.access "İ">
<!ENTITY lDelete.access "e">
<!ENTITY lNothing.access "H">
<!ENTITY rUpload.access "K">
<!ENTITY rDelete.access "l">
<!ENTITY rNothing.access "n">
<!ENTITY dDownload.access "w">
<!ENTITY dUpload.access "p">
<!ENTITY dNothing.access "h">
<!ENTITY diff.note "Not: Farklılık özelliği dosyaları içerik bazında karşılaştırmaz. Daha ziyade karşılaştırılan klasörlerde dosyanın olup olmadığı ya da dosya boyutlarına bakar. 'Zaman damgalarını senkronize tut' seçiliyse zaman damgaları da karşılaştırılır.">
|
{
"pile_set_name": "Github"
}
|
#include <inttypes.h>
// GDB JIT interface
enum JITAction { JIT_NOACTION, JIT_REGISTER_FN, JIT_UNREGISTER_FN };
struct JITCodeEntry
{
struct JITCodeEntry* next;
struct JITCodeEntry* prev;
const char *symfile_addr;
uint64_t symfile_size;
};
struct JITDescriptor
{
uint32_t version;
uint32_t action_flag;
struct JITCodeEntry* relevant_entry;
struct JITCodeEntry* first_entry;
};
struct JITDescriptor __jit_debug_descriptor = { 1, JIT_NOACTION, 0, 0 };
void __jit_debug_register_code()
{
}
// end GDB JIT interface
struct JITCodeEntry entry;
int main()
{
// Create a code entry with a bogus size
entry.next = entry.prev = 0;
entry.symfile_addr = (char *)&entry;
entry.symfile_size = (uint64_t)47<<32;
__jit_debug_descriptor.relevant_entry = __jit_debug_descriptor.first_entry = &entry;
__jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
__jit_debug_register_code();
return 0;
}
|
{
"pile_set_name": "Github"
}
|
/// Copyright (c) 2012 Ecma International. All rights reserved.
/**
* This test is actually testing the [[Delete]] internal method (8.12.8). Since the
* language provides no way to directly exercise [[Delete]], the tests are placed here.
*
* @path ch11/11.4/11.4.1/11.4.1-4.a-17.js
* @description delete operator returns true on deleting a arguments element
*/
function testcase() {
function foo(a,b)
{
var d = delete arguments[0];
return (d === true && arguments[0] === undefined);
}
if(foo(1,2) === true)
return true;
}
runTestCase(testcase);
|
{
"pile_set_name": "Github"
}
|
/*
* Copyright © 2012-2013 Blue Brain Project, BBP/EPFL. All rights reserved.
* Copyright © 2012-2013 Inria. All rights reserved.
* See COPYING in top-level directory.
*/
#include <private/autogen/config.h>
#include <hwloc.h>
#include <hwloc/plugins.h>
/* private headers allowed for convenience because this plugin is built within hwloc */
#include <private/misc.h>
#include <private/debug.h>
#include <stdarg.h>
#include <errno.h>
#include <X11/Xlib.h>
#include <NVCtrl/NVCtrl.h>
#include <NVCtrl/NVCtrlLib.h>
#define HWLOC_GL_SERVER_MAX 10
#define HWLOC_GL_SCREEN_MAX 10
struct hwloc_gl_backend_data_s {
unsigned nr_display;
struct hwloc_gl_display_info_s {
char name[10];
unsigned port, device;
unsigned pcidomain, pcibus, pcidevice, pcifunc;
char *productname;
} display[HWLOC_GL_SERVER_MAX*HWLOC_GL_SCREEN_MAX];
};
static void
hwloc_gl_query_devices(struct hwloc_gl_backend_data_s *data)
{
int err;
unsigned i,j;
/* mark the number of display as 0 in case we fail below,
* so that we don't try again later.
*/
data->nr_display = 0;
for (i = 0; i < HWLOC_GL_SERVER_MAX; ++i) {
Display* display;
char displayName[10];
int opcode, event, error;
/* open X server */
snprintf(displayName, sizeof(displayName), ":%u", i);
display = XOpenDisplay(displayName);
if (!display)
continue;
/* Check for NV-CONTROL extension (it's per server) */
if(!XQueryExtension(display, "NV-CONTROL", &opcode, &event, &error)) {
XCloseDisplay(display);
continue;
}
for (j = 0; j < (unsigned) ScreenCount(display) && j < HWLOC_GL_SCREEN_MAX; j++) {
struct hwloc_gl_display_info_s *info = &data->display[data->nr_display];
const int screen = j;
unsigned int *ptr_binary_data;
int data_length;
int gpu_number;
int nv_ctrl_pci_bus;
int nv_ctrl_pci_device;
int nv_ctrl_pci_domain;
int nv_ctrl_pci_func;
char *productname;
/* the server supports NV-CONTROL but it may contain non-NVIDIA screen that don't support it */
if (!XNVCTRLIsNvScreen(display, screen))
continue;
/* Gets the GPU number attached to the default screen. */
/* For further details, see the <NVCtrl/NVCtrlLib.h> */
err = XNVCTRLQueryTargetBinaryData (display, NV_CTRL_TARGET_TYPE_X_SCREEN, screen, 0,
NV_CTRL_BINARY_DATA_GPUS_USED_BY_XSCREEN,
(unsigned char **) &ptr_binary_data, &data_length);
if (!err)
continue;
gpu_number = ptr_binary_data[1];
free(ptr_binary_data);
#ifdef NV_CTRL_PCI_DOMAIN
/* Gets the ID's of the GPU defined by gpu_number
* For further details, see the <NVCtrl/NVCtrlLib.h> */
err = XNVCTRLQueryTargetAttribute(display, NV_CTRL_TARGET_TYPE_GPU, gpu_number, 0,
NV_CTRL_PCI_DOMAIN, &nv_ctrl_pci_domain);
if (!err)
continue;
#else
nv_ctrl_pci_domain = 0;
#endif
err = XNVCTRLQueryTargetAttribute(display, NV_CTRL_TARGET_TYPE_GPU, gpu_number, 0,
NV_CTRL_PCI_BUS, &nv_ctrl_pci_bus);
if (!err)
continue;
err = XNVCTRLQueryTargetAttribute(display, NV_CTRL_TARGET_TYPE_GPU, gpu_number, 0,
NV_CTRL_PCI_DEVICE, &nv_ctrl_pci_device);
if (!err)
continue;
err = XNVCTRLQueryTargetAttribute(display, NV_CTRL_TARGET_TYPE_GPU, gpu_number, 0,
NV_CTRL_PCI_FUNCTION, &nv_ctrl_pci_func);
if (!err)
continue;
productname = NULL;
err = XNVCTRLQueryTargetStringAttribute(display, NV_CTRL_TARGET_TYPE_GPU, gpu_number, 0,
NV_CTRL_STRING_PRODUCT_NAME, &productname);
snprintf(info->name, sizeof(info->name), ":%u.%u", i, j);
info->port = i;
info->device = j;
info->pcidomain = nv_ctrl_pci_domain;
info->pcibus = nv_ctrl_pci_bus;
info->pcidevice = nv_ctrl_pci_device;
info->pcifunc = nv_ctrl_pci_func;
info->productname = productname;
hwloc_debug("GL device %s (product %s) on PCI 0000:%02x:%02x.%u\n", info->name, productname,
nv_ctrl_pci_domain, nv_ctrl_pci_bus, nv_ctrl_pci_device, nv_ctrl_pci_func);
/* validate this device */
data->nr_display++;
}
XCloseDisplay(display);
}
}
static int
hwloc_gl_backend_notify_new_object(struct hwloc_backend *backend, struct hwloc_backend *caller __hwloc_attribute_unused,
struct hwloc_obj *pcidev)
{
struct hwloc_topology *topology = backend->topology;
struct hwloc_gl_backend_data_s *data = backend->private_data;
unsigned i, res;
if (!(hwloc_topology_get_flags(topology) & (HWLOC_TOPOLOGY_FLAG_IO_DEVICES|HWLOC_TOPOLOGY_FLAG_WHOLE_IO)))
return 0;
if (!hwloc_topology_is_thissystem(topology)) {
hwloc_debug("%s", "\nno GL detection (not thissystem)\n");
return 0;
}
if (HWLOC_OBJ_PCI_DEVICE != pcidev->type)
return 0;
if (data->nr_display == (unsigned) -1) {
/* first call, lookup all display */
hwloc_gl_query_devices(data);
/* if it fails, data->nr_display = 0 so we won't do anything below and in next callbacks */
}
if (!data->nr_display)
/* found no display */
return 0;
/* now the display array is ready to use */
res = 0;
for(i=0; i<data->nr_display; i++) {
struct hwloc_gl_display_info_s *info = &data->display[i];
hwloc_obj_t osdev;
if (info->pcidomain != pcidev->attr->pcidev.domain)
continue;
if (info->pcibus != pcidev->attr->pcidev.bus)
continue;
if (info->pcidevice != pcidev->attr->pcidev.dev)
continue;
if (info->pcifunc != pcidev->attr->pcidev.func)
continue;
osdev = hwloc_alloc_setup_object(HWLOC_OBJ_OS_DEVICE, -1);
osdev->name = strdup(info->name);
osdev->logical_index = -1;
osdev->attr->osdev.type = HWLOC_OBJ_OSDEV_GPU;
hwloc_obj_add_info(osdev, "Backend", "GL");
hwloc_obj_add_info(osdev, "GPUVendor", "NVIDIA Corporation");
if (info->productname)
hwloc_obj_add_info(osdev, "GPUModel", info->productname);
hwloc_insert_object_by_parent(topology, pcidev, osdev);
res++;
/* there may be others */
}
return res;
}
static void
hwloc_gl_backend_disable(struct hwloc_backend *backend)
{
struct hwloc_gl_backend_data_s *data = backend->private_data;
unsigned i;
if (data->nr_display != (unsigned) -1) { /* could be -1 if --no-io */
for(i=0; i<data->nr_display; i++) {
struct hwloc_gl_display_info_s *info = &data->display[i];
free(info->productname);
}
}
free(backend->private_data);
}
static struct hwloc_backend *
hwloc_gl_component_instantiate(struct hwloc_disc_component *component,
const void *_data1 __hwloc_attribute_unused,
const void *_data2 __hwloc_attribute_unused,
const void *_data3 __hwloc_attribute_unused)
{
struct hwloc_backend *backend;
struct hwloc_gl_backend_data_s *data;
if (hwloc_plugin_check_namespace(component->name, "hwloc_backend_alloc") < 0)
return NULL;
/* thissystem may not be fully initialized yet, we'll check flags in discover() */
backend = hwloc_backend_alloc(component);
if (!backend)
return NULL;
data = malloc(sizeof(*data));
if (!data) {
free(backend);
return NULL;
}
/* the first callback will initialize those */
data->nr_display = (unsigned) -1; /* unknown yet */
backend->private_data = data;
backend->disable = hwloc_gl_backend_disable;
backend->notify_new_object = hwloc_gl_backend_notify_new_object;
return backend;
}
static struct hwloc_disc_component hwloc_gl_disc_component = {
HWLOC_DISC_COMPONENT_TYPE_MISC,
"gl",
HWLOC_DISC_COMPONENT_TYPE_GLOBAL,
hwloc_gl_component_instantiate,
10, /* after pci */
NULL
};
#ifdef HWLOC_INSIDE_PLUGIN
HWLOC_DECLSPEC extern const struct hwloc_component hwloc_gl_component;
#endif
const struct hwloc_component hwloc_gl_component = {
HWLOC_COMPONENT_ABI,
HWLOC_COMPONENT_TYPE_DISC,
0,
&hwloc_gl_disc_component
};
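The matching logic in `hwloc_gl_backend_notify_new_object` boils down to comparing the four PCI address fields of each cached display record against the reported PCI device, skipping on any mismatch. A rough sketch of that comparison, with a hypothetical `DisplayInfo` record standing in for `struct hwloc_gl_display_info_s` (Python for illustration only):

```python
from dataclasses import dataclass

@dataclass
class DisplayInfo:
    """Simplified stand-in for struct hwloc_gl_display_info_s."""
    name: str
    domain: int
    bus: int
    device: int
    func: int

def matching_displays(displays, domain, bus, device, func):
    """Return the display records attached to the given PCI address,
    mimicking the compare-and-continue loop in notify_new_object."""
    return [d for d in displays
            if (d.domain, d.bus, d.device, d.func) == (domain, bus, device, func)]
```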
|
{
"pile_set_name": "Github"
}
|
using System;
namespace NaughtyAttributes
{
[AttributeUsage(AttributeTargets.Field, AllowMultiple = false, Inherited = true)]
public class ValidateInputAttribute : ValidatorAttribute
{
public string CallbackName { get; private set; }
public string Message { get; private set; }
public ValidateInputAttribute(string callbackName, string message = null)
{
this.CallbackName = callbackName;
this.Message = message;
}
}
}
|
{
"pile_set_name": "Github"
}
|
require 'uv/ctypes/init'
local ffi = require 'ffi'
local timer = require 'uv.timer'
local libuv = require 'uv/libuv'
local libuv2 = require 'uv/libuv2'
local uv_idle_t = require 'uv/ctypes/uv_idle_t'
local uv_prepare_t = require 'uv/ctypes/uv_prepare_t'
local uv_check_t = require 'uv/ctypes/uv_check_t'
local join = require 'uv/util/join'
local loop = {}
function loop.run(callback)
if callback then
timer.set(0, callback)
end
return libuv.uv_default_loop():run()
end
function loop.alive()
return libuv.uv_loop_alive(libuv.uv_default_loop()) ~= 0
end
function loop.stop()
libuv.uv_default_loop():stop()
end
function loop.idle(callback)
join(coroutine.create(function()
uv_idle_t():start(callback)
end))
end
function loop.yield(callback)
join(coroutine.create(function()
uv_prepare_t():start(callback)
end))
end
function loop.resume(callback)
join(coroutine.create(function()
uv_check_t():start(callback)
end))
end
return loop
|
{
"pile_set_name": "Github"
}
|
package ecs
//Licensed under the Apache License, Version 2.0 (the "License");
//you may not use this file except in compliance with the License.
//You may obtain a copy of the License at
//
//http://www.apache.org/licenses/LICENSE-2.0
//
//Unless required by applicable law or agreed to in writing, software
//distributed under the License is distributed on an "AS IS" BASIS,
//WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//See the License for the specific language governing permissions and
//limitations under the License.
//
// Code generated by Alibaba Cloud SDK Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
// DiskDeviceMapping is a nested struct in ecs response
type DiskDeviceMapping struct {
Progress string `json:"Progress" xml:"Progress"`
Format string `json:"Format" xml:"Format"`
Device string `json:"Device" xml:"Device"`
Size string `json:"Size" xml:"Size"`
RemainTime int `json:"RemainTime" xml:"RemainTime"`
SnapshotId string `json:"SnapshotId" xml:"SnapshotId"`
ImportOSSObject string `json:"ImportOSSObject" xml:"ImportOSSObject"`
ImportOSSBucket string `json:"ImportOSSBucket" xml:"ImportOSSBucket"`
Type string `json:"Type" xml:"Type"`
}
|
{
"pile_set_name": "Github"
}
|
/*
Copyright 2020 Sergey Vlasov <sigprof@gmail.com>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#pragma once
#include "config_common.h"
/* USB Device descriptor parameter */
#define VENDOR_ID 0x6964 /* "id" */
#define PRODUCT_ID 0x0080
#define DEVICE_VER 0x0001
#define MANUFACTURER IDOBAO
#define PRODUCT ID80
#define DESCRIPTION A 75% hotswap keyboard
/* key matrix size */
#define MATRIX_ROWS 9
#define MATRIX_COLS 11
/*
* Keyboard Matrix Assignments
*
* Change this to how you wired your keyboard
* COLS: AVR pins used for columns, left to right
* ROWS: AVR pins used for rows, top to bottom
* DIODE_DIRECTION: COL2ROW = COL = Anode (+), ROW = Cathode (-, marked on diode)
* ROW2COL = ROW = Anode (+), COL = Cathode (-, marked on diode)
*
* The matrix description in the vendor-supplied JSON file for kbfirmware.com
* had 9 columns:
* { D0, D1, D2, D3, D5, D4, D6, D7, B4 }
* and 12 rows:
* { B7, B3, B2, B1, B0, E6, F0, F1, F4, F5, F6, F7 }
 * However, row 6 was completely empty, and pin F0 was not actually
 * routed anywhere on the PCB, so this row was removed to save some
 * resources (the EEPROM space for dynamic keymaps is especially scarce).
*
* After doing the above change, the matrix was transposed (rows and columns
* were swapped), because a matrix with the COL2ROW layout can be scanned much
* more efficiently than a matrix with the ROW2COL layout (depending on various
* optimizations, the difference in scan rate can be over 2 times). Because of
* this, the "columns" in the matrix layout now mostly correspond to physical
* rows, and the "rows" have mostly vertical physical orientation.
*/
#define MATRIX_ROW_PINS { D0, D1, D2, D3, D5, D4, D6, D7, B4 }
#define MATRIX_COL_PINS { B7, B3, B2, B1, B0, E6, F1, F4, F5, F6, F7 }
#define DIODE_DIRECTION COL2ROW
#define BACKLIGHT_PIN B6
#define BACKLIGHT_BREATHING
#define BACKLIGHT_LEVELS 3
#define CAPS_LOCK_LED_PIN C7
#define RGB_DI_PIN E2
#ifdef RGB_DI_PIN
#define RGBLED_NUM 20 /* 16 underglow LEDs, 4 top LEDs */
#define RGBLIGHT_HUE_STEP 8
#define RGBLIGHT_SAT_STEP 8
#define RGBLIGHT_VAL_STEP 8
#define RGBLIGHT_LIMIT_VAL 255 /* The maximum brightness level */
#define RGBLIGHT_SLEEP /* If defined, the RGB lighting will be switched off when the host goes to sleep */
/*== all animations enable ==*/
#define RGBLIGHT_ANIMATIONS
/*== or choose animations ==*/
// #define RGBLIGHT_EFFECT_BREATHING
// #define RGBLIGHT_EFFECT_RAINBOW_MOOD
// #define RGBLIGHT_EFFECT_RAINBOW_SWIRL
// #define RGBLIGHT_EFFECT_SNAKE
// #define RGBLIGHT_EFFECT_KNIGHT
// #define RGBLIGHT_EFFECT_CHRISTMAS
// #define RGBLIGHT_EFFECT_STATIC_GRADIENT
// #define RGBLIGHT_EFFECT_RGB_TEST
// #define RGBLIGHT_EFFECT_ALTERNATING
#endif
/* Bootmagic Lite key configuration: use the Esc key */
#define BOOTMAGIC_LITE_ROW 0
#define BOOTMAGIC_LITE_COLUMN 5
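The comment block above argues that a COL2ROW matrix can be scanned faster because driving one row pin lets all column pins be sampled in a single pass. A toy sketch of that scan shape (the `read_row` callback is hypothetical; real firmware reads a GPIO port instead, and Python is used only for illustration):

```python
def scan_matrix(read_row, num_rows, num_cols):
    """COL2ROW scan: drive one row pin, then sample every column pin in
    one pass. read_row(r) returns a bitmask of pressed columns for row r."""
    pressed = set()
    for r in range(num_rows):
        mask = read_row(r)           # one port read covers all columns
        for c in range(num_cols):
            if mask & (1 << c):
                pressed.add((r, c))
    return pressed
```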
|
{
"pile_set_name": "Github"
}
|
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#00000000" >
    <!-- Title -->
<FrameLayout
android:id="@+id/bdp_paycenter_title_frame"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentTop="true" >
</FrameLayout>
<FrameLayout
android:id="@+id/bdp_paycenter_content_frame"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_below="@+id/bdp_paycenter_title_frame"
android:background="@drawable/bdp_paycenter_dialog_bottom_bg" >
        <!-- Order list -->
<com.baidu.platformsdk.widget.AmazingListView
android:id="@+id/alsv_order"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fadingEdge="none"
/>
        <!-- Empty-state hint -->
<TextView
android:id="@+id/txt_empty"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:gravity="center"
android:text="@string/bdp_paycenter_order_list_empty_consume"
android:textColor="@color/bdp_gray"
android:textSize="14sp"
/>
</FrameLayout>
</RelativeLayout>
|
{
"pile_set_name": "Github"
}
|
package breeze.signal.support
import breeze.signal.{OptWindowFunction}
import breeze.linalg._
import breeze.numerics.{cos, isOdd, isEven, sincpi}
import scala.math.Pi
import breeze.macros.expand
/**
 * Construction delegate trait for firwin filter design.</p>
 * Implementation details (especially
 * option arguments) may be added in the future, so it is recommended not
 * to call these implicit delegates directly. Instead, use firwin(x: DenseVector).
*
* @author ktakagaki
*/
trait CanFirwin[Output] {
def apply(
taps: Int,
omegas: DenseVector[Double],
nyquist: Double,
zeroPass: Boolean,
scale: Boolean,
multiplier: Double,
optWindow: OptWindowFunction): FIRKernel1D[Output]
}
/**
* Construction delegate for firwin filter design.</p>
* Implementation details (especially
* option arguments) may be added in the future, so it is recommended not
* to call these implicit delegates directly. Instead, use firwin(x: DenseVector).
*
* @author ktakagaki
*/
object CanFirwin {
/** Use via implicit delegate syntax firwin(xxxx)
*
*/
implicit def firwinDouble: CanFirwin[Double] = {
new CanFirwin[Double] {
def apply(
taps: Int,
omegas: DenseVector[Double],
nyquist: Double,
zeroPass: Boolean,
scale: Boolean,
multiplier: Double,
optWindow: OptWindowFunction): FIRKernel1D[Double] = new FIRKernel1D[Double](
firwinDoubleImpl(taps, omegas, nyquist, zeroPass, scale, optWindow) * multiplier,
multiplier,
"FIRKernel1D(firwin): " + taps + " taps, " + omegas + ", " + optWindow + ", zeroPass=" + zeroPass + ", nyquist=" + nyquist + ", scale=" + scale
)
}
}
@expand
implicit def firwinT[@expand.args(Int, Long, Float) T]: CanFirwin[T] = {
new CanFirwin[T] {
def apply(
taps: Int,
omegas: DenseVector[Double],
nyquist: Double,
zeroPass: Boolean,
scale: Boolean,
multiplier: Double,
optWindow: OptWindowFunction): FIRKernel1D[T] = new FIRKernel1D[T](
convert(firwinDoubleImpl(taps, omegas, nyquist, zeroPass, scale, optWindow) * multiplier, T),
multiplier,
"FIRKernel1D(firwin): " + taps + " taps, " + omegas + ", " + optWindow + ", zeroPass=" + zeroPass + ", nyquist=" + nyquist + ", scale=" + scale
)
}
}
def firwinDoubleImpl(
taps: Int,
omegas: DenseVector[Double],
nyquist: Double,
zeroPass: Boolean,
scale: Boolean,
optWindow: OptWindowFunction): DenseVector[Double] = {
//various variable conditions which must be met
require(omegas.length > 0, "At least one cutoff frequency must be given!")
require(min(omegas) >= 0, "The cutoff frequencies must be bigger than zero!")
require(max(omegas) <= nyquist, "The cutoff frequencies must be smaller than the nyquist frequency!")
if (omegas.length > 1) {
require(min(diff(omegas)) > 0, "The cutoff frequencies must be monotonically increasing.")
}
val nyquistPass = (zeroPass != isOdd(omegas.length))
var tempCutoff = (omegas / nyquist).toArray
if (zeroPass) tempCutoff = tempCutoff.+:(0d)
if (nyquistPass) tempCutoff = tempCutoff.:+(1d)
val scaledCutoff = DenseVector(tempCutoff)
//ToDo: Is the following statement translated from numpy code correctly???
//https://github.com/scipy/scipy/blob/v0.13.0/scipy/signal/fir_filter_design.py#L138
require(
!(nyquistPass && isEven(taps)),
"A filter with an even number of taps must have zero response at the Nyquist rate.")
//val bands = scaledCutoff.reshape(-1, 2)
val alpha = 0.5 * (taps - 1)
val m = DenseVector.tabulate(taps)(i => i.toDouble) - alpha
val h = DenseVector.zeros[Double](m.length)
for (band <- scaledCutoff.toArray.zipWithIndex) {
if (isEven(band._2)) h -= sincpi(m *:* band._1) *:* band._1
else h += sincpi(m *:* band._1) *:* band._1
}
val win = optWindow match {
case OptWindowFunction.Hamming(alpha, beta) => WindowFunctions.hammingWindow(taps, alpha, beta)
case OptWindowFunction.Hanning(alpha, beta) => WindowFunctions.hammingWindow(taps, alpha, beta)
case OptWindowFunction.Blackman(a0, a1, a2) => WindowFunctions.blackmanWindow(taps, a0, a1, a2)
case OptWindowFunction.None => DenseVector.ones[Double](taps)
case OptWindowFunction.User(dv) => {
require(dv.length == taps, "Length of specified window function is not the same as taps option!")
dv
}
}
h *= win
if (scale) {
val scaleFrequency =
if (scaledCutoff(0) == 0d) 0d
else if (scaledCutoff(1) == 1d) 1d
else (scaledCutoff(0) + scaledCutoff(1)) / 2d
val c: DenseVector[Double] = cos(m *:* (Pi * scaleFrequency))
val s: Double = sum(h *:* c)
h /= s
}
h
}
}
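`firwinDoubleImpl` above is a windowed-sinc design: accumulate scaled sinc terms per band edge, apply the window, then rescale for unity gain at the reference frequency. A stripped-down single-band (low-pass) sketch of the same steps, Hamming-windowed (Python for illustration; `firwin_lowpass` is a hypothetical name, not the breeze API):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def firwin_lowpass(taps, cutoff):
    """Low-pass windowed-sinc design (cutoff normalized so Nyquist = 1),
    Hamming-windowed and scaled so the DC gain is exactly 1."""
    alpha = 0.5 * (taps - 1)                      # center of symmetry
    h = [cutoff * sinc(cutoff * (i - alpha)) for i in range(taps)]
    win = [0.54 - 0.46 * math.cos(2 * math.pi * i / (taps - 1))
           for i in range(taps)]
    h = [hi * wi for hi, wi in zip(h, win)]
    s = sum(h)                                    # rescale: sum(h) == 1
    return [hi / s for hi in h]
```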
|
{
"pile_set_name": "Github"
}
|
/* infback.c -- inflate using a call-back interface
* Copyright (C) 1995-2009 Mark Adler
* For conditions of distribution and use, see copyright notice in zlib.h
*/
/*
This code is largely copied from inflate.c. Normally either infback.o or
inflate.o would be linked into an application--not both. The interface
with inffast.c is retained so that optimized assembler-coded versions of
inflate_fast() can be used with either inflate.c or infback.c.
*/
#include "zutil.h"
#include "inftrees.h"
#include "inflate.h"
#include "inffast.h"
/* function prototypes */
local void fixedtables OF((struct inflate_state FAR *state));
/*
strm provides memory allocation functions in zalloc and zfree, or
Z_NULL to use the library memory allocation functions.
windowBits is in the range 8..15, and window is a user-supplied
window and output buffer that is 2**windowBits bytes.
*/
int ZEXPORT inflateBackInit_(strm, windowBits, window, version, stream_size)
z_streamp strm;
int windowBits;
unsigned char FAR *window;
const char *version;
int stream_size;
{
struct inflate_state FAR *state;
if (version == Z_NULL || version[0] != ZLIB_VERSION[0] ||
stream_size != (int)(sizeof(z_stream)))
return Z_VERSION_ERROR;
if (strm == Z_NULL || window == Z_NULL ||
windowBits < 8 || windowBits > 15)
return Z_STREAM_ERROR;
strm->msg = Z_NULL; /* in case we return an error */
if (strm->zalloc == (alloc_func)0) {
strm->zalloc = zcalloc;
strm->opaque = (voidpf)0;
}
if (strm->zfree == (free_func)0) strm->zfree = zcfree;
state = (struct inflate_state FAR *)ZALLOC(strm, 1,
sizeof(struct inflate_state));
if (state == Z_NULL) return Z_MEM_ERROR;
Tracev((stderr, "inflate: allocated\n"));
strm->state = (struct internal_state FAR *)state;
state->dmax = 32768U;
state->wbits = windowBits;
state->wsize = 1U << windowBits;
state->window = window;
state->wnext = 0;
state->whave = 0;
return Z_OK;
}
/*
Return state with length and distance decoding tables and index sizes set to
fixed code decoding. Normally this returns fixed tables from inffixed.h.
If BUILDFIXED is defined, then instead this routine builds the tables the
first time it's called, and returns those tables the first time and
thereafter. This reduces the size of the code by about 2K bytes, in
exchange for a little execution time. However, BUILDFIXED should not be
used for threaded applications, since the rewriting of the tables and virgin
may not be thread-safe.
*/
local void fixedtables(state)
struct inflate_state FAR *state;
{
#ifdef BUILDFIXED
static int virgin = 1;
static code *lenfix, *distfix;
static code fixed[544];
/* build fixed huffman tables if first call (may not be thread safe) */
if (virgin) {
unsigned sym, bits;
static code *next;
/* literal/length table */
sym = 0;
while (sym < 144) state->lens[sym++] = 8;
while (sym < 256) state->lens[sym++] = 9;
while (sym < 280) state->lens[sym++] = 7;
while (sym < 288) state->lens[sym++] = 8;
next = fixed;
lenfix = next;
bits = 9;
inflate_table(LENS, state->lens, 288, &(next), &(bits), state->work);
/* distance table */
sym = 0;
while (sym < 32) state->lens[sym++] = 5;
distfix = next;
bits = 5;
inflate_table(DISTS, state->lens, 32, &(next), &(bits), state->work);
/* do this just once */
virgin = 0;
}
#else /* !BUILDFIXED */
# include "inffixed.h"
#endif /* BUILDFIXED */
state->lencode = lenfix;
state->lenbits = 9;
state->distcode = distfix;
state->distbits = 5;
}
/* Macros for inflateBack(): */
/* Load returned state from inflate_fast() */
#define LOAD() \
do { \
put = strm->next_out; \
left = strm->avail_out; \
next = strm->next_in; \
have = strm->avail_in; \
hold = state->hold; \
bits = state->bits; \
} while (0)
/* Set state from registers for inflate_fast() */
#define RESTORE() \
do { \
strm->next_out = put; \
strm->avail_out = left; \
strm->next_in = next; \
strm->avail_in = have; \
state->hold = hold; \
state->bits = bits; \
} while (0)
/* Clear the input bit accumulator */
#define INITBITS() \
do { \
hold = 0; \
bits = 0; \
} while (0)
/* Assure that some input is available. If input is requested, but denied,
then return a Z_BUF_ERROR from inflateBack(). */
#define PULL() \
do { \
if (have == 0) { \
have = in(in_desc, &next); \
if (have == 0) { \
next = Z_NULL; \
ret = Z_BUF_ERROR; \
goto inf_leave; \
} \
} \
} while (0)
/* Get a byte of input into the bit accumulator, or return from inflateBack()
with an error if there is no input available. */
#define PULLBYTE() \
do { \
PULL(); \
have--; \
hold += (unsigned long)(*next++) << bits; \
bits += 8; \
} while (0)
/* Assure that there are at least n bits in the bit accumulator. If there is
not enough available input to do that, then return from inflateBack() with
an error. */
#define NEEDBITS(n) \
do { \
while (bits < (unsigned)(n)) \
PULLBYTE(); \
} while (0)
/* Return the low n bits of the bit accumulator (n < 16) */
#define BITS(n) \
((unsigned)hold & ((1U << (n)) - 1))
/* Remove n bits from the bit accumulator */
#define DROPBITS(n) \
do { \
hold >>= (n); \
bits -= (unsigned)(n); \
} while (0)
/* Remove zero to seven bits as needed to go to a byte boundary */
#define BYTEBITS() \
do { \
hold >>= bits & 7; \
bits -= bits & 7; \
} while (0)
/* Assure that some output space is available, by writing out the window
if it's full. If the write fails, return from inflateBack() with a
Z_BUF_ERROR. */
#define ROOM() \
do { \
if (left == 0) { \
put = state->window; \
left = state->wsize; \
state->whave = left; \
if (out(out_desc, put, left)) { \
ret = Z_BUF_ERROR; \
goto inf_leave; \
} \
} \
} while (0)
/*
strm provides the memory allocation functions and window buffer on input,
and provides information on the unused input on return. For Z_DATA_ERROR
returns, strm will also provide an error message.
in() and out() are the call-back input and output functions. When
inflateBack() needs more input, it calls in(). When inflateBack() has
filled the window with output, or when it completes with data in the
window, it calls out() to write out the data. The application must not
change the provided input until in() is called again or inflateBack()
returns. The application must not change the window/output buffer until
inflateBack() returns.
in() and out() are called with a descriptor parameter provided in the
inflateBack() call. This parameter can be a structure that provides the
information required to do the read or write, as well as accumulated
information on the input and output such as totals and check values.
in() should return zero on failure. out() should return non-zero on
   failure. If either in() or out() fails, then inflateBack() returns a
   Z_BUF_ERROR. strm->next_in can be checked for Z_NULL to see whether it
   was in() or out() that caused the error. Otherwise, inflateBack()
   returns Z_STREAM_END on success, Z_DATA_ERROR for a deflate format
error, or Z_MEM_ERROR if it could not allocate memory for the state.
inflateBack() can also return Z_STREAM_ERROR if the input parameters
are not correct, i.e. strm is Z_NULL or the state was not initialized.
*/
int ZEXPORT inflateBack(strm, in, in_desc, out, out_desc)
z_streamp strm;
in_func in;
void FAR *in_desc;
out_func out;
void FAR *out_desc;
{
struct inflate_state FAR *state;
unsigned char FAR *next; /* next input */
unsigned char FAR *put; /* next output */
unsigned have, left; /* available input and output */
unsigned long hold; /* bit buffer */
unsigned bits; /* bits in bit buffer */
unsigned copy; /* number of stored or match bytes to copy */
unsigned char FAR *from; /* where to copy match bytes from */
code here; /* current decoding table entry */
code last; /* parent table entry */
unsigned len; /* length to copy for repeats, bits to drop */
int ret; /* return code */
static const unsigned short order[19] = /* permutation of code lengths */
{16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15};
/* Check that the strm exists and that the state was initialized */
if (strm == Z_NULL || strm->state == Z_NULL)
return Z_STREAM_ERROR;
state = (struct inflate_state FAR *)strm->state;
/* Reset the state */
strm->msg = Z_NULL;
state->mode = TYPE;
state->last = 0;
state->whave = 0;
next = strm->next_in;
have = next != Z_NULL ? strm->avail_in : 0;
hold = 0;
bits = 0;
put = state->window;
left = state->wsize;
/* Inflate until end of block marked as last */
for (;;)
switch (state->mode) {
case TYPE:
/* determine and dispatch block type */
if (state->last) {
BYTEBITS();
state->mode = DONE;
break;
}
NEEDBITS(3);
state->last = BITS(1);
DROPBITS(1);
switch (BITS(2)) {
case 0: /* stored block */
Tracev((stderr, "inflate: stored block%s\n",
state->last ? " (last)" : ""));
state->mode = STORED;
break;
case 1: /* fixed block */
fixedtables(state);
Tracev((stderr, "inflate: fixed codes block%s\n",
state->last ? " (last)" : ""));
state->mode = LEN; /* decode codes */
break;
case 2: /* dynamic block */
Tracev((stderr, "inflate: dynamic codes block%s\n",
state->last ? " (last)" : ""));
state->mode = TABLE;
break;
case 3:
strm->msg = (char *)"invalid block type";
state->mode = BAD;
}
DROPBITS(2);
break;
case STORED:
/* get and verify stored block length */
BYTEBITS(); /* go to byte boundary */
NEEDBITS(32);
if ((hold & 0xffff) != ((hold >> 16) ^ 0xffff)) {
strm->msg = (char *)"invalid stored block lengths";
state->mode = BAD;
break;
}
state->length = (unsigned)hold & 0xffff;
Tracev((stderr, "inflate: stored length %u\n",
state->length));
INITBITS();
/* copy stored block from input to output */
while (state->length != 0) {
copy = state->length;
PULL();
ROOM();
if (copy > have) copy = have;
if (copy > left) copy = left;
zmemcpy(put, next, copy);
have -= copy;
next += copy;
left -= copy;
put += copy;
state->length -= copy;
}
Tracev((stderr, "inflate: stored end\n"));
state->mode = TYPE;
break;
case TABLE:
/* get dynamic table entries descriptor */
NEEDBITS(14);
state->nlen = BITS(5) + 257;
DROPBITS(5);
state->ndist = BITS(5) + 1;
DROPBITS(5);
state->ncode = BITS(4) + 4;
DROPBITS(4);
#ifndef PKZIP_BUG_WORKAROUND
if (state->nlen > 286 || state->ndist > 30) {
strm->msg = (char *)"too many length or distance symbols";
state->mode = BAD;
break;
}
#endif
Tracev((stderr, "inflate: table sizes ok\n"));
/* get code length code lengths (not a typo) */
state->have = 0;
while (state->have < state->ncode) {
NEEDBITS(3);
state->lens[order[state->have++]] = (unsigned short)BITS(3);
DROPBITS(3);
}
while (state->have < 19)
state->lens[order[state->have++]] = 0;
state->next = state->codes;
state->lencode = (code const FAR *)(state->next);
state->lenbits = 7;
ret = inflate_table(CODES, state->lens, 19, &(state->next),
&(state->lenbits), state->work);
if (ret) {
strm->msg = (char *)"invalid code lengths set";
state->mode = BAD;
break;
}
Tracev((stderr, "inflate: code lengths ok\n"));
/* get length and distance code code lengths */
state->have = 0;
while (state->have < state->nlen + state->ndist) {
for (;;) {
here = state->lencode[BITS(state->lenbits)];
if ((unsigned)(here.bits) <= bits) break;
PULLBYTE();
}
if (here.val < 16) {
NEEDBITS(here.bits);
DROPBITS(here.bits);
state->lens[state->have++] = here.val;
}
else {
if (here.val == 16) {
NEEDBITS(here.bits + 2);
DROPBITS(here.bits);
if (state->have == 0) {
strm->msg = (char *)"invalid bit length repeat";
state->mode = BAD;
break;
}
len = (unsigned)(state->lens[state->have - 1]);
copy = 3 + BITS(2);
DROPBITS(2);
}
else if (here.val == 17) {
NEEDBITS(here.bits + 3);
DROPBITS(here.bits);
len = 0;
                        copy = 3 + BITS(3);
                        DROPBITS(3);
                    }
                    else {
                        NEEDBITS(here.bits + 7);
                        DROPBITS(here.bits);
                        len = 0;
                        copy = 11 + BITS(7);
                        DROPBITS(7);
                    }
                    if (state->have + copy > state->nlen + state->ndist) {
                        strm->msg = (char *)"invalid bit length repeat";
                        state->mode = BAD;
                        break;
                    }
                    while (copy--)
                        state->lens[state->have++] = (unsigned short)len;
                }
            }

            /* handle error breaks in while */
            if (state->mode == BAD) break;

            /* check for end-of-block code (better have one) */
            if (state->lens[256] == 0) {
                strm->msg = (char *)"invalid code -- missing end-of-block";
                state->mode = BAD;
                break;
            }

            /* build code tables -- note: do not change the lenbits or distbits
               values here (9 and 6) without reading the comments in inftrees.h
               concerning the ENOUGH constants, which depend on those values */
            state->next = state->codes;
            state->lencode = (code const FAR *)(state->next);
            state->lenbits = 9;
            ret = inflate_table(LENS, state->lens, state->nlen, &(state->next),
                                &(state->lenbits), state->work);
            if (ret) {
                strm->msg = (char *)"invalid literal/lengths set";
                state->mode = BAD;
                break;
            }
            state->distcode = (code const FAR *)(state->next);
            state->distbits = 6;
            ret = inflate_table(DISTS, state->lens + state->nlen, state->ndist,
                                &(state->next), &(state->distbits), state->work);
            if (ret) {
                strm->msg = (char *)"invalid distances set";
                state->mode = BAD;
                break;
            }
            Tracev((stderr, "inflate:       codes ok\n"));
            state->mode = LEN;

        case LEN:
            /* use inflate_fast() if we have enough input and output */
            if (have >= 6 && left >= 258) {
                RESTORE();
                if (state->whave < state->wsize)
                    state->whave = state->wsize - left;
                inflate_fast(strm, state->wsize);
                LOAD();
                break;
            }

            /* get a literal, length, or end-of-block code */
            for (;;) {
                here = state->lencode[BITS(state->lenbits)];
                if ((unsigned)(here.bits) <= bits) break;
                PULLBYTE();
            }
            if (here.op && (here.op & 0xf0) == 0) {
                last = here;
                for (;;) {
                    here = state->lencode[last.val +
                            (BITS(last.bits + last.op) >> last.bits)];
                    if ((unsigned)(last.bits + here.bits) <= bits) break;
                    PULLBYTE();
                }
                DROPBITS(last.bits);
            }
            DROPBITS(here.bits);
            state->length = (unsigned)here.val;

            /* process literal */
            if (here.op == 0) {
                Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ?
                        "inflate:         literal '%c'\n" :
                        "inflate:         literal 0x%02x\n", here.val));
                ROOM();
                *put++ = (unsigned char)(state->length);
                left--;
                state->mode = LEN;
                break;
            }

            /* process end of block */
            if (here.op & 32) {
                Tracevv((stderr, "inflate:         end of block\n"));
                state->mode = TYPE;
                break;
            }

            /* invalid code */
            if (here.op & 64) {
                strm->msg = (char *)"invalid literal/length code";
                state->mode = BAD;
                break;
            }

            /* length code -- get extra bits, if any */
            state->extra = (unsigned)(here.op) & 15;
            if (state->extra != 0) {
                NEEDBITS(state->extra);
                state->length += BITS(state->extra);
                DROPBITS(state->extra);
            }
            Tracevv((stderr, "inflate:         length %u\n", state->length));

            /* get distance code */
            for (;;) {
                here = state->distcode[BITS(state->distbits)];
                if ((unsigned)(here.bits) <= bits) break;
                PULLBYTE();
            }
            if ((here.op & 0xf0) == 0) {
                last = here;
                for (;;) {
                    here = state->distcode[last.val +
                            (BITS(last.bits + last.op) >> last.bits)];
                    if ((unsigned)(last.bits + here.bits) <= bits) break;
                    PULLBYTE();
                }
                DROPBITS(last.bits);
            }
            DROPBITS(here.bits);
            if (here.op & 64) {
                strm->msg = (char *)"invalid distance code";
                state->mode = BAD;
                break;
            }
            state->offset = (unsigned)here.val;

            /* get distance extra bits, if any */
            state->extra = (unsigned)(here.op) & 15;
            if (state->extra != 0) {
                NEEDBITS(state->extra);
                state->offset += BITS(state->extra);
                DROPBITS(state->extra);
            }
            if (state->offset > state->wsize - (state->whave < state->wsize ?
                                                left : 0)) {
                strm->msg = (char *)"invalid distance too far back";
                state->mode = BAD;
                break;
            }
            Tracevv((stderr, "inflate:         distance %u\n", state->offset));

            /* copy match from window to output */
            do {
                ROOM();
                copy = state->wsize - state->offset;
                if (copy < left) {
                    from = put + copy;
                    copy = left - copy;
                }
                else {
                    from = put - state->offset;
                    copy = left;
                }
                if (copy > state->length) copy = state->length;
                state->length -= copy;
                left -= copy;
                do {
                    *put++ = *from++;
                } while (--copy);
            } while (state->length != 0);
            break;

        case DONE:
            /* inflate stream terminated properly -- write leftover output */
            ret = Z_STREAM_END;
            if (left < state->wsize) {
                if (out(out_desc, state->window, state->wsize - left))
                    ret = Z_BUF_ERROR;
            }
            goto inf_leave;

        case BAD:
            ret = Z_DATA_ERROR;
            goto inf_leave;

        default:                /* can't happen, but makes compilers happy */
            ret = Z_STREAM_ERROR;
            goto inf_leave;
        }

    /* Return unused input */
  inf_leave:
    strm->next_in = next;
    strm->avail_in = have;
    return ret;
}

int ZEXPORT inflateBackEnd(strm)
z_streamp strm;
{
    if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0)
        return Z_STREAM_ERROR;
    ZFREE(strm, strm->state);
    strm->state = Z_NULL;
    Tracev((stderr, "inflate: end\n"));
    return Z_OK;
}
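The state machine above decodes a raw (headerless) deflate stream into a caller-supplied window. As a quick cross-check of that bit format, the same raw-deflate framing can be round-tripped from Python's `zlib` binding by passing a negative `wbits` value, which suppresses the zlib header and trailer. This is an illustrative sketch, not part of `infback.c`:

```python
import zlib

# Produce a raw deflate stream (negative wbits = no zlib header/trailer),
# i.e. the same bit format that inflateBack()/inflate_fast() consume.
data = b"hello world " * 20
comp = zlib.compressobj(level=6, wbits=-15)
raw = comp.compress(data) + comp.flush()

# Decode it again; wbits=-15 selects raw deflate with a 32K window,
# matching the window size inflateBack() typically operates on.
out = zlib.decompress(raw, wbits=-15)
assert out == data
```

The repetitive input compresses well, so `raw` ends up much shorter than `data`; any corruption of the stream would surface as the same kinds of errors the C code reports ("invalid distance too far back", and so on).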
|
{
"pile_set_name": "Github"
}
|
# Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for vote.py."""
from google.appengine.ext import ndb
from upvote.gae.datastore import test_utils
from upvote.gae.datastore import utils as datastore_utils
from upvote.gae.datastore.models import vote as vote_models
from upvote.gae.lib.testing import basetest
class VoteTest(basetest.UpvoteTestCase):
def setUp(self):
super(VoteTest, self).setUp()
self.blockable = test_utils.CreateBlockable()
self.user = test_utils.CreateUser()
def testSetKey(self):
expected_key = ndb.Key(flat=(
self.blockable.key.flat() + self.user.key.flat() +
('Vote', vote_models._IN_EFFECT_KEY_NAME)))
key = vote_models.Vote.GetKey(self.blockable.key, self.user.key)
self.assertEqual(expected_key, key)
def testSetKey_NotInEffect(self):
expected_key = ndb.Key(flat=(
self.blockable.key.flat() + self.user.key.flat() +
('Vote', None)))
key = vote_models.Vote.GetKey(
self.blockable.key, self.user.key, in_effect=False)
self.assertEqual(expected_key, key)
# Putting the vote results in a random ID being generated.
vote = test_utils.CreateVote(self.blockable)
vote.key = key
vote.put()
self.assertIsNotNone(vote.key.id())
def testBlockableKey(self):
vote = test_utils.CreateVote(self.blockable, user_email=self.user.email)
vote.key = vote_models.Vote.GetKey(self.blockable.key, self.user.key)
self.assertEqual(self.blockable.key, vote.blockable_key)
def testBlockableKey_MultiPartKey(self):
vote = test_utils.CreateVote(self.blockable, user_email=self.user.email)
# Add another test_blockable key to simulate a length-two blockable key.
vote.key = datastore_utils.ConcatenateKeys(
self.blockable.key,
vote_models.Vote.GetKey(self.blockable.key, self.user.key))
self.assertIsNotNone(vote.blockable_key)
self.assertLen(vote.blockable_key.pairs(), 2)
self.assertEqual(self.blockable.key, vote.blockable_key.parent())
def testBlockableKey_NoKey(self):
vote = test_utils.CreateVote(self.blockable, user_email=self.user.email)
vote.key = None
self.assertIsNone(vote.blockable_key)
def testBlockableKey_BadKey(self):
vote = test_utils.CreateVote(self.blockable, user_email=self.user.email)
# Take out User key section.
vote.key = datastore_utils.ConcatenateKeys(
self.blockable.key, ndb.Key(vote_models.Vote, vote.key.id()))
self.assertIsNone(vote.blockable_key)
def testUserKey(self):
vote = test_utils.CreateVote(self.blockable, user_email=self.user.email)
self.assertEqual(self.user.key, vote.user_key)
def testInEffect(self):
vote = test_utils.CreateVote(self.blockable)
self.assertTrue(vote.in_effect)
vote.key = None
self.assertFalse(vote.in_effect)
if __name__ == '__main__':
basetest.main()
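For readers unfamiliar with ndb key layout, the flat-tuple arithmetic that `testSetKey` asserts can be sketched in plain Python. The helper name and the `'InEffect'` key name below are hypothetical stand-ins (the real sentinel lives in `vote_models._IN_EFFECT_KEY_NAME`), so treat this as an illustration of the key shape rather than Upvote's actual implementation:

```python
# Hypothetical sketch of how Vote.GetKey composes a flat ndb key tuple.
def make_vote_flat_key(blockable_flat, user_flat, in_effect=True):
    # An in-effect vote uses a fixed key name, so there is at most one per
    # (blockable, user) pair; None lets the datastore assign a random ID
    # on put(), as testSetKey_NotInEffect demonstrates.
    key_name = 'InEffect' if in_effect else None  # placeholder sentinel
    return tuple(blockable_flat) + tuple(user_flat) + ('Vote', key_name)

flat = make_vote_flat_key(('Blockable', 'abc123'), ('User', 'alice@example.com'))
# flat == ('Blockable', 'abc123', 'User', 'alice@example.com', 'Vote', 'InEffect')
```

The resulting tuple is what `ndb.Key(flat=...)` consumes: alternating kind/ID pairs, with the `Vote` entity parented under the blockable and user segments.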
|
{
"pile_set_name": "Github"
}
|