For Day 13, the challenge is to figure out a safe path through some moving "scanners". My Python solution is below:

```python
import itertools

# Parse the input: each line is "depth: range".
layers = {}
with open("input.txt", "r") as f:
    for line in f:
        depth, rng = line.strip().split(": ")
        layers[int(depth)] = int(rng)

def check(layers, delay=0, return_early=False):
    """Return the layers whose scanner is at the top when we pass through."""
    caught = []
    for depth, rng in layers.items():
        # A scanner with range r returns to the top every 2 * (r - 1) picoseconds.
        if (depth + delay) % (2 * rng - 2) == 0:
            caught.append(depth)
            if return_early:
                return caught
    return caught

print("Part 1", sum(layers[d] * d for d in check(layers)))
print("Part 2", next(i for i in itertools.count()
                     if not check(layers, i, return_early=True)))
```

I struggled with this one until I realised the very simple modulus trick for working out whether it's safe to pass. The second part is pretty slow, and I think there would be a quicker way: calculating the safe delays for each of the scanners individually rather than re-checking every layer for each delay. Advent of Code runs every day up to Christmas, you should join in!
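Following up on that idea for a faster Part 2: each scanner with range r only forbids delays d where (d + depth) is a multiple of its period 2(r − 1), so the forbidden residue for each scanner can be precomputed once and each candidate delay checked against those residues directly. A hedged sketch — the `layers` dict here is the small example from the puzzle statement, not my real input:

```python
import itertools

# Example layers from the puzzle statement: {depth: range}.
layers = {0: 3, 1: 2, 4: 4, 6: 4}

# Each scanner blocks any delay d with (d + depth) % period == 0,
# where period = 2 * (range - 1). Precompute (period, forbidden residue).
constraints = [(2 * rng - 2, (-depth) % (2 * rng - 2))
               for depth, rng in layers.items()]

def safe(delay):
    # Safe iff the delay avoids every scanner's forbidden residue.
    return all(delay % period != forbidden for period, forbidden in constraints)

delay = next(d for d in itertools.count() if safe(d))
print(delay)  # 10 for the example layers above
```

A further speed-up along the same lines would be to group scanners by period and sieve whole ranges of delays at once, but even this precomputation avoids redoing the `2 * r - 2` arithmetic on every iteration.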
https://blog.jscott.me/advent-of-code-day-13/
Basic security features on Red Hat OpenShift Container Platform

Cloud Pak for Data builds on OpenShift® security features that protect sensitive customer data with strong encryption controls and improve access control across applications and the platform.

Features

Red Hat® OpenShift Container Platform enables an improved security posture with the addition of many capabilities that greatly increase the security of the platform.
- Uses Red Hat CoreOS as the immutable host operating system.
- Provides stronger platform security with FIPS (Federal Information Processing Standard) compliant encryption (FIPS 140-2 Level 1). For more information, see Services that support FIPS.
- Uses the Node Tuning Operator, which provides opportunities to further reduce privilege requirements in the security context constraints (SCCs). For more information, see Using the Node Tuning Operator.
- Supports encrypting data that is stored in etcd, which provides extra protection for secrets that are stored in the etcd database. For more information, see Encrypting etcd data.
- Provides a Network Bound Disk Encryption (NBDE) feature that can be used to automate remote enablement of LUKS-encrypted volumes, providing better protection against physical theft of host storage.
- Enables SELinux as mandatory on Red Hat OpenShift Container Platform.

Service accounts and roles

Cloud Pak for Data runs in a separate namespace or project on the OpenShift cluster. In the OpenShift project, Cloud Pak for Data creates service accounts and RBAC role bindings for pods to use within that namespace.
- No cluster-level access is permitted. All roles impose a restriction to work within that namespace only.
- Two roles are created: cpd-admin-role and cpd-viewer-role. These roles allow Cloud Pak for Data to ensure that the principle of least privilege can be applied even within the same namespace.
- Four service accounts are created: zen-admin-sa, zen-editor-sa, zen-viewer-sa, and zen-norbac-sa.
No SCCs are explicitly bound to these service accounts, so they pick up the restricted SCC by default.
- The default service account that is automatically created in every OpenShift project is not granted any RBAC privileges; that is, no roles are bound.
- The expectation is that the default service account will be used for user workloads such as Notebooks and Python jobs. It is not allowed to perform any kind of action inside the namespace.
- The default service account is associated with the restricted security context constraints (SCCs). However, some add-on services might still need custom SCCs, for example to support IPC. For more information, see Security context constraints in the IBM® Cloud Platform Common Services documentation.

However, if you plan to install certain Cloud Pak for Data services, you might need to create some custom SCCs. For more information, see Creating required security context constraints for services.

Service UIDs

Services use UIDs based on the Red Hat OpenShift Container Platform project where they are installed. When you create a project, Red Hat OpenShift assigns a unique range of UIDs to the project. To determine the UIDs that are associated with a project, run the following command:

oc describe project project-name

Replace project-name with the name of the project where Cloud Pak for Data is installed.

Additionally, if a service uses a custom SCC, it reserves one or more UIDs:
- The Db2® as a service restricted SCC reserves UID 500.
- The IBM® Db2 SCC reserves the following UIDs: 500, 501, 505, 600, 700, and 1001.
- The Watson™ Knowledge Catalog SCC reserves UID 10032.

For details on which services use these SCCs, see Custom SCCs for services.

Security hardening

Security hardening is enforced on Cloud Pak for Data on Red Hat OpenShift. The following security hardening actions are taken:
- Only non-root processes are run in containers.
The UIDs of the processes are in the OpenShift project's predefined range only, enforced by the use of the restricted SCC. The restricted SCC does not allow running containers as root. Attention: Some services still require a fixed UID. Such services use a custom SCC for that purpose.
- Cluster Admin privileges are not required for Cloud Pak for Data workloads at runtime. Cluster Admin authority is needed only to set up projects and custom SCCs (and only for the services that need them). Service accounts in each Cloud Pak for Data instance are granted privileges that are scoped only within their OpenShift project.
- Cloud Pak for Data users are typically not granted OpenShift Kubernetes access, and even if they are, it is only for the express purpose of installing or upgrading services inside their assigned OpenShift project.
- Strict use of service accounts with RBAC privileges is enforced, and the least-privilege principle is applied. Cloud Pak for Data ensures that any pod that is running user code (such as scripts or analytics environments) is not granted any RBAC privileges.
- No host access is required for Cloud Pak for Data workloads at runtime. This restriction is enforced by the SCCs. There is no access to host paths or networks.
- All pods have restricted resource consumption. Pod resource requests and limits are set for each pod, which restricts consumption. This approach helps protect against noisy neighbors that cause resource contention.
- Reliability gauges (liveness and readiness probes) are present for each pod to ensure that the pods are working correctly.
- For consumption monitoring, each of the pods on Cloud Pak for Data is annotated with metering annotations to uniquely identify add-on service workloads on the cluster.

Prescriptive security practices during installation

You don't need SSH access to OpenShift cluster nodes to deploy or manage Cloud Pak for Data and its add-on services.
The OpenShift oc command-line interface and the cloudctl command-line interface are used to deploy and manage the IBM Cloud Pak® for Data platform operator and its services. You can install the software on a cluster that is connected to the internet or a cluster that is air-gapped.
- Installing Cloud Pak for Data on clusters that are connected to the internet
- Recommended actions for additional security when you access images and to ensure reliability: it is highly recommended that you use the cloudctl utility from a client workstation to download the CASE packages and mirror images from the IBM Entitled Registry and other public container registries into your private container registry.
- Installing Cloud Pak for Data on air-gapped clusters
- Cloud Pak for Data supports mirroring of images to your private container registry. This procedure does not require that your private container registry and Red Hat OpenShift Container Platform cluster are able to access the internet. You can download the CASE packages and mirror images by using the cloudctl utility from a bastion node. If the bastion node does not have direct access to the private container registry, an intermediary container registry can be used: you mirror the images to the bastion node first, then transfer them from a network that does have access to your private container registry. Your Red Hat OpenShift Container Platform cluster is then configured to pull from the private container registry. That way you can install the Cloud Pak for Data operators and services without access to the internet.
- Namespace scope and Operator privileges
- Cloud Pak for Data uses the Operator pattern to manage its workloads, which allows for a separation of concerns.
That way, Operators that are resident in one central namespace are granted access to manage the Cloud Pak for Data service workloads in multiple different "instance" namespaces. Users in those instance namespaces can thus be granted far fewer privileges, scoped within that namespace. Operators, too, are scoped to operate only within these specific namespaces and are, by design, not permitted to manage non-Cloud Pak for Data namespaces in the cluster.

You need the following scope and user privileges:
- To install the Operator Catalog Source, you need a Red Hat OpenShift Container Platform user with privileges in the openshift-marketplace namespace.
- To create an "own Namespace" mode for the OperatorGroup and any Subscriptions for the needed Operators, you need users with privileges in either the ibm-common-services or the cpd-operators namespace. Note: For security reasons, the "All Namespaces" mode for the OperatorGroup is not recommended.
- To restrict the namespaces that the Cloud Pak for Data Operators have authority over, use the IBM Cloud Pak foundational services Namespace-scope Operator. You can expand the Operator access to more Cloud Pak for Data instance namespaces where the service workloads are then deployed. For more information, see Authorizing foundational services to perform operations on workloads in a namespace.
- To create custom resources to deploy the individual Cloud Pak for Data services, or to upgrade or scale them after installation, you need users with Project admin privileges in the "instance" namespaces.

Cluster admin responsibility

Cluster Admins are expected to manage the OpenShift cluster and prepare it for use by the Cloud Pak for Data services. Such tasks include:
- Node tuning and machine pool configurations for kernel settings and cri-o settings (such as pids-limit and ulimit), only for services that need them.
- Setting up the image content source policy and any secrets, so that images can be pulled from the private container registry.
- Create OpenShift Projects:
  - For the IBM Cloud Pak foundational services and Operators.
  - For the instances of Cloud Pak for Data.
- Configure the namespaces:
  - Define namespace quotas and Limit Ranges.
  - Grant the Cloud Pak for Data Admins access to specific instance namespaces.
- Create custom SCCs, only for services that need them.
- Install and configure the storage that is used by the workloads.
- Securely manage OpenShift: handle encryption and auditing, as well as other operations such as adding and replacing nodes.
https://www.ibm.com/support/producthub/icpdata/docs/content/SSQNUZ_latest/cpd/plan/security_features.html
Scala Language Integrated Connection Kit

distinctOn(_.id) and mysql bit(1) data type for a Boolean Scala field. min(...) like this example: select min(field1), min(field2), min(myBit1Field) from ... group by id. Unfortunately, that aggregation doesn't play well with mysql's bit(1), which in my case ends up being true in my Scala object when the record field in the DB is 0x00 (false).

also it seems like travis only does one build at a time on the repo

we can live with that, I think...? especially now that my recent branch protection change cut the number of builds per PR from 2 back down to 1

O.AutoInc on it to ignore the value and use the default which is specified in the database, but unfortunately insertOrUpdate doesn't respect O.AutoInc and I get the following error when the id is None: PSQLException: ERROR: null value in column "id" violates not-null constraint

def name = column[Option[String]]("NAME", O.NotNull)

import profile.api._
https://gitter.im/slick/slick?at=60779349b6a4714a29c5a697
#include <mnt_pump.h>

Inheritance diagram for MntPump:

Rather, the role of a pump is to periodically send data to other objects. The methods start_pumping() and stop_pumping() are called to start and stop the periodic timer, respectively. Periodically, pump_some() is called. The method is_running() can be used to check if the pump is working.

To use a pump, typically a subclass is created, with pump_some() overridden to generate proper data and push() it to components downstream. pump_some() should check that the pump is running, and schedule the next pump_some() call before it exits. Further, a Tcl instproc called "on_stop_pumping" should be defined as a callback that is invoked whenever stop_pumping() is called. This Tcl procedure may clean up data/files at the Tcl level.
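The subclassing contract above can be sketched in a few lines. MntPump itself is C++; the Python version below only illustrates the pattern, and the names Pump, Sink, push, and the pending-call queue standing in for the periodic timer are all illustrative, not part of the real MntPump API:

```python
class Pump:
    """Sketch of the pump pattern: periodically push data downstream."""

    def __init__(self, downstream):
        self.downstream = downstream   # components that receive pushed data
        self._running = False
        self._pending = []             # stands in for the periodic timer queue

    def is_running(self):
        return self._running

    def start_pumping(self):
        self._running = True
        self._schedule()               # schedule the first pump_some() call

    def stop_pumping(self):
        self._running = False
        self.on_stop_pumping()         # callback for clean-up, as in the Tcl instproc

    def on_stop_pumping(self):
        pass                           # subclasses may clean up data/files here

    def _schedule(self):
        self._pending.append(self.pump_some)

    def run_once(self):
        # Drain one scheduled call; a real pump would fire these from a timer.
        if self._pending:
            self._pending.pop(0)()

    def pump_some(self):
        # Subclasses override this: check that the pump is running, push data
        # downstream, and schedule the next pump_some() before returning.
        if not self._running:
            return
        for component in self.downstream:
            component.push("data")
        self._schedule()
```

The key points mirrored from the documentation are that pump_some() checks is_running() before doing work and re-schedules itself, and that stop_pumping() triggers the on_stop_pumping callback.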
http://www.comp.nus.edu.sg/~cs5248/0506S1/proj2/doc/html/class_mnt_pump.html
Test Code Structure

This topic contains the following sections:
- Fixtures
- Tests
- Specifying the Start Webpage
- Specifying Testing Metadata
- Initialization and Clean-Up
- Skipping Tests
- Inject Scripts into Tested Pages
- Disable Page Caching

If you use eslint in your project, use the TestCafe plugin to avoid the 'fixture' is not defined and 'test' is not defined errors.

Fixtures #

TestCafe tests must be organized into categories called fixtures. A JavaScript, TypeScript or CoffeeScript file with TestCafe tests can contain one or more fixtures. To declare a test fixture, use the fixture function.

fixture( fixtureName ) fixture `fixtureName`

This function returns the fixture object that allows you to configure the fixture: specify the start webpage, metadata, and initialization and clean-up code for tests included in the fixture.

Tests #

To introduce a test, call the test function and pass the test code inside it.

test( testName, fn(t) )

fixture `MyFixture`; test('Test1', async t => { /* Test 1 Code */ }); test('Test2', async t => { /* Test 2 Code */ });

You can arrange test code in any manner and reference any modules or libraries. TestCafe tests are executed on the server side. You can use test actions to manipulate the tested webpage. To determine page elements' state or obtain any other data from the client side, use the selectors and client functions. To check if the page state matches the expected one, use assertions.

Test Controller #

A test controller object t exposes the test API's methods. That is why it is passed to each function that is expected to contain server-side test code (like test, beforeEach or afterEach). Use the test controller to call test actions, handle browser dialogs, use the wait function or execute assertions.
fixture `My fixture` .page ``; test('My Test', async t => { await t .setNativeDialogHandler(() => true) .click('#populate') .click('#submit-button'); const location = await t.eval(() => window.location); await t.expect(location.pathname).eql('/testcafe/example/thank-you.html'); });

The test controller also provides access to the internal context the test API requires to operate. This is why selectors and client functions need the test controller object when they are called from Node.js callbacks.

Using Test Controller Outside of Test Code #

There may be times when you need to call the test API from outside the test code. For instance, your page model can contain methods that perform common operations used in different tests (like authentication).

import { Selector } from 'testcafe'; class Page { constructor () { this.loginInput = Selector('#login'); this.passwordInput = Selector('#password'); this.signInButton = Selector('#sign-in-button'); } async login (t) { await t .typeText(this.loginInput, 'MyLogin') .typeText(this.passwordInput, 'Pa$$word') .click(this.signInButton); } } export default new Page();

In this instance, you need to access the test controller from the page model's login method. TestCafe allows you to avoid passing the test controller to the method explicitly. Instead, you can import t to the page model file.

import { Selector, t } from 'testcafe'; class Page { constructor () { this.loginInput = Selector('#login'); this.passwordInput = Selector('#password'); this.signInButton = Selector('#sign-in-button'); } async login () { await t .typeText(this.loginInput, 'MyLogin') .typeText(this.passwordInput, 'Pa$$word') .click(this.signInButton); } } export default new Page();

TestCafe implicitly resolves test context and provides the right test controller.

Setting Test Speed #

TestCafe allows you to specify the test execution speed. Tests are run at the maximum speed by default. You can use the t.setTestSpeed method to specify the speed.
t.setTestSpeed( factor )

If the speed is also specified for an individual action, the action's speed setting overrides the test speed.

Example

import { Selector } from 'testcafe'; fixture `Test Speed` .page ``; const nameInput = Selector('#developer-name'); test(`Test Speed`, async t => { await t .typeText(nameInput, 'Peter') .setTestSpeed(0.1) .typeText(nameInput, ' Parker'); });

Setting Page Load Timeout #

The page load timeout defines the time passed after the DOMContentLoaded event within which the window.load event should be raised. After the timeout passes or the window.load event is raised (whichever happens first), TestCafe starts the test. To specify the page load timeout in test code, use the t.setPageLoadTimeout method.

t.setPageLoadTimeout( duration )

You can also set the page load timeout when launching tests via the command line or API.

Example

fixture `Page load timeout` .page ``; test(`Page load timeout`, async t => { await t .setPageLoadTimeout(0) .navigateTo(''); });

Note that the DOMContentLoaded event is raised after the HTML document is loaded and parsed, while window.load is raised after all stylesheets, images and subframes are loaded. That is why window.load is fired after the DOMContentLoaded event with a certain delay.

Specifying the Start Webpage #

You can specify the web page where all tests in a fixture start using the fixture.page function.

fixture.page( url ) fixture.page `url`

Similarly, you can specify a start page for individual tests using the test.page function, which overrides fixture.page.

test.page( url ) test.page `url`

fixture `MyFixture` .page ``; test('Test1', async t => { // Starts at }); test .page `` ('Test2', async t => { // Starts at });

If the start page is not specified, it defaults to about:blank. You can use the file:// scheme or relative paths to test web pages in local directories.
fixture `MyFixture` .page ``; fixture `MyFixture` .page `../my-project/index.html`;

Specifying Testing Metadata #

TestCafe allows you to specify additional information for tests in the form of key-value metadata and use it in reports. To define metadata, use the meta method. You can call this method for a fixture and a test. The meta method allows you to specify one or several metadata entries:

Specifying one metadata entry:

fixture.meta('key1', 'value1') test.meta('key2', 'value2')

Specifying a set of metadata entries:

fixture.meta({ key1: 'value1', key2: 'value2', key3: 'value3' }) test.meta({ key4: 'value1', key5: 'value2', key6: 'value3' })

Examples

fixture `My fixture` .meta('fixtureID', 'f-0001') .meta({ author: 'John', creationDate: '05/03/2018' }); test .meta('testID', 't-0005') .meta({ severity: 'critical', testedAPIVersion: '1.0' }) ('MyTest', async t => { /* ... */});

Run Tests by Metadata #

You can run tests or fixtures whose metadata contains specific values. Use the following options to filter tests by metadata:
- the --test-meta and --fixture-meta command line options
- the testMeta and fixtureMeta parameters in the runner.filter method
- the filter.testMeta and filter.fixtureMeta configuration file properties

Using Metadata in Reports #

You can include testing metadata in reports using a custom reporter. The reporter's reportFixtureStart and reportTestDone methods can access the fixture and test metadata.

Initialization and Clean-Up #

TestCafe allows you to specify functions that are executed before a fixture or test is started and after it is finished. These functions are called hook functions or hooks.

Test Hooks #

Test hooks are executed in each test run before a test is started and after it is finished. If a test runs in several browsers, test hooks are executed in each browser. At the moment test hooks run, the tested webpage is already loaded, so you can use test actions and other test run API inside test hooks.
You can specify a hook for each test in a fixture using the beforeEach and afterEach methods in the fixture declaration.

fixture.beforeEach( fn(t) ) fixture.afterEach( fn(t) )

You can also specify hooks for an individual test using the test.before and test.after methods.

test.before( fn(t) ) test.after( fn(t) )

If test.before or test.after is specified, it overrides the corresponding fixture.beforeEach and fixture.afterEach hook, so that the latter is not executed.

The test.before, test.after, fixture.beforeEach and fixture.afterEach methods accept the following parameters:

Example

fixture `My fixture` .page `` .beforeEach( async t => { /* test initialization code */ }) .afterEach( async t => { /* test finalization code */ });

test .before( async t => { /* test initialization code */ }) ('MyTest', async t => { /* ... */ }) .after( async t => { /* test finalization code */ });

Sharing Variables Between Test Hooks and Test Code #

You can share variables between test hook functions and test code by using the test context object. Test context is available through the t.ctx property.

t.ctx

Instead of using a global variable, assign the object you want to share directly to t.ctx or create a property as in the following example:

fixture `Fixture1` .beforeEach(async t => { t.ctx.someProp = 123; }); test ('Test1', async t => { console.log(t.ctx.someProp); // > 123 }) .after(async t => { console.log(t.ctx.someProp); // > 123 });

Each test run has its own test context. t.ctx is initialized with an empty object without a prototype, so you can iterate its keys without the hasOwnProperty check.

Fixture Hooks #

Fixture hooks are executed before the first test in a fixture is started and after the last test is finished. Unlike test hooks, fixture hooks are executed between test runs and do not have access to the tested page. Use them to perform server-side operations like preparing the server that hosts the tested app.
To specify fixture hooks, use the fixture.before and fixture.after methods.

fixture.before( fn(ctx) ) fixture.after( fn(ctx) )

Example

fixture `My fixture` .page `` .before( async ctx => { /* fixture initialization code */ }) .after( async ctx => { /* fixture finalization code */ });

Sharing Variables Between Fixture Hooks and Test Code #

Hook functions passed to fixture.before and fixture.after methods take a ctx parameter that contains fixture context. You can add properties to this parameter to share the value or object with test code.

fixture `Fixture1` .before(async ctx => { ctx.someProp = 123; }) .after(async ctx => { console.log(ctx.someProp); // > 123 });

To access fixture context from tests, use the t.fixtureCtx property.

t.fixtureCtx

Test code can read from t.fixtureCtx, assign to its properties or add new ones, but it cannot overwrite the entire t.fixtureCtx object.

Example

fixture `Fixture1` .before(async ctx => { ctx.someProp = 123; }) .after(async ctx => { console.log(ctx.newProp); // > abc }); test('Test1', async t => { console.log(t.fixtureCtx.someProp); // > 123 }); test('Test2', async t => { t.fixtureCtx.newProp = 'abc'; });

Skipping Tests #

TestCafe allows you to specify that a particular test or fixture should be skipped when running tests. Use the fixture.skip and test.skip methods for this.

fixture.skip test.skip

You can also use the only method to specify that only a particular test or fixture should run while all others should be skipped.

fixture.only test.only

If several tests or fixtures are marked with only, all the marked tests and fixtures are run.
Examples

fixture.skip `Fixture1`; // All tests in this fixture are skipped test('Fixture1Test1', () => {}); test('Fixture1Test2', () => {}); fixture `Fixture2`; test('Fixture2Test1', () => {}); test.skip('Fixture2Test2', () => {}); // This test is skipped test('Fixture2Test3', () => {});

fixture.only `Fixture1`; test('Fixture1Test1', () => {}); test('Fixture1Test2', () => {}); fixture `Fixture2`; test('Fixture2Test1', () => {}); test.only('Fixture2Test2', () => {}); test('Fixture2Test3', () => {}); // Only tests in Fixture1 and the Fixture2Test2 test are run

Inject Scripts into Tested Pages #

TestCafe allows you to inject custom scripts into pages visited during the tests. You can add scripts that mock browser API or provide helper functions. Use the fixture.clientScripts and test.clientScripts methods to add scripts to pages visited during a particular test or fixture.

fixture.clientScripts( script[, script2[, ...[, scriptN]]] ) test.clientScripts( script[, script2[, ...[, scriptN]]] )

Relative paths are resolved against the test file location. You can use the page option to specify pages into which scripts should be injected. Otherwise, TestCafe injects scripts into all pages visited during the test or fixture. When you attach request hooks to both the fixture and the test, the hooks attached to the fixture run first.

Examples

fixture `My fixture` .page `` .clientScripts('assets/jquery.js');

test ('My test', async t => { /* ... */ }) .clientScripts({ module: 'async' });

test ('My test', async t => { /* ... */ }) .clientScripts({ page: /\/user\/profile\//, content: 'Geolocation.prototype.getCurrentPosition = () => new Position(0, 0);' });

To inject scripts into pages visited during all tests, use either of the following:
- the --cs (--client-scripts) command line option
- the runner.clientScripts method
- the clientScripts configuration file property

See Inject Scripts into Tested Pages for more information.
Disable Page Caching #

When navigation to a cached page occurs in role code, local and session storage content is not preserved. See Troubleshooting: Test Actions Fail After Authentication for more information. You can disable page caching to keep items in these storages after navigation. Use the fixture.disablePageCaching and test.disablePageCaching methods to disable caching during a particular fixture or test.

fixture.disablePageCaching test.disablePageCaching

Examples

fixture .disablePageCaching `My fixture` .page ``; test .disablePageCaching ('My test', async t => { /* ... */ });

To disable page caching during the entire test run, use either of the following options:
- the --disable-page-caching command line flag
- the disablePageCaching option in the runner.run method
- the disablePageCaching configuration file option
https://devexpress.github.io/testcafe/documentation/test-api/test-code-structure.html
SOA for the Business Programmer: Concepts, BPEL and SCA by Ben Margolis (Paperback, 2007)

Service-Oriented Architecture (SOA) is a way of organizing software. If your company's development projects adhere to the principles of SOA, the outcome will be an inventory of modular units called services, which allow for a quick response to change. This book tells the SOA story in a simple, straightforward manner that will help you understand not only the buzzwords and benefits, but also the technologies that underlie SOA: XML, WSDL, SOAP, XPath, BPEL, SCA, and SDO. And through it all, the authors provide business examples and illustrations, giving a practical meaning to abstract ideas. SOA for the Business Developer:
* Gives a detailed overview of Extensible Markup Language (XML), including namespaces and XML schema.
* Describes Web Services Definition Language (WSDL) and SOAP, the standard SOA technologies.
* Gives a clear tutorial on XML Path Language (XPath), a language for deriving data from transmitted messages and other sources. XPath is useful for working with a variety of other technologies.
http://www.ebay.com.au/p/SOA-for-the-Business-Programmer-Concepts-BPEL-and-SCA-by-Ben-Margolis-Paperback-2007/95240827
In this Python data analysis tutorial, we will focus on how to carry out between-subjects ANOVA in Python. As mentioned in an earlier post (Repeated measures ANOVA with Python), ANOVAs are commonly used in Psychology. We start with a brief introduction to the theory of ANOVA. If you are more interested in the four methods to carry out one-way ANOVA with Python, click here. In this post we will learn how to carry out ANOVA using SciPy, calculating it "by hand" in Python, using Statsmodels, and Pyvttbl.

Update: the Python package Pyvttbl has not been maintained for a couple of years, but there's a new package called Pingouin. As a bonus, how to use this package is added at the end of the post.

Prerequisites

In this post, you will need to install the following Python packages:
- SciPy
- NumPy
- Pandas
- Statsmodels
- Pingouin

Of course, you don't have to install all of these packages to perform the ANOVA with Python. If you only want to do the data analysis, you can choose to install either SciPy, Statsmodels, or Pingouin. However, Pandas will be used to read the example datasets and carry out some simple descriptive stats as well as visualization of the data. Installing Python packages can be done with either pip or conda, for example. Here's how to install all of the above packages:

pip install scipy numpy pandas statsmodels pingouin

Now, pip can also be used to install a specific version of a package. To install an older version, you add "==" followed by the version you want installed. In the next section, you will get a brief introduction to ANOVA in general.

Introduction to ANOVA

Before we learn how to do ANOVA in Python, we briefly discuss what ANOVA is. ANOVA is a means of comparing the ratio of systematic variance to unsystematic variance in an experimental study. Variance in the ANOVA is partitioned into systematic (between-groups) and unsystematic (within-groups) variance, and the model can be written as a regression with dummy-coded group predictors:

$y_i = b_0 + b_1 X_{1,i} + \dots + b_{j-1} X_{j-1,i} + e_i$

There is a more elegant way to parametrize the model.
In this way, the group means are represented as deviations from the grand mean by grouping their coefficients under a single term. I will not go into detail on this equation:

$y_{ij} = \mu_{grand} + \tau_j + \varepsilon_{ij}$

As for all parametric tests, the data need to be normally distributed (each group's data should be approximately normally distributed).

A Priori Tests

When conducting ANOVA in Python, it is usually best to restrict the testing to a small set of possible hypotheses. Furthermore, these tests should be motivated by theory and are known as a priori or planned comparisons. As the names imply, these tests should be planned before the data is collected.

Post-Hoc Tests (Pairwise Comparisons) in Python

Even though studies can have a strong theoretical motivation, as well as a priori hypotheses, there will be times when a pattern only emerges after the data is collected. Note that with many possible tests, the error rate for these post-hoc tests is determined by the number of tests that might have been carried out. There are a number of possible post-hoc tests that can be carried out. In this ANOVA in Python tutorial, we will use Tukey's honestly significant difference (Tukey HSD) test.

6 Steps to Carry Out ANOVA in Python

Now, before getting into details, here are 6 steps to carry out ANOVA in Python:
- Install the Python package Statsmodels (pip install statsmodels)
- Import the statsmodels API and ols: import statsmodels.api as sm and from statsmodels.formula.api import ols
- Import data using Pandas
- Set up your model: mod = ols('weight ~ group', data=data).fit()
- Carry out the ANOVA: aov_table = sm.stats.anova_lm(mod, typ=2)
- Print the results: print(aov_table)

Now, sometimes when we install packages with pip, we may notice that we don't have the latest version installed. If we want to, we can of course update pip to the latest version using pip or conda.
ANOVA using Python

In the four Python ANOVA examples in this tutorial we are going to use the dataset "PlantGrowth" that originally was available in R. However, it can be downloaded using this link: PlantGrowth. In the first three examples, we are going to use Pandas DataFrame. All three Python ANOVA examples below are using Pandas to load data from a CSV file. Note, we can also use Pandas read excel if we have our data in an Excel file (e.g., .xlsx).

Python ANOVA YouTube Video:

- If you want to learn how to work with Pandas dataframe see the post A Basic Pandas Dataframe Tutorial
- Also see the Python Pandas Groupby Tutorial for more about working with the groupby method.
- How to Perform a Two-Sample T-test with Python: 3 Different Methods or carry out the Mann-Whitney U test in Python.

ANOVA in Python using SciPy

We start this Python ANOVA tutorial with SciPy, whose stats module can take the weight values of each group (e.g., the control group and the two treatment groups) and return the F- and p-values directly.

Calculating ANOVA by Hand (i.e., pure Python ANOVA)

A one-way ANOVA in Python is quite easy to calculate so below I am going to show how to do it. First, we need to calculate the sum of squares between (SSbetween), sum of squares within (SSwithin), and sum of squares total (SSTotal).

Sum of Squares Between

We start by calculating the Sum of Squares between. Sum of Squares Between is the variability due to interaction between the groups. Sometimes known as the Sum of Squares of the Model.

$latex SSbetween = \frac{\sum(\sum k_i) ^2} {n} – \frac{T^2}{N}&s=2$

SSbetween = (sum(data.groupby('group').sum()['weight']**2)/n) \
 - (data['weight'].sum()**2)/N
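A quick aside on the SciPy route mentioned at the start of this section: stats.f_oneway takes one array of values per group and returns the F- and p-values in a single call, which also makes a handy cross-check for the sums of squares we are computing by hand. A sketch with synthetic groups (the values are made up; with the real data you would pass the weight values of ctrl, trt1, and trt2):

```python
# Sketch of SciPy's one-way ANOVA; the three arrays are synthetic
# stand-ins for the ctrl/trt1/trt2 weight values of PlantGrowth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ctrl = rng.normal(5.0, 0.5, 10)
trt1 = rng.normal(4.6, 0.5, 10)
trt2 = rng.normal(5.5, 0.5, 10)

f_val, p_val = stats.f_oneway(ctrl, trt1, trt2)
print(f"F = {f_val:.3f}, p = {p_val:.4f}")
```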
The calculation of Sum of Squares Within can be carried out according to this formula:

$latex SSwithin = \sum Y^2 – \frac{\sum (\sum a_i)^2}{n}&s=2$

sum_y_squared = sum([value**2 for value in data['weight'].values])
SSwithin = sum_y_squared - sum(data.groupby('group').sum()['weight']**2)/n

Calculation of Sum of Squares Total

Sum of Squares Total will be needed to calculate eta-squared later. This is the total variability in the data.

$latex SStotal = \sum Y^2 – \frac{T^2}{N}&s=2$

SStotal = sum_y_squared - (data['weight'].sum()**2)/N

How to Calculate Mean Square Between

Mean square between is the sum of squares between divided by the degrees of freedom between (DFbetween = k - 1, where k is the number of groups).

MSbetween = SSbetween/DFbetween

Calculation of the F-value

The F-value is the mean square between divided by the mean square within, where DFwithin = N - k:

MSwithin = SSwithin/DFwithin
F = MSbetween/MSwithin

To get the p-value we could look the F-value up in a table based on the DFwithin and DFbetween. However, there is a method in SciPy for obtaining the p-value directly.

p = stats.f.sf(F, DFbetween, DFwithin)

Finally, we are also going to calculate the effect sizes eta-squared (η²) and omega-squared (ω²). For the PlantGrowth data the results can be reported as F(2, 27) = 4.846, p = .016, η² = .264. If you want to report Omega Squared: ω² = .204. That was it, now we know how to do ANOVA in Python by calculating everything "by hand".

ANOVA in Python using Statsmodels

In this section of the Python ANOVA tutorial, we will use Statsmodels. First, we start by using the ordinary least squares (ols) method and then the anova_lm method. Also, if you are familiar with R-syntax, Statsmodels have a formula API where our model is very intuitively formulated.

Here are three simple steps for carrying out ANOVA using Statsmodels:

Time needed: 1 minute.

In the ANOVA how-to below, it is assumed that the data is in a Pandas dataframe (i.e., df).

- Import the needed Python packages: first, we import statsmodels API and ols
- Set up the ANOVA model: second, we use ols to set up our model using a formula
- Carry out the ANOVA: we can now use anova_lm to carry out the ANOVA in Python

In the ANOVA example below, we first import the API and the formula API. Second, we use ordinary least squares regression with our data.
The object obtained is a fitted model that we later use with the anova_lm method to obtain an ANOVA table. In the final part of this section, we are going to carry out pairwise comparisons using Statsmodels.

import statsmodels.api as sm
from statsmodels.formula.api import ols

mod = ols('weight ~ group', data=data).fit()
aov_table = sm.stats.anova_lm(mod, typ=2)
print(aov_table)

ANOVA Python table:

Note, no effect sizes are calculated when we use Statsmodels. To calculate eta squared we can use the sum of squares from the table:

esq_sm = aov_table['sum_sq'][0]/(aov_table['sum_sq'][0]+aov_table['sum_sq'][1])
aov_table['EtaSq'] = [esq_sm, 'NaN']
print(aov_table)

Python ANOVA: Pairwise Comparisons

It is, of course, also possible to calculate pairwise comparisons for our Python ANOVA using Statsmodels. In the next example, we are going to use the t_test_pairwise method. When conducting post-hoc tests, corrections for familywise error can be carried out using a number of methods (e.g., Bonferroni, Šidák):

pair_t = mod.t_test_pairwise('group')
pair_t.result_frame

Note, if we want to use another correction method, we add the parameter method and add "bonferroni" or "sidak", for instance (e.g., method="sidak"). If we were to carry out regression analysis, using Python, we might have to convert the categorical variables to dummy variables using Pandas get_dummies() method.

Python ANOVA using pyvttbl anova1way

In this section, we are going to learn how to carry out an ANOVA in Python using the method anova1way from the Python package pyvttbl. This package also has a DataFrame method. We have to use this method instead of Pandas DataFrame to be able to carry out the one-way ANOVA in Python. Note, Pyvttbl is old and outdated. It requires Numpy to be at most version 1.1.x or else you will run into an error ("unsupported operand type(s) for +: 'float' and 'NoneType'").
This can, of course, be solved by downgrading Numpy (see my solution using a virtual environment Step-by-step guide for solving the Pyvttbl Float and NoneType error). However, it may be better to use pingouin for carrying out Python ANOVAs (see the next section of this blog post). We get a lot more information using the anova1way method. What may be of particular interest here is that we get results from a post-hoc test (i.e., Tukey HSD). Whereas the ANOVA only lets us know that there was a significant effect of treatment, the post-hoc analysis reveals where this effect may be (between which groups).

If you have more than one dependent variable, a multivariate method may be more suitable. Learn more on how to carry out a Multivariate Analysis of Variance (MANOVA) using Python.

Python ANOVA using Pingouin (bonus)

In this section, we are going to learn how to carry out ANOVA in Python using the package pingouin. This package is, as with Statsmodels, very simple to use. If we want to carry out an ANOVA we just use the method called anova.

import pandas as pd
import pingouin as pg

data = ""
df = pd.read_csv(data, index_col=0)

aov = pg.anova(data=df, dv='weight', between='group', detailed=True)
print(aov)

As can be seen in the ANOVA table above, we get the degrees of freedom, the mean square error, F- and p-values, as well as the partial eta squared when using pingouin.

Pairwise Comparisons in Python (Tukey-HSD)

One neat thing with Pingouin is that we can also carry out post-hoc tests. We are now going to carry out the Tukey-HSD test as a follow up on our ANOVA. This is also very simple: we use the pairwise_tukey method to carry out the pairwise comparisons:

pt = pg.pairwise_tukey(dv='weight', between='group', data=df)
print(pt)

Note, if we want another type of effect size we can add the argument effsize and choose between six different effect sizes (or none): cohen, hedges, glass, eta-square, odds-ratio, and AUC.
In the last code example we change the default effect size (hedges) to cohen:

cpt = pg.pairwise_tukey(dv='weight', between='group', effsize='cohen', data=df)
print(cpt)

Conclusion: Python ANOVA.

Comments

Heck of a job there, it absolutely helps me out. Thank you for your effort, very clearly set. However, I am hitting a problem using ANOVA1Way, I wonder if you have any suggestions. When I make a copy of PlantGrowth.csv and type in new numbers for "weight" and then run your code, I get:

Error: new-line character seen in unquoted field – do you need to open the file in universal-newline mode?

Thanks and Regards

Hey Umit, I cannot really answer your question since the error does not happen on my computer. I did find this:. Maybe you could test that and see if it works. If you solve your problem, or have already solved it, please let me know how. Thanks and regards, Erik

Hi Erik, thanks for the great post. I wanted to offer an update to part 2 (python based ANOVA) for when the groups have different sample sizes. First, rewrite the calculation for n:

n = data.groupby(var).size().values

Then the calculation for SSbetween and SSwithin needs to be modified:

SSbetween = (sum(data.groupby(var).sum()['LogSalePrice'].values**2/n)) - (data['LogSalePrice'].sum()**2)/N
SSwithin = sum_y_squared - sum(data.groupby(var).sum()['LogSalePrice'].values**2/n)

It just takes the division by n (element-wise) inside the outer sum in both cases. I tested this by comparing with the output from f_oneway and it seems to work. It should also generalize well to the case where n is the same for all groups. Thanks again for the write-up!

Hi Joel, thanks for your comment and thanks for the update! I'll add this to the post (with a reference to your comment, of course). Erik

Thanks for your post… It was super useful for me

Hi Erik! Thank you for the post. I've been working recently on a Python stats package that implements several ANOVA-related functions and post-hoc tests.
Just thought I'd mention it in case it turns out useful to you or others.

All the best, Raphael

Hey Raphael, This looks really interesting! Will install this later today and play around with it. I might just add it to one of my posts listing useful Python packages. We'll see! Maybe I'll also update this post (or write a new one). I'll send you an email, if I do. Thanks for letting us know about the package,

Best, Erik
https://www.marsja.se/four-ways-to-conduct-one-way-anovas-using-python/
This appendix describes the keywords used in XD/Replay scripts. The XD/Replay script keywords have been divided into the following subsections according to their functions:

in - specify the context of subsequent actions in a script
ApplicationShell - the top level shell of the application
shell_widget - the name of a shell widget other than the main application shell

XD/Replay scripts consist of actions on widgets. These actions have to take place within the context of the shell (i.e. dialog) which contains that widget. If the shell is not realized, the script will fail at that point. The in command cannot be nested. Once you have come out of a shell (to go into another shell), you must go back in to that shell before attempting any further actions within that context.

push - press and release a mouse button
doubleclick - doubleclick mouse button
widget - the name of a widget
modifier - a keyboard modifier
button[1-5] - the number of the mouse button (default is button 1 with no modifiers)

push simulates a single click (a mouse button press/release sequence) using a mouse button on the named widget. The with keyword allows you to specify a particular mouse button. If this is not used, button1 (the left mouse button) is used. A keyboard modifier (such as the Shift key) can be used to extend the permutations of mouse button events. The permitted modifiers are alt, ctrl and shift. doubleclick simulates a doubleclick with the left mouse button. This can be used in any widget but is especially useful for selecting from a text widget (see Text Entry). In some widgets, where the user clicks with the mouse is unimportant. For example, clicking on a button widget in any part of it will activate that button. However, for other widgets, the position is significant; for example pushing on a scale widget will have different effects depending upon where the push was made.
The following table lists those widgets which are position and non-position dependent: Refer to Button Actions (Position Dependent Controls) for details on recording and replaying the other widgets in the position-dependent list.

cascade - post a pulldown menu
pullright - post a pullright menu from a pulldown menu
cascadebutton - the name of a cascadebutton
widget - the name of a widget within the cascade button's pulldown menu

cascade is a shorthand way of describing menu operations. You can also post a menu by pushing on the associated cascade button or using a keyboard accelerator. Similarly, menu options can be selected using accelerators or keyboard mnemonics. cascade posts a pulldown menu to allow a selection to be made from it. The selection may be a widget (i.e. an option in that menu) or a cascadebutton which displays a pullright menu. XD/Replay only supports one level of pullright menu to conform to the Motif style guide. You can however use the push command in your scripts to select pullright menus in succeeding levels.

option opmenu-widget::member_widget

option selects an option from an option menu. The next example only selects an option if the option menu itself is sensitive to user input: If you want to check the current setting of the optionmenu (i.e. what was last selected), you simply examine the option menu menuHistory resource, for example: An alternative method of selecting a member of an option menu is to push the option button and then push the appropriate member widget. However, we recommend use of the option syntax as it more closely mimics user actions.
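Putting the menu commands together, a recorded menu interaction might look like the following sketch. The shell and widget names here (main_shell, file_cascade, save_button, speed_option, fast) are hypothetical, invented purely for illustration:

```
in main_shell
cascade file_cascade::save_button
option speed_option::fast
```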
Users and test scripts alike have to work with the window manager when entering text. Where explicit focus is in place (i.e. you have to click in a window to get the focus), you will have to program this into the test script. A push or a doubleclick in a text field has the side effect of taking the focus. This is the only place in XD/Replay that focus is handled directly. Data entry into text fields often overrides what is already there and will be preceded by a doubleclick or a multiclick. type - enter text from the keyboard key - enter a keysym from the keyboard doubleclick - select current word multiclick - select current line keysym - any X keysym (see X11/keysymdef.h for list) without the XK_ prefix textwidget - the name of a text widget text - a text string Most text widgets in an application are used for single line data entry (for example the selection fields in a File Selection Box). XD/Replay allows testers to replace the default content of the field with a known value and then check the consequences. type enters text into a text widget. doubleclick and multiclick program word and line selection respectively. multiclick is most commonly used in test scripts, when you want to replace the contents of the text field, regardless of how many words there are on the line. There is a limit of 512 characters to the length of a line which can be handled by XD/Replay. In you want to enter a text string whose length exceeds this limit, split the text and type in each section. XD/Replay works around a problem in some versions of Motif where triple-click is not properly handled in XmTextField widgets. In these circumstances, if your script contains multiclick, it will be converted to doubleclick. push - press and release a mouse button drag - combine a press and release within the same widget widget - a widget name name, name1, name2 - application/widget dependent description In some widgets (e.g. drawing areas) where you click is important. 
In the case of drawing areas, a position within the drawing area is needed. For lists, you need an indication of which item has been selected. The version of push listed above is intended for such position-dependent widgets. In these widgets, you will often need to do more than just click. You may need to press down at one point and release at another. An example is the setting up of attachments between widgets in the X-Designer form layout editor. This may involve a server grab, so it is described as a single drag operation where the first part describes where you pressed and the second where you released the button. This mechanism can be used for single user-defined widget instances, such as the drawing areas within your application and also for entire widget classes (as we have done for XmList, XmScale and XmScrollBar and various 3rd party widget sets). The first example shows how the Motif DrawingArea widget has been implemented for X-Designer testing: In the next example we show how attachments are made between the frame1 and button_box widgets in the X-Designer form layout editor: You can try out these effects in X-Designer. Information on how to handle your own position-dependent widgets, or those from a 3rd party supplier, are given in Extending the XD/Replay Widget Set. printres - print the value of a widget resource printres widget->resource widget - the name of a widget resource - the name of the widget resource printres prints the current value of a specified resource within a selected widget. This is especially useful in test scripts where a known resource value is expected. The name of the resource must be specified without any "XmN" prefix, e.g. "labelString". Your scripts are more likely to include resource evaluation within conditional expressions. 
tree - produce recursive listing of current widget hierarchy dump - show resources assigned to widget snapshot - produce recursive listing of current widget hierarchy and the resources assigned to each widget widget - the name of a widget The tree, dump and snapshot commands allow you to analyze the structure of the widgets within an application interface and the values of resources assigned to those widgets. The results from the analysis are displayed on standard error. tree gives a recursive listing of widget names in the widget hierarchy from the nominated widget. dump displays the resource settings of the nominated widget. snapshot displays the resource settings of the nominated widget and all other widgets in the widget hierarchy from the nominated widget. The following command displays the resources allocated to the button1 widget: Part of the example output is shown below: The next command displays the widget hierarchy from the form1 widget: Part of the example output is shown below: XD/Replay assigns a unique name to widgets which share a common widget name within a shell (e.g., HorScrollBar#1, HorScrollBar#2, Apply#3, Apply#5, etc.). Where the replay name is different from the actual widget name, it is given within the brackets. delay - pause replay of user actions message - print message sequence - label part of a script shell - execute shell command duration - time in seconds text - a text string widget - the name of a widget status - either 1 or 0 delay allows you to insert a pause in a script. This is useful when you wish to visually inspect the application at particular points in its execution. The next action in the script will continue after the pause. message displays a message on standard error. This allows you to label different parts of the script and communicate expected results and errors to testers. The message text does not have to be enclosed in quotes. sequence is used to label different sections of a script. 
Then if an error occurs, you can skip to the next labelled sequence and continue from that point. To use sequence, you must invoke xdreplay with the -skip-on-error flag. By default, xdreplay is run with the -user-on-error flag which will stop the test and stay in the application when an error occurs. The remaining error flag, -exit-on-error causes will terminate the application when an error occurs. shell executes a shell command from a script. The script continues when the shell command has terminated. This facility allows you to enrich your scripts to do far more than simply re-running user actions. setenv is used in conjunction with the shell command to pass information to the shell through environment variables. setenv has two arguments. The first is the name of the variable; the second is an expression that can combine widget resource values and one of the following convenience functions: breakpoint is used, in conjunction with a debugger, to set a breakpoint in a script when a nominated widget is activated. You can then examine the internals of individual widgets. A script which contains the breakpoint keyword should be invoked as follows: xdreplay -f script debugger app where script is the name of the script, debugger is the name of your debugger and app is the name of the application to be exercised by the script. The debugger is run by XD/Replay. At the breakpoint keyword, the application will stop as if you set the breakpoint directly. This will allow you to inspect widget internals even if your application has been optimized. exit terminates the script with the specified exit status. 
To delay for 5 seconds after pushing a widget:
To take a screen dump of a shell without window manager decorations:
To take a screen dump with window manager decorations:
To take a screen dump of a pulldown menu, when you only know the name of its cascade button:
To do the same with an OptionMenu:
To note the background color of the cascade button's parent:

if
else
elif
endif
expression - an expression which evaluates to true or false
actions - one or more user actions

The if statement allows the control flow through a script to be sensitive to conditions inside the application as it is being run. For each if there must be a matching endif. If necessary the statement can include optional alternatives (elif) and a default catch-all else condition.

IsPseudoColor
IsDirectColor
IsTrueColor
IsStaticColor
IsStaticGrey
IsGreyScale
expression - one of the keywords listed above

You cannot guarantee that a script recorded on one display will necessarily work on another of a different type. Certain applications make heavy use of color and may display a color restriction message to a user if he is running the application on a display with a limited color map. Your scripts must accommodate such situations.

IsVisible
IsManaged
IsRealized
IsHere
expression - one of the keywords listed above

Where parts of a dialog are selectively displayed, you can check which parts are managed and realized using the IsManaged and IsRealized expressions. IsVisible is intended for small (VGA) displays where the whole of a dialog may not be visible on the screen. This is important as the Motif TAB navigation traversal model ignores controls which are off screen. IsHere simply checks whether the widget exists in the current shell.
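Combining the widget-status expressions with the if statement, a guarded action might look like the sketch below. The widget names are hypothetical and the exact expression syntax is an assumption pieced together from the keyword summaries above, not a confirmed form:

```
if IsHere(save_dialog)
    push save_button
else
    message save dialog was never created
endif
```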
import - load a module of additional commands
user - invoke a command from a loaded module
module - the name of the module
command - the name of the command
text - parameters passed to the command

The command set of XD/Replay is intended for replaying user actions and for checking the state of an application with respect to its widget hierarchy and its resource settings. There is nothing to stop you adding your own commands to meet your own needs. For example: import allows you to load a module of your own commands into a script. Once the module has been loaded the commands in it can be invoked using the user command. You can import as many modules as you wish. The shell and setenv interface is the preferred route if the actions you need to perform do not involve extensive access to the widget hierarchy, or inspection of the internals of your program. In the latter case, see Adding Your Own XD/Replay Commands to see how to add your own commands to XD/Replay.

In XD/Replay, the widget name is what you use to reference a widget. One of the main tasks for any widget-based testing tool is identifying the right widget. The naming convention must be unambiguous, without being over-complicated. Here are the rules used by XD/Replay: The tree command described above outputs a recursive listing of the widget hierarchy. The listing contains the actual widget name, and in parenthesis, the name you should use for XD/Replay, if it is different from the actual name.
http://docs.oracle.com/cd/E19422-01/819-3700/ReplayKeywords.html
Goals

- Practice with binary numbers and IEC prefixes.
- Learn how to compile and run a C program on the EECS instructional computers.
- Examine different types of control flow in C.
- Look at the internal representation of numbers.

Setup

Copy the contents of ~cs61c/labs/su14/02 to your home directory. E.g.

$ cp -r ~cs61c/labs/su14/02 ~/lab02

Exercises

Exercise 1: Practice with Numbers

This exercise can be done WITHOUT a lab account!

Part 1: Cisco Binary!

Part 2: IEC Prefixes -- Practice converting between powers of 2 and IEC prefixes (in both directions!) until you can do so consistently and relatively quickly. This will help you a lot later on in the course!

Check-off

- Score over 18,000 points on the binary game and show your TA (pause the game or take a screenshot)!
- Your TA will give you and your partner a random power of two and a random IEC prefix (one to each of you). Convert it correctly in under 5 seconds.

Exercise 2: Simple C program

The following C program, also in output0.c, is supposed to output two zeros but currently doesn't work! Make changes to the two lines specified to produce the desired behavior. Don't change anything else in the program. The following references may help: ASCII Table and printf.

#include <stdio.h>

int main(void) {
    int n;
    n = 0; /* fix 1: only change this line */
    printf("fix 1: %c\n", n);
    n = 0;
    printf("fix 2: %c\n", n); /* fix 2: only change this line */
    return 0;
}

To verify your solution, compile it and run the resulting executable:

$ gcc -o output0 output0.c
$ ./output0
fix 1: 0
fix 2: 0

Note: In general the -o "NAME" tag specifies the name of your executable. Without it (i.e. gcc output0.c), the executable defaults to a.out.

Check-off

- Show your code to your TA and explain the changes you made to output0.c.

Exercise 3: C control flow

Berkeley is well-known for its academics as well as its eccentrics! Look at the code contained in eccentric.c. In it are four different examples of basic C control flow.
Compile and run the program to see what it does. Modifying ONLY the initialization values of variables, make the program produce the following output:

$ gcc -o eccentric eccentric.c
$ ./eccentric
Berkeley eccentrics:
====================
Happy
Happy
Happy
Yoshua
Go BEARS!

Check-off

- Show your modified code to your TA and explain the changes you made.
- If you are allowed to replace instances of the control variables (i, j, k, m) with each other in the code, what is the minimum # of variables you could use to produce the same output above?

Exercise 4: The Biggest Integer

In class we discussed number representation. In particular, we discussed unsigned integers and two's complement, the almost universal standard for representing signed integers. Compile and run biggestInt.c, then use the provided information to answer the following questions:

- Based on the value of the most significant bit (MSB) of an unsigned int, how many bits does the C data type unsigned int have on your current machine?
- Based on the value of the largest positive signed long, how many bits does the C data type long have on your current machine?
- Based on the value of the most negative signed int, do the unsigned int and signed int have the same number of bits on your current machine?
- Two's complement number representations have one more negative number than they do positive numbers (i.e. the most negative number does not have a positive counter-part). The final piece of information printed is the signed value of what you get when you try to negate the most negative value. Why does this happen?

Check-off

- Show the output of biggestInt.c to your TA (results will vary between different machines).
- Give your answers AND your reasoning to the above four questions.
https://www-inst.eecs.berkeley.edu/~cs61c/su14/labs/02/
This is the mail archive of the cygwin mailing list for the Cygwin project.

Alright, I'm aware of the "check for invalid memory region and throw exception" issue present when debugging pthread applications under gdb and that the actual segfault is innocuous. However, the following solutions:

1. "handle SIGSEGV nostop" "handle SIGTRAP nostop"
2. (hit continue on every SIGSEGV raised)

are unacceptable to me. I have functions which initialize mutexes for 1000s of objects at load time, e.g. parse, alloc struct, init mutex within struct. For one, it becomes fruitless to try and debug a real segfault issue when using gdb and pthreads under cygwin. The only option when working with mass mutexes is to disable stopping on SIGSEGV within gdb - rendering useless the debugging of an actual real segfault situation. This is the best I could come up with to get around it, and frankly it's a ridiculous hack, that's entirely non-portable, possibly even with future versions of cygwin libraries - but it's necessary to keep my sanity when debugging pthreads based apps under cygwin:

#ifdef __CYGWIN__
# define PT_m_init(x, y) \
  { \
    *x = malloc(sizeof(struct __pthread_mutex_t)); \
    (**x).__dummy = 56; \
  }
#else
# define PT_m_init(x, y) \
  pthread_mutex_init((x), (y))
#endif

Someone throw me a bone here, please.

-cl
http://sourceware.org/ml/cygwin/2006-05/msg00139.html
Whenever creating new components, we need to wrap each and every element inside a parent element, usually a div. But when our app gets large, there are too many components and too many divs, and this can cause a "div soup". It's not considered good practice from a performance perspective, because React will render these divs. We might not see the problem, but behind the scenes there's a big issue that you might want to fix, and it can affect your site's performance.

Then you might say: in what parent element should we wrap all the child elements? And that is a valid question. So, now let me introduce the solution to you. There are two solutions for this problem; the 1st one is just for an explanation of how it works, and the 2nd one is the actual solution.

#1 Custom Wrapper Component – What we can do is create a simple JavaScript component, just to wrap all of the child elements. How does it sound? Let me show you in practice. We'll create a Helpers folder inside our components folder and there we'll create a file called Wrapper.jsx. Add this code inside the Wrapper component (the export is needed so that other files can import it):

const Wrapper = (props) => {
  return props.children;
};

export default Wrapper;

Then we can import and use it, for example in a Navbar component:

import React from "react";
import Wrapper from "../Helpers/Wrapper";

const Navbar = () => {
  return (
    <Wrapper>
      <div></div>
      <div></div>
      <div></div>
    </Wrapper>
  );
};

Also, check out React Styled Components for Beginners.

#2 React Fragments – Sure, you might have heard of React Fragments already, or you might've used them. But there's something that you need to know about them. There are two ways that you can use React Fragments:

1. This will always work and it also supports keys

const Navbar = () => {
  return (
    <React.Fragment>
      <div></div>
      <div></div>
      <div></div>
    </React.Fragment>
  );
};

With keys,

function Glossary(props) {
  return (
    <dl>
      {props.items.map(item => (
        // Without the `key`, React will fire a key warning
        <React.Fragment key={item.id}>
          <dt>{item.term}</dt>
          <dd>{item.description}</dd>
        </React.Fragment>
      ))}
    </dl>
  );
}

2.
This one needs to be supported by our project workflow, and it does not support keys:

const Navbar = () => {
  return (
    <>
      <div></div>
      <div></div>
      <div></div>
    </>
  );
};

The benefit of using React Fragments is that they allow us to write cleaner JSX code, help us avoid rendering unnecessary HTML elements and, most importantly, prevent the div soup. You can learn more about React Fragments in the official React documentation. Do you use React Fragments in your projects? Do let us know in the comments below. Also, share this post if it was helpful.
> I am trying to code an assignment that calculates the
> payments for a mortgage of a loan amount using 3
> separate rates and term years. I have it working but
> it wants to calculate each loan 3 times with all
> three rates and with the years so instead of having
> the printout with 7 years, 15 years and 30 years I
> have each 3 times with the different interest rate.
> Any help is appreciated.
>
> Thanks
>
> Here is the code I have:

import java.io.*; // java input output package
import java.text.DecimalFormat;

class PaymentArray2 {
    public static void main(String[] arguments) {
        double amount = 200000;
        int[] term = {7, 15, 30};
        double[] rate = {.0535, .055, .0575};
        DecimalFormat twoDigits = new DecimalFormat("$000.00");
        System.out.println("With a loan amount of " + twoDigits.format(amount));
        for (int i = 0; i < term.length; i++) {
            for (int j = 0; j < rate.length; j++) {
                System.out.print("for " + term[i] + " years");
                System.out.print("\tat a rate of " + rate[j]);
                double payment = (amount * (rate[j] / 12))
                        / (1 - (Math.pow(1 / (1 + (rate[j] / 12)), (term[i] * 12))));
                System.out.println("\tThe monthly payment will be " + twoDigits.format(payment));
            }
        }
    }
}
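The nested loops above pair every term with every rate, which is why each term prints three times. If the intent is to pair the first term with the first rate, the second with the second, and so on, a single loop over matching indices fixes it. Here is a sketch of that approach (the class and method names are illustrative, and it assumes the two arrays are kept parallel as in the original post):

```java
import java.text.DecimalFormat;

class PaymentArray2Fixed {
    // Standard amortization formula: P * (r/12) / (1 - (1 + r/12)^(-n))
    static double monthlyPayment(double amount, double annualRate, int years) {
        double monthlyRate = annualRate / 12;
        int months = years * 12;
        return (amount * monthlyRate) / (1 - Math.pow(1 + monthlyRate, -months));
    }

    public static void main(String[] args) {
        double amount = 200000;
        int[] term = {7, 15, 30};
        double[] rate = {.0535, .055, .0575};
        DecimalFormat money = new DecimalFormat("$#,##0.00");

        System.out.println("With a loan amount of " + money.format(amount));
        // One loop, matching each term with its corresponding rate
        for (int i = 0; i < term.length; i++) {
            double payment = monthlyPayment(amount, rate[i], term[i]);
            System.out.println("for " + term[i] + " years at a rate of " + rate[i]
                    + "\tthe monthly payment will be " + money.format(payment));
        }
    }
}
```

This prints exactly three lines, one per term/rate pair. Pulling the formula into a small helper method also makes it easy to test on its own.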
Corporate finance deals with the funding and investment decisions a business makes. This walkthrough will explore each of these business decisions in greater depth.

Corporation
Like the LLC, the corporate structure distinguishes the business entity from its owner and can reduce liability. However, a corporation is considered more complicated to run: you must file articles of incorporation, pay filing fees and follow any other specific state/national requirements. (Find out how becoming a corporation can protect and further your finances. See Should You Incorporate Your Business?) There are two types of corporations: C corporations (C corps) and S corporations (S corps). To qualify as an S corporation and avoid double taxation, a company must:
o Be a domestic corporation
o Have only allowable shareholders, including individuals, certain trusts and estates
o Not include partnerships, corporations or non-resident alien shareholders
o Have no more than 100 shareholders
o Have one class of stock

Partnership
In a general partnership, all partners are personally liable for business debts, any partner can be held totally responsible for the business and any partner can make decisions that affect the whole business. In a limited partnership, ...

Positive and negative trends in this ratio are, for the most part, directly attributable to management decisions. To be profitable, companies must not only earn revenues, but also control costs. If costs are too high, profit margins will be too low, making it difficult for a company to succeed against its competitors. In the case of a public company, if costs are too high, the company may find that its share price is depressed and that it is difficult to attract investors. When examining whether costs are reasonable or unreasonable, it's important to consider industry standards. Many firms examine their costs during the drafting of their annual budgets.

The size of a market is always in flux, but the rate of change depends on whether the market is growing or mature. Market share increases and decreases can be a sign of the relative competitiveness of the company's products or services.
As the total market for a product or service grows, a company that is maintaining its market share is growing revenues at the same rate as the total market. A company that is growing its market share will be growing its revenues faster than its competitors. Technology companies often operate in a growth market, while consumer goods companies generally operate in a mature market. New companies that are starting from scratch can experience fast gains in market share. Once a company achieves a large market share, however, it will have a more difficult time growing its sales because there aren't as many potential customers available. 1. Stockholders versus Managers If the manager owns less than 100% of the firm's common stock, a potential agency problem between mangers and stockholders exists. Managers may make decisions that conflict with the best interests of the shareholders. For example, managers may grow their firms a creditors' main concern. Stockholders, however, have control of such decisions through the managers. Since stockholders will make decisions based on their best interests, Shareholders' Best Interests There are four primary mechanisms for motivating: o Performance shares, where managers will receive a certain number shares based on the company's performance o can exert influence on mangers and, as a result, the firm's operations. 3. Threat of Firing If stockholders are unhappy with current management, they can encourage the existing board of directors to change the existing management, or stockholders may. banks can lend money at a higher interest rate than they have to pay for funds and operating costs, they make money. Banks also serve often under-appreciated roles as payment agents within a country and between nations. Not only do banks issues provide. Investment Banks The stock market crash of 1929 and ensuing Great Depression caused the United States government to increase financial market regulation. 
The Glass-Steagall Act of 1933 resulted in the separation of investment banking from commercial banking. While investment banks may be called "banks," their operations are far different than depositgathering commercial banks. An investment bank is a financial intermediary that performs a variety of services for businesses and some governments. These services include underwriting debt and equity offerings, acting as an intermediary between an issuer of securities and the investing public, making markets, facilitating mergers and other corporate reorganizations, and acting as a broker for institutional clients. They may also provide research and financial advisory services to companies. As a general rule, investment banks focus on initial public offerings (IPOs) and large public and private share offerings. Traditionally, investment banks do not deal with the general public. However, some of the big names in investment banking, such as JP Morgan Chase, Bank of America and Citigroup, also operate commercial banks. Other past and present investment banks you may have heard of include Morgan Stanley, Goldman Sachs, Lehman Brothers and First Boston.. Insurance Companies Insurance companies pool risk by collecting premiums from a large group of people who want to protect themselves and/or their loved ones against a particular loss, such as a fire, car accident, illness, lawsuit, disability or death. Insurance helps individuals and companies manage risk and preserve wealth. By insuring a large number of people, insurance companies can operate profitably and at the same time pay for claims that may arise. Insurance companies use statistical analysis to project what their actual losses will be within a given class. They know that not all insured individuals will suffer losses at the same time or at all. Brokerages A brokerage acts as an intermediary between buyers and sellers to facilitate securities transactions. 
Brokerage companies are compensated via commission after the transaction has been successfully completed. For example, when a trade order for a stock is carried out, an individual often pays a transaction fee for the brokerage company's efforts to execute the trade. A brokerage can be either full service or discount. A full service brokerage provides investment advice, portfolio management and trade execution. In exchange for this high level of service, customers pay significant commissions on each trade. Discount brokers allow investors to perform their own investment research and make their own decisions. The brokerage still executes the investor's trades, but since it doesn't provide the other services of a full-service brokerage, its trade commissions are much smaller. Investment Companies. There are three fundamental types of investment companies: unit investment trusts (UITs), face amount certificate companies and managed investment companies. All three types have the following things in common: An undivided interest in the fund proportional to the number of shares held Diversification in a large number of securities Professional management Specific investment objectives Let's take a closer look at each type of investment company. Unit the investment company sells.. Nonbank Financial Institutions The following institutions are not technically banks but provide some of the same services as banks. emergedterm.).)dollars, CDs. difference . There is no central marketplace for currency exchange; trade is conducted over the counter. The forex market is open 24 hours a day, five days a week and currencies are traded worldwide among the major financial centers of London, New York, Tokyo, Zrich, Frankfurt, Hong Kong, Singapore, Paris and Sydney. Until recently, forex trading in the currency market had largely. (For further reading, see The Foreign Exchange Interbank Market.) Primary Markets vs.. (For more on the primary market, see our IPO Basics Tutorial.). 
(To learn more about the primary and secondary market, read Markets Demystified.)

Financial statements rest on a few basic assumptions:
1. The company will continue to operate (going-concern assumption).
2. Revenues are reported as they are earned within the specified accounting period (revenue recognition).

Among the entries you'll find on the balance sheet:
o Machinery and equipment - This category represents the total machinery, equipment and furniture used in the company's operations. These assets are reported at their historical cost less accumulated depreciation.
o Buildings or Plants - These are buildings that the company uses for its operations. These assets are depreciated and are reported at historical cost less accumulated depreciation.

An income statement can be presented in either a multi-step or a single-step format:

Multi-Step Format: Net Sales; Cost of Sales; Gross Income*; Selling, General and Administrative Expenses (SG&A); Operating Income*; Other Income & Expenses; Pretax Income*; Taxes; Net Income

Single-Step Format: Net Sales; Materials and Production; Marketing and Administrative; Research and Development (R&D) Expenses; Other Income & Expenses; Pretax Income; Taxes; Net Income (after-tax)

*Subtotals that appear only in the multi-step format.

Sample Income Statement
Now let's take a look at a sample income statement for company XYZ for Fiscal Years (FY) ending 2008 and 2009. Expenses are in parentheses. (Figures USD)

                                        2008         2009
Net Sales                          1,500,000    2,000,000
Cost of Sales                       (350,000)    (375,000)
Gross Income                       1,150,000    1,625,000
Operating Expenses (SG&A)           (235,000)    (260,000)
Operating Income                     915,000    1,365,000
Other Income (Expense)                40,000       60,000
Extraordinary Gain (Loss)                  -      (15,000)
Interest Expense                     (50,000)     (50,000)
Net Profit Before Taxes (Pretax)     905,000    1,360,000
Taxes                               (300,000)    (475,000)
Net Income                           605,000      885,000

Unusual or Infrequent Items
Included in this category are items that are either unusual or infrequent in nature, but they cannot be both.
Examples of unusual or infrequent items: Gains (or losses) as a result of the disposition of a company's business segment including: o Plant shutdown costs o Lease-breaking fees o Employee-separation costs Gains (or losses) as a result of the disposition of a company's assets or investments (including investments in subsidiary segments) including: o Plant shut-down costs o: 1. As a result of a change in an accounting principle. 2. successfulefforts (+) 1. Revenue from sale of goods and services 2. Interest (from debt instruments of other entities) 3. Dividends (from equities of other entities) Cash outflow (-) 1. Payments to suppliers 2. Payments to employees 3. Payments to government 4. Payments to lenders 5. Payments for other expenses 2. Cash Flow from Investing Activities (CFI) CFI is cash flow that arises from investment activities such as the acquisition or disposition of current and fixed assets. This includes: Cash inflow (+) 1. Sale of property, plant and equipment 2. Sale of debt or equity securities (other entities) 3. Collection of principal on loans to other entities Cash outflow (-) 1. Purchase of property, plant and equipment 2. Purchase of debt or equity securities (other entities) 3. (+) 1. Sale of equity securities 2. Issuance of debt securities Cash outflow (-) 1. Dividends to shareholders 2. Redemption of long-term debt 3. Redemption of capital stock Reporting Non-Cash Investing and Financing Transactions Information for the preparation of the statement of cash flow is derived from three sources: 1. Comparative balance sheets 2. Current income statements 3... Straight-line depreciation produces a constant depreciation expense. At the end of the assets. 
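Straight-line depreciation, along with the accelerated methods this section goes on to describe, can be sketched in a few lines of code. This is a hypothetical illustration; the $10,000 machine, $1,000 salvage value and 5-year life are made-up figures, and the function names are not from the text:

```python
def straight_line(cost, salvage, life_years):
    """Constant annual expense: (cost - salvage) / useful life."""
    return [(cost - salvage) / life_years] * life_years

def sum_of_years_digits(cost, salvage, life_years):
    """Accelerated: larger fractions of (cost - salvage) in the early years."""
    syd = life_years * (life_years + 1) // 2  # e.g. 5 -> 1+2+3+4+5 = 15
    return [(life_years - year) * (cost - salvage) / syd
            for year in range(life_years)]

def double_declining_balance(cost, salvage, life_years):
    """Accelerated: apply twice the straight-line rate to the remaining
    book value, never depreciating below the salvage value."""
    rate = 2 / life_years
    book, schedule = cost, []
    for _ in range(life_years):
        expense = min(book * rate, book - salvage)
        schedule.append(expense)
        book -= expense
    return schedule

# A $10,000 machine, $1,000 salvage value, 5-year life:
print(straight_line(10000, 1000, 5))        # five equal charges of 1800.0
print(sum_of_years_digits(10000, 1000, 5))  # declining: 3000, 2400, 1800, 1200, 600
print(double_declining_balance(10000, 1000, 5))
```

All three schedules allocate the same $9,000 depreciable base; they differ only in timing, which is why accelerated methods lower reported income early in an asset's life.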
Units-of-Production Depreciation
Depreciation Expense = (Total Acquisition Cost - Salvage Value) / Estimated Total Units

Hours-of-Service Depreciation
This is the same concept as units-of-production depreciation, except that the depreciation expense is a function of the total hours of service used during an accounting period.

Accelerated Depreciation
An asset may lose value faster in its early years, for example if it becomes obsolete (computers, for example). The two most common accelerated-depreciation methods are the sum-of-years'-digits (SYD) method and the double-declining-balance (DDB) method.

Depletion
The carrying costs of natural resources are allocated to an accounting period by means of the units-of-production method of accounting.

Cash Flow And Relationships Between Financial Statements - The Relationship Between Financial Statements
Each statement reveals a different aspect of a company's financial condition; together, they provide a more complete picture. Stockholders and potential creditors analyze a company's financial statements and calculate a number of financial ratios with the data they contain to identify the company's strengths and weaknesses.

Cash Flow And Relationships Between Financial Statements - Free Cash Flow
Free cash flow can be calculated as:

Cash Flow From Operations (Operating Cash) - Capital Expenditure = Free Cash Flow

To do it another way, start from the income statement and balance sheet:

Net Income + Depreciation/Amortization - Change in Working Capital - Capital Expenditure = Free Cash Flow

It might seem odd to add back depreciation/amortization since it accounts for capital spending. The reasoning behind the adjustment is that depreciation is a non-cash expense: it reduces net income without reducing cash.

What Does Free Cash Flow Indicate?
Free cash flow measures the cash a company generates after the investment needed to maintain or expand its asset base.
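The two free cash flow calculations can be expressed as two small functions. This is a minimal sketch; the dollar figures passed in are made up for illustration:

```python
def fcf_from_operations(operating_cash_flow, capital_expenditure):
    """Free Cash Flow = Cash Flow From Operations - Capital Expenditure."""
    return operating_cash_flow - capital_expenditure

def fcf_from_net_income(net_income, depreciation_amortization,
                        change_in_working_capital, capital_expenditure):
    """Free Cash Flow = Net Income + D&A - Change in Working Capital - CapEx.
    Depreciation is added back because it is a non-cash expense."""
    return (net_income + depreciation_amortization
            - change_in_working_capital - capital_expenditure)

print(fcf_from_operations(500_000, 120_000))                    # 380000
print(fcf_from_net_income(300_000, 150_000, -70_000, 120_000))  # 400000
```

Note that a negative change in working capital (working capital shrinking) releases cash, which is why it increases free cash flow in the second calculation.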
Chapter 3

The future value of an investment after one year is calculated by multiplying the principal amount of $10,000 by the interest rate of 4.5% and then adding the interest gained to the principal amount:

Future value of investment at end of first year:
= ($10,000 x 0.045) + $10,000 = $10,450

We can see that the exponent is equal to the number of years for which the money is earning interest in an investment. So the general equation for the future value of an investment is:

Pn = P0(1 + r)^n

where:
Pn is the future value of P0
P0 is the original amount invested
r is the rate of interest per compounding period
n is the number of compounding periods (years, months, etc.)

Note in the examples below that when you increase the frequency of compounding, you also increase the future value of your investment.

P0 = $10,000; n = 10 years; r = 9%

Example 1 - If interest is compounded annually, the future value (Pn) is $23,674.
Pn = $10,000(1 + .09)^10 = $23,674

Example 2 - If interest is compounded monthly, the future value (Pn) is $24,514.
Pn = $10,000(1 + .09/12)^120 = $24,514

Walking backwards from a future value to today's value is called discounting.

3.2 Discounted Cash Flow Valuation - Introduction To Discounted Cash Flow Valuation
Discounted cash flow (DCF) is a valuation method used to estimate the attractiveness of an investment opportunity. DCF analysis uses future free cash flow projections and discounts them (most often using the weighted average cost of capital, which we'll cover later) to arrive at a present value.

Discounted Cash Flow Valuation - Annuities And The Future Value And Present Value Of Multiple Cash Flows
An annuity is a series of equal cash flows paid at regular intervals; a perpetuity is an annuity whose payments continue forever. A delayed perpetuity is a perpetual stream of cash flows that starts at a predetermined date in the future. For example, preferred fixed-dividend-paying shares are often valued using a perpetuity formula.
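As an aside, the compound-interest examples earlier in this chapter can be reproduced in a few lines of code. This is a sketch; only the $10,000 / 9% / 10-year figures come from the text:

```python
def future_value(principal, annual_rate, years, periods_per_year=1):
    """P_n = P_0 * (1 + r/m)^(n*m): compound growth of a lump sum,
    where m is the number of compounding periods per year."""
    rate_per_period = annual_rate / periods_per_year
    periods = years * periods_per_year
    return principal * (1 + rate_per_period) ** periods

# Example 1: annual compounding -> about $23,674
print(round(future_value(10_000, 0.09, 10)))
# Example 2: monthly compounding -> about $24,514
print(round(future_value(10_000, 0.09, 10, periods_per_year=12)))
```

Holding the annual rate fixed, increasing `periods_per_year` always increases the result, which is exactly the point the two examples make.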
If the dividends are going to originate (start) five years from now, rather than next year, the stream of cash flows would be considered a delayed perpetuity. Although it may seem a bit illogical, an infinite series of cash flows can have a finite present value. Because of the time value of money, each payment is only a fraction of the last. The net present value (NPV) of a delayed perpetuity is less than a comparable ordinary perpetuity because, based on time value of money principles, the payments have to be discounted to account for the delay. Retirement products are often structured as delayed perpetuities. Examples of Perpetuities The perpetuity is not as abstract a concept as you may think. The British-issued bonds, called consols, are a great example of a perpetuity. By purchasing a consol from the British government, the bondholder is entitled to receive annual interest payments forever. Another example is a type of government bond called an undated issue that has no maturity date and pays interest in perpetuity. While the government can redeem an undated issue if it so chooses, since most existing undated issues have very low coupons, there is little or no incentive for redemption. Undated issues are treated as equity for all practical purposes due to their perpetual nature, but are also known as perpetual bonds. Perhaps the best-known undated issues are the U.K. government's undated bonds or gilts, of which there are eight issues in existence, some of which date back to the 19th century. The largest of these issues presently is the War Loan, with an issue size of 1.9 billion and a coupon rate of 3.5% that was issued in the early 20th century. Perpetuities and the Dividend Discount Model The concept of a perpetuity is used often in financial theory, particularly with the dividend discount model (DDM). Unfortunately, the theory is the easy part. 
The model requires a number of assumptions about a company's dividend payments, growth patterns and future interest rates. Difficulties spring up in the search for sensible numbers to fold into the equation. Here we'll examine this model and show you how to calculate it. The basic idea is that any stock is ultimately worth no more than what it will provide investors in current and future dividends. Financial theory says that the value of a stock is worth all of the future cash flows expected to be generated by the firm, discounted by an appropriate risk-adjusted rate. According to the DDM, dividends are the cash flows that are returned to the shareholder. To value a company using the DDM, calculate the value of dividend payments that you think a stock will generate in the years ahead. Here is what the model says: Where: P= the price at time 0 r= discount rate For simplicity's sake, consider a company with a $1 annual dividend. If you figure the company will pay that dividend indefinitely, you must ask yourself what you are willing to pay for that company. Assume the expected return (or the required rate of return) is 5%. According to the dividend discount model, the company should be worth $20 ($1.00 / .05). How do we get to the formula above? It's actually just an application of the formula for a perpetuity: The obvious shortcoming of the model above is that you'd expect most companies to grow over time. If you think this is the case, then the denominator equals the expected return less the dividend growth rate. This is known as the constant growth DDM or the Gordon model after its creator, Myron Gordon. Let's say you think the company's dividend will grow by 3% annually. The company's value should then be $1 / (.05 - .03) = $50. 
Here is the formula for valuing a company with a constantly growing dividend, as well as the proof of the formula: The classic dividend discount model works best when valuing a mature company that pays a hefty portion of its earnings as dividends, such as a utility company. The Problem of Forecasting Proponents of the dividend discount model say that only future cash dividends can give you a reliable estimate of a company's intrinsic value. Buying a stock for any other reason - say, paying 20 times the company's earnings today because somebody will pay 30 times tomorrow - is mere speculation. In truth, the dividend discount model requires an enormous amount of speculation in trying to forecast future dividends. Even when you apply it to steady, reliable, dividend-paying companies, you still need to make plenty of assumptions about their future. This model is only as good as the assumptions it is based upon. Furthermore, the inputs that produce valuations are always changing and susceptible to error. The first big assumption that the DDM makes is that dividends are steady or grow at a constant rate indefinitely. But even for steady, utility-type stocks, it can be tricky to forecast exactly what the dividend payment will be next year, never mind a dozen years from now. (Find out some of the reasons why companies cut dividends inYour Dividend Payout: Can You Count On It?) Multi-Stage Dividend Discount Models To get around the problem posed by unsteady dividends, multi-stage models take the DDM a step closer to reality by assuming that the company will experience differing growth phases.. However, such an approach brings even more assumptions into the model - although it doesn't assume that a dividend will grow at a constant rate, it must guess when and by how much a dividend will change over time. What Should Be Expected? Another sticking point with the DDM is that no one really knows for certain the appropriate expected rate of return to use. 
It's not always wise simply to use the long-term interest rate because the appropriateness of this can change. The High-Growth Problem No fancy DDM model is able to solve the problem of high-growth stocks. If the company's dividend growth rate exceeds the expected return rate, you cannot calculate a value because you get a negative denominator in the formula. Stocks don't have a negative value. Consider a company with a dividend growing at 20% while the expected return rate is only 5%: in the denominator (r-g) you would have -15% (5%-20%)! In fact, even if the growth rate does not exceed the expected return rate, growth stocks, which do not pay dividends, are even tougher to value using this model. If you hope to value a growth stock with the dividend discount model, your valuation will be based on nothing more than guesses about the company's future profits and dividend policy decisions. Most growth stocks do not pay out dividends. Rather, they reinvest earnings into the company with the hope of providing shareholders with returns by means of a higher share price. Consider Microsoft, which did not pay a dividend for decades. Given this fact, the model might suggest the company was worthless at that time, which is completely absurd. Remember, only about one-third of all public companies pay dividends. Furthermore, even companies that do offer payouts are allocating less and less of their earnings to shareholders. The dividend discount model is by no means the be-all and end-all for valuation. However, learning about the dividend discount model does encourage critical thinking. It forces investors to evaluate different assumptions about growth and future prospects. If nothing else, the DDM demonstrates the underlying principle that a company is worth the sum of its discounted future cash flows. Whether or not dividends are the correct measure of cash flow is another question. 
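The two valuations worked through above ($20 for a flat $1 dividend at a 5% required return, and $50 once 3% growth is assumed) can be sketched directly; the function names below are illustrative:

```python
def perpetuity_value(dividend, required_return):
    """Zero-growth DDM: P = D / r."""
    return dividend / required_return

def gordon_growth_value(next_dividend, required_return, growth_rate):
    """Constant-growth DDM (Gordon model): P = D1 / (r - g).
    Only defined when the growth rate is below the required return --
    this is the 'high-growth problem' described in the text."""
    if growth_rate >= required_return:
        raise ValueError("growth rate must be less than the required return")
    return next_dividend / (required_return - growth_rate)

print(perpetuity_value(1.00, 0.05))           # 20.0
print(gordon_growth_value(1.00, 0.05, 0.03))  # about 50
```

The explicit error for g >= r mirrors the negative-denominator case in the text: the formula simply has no meaningful answer there, rather than producing a negative stock value.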
The challenge is to make the model as applicable to reality as possible, which means using the most reliable assumptions available. true partly because, unlike the trigonometry or calculus you studied back in high school, compounding can be applied to everyday life.. To demonstrate, let's look at another. Pam and Sam are the same age. When Pam was 25 she invested $15,000 at an interest rate of 5.5%. For simplicity, let's assume the interest. The following chart shows Pam and Sam's earnings:. Pam's line gets even steeper (her rate of return increases) in another 10 years. At age 60 she would have nearly $100,000 in her bank account, while Sam would only have around $60,000 - a $40,000 difference!. Specifically, we end up with $100 x 1.01^12 at $112.68. The final amount is higher because the interest compounded more frequently. Compounding amplifies the growth of your working money and maximizes the earning potential of your investments - but remember, because time and reinvesting make compounding work, you must keep your hands off the principal and earned interest. (For related reading, see Overcoming Compounding's Dark Side. For a more advanced discussion of compound interest, read Accelerating Returns With Continuous Compounding.. An unsecured loan is issued and supported only by the borrower's creditworthiness, rather than by some sort of collateral. Generally, a borrower must have a high credit rating to receive an unsecured loan. Commercial paper is an example of an unsecured loan. A secured loan is backed by collateral; if it is not repaid, the lender can seize the collateral and sell it to recover the funds it lent. An acquisition loan helps a company purchase a specific asset that is determined before the loan is granted. Acquisition loans are sought when a company wants to complete an acquisition for an asset but does not have enough liquid capital to do so. 
The company may be able to get more favorable terms on an acquisition loan because the assets being purchased have a tangible value, as opposed to capital being used to fund daily operations or release a new product line. The acquisition loan is typically only available to be used for a short window of time and only for specific purposes. Once repaid, funds available through an acquisition loan cannot be re-borrowed as with a revolving line of credit at a bank. Revolving Credit Revolving credit is another way businesses can borrow money, but the structure is a bit different than an ordinary loan. A line of credit establishes a maximum loan balance that the bank will permit the borrower to maintain. The borrower can draw down on the line of credit at any time, as long as he or she does not exceed the maximum set in the agreement. The advantage of a line of credit over a regular loan is that interest is usually charged only on the part of the line of credit that is used, and the borrower can draw on the line of credit at any time. Depending on the agreement with the financial institution, the line of credit may be classified as a demand loan, which means that any outstanding balance will have to be paid immediately at the financial institution's request. Revolving credit may also be called an evergreen loan or a standing loan. Credit cards are also a type of revolving credit. More Complex Loans A self-liquidating loan is a type of short or intermediate-term credit that is repaid with money generated by the assets purchased. The repayment schedule and maturity of a self-liquidating loan are designed to coincide with the timing of the assets' income generation. These loans are intended to finance purchases that will quickly and reliably generate cash. A business might use a self-liquidating loan to purchase extra inventory in anticipation of the holiday shopping season. The revenue generated from selling that inventory would be used to repay the loan. 
Self-liquidating loans are not always a good credit choice. For example, they do not make sense for fixed assets, such as real estate, or depreciable assets, such as machinery. Another type of loan related to a businesss assets is an asset-conversion loan, a short-term loan that is typically repaid by converting an asset, usually inventory or receivables, into cash. For example, let's say the TSJ Sports Conglomerate is short on cash it needs to pay its employees this month. One option they might explore is trying to get an asset-conversion loan to fill that short-term cash void. Another type of loan that can help a business meet its day to day needs is a cash flow loan. Reasons for needing a cash flow loan could be seasonal-demand changes, business expansion or changes in the business cycle. Cash-flow loans can help in temporary situations, but if cash flow problems persist then companies need to improve their cash conversion cycle and get customers to pay faster. A working capital loan can also be used to finance everyday operations of a company. It is not used to buy long-term assets or investments, but rather to clear up accounts payable, pay wages and salaries, and so on. A company can also pledge its accounts receivable (AR) as collateral for a loan. A non-notification loan is a type of full-recourse loan that is securitized by accounts receivable. Customers making accounts-receivable payments are not notified that their account/payment is being used as collateral for a loan. They continue making payments to the company that rendered services or made the original loan, and the company then uses those payments to repay their lender for financing obtained. If customers do not pay accounts receivable, the company is still liable for repaying the loan it obtained using the AR as security. A bridge loan, also known as "interim financing," "gap financing" or a "swing loan," is a short-term loan that is used until. 
As the term implies, these loans "bridge the gap" between times when financing is needed. They can be customized for many different situations. For example, let's say that a company is doing a round of equity financing that is expecting to close in six months. A bridge loan could be used to secure working capital until the round of funding goes through. This is not an exhaustive list of the types of loans available to businesses, but it gives a general idea of the different options available. Businesses should shop around at different institutions to determine which lender offers the best terms for the loan. The dangers of factoring can be exacerbated when business owners do not know who they. 2. Hedge-Fund Lenders According to an August 2008 Businessweek article, hedge fund lenders are being referred to as "the new corporate AT. (For related reading, see A Brief History Of The Hedge Fund.) or discounted products from you. (For more on this type of loan, read, according to a May 2005 article in Businessweek. Often used by owners to start a business, financing from credit cards has the benefit of easy and early access to cash if your credit history is good. This method. (For related reading, see Six Major Credit Card Mistakes.). It is also less risky for the lender than a straight equity investment if the lender just wants to be paid back with a return and does." payment and the breakdown of the principal and the interest that comprise each payment can be known in advance. If the loan is an adjustable-rate months. In the world of corporate finance, many chief financial officers (CFOs) view banks as lenders of last resort because of the restrictive debt covenants that banks place on direct corporate loans. Covenants are rules placed on debt that are designed to stabilize corporate performance and reduce the risk to which a bank is exposed when it gives a large loan to a company. 
In other words, restrictive covenants protect the bank's interests; they're written by securities lawyers and are based on what analysts have determined to be risks to that company's performance. Here are a few examples of the restrictive covenants faced by companies: they can't issue any more debt until the bank loan is completely paid off; they can't participate in any share offerings until the bank loan is paid off; they can't acquire any companies until the bank loan is paid off, and so on. Relatively speaking, these are straightforward, unrestrictive covenants that may be placed on corporate borrowing. However, debt covenants are often much more convoluted and carefully tailored to fit the borrower's business risks. Some of the more restrictive covenants may state that the interest rate on the debt increases substantially should the chief executive officer (CEO) quit or if earnings per share drop in a given time period. Covenants are a way for banks to mitigate the risk of holding debt, but for borrowing companies they are seen as an increased risk. Simply put, banks place greater restrictions on what a company can do with a loan and are more concerned about debt repayment than bondholders. Bond markets tend to be more forgiving than banks and are often seen as being easier to deal with. As a result, companies are more likely to finance operations by issuing bonds than by borrowing from a bank.

Consider a five-year bond with a $1,000 par value and a $70 annual coupon, discounted at 5%. The cash flows are $70 in each of years one through four, and $1,070 (the final coupon plus the par value) in year five. Thus, the PV of the cash flows is as follows:

Year One = $70 / (1.05) to the 1st power = $66.67
Year Two = $70 / (1.05) to the 2nd power = $63.49
Year Three = $70 / (1.05) to the 3rd power = $60.47
Year Four = $70 / (1.05) to the 4th power = $57.59
Year Five = $1,070 / (1.05) to the 5th power = $838.37

The sum of these present values is the bond's price. The higher the discount rate, the lower the value of the bond; the lower the discount rate, the higher the value of the bond.

Look Out!
If the discount rate is higher than the coupon rate, the PV will be less than par.
If the discount rate is lower than the coupon rate, the PV will be higher than par value. One way to estimate a bond's overall price move is to compute the change attributable to duration and then add the convexity adjustment; the two figures together should equal the overall change in the bond's price.

For a zero-coupon bond, the price is simply: Maturity value / (1 + i) to the power of (the number of years x 2), where i is the semi-annual discount rate.

Risk
Because these bonds are so risky, they have to offer much higher yields than any other debt. Bonds are not inherently safer than stocks. Certain types of bonds can be just as risky, if not riskier, than stocks.

Rating the creditworthiness of a bond issuer, despite the number crunching, is as much an art form as it is a science. While companies like Moody's and A.M. Best gather and analyze mountains of data, the rating itself comes down to the informed opinion of an analyst or a rating committee. The organizations that rate bonds look at an issuer's assets, debts, income, expenses and financial history. In addition, they give special attention to the trustworthiness of a company to repay previous bond issues on time and in full.

Rating agencies regularly review bond ratings every six to 12 months. However, a bond may be reviewed at any time the agency deems necessary for reasons including missed or delayed payments to investors, issuance of new bonds, changes to an issuer's underlying financial fundamentals, or other broad economic developments. (For more on this subject, read The Debt Ratings Debate.) Institutional and individual investors rely on bond rating agencies and their in-depth research to make investment decisions. Rating agencies play an integral role in the investment process and can make or break a company's success in both the primary and secondary bond markets.
While the rating agencies provide a robust service and are worth the fees they earn, the value of such ratings has been widely questioned since the 2008 financial crisis, and the agencies' timing and opinions have been criticized when dramatic downgrades have come very quickly. Investors should not rely solely on the bond rating agency's rating and should supplement the ratings with their own research. It's also important to frequently review the ratings over the life of a bond. (Read more in Bond Rating Agencies: Can You Trust Them? and Why Bad Bonds Get Good Ratings.)

Occasionally, firms will not have their bonds rated, in which case it is solely up to the investor to judge a firm's repayment ability. Because the rating systems differ for each agency and change from time to time, it is prudent to research the rating definition for the bond issue you are considering.

Preparing a new offering requires a lot of work, such as creating a prospectus and other legal documents. In general, the need for underwriters is greatest for the corporate debt market because there are more risks associated with this type of debt. Japan, for example, is a major holder of U.S. government debt. Understanding concepts such as duration and convexity will help you become a seasoned bond market investor.

Bonds - Duration
Bond duration is a measure of the sensitivity of the price (the value of principal) of a fixed-income investment to a change in interest rates. Duration is expressed as a number of years. Rising interest rates mean falling bond prices, while declining interest rates mean rising bond prices. The duration number is a complicated calculation involving present value, yield, coupon, final maturity and call features. Fortunately for investors, this indicator is a standard data point provided in the presentation of comprehensive bond and bond mutual fund information.
The bigger the duration number, the greater the interest-rate risk or reward for bond prices. It is a common misconception among non-professional investors that bonds and bond funds are risk-free. They are not. As you learned in the last section, investors need to be aware of two main risks that can affect a bond's investment value: credit risk (default) and interest rate risk (rate fluctuations). The duration indicator addresses the latter issue.

Consider a straight bond that pays coupons annually and matures in five years. Its cash flows consist of five annual coupon payments, and the last payment includes the face value of the bond.

Besides the movement of time and the payment of coupons, there are other factors that affect a bond's duration.

Macaulay duration is calculated as:

Macaulay duration = [sum of t x C / (1 + i)^t, for t = 1 to n, plus n x M / (1 + i)^n] / P

where C = coupon cash flow, i = required yield, M = maturity (par) value and P = bond price. Remember that bond price equals the sum of the bond's discounted cash flows:

P = sum of C / (1 + i)^t, for t = 1 to n, plus M / (1 + i)^n

Example 1: Betty holds a five-year bond with a par value of $1,000 and coupon rate of 5%. For simplicity, let's assume that the coupon is paid annually and that interest rates are 5%. What is the Macaulay duration of the bond?

= 4.55 years

Fortunately, if you are seeking the Macaulay duration of a zero-coupon bond, the duration would be equal to the bond's maturity, so there is no calculation required.

Modified Duration
Modified duration is a modified version of the Macaulay model that accounts for changing interest rates. Because they affect yield, fluctuating interest rates will affect duration, so this modified formula shows how much the duration changes for each percentage change in yield. For bonds without any embedded features, bond price and interest rate move in opposite directions, so there is an inverse relationship between modified duration and an approximate 1% change in yield.
Because the modified duration formula shows how a bond's duration changes in relation to interest rate movements, the formula is appropriate for investors wishing to measure the volatility of a particular bond. Modified duration is calculated as:

Modified duration = Macaulay duration / (1 + YTM / number of coupon periods per year)

Let's continue to analyze Betty's bond and run through the calculation of her modified duration. Currently her bond is selling at $1,000, or par, which translates to a yield to maturity of 5%. Remember that we calculated a Macaulay duration of 4.55.

= 4.33 years

Because it calculates how duration will change as interest rates move, the modified duration will always be lower than the Macaulay duration.

Effective Duration
The modified duration formula discussed above assumes that the expected cash flows will remain constant, even if prevailing interest rates change; this is also the case for option-free fixed-income securities. On the other hand, cash flows from securities with embedded options or redemption features will change when interest rates change. For calculating the duration of these types of bonds, effective duration is the most appropriate method. Effective duration requires the use of binomial trees to calculate the option-adjusted spread (OAS). There are entire courses built around just those two topics, so the calculations involved for effective duration are beyond the scope of this section. There are, however, many programs available to investors wishing to calculate effective duration.

Key-Rate Duration
The final duration calculation to learn is key-rate duration, which calculates the spot durations of each of the 11 "key" maturities along a spot rate curve. These 11 key maturities are at the three-month and one, two, three, five, seven, 10, 15, 20, 25, and 30-year portions of the curve.
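Betty's numbers can be reproduced with a short sketch. The function names are mine, and annual coupons are assumed, as in the example:

```python
def macaulay_duration(par, coupon_rate, years, ytm):
    """Weighted-average time to receive the bond's cash flows,
    the weights being each flow's share of the bond's price."""
    coupon = par * coupon_rate
    flows = [(t, coupon + (par if t == years else 0)) for t in range(1, years + 1)]
    price = sum(cf / (1 + ytm) ** t for t, cf in flows)
    return sum(t * cf / (1 + ytm) ** t for t, cf in flows) / price

def modified_duration(par, coupon_rate, years, ytm):
    """Macaulay duration scaled by (1 + yield), for annual coupons."""
    return macaulay_duration(par, coupon_rate, years, ytm) / (1 + ytm)

# Betty's bond: $1,000 par, 5% annual coupon, five years, 5% yield
print(round(macaulay_duration(1000, 0.05, 5, 0.05), 2))  # → 4.55
print(round(modified_duration(1000, 0.05, 5, 0.05), 2))  # → 4.33
```

Setting the coupon to zero confirms the rule stated above: a zero-coupon bond's Macaulay duration equals its maturity.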
In essence, key-rate duration, while holding the yield for all other maturities constant, allows the duration of a portfolio to be calculated for a one-basis-point change in interest rates. The key-rate method is most often used for portfolios such as the bond ladder, which consists of fixed-income securities with differing maturities. Here is the formula for key-rate duration:

Key-rate duration = (P(down) - P(up)) / (2 x 0.01 x P(0))

where P(down) and P(up) are the bond's prices after a 1% decrease and a 1% increase in that key rate, and P(0) is the original price. The sum of the key-rate durations along the curve is equal to the effective duration.

Duration and Bond Price Volatility
Several factors determine how much a bond's price will be altered in the face of a change in prevailing interest rates. These factors work together and against each other.

Suppose an investor must pay out $1,200 at the end of two years. If the investor were to reinvest each of the bond's cash flows until maturity, he or she would have more than $1,200 in two years; the extra interest accumulated on the reinvested coupons would allow the bondholder to satisfy a future $1,200 obligation.

History dictates that common stocks average 11-12% per year and outperform just about every other type of security, including bonds and preferred shares. Stocks provide potential for capital appreciation and income and offer protection against moderate inflation. The risks associated with stocks can vary widely, and they usually depend on the company. Purchasing stock in a well-established and profitable company means there is much less risk you'll lose your investment, whereas purchasing a penny stock increases your risks substantially. If you use margin, you can also dramatically increase your leverage in a stock, but this is only recommended for experienced investors. (For more on stock investing, read Buffett: Penny Stocks, Day Trading Are My Real Key To Wealth.)

The chief benefits of owning stocks are:
1. Capital Appreciation
2. Income
3. Liquidity

Because so much of the commentary about preferreds compares them to bonds and other debt instruments, let's first look at the similarities and differences between preferreds and bonds.
Bonds and Preferreds: Similarities

Interest Rate Sensitivity
Preferreds are issued with a fixed par value and pay dividends based on a percentage of that par at a fixed rate. Just like bonds, which also make fixed payments, the market value of preferred shares is sensitive to changes in interest rates. If interest rates rise, the value of the preferred shares would need to fall to offer investors a better rate. If rates fall, the opposite would hold true. However, the relative move of preferred yields is usually less dramatic than that of bonds. (For further reading, check out Trying To Predict Interest Rates.)

Callability
Preferreds technically have an unlimited life because they have no fixed maturity date, but they may be called by the issuer after a certain date. The motivation for the redemption is generally the same as for bonds; a company calls securities that pay higher rates than what the market is currently offering. Also, as is the case with bonds, the redemption price may be at a premium to par to enhance the preferred's initial marketability. (To read more, see Call Features: Don't Get Caught Off Guard.)

Convertibility
Some preferreds, like some bonds, can be converted into common shares. (For further reading, see Introduction To Convertible Preferred Shares and Convertible Bonds: An Introduction.) Like bonds, preferred shares are also rated by the major credit rating agencies, which assess the issuer's ability to pay its creditors. (For more insight, read What Is A Corporate Credit Rating?)

Bonds and Preferreds: Differences
Preferred dividends are less secure than bond interest. This is because bonds are issued with the protection of an indenture. With preferreds, if a company has a cash problem, the board of directors can decide to withhold preferred dividends; the trust indenture prevents companies from taking the same action on bonds. Another difference is that preferred dividends are paid from the company's after-tax profits, while bond interest is paid before taxes. This factor makes it more expensive for the issuing company to issue and pay dividends on preferred stocks.
(To read more, see How And Why Do Companies Pay Dividends?)

Yields
Computing current yields on preferreds is similar to performing the same calculation on bonds: the annual dividend is divided by the price. For example, if a preferred stock is paying an annualized dividend of $1.75 and is currently trading in the market at $25, the current yield is: $1.75 / $25 = 7%.

Accessibility for the Average Investor
Information about a company's preferred shares is easier to access than information about the company's bonds, making preferreds, in a general sense, easier to trade (and perhaps more liquid). The low par values of the preferred shares also make investing easier because bonds, with par values around $1,000, often have minimum purchase amounts (i.e., five bonds).

Common and Preferred Stocks: Similarities

Payments
Both common and preferred stocks are equity instruments that pay dividends from the company's after-tax profits.

Common and Preferred Stocks: Differences

Price Volatility
When a company's common shares move sharply, the preferred shares in the same company might only increase by a few points. The lower volatility of preferred stocks may look attractive, but preferreds will not share in a company's success to the same degree as common stock. (To learn more, read 5 Signs Of A Market-Beating Stock.)

Voting
Whereas common stock is often called voting equity, preferred stocks usually have no voting rights.

Dividend rates on adjustable-rate preferred stocks (ARPSs) are keyed to yields on U.S. government issues, providing the investor limited protection against adverse interest rate markets.

Why Preferreds?
A company may choose to issue preferreds for a couple of reasons:

Flexibility of payments: Preferred dividends may be suspended in case of corporate cash problems.
Easier to market: The majority of preferred stock is bought and held by institutions, which may make the shares easier to market at issue.

In many cases, the individual tax rate on preferred dividends under the new rules is 15%. That compares favorably with paying taxes at the ordinary rate on interest received from corporate bonds.
However, because the 15% rate is not an across-the-board fact, investors should seek competent tax advice before diving into preferreds.

Free cash flow equals operating cash flow minus capital expenditures:

2005
Operating Cash Flow: 438
Capital Expenditures: 785
Free Cash Flow: -347

The fixed payment is the dividend, and it will be the basis of the valuation method for a preferred share. These payments could come quarterly, monthly or yearly, depending on the policy stated by the company.

To value a preferred share with a fixed dividend, take the dividend payment and divide it by the required rate of return:

Value = Dividend / Required rate of return

The added g is the growth of the payments:

Value = Dividend / (Required rate of return - g)

By subtracting the growth number, the cash flows are discounted by a lower number, resulting in a higher value.

The hard part is estimating the required rate of return and the growth or length of higher returns. The dividend payment is usually easy to find; the difficult part comes when this payment is changing or potentially could change in the future. Also, finding a proper discount rate is very difficult, and if this figure is off, it could drastically change the calculated value of the shares. When it comes to classroom homework, these numbers will be simply given, but in the real world we are left to estimate the discount rate or pay a company to do the calculation.

While the actual tape has been done away with, the ticker has retained the name. (See How Has The Stock Market Changed? to learn more about the evolution of trading.) Throughout the trading day, quotes scroll across the tape and appear again with the latest trading activity.

Reading the Ticker Tape
Here's an example of a quote shown on a typical ticker tape:

Ticker Symbol: The unique characters used to identify the company.
Shares Traded: The volume for the trade being quoted. Abbreviations are K = 1,000, M = 1,000,000 and B = 1,000,000,000.
Price Traded: The price per share for the particular trade (the last bid price).
Change Direction: Shows whether the stock is trading higher or lower than the previous day's closing price.
Change Amount: The difference in price from the previous day's close.
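The preferred-share valuation formulas described in this section (the current-yield division, and the dividend divided by the required return, with growth g subtracted from the denominator) can be sketched as follows. Function names are mine:

```python
def current_yield(annual_dividend, price):
    """Annual dividend divided by the market price."""
    return annual_dividend / price

def preferred_value(dividend, required_return, growth=0.0):
    """Perpetuity value of a preferred dividend; a growth rate g
    shrinks the divisor (r - g), which raises the value."""
    if required_return <= growth:
        raise ValueError("required return must exceed growth")
    return dividend / (required_return - growth)

print(current_yield(1.75, 25))                      # → 0.07, i.e. 7%
print(round(preferred_value(1.75, 0.07), 2))        # → 25.0
print(round(preferred_value(1.75, 0.07, 0.02), 2))  # → 35.0
```

Note how adding 2% growth lifts the value from $25 to $35, illustrating the point that subtracting g discounts the cash flows by a lower number.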
To learn more, see Why do some stock symbols have three letters while others have four?

On many tickers, colors are also used to indicate how the stock is trading. Here is the color scheme most TV networks use:

Green indicates the stock is trading higher than the previous day's close.
Red indicates the stock is trading lower than the previous day's close.

Today, it is easy for people to get stock quotes off the internet. This method is superior because most sites update throughout the day and give you more information, news, charting and research.

Chapter 4
Net Present Value And Internal Rate Of Return - Introduction To Net Present Value And Internal Rate Of Return

Net present value (NPV) is the difference between the present value of a project's cash inflows and the present value of its cash outflows. NPV is calculated using the following formula:

NPV = CF0 + CF1 / (1 + r) + CF2 / (1 + r)^2 + ... + CFn / (1 + r)^n

where CFt is the cash flow in period t and r is the discount rate.

If the NPV of a prospective project is positive, the project should be accepted. However, if NPV is negative, the project should probably be rejected because cash flows will also be negative. For example, if a retail clothing business wants to purchase an existing store, it would first estimate the future cash flows that store would generate, then discount those cash flows into one lump-sum present value amount, say $565,000. If the owner of the store was willing to sell his business for less than $565,000, the purchasing company would likely accept the offer, as it presents a positive-NPV investment. Conversely, if the owner would not sell for less than $565,000, the purchaser would not buy the store, as the investment would present a negative NPV. (Sometimes losing investments aren't what they seem. Learn more in How To Profit From Investment "Losers".)

Even if a project's actual return fell short of its projection, a high-IRR project would still provide a much better chance of strong growth. IRRs can also be compared against prevailing rates of return in the securities market. If a firm can't find any projects with IRRs greater than the returns that can be generated in the financial markets, it may simply choose to invest its retained earnings into the market. (For related reading, see The Top New Investment: Doing Nothing.)
Differences between NPV and IRR and Their Uses

Both NPV and IRR are primarily used in capital budgeting, the process by which companies determine whether a new investment or expansion opportunity is worthwhile. Given an investment opportunity, a firm needs to decide whether undertaking the investment will generate net economic profits or losses for the company. To do this, the firm estimates the project's future cash flows and discounts them; the sum of those discounted flows, net of the initial cost, is the net present value (NPV) of the investment.

Let's illustrate with an example: suppose JKL Media wants to buy a small publishing company. JKL determines that the future cash flows generated by the publisher, when discounted at a 12% annual rate, yield a present value of $23.5 million. If the publishing company's owner is willing to sell for $20 million, then the NPV of the project would be $3.5 million ($23.5 - $20 = $3.5). The $3.5 million NPV represents the intrinsic value that will be added to JKL Media if it undertakes this acquisition.

The IRR, by contrast, is the discount rate that makes the project's NPV equal to zero. For this example, the project's IRR could, depending on the timing and proportions of cash flow distributions, be equal to 17.15%. Thus, JKL Media, given its projected cash flows, has a project with a 17.15% return. If there were a project that JKL could undertake with a higher IRR, it would probably pursue the higher-yielding project instead. Thus, you can see that the usefulness of the IRR measurement lies in its ability to represent any investment opportunity's return and to compare it with other possible investments.

Net Present Value And Internal Rate Of Return - Net Present Value

Net present value is the present value of a project's cash inflows minus the present value of its costs. In the two examples below, assuming a discount rate of 10%, project A and project B have respective NPVs of $126,000 and $1,200,000. These results signal that both capital budgeting projects would increase the value of the firm, but if the company only has $1 million to invest at the moment, project B is superior.
Project A cash flows: Year 2: 300,000; Year 3: 300,000; Year 4: 300,000; Year 5: 300,000
Project B cash flows: Year 2: -300,000; Year 5: 3,000,000

Some of the major advantages of the NPV approach include the overall usefulness and easy understandability of the figure. NPV provides a direct measure of added profitability, allowing one to simultaneously compare multiple mutually exclusive projects, and even though the discount rate is subject to change, a sensitivity analysis of the NPV can typically signal any overwhelming potential future concerns. Although the NPV approach is subject to fair criticisms that the value-added figure does not factor in the overall magnitude of the project, the profitability index (PI), a metric derived from discounted cash flow calculations, can easily fix this concern. We'll discuss the profitability index in a later section. (It's never too early to start learning about money. Read 5 Ways To Teach Your Kids The Value Of A Dollar.)

Here is another example of how companies use NPV. Using the company's cost of capital, the net present value (NPV) is the sum of the discounted cash flows minus the original investment.

Projects with NPV > 0 increase stockholders' return
Projects with NPV < 0 decrease stockholders' return

Example: Net Present Value
Assume Newco is deciding between two machines (Machine A and Machine B) in order to add capacity to its existing plant. Using the cash flows in the table below, let's calculate the NPV for each machine and decide which project Newco should accept. Assume Newco's cost of capital is 8.4%.

Expected after-tax cash flows for the new machines

Calculation and Answer:
Given that both machines have NPV > 0, both projects are acceptable. However, for mutually exclusive projects, the decision rule is to choose the project with the greatest NPV. Since NPV(B) > NPV(A), Newco should choose the project for Machine B. We'll discuss additional applications of NPV in the following pages.
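The NPV decision rule (accept when NPV > 0; among mutually exclusive projects, take the greatest NPV) can be sketched with a generic helper. The -$1,000,000 outlay followed by five $300,000 inflows is the stream that recurs in this section's examples; the 10% rate is an illustrative assumption:

```python
def npv(rate, cash_flows):
    """Sum of discounted cash flows; cash_flows[0] is the Year 0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1_000_000] + [300_000] * 5
result = npv(0.10, flows)
print(round(result, 2))  # positive → accept (≈ 137,236)
```

Running the same helper over each candidate project's cash flows and comparing the results implements the "choose the greatest NPV" rule directly.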
Consider the payback (PB) period of a project with the following cash flows:

Year 0 (investment): -1,000,000
Years 1-5 (inflows): 300,000 per year

The payback period here is 3.33 years ($1,000,000 / $300,000 per year). Now consider a second project:

Year 0 (investment): -1,000,000
Years 1-4 (inflows): 250,000 per year
Year 5 (inflow): 15,000,000

Since the payback method ignores all cash flows beyond the payback point, it gives this project no credit for the $15,000,000 inflow in Year 5 — a shortcoming that measures such as NPV and IRR avoid.

Net Present Value And Internal Rate Of Return - Average Accounting Return

Average accounting return, also called accounting rate of return or ARR, is an accounting method used for the purposes of comparison with other capital budgeting calculations, such as NPV, PB period and IRR. ARR provides a quick estimate of a project's worth over its useful life. ARR is calculated by finding a capital investment's average operating profits before interest and taxes but after depreciation and amortization (also known as "EBIT") and dividing that number by the book value of the average amount invested. It can be expressed as the following:

ARR = Average Profit / Average Investment

The result is expressed as a percentage. In other words, ARR compares the amount invested to the profits earned over the course of a project's life. The higher the ARR, the better.

The major drawbacks of ARR are as follows:
1. It uses operating profit rather than cash flows. Some capital investments have high upkeep and maintenance costs, which bring down profit levels.
2. Unlike NPV and IRR, it does not account for the time value of money. By ignoring the time value of money, the capital investment under consideration will appear to have a higher level of return than what will occur in reality. The capital investment may appear to be more lucrative than the alternatives, such as investing in the financial markets, when it is actually less lucrative.

Here is a simple example of an ARR calculation: a project requiring an average investment of $1,000,000 and generating an average annual profit of $150,000 would have an ARR of 15%.
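Both quick measures can be computed in a few lines. Function names are mine; the cash flows and the $150,000 / $1,000,000 ARR figures follow the examples in this section:

```python
def payback_period(cash_flows):
    """Years until cumulative inflows repay the initial outlay,
    interpolating within the final year (cash_flows[0] is negative)."""
    cumulative = cash_flows[0]
    for year, cf in enumerate(cash_flows[1:], start=1):
        if cumulative + cf >= 0:
            return year - 1 + (-cumulative) / cf
        cumulative += cf
    return None  # the project never pays back

def arr(average_profit, average_investment):
    """Average accounting return: average EBIT over average investment."""
    return average_profit / average_investment

print(round(payback_period([-1_000_000] + [300_000] * 5), 2))  # → 3.33
print(arr(150_000, 1_000_000))  # → 0.15, i.e. 15%
```

Note that `payback_period` deliberately ignores everything after the break-even year, which is exactly the weakness the text describes.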
While ARR is easy to calculate and can be used to gauge the results of other capital budgeting calculations, it is not the most accurate metric.

Net Present Value And Internal Rate Of Return - Internal Rate Of Return

The internal rate of return (IRR) is frequently used by corporations to compare and decide between capital projects. The IRR is the discount rate that makes the NPV of a project's cash flows equal to zero. (For more insight, read the Discounted Cash Flow Analysis tutorial.)

For example, a corporation will evaluate an investment in a new plant versus an extension of an existing plant based on the IRR of each project. In such a case, each new capital project must produce an IRR that is higher than the company's cost of capital. Once this hurdle is surpassed, the project with the highest IRR would be the wiser investment, all other factors (including risk) being equal.

Other IRR Uses
What if you don't want to reinvest dividends, but need them as income when paid? And if dividends are not assumed to be reinvested, are they paid out or are they left in cash? What is the assumed return on the cash? IRR and other assumptions are particularly important on instruments like whole life insurance policies and annuities, where the cash flows can become complex. Recognizing the differences in the assumptions is the only way to compare products accurately.

Consider again the project with the following cash flows:

Year 0 (investment): -1,000,000
Years 1-5 (inflows): 300,000 per year

The IRR for this project is approximately 15.2%.

The IRR is a useful valuation measure when analyzing individual capital budgeting projects, not those which are mutually exclusive. It provides a better valuation alternative to the PB method, yet falls short on several key requirements.

Net Present Value And Internal Rate Of Return - Advantages And Disadvantages Of NPV and IRR

Now that you're familiar with both NPV and IRR and understand the shortcomings of the PB period and ARR, let's compare the advantages and disadvantages of NPV and IRR.
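Absent a financial calculator, the IRR can be found numerically. This sketch uses simple bisection — my implementation, valid only for conventional streams with a single sign change (it would be unreliable in the multiple-IRR case discussed below):

```python
def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Bisection search for the rate that drives NPV to zero.
    Assumes one sign change in the cash flows (no multiple-IRR case)."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid  # NPV still positive: the rate must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# -$1,000,000 followed by five $300,000 inflows
print(round(irr([-1_000_000] + [300_000] * 5) * 100, 1))  # → 15.2 (percent)
```

Comparing this figure against the firm's cost of capital is the hurdle test described above.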
Advantages: The NPV method is a direct measure of the dollar contribution to the stockholders. The IRR method shows the return on the original money invested.

Disadvantages: The NPV method does not measure the project size. The IRR method can, at times, give you conflicting answers when compared to NPV for mutually exclusive projects. The "multiple IRR problem" can also be an issue, as discussed below.

The Multiple IRR Problem
A multiple IRR problem occurs when cash flows change sign more than once during the project lifetime; each sign change can produce another rate at which NPV equals zero. The timing of cash flows as well as project sizes can produce conflicting results in the NPV and IRR methods.

Example: NPV and IRR Analysis
We first determine the NPV for each machine as follows:

NPV(A) = ($5,000) + $2,768 + $2,553 = $321
NPV(B) = ($10,000) + $5,350 + $5,106 = $456

According to the NPV analysis alone, Machine B is the most appropriate choice for Newco to purchase. The next step is to determine the IRR for each machine using our financial calculator. The IRR for Machine A is equal to 13%, whereas the IRR for Machine B is equal to 11%. According to the IRR analysis alone, Machine A is the most appropriate choice for Newco to purchase.

The NPV and IRR analysis for these two projects give us conflicting results. This is most likely due to the timing of the cash flows for each project as well as the size difference between the two projects.

Discount rates have moved across a wide range in the last 20 years, so clearly the discount rate is changing. Without modification, IRR does not account for changing discount rates, so it's just not adequate for longer-term projects with discount rates that are expected to vary.

Capital investments are funds invested in a firm or enterprise to further its business objectives. Funding may come through equity issuance; it may also be through straight or convertible debt.
Funding may range from an amount of less than $100,000 in seed financing for a start-up to amounts in the hundreds of millions for massive projects in capital-intensive sectors like mining, utilities and infrastructure. In this section, we'll examine various components of a company's capital investment decisions, including project cash flows, incremental cash flows and more.

Tips and Tricks
The key metrics for determining the terminal cash flow are the salvage value of the asset, net working capital and the tax benefit or loss from the asset. The terminal cash flow can be calculated as illustrated:

Return of net working capital: +$300
Salvage value of the machine: +$800
Tax reduction from loss (salvage < BV): +$80
Net terminal cash flow: $1,180
Operating CF5: +$780
Total year-five cash flow: $1,960

For determining the tax benefit or loss, a tax benefit is received if the book value of the asset is more than the salvage value, and tax is owed on the gain if the book value of the asset is less than the salvage value.

Capital Investment Decisions - Pro Forma Financial Statements
Many firms prepare pro forma financial statements to project the cash flows and profitability of a proposed investment.

Capital Investment Decisions - Operating Cash Flow And Alternative Definitions Of Operating Cash Flow
A company can report positive earnings per share (EPS) while burning cash. In this situation, investors should determine the source of the cash hemorrhage (inventories, receivables, etc.) and whether this situation is a short-term issue or long-term problem. (For more on cash flow manipulation, see Cash Flow On Steroids: Why Companies Cheat.)

Step 1: Identify capital spending. In this case, it is $100,000.

Step 2: Identify the after-tax salvage value of the new computer system using the following calculation:

After-tax salvage value = Salvage Value x (1 - 0.35) = $25,000 x 0.65 = $16,250

Step 3: Calculate the actual annual savings from improved efficiency, taking taxes and depreciation into account. The computer system upgrade will save $25,000 a year. In other words, it will increase operating cash flow by $25,000 a year.
On the plus side, the company gains an additional depreciation expense of $20,000 a year ($100,000 / 5), which is tax-deductible. Subtracting the depreciation deduction from the increase in operating income gives us $25,000 - $20,000 = $5,000, or earnings before interest and taxes (EBIT). This $5,000 increase in taxable income will be taxed at the company's 35% tax rate, yielding $5,000 x 0.35 = $1,750 in additional tax liability for the company each year. EBIT + Depreciation - Taxes = OCF, so $5,000 + $20,000 - $1,750 = $23,250. (Learn more about depreciation in Depreciation: Straight-Line Vs. Double-Declining Methods.)

Step 4: Calculate the annual cash flows from undertaking the system upgrade.

Year 0: -$100,000
Years 1-4: $23,250/yr = $23,250 x 4 = $93,000
Year 5: $23,250 + $16,250 salvage value = $39,500
Total: -$100,000 + $93,000 + $39,500 = $32,500

Step 5: Calculate the net present value (NPV) using the discount rate, project life, initial cost and each year's cash flows using an NPV calculator, and determine if the upgrade is truly cost-saving. In this case, the discount rate is 10%, the project life is five years, the initial cost is $100,000 and each year's cash flows are provided in Step 4. The result is an NPV of -$1,774.24, so the system upgrade would actually not cut costs and thus should not be undertaken. (For related reading, see Should computer software be classified as an intangible asset or part of property, plant and equipment? and Lady Godiva Accounting Principles.)

Asset Replacement
Earlier in this section, we discussed how to determine a project's cash flows. Here, we'll consider how to analyze those cash flows to determine whether a company should undertake a replacement project. Replacement projects are projects that companies invest in to replace old assets in order to maintain efficiencies. Assume Newco is planning to add new machinery to its current plant.
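The five steps of the system-upgrade example can be checked end-to-end in a few lines. This is a sketch of the example's arithmetic; the function name is mine:

```python
def upgrade_npv():
    """Walks the hypothetical $100,000 system-upgrade example:
    after-tax salvage, straight-line depreciation, OCF, then NPV at 10%."""
    cost, life, tax, rate = 100_000, 5, 0.35, 0.10
    salvage_after_tax = 25_000 * (1 - tax)   # Step 2 → 16,250
    depreciation = cost / life               # 20,000 per year
    ebit = 25_000 - depreciation             # 5,000
    ocf = ebit + depreciation - ebit * tax   # EBIT + Depreciation - Taxes = 23,250
    flows = [-cost] + [ocf] * (life - 1) + [ocf + salvage_after_tax]
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

print(round(upgrade_npv(), 2))  # → -1774.24, i.e. negative: reject the upgrade
```

The negative result reproduces the conclusion above: despite $25,000 of annual savings, the upgrade destroys value at a 10% discount rate.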
There are two machines Newco is considering, with cash flows as follows:

Discounted Cash Flows for Machine A and Machine B

Calculate the NPV for each machine and decide which machine Newco should invest in. As calculated previously, Newco's cost of capital is 8.4%. When considering mutually exclusive projects and NPV alone, remember that the decision rule is to invest in the project with the greatest NPV. As Machine B has the greatest NPV, Newco should invest in Machine B.

Example: Replacement Project
Now, let us assume that rather than investing in an additional machine, as in our earlier expansion project example, Newco is exploring replacing its current machine with a newer, more efficient machine. Based on the current market, Newco can sell the old machine for $200, but this machine has a book value of $500. The new machine Newco is looking to invest capital in has a cost of $2,000, with shipping and installation expenses of $500 and $300 in net working capital. Newco expects the machine to last for five years, at which point the new machine would have a book value of $1,000 ($2,000 minus five years of $200 annual depreciation) and a potential market value of $800.

With respect to cash flows, Newco expects the new machine to generate an additional $1,500 in revenues and costs of $200. We will assume Newco has a tax rate of 40%. The maximum payback period that the company has established is five years. As required in the LOS, calculate the project's initial investment outlay, operating cash flow over the project's life and the terminal-year cash flow for the replacement project.

Answer:
Initial Investment Outlay
Computing the initial investment outlay of a replacement project is slightly different than the computation for an expansion project.
This is primarily because the old machine is sold: the after-tax proceeds from that sale reduce the outlay. Newco receives $200 for the old machine; since this is below the $500 book value, the $300 loss yields a tax saving of $300 x 40% = $120. The initial investment outlay is therefore $2,000 + $500 + $300 - $200 - $120 = $2,480.

In the analysis of either an expansion or a replacement project, the operating cash flows and terminal cash flows are calculated the same.

Operating cash flow: CFt = (revenues - costs)*(1 - tax rate)
CF1 = ($1,500 - $200)*(1 - 40%) = $780
CF2 = ($1,500 - $200)*(1 - 40%) = $780
CF3 = ($1,500 - $200)*(1 - 40%) = $780
CF4 = ($1,500 - $200)*(1 - 40%) = $780
CF5 = ($1,500 - $200)*(1 - 40%) = $780

Terminal Cash Flow: The terminal cash flow can be calculated as illustrated:
Return of net working capital +$300
Salvage value of the machine +$800
Tax reduction from loss (salvage < BV) +$80
Net terminal cash flow $1,180
Operating CF5 +$780
Total year 5 cash flow $1,960

4.3 Project Analysis And Valuation - Introduction To Project Analysis And Valuation

A typical method is to perform multi-factor analysis (models containing multiple variables) in the following ways:
- Creating a fixed number of scenarios
  - Determining the high/low spread
  - Creating intermediate scenarios
- Random factor analysis
  - Numerous to infinite number of scenarios

The figure below uses a three-scenario method evaluating a base case (B) (mean value), an upside case (U) and a downside case (D). [Figure omitted.] In a random factor (Monte Carlo) analysis, each variable is drawn from its own probability distribution. (To learn more about this analysis, read Introduction To Monte Carlo Simulation.)

Labor costs in a factory are semi-variable. The fixed portion is the wage paid to workers for their regular hours. The variable portion is the overtime pay they receive when they exceed their regular hours.

The next step in break-even analysis is determining what price to charge for your good or service. Let's look at some of the pricing strategies companies use. With competition-driven pricing, the seller establishes its prices based on what its competitors charge.
Competition-driven pricing focuses on determining a price that will achieve the most profitable market share and does not always mean that the price is identical to the competition's. Determining how to profitably achieve the greatest market share without incurring excessive costs requires strategic decision making. The firm must focus not only on obtaining the largest market share, but on finding the combination of margin and market share that will be the most profitable in the long run.

Penetration pricing is a marketing strategy firms use to attract customers to a new product or service. This strategy means offering a low price for a new product or service during its debut in order to attract customers away from competitors. The goal of this pricing strategy is to make customers aware of the new product due to its lower price in the marketplace relative to rivals. When applied correctly, penetration pricing can increase both market share and sales volume. High sales volume may then lead to lower production costs and higher inventory turnover, both of which are positive for any firm with fixed overhead. The chief disadvantage of penetration pricing is that the increase in sales volume may not lead to a profit if prices are kept too low. As well, if the price is only an introductory campaign, customers may leave the brand once prices begin to rise to levels more in line with competitors' prices.

Variable cost-plus pricing is a pricing method in which the selling price is established by adding a markup to total variable costs. The expectation is that the markup will contribute to meeting all or a part of fixed costs and generate some level of profit.
Variable cost-plus pricing is especially useful in competitive scenarios such as contract bidding, but it is not suitable in situations where fixed costs are a major component of total costs.

For example, assume total variable costs for manufacturing one unit of a product are $10 and a markup of 50% is added. The selling price as determined by this variable cost-plus pricing method would be $15. If the contribution to fixed costs per unit is estimated at $4, then profit per unit would be $1.

With customer-driven pricing, the seller makes a pricing decision based on what the customer can justify paying given the value of the product or service from the consumer's perspective. To optimize this pricing strategy, companies need to consider how to best segment the market so that prices reflect the differences in value perceived by different types of consumers. Companies must ensure that there is a comprehensive understanding of the customer and what he or she values. A company will make the most money if it can figure out the maximum each customer will pay and charge them that amount.

Companies can charge high prices on some products relative to their production costs and consumers will still buy them. For example, movie theater popcorn is dramatically marked up compared to the grocery store equivalent, and bottled water is exponentially more expensive than tap water. Other products have very thin profit margins. (For more on this topic, see 6 Outrageously Overpriced Products.) To learn more about pricing strategies, read 4 Pricing Strategies That Increase Your Spending, 2 Key Tactics Retailers Use To Increase Sales and The Pros And Cons Of Price Wars.

Once you know a company's production costs and its pricing strategy, you can project when it is likely to break even and when it is likely to generate a profit. If the product or service is new, it can be difficult to predict demand, which is a third factor in break-even analysis.
If the company must sell 1,000 units to break even, there has to be demand for 1,001 units before the company will see a profit. If expected demand is only 200 units, the product or service may be a bad investment. If the company is established and has a history of selling the same product or service, it may be able to predict demand more accurately and thus perform a more accurate break-even analysis.
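The cost-plus and break-even ideas above can be combined in a short sketch. The $10 variable cost and 50% markup come from the earlier example; the fixed-cost figure is hypothetical, chosen only for illustration:

```python
# Variable cost-plus pricing, then the break-even point in units.
variable_cost = 10.0   # per-unit variable cost (from the example)
markup = 0.50          # 50% markup on variable cost
price = variable_cost * (1 + markup)

fixed_costs = 20_000.0                # hypothetical fixed costs
contribution = price - variable_cost  # dollars per unit toward fixed costs
breakeven_units = fixed_costs / contribution

print(price)            # 15.0
print(breakeven_units)  # 4000.0 -- unit 4,001 is the first profitable sale
```

If expected demand falls short of the break-even volume, the product is likely a bad investment, as the passage above notes.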
On Tue, 25 Jan 2005 16:46:54 -0600, Serge Hallyn <serue us ibm com> wrote: > On Tue, 2005-01-25 at 15:25 -0600, Timothy R. Chavez wrote: > > Any accesses on that inode, > > in that namespace (presumably the only access we care about), by an > > audited syscall will be noted and sent to userspace. Isn't that > > sufficient? > > Not quite right: Any access to that inode from any namespace. Another > namespace might simply mean that you have a different path to the inode. > Alright, I see better now the concern. But because the audit information is associated with the inode via an administrator action, it still remains true that any access to that inode will be caught, from any namespace. Correct? I guess the assumption here is that the administrator knows that he/she is in the right namespace when adding/removing watches so that they tag the appropriate inodes. > -- > Serge Hallyn <serue us ibm com> > > -- - Timothy R. Chavez
With Linux 2.4 right around the corner, now would be a very good time to discuss the new packet observation and filtering mechanism that was introduced during the 2.3 kernel development, which is called netfilter. I discussed the netfilter architecture briefly back in my Best Defense column in October 1999 (), and more thoroughly in the January 2000 issue of Linux Magazine.

netfilter is a framework inside the kernel that allows a module to observe and modify packets as they pass through the IP stack. Well, since I wrote that article in January, netfilter hooks have been added to the IPv6 (the next-generation of IP) and DECnet (a more obscure protocol) layers that are similar to those described here for IPv4.

Inside the kernel you will see calls such as the following throughout the protocol code (this is from ip_local_deliver() in net/ipv4/ip_input.c):

return NF_HOOK(PF_INET, NF_IP_LOCAL_IN, skb, skb->dev, NULL,
               ip_local_deliver_finish);

NF_HOOK is a macro that calls any registered netfilter hooks for the given protocol (PF_INET) and hook (NF_IP_LOCAL_IN), with the given packet (skb). It also handles information on the incoming and outgoing devices (skb->dev and NULL, respectively). Once everyone registered to listen on that hook has returned NF_ACCEPT, the function specified by the last argument is called to continue packet traversal (ip_local_deliver_finish). If a hook returns NF_DROP, the packet is freed, and the function is never called.

If CONFIG_NETFILTER is set to n when the kernel is compiled, then the above macro simply calls the final argument, which is declared inline (a gcc extension taken from C++), so there is no overhead for that case. Where to put these NF_HOOK calls in your protocol stack is of fairly limited interest (there are only about a dozen protocols in the Linux kernel), but of more interest is the other side of the framework: How do you register to listen for packets at a certain point?
Many people have specialized packet watching or mangling needs, so I'll explain what they can expect. First, you have to decide what protocol you wish to hook into. netfilter divides up hooks on a per-protocol basis: there is no way to hook into all packets at once, for example. Usually this will be IP (protocol PF_INET inside the kernel). Each protocol defines a number of points you can hook into. IPv4 defines five points, and the other protocols have so far followed the model shown in Figure One (although DECnet added some new ones).

As you can see in the figure, a hook can observe all valid incoming packets by registering at NF_IP_PRE_ROUTING. If you only want to observe packets destined for this IP address, you can do that by hooking into NF_IP_LOCAL_IN, and locally generated packets at NF_IP_LOCAL_OUT. Packets being forwarded through the machine will hit the NF_IP_FORWARD hook, and immediately before IP packets are transmitted they will pass through the NF_IP_POST_ROUTING hook.

Since many hooks can be registered at the same point, some priority must be assigned to each hook to determine what order they are executed in. Hooks with a lower priority number are called first. For IPv4, linux/netfilter_ipv4.h has an enumerated type that offers some standard values. Traditionally, 0 is for packet filtering, so negative numbers are used for executing hooks before filtering, and positive numbers for after filtering.

To register a hook, you fill in an nf_hook_ops structure with the priority, hook point, and a pointer to your hook function, and call nf_register_hook(). In keeping with kernel tradition, this function returns 0 for success, and a negative error number for failure. A good example to look at is Jamal Salim's ingress filtering in net/sched/sch_ingress.c, which uses a single netfilter hook, or the more complex examples in the net/ipv4/netfilter/ directory.
A Silly Example

For the purposes of this article we're going to work a little bit on the demonstration-only linuxmag.o kernel module. This tiny module will corrupt locally generated IP packets that are of length 100, and drop packets that are of length 200. First, we define the nf_hook_ops structure:

static struct nf_hook_ops linuxmag_ops
= { { NULL, NULL }, linuxmag_hook, PF_INET,
    NF_IP_LOCAL_OUT, NF_IP_PRI_FILTER-1 };

The first element in the structure ({ NULL, NULL },) is a doubly-linked-list element, which is used internally. The second is the function to call (which in this case is the linuxmag_hook function). Following that is the protocol (PF_INET), the hook point (NF_IP_LOCAL_OUT) for locally generated packets, and the priority (just before packet filtering). All we need to do now is write the function that does the actual work (see Listing One).

Listing One: The linuxmag_hook Function

static unsigned int linuxmag_hook(unsigned int hook,
    struct sk_buff **pskb,
    const struct net_device *indev,
    const struct net_device *outdev,
    int (*okfn)(struct sk_buff *))
{
    struct iphdr *iph = (*pskb)->nh.iph;

    /* We examine the packet length, which the
       framework doesn't understand. */
    (*pskb)->nfcache |= NFC_UNKNOWN;

    if (ntohs(iph->tot_len) == 100) {
        /* Corrupt the last byte of the packet. */
        (*pskb)->tail[-1]++;
        (*pskb)->nfcache |= NFC_ALTERED;
        printk("linuxmag: corrupting packet\n");
        return NF_ACCEPT;
    }

    if (ntohs(iph->tot_len) == 200) {
        printk("linuxmag: dropping packet\n");
        return NF_DROP;
    }

    return NF_ACCEPT;
}

We can see that the hook function takes five arguments:

1. The Hook. This will always be NF_IP_LOCAL_OUT in this module, as that is the only place we register this function.

2. A Pointer to a Pointer to the skbuff. This represents the packet. We will use the double pointer so that we can replace the entire packet with another one if that becomes necessary.

3. A Pointer to the Input Device. This is set to NULL for the NF_IP_LOCAL_OUT hook.

4. A Pointer to the Output Device. This is set to the interface the packet is heading out on for the NF_IP_LOCAL_OUT hook.

5. A Pointer to the Function that Will be Called if All the Hooks are Successful. This should never be called directly, except for special effects (it is a hack for modules that need to fragment packets).

In this function, we only care about the packet itself, so we use only the pskb parameter.
The first thing we do is obtain a pointer to the packet's IP header. We know this field (nh.iph) is valid, because we registered this as a PF_INET hook, so we will only ever be passed IP packets.

The second thing we do is a little tricky. Each skbuff has a field that should identify which skbuff fields were examined by a hook. Values for this are given in include/linux/netfilter_ipv4.h. For example, if a module examined the source IP address, we would set the NFC_IP_SRC bit in the nfcache field. In the future this field could be used to cache the decisions made by modules. There is no field for packet length, so we set the NFC_UNKNOWN bit, which means "I looked at something that the framework doesn't understand, so make sure I get every packet."

Next, we decide what to do based on packet length. If the packet length is 100, we increment the last byte. Because we altered the packet, we must mark it altered by setting the NFC_ALTERED bit. This is particularly important for the NF_IP_LOCAL_OUT hook, which needs to look up the route for the packet again in case we changed the way routing should be done. We then return NF_ACCEPT, which means to let the packet through. If the length is 200, we simply return NF_DROP, which means the packet should be dropped. Otherwise, the packet passes unscathed, by returning NF_ACCEPT.

Polishing Our Example

We need very little else to turn these two code fragments into a complete kernel module. At the top of the code, we need the headers and a comment:

/* Example kernel module for Linux Magazine.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>

Following this comes the linuxmag_hook function, then the linuxmag_ops structure, then finally the glue needed to turn it into a module:

static int __init init(void)
{
    return nf_register_hook(&linuxmag_ops);
}

static void __exit fini(void)
{
    nf_unregister_hook(&linuxmag_ops);
}

module_init(init);
module_exit(fini);

So now we have a complete kernel module: the init function loads and registers our hook function (returning a negative error code if it fails) and the fini function unregisters it. Then we only need to use the module_init and module_exit macros to tell the kernel that these are our module initialization functions. The __init and __exit keywords are used if this is built into the kernel: it means that the init function will be discarded after boot, freeing memory, and that the fini function will never be needed at all, and hence should not be included in the kernel image.
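The article does not show the build step; for a 2.4-era kernel, external modules were typically compiled with a single gcc invocation like the following (the kernel source path and flags are typical for that era, but adjust them for your tree):

```make
# Hypothetical Makefile for building linuxmag.o against a 2.4 kernel.
# KDIR must point at your configured kernel source tree.
KDIR := /usr/src/linux

linuxmag.o: linuxmag.c
	gcc -D__KERNEL__ -DMODULE -I$(KDIR)/include -O2 -Wall -c linuxmag.c
```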
Testing Our Example

Let's look at what happens when we install our module and test it using the ping program:

# insmod ./linuxmag.o
# ping -c1 linuxcare.com.au
PING linuxcare.com.au (203.29.91.49): 56 data bytes
64 bytes from 203.29.91.49: icmp_seq=0 ttl=249 time=204.0 ms

--- linuxcare.com.au ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss

Now let's send a packet of length 200 (which means we must use the ping option -s172, since there are 20 bytes for the IP header, and 8 for the ICMP header):

# ping -c1 -s172 linuxcare.com.au
PING linuxcare.com.au (203.29.91.49): 172 data bytes
ping: sendto: Operation not permitted
ping: wrote linuxcare.com.au 180 chars, ret=-1

--- linuxcare.com.au ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

And from dmesg we can see:

# dmesg -c
linuxmag: dropping packet

A packet of length 100 is corrupted (the ICMP checksum will be incorrect after we've modified it), and so we will receive no reply:

# ping -c1 -s72 linuxcare.com.au
PING linuxcare.com.au (203.29.91.49): 72 data bytes

--- linuxcare.com.au ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

And once again dmesg shows our little message:

# dmesg -c
linuxmag: corrupting packet

If you were to do a tcpdump on a remote machine, you would see the modified packet on the wire.
Beyond Our Example

Hook functions can return things other than NF_ACCEPT and NF_DROP. You can return NF_STOLEN, which means "I've taken control of the packet, so don't refer to it again." This is different from NF_DROP, which tries to free the packet using kfree_skb(). You can also return NF_REPEAT, which is like NF_ACCEPT, but calls this hook function again, rather than moving on to the next one. Finally, you can also return NF_QUEUE, which allows the packet to be queued for asynchronous packet handling. If a handler is registered (for IP, this is in net/ipv4/netfilter/ip_queue.c) then it will be handed the packet, and then processing will finish. At some later time, the packet will be reinjected, and processing will continue. This is a very useful technique for dealing with packets in userspace, where the kernel cannot wait while the processing is going on. In fact, if ultra-high speed is not a requirement, you can do everything you would do in the kernel in a simple userspace program, using James Morris' libipq.

Where to Find Out More

As well as building on top of the netfilter framework directly, there are elements which already exist that provide higher-level functionality for IP (especially for packet filtering). You can find details on all these in the netfilter-hacking-HOWTO, which is available in my Unreliable Guides collection at. The mailing list for serious kernel network development under Linux is called netdev, and is hosted by SGI: netdev@oss.sgi.com. There is also a netfilter mailing list, which is hosted by the SAMBA team and can be found on one of the three netfilter mirrors:

*
*
*

The netfilter core team generally does not answer netfilter help requests that are sent to them directly, so these resources are your best starting point. Happy hacking!

Paul "Rusty" Russell is the Linux kernel IP packet filter maintainer, and gets to develop cool networky stuff for the Linux kernel. He can be reached at paul.russell@rustcorp.com.au.
Is there any way to declare .NET events in IronPython? Here is a simple example in C# that "abstracts" a button pressed event into a "hello" event:

public delegate void HelloInvoked();

public class HelloControl : Panel
{
    public event HelloInvoked Hello;

    public HelloControl()
    {
        Button b = new Button();
        b.Text = "Hello";
        b.Click += new EventHandler(b_Click);
        b.Dock = DockStyle.Fill;
        this.Controls.Add(b);
    }

    void b_Click(object sender, EventArgs e)
    {
        if (Hello != null)
            Hello();
    }
}

I can do something equivalent with Python functions:

class HelloControl(Panel):
    def __init__(self):
        self.Hello = []
        b = Button(Text="Hello", Dock=DockStyle.Fill)
        b.Click += self.on_Click
        self.Controls.Add(b)

    def on_Click(self, sender, e):
        for h in self.Hello:
            h()

def Foo():
    print "hello"

hc.Hello += [Foo]

But for consistency with my other C# code I'd like to use events if possible. Any thoughts?

Many thanks,

Michael
Just a Whole Bunch of Different Tests

I've been working on a bunch of longform obligation pieces and while they're a lot of fun, they're also steadily driving me insane. So I took a day off to write about all of the kinds of automated testing I know about. I'm defining tests here to be "an independent verification program that, as part of verification, executes the code we want to verify." This means types are not tests, as they don't involve execution of the code, and contracts are not tests, because they're not executed as an independent program. It also means that we're (for the purposes of this essay) excluding things like load testing or performance testing. I tried to include some categorization and links and stuff but if you have a better reference for any of these, feel free to email me. All examples are written in pseudo-python with pytest unless otherwise noted.

Automanual Tests

Automanual tests are the most common type of automated test. They're any tests where the setup, input, and assertions are all exactly specified by the programmer. This is in contrast to generative tests, where the testing program is allowed to determine its own parameters. The advantage of automanual tests is that they are easy to write, are (supposed to be) deterministic, and don't need heavy infrastructure. The disadvantage is that each one only tests a tiny sliver of the overall state space, meaning that they don't scale well. Agile and XP practices really heavily emphasize automanual tests, which has mixed benefits. Agilists tend to talk about a testing pyramid which consists of many unit tests, fewer integration tests, and fewest acceptance tests. In other disciplines, automanual tests are often used as "base cases" to guarantee a system does what you want it to in the happy path, and to check complicated edge cases that would be too hard or specific for generative testing.

Unit Tests
Also called: Regression tests, mock tests

A test that tests a "unit".
There's no consensus on what a "unit" is or the boundary between a unit test and an integration test. I think a good difference is that unit tests are unaffected by emergence, or complex behavior that arises from multiple interacting components. As a corollary of this, unit tests cannot test connections to third party services or significant side effects. Agilists suggest that unit tests are supposed to be completely isolated from each other and complete extremely quickly: a hundred unit tests should complete in less than a second.

def test_flatten_list_of_lists():
    assert flatten([[1], [4, 1], [9, [10]]]) == [1, 4, 1, 9, 10]

To keep unit tests small, fast, and isolated, people sometimes replace code that's not part of the "unit" with doubles, minimal code substitutions for the purposes of testing. There are different kinds of code doubles: stubs return canned values, mocks assert they were called with the right values, and spies combine stubs and mocks.

One common use case of unit tests is to catch regressions. If you find a bug in your program, you write a unit test that checks the bug isn't there. If you later refactor the code and accidentally reintroduce the bug, the unit test will catch it.

Integration Tests
Also called: Contract tests, boundary tests

A test that tests whether units "integrate". As mentioned before, the boundaries between units, integrations, etc. can get very fuzzy. Most people seem to implicitly use it as shorthand for "tests that might fail due to emergence." If an integration test spanning two units fails, the issue could be in either of the two units, or both of them, or in the specific ways they interact. They may involve side effects or I/O, but calls to third party services are usually stubbed out.
def test_grade_job_updates_grades():
    # A bunch of setup
    assert student.class_grade == "F"
    ProblemSet.grade_for(teacher.class)
    assert student.class_grade == "F-"

Integration tests test more than unit tests, but have three comparative disadvantages. First of all, integration tests are brittler and more likely to flake, or fail nondeterministically. Second, they can be considerably slower than unit tests. Finally, they don't localize errors well: it's often hard to figure out what part of the code is causing the integration test to fail. For all of these reasons, the Testing Pyramid style recommends you write more unit tests than integration tests.

You might hear that integration tests don't test as much as unit tests do. This is technically untrue. Rather, a given integration test covers a smaller percentage of the total possible integrations than a unit test covers the total existing units. This is because emergence is a hellishly complicated problem and many, many bugs hide in the interactions between units. For this reason, it's generally a good idea to do generative integration testing instead of automanual testing.

You might see integration tests called "contract tests", as they test the "contract" between the code and the caller. I really hate this name, as contracts are a well-defined verification discipline. Calling integration tests "contract tests" is completely missing the point of contracts.

Acceptance Tests
Also called: End-to-end tests, feature tests

A test that only interacts with the program through the public API. For a web app, this could mean running a fake browser and simulating clicking buttons. For a script, this could mean invoking it from a shell and checking the file it modified. The purpose of acceptance testing is to check that a "user" will see the program behave correctly. Hence the name: will the user accept the program as meeting the required functionality?
def test_clicking_button_changes_page():
    driver = webdriver.Firefox()
    driver.get("")
    driver.find_element_by_id("button").click()
    assert driver.title == "New Page"

Acceptance tests make up the top of the Testing Pyramid. They're considered important but should be used sparingly. They are even slower than integration tests and can have serious nondeterminism issues. This is especially true with browser automation, as the browser still has to wait on server response, load all of the assets, run js, etc. And what happens if the server never responds? The test would hang unless you include a timeout, but then what if the server responds, just slowly? And we haven't even discussed the challenges of simulating the entire system, or dealing with third-party interactions. Once we get to the level of "product" we're dealing with reality and reality is pretty damn hard to test.

Another problem with acceptance tests is that it's hard to isolate them, especially if the program is supposed to have effects, and especially if the driver also has state.

Feature Tests
Also called: Gherkin tests, Cucumber tests, behavior tests

A test that also acts as documentation for the user. Feature tests are defined by their syntax, not their semantics or scope. This was popularized by the BDD movement and the Cucumber tool in particular. Their goal was to make it possible for business clients to both understand the tests and, hopefully, write some themselves. Here's an example of what Gherkin (the testing DSL) looks like:
Since the snippets can be arbitrary, this means that you could describe unit tests, integration tests, and acceptance tests in this style. Feature tests simultaneously try to be tests, specifications, and documents. Unfortunately, all three of these have different requirements, f.ex a good specification isn’t necessarily a good test. This can make striking the right balance very challenging, and in practice this has impeded wider adoption of feature testing.1 Diff Tests - Also called - Snapshot tests, record tests, comparison tests A test that compare the output against some reference data to see if they match. What makes this different from ‘regular’ automanual tests is that a failure could iindicate the reference is out of date. For example, if you are diffing against an html output, changing the internals of your server shouldn’t change the output but changing the layout of the webpage should. If you do the latter, the proper way to fix the failing test is to update the reference output. Diff testing is heavily used where there’s no way to “break down” the output into decoupled parts, such as screenshots or graphics. It’s also used for comparing large amounts of structured data, such as html. Parameterized Tests A test template that takes a set of parameters and generates a test from that. The tester manually determines a list of such sets to pass in, with the intent of checking multiple cases. As an example, a single unit test versus the parameterized version, using the syntax of the DDT Python library: def individual_test(): assert 1 + 2 == 3 assert 1 + 3 == 5 assert 1 + 4 == 6 @data(*[(1, 2, 3), (1, 3, 5), (1, 4, 6)]) def parameterized_test(a, b, c): assert a + b == c In addition to being more compact, the parameterized test provides more information. In Python, the first test will fail on the second statement and never check the third statement. 
The parameterized test, though, will generate three subtests and evaluate them all, correctly surfacing both errors. Parameterized tests are usually unit tests, but this seems more of a social thing than a technical restriction. Most libraries will let you load a file of values in for the test. They still count as automanual tests, though, as a human is expected to come up with all of the individual cases. Generative Tests In generative tests, instead of specifying the whole test the programmer defines an assertion, a test template, and input rules. The program is then free to search for an input that makes a failing test. The search can be exhaustive, meaning it will check every possible input, or nonexhaustive, where it only tries a subset. Since most functions can take an infinite number of possible inputs, exhaustive generative tests are pretty rare. Generative tests are more powerful than automanual tests, as they explore a much wider space. A unit test might test one input, while a property test might check several hundred. For this reason, generative tests are often better at finding edge cases or integration bugs than humans are. The price is specificity: while automanual tests give you complete information about a single input, generative tests only give you partial information on a range of inputs. They also often require more testing infrastructure than automanual tests do. A common concern with generative testing is that, since most are probabilistic, they might have nondeterministic failures. For this reason most testing libraries track failing cases to specifically retry on future runs. Property Tests - Also called - Property-based tests, PBT, Quickcheck, Invariant tests Tests which check that the code preserves some invariant on the input space. This is the most common type of generative test. 
An example, using the Hypothesis Python library:

@given(lists(integers(), min_size=1))
def test_f_in(l):
    assert f(l) in l

@given(recursive(booleans(), lists))
def test_flatten_reduces_depth_by_one(l):
    assume(max_depth(l) > 1)
    assert max_depth(flatten(l)) == max_depth(l) - 1

The first asserts that for the user-created function f, f(l) will never return something outside of l. The second asserts that the flatten function will always reduce the nested depth of an arbitrarily nested list of lists by one, ie max_depth(flatten([[], [[]]])) == 2.

Most PBT frameworks also provide shrinking, where they take a failing test and find the smallest possible failing input. For example, if we are asserting that for all integers 2*x > x, Hypothesis might first find x = -12491 as a counterexample, but would quickly shrink that down to x = 0.

Finding good invariants to test can be a very difficult problem. Property testers often collect ideas for invariant "tactics" they can apply to many kinds of problems.2 A popular one is the encode/decode invariant, where you check that a property is reversible. Another is the oracle invariant, where you decide in advance what the answer is going to be and back-construct the test to match it. Many property testers frame PBT as verifying mathematical properties, but that's certainly not the only way to think about it.

Fuzz Tests

Tests where you don't assert anything on the output. If the program doesn't do something "stupid", like crash or memory leak, then the test passes. One of the oldest forms of testing, dating back to when programmers would pull punch cards out of the trash and feed them into programs. Fuzzers are usually classified by how they generate their inputs. Dumb fuzzers use random junk as inputs, like passing ]9{{{{ as JSON. Structured fuzzers pass in valid data to confirm that the program handles them properly, like passing {"/*":"*/","//":"",/*"//"*/"/*/"://\n"//"} as JSON.
Since that's valid JSON, the program shouldn't violate any internal assertions or invariants in processing it. Genetic or evolutionary fuzzers adapt their input to the program's responses, for example by measuring which inputs lead to higher memory consumption. The most famous fuzzer in this category is American Fuzzy Lop, which is smart enough to generate valid JPEGs from first principles.

Fuzz testing is heavily used in systems programming and infosec. For higher-level systems people usually fuzz via a mix of property testing and code contracts. Combining fuzzing and contracts makes for a pretty decent integration test.

Transition system tests

- Also called - Stateful tests, Rules-based stateful tests, Model-based tests

Transition system tests generalize PBT. The test is modeled as a state machine and can choose its own transitions. That way not only can the test search for failing inputs, it can also search for failing steps. This is still in the realm of "wildly experimental" so here's some handwavey pseudocode:

@rule(i=integer())
def add_to_stack(i):
    stack.attempt_push(i)

@rule
def pop_from_stack():
    stack.pop()

@rule
def add_top_two():
    i = stack.pop()
    j = stack.pop()
    stack.attempt_push(i + j)

@invariant
def stack_is_unique():
    len(stack) == len(set(stack))

@test
def start_from_empty():
    apply_rules(empty_stack())

In this case we have an (admittedly arbitrary) implementation of a stack that's supposed to be unique. The test is required to start with an empty stack, but otherwise is allowed to apply whatever rules it wants, with whatever values it wants, to whatever depth it wants. While generalized transition testing is mostly unexplored territory, we've historically used it to find complex concurrency bugs, such as with the Go Race Detector. Another special case is model-based testing, where you use a transition system to drive both the code and a simplified code model, then make sure they match.
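The pseudocode above can be driven by a very small, stdlib-only random rule-applier. This is a hypothetical sketch — the UniqueStack implementation and rule set are invented here for illustration, and a real stateful-testing framework would add the input search and shrinking for you:

```python
import random

class UniqueStack:
    """Toy implementation of a stack that is supposed to stay unique."""
    def __init__(self):
        self.items = []

    def attempt_push(self, x):
        # only push values that are not already on the stack
        if x not in self.items:
            self.items.append(x)

    def pop(self):
        # pop the top value, or 0 if the stack is empty
        return self.items.pop() if self.items else 0

def stack_is_unique(stack):
    return len(stack.items) == len(set(stack.items))

def run_transition_test(steps=500, seed=0):
    """Apply randomly chosen rules, checking the invariant after every step.

    Returns the trace of rule names if the invariant breaks,
    or None if no violation was found.
    """
    rng = random.Random(seed)
    stack = UniqueStack()
    trace = []
    for _ in range(steps):
        rule = rng.choice(["add_to_stack", "pop_from_stack", "add_top_two"])
        trace.append(rule)
        if rule == "add_to_stack":
            stack.attempt_push(rng.randint(-5, 5))
        elif rule == "pop_from_stack":
            stack.pop()
        else:
            stack.attempt_push(stack.pop() + stack.pop())
        if not stack_is_unique(stack):
            return trace  # a failing path, to investigate by hand
    return None
```

Because the driver starts from an empty stack and picks its own transitions, it explores paths no hand-written unit test would; a returned trace is the raw, unshrunk equivalent of a failing example.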
I've also seen experiments on using transition systems to cover UI interactions[3] or derive tests from UML diagrams. The biggest issue with this is that the explorable space can get extremely large. You also can't easily shrink the failing examples, so it can be hard to find out what actually caused the bug. Usually people take a failing path test, investigate it, and then write a more specific automanual test. The EiffelStudio IDE tries to do this automatically, but it's relatively crude. The other big issue is that they can require significant infrastructure to set up and run, which means there's a good chance that your testing code will have bugs in it. A lot of people don't like the idea of having to test your tests. However, the payoff can be pretty big for complex systems.

Miscellaneous Tests

Mutation Tests

A means of ensuring your tests are nontrivial and properly cover your code. Mutation tests rerun your other tests on randomly modified versions of the code: if(x) replaced with if(!x), false flipped with true, etc. If your tests still pass, they're probably broken. Mutation testing usually doesn't find errors in your code, but it does find errors in your tests.

Doctests

Tests embedded in your documentation. They're usually unit tests or property tests, rarely more complicated than that. Their purpose is to validate the documentation: if the doctest fails, then your documentation is wrong or out of date.

def add(a, b):
    """
    This subtracts the two numbers from each other.

    >>> add(1, 2)
    -1
    """
    return 1 + 2

This is intended to help keep the documentation in sync with the code. If you update the code but forget to update the docs, the doctest will alert you.

This list is non-exhaustive, but if there's an obvious kind I missed, feel free to ping me.

1. I'm irrationally against feature testing because they've gotten everybody to think that tests are specs, which (like contracts) are a different thing entirely.
2. There's no widely-used term for this kind of thing, but I like "tactic" a lot and that's how I think about it in my head.
3. I call "a script that randomly clicks buttons on the GUI and sees if anything crashes" a salamander, and I have no idea why.
https://hillelwayne.com/post/a-bunch-of-tests/
CC-MAIN-2018-17
refinedweb
2,892
61.67
Is there a way to pre-calculate an object in Python? Like when you use a constructor, just like: master = Tk()

I think what you're looking for is the pickle module to serialize an object. In Python 2 there is pickle and cPickle, which is the same but faster; iirc Python 3 only has pickle (which, under the hood, is equivalent to cPickle from Python 2). This would allow you to save an object with its pre-calculated attributes.

import cPickle as pickle
import time

class some_object(object):
    def __init__(self):
        self.my_val = sum([x**2 for x in xrange(1000000)])

start = time.time()
obj = some_object()
print "Calculated value = {}".format(obj.my_val)

with open('saved_object.pickle', 'w') as outfile:
    # Save the object
    pickle.dump(obj, outfile)

interim = time.time()

reload_obj = pickle.load(open('saved_object.pickle', 'r'))
print "Precalculated value = {}".format(reload_obj.my_val)

end = time.time()
print "Creating object took {}".format(interim - start)
print "Reloading object took {}".format(end - interim)
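In Python 3 the same approach works with the built-in pickle module directly. A rough equivalent of the snippet above (note the binary 'wb'/'rb' file modes, which Python 3 requires for pickle files):

```python
import pickle
import time

class SomeObject:
    def __init__(self):
        # the expensive pre-calculation happens once, at construction time
        self.my_val = sum(x ** 2 for x in range(1000000))

start = time.time()
obj = SomeObject()
print("Calculated value = {}".format(obj.my_val))

with open('saved_object.pickle', 'wb') as outfile:
    # save the object with its pre-calculated attribute
    pickle.dump(obj, outfile)

interim = time.time()

with open('saved_object.pickle', 'rb') as infile:
    reload_obj = pickle.load(infile)
print("Precalculated value = {}".format(reload_obj.my_val))

end = time.time()
print("Creating object took {}".format(interim - start))
print("Reloading object took {}".format(end - interim))
```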
https://codedump.io/share/v59EYUphVuHB/1/pre-calculated-objects-in-python
There are many ways to do this. If you only want to run a few commands, then the Python subprocess module might be best. If you are only working in Python 2.X, then Fabric might be best. Since I wanted 2.X or 3.X and I wanted to run lots of commands, I went with Paramiko. Here is the solution:

import paramiko

IDENTITY = 'path to private key'
REMOTE = 'url of remote'

k = paramiko.RSAKey.from_private_key_file(IDENTITY)

with paramiko.SSHClient() as ssh:
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(REMOTE, username='vagrant', pkey=k)
    # `commands` is a list of shell command strings to run on the remote host
    for command in commands:
        ssh.exec_command(command)
https://snakeycode.wordpress.com/tag/ssh/
RationalWiki:Saloon bar/Archive153

Contents
- 1 It's almost 9 am
- 2 Goat wiki
- 3 April Fools
- 4 Where will America go if the Affordable Care Act is struck down?
- 5 Science or denialism?
- 6 I just had a scary thought
- 7 Diagnosis via internet
- 8 Cover articles and Twitter feed
- 9 Opencourseware class on the NHS?
- 10 Publish or perish
- 11 The Internet looks like crap
- 12 Priest accidentally displayed gay porn during first communion meeting
- 13 Speaking of the Daily Fail
- 14 Scientific literacy test
- 15 1,000,000th edit!!!
- 16 My cache or yours?
- 17 Delusional
- 18 Things that happen in Ireland
- 19 Somebody found his balls!
- 20 FTB has anti-fans and they cite...us?
- 21 Titanic - now in 3D
- 22 Gods as Topological Invariants
- 23 Spring cleaning
- 24 Can you Brits take Monckton back?
- 25 2016 Republican Bogyman.
- 26 Now wash your hands please
- 27 The Hunt for AI
- 28 This seems like the wrong place but
- 29 Conservapedia logo with graffiti
- 30 War on Women
- 31 The internet is useless for solving problems
- 32 Schadenfreude
- 33 Kids in Sweden know what's up
- 34 Sarah Palin was Right: A commentary by the Symphony of Noise
- 35 Have sympathy, please
- 36 If you couldn't hate George Lucas any further...
- 37 Fun-space Clean-up
- 38 Humorous poetry topic.
- 39 KONY 2012: Part II
- 40 Trolling some Paulbots
- 41 Free market double standards
- 42 The French election is turning into a joke
- 43 Are we seeing a change around here?
- 44 Seeing as this the place for unanswerable technical questions

It's almost 9 am

...and my neighbour has been blasting his stereo for the past 5 hours, stopping me from getting any sleep. Please remind me why it would be bad form to go and kill that imbecile. Vulpius (talk) 05:49, 1 April 2012 (UTC)
- its almost 1 am here. ... do you live in central europe? --il'Dictator Mikal 05:51, 1 April 2012 (UTC)
- Finland actually. Vulpius (talk) 05:55, 1 April 2012 (UTC)
- Finland you say?
Yeah, kill him—it wont make much of a dent in your appalling homicide rate. ;) Peter tanquam ex ungue leonem 06:04, 1 April 2012 (UTC) - Good to hear that at least someone is watching Tuomas Enbuske. Vulpius (talk) 06:34, 1 April 2012 (UTC) - Won't the Finnish police do anything if you complain? Proxima Centauri (talk) 06:36, 1 April 2012 (UTC) - A bit late for that, but I'll definitely call them if this keeps up until next night. (Six hours and counting!) Vulpius (talk) 07:28, 1 April 2012 (UTC) - Maybe knock and ask him to be quiet? EddyP Great King! Disaster! 10:31, 1 April 2012 (UTC) - You can get ear plugs from a chemist and they work as a temporary solution. If you use them night after night wax builds up in your ears. Proxima Centauri (talk) 10:33, 1 April 2012 (UTC) - Oh, I tried knocking, repeatedly. My theory is that he had simply passed out or wasn't even home since it had zero effect. The music finally stopped around noon in any case and I managed to take a little nap. Gonna sleep like a log tonight, as long as it's quiet. Vulpius (talk) 15:19, 1 April 2012 (UTC) - If your neighbour wasn't answering the door you should've thrown a rock through the window with a note attached saying "shut the fuck up." Failing that, follow the rock through the window and turn off the sounds yourself. El TajDon't make me do stuff 09:48, 2 April 2012 (UTC) Goat wiki[edit] Is the goat in our logo our April fool's joke or am I missing something?--Bob"What can be asserted without evidence can also be dismissed without evidence." 10:24, 1 April 2012 (UTC) - Joke's on you Bob: the seekrit cabal had a meeting and decided it was time to change to logo. Anarcho Symphony Noise Swatting Assflys is how I earn my living 13:06, 1 April 2012 (UTC) - Wow. So sekrit!--Bob"What can be asserted without evidence can also be dismissed without evidence." 13:43, 1 April 2012 (UTC) - I guess I missed that one completely... 
ħuman 23:05, 1 April 2012 (UTC) - An April Fool's joke, courtesy of your's truly :-) Radioactive Misanthrope 06:08, 2 April 2012 (UTC) - A picture of Andy would have been funnier. Make a note of that for next year. Sophiebecause liberals 10:13, 2 April 2012 (UTC) - I figured we were trying to move past Conservapedia. Radioactive Misanthrope 22:44, 2 April 2012 (UTC) April Fools[edit] I'm outraged RatWikians don't celebrate this great day. While Wikipedia is celebrating it on main page, the lack of humor in this site is appalling. --SupernovaExplosion (talk) 14:26, 1 April 2012 (UTC) - I've been dangling my penis out of my window at oncoming traffic on the road below with a sign that says "honk if you like fishing!" in honour of this great day. El TajDon't make me do stuff 14:31, 1 April 2012 (UTC) - I fucking hate april fool's day. It's an entire day of nothing but paranoia. X Stickman (talk) 14:54, 1 April 2012 (UTC) - Although i find it fun sometimes, i've never been a fan of the day. Also; SN, as you well know MP doesn't get to do anything without a massive fight with cries of NOT A ENCYCLOPEDIA. --il'Dictator Mikal 15:08, 1 April 2012 (UTC) - I'm going to just stay inside and avoid people today. April Fools on a college campus either means apathy or people running around and flashing them or something. ± KnightOfTL;DRyeah, well you fight like a cow! 15:19, 1 April 2012 (UTC) - I wanted to do something on the Intercom, but alas, no mod powers. :( Osaka Sun (talk) 17:12, 1 April 2012 (UTC) - April Fools stuff is usually pretty dumb. One of my Facebook friends posted "I got the job in Australia" & got a mixture of congratulations & "is this an April Fools stunt?" responses (which of course it was). I guess it's just a bit of fun but I'd feel horribly awkward pulling that kind of joke. ΨΣΔξΣΓΩΙÐ Methinks it is a Weasel 20:58, 1 April 2012 (UTC) - Hey, hey! I changed our logo and everything! 
Radioactive Misanthrope 04:48, 2 April 2012 (UTC) - And I quit and became a singularitarian, and Maddox grew up. gnostic 10:54, 2 April 2012 (UTC) Where will America go if the Affordable Care Act is struck down?[edit] The ACA's prospects aren't looking too good. I don't think it will cost Obama the election, though it will kick Health Care Reform back some 20 years, and Obama won't have the same kind of "signature accomplishment" that he previously had (one could argue Dodd-Frank and DADT's repeal though). It may also guarantee a Republican Congress. What do people here think? Mr. Anon (talk) 23:37, 1 April 2012 (UTC) - Your insights seem to be against the grain from what most pundits are saying. They general statement out there is that if it goes down, it will be GOOD for the president and a democratic election in general, because he says "we tried, they blocked us". And he really did give in to a whole bunch of stuff, which will come back to bite the repubs. most of america wants reform. something like 70% is FOR this package. they will not look to favorably on a RIGHT WING "all men's club" Court overturning it.-- Godot What do cats dream about? 23:45, 1 April 2012 (UTC) - The short answer is "to hell in a handbasket." Radioactive Misanthrope 23:51, 1 April 2012 (UTC) - General strike. Anarcho-syndicalist revolution now! Secret Squirrel (talk) 00:00, 2 April 2012 (UTC) - I hear the Mediterranean is nice this tyime of year. Of course, it could always go somewhere where it's off-season and avoid the crowds. P-Foster Talk ""Santorum is the cream rising to the top."" 00:03, 2 April 2012 (UTC) - If it goes down quietly, I am going to be very angry. I want to see angry people, I want to see teach-ins, I want to see strikes, I want to see the whole works. I really want to see people caring about this, not just going 'oh, too bad. Oh well. I wonder what else is on TV.' 
± KnightOfTL;DRlavishly loquacious 00:19, 2 April 2012 (UTC) - People should be angry, not just about it being struck down but about what it became in the first place. We shouldn't be compelling people into private insurance's customer base just to subsidize a broken and parasitic system for another twenty years. If we want to actually reform anything, we need to criticize the way medical services are produced: the anti-competitive insurance system, intellectual property rights for pharmaceuticals, ridiculous educational requirements that do not nothing but limit the pool of qualified doctors... But yeah, general strike. Do that. Syndicalism (talk) 00:29, 2 April 2012 (UTC) - ridiculous educational requirements that do not nothing but limit the pool of qualified doctors If you haven't been sufficiently educated about the human body, you aren't qualified to be a doctor. QED. - It's not that the educational requirements are ridiculous, it's that (a) the cost of that education is ridiculous ($500,000+ for the full medical doctor route) and (b) medical schools have ridiculously high standards. There is little functional difference between the top 2% med schools currently accept, and the top, say, 10%. - But, having that education is important. How can a doctor be qualified if they haven't been sufficiently educated? And what's the alternative to ridiculously educated doctors? Pulling volunteers off the streets, and having them intern? Practical experience alone is not enough, because that ridiculous education provides them the knowledge to understand why so-called alternative medicine is and will always be a pile of horseshit. Otherwise, there's nothing to stop an under-educated doctor from being bamboozled by every quack and huckster out there. Radioactive Misanthrope 06:05, 2 April 2012 (UTC) - (EC) Very, very difficult to say. It's hard to even tell what the court will rule based on oral arguments and past decisions. 
There is some evidence that the side that gets questioned harder is likely to lose, in which case the administration is in a rough spot. But frankly, this case is probably not about the law, and so precedent of both personal and legal varieties are not reliable. There has only been one case of a similar scope with this court - Citizens United (maybe with the addition of Heller), although SCOTUS remains very similar to the one that decided Bush v Gore, so perhaps you might also include that. While the Roberts court has a record of partisan close-split decisions, almost all of them are defensible under the terms of the law - except these high-profile political cases. - SCOTUS has, until recently, always been extremely careful about major decisions. Things like Brown and Roe had serious weight to them in favor of the majority. But the turn taken with Citizens and (arguably) Bush seem to indicate that this concern is less important to the current justices. More plainly: it's starting to seem like they no longer care much about their reputation as neutral arbiters. However, Kennedy seems like he genuinely does care about how he is perceived, and it's very possible that he will be unwilling to enact another decision of this political scope along such a nakedly partisan basis. - Not a goddamn person knows which way it's going to go, in other words. - If the mandate does get struck down, it seems likely that Obamacare will be gone as well, rather than that the mandate will just be severed. Again, though, this is just speculation, and it may be that there will be a forced compromise on this issue (something that has happened in the past with major decisions). - So anyway, operating on the very uncertain premise that the mandate will be struck down, and the uncertain premise that Obamacare as a whole will also be struck down, and the reasonable assumption the decision would be 5-4: - First of all, this would cause some serious immediate problems for the GOP. 
The popular provisions of healthcare would also be repealed, including the ability for young adults to stay on their parents' plan and the prohibition on discriminating against pre-existing conditions. Several million people would therefore be shoved off their healthcare. Insurers aren't going to be able to just kick folks off, of course, but rhetorically it will be very effective to declare how many millions are going to lose coverage. The GOP, having now "won," will be under pressure to do that "replace" part of their "repeal and replace." However, they're going to have a very nasty fight. They can't actually re-institute those elements of Obamacare without coming up with a way to pay for it, and such a way would be very unpopular. Democrats might be pressured to work out something, but practical concerns trump the political ones: it's a damned hard problem to solve, especially with the burden of the past few years' hysteria on their backs. - The GOP will get a slight boost for being able to label Obama as the "unconstitutional President," but the 5-4 split will undermine this a lot: one of the problems with Republican justices sacrificing prestige for partisan advantage is that they've effectively and dramatically downsized the club they'd be able to use to beat Obama with. If the decision is 6-3, then Obama will be hurt a lot worse, but this is unlikely. - Overall, it seems likely that Obamacare's loss would help the Democrats a minimal amount in an electoral sense. But it would be a severe loss for America, since universal healthcare won't be passed anytime soon (no one is going to risk voting to end an entire industry).-- talk 00:36, 2 April 2012 (UTC) - Unrelated, but Universal Health Care doesn't necessarily mean the end of the health insurance industry. Australia's UHC, for example, relies heavily on private insurance. Mr. Anon (talk) 02:08, 2 April 2012 (UTC) - A few quick points: Amazing, some of the comments here are just totally out of touch. 
The coalition that passed Obamacare was booted out of Congress nearly two years ago -- for passing Obamacare. Secondly, the Republicans have had absolutely nothing to do with the failures of the President and his Party, their overreaching, the loss of public support, or SCOTUS tossing their imaginary accomplishments. Failure to recognize any of this is like Republicans arguing today Bush was right. To put it crudely, blaming others for your own stupidity and failures is not leadership. And finally, if the Court rules against the Obama/Reid/Pelosi cabal of collective dictatorship, Obama's in serious trouble in November, since he's pissed away four years on nonsense and hasn't done jack-diddly for 6 million yet unemployed. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 02:39, 2 April 2012 (UTC) - Above statement implies HCR is his only accomplishment. He has Wall Street Reform (which 60% of Americans support, widely considered among his greatest legislative accomplishments), Hate Crimes Bill, Better Pay Act, Stimulus, Automaker Bailouts, Bin Laden's death, end of Iraq War, scheduled end to Afghanistan War (which even GOP candidates support now), funding of Stem Cell research, Sotamayor and Kagan, and lowering of deficit (I saved this for last because it is true. 2009 is Bush's budget). Mr. Anon (talk) 02:45, 2 April 2012 (UTC) - Note that I'm primarily concerned with the actual state of health care in America. FDR also had some of his signature accomplishments overturned by the Supreme Court, as did Wilson, so this won't have too much of an impact on Obama himself. Mr. Anon (talk) 02:48, 2 April 2012 (UTC) - If it get's tossed, Healthcare goes on the back-burner, except for the crisis the Democrats created for themselves when premiums paid to their donors -- healthcare insurers, skyrocket. And the problems with Dodd-Frank. 
While depositors, account holders, and local mom n' pop banks got screwed, the big Wall Street crooks -- Obama's donors again, are bigger and more powerful today than ever. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 02:56, 2 April 2012 (UTC) - Youre absolutely right about Dodd-Frank. These attempts always end up creating nothing but barriers to entry and competition. I don't consider that any kind of win, not that it's really an Obama thing. Banks got the same treatment under republican administrations. And the FDR situation isn't really the same. It worked out OK for him, because it made it look like he was fighting these old, entrenched interests. We don't have the same perception of Obama, and it'll just confirm what the right already believes and make the left more skeptical. We won't have the energy to try this a second time during this administration. Syndicalism (talk) 03:14, 2 April 2012 (UTC) - I agree that there is a lot more Dodd-Frank could have done, but it was an important legislation that restored much of Glass-Steagal (hope I spelled that right) and made it so that taxpayers never have to bail out Wall Street again. It also regulated the Federal Reserve and ended "too big to fail" banks. Not to mention establishing the Consumer Financial Protection Bureau. Mr. Anon (talk) 03:45, 2 April 2012 (UTC) - You raise a good point though: FDR and Wilson didn't face much serious opposition. Nonetheless, Obama does have other things to fall back to, as I pointed out. Mr. Anon (talk) 03:50, 2 April 2012 (UTC) My goodness, every time I see a post from Robbie I prepare for brain cell loss. "Cabal of collective dictatorship?" Are you trying to make yourself look like an utter dumbass? Osaka Sun (talk) 04:53, 2 April 2012 (UTC) - Eh, I have no beef with obvious trolling. 
To be fair to him, Pelosi/Reid/Obama were arguably the most productive congress of all, with Obama being better at getting congressional votes than any other president (). However, Rob, you forget that congresses are directly voted by the people. Guess Republicans were just that unpopular. Mr. Anon (talk) 05:00, 2 April 2012 (UTC) - Dodd-Frank has serious issues; in my town, two very well managed local banks with long histories both we're taken over or forced to merge when bank regulators forced them to raise their reserve requirements to pay for the TARP program (the cost of TARP was born by the banking industry as a whole, not the US Treasury. IOW all 14,000+ banks in the US, and their depositors & customers, bore the costs for the 5 sinners on Wall Street, BoA, Goldman Sachs, JP Morgan, etc.) Those 5 "too big to fail" just three years later are now twice as big. The crooks were rewarded while the little guys got screwed. The Volker Rule still allows wp:proprietary trading, so Glass-Steigal has not been re-instituted. Too big will be even worse next time. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 20:41, 2 April 2012 (UTC) Science or denialism?[edit] Here's something for those who like their science controversial. The latest episode of the Brain Science Podcast interviewed Bill Uttal, who has made a reputation for himself as the gadfly of cognitive neuroscience. Some of Uttal's claims verge on or cross into outright denialism, IMO. (Here's a review of one of his earlier books that I largely agree with.) However, he's bringing up a lot of good criticism in the process. He also tackles some more uncontroversial stuff toward the end, like fMRI being used for lie detection. Well worth a listen/read. 
Nebuchadnezzar (talk) 07:29, 2 April 2012 (UTC) I just had a scary thought[edit] It was pretty obvious that Mitt Romney was the Republican anointed one this US presidential election cycle, but since Santorum garnered more votes than anyone googling his name would expect might it be that next time round the GOP might decide he's their man? It's not like they've got a whole heap of better people waiting in the wings. The scary part is considering how cyclical party democracies tend to be, he might actually win in 2016. President Frothy... it scarcely bears thinking about. --JeevesMkII The gentleman's gentleman at the other site 11:19, 31 March 2012 (UTC) - This is actually an interesting facet of the GOP race: there's nothing but severe elite pressure to get Santorum to drop out. Staying in can't hurt him very much, but it can help him a lot. If he keeps on winning a few states here and there and keeping his name out front, he can be the "next in line" person in 2016. While not a slam-dunk, that's a big point in his favor next time around. And of course even if he doesn't run, he can still sell books and stuff in the meantime with his raised profile. He's just spending other people's money to campaign, after all, so why not go on a nationwide vanity tour on his donor's dime? Worked for Gingrich, after all.-- talk 03:20, 1 April 2012 (UTC) - Biden said somethings very interesting on Face the Nation today (a portion seems edited out of the transcript). Look at this, "..this is not your father's Republican party. This is a different party than I'm used to. And I've been around for a while.? I mean it's just a different...it seems there's almost a different language. .." nobsDebate topic: Should AIG reorganize, get into the healthcare insurance business, and purchase FANNIE MAE securities with the insured's premiums? 17:46, 1 April 2012 (UTC) - I know lots of rich people. 
A few of them have "come out" and said that, while they think the republican party has gone completely insane, they will continue to vote republican because they believe it will let them keep more money. A friend's father told me that voting for McCain with Palin was the most regrettable vote he'd ever made. And this year, he'll be voting for Romney. What's he think of Romney? A shameless shapeshifter...with the best chance of lowering his taxes. Occasionaluse (talk) 17:07, 3 April 2012 (UTC) What's that got to do with the price of fish? -P-Foster Talk ""Santorum is the cream rising to the top."" 17:49, 1 April 2012 (UTC) Diagnosis via internet[edit] Not really sure how to describe what happened, fit or seizue probably isn't accurate and 'blacked out' doesn't really describe it either. I will go with blacked out. Last night I blacked out in my bathroom. I hadn't been feeling well, and I needed to be sick. Went to the bath room, and in the process of vomiting caused some pain in my chest. A few seconds I felt my self going all dark so I sat down on the floor, and found my self in some of kind weird disorienting dream that I couldn't come out of. I evnenually did, coming round with my flatmate standing over me, asking me if I was ok. This has happened before, every year or so. When it first happened, in my early teens, I was diagnosed with epilepsy. A few years back, a doctor told me I wasn't eileptic and probably never was. Black outs remained unexplained. Thse back outs usually happen when i'm eating and something goes down wrong causing pain in my mid chest area. They have occurred without this happening but there no witnesses to the events,so i am unsure if it was the same thing. Any doctors in the house? AMassiveGay (talk) 08:14, 2 April 2012 (UTC) - You live in UK, right? See a doctor and get it sorted properly because unexplained blackouts could kill you as a primary or secondary consequence. 
And, anything "mid-chest" needs urgent attention; if you'd rung NHS Direct they'd have sent an ambulance round like a shot. Генгисmutating 08:48, 2 April 2012 (UTC)
- I have been to the doctor on other occasions - this has been happening for years. They tell me nothing, hence me asking here. And I should clarify the mid-chest thing - it's more like pain in the oesophagus rather than a heart attack type thing. In most cases it seems to be the trigger. AMassiveGay (talk) 09:15, 2 April 2012 (UTC)
- IANAD (I am not a doctor) but I have learned me some things about neurology and it sounds like deglutition syncope. Probably nothing serious, but you need to go to the doctor. Nebuchadnezzar (talk) 09:44, 2 April 2012 (UTC)
- Ugh. I do not understand this approach. You asked actual medical professionals and they "tell [you] nothing", which I'll generously take to mean that they've not found any cause, although it could as well mean "I'm not inclined to listen to what they say" -- but you figure on a web forum you'll find the answers? This happens in my profession too and it's infuriating. If you just want sympathy then write "I don't want any actual answers, just make sympathetic noises" so that people know that. If you want answers, you're asking the wrong people. Why is this so hard to understand? 82.69.171.94 (talk) 09:53, 2 April 2012 (UTC)
- When I say they tell me nothing I mean just that. All I have been told is it's not epilepsy, the original diagnosis. No further tests or anything. I assume it is because it is nothing serious, and I am loath to bother the NHS any further - they are very busy. If I felt like my health was in any serious danger I would go to the doctor. I am aware that there are folk on here who have a wide range of knowledge who might be able to suggest areas to look at. It's this or googling stuff I don't understand. BON - please try to be less of a dick. AMassiveGay (talk) 10:36, 2 April 2012 (UTC)
- Dissociative attack?
CrundyTalk nerdy to me 15:59, 2 April 2012 (UTC) - Dissociative attack doesn't sound like it fits. Deglutition syncope sounds closer. I am leaning to towards vasovagal syncope as fits with the pain after swallowing as trigger. I would not be concerned if it weren't for the frightening 'dreams' that i experience when unconscious. Is that normal for folk when blacked out? Maybe these are visions and I am a prophet of some kind. AMassiveGay (talk) 18:34, 2 April 2012 (UTC) - Speaking of which, my pug had a seizure yesterday morning. Apparently the breed is quite prone to it. I guess he'll never be able to apply for a driver's licence now :( CrundyTalk nerdy to me 11:39, 3 April 2012 (UTC) Cover articles and Twitter feed[edit] We have two recent cover articles, 101 evidences for a young age of the Earth and the universe (today) and Freeman on the land (a week ago) which haven't been plugged via @rationalwiki yet. Who runs that thing? Please plug. TYVM :-) BTW, what article shall we hound to cover status next? What's nearly cooked and just needs a week of peer review? WND is good, but has severely out of date bits, and is of a structure that can go out of date quickly ... - David Gerard (talk) 14:02, 2 April 2012 (UTC) - Osaka Sun runs the official Twitter account. Radioactive Misanthrope 14:04, 2 April 2012 (UTC) - The Twitter feed is looking a bit stale. Is there any way to wire up mediawiki to twitter to tweet stuff automatically, like WIGO:CP posts or something? CrundyTalk nerdy to me 14:15, 2 April 2012 (UTC) - Spaminating is probably bad. One or two tweets a day by hand is fine, except responses to people - David Gerard (talk) 16:21, 2 April 2012 (UTC) - There are RSS feeds set up for WIGO, I don't think broadcasting those via Twitter is the best idea. theist 16:32, 2 April 2012 (UTC) - Sorry, I've been in final exam mode for the past two weeks. I'll be back to full output in the next few days. 
Osaka Sun (talk) 16:43, 2 April 2012 (UTC) - Perhaps you could share the password with some other users? (hmm elections for who gets the password?) Not me, though, I hate twitter. Sophiebecause liberals 19:37, 2 April 2012 (UTC) - I refuse to use it too. gnostic 12:12, 3 April 2012 (UTC) - I refuse to use it until I understand it fully. (I understand there may a logical flaw there somewhere.)--Bob"What can be asserted without evidence can also be dismissed without evidence." 15:09, 3 April 2012 (UTC) Opencourseware class on the NHS?[edit] Does anyone know where I could find an online class/resource (sort of like opencourseware) that would have a class on the NHS? So preferably health economics? I've found there is only so much I can learn from the wiki article. Lectures always help me pick up stuff more. άλφαΤαλκ 15:54, 2 April 2012 (UTC) Publish or perish[edit] Good post on said phenomenon. Always brings to mind the words of Jacques Barzun: "Everybody shall produce written research in order to live, and it shall be decreed a knowledge explosion." Nebuchadnezzar (talk) 17:01, 2 April 2012 (UTC) - Funding is always going to be an issue and I don't think that there's a system that will work to satisfy everyone and be cost-effective. Even giving pre-tenure academics a set amount of funding causes issues because "hey, you gave it your best shot" is hardly good compensation for the failures that happen all the time. It'd be converting the low-probability high-impact science into a randomised talent show that chews people up and spits them out when it turns out that work that was going to fail from the off actually fails. What would be needed, more than a funding change, is an overall attitude change that says "hey, negative data is a good thing". Fuck, I'm pretty sure some people have tried and failed the stuff I do many times before, but I have no idea about it. 
In fact, I'm sure of it because I wasted three weeks on a prep that other people had tried but had never been properly written up as a failure ("So I tried to make Rh(I) precursor complex..." - "Oh yeah, we tried that reaction, it just produces brick dust at the end, it's crap and isn't stable." - "Really? Thanks for telling me, guys!!!"). - Also, I'm not convinced that the retraction rate is the best metric to judge by. At least not without a lot of qualifiers. Whether the proportion of outright malicious fraud is going up, for instance, as that would represent only a fraction of all retractions. moral 11:22, 3 April 2012 (UTC) - I saw one comment to the effect that the increase in retractions is a sign of better watchdog efforts (e.g., online access to journals, science blogging). I definitely agree on negative data -- there are some journals and archives, even specifically for psych, to fight publication bias but they're not very big yet. Nebuchadnezzar (talk) 15:29, 3 April 2012 (UTC) The Internet looks like crap[edit] At least, to iPad 3 users. Apparently, that new resolution it's boasting is too good, because all the images look like blurry, pixelated garbage now, even the high-quality professional ones. RationalWiki, we must not be left in the digital dust! If we are to satisfy the discerning tastes of the technocratic public we cater to, we must DOUBLE the size of ALL our most important logos and buttons! Or, we could, you know, tell Apple users to sod off and preserve bandwidth like everyone else has done for years. --CoyoteSans (talk) 20:03, 2 April 2012 (UTC) - Or we could just, like, not give a shit either way. ωεαşεζøίɗ Methinks it is a Weasel 20:14, 2 April 2012 (UTC) - I'm in the view of not giving a shit as well. Osaka Sun (talk) 20:16, 2 April 2012 (UTC) - Personally, the more shitty the web looks to people using an iPad, the happier I am. People who buy iCrap should be punished for it.
--JeevesMkII The gentleman's gentleman at the other site 20:32, 2 April 2012 (UTC) - The idea that you need higher resolution images for a handheld device makes my head hurt. Vulpius (talk) 20:35, 2 April 2012 (UTC) - I have a philosophical allergy to iProducts. To me they embody the perverse value system of marketing, the redefinition of yourself in terms of your possessions. If you watched or read Fight Club, you know what I mean. --Tweenk (talk) 21:02, 2 April 2012 (UTC) - You could ask my 7 year old iPod. No, really you can, the thing is still running smoothly, even though in my mothers hands. For all the overpricing and design fetishes, Apple produces products that are pretty well rounded hardware- and software-wise. Not getting an error every second day also counts as "user friendly". --★uːʤɱ heretic 23:44, 2 April 2012 (UTC) - I have an iPhone 4, but I got it for free from a family member who thought it was ruined after taking a plunge in the sink. Turns out, not so much. I don't think I would've shelled out for it otherwise, even though I find it pretty useful. Edit: But I fucking hate iTunes. --YossarianSpeak, Memory 00:38, 3 April 2012 (UTC) - So you're living the dream? pathetic 11:05, 3 April 2012 (UTC) - For me, besides the philosophical objections (which mostly result from the fact that I'm an open source person), there are a few practical problems: 1. Can't upload music through USB mass storage interface. 2. You have to pay extra to run your own damn programs on your own damn hardware. 3. Too expensive. --Tweenk (talk) 09:39, 3 April 2012 (UTC) - Interesting. It's something to think about. What really annoys me is how the search engines (especially Google images) have changed their protocol. For instance I just searched images for Taylor Swift (I wanted to see her dress from last night's Country music awards) Please don't ridicule me for this, her dress was pretty... but anyway, all the images are close-up head shots. Google images has done this recently.
Previously, a search would bring up images of many types, some full length, or with other people, or different backgrounds, etc. I don't want 500 images of close-up headshots only. Grumble. Refugeetalk page 21:20, 2 April 2012 (UTC) - I'm not a techie guy, but it sounds like eventually new standards will be required - formats like JPG might not be able to scale adequately from their shrunken file into a pretty image. I suspect that later formats will become increasingly more common as people grapple with this, but it will be a slow change and in the meantime new software will fill the gap.-- talk 00:49, 3 April 2012 (UTC) - Nah. JFIF (the JPEG file format) is fine. We can do better now, but there's no incentive to bother because everyone would have to upgrade or else you don't gain anything (the "flag day" problem). The file size is proportional to how "pretty" it will be when reproducing photographs. There is an existing problem that graphic designers use JPEG (which was designed from the outset only to represent naturalistic images, like trees, rocks, people's faces) to handle things like logos, which are usually abstract not naturalistic and so look crap when JPEG encoded. But that's user error, perfectly nice and well-supported formats named PNG and SVG exist for abstract imagery it's just that some so-called "experts" don't know what they're doing. The article also lists other examples where designers don't know what they're doing, e.g. turning text into images 82.69.171.94 (talk) 08:47, 3 April 2012 (UTC) Priest accidentally displayed gay porn during first communion meeting[edit] Title speaks for itself. Classic. Uke Blue 06:25, 3 April 2012 (UTC) - Whyyyyy Helloooo~ --Dumpling (talk) 06:46, 3 April 2012 (UTC) - The best part is he's trying the Austin Powers "it's not mine!" defence. Though I guess he'd be out of a job if he just came out, so he's got a big incentive. 
--JeevesMkII The gentleman's gentleman at the other site 12:21, 3 April 2012 (UTC) - The funny part is that they need a police investigation over this. "Oh no, a penis! Somebody call the cops!!" Nebuchadnezzar (talk) 15:17, 3 April 2012 (UTC) Speaking of the Daily Fail[edit] Scientist calls out the rag for quote mining his research. Nebuchadnezzar (talk) 18:31, 3 April 2012 (UTC) Scientific literacy test[edit] On Talk:WIGOCP there was a short discussion about a terrible online test that purports to test scientific literacy (it in fact tests one's ability to parrot grade-school facts). It leaves much to be desired. So I had an idea. Even if RationalWiki isn't really the right place for the sort of online tests we all love to hate, I bet there's a sizeable number of us who would nonetheless still be interested in collaboratively building a test that, say, truly tests scientific literacy - even if it has to be tucked away off mainspace. I'm sure such a project wouldn't take long to be infested with trolls and/or go down the path of unrelenting shit-throwing like so many other projects (RW itself, some might say). Thoughts? ONE / TALK 21:10, 3 April 2012 (UTC) - I think any 'scientific literacy' test would test the ability to parrot facts, why not do a reasoning test to detect illogical thinking? For example, what type of illogical thinking does DMorris use in his vision of the liberal paradise? TheCheatI run on alcohol 21:16, 3 April 2012 (UTC) - In semi-defense of the test, it's clearly meant to test your knowledge of some basic facts about science, rather than judge your deeper understanding of science, which a multiple choice test would have a hell of a time doing. It's trivia, really, but not the really trivial kind of trivia; it's scientific cultural literacy. Obviously knowing that the "A" in "AM" stands for amplitude doesn't make one a great scientist, but it is something that your basic educated person should probably know.
(FWIW I got 42 out of 50; not bad for someone who has hardly had any science since high school.) Turpis 3:16 (talk) 21:33, 3 April 2012 (UTC) - Its problem, beyond being mostly trivia, is that it's a test; and therefore is a judgement on what you know/remember at that given moment in time. But that's a flaw in most tests. I can score a 100% on a test today, but probably not tomorrow with the same questions.--il'Dictator Mikal 23:15, 3 April 2012 (UTC) - I loved the speed of light one - choices: a,b,c, or d! Witty. And yeah, this test is not parroting, you gotta know some wide-ranging shit. ħuman 01:55, 4 April 2012 (UTC) - Hmmm, 46/50. Not embarrassed about not knowing who Joule was (or caring about the other couple "names" questions), but an interesting range of questions. ħuman 02:13, 4 April 2012 (UTC) - 48. Missed the one on clouds and the one about catalytic converters. ТyYes? 02:15, 4 April 2012 (UTC) - What level of scientific literacy? What science? Sorry if I'm being pedantic here, but as an Ivory Tower type, I find the word "science" to be useless in practical terms. Nebuchadnezzar (talk) 02:41, 4 April 2012 (UTC) - It's your standard natural sciences stuff (bio, chem, physics, Earth science, etc.) and a decent range of stuff, but nothing terribly in-depth (it's supposed to be literacy, not expertise). I missed a few on classical physics and mechanics (which I never took in school), one on bio and a meteorology one, as well as two I knew immediately after I selected the wrong answer (and I admit a couple I got were complete guesses). At one point they ask how old the Earth is (and yes, something in the neighborhood of 6000 years is a choice). Turpis 3:16 (talk) 03:03, 4 April 2012 (UTC) - Heh, I actually meant that in reply to the OP in terms of devising a new test, but I guess my comment was out-of-context.
Nebuchadnezzar (talk) 03:09, 4 April 2012 (UTC) 1,000,000th edit!!![edit] Based on our current edit rate our 1,000,000th edit should occur in the next two days. You will be able to see it here when it happens. Just in case anyone cares. This is total edits to the wiki, not just those that remain, so spam and other things that were deleted count :-( As a side note a certain wiki starting with C will not have its 1,000,000th edit for at least a month, maybe more. - π 01:11, 1 April 2012 (UTC) - Bugger, so it wasn't me all those years ago? CrundyTalk nerdy to me 14:08, 2 April 2012 (UTC) - No that was using the page edit number on the statistics page, which is a) unreliable and b) once again claiming we have had just over 1,000,000 edits. The revision id number is the only reliable way to tell. Pi 3:14 (talk) 00:12, 4 April 2012 (UTC) - *removes award from userpage* CrundyTalk nerdy to me 08:46, 4 April 2012 (UTC) - I'm so sorry for yer loss, Crundy. Anarcho Symphony Noise Swatting Assflys is how I earn my living 09:11, 4 April 2012 (UTC) My cache or yours?[edit] Is anyone else noticing the random featured article on the front page no longer rotates, it's just stuck on one article? The template still seems to be randomising fine, but no amount of ctrl-shift-R seems to get me a different random article on the main page. Have we broken it somehow, or is it just caching on my end? --JeevesMkII The gentleman's gentleman at the other site 11:38, 3 April 2012 (UTC) - On my pathetic, wannabe, crappy iPad I got Poe's Law immediately followed by Homeopathy and then Non-materialistic Neuroscience. So it's you. Генгисmutating 11:48, 3 April 2012 (UTC) - It caches on the server. ("MediaWiki: There's Always Another Layer Of Caching™.") Tack "?action=purge" on the end of the URL if you want to force rotation.
I did this to check how the blurbs for the new cover articles looked in practice - David Gerard (talk) 12:07, 3 April 2012 (UTC) - I thought we wrote a way to force a given cover story to do that... but that thought seems about five years old so who knows? I may have just done it in html on my hard drive (could I have done that? I doubt it. First theory more likely.). But, yeah. Rotate. A is right next to capslock. Kill capslock. ħuman 02:27, 4 April 2012 (UTC) - All rational persons make Caps Lock another control key - David Gerard (talk) 08:09, 4 April 2012 (UTC) - Nonsense. All rational persons have a second keyboard on their mouse. Radioactive Misanthrope 05:46, 5 April 2012 (UTC) Delusional[edit] Apologies for the Daily Fail link, but is it just me or is this article a belated april fools joke? CrundyTalk nerdy to me 14:26, 3 April 2012 (UTC) - She's cute and all--Goat only knows how many photos it took to get all of those nice-looking ones, of course--but I see dozens of more remarkable-looking people on a daily basis. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 14:31, 3 April 2012 (UTC) - Even by UK standards she's hardly a stunner. A reasonable looking woman who I wouldn't turn down if the opportunity arose but not a "what are you looking at" from the missus. Генгисmutating 14:45, 3 April 2012 (UTC) - Seems to have gone viral. And some good images already. I'm guessing the DM did it on purpose to get some traffic. CrundyTalk nerdy to me 14:55, 3 April 2012 (UTC) - As you say she's nothing remarkable. What I want to know is why Crundy was reading the Daily Mail.--Bob"What can be asserted without evidence can also be dismissed without evidence." 15:01, 3 April 2012 (UTC) - I know! It's Derren Brown's fault. He tweeted it. CrundyTalk nerdy to me 15:10, 3 April 2012 (UTC) - Just a minute. 
The same woman who complains about women treating her unfairly because of the way she looks also writes I use my sex appeal to get ahead at work... and so does ANY woman with any sense It's just the Daily Fail being stupid.--Bob"What can be asserted without evidence can also be dismissed without evidence." 15:07, 3 April 2012 (UTC) - I love the second picture in that article. It looks like the caption should be "Last known photograph of victim. Call 999 if you see this man" Cow...Hammertime! 15:17, 3 April 2012 (UTC) - Thank God I have a ugly mug, muffin tops and a penis. - Btw: Doesn't she say in the article, that she doesn't drink? Is she sending the many many bottles of champgane she gets offered, back or is she just pouring the Dom Perignon on the floor?--Th. Bernhard (talk) 17:34, 3 April 2012 (UTC) - Apparently, dear Samantha has her own website. Генгисmutating 18:39, 3 April 2012 (UTC) - Her face looks weird. El TajDon't make me do stuff 18:26, 4 April 2012 (UTC) - Second article in response to the criticism from the first. "Their level of anger only underlines that no one in this world is more reviled than a pretty woman." Fucking hell... El TajDon't make me do stuff 18:35, 4 April 2012 (UTC) So I've never looked at the Mail's website before[edit] Yup, it's a rag. That said, their coverage of the Oikos school shooting is a pretty impressive collection of photos--all gleaned from other sites, not their own photojournalism, but still, if I were looking for images from the event, that seems to be the place to go. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 15:18, 3 April 2012 (UTC) - At least it wasn't Mad Mel. Sometimes ignorance is bliss. Nebuchadnezzar (talk) 16:17, 3 April 2012 (UTC) - Some hints about reading the Mail without earning 'em dosh. Scream!! 
(talk) 17:02, 3 April 2012 (UTC) In the Negaverse, this rag would be making a legit point[edit] This can be interpreted as an honestly important feminist issue. Although many societal advantages are awarded to women who fit the image of social desirability, those advantages are given for just that. While a tweenager may dream of someday being 'So beautiful, it's a curse!' there are equally as many women out there who cannot advance unless they use their looks to get it, even if they have adequate credentials. They are at a high risk for rape in many areas, and in witness stands their credibility is impacted by their image. So yeah, the shallow advantages of this position mask underlying problems with society. Less magazine-cut women wish they could be looked upon with worship like pretty models, while people who actually are closer to that standard are at higher risks of rape and are dogged by unpleasant stereotypes and those magazine expectations all the time. No way to win. And every person that gives into the trap and uses their bustline to get a promotion (which works even on Wall Street, as a good insider friend of mine informs me; many of her bosses have been instated that way) just makes it worse. ± KnightOfTL;DRfree guybrush threepwood! no new taxes! down with porcelain! 16:08, 3 April 2012 (UTC) - Do you have any sort of statistical evidence to back up your assertion that there's a causal relationship between conformity to societal standards of beauty and the risk of rape, or did you just pull that out of your ass? P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 16:14, 3 April 2012 (UTC) - No specific statistics, unfortunately, but the defense 'She dressed provocatively, she was asking for it!' is still an effective defense in some parts of the USA, indicating this bias. 
I'm fairly sure there were specific statistics in my Womens' Studies class last semester, so if you would like me to go and ask them about it I would be all too glad to do so. ± KnightOfTL;DRwalls of text while-u-wait 16:28, 3 April 2012 (UTC) - I apologize, I have made an error. I am referring not to increased frequency of rapes, but to charge and punish rapes. I should have been more specific. A woman who conforms to societal standards of beauty has a harder time prosecuting rape charges in parts of the USA because the defense 'she was asking for it because of how she dressed, and what place she went to, she most likely consented' is still able to hold even though it is abhorrent. ± KnightOfTL;DRjust shut up already 16:37, 3 April 2012 (UTC) - But as I pointed out above it's the same woman who says it's advisable to use sex appeal to get what you want. There may be a contradiction here.--Bob"What can be asserted without evidence can also be dismissed without evidence." 18:28, 3 April 2012 (UTC) - Um, no. No it's not. Are you saying that women who dress up, put on makeup to go to a bar are the same as people who use sex appeal to get their boss to give them a promotion? And if you are, how does that make a 'no' from her any less legitimate when it comes to sex? ± KnightOfTL;DRcritical thinking is the key to success! 19:14, 3 April 2012 (UTC) - She doesn't mention rape in either article and I don't see how it has much bearing on either of the issues (using looks to get ahead / being hated for it). Щєазєюіδ Methinks it is a Weasel 19:39, 3 April 2012 (UTC) - *sigh* I'm not talking about her article in specific, that's why I said 'in the negaverse,' implying 'in a universe where they were trying to make a legitimate point.' What I mean is that this article feels like a poke at 'oh no, I am hot so society treats me differently, it's a curse' when in fact yes, this is a legitimate statement with serious implications in the real world. 
It's just that this article didn't hit upon them. ± KnightOfTL;DRgoing galt: the literal crazy train 20:14, 3 April 2012 (UTC) - She was trying to make a legitimate point, but it was badly undermined by her hypocrisy & feeble arguments. Bob points this out & you accuse him of thinking no means yes, or something? OK, so society is sexist and makes judgements about women's appearance. This is hardly news. But the point Brick's article is making (other women hate me because I'm cuter than them) is completely different from the one you're making about risks of rape & blaming the victim. ωεαşεζøίɗ Methinks it is a Weasel 23:04, 3 April 2012 (UTC) - I would argue the point is not. They're both symptoms of the same society. What I am trying to get at is that this article, while it brings up the issue, fails in that it also ends up trivializing the argument in its failure. My crux point is that the fact that women are judged by their appearance affects pretty much everything for us: from not being taken seriously at work, to not being taken seriously when trying to get justice for crimes done against us. Which this article sort of almost touched upon, but failed spectacularly in a way that it hurt its own case. Also, I'm sorry if I came across as accusatory. His statement in juxtaposition to my own confused me. It sounded a little like the old misogynist argument, 'women want it both ways!' but I didn't think that was what Bob meant at all, so I wanted to hear some more clarification. :( ± KnightOfTL;DRgarrulous en guerre 23:16, 3 April 2012 (UTC) On that point I agree: really beautiful people, women especially, are quite likely to get reduced to their physical appearance and not taken seriously as complete human beings--and that's not just a grown-up/sexual phenomenon, either. My mother-in-law is very conscious to never tell my baby niece how beautiful or cute she is; she tells her how clever and smart she is at every turn, though.
Kids who get noticed/complimented on their appearance are bound to see that as what matters most about them and carry the values attached to that into adulthood. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 20:31, 3 April 2012 (UTC) Tim Dowling's response[edit] From the Guardian. Very amusing. El TajDon't make me do stuff 18:26, 4 April 2012 (UTC) - Almost as hilarious as her own response to the backlash. Wildly missing the point. I laughed out loud at the last couple of paragraphs. ΨΣΔξΣΓΩΙÐ Methinks it is a Weasel 20:23, 4 April 2012 (UTC) Things that happen in Ireland[edit] "How was work today, dear?" "Oh, much as usual. A horse tried to sodomise me at one point." --JeevesMkII The gentleman's gentleman at the other site 17:55, 3 April 2012 (UTC) - LOL is a much abused phrase on the internet, but I really did laugh out loud at that. Animal rape is the height of humour and sophistication. AMassiveGay (talk) 19:53, 3 April 2012 (UTC) - I think you could have phrased your last sentence better AMG... While a horse humping a cop is hilarious, a cop humping a horse is less so. TheCheatI run on alcohol 20:57, 3 April 2012 (UTC) - Jeez, NSFW that please. Osaka Sun (talk) 21:02, 3 April 2012 (UTC) - Yeah Jeeves, please remember to NSFW your ambiguous, undescribed links. I had no idea what I was getting myself into there, because I'm an idiot. ONE / TALK 21:17, 3 April 2012 (UTC) - Osaka, Be thankful that I didn't link you to the original source. The thread includes My Little Pony: Friendship Is Magic scat porn. Rule 34 in action. --JeevesMkII The gentleman's gentleman at the other site 21:49, 3 April 2012 (UTC) - Rule 34 is "War is good for business." -- Seth Peck (talk) 18:29, 4 April 2012 (UTC) - You're looking at the wrong list. Ŵêâŝêîôîď Methinks it is a Weasel 20:02, 4 April 2012 (UTC) Somebody found his balls![edit] It kinda was time, if you'd ask me. --★uːʤɱ atheist 20:13, 3 April 2012 (UTC) - Jesus.
Who's out of touch? the GOP is about to nominate a moderate RINO that independents (who voted for Obama and later rejected) love and Republicans conservatives feel uneasy about. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 20:38, 3 April 2012 (UTC) - Nobody loves Romney, he has no characteristics constant enough for such a feeling to appear. --★uːʤɱ atheist 20:46, 3 April 2012 (UTC) - I can't wait for him to legalize polygamy. TheCheatI run on alcohol 20:51, 3 April 2012 (UTC) - You only want polygamy because it's a short step from there to being allowed to marry animals. --JeevesMkII The gentleman's gentleman at the other site 21:56, 3 April 2012 (UTC) - I can't remember meeting any of these so called middle-ground independents in years. Do they really exist? In recent times the only thing that seems to matter is how well you can motivate your potential supporters to get out and vote. Anyway, calling Republicans out on their social darwinism is a good start, even if we all know both parties are full of corporate stooges. Q0 (talk) 01:12, 4 April 2012 (UTC) - Every single independent I met who has "rejected" Obama has "rejected" him for being too right-wing. I don't think anybody really loves Romney. He's the John Kerry of the Republican party. Omar (gibber) 14:30, 4 April 2012 (UTC) - Except Kerry's not a chickenhawk. -- Seth Peck (talk) 17:04, 4 April 2012 (UTC) - Obama is President today because the Stock Market Crash of October 6, 2008, one month prior to the November General Election. This crash wiped out the 401(k) retirement accounts of independent white voters age 55 plus, Obama's weakest demographic group without which he could not win. Within three months, that demographic group of independent moderates, who pinned their retirement hopes on Wall Street, realized Obama was an anti-capitalist with his focus on Stimulus spending and government mandated healthcare. 
He lost them then, and has never gained them back. This has always been Romney's core constituency -- outside the GOP. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 19:38, 4 April 2012 (UTC) - More "Obama is anti-capitalist" lies, Rob? How, then, do you explain "The Dow rose 66 points on Friday to close out the best first-quarter point gain—994.48 points—in its history, and the best first-quarter percentage performance—8.1%—since 1998.", then? Sophiebecause liberals 19:46, 4 April 2012 (UTC) - Bloated executive compensation? Obama's corporate donors? nobsbullies are people, too. 19:57, 4 April 2012 (UTC) - What's that got to do with the price of fish? -- Seth Peck (talk) 19:52, 4 April 2012 (UTC) - Rob, why don't you sod off back to CP or some other hate site instead of hanging round here like a bad smell? Sophiebecause liberals 20:04, 4 April 2012 (UTC) This primary is marked as the first time since he promulgated it that Republicans are breaking Reagan's "first commandment", at least on a national level. ħuman 02:19, 4 April 2012 (UTC) - The Bush-McCain primaries were pretty nasty. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 19:43, 4 April 2012 (UTC) FTB has anti-fans and they cite...us?[edit] Apparently there is an anti-Pharyngula/FTB wiki called Phawrongula and they cite RW on some pages (e.g., here). Can't vouch for the factual accuracy of the site, just found the page randomly and read one or two articles. Nebuchadnezzar (talk) 23:21, 3 April 2012 (UTC) - I'm inclined to agree with their assessment of Greg Laden, but the half dozen other articles I read were pretty bad. Radioactive Misanthrope 23:44, 3 April 2012 (UTC) - I got up to where it says that Rebecca Watson called for a Richard Dawkins boycott and stopped. Nihilist 23:45, 3 April 2012 (UTC) - So is this just a repository of Elevatorgate butthurt? 
If so, I regret even posting it -- didn't really read much besides a few lines of a couple of articles as it looked like documentation of internet drama and that puts me to sleep. Nebuchadnezzar (talk) 01:40, 4 April 2012 (UTC) - Yah, I completely dismissed any paragraph that mentioned Rebecca Watson. They reeked of butthurt. Radioactive Misanthrope 02:13, 4 April 2012 (UTC) - Are they still fighting over this? Osaka Sun (talk) 02:18, 4 April 2012 (UTC) - Only the very dumbest parts of "they." Radioactive Misanthrope 03:01, 4 April 2012 (UTC) - Even Watson herself has grown some sense and stopped mentioning it (well, she seems to be over it.)... But did you read the part that says "The Phawrongula Wiki is a resource for documenting systemic fact distortion and historical revision by self-described freethinkers and secularists who are actively harming freethought and secular communities in the pursuit of self-interest and ideology." - and this phrasing didn't set off alarm bells at all? Sounds like something SCEPCOP would write. - Also, consider "The custom FFTB definition is a nonsense and is nothing more than an exercise in self-justification for the general vacuity of its readership who continuously show absolute inability to defend any of their derangements using acceptable methods and thus have no tools other than direct personal abuse to resort to." from the ad hom article, in light of "Please stick to basic facts and leave the colorful language and editorializing for elsewhere." which is on the mainpage. sshole 12:25, 4 April 2012 (UTC) - Meh, MRAs stoking grievances. Also, experts predict, sun to rise in East tomorrow. Godspeed (talk) 18:47, 4 April 2012 (UTC) - I was willing to read it out, (I've always felt that PZ was a bit of a dickhole sometimes), UNTIL they started defending The Amazing Atheist AKA TJ AKA Distressed watcher AKA whatever you wanna call him.
If PZ is a bit of a Dickhole sometimes, AA is a huge, throbbing, infected asshole of a human being. All the "clever" puns and inserts also get old REALLY fast. --Revolverman (talk) 20:34, 4 April 2012 (UTC) Titanic - now in 3D[edit] So I guess we are supposed to get all excited. People are running columns about how amazing this film was. Is it only me who thought it was "fine"... The acting was ok, the plot predictable (not the boat part, the "love story") part, the story, characters and presentation were rather banal. Yet so many people rate it "masterpiece". What am I missing?-- Godot What do cats dream about? 03:08, 4 April 2012 (UTC) - I think this first half was crap and the second half was quite good. Also, all the main characters were irritating cliches, but some of the minor characters were really well done. A father putting his daughter on a life boat stirred up more emotion in a few seconds than the entire cast of leads did for the combined 3 hours. I haven't been to a 3D movie since the neo-3D craze began, and assuming I do go to one it will be one shot in 3D rather than having the effects done in post. Maybe I'm waiting for a re-release of Avatar (which we know is coming) as I never got around to seeing it. Turpis 3:16 (talk) 03:15, 4 April 2012 (UTC) - because of how my eyes are, ive never really been able to watch anything in 3-D, so i never cared much for it. as for titanic... it was ok. the love plot conflict is sorta like twister having a guy whos evil cause he wanted to actually be able to be a real storm chaser and have the money and tools to do it because mother nature isnt a "Villian". The other problem is my memory of titanic is marred by the two animated italian movies.--il'Dictator Mikal 03:22, 4 April 2012 (UTC) - The rerelease of Avatar will occur only after the spread of 4-D technology. Radioactive Misanthrope 03:23, 4 April 2012 (UTC) - The latest Spy Kids flick bills itself as 4D, the 4th "dimension" being smell. 
It was a cool gimmick when John Waters did it in Polyester years ago, but I don't think this will catch on. Turpis 3:16 (talk) 03:26, 4 April 2012 (UTC) Nekkid Kate Winslet in 3-D would be worth 12 bucks to me. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 03:29, 4 April 2012 (UTC) - 1. This trend of 3D remakes is getting awfully annoying. - 2. No one must say a word about "My Heart Will Go On." Celine! Osaka Sun (talk) 03:40, 4 April 2012 (UTC) - God how I loathe her. And I don't use that word often or easily. She can turn an interesting, even powerful song into a horror show, screeching and wailing... Her version of "Hallelujah" causes me to want to break things. There's this amazing song 'la memoire d'abharam' about a contemplative prayer on life, a quiet, gentle song, and she just goes bonkers polluting it with that hyper voice. It's a PRAYER. Ok sorry.. shutting up. Godot What do cats dream about? 04:38, 4 April 2012 (UTC) This was totally me during Leo's death scene. Turpis 3:16 (talk) 14:12, 4 April 2012 (UTC) Ignore it and it will go away[edit] 3D movies are a recurring phase which recycles every 30 years or so (1950, 1980, 2010). After a while people realise that it doesn't add anything to the movie and the glasses are a pain in the fundament to wear, so it dies down again. Plus ça change... Jack Hughes (talk) 13:10, 4 April 2012 (UTC) - Possibly, but recent changes -- being able to see it without glasses and have it on your own TV, and eventual price-drops -- could make it stay for good. Nihilist 13:17, 4 April 2012 (UTC) But the starfield will be correct![edit] Now astronomers and the like can't complain that isn't what it should look like!--il'Dictator Mikal 15:19, 4 April 2012 (UTC) - Now I have to go see it again in theaters and buy a new Blu-Ray special extended version!
Nihilist 15:29, 4 April 2012 (UTC) Gods as Topological Invariants[edit] This came across my RSS feed a couple days ago, perhaps some will appreciate: ." --216.155.153.104 (talk) 03:38, 4 April 2012 (UTC) - "the number of gods must equal some such science type of thing" except... by their nature gods tend to be in a world separate and not under this world's control, so no, it doesn't. --il'Dictator Mikal 03:41, 4 April 2012 (UTC) - Yet their effect on this world will be observable, regardless of "where" they might be, and you can invent as many planes of existence as you like for that. All that matters is their observable effect. If you demand that they have no observable effect then there is no effect, and so their existence can be happily made utterly pointless on all conceivably useful levels. narchist 12:18, 4 April 2012 (UTC) - I find this sentence particularly convincing: - "We show that the number of gods in a universe must equal the Euler characteristics of its underlying manifold." - Sounds like something our Muslim post-editor might have come up with.--Bob"What can be asserted without evidence can also be dismissed without evidence." 15:41, 4 April 2012 (UTC) Spring cleaning[edit] I know most of you probably don't want to hear about it, but our CP space clean-out is still ongoing and could use more input. I think we started with something like 674 articles listed and are now down to a more manageable 244, with probably somewhere between 150 and 200 deleted so far. If anyone with any interest in our CP space takes 15 minutes to go through some of these we can probably remove a few dozen more pretty quickly. The MO I've been basically using is when votes are a 4:0 or 5:1 ratio it's closed with that result, though goat votes can complicate it, as that's where 3rd options and other suggestions have been listed.
Concentrating on the ones that are currently around 3:1 could close them soon, or prevent them from being closed prematurely if you disagree with the majority opinion. That should leave just the more debatable ones, which can linger for a while and be closed when we are hopefully able to reach some sort of consensus (or maybe just majority opinion; I don't know if we want to take a more WPish approach here - that can be hammered out later). If you are going to vote, please do at least skim the article in question first. It's clear that isn't happening in some cases. Thanks. Turpis 3:16 (talk) 14:27, 4 April 2012 (UTC) - I'm going through some more right now. I actually find this pretty interesting. I wasn't here for most of these developments, so reading about them is pretty fascinating. And in some cases repulsive. But mostly fascinating. ± KnightOfTL;DRlongissimus non legeri 14:34, 4 April 2012 (UTC) - It's not the mainspace, no need to go overboard... ħuman 02:31, 5 April 2012 (UTC) - Deleting is so much easier work than creating, after all. ħuman 02:31, 5 April 2012 (UTC) - I basically agree on both points, although there was a lot of chaff to separate from the wheat. The discussion, as it were, does seem a little heavy on the editors more on the "torch everything" end of the spectrum, who may or may not be representative of the site in general. My goal is to try to get it down to a manageable size for the general population without deleting anything that it seems there's any real enthusiasm to keep. It's a rather fine line to walk with the limited participation we have so far, although the list is now less than 200 articles, from a start of nearly 700. My biggest concern is that it seems we're in danger of losing a slew of side-by-side rebuttals, even though I get the feeling there's a reasonable amount of support for keeping most of them, by and large.
There's probably already been some deletes people will balk at, the discussions for which will need to be revived with greater input. I hope they're few though. Turpis 3:16 (talk) 02:44, 5 April 2012 (UTC) Can you Brits take Monckton back?[edit] The Christopher Monckton comedy tour continues as usual. Potholer54 gets blown off by Monckton because he's too busy spewing denialist PRATTs in the California legislature. At least he's not calling people Nazis this time. Nebuchadnezzar (talk) 16:16, 4 April 2012 (UTC) - No, you can keep him. No, really, you can. Don't mention it. Ajkgordon (talk) 16:43, 4 April 2012 (UTC) 2016 Republican Bogeyman.[edit] A trend I am noticing about the Republican Party is their creation of a scary bogeyman to distract from the real issues of an election. In 2008, it was ACORN, which was apparently criminally registering poor people to vote, obviously "bad" because poor people would be able to vote. Rather than the melting economy or Iraq or other relevant issues, the 2008 cycle was about how Obama was conspiring with a former domestic terrorist and a voter registration group to do something really bad. In 2012, the current bogeyman is Planned Parenthood, which is overall the evil epitome of evil evilness, because 3% of their healthcare services are abortion-related, and the majority of their services actually help extend the health of usually poor women. This is obviously "bad" because the poor might have the ability to make choices about their own lives, and obviously are part of some plan to destroy America through some genocidal plot. With that in mind, what group do you think will be victimized by the bully tactics of the Republican Party in 2016, bent on distracting the debate from whatever crock of madness they try to cook up in the next 4 years? I am wagering on the American Academy Of Sciences. ĴαʊΆʃÇä₰ Who said anything about fair?!
13:09, 4 April 2012 (UTC) - I have no idea what 2016 will bring, but I too have a feeling that they're trying to go for the immediate-panic lifestyle issues. I want to say they are going for the 'easy' issues, ones that may not even matter much on whole but seem like a big deal to people who aren't thinking about it. People who feel they are personally threatened, I think, are more likely to vote in what they feel to be self defense than with larger solutions in mind... and the Republican Party knows that this time, it's got nothing. It's a party making laws to support the very richest, but supported by some of the very poorest in our country. They have no plans to help many of their supporters out, but they have to make said supporters enthusiastic somehow. Immediate ethos reactions like "EW, GAY PEOPLE," or 'EW, THINGS THAT ARE AGAINST MY PERSONAL IDEOLOGY" require much much less effort than understanding more important issues on debate. And it's kind of sad, but many people I have known in my life don't really look deeper than what they hear in passing, and many may not even read the news. Much less news that's about something other than themselves. I think it's just easier overall to knee-jerk against a supposed assault on one's way of life (real or not) than to think about long-term decisions that would help more people than oneself overall... and the Republican party knows that. Though, I think there are also shades of unrest and power-redistribution in the Republican party and doubt about their normal way of doing things. These 'easy' issues and crazy-sauce candidates may also be a mask for a Republican Party running on essentially Headless Chicken Mode. ± KnightOfTL;DRwalls of text while-u-wait 14:01, 4 April 2012 (UTC) - I can definitely see a shift in focus away from Planned Parenthood and towards the "homosexual agenda" if DADT remains over and more states continue to legalize gay marriage. 
Of course, if they do go that way they'll just end up alienating the majority of the mainstream electorate, since "EWW GAYS" is becoming an increasingly less common reaction - but it's the next "easy target" for Republicans. Omar (gibber) 14:19, 4 April 2012 (UTC) - "These 'easy' issues and crazy-sauce candidates may also be a mask for a Republican Party running on essentially Headless Chicken Mode." Considering the trouble hacks like Boehner have had keeping the 'baggers in line, this may not be too far off. They drank a bit too much of their own Kool-Aid. Nebuchadnezzar (talk) 15:43, 4 April 2012 (UTC) - The Republicans have been using bogeymen since at least 1917. First it was REDSREDSREDSREDSREDSREDS!!!!!!. That worked well for a while, but they focussed more on Scary Negroes starting in 1968. Gays, the big bogeyman of the 90's and early 00's, had a shorter shelf life. The new one seems to be "secularism." Vid. Romney saying "We are all Catholics now." (Insert molestation joke here.) This isn't going to be as successful: Their central issue (employer-provided contraception) is a complicated topic on which the majority of Catholics disagree with their (our) church. They'll still be able to use it to fire up part of their base, but it won't be as effective in reaching moderate voters. Godspeed (talk) 17:23, 4 April 2012 (UTC) - Jesus said, O faithless and perverse generation, how much longer must I endure with you? - employer-provided contraception. - Well, duh. Isn't the idea of an individual mandate intended to encourage terminating employer-provided healthcare benefits, which is already happening? - The bogeyman is out-of-the-closet communists. nobsbullies are people, too. 21:32, 5 April 2012 (UTC) Now wash your hands please[edit] Anyone want to write up something on the Save White People website? I would, but I have to go and share my dinner with Armitage Shanks.
Sophiebecause liberals 19:36, 4 April 2012 (UTC) - Who wants to get on a watchlist and give that phone number a ring? Vulpius (talk) 19:46, 4 April 2012 (UTC) - I literally cannot look at sites like that long enough to take enough notes to write an article from, sorry. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 19:51, 4 April 2012 (UTC) - - Me neither, I found it by accident while following Buzzfeed links. Sophiebecause liberals 20:06, 4 April 2012 (UTC) - There are loads of hate sites/blogs/forums out there & most of them aren't very notable. I've occasionally found one when looking for something else, & considered whether to mention it on RW, but in the end I don't want to risk raising their profile by drawing attention to them. If they get to be well known or significant like Stormfront, GHF, etc. then obviously we should write about them; but otherwise they're best left in obscurity. Wẽãšẽĩõĩď Methinks it is a Weasel 20:36, 4 April 2012 (UTC) - Bookmarked for the crazy. I might do an essay on them when I want something painfully easy to refute. Radioactive Misanthrope 20:44, 4 April 2012 (UTC) - My God. This post is glorious—"everything I don't like is part of the same massive conspiracy to kill me!" (Also, the "white women are stupid, and that's why they screw black men" angle.) And the most ironic Tweet I've seen this year. Radioactive Misanthrope 20:49, 4 April 2012 (UTC) - - First, love your header font mate. Second, in a post talking about the Zimmerman case (wonderful...) He left turns right into a Truther Screed! Got to love Crank Magnetism. --Revolverman (talk) 20:53, 4 April 2012 (UTC) The Hunt for AI[edit] du Sautoy is back in Horizon's "the Hunt for AI". Anyone watched it yet? I'm curious how the "media" of BBC thinks AI is going. I'm also curious how much they will exaggerate (if at all) about what we understand of the brain and intelligence. 
Finally, I get to see a Horizon where I can at least hold my own, rather than just "ohhh.... ahhhh... stars... quantums" Godot What do cats dream about? 20:45, 4 April 2012 (UTC) - I read that as "Al" and got immediately confused. "Al who? Gore? Bundy?" Vulpius (talk) 21:17, 4 April 2012 (UTC) - Me too, Vulpius. I may go off sans serif fonts if this sort of thing carries on. Sophiebecause liberals 21:29, 4 April 2012 (UTC) - Mmmm. Just because we don't understand how it works doesn't mean we can't harness it anyway. Our distant ancestors had mastery over fire long before anybody had a coherent theory of combustion. - My old AI professor was very insistent that disembodied AIs were a stupid concept, unlikely to ever work. If his ideas were right, or even if his ideas were wrong but his conclusion close to the mark, the place to look for AI breakthroughs is not something like Watson but somewhere like Robocup. All divisions (except Simulation) of Robocup involve actual robots, whether they're moving tiny pallets of goods around a semi-imaginary factory or kicking balls around on a soccer pitch. The very human problems of understanding what you're seeing, and trying to walk around without crashing into stuff or falling over, must be solved year-on-year by these robots, and if there's something about solving those types of environmental problems that's important to our development of general intelligence (whatever that is) then the robots will be where the big achievements happen. 82.69.171.94 (talk) 01:02, 5 April 2012 (UTC) - We were just talking about this on WfG's hangout. Rolf Pfeifer is saying more or less the same thing. Nebuchadnezzar (talk) 17:50, 5 April 2012 (UTC) - I thought it was very good, WfG. Not much about the human brain and intelligence - more about how AI needs to evolve on its own terms in much the same way as human intelligence did and does.
The presenter, a mathematician, seemed to steer the programme towards looking at intelligence as a rather more abstract concept than many engineers do. Worth a watch and thought-provoking, at least for this layman. Ajkgordon (talk) 14:32, 5 April 2012 (UTC) This seems like the wrong place but[edit] Is there anywhere I can ask for help, or any newbie-friendly people I can turn to? — Haamer (talk) 23:08, 4 April 2012 (UTC) - Sure, what do you need, n00b? P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 23:15, 4 April 2012 (UTC) - I need some assumption of good faith and competence. I just came here, made a string of what I thought were improvements to an article, and had all of them arbitrarily reverted with less than useful explanations by other editors. Right now I don't feel very encouraged to continue or even to explain my edits because I have a feeling I'll just get pwned and dismissed, whatever I might say. I didn't come here to argue with anyone, I joined specifically because it seems there'd be likeminded people here. — Haamer (talk) 23:36, 4 April 2012 (UTC) - We're not really "likeminded" on a lot of things, and as a new guy, you have to face a culture where being a little assertive will get you places. Best bet is to go to the talkpage of the article in question, and start a conversation about why some jerk reverted you. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 23:43, 4 April 2012 (UTC) - I'm actually pretty sure all regulars here are pretty likeminded about otherkin (the article in question). I did go to the talk page with one user and messaged on user talk with the other user. I wasn't really sure if I wanted to continue the article talk discussion if all I'll accomplish is making myself look like an annoying n00b. — Haamer (talk) 00:05, 5 April 2012 (UTC) - Hey :( But yeah, that was a mistake, sorry. Nihilist 23:50, 4 April 2012 (UTC) - S'okay.
I'm more concerned about the revert where they undid my clarification that fictionkin and otakukin are not synonymous and restored a nearly-nonsensical sentence in the lead. — Haamer (talk) 00:05, 5 April 2012 (UTC) - I am "they" (not actually a multiple) who restored some of the things you took out of the article. I didn't revert; I incorporated some of your changes. As for the distinction between fictionkin & otakukin, why should any non-otherkin care? Aren't these just absurdly arbitrary constructions of basically the same made-up phenomenon? Wěǎšěǐǒǐď Methinks it is a Weasel 01:19, 5 April 2012 (UTC) - I used 'they' as a gender-neutral pronoun; if anything then I suspected you're an otherkin, with the apparent obsession with weasels, ha. I intend to flesh out the article, so some specific definitions will be useful. If people don't care then they can just not read the article. I'll later try to summarize the whole thing in the lead section for those that don't care. Cool? — Haamer (talk) 01:58, 5 April 2012 (UTC) - Oh! oh! I'm a newish person, too! Welcome! ± KnightOfTL;DRlongissimus non legeri 23:27, 4 April 2012 (UTC) - Hi, and ty. :) — Haamer (talk) 00:50, 5 April 2012 (UTC) - Funny, she doesn't look newish. Nebuchadnezzar (talk) 16:17, 5 April 2012 (UTC) - if you could only see her through my eyes Jack Hughes (talk) 16:35, 5 April 2012 (UTC) Conservapedia logo with graffiti[edit] There is a problem with the old CP logo: File:Cp_logo_constitution.jpg Whoever uploaded this logo converted it from PNG to JPG. This destroyed both the alpha channel that contained the graffiti and the extra text chunk in the PNG that contained the author's comment, as described here: [1] Can anyone upload the original PNG version? --Tweenk (talk) 00:01, 5 April 2012 (UTC) - Are you sure it didn't happen at Conservapedia? It's a very 2007 Cp thing to do. steriletalk 01:01, 5 April 2012 (UTC) - do either of these work? 
Tmtoulouse (talk) 02:49, 5 April 2012 (UTC) War on Women[edit] I've seen this term on several articles recently, some linked to by RW. Apparently the Olé Party is out in full force against 52% of the electorate.62.159.14.62 (talk) 08:05, 5 April 2012 (UTC) - This ought to be good. I was getting so tired of the War on Christmas. Uke Blue 08:23, 5 April 2012 (UTC) - Simultaneously happy as this could cause them to crash and burn, and panicky if they actually pull it off. ТyYarrr 14:37, 5 April 2012 (UTC) - Hello, welcome to 2012. That's been the news since about Jan 1. It's so fun to have a uterus, and be told by the "big men" in our world, how you should live your life. Godot What do cats dream about? 15:53, 5 April 2012 (UTC) - Personally, I was a bigger fan of the war on light bulbs. Nebuchadnezzar (talk) 16:06, 5 April 2012 (UTC) - Speaking of which, a real option for men is "on the way": injected spermicide. Takes 10 minutes to do the procedure, lasts 10 years, and you apparently can reverse it. Godot What do cats dream about? 16:08, 5 April 2012 (UTC) - better article Godot What do cats dream about? 16:10, 5 April 2012 (UTC) The internet is useless for solving problems[edit] Mostly because I don't know what the problem is even called or what's causing it: things on my Xbox are just jutting colors, like the avatar line, just the colors going off to the side, loading a game is amazing because it's just LINES OF COLOR everywhere. Any idea what's even going on and if I have to turn my second Xbox in for a new one?--il'Dictator Mikal 14:28, 5 April 2012 (UTC) - Could be a bad cable connection? Ajkgordon (talk) 14:38, 5 April 2012 (UTC) - If you've double and triple checked the cable connection, consider another cable...they do go bad, you know.C®ackeЯ - If you have a friend with an Xbox and her system is functional then try swapping bits out until you find the offending part.
If it's the XBox itself then it could be down to "no user serviceable parts inside" and whatever warranty terms you have. Jack Hughes (talk) 15:54, 5 April 2012 (UTC) - Whatever it was, it seems to have fixed itself. I wouldn't be that surprised if parts were going bad though; the Xbox itself is a replacement from 2009 for my 2007 which finally red-ringed, and all the parts, besides the HDMI cable, are from 2007.--il'Dictator Mikal 16:20, 5 April 2012 (UTC) - It seems to have fixed itself.... - It seems to have fixed itself - I switched it off and the overheating cooled down - It seems to have fixed itself - I switched it off and the reboot fixed the bug - It seems to have fixed itself - while looking at it I turned it around and that reseated the cables - It seems to have fixed itself - the Xbox fairy came and waved her magic wand - I suspect the last option. Jack Hughes (talk) 16:31, 5 April 2012 (UTC) - The HDMI cable is a digital cable, specifically TMDS. So I very strongly doubt that a poor connection will do anything more interesting than either working or not working, with "working and then suddenly not working when it gets a bit looser" being an extreme edge case. TMDS uses differential signalling, so most stray EM interference signals will mean nothing whatsoever to the HDMI receiver, and even a correctly clocked but random code input will be rejected as an error more than 50% of the time. Thus the chance a "loose cable" will cause "lines of color everywhere" is down in the low probability space with "the lung cancer on my X-ray was actually indigestion" and "the Dead Sea Scrolls contained important messages about the Obama presidency". Hardware fault, due to overheating and/or slow component failure inside the XBox (or less likely, the display) is a far more plausible explanation. 82.69.171.94 (talk) 19:18, 5 April 2012 (UTC) Schadenfreude[edit] Take that, you smug bastards.
PongoOrangutans are sceptical 18:32, 5 April 2012 (UTC) - The temples of the fruit god were defiled, and crows shall feast upon his remains. All hail the mighty penguin, enthroned triumphant before his adversaries. --Tweenk (talk) 02:13, 6 April 2012 (UTC) - Hey, I have a Mac! Not a rabid fanboy though. Osaka Sun (talk) 02:19, 6 April 2012 (UTC) Kids in Sweden know what's up[edit] Hey, thought you guys might like this. It's from a Swedish newspaper article on the meaning of Easter. Maybe he's our youngest member, and we don't even know it? — Unsigned, by: RachelW / talk / contribs - I see the socialist/secularist takeover of Sweden is going as planned. Soon it'll be all of Europe and none will know Jesus' name anymore. Only the USA can still stop us now! But we already have a spy in your midst! Muhahahahaaaa! --★uːʤɱ pirate 19:22, 5 April 2012 (UTC) - We are doomed. A goat can eat tree bark and steel cans. A Jewish goat zombie can eat cyanide and molten iron. There is no escape. Our brains will be eaten. --Tweenk (talk) 02:17, 6 April 2012 (UTC) Sarah Palin was Right: A commentary by the Symphony of Noise[edit] I know what you're thinking, but please hear me out: Sarah Palin was right. There actually is a death panel in the United States. And there's 9 people on it with names like Anthony Kennedy, Antonin Scalia, and Ruth Bader-Ginsburg. And soon they will decide: will millions of Americans be allowed health insurance and a chance to live, or will ideology and fuckery decide that those who are too sick, too poor, too old to buy insurance should just fuck off and die. And to the teabaggers and various CONservatives who think that not wanting to die of cancer is an "entitlement" and "socialism" (only in the United States are people this ignorant and stupid): if people dying without affordable healthcare is what a "culture of life" looks like to you, then I sure as fuck don't want to see what you have in store for us next.
Anyways, thank you, RW, for letting me blow off steam. Anarcho Symphony Noise Swatting Assflys is how I earn my living 12:37, 29 March 2012 (UTC) - Bravo sir/madam. --Horace (talk) 01:34, 30 March 2012 (UTC) - will millions of Americans be allowed health insurance - what a load of crap. Millions are allowed health insurance NOW. The Court is deciding if millions should be forced to pay for their health insurance. Jesus fuck, try to at least educate yourself on the basic issues before you begin slandering people with bogus bullshit. You do yourself, your cause and/or ideology no favor spewing such ignorance. nobsI'm not a doctor but I play one with the girls 19:15, 30 March 2012 (UTC) - If the court strikes the law, then it will follow that the ban on excluding people based on pre-existing conditions will likely fall. That means that insurance companies will, once again, kick people off their rolls because they are sick. Therefore, Nob, it follows that their decision will decide if people who do, in fact, need insurance but can't get insurance due to health and/or financial situations, will be able to get it. Is this statement ignorance? No! It's an awful truth that only foolish CONservatives, like yourself, seek to deny so that you can ignore your own hypocrisy and selfishness. Anarcho Symphony Noise Swatting Assflys is how I earn my living 09:12, 31 March 2012 (UTC) - It would help if the pre-existing provision portion was the case SCOTUS is hearing, but it's not. They are only hearing the part on (1) the requirement by the government to force private citizens to engage in commercial activity against their will, and (2) the loss of citizens' basic civil right to access Federal Courts for redress of grievances. The provision you cite will (with a large degree of certainty) survive. So again, you engage in personal ad hominems when there is no basis in FACT whatsoever about whatever it is you're pissing and moaning about.
nobsI'm not a doctor but I play one with the girls 03:16, 1 April 2012 (UTC) - And if the court strikes the law in its entirety, or even just the mandate part, then people with pre-existing conditions will have a fuckload of a time getting affordable health insurance. Have you even read the health insurance industry's amicus briefs on the case? If not, then I suggest you do. Anarcho Symphony Noise Swatting Assflys is how I earn my living 13:00, 1 April 2012 (UTC) - So here is a likely scenario: SCOTUS strikes down the mandate & penalty clause; healthcare premiums escalate because of the pre-existing conditions portion of the bill; the public clamors for reform. Two camps emerge, one to repeal the pre-existing conditions portion (which without the mandate may not be affordable to people with pre-existing conditions anyway), and the other camp trying to build a consensus for Single Payer. Question: have the Democrats learned anything from this exercise in one-party control and failure to act in a manner that promotes a bi-partisan consensus on such an important issue? nobsDebate topic: Should AIG reorganize, get into the healthcare insurance business, and purchase FANNIE MAE securities with the insured's premiums? 17:27, 1 April 2012 (UTC) - Why don't conservatives bitch and moan more about car insurance? Lobbyists? Occasionaluse (talk) 19:33, 30 March 2012 (UTC) - Here's the logical gist of that argument: If you don't want to pay car insurance, quit driving; if you don't want to pay healthcare insurance, quit breathing. nobsI'm not a doctor but I play one with the girls 20:15, 30 March 2012 (UTC) - Here, again, you prove my point: if denying people healthcare coverage because they can't afford it or would otherwise be denied it because of pre-existing conditions is what you CONservatives consider a "culture of life," I sure as fuck don't want to see what you nutters have in store for us next.
Anarcho Symphony Noise Swatting Assflys is how I earn my living 09:19, 31 March 2012 (UTC) - The pre-existing condition point has nothing to do with the case. If you think it does, please cite the page from the above three links to the transcripts. Elsewise, you're just reading Kremlin talking points. nobsI'm not a doctor but I play one with the girls 03:16, 1 April 2012 (UTC) - Again, I refer you to the amicus briefs filed by the health insurance industry where they point out that keeping the mandate is integral to keeping health insurance affordable to people who are already sick. This isn't Kremlin talking points, it's the talking points of the health care industry itself! Anarcho Symphony Noise Swatting Assflys is how I earn my living 13:00, 1 April 2012 (UTC) - And, since you wanted my proof, here it is. Anarcho Symphony Noise Swatting Assflys is how I earn my living 13:04, 1 April 2012 (UTC) - Good link, but it supports my point more than yours. We're seeing how every solution bears the seeds of the next crisis. nobsDebate topic: Should AIG reorganize, get into the healthcare insurance business, and purchase FANNIE MAE securities with the insured's premiums? 16:58, 1 April 2012 (UTC) - What the hell are you even talking about? My original point was that, should the Supreme Court strike down the health care law, it will cost Americans insurance. My link says that, according to the insurance industry, if the Supreme Court strikes the law, it'll cost Americans their insurance. Then you come along saying something about whateverthefuck trying to make some kind of point, I add the link to prove my point that striking down the law will cost Americans their insurance, and you say it proves whateverthefuck point you were trying to make. A point which I am still unclear of. Anarcho Symphony Noise Swatting Assflys is how I earn my living 13:32, 2 April 2012 (UTC) - If the Law is struck down in its entirety, it won't cost Americans a thing. The Law was never implemented.
Status quo continues. If the mandate only is struck down, the pre-existing condition section remains. You said they lose that, I said they now have access, but I also said premiums skyrocket, which your link says also. nobsDebate: Should AIG sell health insurance and buy FANNIE MAE securities with the insured's premiums? 20:27, 2 April 2012 (UTC) - Slightly more than nothing. -- Seth Peck (talk) 20:30, 2 April 2012 (UTC) - On the contrary, it will cost Americans. Even though the most important parts of the legislation haven't gone into effect yet, that's only because things like the health insurance exchange take time to set up. It was noted in the Associated Press some time ago and in an article I don't wish to look for that the reason the items in ObamaCare like the mandate and the insurance exchange took years to go into effect is because they take time to set up. The insurance exchange, for example, is already under implementation; it's just not at the point where it's ready to "go live," which is why the law doesn't mandate it until 2014. Furthermore, I expressed dismay that, should the court strike down the mandate, people with pre-existing conditions will be told to "fuck off and die." If the mandate is struck, then premiums for those with pre-existing conditions, by the "virtue" of almighty capitalism, will be so high that they either won't be insured or they will be told to fuck off and die. I'll be honest and personal here: someone I love has what's been classified as a pre-existing condition. And unless that mandate takes place, they're kinda fucked for life in the health insurance department. And the best part is that, while their condition is chronic, it isn't even fatal.
While I'd prefer universal coverage in lieu of corporate-mandated coverage (as the ObamaCare law setup), I'd rather my friend have the ability to access health coverage administered by a corporation then to have something serious happen them and for them to be told "you're too poor for us to help." Anarcho Symphony Noise Swatting Assflys is how I earn my living 09:22, 4 April 2012 (UTC) Good idea. Let's gut the Constitution, mandate society enter into wp:indentured servitude contracts in perpetuity in order to support the bloated wages in 12% of the economy all for a onetime savings of $230 billion. Brilliant. nobsbullies are people, too. 21:55, 6 April 2012 (UTC) P.S. *It was noted in the Associated Press ...the reason the items in ObamaCare like the mandate and the insurance exchange took years to go into effect is because they take time to set up....which is why the law doesn't mandate it until 2014... You believe that rot? The reason the law didn't take effect immediately is because, no way in hell would Obama be re-elected unless they kicked this can down the road. The NBER, the independent group Congress has mandated to declare to official start and end of Recessions, says mandates will result in another 2 million jobs lost. No wonder they waited till 2014, they couldn't pile another 2 million on the already 8 million unemployed. They had to plant the seeds to abort the recovery and create the next crisis. Remember, you never want a perfectly good crisis to go to waste. nobsbullies are people, too. 22:47, 6 April 2012 (UTC) Have sympathy, please[edit] My laptop, less than a year old, has developed a issue in the fan system. I am still covered by warranty, but to use it, I have to go to Wisconsin, which is a little inconvenient. I cant open up the case myself, or I will void my warranty, and it doesnt help that I know nothing about computers in general. 
Any advice, please?23.16.216.127 (talk) 23:01, 1 April 2012 (UTC) - Does the postal service not run out where you live? P-Foster Talk ""Santorum is the cream rising to the top."" 23:03, 1 April 2012 (UTC) - I'm supposed to pay mail insurance, and a long distance fee to Canada Post? I think not!23.16.216.127 (talk) 23:13, 1 April 2012 (UTC) - Did you try cleaning the fan with a spray can of air? Alternatively you can mail it to a friend in the states, have them mail it along to the repair facility insured, then post it to you when they get it back. --Opcn with regards to regarding my regardliness 05:48, 2 April 2012 (UTC) - I did. I cant seem to get it to work, and I have no real friends in the united states. Should I open it?142.22.16.53 (talk) 16:08, 2 April 2012 (UTC) - It was cheaper there.142.22.16.53 (talk) 16:50, 2 April 2012 (UTC) - Thanks to you and inflation.142.22.16.53 (talk) 18:17, 4 April 2012 (UTC) If you couldn't hate George Lucas any further...[edit] KILL IT. KILL IT WITH FIRE. Osaka Sun (talk) 05:43, 5 April 2012 (UTC) - LOL! This needs to happen; there must be no doubt left in people's minds that the Star Wars saga is nothing more than Lucas' gravy train to whore for money to the maximum possible extent with no reservations, or thought given to the dignity of the franchise. There is not even the pretense the series means anything more. --BMcP - Just an astronomy guy 10:33, 5 April 2012 (UTC) - Well, that was an unpleasant way to start my morning. ± KnightOfTL;DRgarrulous en guerre 11:50, 5 April 2012 (UTC) - I now understand what people mean with "you ruined my childhood". I need therapy. Now. --★uːʤɱ soviet 12:29, 5 April 2012 (UTC) - It's OK, I watched this video and felt better afterward. ± KnightOfTL;DRlongissimus non legeri 13:22, 5 April 2012 (UTC) - all you have to do is read some of the newest EU books to understand the train wreck thats been pilling up in the story of late. 
also since i love reminding people of this quote: "special effects are just a tool, a means of telling a story. People have a tendency to confuse them as an end to themselves. A special effect without a story is a pretty boring thing." - George Lucas, 1985--il'Dictator Mikal 13:51, 5 April 2012 (UTC) Perhaps I'm just a glutton for punishment, but that looks AMAZING to me. Actually kind of makes me wish I had a Kinect. Omar (gibber) 13:26, 5 April 2012 (UTC) - Necessary. Nebuchadnezzar (talk) 15:57, 5 April 2012 (UTC) - All the best Star Wars stuff happened when George wasn't in complete control of it. As soon as he gets his way it takes a nosedive. It's pretty amazing. X Stickman (talk) 09:55, 6 April 2012 (UTC) Fun-space Clean-up[edit] I'm thinking we need to do the same thing we're doing with CP-space to Fun-space, because there's a lot of shit there. Nihilist 14:11, 5 April 2012 (UTC) - Meh, we had one of those already (about a year ago, IIRC). I say that, if you find something in funspace you want to delete, throw up the {{delete}} template on it and let people go through the normal deletion process. Anarcho Symphony Noise Swatting Assflys is how I earn my living 14:15, 5 April 2012 (UTC) - And leave some time for people to respond between proposing to delete and actually deleting. What's the rush? Sophiebecause liberals 14:36, 5 April 2012 (UTC) - Well, some things are obviously shit. Nihilist 14:37, 5 April 2012 (UTC) - Your interpersonal skills, for a start. Sophiebecause liberals 14:38, 5 April 2012 (UTC) - - "x is obviously y" is so subjective the fact you don't understand that concept is scary. --il'Dictator Mikal 14:39, 5 April 2012 (UTC) - Waiting for a consensus on every stupid two-sentence "He-he, conservatives are dumb" article is pretty pointless. Nihilist 14:43, 5 April 2012 (UTC) - But why this obsession with "cleaning up x space"? 
Having shitty articles doesn't particularly harm us and a far better way of improving the ratio of "good" articles is to work on creating and/or improving them. In particular working to push articles to gold standard is a far more worthy aim. Maybe it's because it's more like hard work. Jack Hughes (talk) 15:03, 5 April 2012 (UTC) - Because shitty articles give a bad impression if somebody finds us, and nutty seems to be on with that for the CP space cleanup, despite shitty articles hardly being the worst problem we will have in getting others to want to put money into RW--il'Dictator Mikal 15:07, 5 April 2012 (UTC) - Honestly, the Saloon bar is probably the biggest problem in hypothetically 'getting taken seriously', so let's get rid of that and the people who post the most in it. Nihilist 15:09, 5 April 2012 (UTC) - OK, so I found RW way back when by Googling Ann Coulter. The RW article was one of the top ranked. Many others arrive by the same route. Now, which is better? - Someone finds RW by Googling and lands on a shitty article. - Someone fails to find RW because the article has been deleted - Obviously preferable - Someone finds RW by Googling and lands on a gold star article. - Going back to where I came in, the AC article is full or RW snark and hardly encyclopaedic but it made me smile and go on to see what else was available. I'm now a long standing RW contributor with a number of good articles to my (or my socks's) name. As such I say it's preferable to have shitty articles than no articles. We've already had one fun space purge. The CP purge is all about the move away from being an anti CP site. Let's not throw everything out, let's improve it. Jack Hughes (talk) 15:17, 5 April 2012 (UTC) - i dont think you understand this would be the second funspace purge, not the first. As for finding RW, i found it because i had been searching conservapedia to show a friend the site and saw the RW article on it. 
--il'Dictator Mikal 15:24, 5 April 2012 (UTC) - "We've already had one fun space purge." Nihilist 15:26, 5 April 2012 (UTC) - Jack Hughes has some good points. Refugeetalk page 15:32, 5 April 2012 (UTC) - I think we've had more than one purge. The "problem" with both funspace and CP space is that there are no standards for what should be there. Mianspace has the mission statement but the other spaces have nothing. That means that anybody can delete for any reason and anybody else can defend for any reason they like. What we should do is decide the function of these spaces and decide if the articles we have match that function. - The alternative is to leave them as they are as they do not seem to do any real damage.--Bob"What can be asserted without evidence can also be dismissed without evidence." 17:37, 5 April 2012 (UTC) - Which is what will inevitably end up happening. Nihilist 17:39, 5 April 2012 (UTC) - If I had the powers I would this minute create attic space and reference space for all the stuff that some people like but is neither missiony nor funny. Sophiebecause liberals 21:07, 5 April 2012 (UTC) - The problem is that "funny" is inherently subjective. I'd say leave wide latitude for what the fun space allows. In fact, I'm inclined to undelete our CP poetry contests and move them to fun. I do like an attic space for keeping parts of our history that have nowhere else to go. Turpis 3:16 (talk) 21:43, 5 April 2012 (UTC) - We should pester somebody to do it. Who? Sophiebecause liberals 21:46, 5 April 2012 (UTC) - Not sure. I'm a mod, so I have powers beyond those of mere mortals, though I have no idea what most of them are or if something like this is among them (I'm guessing not). I guess I'm a bit like the Greatest American Hero, actually. Anyway, Nx is always one of them names floated about when technical shit needs to be done, especially unilaterally. 
Turpis 3:16 (talk) 22:04, 5 April 2012 (UTC) - I'm pretty sure you need server access to edit namespaces, unless there's an extension installed. Nihilist 22:08, 5 April 2012 (UTC) - We can always pull a Schlafly and pretend we've created a new namespace by putting a word and a colon at the beginning of an article. Turpis 3:16 (talk) 09:20, 6 April 2012 (UTC) Humorous poetry topic.[edit] Which one sounds best? - Gov'r Frothy - Harpo - God Guns Gays - Andrew Schlafly - Goat Thanks for your time, 142.22.16.53 (talk) 16:32, 5 April 2012 (UTC) - - I think "what load of cobblers are you gibbering about?" sounds best. Sophiebecause liberals 21:45, 5 April 2012 (UTC) - Nobody likes him: - words squirting out, frothing forth - from more than the mouth - ± KnightOfTL;DRgarrulous en guerre 22:14, 5 April 2012 (UTC) - Too long; did not read... - He has but one destiny: - his obscurity - (dictated, not read) - TheCheatI run on alcohol 19:51, 6 April 2012 (UTC) - Google him and wretch - Is it better to be that - Than an Etch-a-Sketch? - -- Seth Peck (talk) 20:13, 6 April 2012 (UTC) - Thanks, you all just passed my poetry assignment for me! :D23.16.216.127 (talk) 21:40, 6 April 2012 (UTC) KONY 2012: Part II[edit] Note that comments, and ratings are disabled. Not a good signRyantherebel (talk) 20:18, 5 April 2012 (UTC) - I loved how they randomly threw an Alex Jones clip into the opening montage at about the 36 second mark - it's almost as if he's a real journalist! Tetronian you're clueless 00:46, 6 April 2012 (UTC) - Indeed. I also love how they don't reference they're disastrous screening in Uganda in front of actual LRA victims.Ryantherebel (talk) 18:05, 6 April 2012 (UTC) - No-one mentioned the public wanking breakdown? So that was in the clip? - David Gerard (talk) 21:50, 6 April 2012 (UTC) - Judging only by the way in which this installment has failed to match the sharing-on-Facebook virulence of the previous version, I'm thinking it may be a bit of a dud. 
Also, I'm kinda surprised Andy Schlafly didn't climb onto the original bandwagon. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 22:16, 6 April 2012 (UTC) Trolling some Paulbots[edit] I'm bored, so I've decided to troll commenters on a Ron Paul video. If anyone wants to join me, I invite them. Some cool points to raise: - "Peaceful diplomacy" Paul tried to hire a group of private mercenaries in order to counter Al Qaida in 2001 through H.R. 3076. - "Dr." Paul would let a sick patient die if he didn't have health insurance. - "Privacy rights" Paul would allow state governments to regulate your sex life. - "Constitutional" Paul explained his view by claiming that the right to bedroom privacy was not guaranteed by the 14th amendment. He forgets the 9th Amendment however, which specifically states "Just because the constitution doesn't specifically state X rights, doesn't mean X rights aren't protected", and the right to privacy is an implied right from the 1th, 4th, and 14th amendments. - "Free market" Paul would repeal anti-trust laws. - "Civil Liberties" Paul voted for the Authorized Use of Military Force Act, which is the real creator of "indefinite detention", not the NDAA. To be fair, every other congressman and senator voted for the bill not knowing its consequences, but remember that Ron Paul is supposed to know better. Just remember, it'll be fun, and if we get enough people, we could mess up the video's own rating. Mr. Anon (talk) 03:10, 6 April 2012 (UTC) - On a more serious note, I go after Paul more than the other Republicans because for some reason, there are a significant amount of liberals and progressives who support him. If these progressives had smart in 2010, they would have elected more progressive Democrats into both houses of congress, so that we could account for Blue Dogs and people like Lieberman. 
Instead we have the Orange Boner in the Speaker seat and conservative Republicans in charge of several important committees. Mr. Anon (talk) 03:16, 6 April 2012 (UTC) - He's politically screwed right now, so I don't think we really need to do this. The raging Paulbots hardly made a dent in the Republican primaries, and I doubt Goldmember's going to try again in 2016. Osaka Sun (talk) 05:27, 6 April 2012 (UTC) - He's the closest thing to a proper liberatarian ticket they have right now (though that doesn't say much) so just let them cream and dream over him. sshole 11:37, 6 April 2012 (UTC) Free market double standards[edit] The market knows best, except when it doesn't. There are even more lists as well. Nebuchadnezzar (talk) 17:53, 6 April 2012 (UTC) The French election is turning into a joke[edit] First with Sarkozy and Le Pen, now this guy pops up. Politics... Osaka Sun (talk) 19:51, 6 April 2012 (UTC) - No, it is not. Mélenchon isn't just a guy spouting weird shit (or what others might think of as weird shit), he represents a significant portion of Europeans (continental at least — UK's thatcherism still hounds the island). We have a sudden rise of left wing parties in Germany (Greens, LEFT, Pirates) with the complete and utter demise of our only neoliberal party (FDP), you have people for months now protesting in Spain, you have this guy in France, you have massive protests in Greece, you have the usual Italian political games but Italies people are eually pissed. - Just yesterday I saw the newest Deutschlandtrend (an extremely reliable source for German political polling ordered by the ARD every month) in which 81% said they wanted gas prices to be regulated by the State, another one asked if there should be a "Übergangsgesellschaft" ("transitional cooperation") that pays the wages for the now bankrupt German cooperation Schlecker for another six months so the people that work for it have time to find work — 79% on that one. It's not just one guy. 
- It's half of Europe or even more. And it doesn't matter if we came out well of the "European Debt Crisis" that American greed caused or not, if we're from the North or from the South or from the West or from the East. We are sick and tired of the crisis, the austerity measures that tell us because a bunch of rich kids gambled to much we now have to live on less or eat cheaper foods. We're sick and tired of the little elite that runs European politics and seems to have been part of the longest 69 in history with "our" economy telling us that we'll just have to get through it while sniffing coke and banging hookers in their big fat mansions (probably hyperbole). We're tired of seeing poor people die in Africa because some assholes thousands of miles away think it's time to win another field in a never ending chess game. We're tired of seeing how people in the third world are treated because out of some fucked up policy we have to support them so a bunch of religious nutjobs can be tortured so they can't blow up some unimport ship of another super power can't get a dent. We're sick and tired of money-swinging companies trying to manipulate our polticians, our laws so they can make another million. We're sick and tired of being pushed around and split up buy America. We're sick and tired of capitalism as a whole. - Our parents were told that a middle ground was found: social market economy ! And we should be thankful for it. And when I read of America I'm thankful. But when I look at the reality, I haven't seen my parents in more than a year. Why? Because neither of us has the money to do the trip. I see college students who's parents just didn't qualify for financial help from the state, now having to study and go to classes 40 hours a week because of a college degree reform (Bologna treaty) pushed by economic lobbyists upon our shoulders, now having to work another 20 hours on the side just to make a living. 
I know young people with chronic illnesses caused by to much work or stress. Our parents are just now realising that they haven't been fucked over any less. That no matter what they'll do they won't get rich. And it's not because we hate the rich, or we see ourselves as the proletariat, or because we hate America, it's because it's because capitalism just doesn't work. - We want Dubček and Allende. We want democratic socialism. Because it's the only thing that might work. --★uːʤɱ digital native 22:15, 6 April 2012 (UTC) - Sorry for the rant, btw. If I sound like I'm screaming at you, it's nothing personal. --★uːʤɱ secularist 22:16, 6 April 2012 (UTC) - Look, I understand your concern, but how is a 100% marginal tax rate going to work? Even the most hardcore leftist in me can't comprehend how that's going to spur investment in a globalized era. Same with keeping old-age pensions constant (when taking into account the near-exponential increases in human lifespan). I'd go the Scandinavian way, thanks. Osaka Sun (talk) 22:56, 6 April 2012 (UTC) - Well, he'd basically cap income at £300.000 and take the rest. Now, and this what he was saying, that money would normally lie around or be invested somewhere (in many casses not France or even Europe). Let's say you have a guy earning £1.000.000 now, the state would have extra tax income of £700.000. But a European state seldomly sits on that money. Party of it would be invested to pay the debt, part of it would go into social security and give people more money that will spend it, just because they have to, because they are living under the norm right now (that would lead to more a higher frequency of money exchange (from hand to hand to hand to hand…). 
The state would probably also invest in modernisation of infostructure (building companies make money, expanding high speed internet access, which also leads to the next point), better education (more teachers → more jobs → more income; also better education leads to higher competitiveness in future generations) and so on and so forth. - A lot of economics is about how to motivate people to increase the frequency with money changes hands, the GDP is nothing more than a meassuerment of exactly that. This freuqnecy is widely agreed upon to be the main meassurement of how strong an economy is, because as higher as it is as more people share the wealth of a single £, €, $, etc. If you hear somebody saying that people should not be scared and should just keep spending what they have (that of course, doesn't happen all to often), that's exactly what they mean. So Mélenchon would take over that job, he'd take money that is lying around and invest it in some way. It's the same concept as Keynesianism just taken a lot further. --★uːʤɱ socialist 23:29, 6 April 2012 (UTC) - Despite the heavy usage of "we", the above rant doesn't come anywhere close to representing a majority view among Europeans. These are nothing more than your own, and rather silly, personal opinions. So please don't anoint yourself as the spokesperson of a geographical and political entity made up of more than two dozen distinct and highly pluralistic societies. It's also ridiculous to claim that Europe is somehow embracing "democratic socialism" over capitalism, when parties and politicians challenging market-based economies are stuck in the low to mid-teens in polls everywhere you look, and generally don't participate in governments. 46.105.114.105 (talk) 05:20, 7 April 2012 (UTC) - Quite. UHM represents a vocal minority, is all - the crusties making a mess of St Paul's. 
While most people in Europe recognise that the excessive risk-taking of the banking sector was *a* cause of the financial crisis, they are more fed up with the bloated bureaucracies, administrative elites, high taxes, closed shops, and incompetent governments in Europe. - Melenchon taps into the latter concerns, which is why he's doing comparatively well - as well as his oratory skills. But when people actually take time to look at his policies in detail, most of them recognise that it's ideological and impractical. And communist in all but name, even though he insists he isn't. The last thing most people really want is yet more over-reaching state - an inevitable corollary of his policies and one of the prime reasons they are so dissatisfied with the current state of affairs. Ajkgordon (talk) 10:07, 7 April 2012 (UTC) Are we seeing a change around here?[edit] WIGO:CP went unedited for five whole days before Gerard made a tiny, joke edit to the page. A full week into the month, only one WIGO posted. And it's not like there's much less crazy going on over there. Maybe it's a fluke , but I think it might actually be a thing. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 23:16, 6 April 2012 (UTC) - Is it offensive if i say "Happy Happy, Joy Joy, happy happy joy joy joy!" Godot What do cats dream about? 23:28, 6 April 2012 (UTC) - - See what happens when I give up because of the naked trolling I've endured on WIGO? You may as well nominate WIGO for deletion along with the rest of the CP crap. nobsbullies are people, too. 23:29, 6 April 2012 (UTC) - Ever heard of the difference between correlation and causation, Rob? It's not like all of last month was of you, you know. Peter tanquam ex ungue leonem 23:33, 6 April 2012 (UTC) - Shut up, Smith. If I want to hear from an asshole, I'll fart. 
P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 23:35, 6 April 2012 (UTC) - "See what happens when I give up because ofthe naked trolling I've endured on WIGO?" Fixed. - As for the topic at hand, do note that the talk page is still more active than other WIGO talk pages combined. Vulpius (talk) 23:37, 6 April 2012 (UTC) - It's probably one of the key causes of the large number of votes on WIGO:CP. The longer they linger at the top of the pile, the more people will just click. You can see the same thing if WIGO:Clog goes unedited for a couple of days. Contrast with WIGO:World that has a far larger turnover. As for the talk page, so much of it is general politics merely sparked by CP activity, and the SB - where a considerably number of WIGO stories actually get discussed because they're brought up independently - outstrips it CP-related activity by a long shot. Sure, it attracts a number of views and has its own dedicated clique, but you can't deny that Conservapedia is now just a footnote in what RW actually does. gnostic 00:04, 7 April 2012 (UTC) in which rob tries to make it about himself[edit] - I'm being trolled right now, and it's a pretty fucking good WIGO. It is this kind of trolling that gives RW its CP-esque type quality -- a disincentive to new users. I'd take the matter to the Chicken Coop, but we all know haw that would turn out. nobsbullies are people, too. 00:09, 7 April 2012 (UTC) - Jesus Christ, are you still talking? Also, reverting your lame self-WIGO'ing isn't "trolling," you fucking simpleton. P-Foster Talk "Armed with the knowledge of our past we can charter a course for our future"--MX 00:21, 7 April 2012 (UTC) - I didn't self-WIGO anything. I called Andy out on his bullshit. nobsbullies are people, too. 01:43, 7 April 2012 (UTC) - but you didnt. --il'Dictator Mikal 01:44, 7 April 2012 (UTC) - Sniping at Andy on CP and then deciding to use the same criticsm on WIGO is effectively the same thing. 
Anyway, . Peter tanquam ex ungue leonem 01:46, 7 April 2012 (UTC) - agree, sorry. --il'Dictator Mikal 01:50, 7 April 2012 (UTC) - In the olden days, criticism of Andy would have been done quietly in the private discussion groups; I not only did on a CP talk page, I did it on TWIGO & WIGO without be blocked at CP (this is groundbreaking). But Mikalos reverted the WIGO, cause he's a mindless idiot and partisan. nobsbullies are people, too. 01:54, 7 April 2012 (UTC) - Independents who dislike both parties equally are so partisan, arent we?--il'Dictator Mikal 02:02, 7 April 2012 (UTC) - You people are so full of kneejerk hate. I looked at the WIGO. No worse than the thousands of posts made by other CP-watchers over the years. ħuman 02:35, 7 April 2012 (UTC) - Agree with Human; also "cause he's a mindless idiot and partisan" is extremely ironical in the kitchen equipment department. Генгисmutating 06:36, 7 April 2012 (UTC) - Indeed.--Bob"What can be asserted without evidence can also be dismissed without evidence." 07:09, 7 April 2012 (UTC) I can't see a problem with this WIGO either. Proxima Centauri (talk) 09:09, 7 April 2012 (UTC) Seeing as this the place for unanswerable technical questions[edit] A couple of years ago I dropped an external hard drive and, not surprisingly, it stopped working. It had a number of photos from my honeymoon which I was sad to lose but them's the breaks. A couple of days ago I installed a USB3 card in my desktop and, out of curiosity, I plugged the old broken hard drive in. Somewhat to my surprise it recognized the drive but refused to read anything from it. After playing around for a while though the whole file structure turned up. I immediately tried to copy my photo file onto another disk but the process hung and eventually the PC stopped recognizing it again. So I've spent the past several hours playing around with it to little avail. I put it in the fridge as my brother-in-law recommended (hey I've got nothing to lose!) 
but that did not help. I've seen a suggestion that running it under Lunix might work. Anybody know if that's a good idea or got any other suggestions? --Bob"What can be asserted without evidence can also be dismissed without evidence." 20:39, 5 April 2012 (UTC) - It depends if it's the disc itself that is damaged or the USB circuitry it's plugged into. You can always take the case apart, take the disc out and plug it directly into your PC using a SATA and power cable. Ajkgordon (talk) 21:23, 5 April 2012 (UTC) - In fact, I tried doing that but it seems to be a sealed unit.--Bob"What can be asserted without evidence can also be dismissed without evidence." 07:18, 6 April 2012 (UTC) - In Windows try PCI File Recovery (freeware) or in Lunix try dd_rescue or ddrescue from a live distro. Long live Lunix! Unicow (talk) 22:17, 5 April 2012 (UTC) - I'll give PCI File Recovery a try first. I simply don't have a Lunix machine and have no experience with it. So creating a dual boot one is a little ambitious for me.--Bob"What can be asserted without evidence can also be dismissed without evidence." 07:23, 6 April 2012 (UTC) - Unicow likes Linux --> Linux = good --> Unicow = good - Nihilist 22:27, 5 April 2012 (UTC) - If all else fails, professional data recovery places are actually surprisingly cheap these days. If you really want the data back and can't do it at home, something to consider. --JeevesMkII The gentleman's gentleman at the other site 22:52, 5 April 2012 (UTC) - But only if your porn collection doesn't embarrass you. Nihilist 00:52, 6 April 2012 (UTC) - Mmmmm. So many goats.--Bob"What can be asserted without evidence can also be dismissed without evidence." 10:03, 6 April 2012 (UTC) - I suggested a live distro because it runs from CD/DVD. You don't install it. For example, Ubuntu's setup CD can be booted and it will load the Ubuntu desktop environment without installing it to the harddrive. 
If you have less than 1 GB of RAM I would suggest Lubuntu for use as a live CD. These will give you gparted which is a Partition Magic clone, and it can access the net and use most of the default programs. That is just for getting into Linux (or Lunix which I suppose it our specialty brand :-). If all you want is recovery, PCI File Recovery will probably recovery anything that ddrescue will and it is graphical and much easier to use. Unicow (talk) 02:17, 8 April 2012 (UTC) - Thanks again. I've got this on the back-burner now because of Easter. I'll try it out later.--Bob"What can be asserted without evidence can also be dismissed without evidence." 07:03, 8 April 2012 (UTC)
http://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive153
Carsten Ziegeler wrote:
> Sylv.
>
> Ah ok, I see.

So the getObject() method is responsible for returning the object that is part of the object model, according to the current execution context. So in essence, the get() method on the object model, which can be a special subclass of HashMap, would be:

    Object get(Object name) {
        // call the regular Map get method
        Object result = super.get(name);
        if (result instanceof Module) {
            // Dereference the module
            return ((Module)result).get();
        } else {
            return result;
        }
    }

> Sorry, you lost me ;-)

If we consider that the object model is a Map of object factories (the Module interface) which returns objects created by factories rather than the factories themselves (as illustrated above), then the object model can be the same for every request. Each factory/module will then be responsible for returning null if it doesn't have a value in the current context.

IMO, this makes things really simple, as configuring the object model simply consists in creating a singleton Map that is filled with factories.

Sylvain

--
Sylvain Wallez
Anyware Technologies
{ XML, Java, Cocoon, OpenSource }*{ Training, Consulting, Projects }
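The pattern Sylvain describes — a singleton map whose values may be factories, dereferenced at lookup time — is compact enough to sketch in Python. The class names and the stub context below are illustrative only, not Cocoon's actual classes:

```python
# The object model is a plain mapping of factories ("modules"); lookups
# return what the factory produces for the current context, so one
# singleton map can serve every request.

class Module:
    """A factory that yields the context-dependent value for its key."""
    def get(self):
        raise NotImplementedError

class ObjectModel(dict):
    def __getitem__(self, name):
        # call the regular dict lookup
        result = super().__getitem__(name)
        # dereference modules; pass plain values through unchanged
        if isinstance(result, Module):
            return result.get()
        return result

class RequestModule(Module):
    """Returns the request of the current execution context (stubbed)."""
    def __init__(self, context):
        self.context = context
    def get(self):
        # a real implementation would consult per-request state;
        # returning None signals "no value in the current context"
        return self.context.get("request")

context = {"request": "GET /index.html"}
model = ObjectModel()
model["request"] = RequestModule(context)   # a factory...
model["version"] = "2.2"                    # ...and a plain value

print(model["request"])  # prints: GET /index.html
print(model["version"])  # prints: 2.2
```

Because the map itself never changes, only the modules' answers do, the model can be built once at configuration time, exactly as the message argues.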
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200503.mbox/%3C422711C9.6020401@apache.org%3E
We began presenting an example of a software repository in part 2 of this column, illustrating how a lot of the heavy lifting of data and metadata management could be made much simpler by using available Web tools and open Web standards such as XML, XSLT, and RDF. Since building the core of the software repository was so simple, we now have time to contemplate how to represent this in a Web service. In this article, we will look at ways of adding content to the software repository using HTTP POST and SOAP. We'll also look at a WSDL description of the resulting Web service and at how 4Suite Server can help expose this. You will want to be familiar with the work from part 2 as well as with HTML forms and the Hypertext Transfer Protocol (HTTP) (see Resources).

Software by post

To begin with, we should look at updates using simple HTTP POST. This can serve as a means for both human contributors and software agents to provide updates, the former by way of a browser form. Start by making sure that the software repository core is set up using the steps we explained in part 2. Unfortunately, there's a twist: the last article was based on 4Suite Server version 0.10.2, but since then we released version 0.11.0, with significant improvements. See Resources for an update to the last article that covers the 0.11.0 release.

Let's say the form for contributors to use in adding entries to the repository is as in Figure 1. This corresponds to the HTML in Listing 1.

Figure 1: Form for adding new software to the repository
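Listing 1 does not appear in this excerpt. A minimal form matching Figure 1 might look like the following sketch. The action target and the stylesheet path follow the repository setup described in the article; whether template-xslt travels as a hidden field or as a query argument is not shown here, so a hidden field is assumed, and the field names other than title and creator are guesses:

```html
<form method="POST" action="/softrepo/submit-new-software-entry">
  <input type="hidden" name="template-xslt" value="/softrepo/submit-entry.xslt"/>
  <p>Title: <input type="text" name="title"/></p>
  <p>Creator: <input type="text" name="creator"/></p>
  <p>Description: <textarea name="description"></textarea></p>
  <p>Current version: <input type="text" name="version"/></p>
  <p>Home page: <input type="text" name="home"/></p>
  <p><input type="submit" value="Submit"/></p>
</form>
```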
The action attribute of the form element in our HTML source is interpreted with the document at the path in the URI as the source, and the template-xslt argument is interpreted as the XSL transform to use for generating the new document.

Listing 2: XSLT transform to turn HTTP POST data into a new XML document

    <?xml version="1.0"?>
    <xsl:stylesheet xmlns:
      <xsl:param
      <xsl:param
      <xsl:param
      <xsl:param
      <xsl:param
      <xsl:template
        <rdf:RDF xmlns:
          <Software rdf:
            <dc:Title><xsl:value-of</dc:Title>
            <dc:Creator><xsl:value-of</dc:Creator>
            <dc:Description><xsl:value-of</dc:Description>
            <CurrentVersion><xsl:value-of</CurrentVersion>
            <Home rdf:
          </Software>
        </rdf:RDF>
        <ftext:set-post-template-params
      </xsl:template>
    </xsl:stylesheet>

The first thing to note is the group of parameters such as title and creator. These are automatically set by the server to the values from the form elements of the same name, which become HTTP POST query arguments. These parameters can then be used to craft the output using all the tools XSLT places at our disposal. As you can see, the transform is creating a software description document in the form we introduced in the first part of this series. We omitted some optional fields for simplicity. Finally, observe the ftext:set-post-template-params extension element at the end of the transform. This is a special extension set up by the server which allows you to set such important parameters as the URI of the document to be added, its document definition and the URI to be used by the server to generate the HTML to be sent in response to the HTTP POST (this is, for instance, the HTML that would appear in the browser after the "Submit" button was clicked to submit an entry).

The steps involved are briefly outlined here:

- Create a container and give universal read access to it.
- Create a document for the HTTP POST operation.
- Create the XSLT template.
- Create the submission response document.
- Create the container to store the output.
To try this yourself, first create an appropriate container and grant universal read access to it:

Listing 3a

    $ 4ss create container /softrepo
    $ 4ss set acl --world-read /softrepo

Then create a simple dummy document (just "<null/>") as the HTTP POST target. For example:

Listing 3b

    $ 4ss create document - BASE_XML /softrepo/submit-new-software-entry
    <null/>
    $ 4ss set acl --world-read /softrepo/submit-new-software-entry

If you specify "-" as the source for the XML document, the command will read the XML source from standard input.

Next, create the template XSLT document, as shown below:

Listing 3c

    $ 4ss create document submit-entry.xslt BASE_XSLT /softrepo/submit-entry.xslt
    $ 4ss set acl --world-read softrepo/submit-entry.xslt

Now create the submission response document as follows:

Listing 3d

    $ 4ss create document - BASE_XML softrepo/thanks.xhtml
    <?xml version='1.0' encoding='UTF-8'?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
    <html xmlns='http://www.w3.org/1999/xhtml'>
      <head>
        <title>Thanks for your submission</title>
      </head>
      <body>
        <p>Thanks for your submission to the software repository.</p>
      </body>
    </html>
    $ 4ss set acl --world-read softrepo/thanks.xhtml

Finally, create the container where submitted software records will go. This one must be world-writable in order for the documents to be added by anonymous users. See Listing 3e.

Listing 3e

    $ 4ss create container softrepo/incoming
    $ 4ss set acl --world-write softrepo/incoming

Once this is set up, people can enter the data into a form, or software agents can enter the data using HTTP libraries in the language of choice. There is, however, an alternative growing in popularity for software-to-software access of data, without involving humans.

Software on a rope

4Suite Server implements the basic SOAP API, which we can use as the basis of our software repository Web service. Listing 4 is a simple Python command-line program that adds a software entry file using the direct SOAP API.
Note the special encoding style used in the direct 4Suite Server API. This encoding basically consists of an element representing the method invocation and attributes representing the parameters, and in some cases a body containing a document that serves as a parameter. Other than that, Listing 4 is pretty straightforward code to construct an XML document representing a SOAP request from command-line parameters -- put this into an HTTP request, send it off and listen for the response.

Listing 4: A Python program to send a SOAP request according to the core API

    import sys, string, httplib, base64, mimetools

    SERVER_ADDR = '127.0.0.1'
    SERVER_PORT = 8080

    BODY_TEMPLATE = """<SOAP-ENV:Envelope
     <SOAP-ENV:Body>
      <ft:Create
       %s
      </ft:Create>
     </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>"""

    #define the function
    def AddSoftwareFromFile(entry, docdef, uri):
        body = BODY_TEMPLATE % (docdef, uri, base64.encodestring(entry))
        blen = len(body)
        requestor = httplib.HTTP(SERVER_ADDR, SERVER_PORT)
        requestor.putrequest('POST',

    if __name__ == "__main__":
        fname = sys.argv[1]
        docdef = sys.argv[2]
        uri = sys.argv[3]
        entry = open(fname, 'r').read()
        AddSoftwareFromFile(entry, docdef, uri)

A sample session running this program is shown in Listing 4b. As you can see, the server merely echoes back the document added to the repository. But what if we simply want to pass the server a few bits of relevant information and have it construct the software repository entry, as we did in the above HTTP POST example?

4Suite Server allows you to write your own specialized SOAP handler, as shown in the example in Listing 5. This example handles SOAP messages consisting of the various fields describing the software, composes the description file on the server side, and then adds it to the repository. The most important part of this module is the SoftRepoSoapHandler class, which is subclassed from the 4Suite Server SoapHandler.
All the customized class has to do is define a mapping, NS_TO_HANDLER_MAPPING, which for each namespace defines another mapping from SOAP body element name to a handler function. In our case, we set up the AddEntry function to handle our intended requests. In order to use this handler module, we must register it with 4Suite Server. This can be done by copying the Python file to a spot on the PYTHONPATH and appending a stanza such as the following to the configuration file. See Listing 6a.

Listing 6a

    <rdf:Description
     <rdf:type
     <Priority>30</Priority>
     <Module>SoftRepoSoapHandler</Module>
    </rdf:Description>

You also need to add a line to the PythonServer or ApacheServer configuration stanza referring to the new handler we set up:

Listing 6b

    <Handler resource='#SoftRepoSoapHandler'/>

Once the handler is set up, you can make even simpler SOAP calls to add an entry to the software repository. Listing 7 is example client code for the purpose.

Listing 7: An example client to the software repository

    import sys, httplib

    SERVER_ADDR = '127.0.0.1'
    SERVER_PORT = 8080

    BODY_TEMPLATE = """<SOAP-ENV:Envelope
     <SOAP-ENV:Body>
      <s:Add>
       <s:Title><![CDATA[%s]]></s:Title>
       <s:Creator><![CDATA[%s]]></s:Creator>
       <s:Home><![CDATA[%s]]></s:Home>
       <s:Version><![CDATA[%s]]></s:Version>
       <s:Description><![CDATA[%s]]></s:Description>
      </s:Add>
     </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>"""

    #define the function
    def AddEntry(title, version):
        creator = raw_input("Creator: ")
        home = raw_input("Home: ")
        desc = raw_input("Description: ")
        body = BODY_TEMPLATE % (title, creator, home, version, desc)
        blen = len(body)
        requestor = httplib.HTTP(SERVER_ADDR, SERVER_PORT)
        requestor.putrequest('POST', '/softrepo

    if __name__ == "__main__":
        title = sys.argv[1]
        version = sys.argv[2]
        AddEntry(title, version)

Save Listing 7 as add_software2.py and try it out as shown in Listing 7b.

Moving right along...

SOAP support allows integration of our software repository with the rapidly emerging Web services infrastructure, while basic HTTP POST support allows humans and more traditional HTTP-based automation.
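One fragile spot in Listing 7 is the hand-assembled CDATA envelope: a stray "]]>" in user input would break the request. As a hedged alternative, the fields can be escaped with the standard library instead. The sketch below follows the element names of Listing 7, but the s: namespace URI is a made-up placeholder (the real one depends on your handler's NS_TO_HANDLER_MAPPING):

```python
from xml.sax.saxutils import escape

# Envelope shape follows Listing 7; only three fields shown for brevity.
ENVELOPE = """<SOAP-ENV:Envelope
  xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:s="http://example.com/softrepo">
 <SOAP-ENV:Body>
  <s:Add>
   <s:Title>%(title)s</s:Title>
   <s:Creator>%(creator)s</s:Creator>
   <s:Version>%(version)s</s:Version>
  </s:Add>
 </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

def build_request(title, creator, version):
    # escape() turns <, > and & into entities, so user-supplied text
    # can never terminate an element early.
    fields = {"title": escape(title),
              "creator": escape(creator),
              "version": escape(version)}
    return ENVELOPE % fields

body = build_request("4Suite <Server>", "Fourthought & friends", "0.11.0")
print("&lt;Server&gt;" in body)  # True
print("&amp; friends" in body)   # True
```

The resulting string can then be posted exactly as in Listing 7, with Content-Length set to len(body).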
As we have shown, it is fairly simple to build both of those interfaces into the same application using Python-based services. In the next installment we shall round out the techniques for development of our software repository as a Web service.

Resources

- Participate in the discussion forum.
- Get an overview of the purpose of this series by reading the first installment.
- The previous installment in this series introduces the software repository and has links to other relevant resources, including the software we use. However, that installment only addresses 4Suite Server 0.10.2, so we have placed an update of the article for 4SS 0.11.0. Please take a look at it: Web services software repository, Part 1
- RFC 2616: The IETF format specification for HTTP
- RFC 2068: The IETF HTTP 1.1 specification
- Brief ApacheWeek article on HTTP authoring
- The SOAP 1.1 specification
- The xml.org Cover Pages on SOAP
- DevelopMentor's SOAP pages, including a SOAP FAQ.
http://www.ibm.com/developerworks/webservices/library/ws-pyth3/index.html
Theming Ant Design: a detailed step-by-step basic guide

Applying the theming guidelines provided by Ant Design's documentation in an actual application

Ant Design is a very popular UI library for React. It provides a well designed and eminently composable collection of components. However, it is very opinionated about styles. Its philosophy is not only to provide tools, but a whole conceptual approach for designing UIs. Depending on your needs and inclinations, this can be a massive pro, or a very irritating con.

Thankfully, Ant Design also provides a way to theme the experience it offers. This page in Ant Design's documentation provides guidelines for making its components look more in line with a user's brand identity. It is serviceable, but the operative word here is "guideline": it only superficially lists approaches you can employ to make theming happen. There is no actual example of it being implemented in a real app, and the required configuration is only lightly touched upon.

This article aims to complement the docs by providing a step-by-step guide to theming Ant Design with their recommended approach. Possible pain points will be signaled with a ⚠️gotcha alert⚠️, and for each step, I will point to a specific commit of the repo I created for this article. It is based on a barebone project starter I made for testing and experimentation, but any project built with Webpack and Babel where you have access to the config files will behave similarly. Theming an unejected create-react-app project is different and arguably better documented.

If you just want to see the code, head over there now:

Step 1: create project, install Ant Design and babel-plugin-import

Link to commit

- Install the antd package (to be able to import components from the library, as seen here).

    yarn add antd

- ⚠️gotcha alert⚠️ Install the Babel plugin babel-plugin-import. This is only briefly mentioned in the docs, but is very important. We will explain its purpose in a second.
    yarn add -D babel-plugin-import

- Configure babel-plugin-import in .babelrc (see in commit). Notice that we pass "css" as our style option. The other possible option is true. We will come back to it later.

    "plugins": [
      [
        "import",
        {
          "libraryName": "antd",
          "libraryDirectory": "es",
          "style": "css"
        }
      ]
    ]

The purpose of babel-plugin-import is to allow modular import of Ant Design components, so that writing this…

    import { Select } from "antd";

…only imports the Select component, not the whole Ant Design library. But more importantly, the style option of this plugin adds the possibility to also import the styles of the given component. This allows us to never have to import styles directly from antd (as is done in the component's demo Codesandbox provided by Ant Design's docs with import "antd/dist/antd.css";). This gives us an application styled with the default Ant Design theme:

Step 2: install less and less-loader, edit Webpack config, edit babel-plugin-import style option

Link to commit

Ant Design's styling relies on the CSS postprocessor Less. It is made clear that the correct way to theme Ant Design is to override the default less variables. We therefore need to be able to parse those variables, and use them in our project.

    yarn add less

- Install the Webpack loader necessary to parse less files, less-loader.

    yarn add -D less-loader

- Add a rule to the Webpack config to appropriately parse less files (see in commit). ⚠️gotcha alert⚠️ Notice the javascriptEnabled: true option set for less-loader: it is required to load Ant Design's less styles without issues.

    {
      test: /\.less$/,
      use: [
        { loader: "style-loader" },
        { loader: "css-loader" },
        {
          loader: "less-loader",
          options: { javascriptEnabled: true }
        }
      ]
    }

- Change the style option of babel-plugin-import to true. When we used "css", we were simply importing the pre-bundled CSS styles from the library as is. ⚠️gotcha alert⚠️ With true, we import the source files.
This means we can modify them during the compilation step (handled by Babel and Webpack), which allows us to customize the theme.

    "plugins": [
      [
        "import",
        {
          "libraryName": "antd",
          "libraryDirectory": "es",
          "style": true
        }
      ]
    ]

Nothing has changed in the looks of our app, but all the preliminary preparations are now done. We can start theming our Ant Design components!

Step 3: override less variables in Webpack config (inline)

Link to commit

Now that the configuration is correct, we can follow the official docs more easily. The first solution is to write inline configuration for our theme, in a "plain object" kind of syntax.

- Add the modifyVars option to less-loader (see in commit), populated with a simple object where the keys are the less variables to override, and the values are what we override them with. This leverages a feature of Less.

    {
      loader: "less-loader",
      options: {
        modifyVars: {
          "primary-color": "green",
          "link-color": "green",
          "font-family": "serif"
        },
        javascriptEnabled: true
      }
    }

This override happens at compile time, and finally we see tangible results!

This naive approach to theming might not be the most practical, though. You might want the native IDE support you get when writing actual CSS/LESS. You also might want to separate your configs better, and have something like a theme file.

Step 4: override less variables with a theme.less file

Link to commit

- Add a theme.less file overriding the less variable values from Ant Design (see in commit).

    @primary-color: red;
    @link-color: red;
    @font-family: sans-serif;

- Add a hack key to the modifyVars option in less-loader. This will write an @import for our theme file where appropriate in the source styles. ⚠️gotcha alert⚠️ Be careful to give the proper path to your .less theme file. This syntax can be tricky (see in commit).
    {
      loader: "less-loader",
      options: {
        modifyVars: {
          hack: `true; @import "${path.resolve(
            __dirname,
            "../",
            "theme.less"
          )}";`
        },
        javascriptEnabled: true
      }
    }

This is a much more robust solution for theming, and gives the same result as the previous one (with different colors and fonts).

Optional step: dealing with Ant Design's global styles

Link to commit

You probably have noticed that everything is styled according to the theme we specified, not only the Ant Design components. ⚠️gotcha alert⚠️ This is because the default behavior of Ant Design is to ship a host of global styles whenever you import a component's style. Another instance of Ant Design being very opinionated. There are ways to avoid this, but they deserve their own article. In the meanwhile, you can directly overwrite the Ant Design styles and theme by writing your own CSS for the components you wish to have control over.

These rules let us regain control over the fonts and the link styles:

    .App {
      text-align: center;
      margin: 0 30%;
      font-family: 'Courier New', Courier, monospace;
    }

    a {
      color: blue;
      font-weight: bold;
    }

    a:hover {
      color: blue;
      text-decoration: underline;
    }

    p {
      margin-bottom: 3em;
    }

And reach this masterpiece:

Hope this will help anyone having difficulties implementing the theming recommendations from Ant Design's official docs. If I made any mistake please comment, or if you have other approaches to this task, please share!
https://medium.com/@syllaband/theming-ant-design-a-detailed-step-by-step-basic-guide-d060bef34ec4
pysolcast

Project description

Solcast API client library for interacting with the Solcast API.

Basic Usage

    from pysolcast import RooftopSite

    site = RooftopSite(api_key, resource_id)
    forecasts = site.get_forecasts()

Full API Documentation.

History

1.0.2 (2020-04-14)

- Release to pypi

1.0.0 (2020-04-10)

- First release

Download files

- Source Distribution: pysolcast-1.0.3.tar.gz (22.4 kB)
- Built Distribution: pysolcast-1.0.3-py3.8.egg (6.1 kB)
https://pypi.org/project/pysolcast/1.0.3/
Design patterns and practices in .NET: the Composite pattern

May 30, 2013

Introduction

The Composite pattern deals with putting individual objects together to form a whole. In mathematics the relationship between the objects and the composite object they build can be described by a part-whole hierarchy. The ingredient objects are the parts and the composite is the whole. In essence we build up a tree – a composite – that consists of one or more children – the leaves. The client calling upon the composite should be able to treat the individual parts of the whole in a uniform way.

A real life example is sending emails. If you want to send an email to all developers in your organisation, one option is to type the name of each developer in the 'to' field. This is of course not efficient. Fortunately we can construct recipient groups, such as Developers. If you then also want to send the email to another person outside the Developers group, you can simply put their name in the 'to' box along with Developers. We treat both the group and the individual emails in a uniform way. We can insert both groups and individual emails in the 'to' box. We rely on the email engine to take the group apart and send the email to each recipient in that group. We don't really care how it's done – apart from a couple of network geeks I guess.

Demo

We will first build a demo application that does not use the pattern and then we'll refactor it. We'll simulate a game where play money is split among the players in a group if they manage to kill a monster. Start up Visual Studio and create a new console application. Insert a new class called Player:

public class Player
{
    public string Name { get; set; }
    public int Gold { get; set; }

    public void Stats()
    {
        Console.WriteLine("{0} has {1} coins.", Name, Gold);
    }
}

This is easy to follow I believe. A group of players is represented by the Group class:

public class Group
{
    public string Name { get; set; }
    public List<Player> Members { get; set; }

    public Group()
    {
        Members = new List<Player>();
    }
}

The money splitting mechanism is run in the Main method as follows:

static void Main(string[] args)
{
    int goldForKill = 1023;
    Console.WriteLine("You have killed the Monster and gained {0} coins!", goldForKill);
    Player andy = new Player { Name = "Andy" };
    Player jane = new Player { Name = "Jane" };
    Player eve = new Player { Name = "Eve" };
    Player ann = new Player { Name = "Ann" };
    Player edith = new Player { Name = "Edith" };
    Group developers = new Group { Name = "Developers", Members = { andy, jane, eve } };
    List<Player> individuals = new List<Player> { ann, edith };
    List<Group> groups = new List<Group> { developers };
    int totalToSplitBy = individuals.Count + groups.Count;
    int amountForEach = goldForKill / totalToSplitBy;
    int leftOver = goldForKill % totalToSplitBy;
    foreach (Player individual in individuals)
    {
        individual.Gold += amountForEach + leftOver;
        leftOver = 0;
        individual.Stats();
    }
    foreach (Group group in groups)
    {
        int amountForEachGroupMember = amountForEach / group.Members.Count;
        int leftOverForGroup = amountForEach % group.Members.Count;
        foreach (Player member in group.Members)
        {
            member.Gold += amountForEachGroupMember + leftOverForGroup;
            leftOverForGroup = 0;
            member.Stats();
        }
    }
    Console.ReadKey();
}

So our brilliant game starts off where the monster was killed and we're ready to hand out the reward among the players. We have 5 players. Three of them make up a group and the other two make up a list of individual players. We're then ready to split the gold among the participants, where the group is counted as one unit, i.e. we have 3 elements: the two individual players and the Developers group. Then we go through each individual and give them their share. We do the same to each group as well, where we also divide the group's share among the individuals within that group. Build and run the application and you'll see in the console that the 1023 pieces of gold were divided up.

The code works but it's definitely quite messy. Keep in mind that our tree hierarchy is very simple: we can have individuals and groups. Think of a more complicated scenario: within the Developers group we can have subgroups, such as .NET developers and Java developers, who are further subdivided into web and desktop developers, and even individuals that do not fit into any group. In the code we iterate through the individuals and the groups manually. We also iterate over the players in the group. Imagine that we'd have to iterate through the subgroups of the subgroups of the group if we are facing a deeper hierarchy. The foreach loop would keep growing and the splitting logic would become very challenging to maintain. So let's refactor the code.

The Composite pattern states that the client should be able to treat the individual part and the whole in a uniform way. Thus the first step is to make the Player and the Group class uniform in some way. As it turns out, the logical way to do this is that both classes implement an interface that the client can communicate with. So the client won't deal with groups and individuals but with a uniform object, such as Participant. Insert an interface called IParticipant:

public interface IParticipant
{
    int Gold { get; set; }
    void Stats();
}

Every participant of the game will have some gold and will be able to write out the current statistics, regardless of them being individuals or groups. We let Player and Group implement the interface:

public class Player : IParticipant
{
    public string Name { get; set; }
    public int Gold { get; set; }

    public void Stats()
    {
        Console.WriteLine("{0} has {1} coins.", Name, Gold);
    }
}

The Player class implements the interface without changes in its body. The Group class will encapsulate the gold sharing logic we saw in the Main method above:

public class Group : IParticipant
{
    public string Name { get; set; }
    public List<IParticipant> Members { get; set; }

    public Group()
    {
        Members = new List<IParticipant>();
    }

    public int Gold
    {
        get
        {
            int totalGold = 0;
            foreach (IParticipant member in Members)
            {
                totalGold += member.Gold;
            }
            return totalGold;
        }
        set
        {
            int eachSplit = value / Members.Count;
            int leftOver = value % Members.Count;
            foreach (IParticipant member in Members)
            {
                member.Gold += eachSplit + leftOver;
                leftOver = 0;
            }
        }
    }

    public void Stats()
    {
        foreach (IParticipant member in Members)
        {
            member.Stats();
        }
    }
}

In the Gold property getter we simply loop through the group members and add up their amount of gold. In the setter we split up the total amount of gold among the group members. Note also that Group can have a list of IParticipant objects representing either individual players or subgroups. You can imagine that those subgroups in turn can also have subgroups, so the setters and getters will automatically collect the information from the nested members as well. The leftOver variable is set to 0 as the first member is given all the leftover; we don't care about such details. In the Stats method we simply call the statistics of each group member – again, group members can be individuals and subgroups. If it's a subgroup then the Stats method of the members of the subgroup will automatically be called.

The modified Main method looks as follows:

static void Main(string[] args)
{
    int goldForKill = 1023;
    Console.WriteLine("You have killed the Monster and gained {0} coins!", goldForKill);
    IParticipant andy = new Player { Name = "Andy" };
    IParticipant jane = new Player { Name = "Jane" };
    IParticipant eve = new Player { Name = "Eve" };
    IParticipant ann = new Player { Name = "Ann" };
    IParticipant edith = new Player { Name = "Edith" };
    IParticipant oldBob = new Player { Name = "Old Bob" };
    IParticipant newBob = new Player { Name = "New Bob" };
    IParticipant bobs = new Group { Members = { oldBob, newBob } };
    IParticipant developers = new Group { Name = "Developers", Members = { andy, jane, eve, bobs } };
    IParticipant participants = new Group { Members = { developers, ann, edith } };
    participants.Gold += goldForKill;
    participants.Stats();
    Console.ReadKey();
}

You can see that the client, i.e. the Main method, calls the methods of IParticipant, where IParticipant can be an individual, a group or a group within a group. When we set the gold through the Gold property, the gold distribution logic of each concrete type is called, which even takes care of sharing the gold among the groups within a group. The participants variable includes all members of the game.

The main advantage of this pattern is that now the tree structure can be as deep as you can imagine and you don't have to change the logic within the Player and Group classes. Also, we contain the differences between a leaf and a group in the Player and Group classes separately. In addition, they can also be tested independently. Build and run the project and you should see the amount of gold split among all participants of the game.

View the list of posts on Architecture and Patterns here.
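The leftover arithmetic used in the Gold-splitting logic above is worth a quick sanity check: the integer shares must always add back up to the original amount, at every level of nesting. A small sketch (written in Python purely for brevity; the split function is not part of the C# demo):

```python
def split(total, n):
    # Everyone gets total // n; the first share also absorbs the
    # remainder, mirroring the leftOver handling in the Gold setter.
    each, leftover = divmod(total, n)
    return [each + leftover] + [each] * (n - 1)

# Two individuals plus the Developers group -> three top-level shares.
top = split(1023, 3)
print(top)                  # [341, 341, 341]

# The group's share is then split again among its members.
print(split(top[-1], 3))    # [115, 113, 113]

# No gold is created or lost at either level.
print(sum(top) == 1023, sum(split(341, 3)) == 341)  # True True
```

Because the remainder is folded into the first share, the recursive splitting in the composite never leaks coins, no matter how deep the group hierarchy goes.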
https://dotnetcodr.com/2013/05/30/design-patterns-and-practices-in-net-the-composite-pattern/
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

Here is the patch to change the configuration of the STL implementation used within libstdc++-v3 in order to regain the performance profile of the STL implementation used within libstdc++-v2 as was shipped with gcc 2.95.X. Unfortunately, a non-trivial amount of commentary is required for this one-line configuration change.

Within libstdc++-v3/docs/html/17_intro/howto.html under the Thread-safety section and recent posts from Benjamin, one finds that __USE_MALLOC was defined after due consideration. Here was one representative discussion thread that was referenced (it seemed to be a bug report about v2, not v3, but Benjamin explained why __USE_MALLOC was defined for libstdc++-v3):

(Aside, I note that most links to libstdc++ list traffic referred to in the howto.html are currently dead.)

Also see these references regarding threading and memory allocation issues in the version of STL used in libstdc++-v3:

(FYI, I have already committed the obvious patch to docs/html/23_containers/howto.html and docs/html/17_intro/howto.html to fix and/or add these links.)

I think I know why mysterious bug reports come in about threading with the STL as shipped with libstdc++-v2 (both to report crashes and random memory leaks). My theory is that it has never been configured properly so that it works with all ports, all the time, even when they were explicitly built with --enable-threads. For example, if I take this very simple example (call it t.C):

    #include <list>

and run:

    g++ -V2.95.X -E <published threading option for the port> t.C | grep _Lock

on various platforms I have at my disposal (all known to have been configured with --enable-threads), what do I find?

    alpha-dec-osf4.0f/2.95.1: g++ -E -threads t.C|grep ' _Lock('
    _Lock() { ; }

[It is possible to get the thread-safe version of the allocator, on most/all ports with this undocumented trick:]

    alpha-dec-osf4.0f/2.95.1: g++ -E -threads -D_PTHREADS t.C|grep ' _Lock('
    [...]_Lock() { if (threads) pthread_mutex_lock(&_S_node_allocator_lock) ; }

    sparc-sun-solaris2.7/2.95.3: g++ -E -pthreads t.C|grep ' _Lock('
    [...]_Lock() { if (threads) pthread_mutex_lock(&_S_node_allocator_lock) ; }

    i386-redhat-linux/egcs-2.91.66: /usr/bin/g++ -pthread -E t.C|grep ' _Lock('

[No _Lock found since this version of STL is different than the others, but it looks OK at first glance (look for lock instead of _Lock).]

    i386-redhat-linux/2.96: /usr/bin/g++ -E -pthread t.C|grep ' _Lock('
    [...]_Lock() { if (threads) pthread_mutex_lock(&_S_node_allocator_lock) ; }

    i386-unknown-freebsd4.2/2.95.2: /usr/bin/g++ -E -pthread t.C|grep ' _Lock('
    _Lock() { ; }

    i386-unknown-freebsd4.2/2.95.2: /usr/bin/g++ -E -pthread -D_PTHREADS t.C|grep ' _Lock('
    [...]_Lock() { if (threads) pthread_mutex_lock(&_S_node_allocator_lock) ; }

(A pending libstdc++-v3 configuration patch, which is still being fine-tuned, will address this aspect of rampant misconfiguration and misnomers on how to get g++ to force STL threading support in the default container memory allocator.)

Since no test case was ever produced from the aforementioned report or any other that I could find, I have generated a trivial one that should leak, if the high-speed allocator code path has a leak. FYI, by inspection of the code at interest, unless corruption of the data structure occurs only in the threaded case (i.e. the mutex locking isn't done right), we see that the allocator should leak whether it is configured for threading or not.
#include <list> #include <map> #include <utility> int main () { std::list<int> l; std::map<int, int> m; for (int k = 0; k < 4; k++) { for (int i = 0; i < 10000000; i++) l.push_back (int()); std::map<int, double> *m2 = new std::map<int, double>; for (int i = 0; i < 10000000; i++) l.pop_front (); for (int i = 0; i < 1000000; i++) m.insert (std::make_pair (i, i*i)); for (int i = 0; i < 1000000; i++) m.erase (i); for (int i = 0; i < 1000000; i++) m2->insert (std::make_pair<int, double> (i, i*i)); delete m2; } return 0; } In my environment, I see no memory leak with this STL configuration patch. A memory leak in this context refers to memory demands growing each time the k loop is rerun. Due to the caching of memory by the STL allocator, by design, it is not fair to call growth in cycle k=0 of the k loop a memory leak. We need to make sure that any reports from users have this same understanding. Results: Compiler (options) - memory usage observed - running time (u+s in all cases) 2.95.2 (-O3) - end of k=0 cycle 219M, never grew after - 62.2 seconds 2.95.2 (-O3 -pthread -D_PTHREADS [1]) - ditto - 185.8 seconds 2.95.2 (-O3 -D__USE_MALLOC) - grew to 156M, end of k=0 cycle ~0M - 230.3 seconds [-D__USE_MALLOC dominates -D_PTHREADS, thus mixed results are not shown.] mainline (-O3) - end of k=0 cycle 219M, never grew after - 57.2 seconds mainline (-O3 -pthread -D_PTHREADS [1]) - ditto - 173.7 mainline (-O3 -D__USE_MALLOC) - grew to 156M, end of k=0 cycle ~0M - 227.2 secs [-D__USE_MALLOC dominates -D_PTHREADS, thus mixed results are not shown.] [1] Note: Once a patch to arrange for the default STL allocators to use the gthr.h abstraction layer is in place, no port should ever need this explicit mention of _PTHREADS to get a thread-safe allocation. This note exists to ensure that bad information does not propagate to the user population after that work is completed! Examples with more locality of reference may show even better run-time performance improvements with this patch. 
I have real C++ application code which makes heavy use of STL that sees a 3-4x improvement with this configuration patch. Next, a heavily modified and extended version of the code contained here: was studied. FYI, I do not know how to write a test that is absolutely guaranteed to fail if the mutex locking code in the STL allocator is missing and/or hosed. This is the best effort I have been able to produce but it can't even detect when _M_acquire_lock()'s implementation is empty on my platform (FYI, they added an extra level of indirection in the implementation of _Lock since the version of STL we used in libstdc++-v2). I think we would have to insert calls to sched_yield() inside the implementation of the default allocator to force it to blow up in most environments. // This multi-threading C++/STL/POSIX code adheres to rules outlined here: // // // It is believed to exercise the allocation code in a manner that // should reveal memory leaks (and, under rare cases, race conditions, // if the STL threading support is fubar'd). // // In addition to memory leak detection, which requires some human // observation, this test also looks for memory corruption of the data // passed between threads using an STL container. 
//
// To manually inspect code generation, use:
// /usr/local/beta-gcc/bin/g++ -E STL-pthread-example.C|grep _Lock
// /usr/local/beta-gcc/bin/g++ -E STL-pthread-example.C|grep _M_acquire_lock

#include <cstdlib>
#include <list>
#include <pthread.h>

using namespace std;

const int thread_cycles = 100;
const int thread_pairs = 10;
const unsigned max_size = 100;
const int iters = 10000;

class task_queue
{
public:
  task_queue ()
  {
    pthread_mutex_init (&fooLock, NULL);
    pthread_cond_init (&fooCond, NULL);
  }
  ~task_queue ()
  {
    pthread_mutex_destroy (&fooLock);
    pthread_cond_destroy (&fooCond);
  }
  list<int> foo;
  pthread_mutex_t fooLock;
  // This code uses a special case that allows us to use just one
  // condition variable - in general, don't use this idiom unless you
  // know what you are doing. ;-)
  pthread_cond_t fooCond;
};

void*
produce (void* t)
{
  task_queue& tq = *(static_cast<task_queue*> (t));
  int num = 0;
  while (num < iters)
    {
      pthread_mutex_lock (&tq.fooLock);
      while (tq.foo.size () >= max_size)
        pthread_cond_wait (&tq.fooCond, &tq.fooLock);
      tq.foo.push_back (num++);
      pthread_cond_signal (&tq.fooCond);
      pthread_mutex_unlock (&tq.fooLock);
    }
  return 0;
}

void*
consume (void* t)
{
  task_queue& tq = *(static_cast<task_queue*> (t));
  int num = 0;
  while (num < iters)
    {
      pthread_mutex_lock (&tq.fooLock);
      while (tq.foo.size () == 0)
        pthread_cond_wait (&tq.fooCond, &tq.fooLock);
      if (tq.foo.front () != num++)
        abort ();
      tq.foo.pop_front ();
      pthread_cond_signal (&tq.fooCond);
      pthread_mutex_unlock (&tq.fooLock);
    }
  return 0;
}

int
main (int argc, char** argv)
{
  pthread_t prod[thread_pairs];
  pthread_t cons[thread_pairs];
  task_queue* tq[thread_pairs];

  for (int j = 0; j < thread_cycles; j++)
    {
      for (int i = 0; i < thread_pairs; i++)
        {
          tq[i] = new task_queue;
          pthread_create (&prod[i], NULL, produce,
                          static_cast<void*> (tq[i]));
          pthread_create (&cons[i], NULL, consume,
                          static_cast<void*> (tq[i]));
        }

      for (int i = 0; i < thread_pairs; i++)
        {
          pthread_join (prod[i], NULL);
          pthread_join (cons[i], NULL);
#if defined(__FreeBSD__)
          // These lines are not required by POSIX since a successful
          // join is suppose to detach as well...
          pthread_detach (prod[i]);
          pthread_detach (cons[i]);
          // ...but they are according to the FreeBSD 4.X code base
          // or else you get a memory leak.
#endif
          delete tq[i];
        }
    }
  return 0;
}

Results (no memory leaks or crashes were observed in any case although they perhaps should have been seen in the last case):

2.95.2 (-O3 -pthread -D_PTHREADS): 80.9 u+s seconds
mainline (-O3 -pthread -D_PTHREADS): 75.4 u+s seconds
mainline (-O3 -pthread -D_PTHREADS -D__USE_MALLOC): 94.5 u+s seconds
mainline (-O3 -pthread): 49.9 u+s seconds (BUT this case was "unsafe at any speed" since no mutex was guarding the allocator's shared memory pool.)

Aside, when users provide -D_PTHREADS or -D__USE_MALLOC on the command line (or indirectly via a LIB_SPEC setting which maps in one of those defines based on -pthread or somesuch), the ABI of STL is changed! We need to be very careful when steering users to use those internal STL macros, if we ever want to have a stable ABI.

A semi-detailed analysis of a class of major run-time performance regressions from gcc 2.95.X involving STL test cases was posted to libstdc++-v3 ().

2001-05-30  Loren J. Rittle  <ljrittle@acm.org>

	* include/bits/c++config (__USE_MALLOC): Do not define it.
	Document why not and explain how a library user can get
	non-default behavior without defining it.
	* docs/html/23_containers/howto.html (Containers and
	multithreading): Explain the current configuration of the STL.
	Provide pointer to comment in c++config. Explain current
	situation of ABI modification in light of defining
	implementation-space macros that have traditionally been touted
	as the correct way to do things.
Index: ./include/bits/c++config
===================================================================
RCS file: /cvs/gcc/egcs/libstdc++-v3/include/bits/c++config,v
retrieving revision 1.24
diff -r1.24 c++config
104,106c97,117
< // This is the "underlying allocator" for STL. The alternatives are
< // homegrown schemes involving a kind of mutex and free list; see stl_alloc.h.
< #define __USE_MALLOC
---
> // Default to the typically higher-speed libstdc++-v2 configuration.
> // To debug STL code or to gain better performance in some threading
> // cases on some platforms, uncomment this line to globally change
> // behavior of your application code (this will require, at the very
> // least, recompilation of your entire application and, perhaps, the
> // entire libstdc++ library):
> //
> // #define __USE_MALLOC
>
> // However, once you define __USE_MALLOC, only the malloc allocator is
> // visible to application code (i.e. the typically higher-speed
> // allocator is not even available in this configuration). Note that
> // it is possible to force the malloc-based allocator on a
> // per-case-basis for some application code even when the above macro
> // symbol is not defined. The author of this comment believes that is
> // a better way to tune an application for high-speed using this
> // implementation of the STL. Here is one possible example displaying
> // the forcing of the malloc-based allocator over the typically
> // higher-speed default allocator:
> //
> // std::list <void*, std::malloc_alloc>

Index: docs/html/23_containers/howto.html
===================================================================
RCS file: /cvs/gcc/egcs/libstdc++-v3/docs/html/23_containers/howto.html,v
retrieving revision 1.3
diff -r1.3 howto.html
203a204,217
> The STL implementation is currently configured to use the
> high-speed caching memory allocator. If you absolutely think
> you must change this on a global basis for your platform to
> support multi-threading, then please consult all commentary in
> include/bits/c++config. Be fully aware that you change the ABI
> of libstdc++-v3 when you provide -D__USE_MALLOC on the command
> line or make a change to that configuration file. [Placeholder
> in case other patches don't make it before the 3.0 release: That
> memory allocator can appear buggy in multithreaded C++ programs
> (and has been reported to leak memory), if STL is misconfigured
> for your platform. You may need to provide -D_PTHREADS on the
> command line in this case to ensure the memory allocator for
> containers is really protected by a mutex. Also, be aware that
> you just changed the ABI of libstdc++-v3 when you did that.]
http://gcc.gnu.org/ml/libstdc++/2001-05/msg00384.html
Opened 7 years ago Closed 7 years ago #14981 closed (wontfix) Small enhancement to User.last_login timezone handling (version 1.3.0 beta 1) Description Attached is a small patch to enhance the timezone handling of django.contrib.auth to ensure that the last_login and date_joined datetime fields are stored in UTC format. I recognize these patches make an assumption about timezone that may represent a breaking change. But after reviewing the source code, it would appear the User.last_login and User.date_joined fields were never intended to store timezone. Instead, they relied on timezone information provided by the underlying OS. This introduced the possibility for subtle development --> production system differences. For example, we have been performing development on Windows, and host our production code on FreeBSD 8.x. Under Windows, even with settings.TIME_ZONE='UTC', the returned value from datetime.datetime.now() is always localtime. But on FreeBSD it results in the correct UTC results. The difference meant that code intended to display User.last_login on Windows was different from the coded needed to display these same values on FreeBSD. This subtle difference created a more complex debug environment. The solution in the patch is to switch to only call datetime.datetime.utcnow() in the django.contrib.auth.models.py code. I should mention that prior to submitting the patch, we reviewed the possibility of rolling the functionality into user profiles, and also the possibility of solving the issue by trying to somehow intercept the signal of the login() event. But each of these approaches seemed to come up short to what was essentially a simple solution. We acknowledge this may not be the "right" way to solve this issue. But it seemed like we should start with submitting a patch, and allow the Django team to make the final decision. If you'd prefer to see a different approach to solving this in a more "django appropriate" way -- just let me know. 
We're new to the django world, but it's a fantastic system, and we'd be pleased to follow experienced guidance.

Code Sample: Once this patch is applied, converting from UTC to local-timezone is a trivial affair using pytz. Here's an example of how to display last_login from a session that has already been authenticated.

from django.conf import settings
from django.contrib import auth
from django.http import HttpResponse
from django.contrib.auth.decorators import login_required
import pytz

@login_required
def show_last_login(request):
    ll_utc = pytz.utc.localize(request.user.last_login)
    ll_loc = ll_utc.astimezone(pytz.timezone(settings.TIME_ZONE))
    html = "<html><body>last login {0}</body></html>".format(ll_loc)
    return HttpResponse(html)

Patched django.contrib.auth.models.py code to store dates in UTC format
https://code.djangoproject.com/ticket/14981
Asked by: Async/Await problem

Question

I do have a problem with using async in my code. I have an async query that pulls data from the database. My code will look something like this:

// do a query to check if the data exist
var result = await MyAsyncCallToDatabase();
if(result==null)
{
    // insert the data to the database
}

Do I need to wait for my async call to finish getting all the data I want before doing the checking if it does return something? What I am experiencing right now is that it returns null, and then when I am trying to do the insert, which is inside the if condition, SQL Server returns an exception which says that I can't insert the duplicate data.

UPDATE: Provide more information

public class CustomerCreate
{
    private readonly ICustomerRepository _customerPo;

    public CustomerCreate(ICustomerRepository customerPo)
    {
        _customerPo = customerPo;
    }

    public async Task<int> CreateCustomer(PoCustomer custShipping)
    {
        var getCustomerShipping = await _customerPo.GetCustomerByKeyAsync(custShipping.CustomerId);
        if (getCustomerShipping == null)
        {
            return await _customerPo.InsertAsync(custShipping);
        }
        return -1;
    }
}

public interface ICustomerRepository
{
    Task<int> InsertAsync(PoCustomer customer);
    Task<PoCustomer> GetCustomerByKeyAsync(Guid key);
}

public class CustomerReposity : ICustomerRepository
{
    public async Task<int> InsertAsync(PoCustomer customer)
    {
        string sql = @"Insert into Customer
                       (FName, LName, BusinessName, Phone, Address1, Address2,
                        City, State, Zip, Email, CustomerId)
                       Values
                       (@FName, @LName, @BusinessName, @Phone, @Address1, @Address2,
                        @City, @State, @Zip, @Email, @CustomerId)";
        using (var con = new SqlConnection(ConnectionString))
        {
            return (await con.ExecuteAsync(sql, customer));
        }
    }

    public async Task<PoCustomer> GetCustomerByKeyAsync(Guid key)
    {
        string sql = "Select * from Customer Where CustomerId=@CustomerId";
        using (var con = new SqlConnection(ConnectionString))
        {
            return (await con.QueryAsync<PoCustomer>(sql, new { CustomerId = key })).SingleOrDefault();
        }
    }
}

Error Received: Violation of PRIMARY KEY constraint 'PK__MYTABLE__A4AE64D80D0838CF'. Cannot insert duplicate key in object 'dbo.MYTABLE'. The duplicate key value is (e002b592-0abd-e811-8392-90b11c601f63). The statement has been terminated.

Regards

All replies

You forgot one '=' sign. It should be if(result == null).

Greetings,
Chris
- Edited by DerChris88 Tuesday, October 16, 2018 9:21 PM

MyAsyncCallToDatabase() is truly an async method call. It could be something like this; I'm using Dapper, by the way.

public async Task<List<string>> ThisIsAnAsyncCall()
{
    string sql = @"SELECT * from Table1";
    using (var con = new SqlConnection(ConnectionString))
    {
        return (await con.QueryAsync<string>(sql)).ToList();
    }
}

Hi Dikong42,

Thank you for posting here. According to your description, please try to set the ID like below in your SQL database.

T-SQL

[Id] INT IDENTITY (1, 1) NOT NULL,

Your await looks fine. You have to wait for it to finish before you can check for null. The await unwraps the method call. The return type for MyAsyncCallToDatabase is going to be Task<T>. So if you don't await, you get back a Task (that isn't going to be null). The await effectively does this:

// Your code
var task = MyAsyncCallToDatabase();
task.Wait();
var result = task.Result;

// Or more concisely
var result = MyAsyncCallToDatabase().Result;

// All that is equivalent to this
var result = await MyAsyncCallToDatabase();

Note that for non-UI code you should almost always use ConfigureAwait(false) on the end of your await call.

The error you are receiving has nothing to do with the await you posted. The error is occurring because you're trying to insert data into your DB where one of the rows has a (database-defined) primary key that is already in the database. Hence you get a constraint error from the DB. The issue is with the code that is trying to insert into the DB.
I assume this is happening in your InsertAsync call, so put a breakpoint on that DB call. Then look at the insert statement it is generating. The value the error is complaining about is a GUID. My gut instinct is that it is your CustomerId column. You are inserting into a Customer table. Standard DB design says the primary key is either Id or TableId, so CustomerId is probably the primary key of the table. Since it is the primary key, the DB is responsible for setting it.

Remove the CustomerId from the list of columns and the list of values you're sending and see if the error goes away. You'll have to subsequently get the inserted ID from the DB, but that generally comes back from the results of the insert.

Michael Taylor

- Proposed as answer by Wendy Zang, Microsoft contingent staff, Moderator Tuesday, October 23, 2018 7:35 AM
- Unproposed as answer by Wendy Zang, Microsoft contingent staff, Moderator Friday, October 26, 2018 12:29 AM
- Proposed as answer by Wendy Zang, Microsoft contingent staff, Moderator Friday, October 26, 2018 12:29 AM
https://social.msdn.microsoft.com/Forums/en-US/5ec03780-5818-4e88-b9ba-25db5a75fd2c/asyncawait-problem?forum=csharpgeneral
Card is a container for text, photos, and actions in the context of a single subject.

import { Card } from '@nextui-org/react';

NextUI will wrap your content in a Card.Body component. You can change the full style towards a bordered Card with the bordered property. You can apply a fancy hover animation with the hoverable property. You can use the clickable property to allow users to interact with the entirety of its surface to trigger its main action, be it an expansion, a link to another screen or some other behavior. You can change the color of the Card with the color property. You can use the Divider component to split the Card sections. You can use the Card.Footer component to add actions, details or other information to the Card. You can use the cover prop and Card.Image component to add a cover image to the Card.Body. NextUI automatically applies object-fit: cover to the inner image. You can use the clickable property to allow users to interact with the Card.

type NormalColors =
  | 'default'
  | 'primary'
  | 'secondary'
  | 'success'
  | 'warning'
  | 'error'
  | 'gradient';

type NormalWeights = 'light' | 'normal' | 'bold' | 'extrabold' | 'black';

type ObjectFit =
  | 'contain'
  | 'cover'
  | 'fill'
  | 'none'
  | 'scale-down'
  /* Global values */
  | 'inherit'
  | 'initial'
  | 'revert'
  | 'unset';
https://nextui.org/docs/components/card
Using Visual Studio 2005 Tools for the Office System SE to Create Add-Ins with Custom Task Panes in PowerPoint 2007 Applies to: 2007 Microsoft Office System, Microsoft Office PowerPoint 2007, Visual Studio 2005, Visual Studio 2005 Tools for Office Second Edition Ken Getz, MCW Technologies, LLC May 2007 Code It | Read It | Explore It In this demonstration, you create a custom task pane that creates a slide, inserting a selected date into the title of the slide, and an Office Fluent Ribbon customization that adds a button to the Insert tab in Office PowerPoint 2007 to show and hide the task pane. To demonstrate the technique, follow these steps. To create a custom task pane In Visual Studio 2005, on the File menu, point to New,and then click Project. After you select the language (Visual Basic or C#), expand the Office node, and select Office 2007 Add-Ins. In the Templates pane, select PowerPoint Add-In. Name the new project CustomTaskPaneAddIn. Add the custom task pane. In the Visual Studio menu, click Project. Click Add User Control, name the control DateCustomTaskPane, and click Add. Because VSTO 2005 SE treats a standard User Control as a custom task pane designer, this action adds a new user control/custom task pane item to your project. In the ThisAddIn class, add an Imports statement or a using statement so that you can easily refer to classes within the Microsoft.Office.Tools namespace, which provides the CustomTaskPane class you use. To add a variable In the ThisAddIn class, add a variable that can refer to your custom task pane. To modify ThisAddIn_Startup In the ThisAddIn class, modify the ThisAddIn_Startup procedure. Add the following code to create a new custom task pane by using the DateCustomTaskPane class you just created, set its width, and make it visible. Save and run your project. Visual Studio 2005 loads a copy of PowerPoint, and runs your add-in, which displays the Select and Insert Date task pane. Quit PowerPoint when you are done. 
Add controls to the task pane After you create the empty task pane and verify that it works, you can add a MonthCalendar control and a Button control to the task pane. Users can select a date, click the button, and insert the selected date at the current location in the current slide. To add a MonthCalendar control and a Button control In the Solution Explorer window, double-click DateCustomTaskPane.vb or DateCustomTaskPane.cs, loading it into the designer window. Expand the size of the design surface for the task pane—it should be large enough to contain a MonthCalendar control and a Button control. Set the Size property of the task pane to 300, 400. In the Properties window, set the Font property for the task pane to Segoe UI, 8.25pt. If the Toolbox window is not visible, click View and then click Toolbox. From the Common Controls tab of the Toolbox window, drag a MonthCalendar control and a Button control onto the task pane. Set the Name properties for the controls to monthCalendar and insertButton. Modify the button's Text property to Insert the selected date. When you are done, the task pane should look like Figure 1.Figure 1. The completed task pane To add code that inserts the selected date into the current slide In the task pane designer, double-click the button, loading the code editor with the Click event stub created. In C# only, add the following statement to the top of the code file. To replace insertButton_Click Replace the existing insertButton_Click procedure stub with the following procedure. 
private void insertButton_Click(object sender, EventArgs e) { PowerPoint.Presentation presentation = Globals.ThisAddIn.Application.ActivePresentation; if (presentation != null) { PowerPoint.Slide slide = presentation.Slides.Add( presentation.Slides.Count + 1, PowerPoint.PpSlideLayout.ppLayoutText); slide.Shapes[1].TextFrame.TextRange.Text = monthCalendar.SelectionStart.ToShortDateString(); slide.Select(); } } This code retrieves a reference to the active presentation, if it exists. The code creates a new slide, adding it at the end of the current presentation. Finally, the code takes the SelectionStart property of the MonthCalendar control, and adds its value to the TextFrame in the TextRange of the first shape on the slide. For a text slide, that is the title of the slide. Save and run the project. In the custom task pane, select a date, and click the button. The task pane's code inserts the selected date as the title of a new slide. Quit PowerPoint and return to Visual Studio 2005. Display the custom task pane on demand What happens if the user closes the custom task pane? How does the user reopen it? In addition, what if a customer does not want the custom task pane displayed each time PowerPoint starts up? You need a way to display the task pane on demand. One way to do that is by using a toggle button on an Office Fluent Ribbon customization. The user can close the task pane, and so the state of the toggle button needs to reflect the state of the task pane. That is, you need a way to refresh the display of the button, based on the state of the task pane. Follow these steps to add this support. To add the Office Fluent Ribbon customization To add the Office Fluent Ribbon customization, in the Solution Explorer window, right-click the add-in project and from the context menu, select Add, and then click New Item. In the Add New Item dialog box, select Ribbon support. Click Add to insert the new Office Fluent Ribbon customization. 
Modify the new Ribbon1.xml file, replacing the existing content with the following XML. This customization adds a new group to the existing Insert tab on the Office Fluent Ribbon. On this tab, the markup adds a new toggle button displaying the text Date Task Pane. Close and save Ribbon1.xml. <customUI xmlns="" onLoad="OnLoad"> <ribbon> <tabs> <tab idMso="TabInsert"> <group id="MyGroup" label="How To"> <toggleButton id="toggleButton1" size="large" label="Date Task Pane" screentip="Display the date task pane" onAction="OnToggleButton1" imageMso="DateAndTimeInsert" /> </group> </tab> </tabs> </ribbon> </customUI> Load Ribbon1.xml as a resource The add-in must load the Ribbon1.xml file at runtime. By default, the template includes a procedure that handles this in a standard, although tricky, way. It is easier to load the Ribbon1.xml file as a resource. To load Ribbon1.xml as a resource In the Solution Explorer window, right-click the CustomTaskPaneAddIn project, and select Properties from the context menu. In the Properties pane, click the Resources tab. From the Solution Explorer window, drag the Ribbon1.xml file into the Properties pane. This action adds the Ribbon1.xml file as a project resource. Close the Properties pane, and click Yes when prompted to save. At runtime, the add-in calls a special procedure in order to load a copy of your Office Fluent Ribbon customization, and you must add that procedure. The add-in template includes the procedure—you must uncomment it. In the Ribbon1.vb or Ribbon1.cs file, uncomment the ThisAddIn partial class. To modify GetCustomUI In the Ribbon1 class, modify the existing GetCustomUI implementation so that it retrieves the Ribbon1 resource. In C#, you must expand the IRibbonExtensibility members code region to find the procedure. Add a callback procedure When you customize the Office Fluent Ribbon, you must add callback procedures to handle events of the Office Fluent Ribbon controls. 
In this case, the template created the procedure for you. The default behavior of the template Ribbon support adds a ToggleButton control to the Office Fluent Ribbon, and the template adds the callback procedure. To modify OnToggleButton1 In the Ribbon1 class, expand the Ribbon Callbacks code region to find the existing OnToggleButton1 procedure. You want the OnToggleButton1 procedure to toggle the visibility of the task pane. The Office Fluent Ribbon passes the isPressed value that indicates whether the toggle button has been pressed or not. Replace the existing procedure with the following code. Refresh the status of the button You also need a way to refresh the status of the button, based on the state of the custom task pane. To do that, you must call the InvalidateControl method of the Office Fluent Ribbon, passing the name of the control to refresh. To add a Refresh procedure In the Ribbon1 class, add the following procedure, which you call when the visibility of the task pane changes. Display the toggle button You want to determine whether the toggle button should appear pressed or not, based on the visibility of the task pane. The ToggleButton control provides an attribute, getPressed, that allows you to specify the name of a procedure it will call, as the Office Fluent Ribbon refreshes its display, to determine whether it should appear pressed. To add a GetPressed procedure Add the following procedure to the Ribbon1 class. To modify Ribbon1.xml From the Solution Explorer window, re-open the Ribbon1.xml file. Modify the XML, adding the getPressed attribute. 
<customUI xmlns="" onLoad="OnLoad"> <ribbon> <tabs> <tab idMso="TabAddIns"> <group id="MyGroup" label="My Group"> <toggleButton id="toggleButton1" size="large" label="Date Task Pane" screentip="Display the date task pane" onAction="OnToggleButton1" getPressed="GetPressed" imageMso="DateAndTimeInsert" /> </group> </tab> </tabs> </ribbon> </customUI> To add a Refresh procedure The custom task pane should refresh the Office Fluent Ribbon when it changes its visibility. The CustomTaskPane class raises the VisibleChanged event for this purpose. In the ThisAddIn class, add the following procedure. To hook up the event handler To hook up the event handler, add the following code to the end of the existing ThisAddIn_Startup procedure. To unhook the event handler To unhook the event handler, add the following code to the existing ThisAddIn_Shutdown procedure. Save and run the project. In the running instance of PowerPoint 2007, click the Insert tab. Verify that the task pane is visible, and that the button is pressed. "Unpress" the button, and verify that the task pane disappears. Press the button again, displaying the task pane. Close the task pane manually by clicking the x in the upper-right corner. Close PowerPoint when you are done, returning to Visual Studio. The steps in this walkthrough describe the details of building an add-in using VSTO 2005 SE, whether you are targeting PowerPoint, or any other of the supported Office 2007 products. It is important to realize that Office Fluent Ribbon customizations do not provide a direct means of interacting with controls on the Office Fluent Ribbon. For example, your code cannot select or unselect the toggle button when the user closes the task pane. Instead, you must force the Office Fluent Ribbon to refresh the control's display. Forcing a refresh causes the Office Fluent Ribbon to rerun its getPressed event handler, which determines from the visibility of the task pane how the task pane should be displayed. 
That is the only way to accomplish this sort of goal when customizing the Office Fluent Ribbon. You should also consider whether to have your task pane displayed as the application loads. In general, this is not a good idea. You should load the task pane only on demand. To make this change in the sample, remove the code that sets the Visible property of the task pane to True in the startup code. Finally, consider where to add your button that displays the task pane. Try to find an existing tab for your button. In this case, it makes sense to place the button on the existing Insert tab, because you are inserting data into the current presentation. Customizing the 2007 Office Fluent Ribbon for Developers (Part 1 of 3) Customizing the 2007 Office Fluent Ribbon for Developers (Part 2 of 3) Customizing the 2007 Office Fluent Ribbon for Developers (Part 3 of 3) PowerPoint Object Model Reference in the PowerPoint 2007 Developer Reference PowerPoint Object Model in the PowerPoint 2003 VBA Language Reference Microsoft PowerPoint 2000 in the Microsoft Office 2000 Developer Object Model Guide
https://msdn.microsoft.com/en-us/library/bb508942.aspx
of Engineering & Technology (DVSIET), Meerut
Lab Manual (ECS-552)
Prepared By.

Insertion Sort

Here is the program to sort the given integers in ascending order using the insertion sort method. Please find the pictorial tutor of the insertion sorting.

Logic: Here, sorting takes place by inserting a particular element at the appropriate position; that's why the name - insertion sorting. In the first iteration, the second element A[1] is compared with the first element A[0]. In the second iteration the third element is compared with the first and second elements. In general, in every iteration an element is compared with all the elements before it. While comparing, if it is found that the element can be inserted at a suitable position, then space is created for it by shifting the other elements one position up, and the desired element is inserted at the suitable position. This procedure is repeated for all the elements in the list.

If we complement the comparison condition in this program, it will give out the sorted array in descending order. Sorting can also be done by other methods, like selection sorting and bubble sorting, which follow in the next pages.

#include <stdio.h>
#include <conio.h>

void main()
{
    int A[20], N, Temp, i, j;
    clrscr();
    printf("\n\n\t ENTER THE NUMBER OF TERMS...: ");
    scanf("%d", &N);
    printf("\n\t ENTER THE ELEMENTS OF THE ARRAY...:");
    for(i = 0; i < N; i++)
    {
        gotoxy(25, 11 + i);
        scanf("\n\t\t%d", &A[i]);
    }
    for(i = 1; i < N; i++)
    {
        Temp = A[i];
        j = i - 1;
        /* test j first so that A[-1] is never read */
        while(j >= 0 && Temp < A[j])
        {
            A[j + 1] = A[j];
            j = j - 1;
        }
        A[j + 1] = Temp;
    }
    printf("\n\tTHE ASCENDING ORDER LIST IS...:\n");
    for(i = 0; i < N; i++)
        printf("\n\t\t\t%d", A[i]);
    getch();
}

Bubble Sort

Logic: The entered integers are stored in the array A. Here, to sort the data in ascending order, any number is compared with the next numbers for orderliness, i.e. the first element A[0] is compared with the second element A[1]. If the former is greater than the latter, the two are swapped; otherwise there is no change. Then the second element is compared with the third element, and the procedure is continued. Hence,
Then second element is compared with third element, and procedure is continued. Hence,after the first iteration of the outer for loop, largest element is placed at the end of the array. In thesecond iteration, the comparisons are made till the last but one position and now second largestelement is placed at the last but one position. The procedure is traced till the array length. If we complement the if condition in this program, it will give out the sorted array in descendingorder. Sorting can also be done in other methods, like selection sorting and insertion sorting, whichfollows in the next pages. #include<stdio.h>void main(){ int A[20], N, Temp, i, j; clrscr(); printf(“\n\n\t ENTER THE NUMBER OF TERMS…: “); scanf(“%d”,&N); printf(“\n\t ENTER THE ELEMENTS OF THE ARRAY…:”); for(i=0; i<N; i++) { gotoxy(25, 11+i); scanf(“\n\t\t%d”, &A[i]); } for(i=0; i<N-1; i++) for(j=0; j<N-i;j++) if(A[j]>A[j+1]) { Temp = A[j]; A[j] = A[j+1]; A[j+1] = Temp; } printf(“\n\tTHE ASCENDING ORDER LIST IS…:\n”); for(i=0; i<N; i++) printf(“\n\t\t\t%d”,A[i]); getch();} Quick Sort The. Good points Bad Points 1. If p < r then 2. q Partition (A, p, r) 3. Recursive call to Quick Sort (A, p, q) 4. Recursive call to Quick Sort (A, q + r, r) Note that to sort entire array, the initial call Quick Sort (A, 1, length[A]) PARTITION (A, p, r) 1. x ← A[p] 2. i ← p-1 3. j ← r+1 4. while TRUE do 5. Repeat j ← j-1 6. until A[j] ≤ x 7. Repeat i ← i+1 8. until A[i] ≥ x 9. if i < j 10. then exchange A[i] ↔ A[j] 11. else return j Partition selects the first key, A[p] as a pivot key about which the array will partitioned: The running time of the partition procedure is (n) where n = r - p +1 which is the number of keys in the array. 
#include <stdio.h>

#define MAXARRAY 10

void quicksort(int arr[], int low, int high);

int main(void)
{
    int array[MAXARRAY] = {9, 4, 7, 1, 8, 3, 10, 2, 6, 5};
    int i = 0;

    quicksort(array, 0, MAXARRAY - 1);

    for(i = 0; i < MAXARRAY; i++)
        printf("%d ", array[i]);
    printf("\n");
    return 0;
}

void quicksort(int arr[], int low, int high)
{
    int i = low;
    int j = high;
    int y = 0;
    int z = arr[(low + high) / 2];   /* compare value (pivot) */

    do
    {
        /* find member above ... */
        while(arr[i] < z) i++;
        /* find element below ... */
        while(arr[j] > z) j--;
        if(i <= j)
        {
            /* swap two elements */
            y = arr[i];
            arr[i] = arr[j];
            arr[j] = y;
            i++;
            j--;
        }
    } while(i <= j);

    /* recurse */
    if(low < j) quicksort(arr, low, j);
    if(i < high) quicksort(arr, i, high);
}

Merge Sort

Merge-sort is based on the divide-and-conquer paradigm. The Merge-sort algorithm can be described in general terms as consisting of the following three steps:

1. Divide Step: If the given array A has zero or one element, return A; it is already sorted. Otherwise, divide A into two arrays, A1 and A2, each containing about half of the elements of A.

2. Recursion Step: Recursively sort arrays A1 and A2.

3. Conquer Step: Combine the elements back in A by merging the sorted arrays A1 and A2 into a sorted sequence.

MERGE_SORT (A, p, r)
1. if p < r
2. then q ← ⌊(p + r)/2⌋
3.      MERGE_SORT (A, p, q)
4.      MERGE_SORT (A, q + 1, r)
5.      MERGE (A, p, q, r)

Analysis: assuming for simplicity that n is a power of 2, the running time satisfies T(n) = 2T(n/2) + Θ(n), which gives T(n) = Θ(n lg n).

Shell Sort

This algorithm is a simple extension of insertion sort. Its speed comes from the fact that it exchanges elements that are far apart (the insertion sort exchanges only adjacent elements). The idea of the Shell sort is to rearrange the file to give it the property that taking every hth element (starting anywhere) yields a sorted file. Such a file is said to be h-sorted.

SHELL_SORT (A)
h = 1
while h ≤ N/9 do
    h = 3h + 1
while h > 0 do
    for i = h + 1 to n do
        v = A[i]
        j = i
        while (j > h AND A[j - h] > v)
            A[j] = A[j - h]
            j = j - h
        A[j] = v
    h = h/3

The functional form of the running time for all Shell sorts depends on the increment sequence and is unknown. For the above algorithm, two conjectures are n(log n)^2 and n^1.25. Furthermore, the running time is not sensitive to the initial ordering of the given sequence, unlike insertion sort. Shell sort is the method of choice for many sorting applications because it has acceptable running time even for moderately large files and requires only a small amount of code that is easy to get working. Having said that, it is worthwhile to replace Shell sort with a sophisticated sort in a given sorting problem.

Heap Sort

A heap is an array that can be viewed as a complete binary tree: the root is stored at index 1, and the children of the node at index i sit at indices 2i and 2i + 1. Consider an example heap with 20 at the root, 14 as its left child, and 6 as the right child of 14. We'll go from the 20 to the 6 first.
The index of the 20 is 1. To find the index of the left child, we calculate 1 * 2 = 2. This takes us (correctly) to the 14. Now, we go right, so we calculate 2 * 2 + 1 = 5. This takes us (again, correctly) to the 6.

A heap of height h has between 2^h and 2^(h+1) - 1 nodes, so 2^h <= n <= 2^(h+1) - 1, which gives h <= lg n < h + 1; the height of an n-node heap is therefore floor(lg n).

Heapify (A, i)
1. l <- left[i]
2. r <- right[i]
3. if l <= heap-size[A] and A[l] > A[i]
4.    then largest <- l
5.    else largest <- i
6. if r <= heap-size[A] and A[r] > A[largest]
7.    then largest <- r
8. if largest != i
9.    then exchange A[i] <-> A[largest]
10.        Heapify (A, largest)

BUILD_HEAP (A)
1. heap-size[A] <- length[A]
2. for i <- floor(length[A]/2) downto 1
3.    do Heapify (A, i)

HEAPSORT (A)
1. BUILD_HEAP (A)
2. for i <- length[A] downto 2
3.    do exchange A[1] <-> A[i]
4.       heap-size[A] <- heap-size[A] - 1
5.       Heapify (A, 1)

Linear Search

In computer science, linear search or sequential search is a method for finding a particular value in a list, which consists of checking every one of its elements, one at a time and in sequence, until the desired one is found. If the value being sought occurs k times in the list, and all orderings of the list are equally likely, the expected number of comparisons is (n + 1)/(k + 1). For example, if the value being sought occurs once in the list, and all orderings of the list are equally likely, the expected number of comparisons is (n + 1)/2. Either way, asymptotically the worst-case cost and the expected cost of linear search are both O(n).

Binary Search

Binary search repeatedly halves the portion of a sorted array that could contain the searched value. Now we should define when the iterations should stop. The first case is when the searched element is found. The second one is when the subarray has no elements; in this case, we can conclude that the searched value isn't present in the array.

Example: find 6 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.

A huge advantage of this algorithm is that its complexity depends on the array size logarithmically in the worst case. In practice it means that the algorithm will do at most log2(n) iterations, which is a very small number even for big arrays. It can be proved very easily: indeed, on every step the size of the searched part is reduced by half, and the algorithm stops when there are no elements to search in.
Therefore, solving the following inequality in whole numbers: n / 2^iterations > 0, resulting in iterations <= log2(n).

#include <stdio.h>

#define TRUE 0
#define FALSE 1

int main(void)
{
    int array[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    int left = 0;
    int right = 9;
    int middle = 0;
    int number = 0;
    int bsearch = FALSE;
    int i = 0;

    printf("ARRAY: ");
    for (i = 1; i <= 10; i++)
        printf("[%d] ", i);

    printf("\nSearch for number: ");
    scanf("%d", &number);

    while (bsearch == FALSE && left <= right) {
        middle = (left + right) / 2;

        if (number == array[middle]) {
            bsearch = TRUE;
            printf("** Number Found **\n");
        } else {
            if (number < array[middle])
                right = middle - 1;
            if (number > array[middle])
                left = middle + 1;
        }
    }

    if (bsearch == FALSE)
        printf("-- Number Not found --\n");

    return 0;
}

Dynamic-Programming Algorithm: 0-1 Knapsack Problem

Let i be the highest-numbered item in an optimal solution S for W pounds. Then S' = S - {i} is an optimal solution for W - wi pounds, and the value of the solution S is vi plus the value of the subproblem. We can express this fact in the following formula: define c[i, w] to be the solution for items 1, 2, . . . , i and maximum weight w. Then

c[i, w] = 0                                       if i = 0 or w = 0
c[i, w] = c[i-1, w]                               if wi > w
c[i, w] = max(vi + c[i-1, w-wi], c[i-1, w])       if i > 0 and w >= wi

This says that the value of the solution to i items either includes the ith item, in which case it is vi plus a subproblem solution for (i-1) items and the weight excluding wi, or does not include the ith item, in which case it is a subproblem's solution for (i-1) items and the same weight. That is, if the thief picks item i, the thief takes vi value, can then choose from items 1, 2, . . . , i-1 up to the weight limit w - wi, and gets c[i-1, w-wi] additional value. On the other hand, if the thief decides not to take item i, the thief can choose from items 1, 2, . . . , i-1 up to the weight limit w, and gets c[i-1, w] value. The better of these two choices should be made. Although the 0-1 knapsack problem is a different problem, the above formula for c is similar to the LCS formula: boundary values are 0, and other values are computed from the input and "earlier" values of c.
So the 0-1 knapsack algorithm is like the LCS-length algorithm given in the CLR book for finding a longest common subsequence of two sequences. The algorithm takes as input the maximum weight W, the number of items n, and the two sequences v = <v1, v2, . . . , vn> and w = <w1, w2, . . . , wn>. It stores the c[i, j] values in a table, that is, a two-dimensional array c[0 . . n, 0 . . w], whose entries are computed in row-major order. That is, the first row of c is filled in from left to right, then the second row, and so on. At the end of the computation, c[n, w] contains the maximum value that can be picked into the knapsack.

for w = 0 to W
    do c[0, w] = 0
for i = 1 to n
    do c[i, 0] = 0
       for w = 1 to W
           do if wi <= w
                 then if vi + c[i-1, w-wi] > c[i-1, w]
                         then c[i, w] = vi + c[i-1, w-wi]
                         else c[i, w] = c[i-1, w]
                 else c[i, w] = c[i-1, w]

The set of items to take can be deduced from the table, starting at c[n, w] and tracing backwards where the optimal values came from. If c[i, w] = c[i-1, w], item i is not part of the solution, and we continue tracing with c[i-1, w]. Otherwise item i is part of the solution, and we continue tracing with c[i-1, w-wi].

Analysis

Theta(nw) time to fill the c-table, which has (n+1)(w+1) entries, each requiring Theta(1) time to compute. O(n) time to trace the solution, because the tracing process starts in row n of the table and moves up 1 row at each step.

void fill_sack()
{
    int a[MAXWEIGHT];          /* a[i] holds the maximum value that can be
                                  obtained using at most i weight */
    int last_added[MAXWEIGHT]; /* used to recover which objects were added */
    int i, j;
    int aux;

    for (i = 0; i <= W; ++i) {
        a[i] = 0;
        last_added[i] = -1;
    }

    for (i = 1; i <= W; ++i)
        for (j = 0; j < n; ++j)
            if ((c[j] <= i) && (a[i] < a[i - c[j]] + v[j])) {
                a[i] = a[i - c[j]] + v[j];
                last_added[i] = j;
            }

    aux = W;
    while ((aux > 0) && (last_added[aux] != -1)) {
        printf("Added object %d (%d$ %dKg). Space left: %d\n",
               last_added[aux] + 1, v[last_added[aux]],
               c[last_added[aux]], aux - c[last_added[aux]]);
        aux -= c[last_added[aux]];
    }
}

Maximum (A, n)
1. max <- A[0]
2. for i <- 1 to n - 1
3.    do if A[i] > max
4.          then max <- A[i]
5. return max

Minimum (A, n)
1. min <- A[0]
2. for i <- 1 to n - 1
3.    do if A[i] < min
4.          then min <- A[i]
5. return min
https://ru.scribd.com/document/49306593/DAA-LM
There are two changes in this release:

- AlgebraicField now refines SignedNumeric instead of Numeric. This should have no visible change for most users, because all conforming types (Float, Double, Float80, and Complex) already conform to SignedNumeric. However, users who have code that is generic over the AlgebraicField protocol can now use unary negation (or remove existing explicit SignedNumeric constraints from that code).

- The Real and Complex modules have been renamed RealModule and ComplexModule. If you import Numerics, then this change does not affect you. However, if you currently import either Real or Complex directly, you will need to update your import statements. (sorry!)

This is not a change that I make lightly; I would very much prefer to avoid this sort of churn, even though Swift Numerics hasn't yet declared 1.0. However, there are real limitations of the current name lookup system, which prevents use of some nice patterns when a module name shadows a type. E.g. with this change, a user who mostly only wants to work with complex doubles can do the following:

import ComplexModule
typealias Complex = ComplexModule.Complex<Double>

// Can now use the simpler name Complex for Complex<Double>:
func foo(_ z: Complex) -> Complex { ... }

// But can still get at the generic type when necessary:
let a = ComplexModule.Complex<Float>

Any Swift Numerics module that might have this ambiguity will be suffixed with Module in the future (just like the pattern of protocols being suffixed with Protocol when necessary to break ambiguity).
https://forums.swift.org/t/0-0-5-release-notes/33991
I have created a web page where the user can register and then edit their profile. You can add a photo if you want. Any questions or suggestions will be well received. Try the site by clicking here.

Can you share the code you used? I'm trying to figure out what's wrong with my code. I have a site that should require users to pay a subscription fee. So at first I just want to set some things:

1) I want to create a member profile page.
2) I want the login button to run a query that looks for a user ID in the collection I've created.
3) I want to redirect the users by their position: if the user exists in the paid members collection he needs to be redirected to a specific page, and if the user does not exist in the collection he needs to be redirected to the purchase page.

When I'm clicking the login button I created, nothing happens and it says I have a problem in my code.

The code I've used:
-------------------------------
import wixUsers from 'wix-users';
import wixData from 'wix-data';
import wixLocation from 'wix-location';
import wixWindow from 'wix-window';

export function profileButton_click(event) {
  wixLocation.to(`/PaidMembers/${wixUsers.currentUser.id}`);
}

export function loginbutton_click(event) {
  // user is logged in
  if (wixUsers.currentUser.loggedIn) {
    // log the user out
    wixUsers.logout()
      .then(() => {
        // update buttons accordingly
        $w("#loginButton").label = "Login";
        $w("#profileButton").hide();
      });
  }
  // user is logged out
  else {
    let userId;
    let userEmail;

    // prompt the user to log in
    wixUsers.promptLogin({ "mode": "login" })
      .then((user) => {
        userId = user.id;
        return user.getEmail();
      })
      .then((email) => {
        // check if there is a match for the user in the collection
        userEmail = email;
        return wixData.query("PaidMembers")
          .eq("_id", userId)
          .find();
      })
      .then((results) => {
        // if an item for the user is not found
        if (results.items.length === 0) {
          // redirect to pay for subscription
          wixLocation.to("/store");
        }
        // if there is a match redirect to lessons page
        else {
          wixLocation.to("/arabiclessons");
        }
      });
  }
}

Thanks!
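Not part of the original post, but one way to debug a chain like this: the redirect decision can be pulled out of the Wix promise chain into a plain function and tested on its own, away from the login flow. A minimal sketch (the function name and the page paths are illustrative, taken from the post's code):

```javascript
// Decide where to send a user after login, given the query result
// from the "PaidMembers" collection. results.items is empty when
// the user has no paid-membership record.
function destinationFor(results) {
  if (results.items.length === 0) {
    // not a paid member: send to the purchase page
    return "/store";
  }
  // paid member: send to the lessons page
  return "/arabiclessons";
}
```

The final `.then` in the login handler then shrinks to a single call such as `wixLocation.to(destinationFor(results));`, which makes brace-matching mistakes like the one in the posted snippet much harder to introduce.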
https://www.wix.com/corvid/forum/community-discussion/testing-create-member-profile-pages-with-wix-code
Introduction

The MoveToScreen method can be used to move from the current Screen to another Screen. The MoveToScreen method will destroy the current Screen and all of its contained Entities, then begin loading the Screen passed to the MoveToScreen method.

Calling MoveToScreen

MoveToScreen accepts a string which is the fully-qualified name of the Screen you are moving to. The easiest (and safest) way to get the fully qualified name is to use the Type class. For example, if you want to move to a Screen called GameScreen, you would do:

this.MoveToScreen(typeof(GameScreen).FullName);

Why do we use "typeof"?

As mentioned above, you can move to a Screen by passing its fully-qualified name. To understand why we use typeof, we need to understand what "fully-qualified" means. First, let's start with a call to MoveToScreen that is not fully qualified:

this.MoveToScreen("GameScreen");

Fully-qualified means that the namespace is included in the name:

this.MoveToScreen("YourProject.Screens.GameScreen");

However, what happens if you remove GameScreen, or rename it, or move it to another namespace? The code above would no longer work because the qualified name may have changed to something like "YourProject.Screens.Subfolder.GameScreen". Using typeof allows us to get the fully qualified name even if it changes, and if the Screen no longer exists you will get a compile error, so you'll know right away instead of having to run the game to find out your code is broken. Very convenient!

Resetting a Screen

The MoveToScreen function does the following (in order):

- Destroys the current Screen
- Creates the next screen as specified by the argument to MoveToScreen.

MoveToScreen can be used to move to the same screen rather than a different screen. This results in the current screen being destroyed then recreated, resulting in the screen being reset to its original state. For example, consider a situation where the player's character is hit by a bullet.
In this case the GameScreen will reset itself:

// assuming there is a function to tell us if the player was hit by a bullet
// This also assumes that this code is written in the GameScreen and not in an entity
bool wasHitByBullet = GetIfHitByBullet();
if(wasHitByBullet)
{
    // GetType returns the GameScreen's type
    this.MoveToScreen(this.GetType());
}

MoveToScreen destroys the Screen

When the MoveToScreen method is called, the current Screen will be destroyed and the Screen that you are moving to will be created. The things that are destroyed are:

- Any files loaded through Glue for the current Screen or any Entities added to the Screen through Glue
- Any instances of Entities that have been added to Glue

If you have added objects that should be destroyed (such as additional Entities) in your custom code, then you need to make sure to destroy these objects in your CustomDestroy. For more information on whether you need to destroy an Entity or not, and how to destroy Entities which must be destroyed manually, see the Destroying Entities article.

"The Screen that was just unloaded did not clean up after itself" Exception

For more information on this error and how to clean it up, see Glue:Reference:Screens:Cleaning Up Screens.

Passing information to new Screens

The MoveToScreen method has only one parameter, the Screen to move to. It does not accept additional parameters. For information on how to pass additional information to new Screens, see the Proper Information Access tutorial.
https://flatredball.com/documentation/tools/glue-reference/screen/glue-reference-screens-movetoscreen/
Many programmers and students create small projects. These projects are often based on the same project template. In many cases, they need to modify the project settings, and every time they must do that, they lose time on repetitive configuration tasks. For these reasons, I have decided to create this article; it explains Visual Studio .NET template creation, and it can be a good example for creating more complex wizards. This information is not exhaustive; it explains the most common features of a custom wizard.

A typical project wizard template contains the files listed below:

- A file that identifies the wizard engine and provides context and optional custom parameters.
- A file that provides a routing service between the Visual Studio shell and the items in the wizard project.
- Templates.inf, a text file that contains a list of templates to use for the project. Note that Templates.inf is a template file itself (you can use directives to indicate which files to include in the project).
- The XML file that contains the project type information.
- The folder that contains files that will be included in the project. Note that the location on the hard drive is "$(ProjectDir)\Template Files\1033".
- The folder that contains the HTML files (user interface) used by the wizard, one file per wizard page. The first file is "Default.htm". The location on the hard drive is "$(ProjectDir)\Html\1033".
- Script files: the Custom Wizard creates a JScript file called "Default.js" for each project. It also includes "Common.js". These files contain JScript functions that give you access to the Visual C++ object models to customize a wizard.

Wait a moment, be patient. Before starting the template creation, you must create a default project. This project must contain your pre-requisites for a default project template. Write down in a text file everything that you do; this information will be useful when the wizard creates the new project. For this article, I have chosen to create a Win32 Console project for creating a C project.
My requisites are:

- typedef bool
- argc
- argv

To create the template structure is pretty simple; Visual Studio does it for you. First, start Visual Studio .NET and select "New Project" from the "File" menu. On the project dialog box, choose "Visual C++ Project" and "Custom Templates". On the Application Settings page of the "Custom Wizard", change the wizard friendly name to "C Console Project", check the "User interface" checkbox, and set the number of wizard pages to 2. The base is now created. Now, we can add the project files. In my example, I only add one file called "main.c". To add this file, create a file called "main.c" and save it to the "$(ProjectDir)\Templates\1033" folder. After that, right click on the "Template Files" directory in Visual Studio and select "Add existing item", then add the "main.c" file. Delete the "ReadMe.txt" and the "Sample.txt" files from the "Template Files" directory (from Visual Studio and from the hard drive). Now, we modify the "template.inf" file to reflect the last three modifications. For that, replace all the file content with "main.c". In this step, I only explain three modifications. All others are technically the same. These modifications are: On every project, you can find a "default.js" file. This file contains some functions that are called when the output project is created. On the default HTML file, there is a textbox "MAIN_FILE_NAME" that contains the file name; by default, it is "main". To allow Visual Studio to change the name of this file, you must modify the function "GetTargetName" as follows (the function is located in the default.js file).
function GetTargetName(strName, strProjectName)
{
    try
    {
        var strTarget = strName;
        if (strName == 'main.c')
            strTarget = wizard.FindSymbol('MAIN_FILE_NAME') + '.c';
        return strTarget;
    }
    catch(e)
    {
        throw e;
    }
}

You can define the default value for the HTML controls by adding a "SYMBOL" tag on the HTML file. For example, to set the default value of the "MAIN_FILE_NAME" control to main, add the following line in the HEAD section:

<symbol name="MAIN_FILE_NAME" type="text" value="main"></symbol>

For example, to add the "stdio.h" file into the "main.c" file, add a checkbox control on the "Default.htm" file:

<input id="INCLUDE_STDIO_H" type="checkbox" value="checkbox" name="checkbox">

After that, edit the "main.c" file and modify it according to the example below:

[!if INCLUDE_STDIO_H]
#include <stdio.h>
[!endif]

To view more possibilities, edit the file in the sample. To modify the project settings, it's a little more complicated. The function that does that is in the "default.js" file; its name is AddConfig(). When you create a project with the custom wizard, the generated AddConfig() function does not contain much information, but it contains the object declarations which we will use to change the project settings. In my project, I need to change the following settings (config=debug).
Below is a table with the settings to change and sample code to make the changes:

- Character Set: Use Multi-Byte Character Set
- Debug Information Format: Program Database for Edit & Continue (/ZI)
- Warning Level: Level 3 (/W3)
- Optimization: Disabled (/Od)
- Preprocessor Definitions: WIN32;_DEBUG;_CONSOLE
- Runtime Library: Single-threaded Debug (/MLd)
- Create/Use Precompiled Headers: Not Using Precompiled Headers
- Enable Incremental Linking: Yes (/INCREMENTAL)
- Generate Debug Info: Yes (/DEBUG)

The code:

config.CharacterSet = charSetMBCS;
CLTool.DebugInformationFormat = debugOption.debugEditAndContinue;
CLTool.WarningLevel = warningLevelOption.warningLevel_3;
CLTool.Optimization = optimizeOption.optimizeDisabled;
CLTool.PreprocessorDefinitions = "WIN32;_DEBUG;_CONSOLE";
CLTool.RuntimeLibrary = rtSingleThreadedDebug;
CLTool.UsePrecompiledHeader = pchNone;
LinkTool.GenerateDebugInformation = true;
LinkTool.LinkIncremental = linkIncrementalYes;

The JavaScript:

var config = proj.Object.Configurations("Debug|Win32");
var CLTool = config.Tools("VCCLCompilerTool");
var LinkTool = config.Tools("VCLinkerTool");

For more information, see the AddConfig() function in the "Default.js" file.
Some other Visual Studio features like add-ins or macros can help developers to organize their work. I encourage developers to look at these different possibilities to eliminate the non- interesting things and concentrate on more significant.
https://www.codeproject.com/Articles/13745/How-to-create-a-custom-project-template-using-Visu
I am about to write a Python 3 solution, but the first thing I write is a test case for my function, based on the provided sample:

def test_provided_1(self):
    self.assertEqual('San Francisco\nHello World',
                     solution(2, ['Hello World', 'CodeEval', 'Quick Fox', 'A', 'San Francisco']))

This problem uses a different structure from the CodeEval standard, so I change the main script accordingly, to extract the output size from the first line and to put all the other lines, stripped of their terminating newline character, in a list of strings.

data = open(sys.argv[1], 'r')
top = int(data.readline())
lines = [line.rstrip('\n') for line in data]

Having prepared the input data in this way, my solution is pretty compact:

def solution(size, lines):
    lines.sort(key=len, reverse=True)  # 1
    del lines[size:]  # 2
    return '\n'.join(lines)  # 3

1. I use the specific list sort() method instead of the built-in sorted() function because I'm happy to modify the existing list. Using sorted() would have created a new list. In a real world application, sorted() is usually the preferred approach, because we don't want to mess with the data owned by the caller. I sort the strings by their length, by passing the function len in the key parameter so that it is used to compare the strings. And I want to get the longest on top, so I reverse the natural (shorter-first) order.
2. I want to output just the top 'size' lines. The easiest way to do that is to remove all the other elements from the collection. Here I do it using the handy del operator.
3. Finally, I join the surviving elements in the list on the newline, since I have been asked to present each element on a different line.

After the solution was accepted with full marks, I pushed the test case and the actual Python script to GitHub.
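As a side note (not from the original post), the sort-then-truncate step can also be written with heapq.nlargest, which avoids sorting the whole list when only a few lines are needed:

```python
import heapq


def solution_heapq(size, lines):
    # Pick the 'size' longest lines without fully sorting the input;
    # nlargest runs in O(n log size) rather than O(n log n), and is
    # documented as equivalent to sorted(lines, key=len, reverse=True)[:size].
    top = heapq.nlargest(size, lines, key=len)
    return '\n'.join(top)
```

Since nlargest mirrors a stable reverse sort, equal-length lines keep their input order, so the result matches the sort-based version on the provided sample.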
http://thisthread.blogspot.com/2017/01/codeeval-longest-lines.html
Java Language Integrity & Security: Fine Tuning Bytecodes

In Scope

Two of the issues mentioned earlier in the article were that of performance and the protection of intellectual property. These issues dovetail well into this discussion of code readability. The main point revolves around the statement made that creating descriptive attribute names makes code more readable. This in itself probably can't be disputed; however, arguments can be made that creating descriptive attribute names is not necessarily the best for performance and the protection of intellectual property. It is pretty easy to see that if you have 100 attribute names in your application, and each one averages 7 characters, you have to obtain storage for at least 700 characters. Yet, just by reducing the average number of characters to 3, you only need 300 characters. Obviously, you have a savings of over one half. This may seem trivial, and in many cases it is; however, for hardware with small memory footprints, memory usage like this can add up—in this example, you are only talking about 100 attributes. For example, in the Performance application, there is a class attribute named companyID, the Employee() method has an attribute named employeeNumber, and the Finance() method has an attribute named balance. All three of these attributes have totally separate scope. In fact, you could have named all three of the attributes companyID, or even simply a. Assume that you do name all of these attributes a. Although there would not be any compiler confusion with the attributes in the two methods, the class variable would lose the precedence battle with the tighter scope of the methods. Thus, you must get used to including the this pointer in your code as is done in Listing 4.

Note: The this pointer, unfortunately named perhaps, means that you use the scope of the object. Thus, the code this.a simply means to use the attribute a defined at the class level.
The code including the this pointer is highlighted in red.

public void Employee(int number) {
    int employeeNumber = number;
    System.out.println("\nInside Employee");
    System.out.println("companyID = " + this.companyID);
    System.out.println("employeeNumber = " + employeeNumber);
}

public void Finance(double bal) {
    double balance = bal;
    System.out.println("\nInside Finance");
    System.out.println("companyID = " + this.companyID);
    System.out.println("balance = " + balance);
}

Listing 4: The Example Application Using the this Pointer

Using the this pointer makes the behavior of the code a bit more obvious, and it provides the groundwork for the concept we explore next. See how far you can go by naming everything you can in the application to the name a.

Obfuscating the Code

One of the ways that you can attempt to protect the code's intellectual property is to make the code harder to read. This is not the same as mangling the code or encrypting the code. Mangling or encryption implies that the code has to be decoded—perhaps with an algorithm. You can explore these concepts later; however, at this point you are just going to take a simple first step by making the code more difficult to follow. The act of making the code more difficult to read, or less clear, is sometimes called obfuscation. You can proceed in three steps. The first step is to change all the attributes to a. This is accomplished in Listing 5.
public class Performance {
    public static void main(String args[]) {
        CompanyApp app = new CompanyApp();
        app.Employee(2001);
        app.Finance(3001.0);
        System.out.println();
    }
}

class CompanyApp {
    private int a = 1001;

    public void Employee(int number) {
        int a = number;
        System.out.println("\nInside Employee");
        System.out.println("companyID = " + this.a);
        System.out.println("employeeNumber = " + a);
    }

    public void Finance(double bal) {
        double a = bal;
        System.out.println("\nInside Finance");
        System.out.println("companyID = " + this.a);
        System.out.println("balance = " + a);
    }
}

Listing 5: The Example Application, Obfuscating the Attribute Names

With this change, not only are all attributes a single character in length, they are all the same character. Besides savings pertaining to the length of the attributes, there are ramifications internally as to how the compiler stores and represents attributes—you will learn about this in later articles. Although this code may not be as human friendly as the previous version, it behaves exactly the same. When you run this example, you will get the output in Figure 2. One interesting exercise you can perform here pertains to the use of the this pointer. If you take out the this pointer, you will get different results. For example, in the Finance() method, the following code will produce incorrect results because both lines will bind to the method variable, so the output for the companyID is incorrect.

System.out.println("companyID = " + a);
System.out.println("balance = " + a);

As stated earlier, this exercise reinforces the concept of scope quite well. The use of scope is fundamental to object-oriented development, yet it is one of the most difficult concepts for beginning students to grasp. Even advanced developers find the variations are tricky at times. It is fortunate that not only is scope something you must understand; as you are finding, you can use it to your advantage.
http://www.developer.com/design/article.php/10925_3669651_2/Java-Language-Integrity-amp-Security-Fine-Tuning-Bytecodes.htm
I recently read this post by Elian and two of the things hit me like a ton of bricks: I have to work with really nasty data. A typical record looks like the following:

key : /C=US/A=BOGUS/P=ABC+DEF/O=CONN/OU=VALUE1/S=Region/G=
 +LimbiRe
 +gion
Alias-16 : WT:Limbic_Region
Alias-17 : SMTP:Limbic._.Region@nowhere.com
Alias-18 : /o=A.B.C.D./ou=Vermont Ave/cn=Recipients/cn=wt/cn=Li
 +mbic_Regi
 +on

I am typically batch processing records, and not obtaining a single record. The problem is that the only tools we currently have for this are shell scripts with a lot of VERY ugly sed. Even then, there are a great deal of limitations. I have written a few custom scripts in Perl, but they really aren't re-useable as each task is different. This seems like a perfect place to use a module. Here are some issues I can see up front:

These are the main things I will need to do:

Here is what I envision:

#!/usr/bin/perl -w
use strict;
use DBParser; # My new module I haven't built yet

open (OUTPUT,"> /tmp/somefile") or die $!;
select OUTPUT;
$|++;
$/="";
while (<>) {
    my $key = DBParser::KeyGrab($_);
    my %record = DBParser::DBParse($_);
    next unless ($record{$key}->{Key}->{Surname} eq "Region");
    $record{$key}->{Tel} = "(123) 456-7890";
    print DBParser::PrintRecord;
}

I certainly don't want anyone to write this module for me - the whole point in the project is to learn. I do however want pointers, advice, snippets of code, etc. Feel free to reply as though this was a meditation, if all you have to offer is methodologies and not actual technical solutions. Thanks in advance - L~R

This seems like a reasonable place for a module. One thing to note is that in your sample interface you're basically splitting the logic between the module and the calling code. That the caller controls opening and reading the file, pulling out a hash for each entry whose structure the caller must know in advance in order to read, indicates that this isn't a complete modularization of responsibilities.
To my mind, the clearest way to fix these issues is to go down the OO path, in which an instance of DBParser opens a file to read, controls iteration internally, and returns results that match your search criteria for you to print from the calling context or to pass to another module specifically for formatting output. Doing this gives a clean interface, separation of duties, and the ability to create further refined subclasses of both the parsing and printing components. The difficult part in doing this will be specifying the search criteria since the data is pretty hairy, so it would be good to start with a review of all the ways current scripts access it, and see if there's a method to the madness that you can tease out and formalize. Thank you - L~R This may seem like a red herring, but the first thing that worries me is your choice of name, 'DBParser'. The thing about OO is that it is about objects, so the first question to ask is "what is the object here?". My first guess, looking at the data, is that each record represents a person (or perhaps something more general - an "entity", perhaps), in which case I'd be inclined to call the object "Person" (though I'd probably avoid namespace problems by using a prefix that represented the company name, or perhaps my name or the project name, depending on the scope). An alternative approach, if this record format can be used to describe a variety of things, is to name the class instead after the name of the record format; in that case, it might be worth having subclasses for each of the major different types of thing that the format can represent. Now, what does the data in a typical file look like: is it multiple records each starting with the 'Key' attribute? 
If so, I could imagine wanting to write the code like:

use Person;
for my $person (Person->parse_from_file('/tmp/somefile')) {
    next unless $person->surname eq 'Region';
    $person->tel('(123) 456-7890');
    print $person->text;
}

Let me be clear: this is just how I like to write my code, and other people (including yourself) will doubtless have different prejudices. The above code assumes that Person::parse_from_file() knows how to read a sequence of records from a file, turn each one into a "Person" object, and return the resulting list. It also assumes that these objects are opaque, so that all access is via methods: you can choose to make them transparent hashrefs with documented keys, but then (for example) you always need to do the work to split up the 'Full Name' key so that the 'Surname' key will be there in case someone looks at it, and it probably means you can't allow modification both by way of the 'Surname' field and directly in the 'Full Name' field, because by the time you need to write the record back out you won't know which value is correct. I tend to like what are sometimes called "polymorphic get/set accessors", which means that you can use the same method either without arguments to fetch the value, or with an argument to set it to a new value. Some others prefer to split such functionality into two methods, eg tel() and set_tel(). I'm sure there are many other aspects worth talking about, but these are just some initial thoughts.

This appears to be what you have gleaned from my poor attempt at explaining this. As far as I am concerned, I do not have a preference on how the code should look as I am completely inexperienced at this. I appreciate the information, but I really do not understand how to code the opaque objects as you suggest. I know that the full key will always be static, even if the broken out pieces change as it will be printed externally. If you could show me some code to illustrate this - I would be very appreciative.
If not, what you have already done is appreciated. You do not have to use my data to create the opaque object - just show me a template to see the methodology. I am a fairly adept student.

Cheers - L~R

Ok, let's assume that the opaque object is implemented internally as a hashref, and that the fullname has a simple format of "surname, initials". Here's a simplistic approach:

    package Person;

    sub fullname {
        my $self = shift;
        if (@_) {
            $self->{fullname} = shift;
        }
        return $self->{fullname};
    }

    sub initials {
        my $self = shift;
        if (@_) {
            $self->fullname(join ', ', $self->surname, shift);
        }
        return (split /, /, $self->fullname, 2)[1];
    }

    sub surname {
        my $self = shift;
        if (@_) {
            $self->fullname(join ', ', shift, $self->initials);
        }
        return (split /, /, $self->fullname, 2)[0];
    }
    [download]

In practice, I'd write it a bit differently: I'd probably have many methods very similar to fullname(), and might well generate them rather than write each one out explicitly. Also, I'd probably cache the derived information like surname and initials, to avoid recalculating them each time, in which case I'd need to be careful to decache that information when the source (fullname in this case) changed.

I'm surprised that you don't want the module to parse the data for you, since that seems to be a chunk of code that you'd otherwise need to repeat everywhere you deal with these records. But likely I've misunderstood what you're trying to do. I guess the most important thing, which I should have said before, is that documentation is the key, particularly in perl: the docs for your class will say how you're allowed to use the object, and what you're allowed to assume about it. And in general, anything that the docs don't say, you are not allowed to do or assume when using the class or its objects in other code.

I do not think (I could be wrong here) that the filter method should be part of the module.
As in:

This allows the greatest flexibility over the filtration process as I do not know of all the ways it is currently being used, let alone all the ways that it might be filtered on in the future. I really like the idea of having a default format method, but allowing it to be dynamic. This has really given me something to think about - would you mind critiquing some very bad code as soon as I get started? I have never built an object before, so I know my first attempt will be bad. If not - that is ok too.

Thanks again and cheers - L~R

yes, an object for your chunk-o-data. but if your stream-o-data isn't likely to change i would say no object for the parser. just have your object's creation method take a whole chunk-o-data.

    package ChunkOData;

    sub from {
        my ($class, $chunk) = @_;
        $chunk =~ s/\n\t//g;    # continuation lines are easy
        my %self;
        # parse $chunk like you already know how
        # shove it into a hash
        return bless \%self, $class;
    }

    # write some accessors
    # write some common useful junk

    package main;

    local $/ = '';
    while (my $chunk_text = <>) {
        my $chunk = ChunkOData->from($chunk_text);
        next unless $chunk->type eq 'UR';
        $chunk->owner('me');
        if ($chunk->is_a_certain_type) {
            $chunk->do_some_standard_thing;
            $chunk->do_something_else($with_my_info);
            subroutines_are_good($chunk);
        }
        $chunk->print;
    }
    [download]

if your stream-o-data is blank-line separated (or other $/ -able format) this is a simple way to get started. you might also use one of the Order-keeping Hash modules from the CPAN in an object for your key field. then you could do something like:

    my $key = $chunk->key;
    next unless $key->{OU} eq 'VALUE1';
    my $otherkey = $chunk->key_as_string;   # X=foo/Y=bar/..
    [download]

Thanks a million! Cheers - L~R

Probably you don't need to write any new objects if you can use a few of these modules. Your LDAP data appears to be in LDIF format, which is covered in RFC 2849. There is Net::LDAP::LDIF which may do exactly what you need, which is to turn LDIF text into a perl LDAP object.
It should work perfectly the first time! - toma

This code is almost identical to the code in the synopsis for Net::LDAP::LDIF. I just added a call to Data::Dumper to print the entry object and its structure.

    use strict;
    use warnings;
    use diagnostics;
    use Data::Dumper;
    use Net::LDAP::LDIF;

    my $ldif = Net::LDAP::LDIF->new( "file.ldif", "r", onerror => 'undef' );
    while ( not $ldif->eof() ) {
        my $entry = $ldif->read_entry();
        if ( $ldif->error() ) {
            print "Error msg: ", $ldif->error(), "\n";
            print "Error lines:\n", $ldif->error_lines(), "\n";
        }
        else {
            print Dumper($entry);
        }
    }
    $ldif->done();
    [download]

    dn: /C=US/A=BOGUS/P=ABC+DEF/O=CONN/OU=VALUE1/S=Region/G=LimbRegion
    Alias-16: WT:Limbic_Region
    Alias-17: SMTP:Limbic._.Region@nowhere.com
    Alias-18: /o=A.B.C.D./ou=Vermont Ave/cn=Recipients/cn=wt/cn=Limbic_Region

I have a couple of questions and, depending on your answers, a suggestion for how to simplify the problem.

First, you said that the only thing guaranteed to be unique was the key, but you were talking about uniqueness among all the records. In your example, the field names are all unique within the record. Is that the case for every record? If so, it seems to me that a record can be conveniently represented as a hash.

Second, it sounds to me from the description, though you don't really expressly say this, that you generally only need to look at one record at a time. If I'm understanding right here, then creating an object per se may be an unnecessary complication. It sounds to me like all you need is two functions: one that takes an open filehandle (as a glob maybe), reads off the next record, and returns a reference to a hash, and one that takes a reference to a hash and returns a string. Depending on what you need to do, another routine or several might be in order for testing records (e.g., a routine that takes a hashref and a string and returns the number of Alias fields in the hash whose values match the string).
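For comparison across languages, that two-function approach (a reader that turns one record into a hash/dict, and a formatter that turns it back into text) can be sketched roughly as below in Python. The field layout — blank-line-separated records, "Field: value" lines, tab-indented continuation lines — is assumed from the sample data above, and every function name here is invented for illustration:

```python
def read_record(text_block):
    """Parse one blank-line-delimited record into a dict.

    Continuation lines (beginning with a tab) are spliced onto the
    previous line before splitting each line on the first ': '.
    """
    joined = text_block.replace("\n\t", "")  # join continuation lines
    record = {}
    for line in joined.splitlines():
        if not line.strip():
            continue
        field, _, value = line.partition(": ")
        record[field] = value
    return record


def format_record(record):
    """Render a record dict back into 'Field: value' lines."""
    return "\n".join("%s: %s" % (f, v) for f, v in record.items())


def count_alias_matches(record, needle):
    """Example test routine: how many Alias-* fields mention `needle`?"""
    return sum(1 for f, v in record.items()
               if f.startswith("Alias-") and needle in v)
```

Note that a plain dict keeps insertion order in modern Python (3.7+); in an older language or runtime you would reach for an order-preserving mapping, much like the order-keeping hash modules mentioned above, so the record prints back out in its original field order.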
I know it's heresy to some to suggest not using OO where it's possible to use OO, but it just seems unnecessary here, to me. The only thing that makes me think I might be wrong, and that OO might in fact be a Good Idea, is that you didn't show what delimits records in the files you're reading. If there's no delimiter, then you are going to be reading until you get the key for the next record, which you then have to save for when you read that record. It is of course possible to do this without real OO, but it's awkward, since it involves a persistent variable (the one-line buffer) that needs to be associated with the specific file in question. If you never have more than one of these files open at the same time you could get by with a magic global ($main::MY_DB_PARSING_PERSISTENT_LINE_BUFFER or whatnot), but that's a kludge, and if you ever need to work through more than one of these files at the same time it will break. It is possible to get around that too, by using the filehandle as a key into a magic global hash, but now we're doing something arguably almost as complex as OO, so I'm not sure this really saves anything. But it is an option to consider. If your records are delimited by some magic marker in the files (e.g., a blank line), then this problem goes away, and you can just have a couple of routines, as I said.

Perhaps you may find a couple of my tutorials on modules useful: Simple Module Tutorial, A Guide to Installing Modules, and How to make a CPAN module Distribution.

cheers

tachyon

s&&rsenoyhcatreve&&&s&n.+t&"$'$`$\"$\&"&ee&&y&srve&&d&&print

another non-OO way of doing things popped into my head.
there's a module for processing NetFlow records (module CFlow out of the flow-tools package, not on CPAN) that does things like this:

    sub match_func {
        return 0 unless $bytes > 5000;
        return 0 unless $src_port == 80;
        # do_something with matched
        return 1;
    }

    CFlow::loop( \&match_func, $filehandle );
    print "matched $CFlow::match_count records\n";
    [download]

since you generally work with a single record, if the fields of the record are unique... forget all of the OO stuff and use globals. the 'loop' routine takes a coderef to be run after each record is parsed (and shoved into the global variables) and a filehandle (if filehandle is undef, read STDIN; if filehandle is a string, open and use that file). the coderef returns 0 if the record wasn't interesting, else it does whatever and returns 1 (so the module can keep track of how many records matched). while not-OO, it does do an excellent job of hiding the details from the user, and eliminates all of the dereferencing ($chunk->type() just becomes $type), which makes it easy to write quick one-off scripts.

    sub fix_building {
        return 0 unless $building eq 'FOO';
        $building = 'BAR';
        print_rec;
        return 1;
    }

    DBParserThingy::loop( \&fix_building );
    [download]
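The same callback-driven shape translates almost directly to other languages. Here is a rough Python sketch of the idea (all names are invented for illustration; nothing here comes from the CFlow module itself) — the loop owns iteration over parsed records, and the caller supplies a function that returns a truthy value for records it considers interesting:

```python
def loop(match_func, records):
    """Run match_func over each parsed record and count the matches.

    match_func should return a falsy value for uninteresting records,
    and truthy after it has done whatever it wants with a match.
    """
    matched = 0
    for record in records:
        if match_func(record):
            matched += 1
    return matched
```

A caller might then write something analogous to fix_building above: a function that bails out early on uninteresting records, mutates the matching ones, and returns True so the loop can keep its match count.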
http://www.perlmonks.org/?node=238963
16 January 2013 22:10 [Source: ICIS news]

HOUSTON (ICIS)--

Although

The Gulf coast is part of the 11th district in

Energy production remained high despite a decline in the rig count, driven by low prices for natural gas, the Fed said.

Oil prices are high enough to support current production, but, because of volatility, some companies are nervous about drilling in higher cost fields, the Fed said.

For the country, all 12 districts reported modest or moderate economic growth, with the northeast rebounding from Hurricane Sandy, the Fed said.

Consumer spending rose, with holiday sales rising modestly from 2011, the Fed reported. However, those sales did not meet the expectations of most districts. Automobile sales were either steady or stronger in nearly every district, although dealers were concerned that uncertainty in US fiscal policy could discourage people from buying
http://www.icis.com/Articles/2013/01/16/9632502/us-petchem-production-rises-year-on-year-fed.html
THE SQL Server Blog Spot on the Web.

(For those of you who haven't seen yet, we announced SQL Server code-named "Denali" at the 2010 PASS Summit. You can get information about it, and download it.)

From SQL Trace to Tracing with Extended Events

We've ported the complete set of diagnostic events:

- How To: View the Extended Events Equivalents to SQL Trace Event Classes
- How to: Convert an Existing SQL Trace Script to an Extended Events Session

These should get you started examining how to create equivalent event sessions for SQL Trace sessions you're already using. In a future post I'll discuss this in more detail and describe how event fields and actions are related to SQL Trace columns.

Exposing ourselves to Object Explorer

The other menu options are self-explanatory or standard SSMS menu commands.

Beyond the DDL

Microsoft.sqlserver.management.xevent namespace on MSDN. I'll be posting some code samples to this blog in the future. We've also exposed this API through PowerShell, so if you're one of the new scripting elite, we've got you covered. You can find some details of the PS provider in the BOL topic Using the PowerShell Provider for Extended Events.

So that's the run down of Extended Events in CTP1. I'll follow-up with some posts that go into specifics for these areas and there is more to come in the next CTP so stay tuned.

- Mike
http://sqlblog.com/blogs/extended_events/archive/2010/11/18/what-s-new-for-extended-events-in-sql-server-code-named-denali-ctp1.aspx
Coordinates the process of rendering a single image.

#include <mitsuba/render/renderjob.h>

Coordinates the process of rendering a single image. Implemented as a thread so that multiple jobs can be executed concurrently.

Create a new render job for the given scene. When the Resource ID parameters (sceneResID, sensorResID, ...) are set to -1, the implementation will automatically register the associated objects (scene, sensor, sampler) with the scheduler and forward copies to all involved network rendering workers. When some of these resources have already been registered with the scheduler, their IDs can be provided to avoid this extra communication cost.

Virtual destructor.

Cancel a running render job.

Write out the current (partially rendered) image.

Retrieve this object's class. Reimplemented from mitsuba::Thread.

Get a pointer to the underlying render queue.

Get a pointer to the underlying render queue (const version)

Return the amount of time spent rendering the given job (in seconds)

Get a pointer to the underlying scene.

Get a pointer to the underlying scene (const version)

Are partial results of the rendering process visible, e.g. in a graphical user interface? Some integrators may choose to invest more time on generating high-quality intermediate results in this case.

Run method. Implements mitsuba::Thread.

Define whether or not this is an interactive job.

Wait for the job to finish and return whether it was successful.
http://mitsuba-renderer.org/api/classmitsuba_1_1_render_job.html
2012-11-28 Meeting Notes John Neumann (JN), Norbert Lindenberg (NL), Allen Wirfs-Brock (AWB), Waldemar Horwat (WH), Brian Terlson (BT), Luke Hoban (LH), Rick Waldron (RW), Eric Ferraiuolo (EF), Doug Crockford (DC), Yehuda Katz (YK), Erik Arvidsson (EA), Mark S. Miller (MM), Dave Herman (DH), Sam Tobin-Hochstadt (STH), István Sebestyén (IS), Andreas Rossberg (ARB), Brendan Eich (BE), Alex Russell (AR) Syntactic Support for Private Names BE/LH: Concern that the syntax required too much declaration LH: Can delve deeper when presenting TypeScript findings. We don't have any experience with the impact of this syntax. AWB: This is where we left it at the last meeting and I haven't had an opportunity to respond to feedback. Mixed discussion regarding syntactic pains and impact of @-names LH: Before it can go forward, someone will need to go back and address the existing issues. AWB: The concern is the double declaration and we discussed adding a private prefix for method declarations. YK: Alternatively, module bound private names, where declaration scopes to the module. Discussion of Kevin Smith's modular @-names proposal... AWB: The same logic applies to global vs. lexical namespace, and modules. WH: If you have an @-name somewhere in a module, is it scoped to this module YK: If you use an @-name it implies binding, and you need to explicitly export AWB: References an implicit declaration if doesn't already have one? YK: Yes. (draws example on whiteboard) module "view" { export class View { constructor(id) { this.@id = id; } } } DH: Points of clarification... - If we're talking about @-names explicitly scope to a module, no declaration nec. Does that make them private or unique? - You would have to declare specifically Reviewing Kevin Smith's gist (gist.github.com/3868131) on projector... ARB: How does this avoid the use of the same name twice? AWB: You're expected to know your module.
BE: (draw comparison to Go, Dart, CoffeeScript) DH: The goal is simply to avoid repetitive declaration lists LH: The notion that declaration of private names as a runtime construct is great, but the syntactic representation needs to be intuitive to "this is a private thing". So far, this feels at odds with those intuitions DH: Disagrees, this is a static concept and declarations within are intuitively static. WH/AR/STH: There is no precedence in the language to scope limited binding forms. DH: Painful if you're required to list out everything WH/YK: Sharing private names across classes is a problem that needs to be solved. DH: Implicit scoping is asking for trouble (gives examples) AWB: Common case where a symbol only needs to scoped to a class, for that case, we have a proposal on the table that covers everything except for fields without a lot of redundant declaration. Beyond the scope of a single class, it seems an explicit declaration at an appropriate level is desirable. DH: Tied to classes implicitly? But allowed to explicitly bound to other scopes AWB/DH: Clarification on private for classes. ARB: Would need to hoist the thing outside the class? AWB: Only if you're contributing to the thing outside of the class. BE: If you want an outer scope, put a block around it. ...Kevin's proposal seems to have no support here. LH/AWB: back to the @-names, we left it at "it's too chatty" LH: I want syntax for privacy, something less Discussion about import @iterator in classes... LH: The immediate problem w/ YK's example on board is that it's unclear that @id... Developers don't want to think about binding names as objects AWB: (modifies whiteboard) module "view" { private @id; // allows declaring a private name called "id" export class View { constructor(id) { this.@id = id; } } } Move private @id... module "view" { export class View { private @id; constructor(id) { this.@id = id; } } } BE: This all may be developer and future hostile.
WH: (agrees) BE: Developers want declarative form to define an instance field with a private name in one step. We separate those two. WH: I want to preserve the option of saying one thing BE: That's what Luke wants. LH: Most developers don't want to think about declaring their names before use. AWB: Then we can't address private within a class without addressing field declarations in a class. (moves private @id out of example) WH: We can't allow this to now be declarable in multiple contexts. LH: The current behavior of @-names is not what developers expect it to be. DH: I have contradicting experience. (ie. Racket define-local-member-name) LH: If you had to do this for every property that you're ever going to use...? DH: (Agrees with Yehuda's complaints) BE: Then we need field syntax first. LH/YK/WH: (nods of agreement) DH: Let's punt on this for ES6. Too late EA: How to do @iterator? DH: We can make that work, but this is too large and too late. There are too many questionable issues w/r to declaration for the sake of scoping a name, without creating a field. MM: Let's not discount ES7 development. AWB: I disagree and don't think that we should defer on addressing this. BE: If we wait and defacto standards emerge, then we're too slow. MM: Intend to advocate: - postpone explicit field declarations to ES7 and things that might conflict until. BE: Agree LH: The concern is @iterator? standard private names and public names WH: What is the point of contention for the existing field declaration proposal? BE/AR: (explanation of constructor declaration and hoisting issue) BE: (whiteboard) // Mark's proposal from a year ago // harmony:classes constructor(id) { private id = id; } MM: (reiterates rationale) LH: (whiteboard) // TypeScript... private id; constructor(id) { this.id = id; } AWB: What happens when there is a foo.id in the constructor? DH: So private is to statically reject programs that appear to poke at things that are assumed private?
LH: Confirm MM: So they foo.id will refer to the same id field? LH: Yes. ...Unusable for ES6 DH: Proposes... Exactly the semantics as shown, but syntactically only allows field declaration position in classes and import/export. WH: Important that you will want to declaratively (not imperatively) list fields. Guaranteed to be there in instances of a class. Those who want to lock down the class further might also want extra class attributes that disallow other expando properties, etc. (not the default case, but something you might want to do). LH: This is now a different discussion. If we introduce a form (re: whiteboard example)... DH: A future compatible subset of what we discussed before. Discussion about the baseline problem: Needing two lines to declare a private field. YK: Not sure why the example that Luke approves of is different from the given syntax. Discussion of computed object/class-literal properties, rejected due to - Runtime duplicate checking - Static object literal optimization LH: Computed properties might be worth revisiting EA: But you can't predict what the property name will be... AWB: And you still need to go through the declaration steps... BE/DH: Revisiting previous consensus on unique name for iterator DH: No revisit on consideration for string name for iterator LH: Revisit on square bracket computed properties. AWB: Square brackets are future hostile... Explanation of the [] Reformation strawman:object_model_reformation BE/DH: (volley re: import iterator) BE: If there is a standard library prelude in ES6 for @iter, that buys time to fully specify for ES7 AWB: Let me summarize... Take the @name proposal without the declaration. DH: Understand, but the thing we do now needs a coherent story AWB: Yes, we provide pre-declared @names and that's the end. LH: Normal lexical bindings? AWB: No, @name bindings DH: max/min: only in property name position WH: Can you stick an @name on any arbitrary object? 
AWB: Just getting rid of declaration BE: max/min AWB: Existing @names in spec - @hasInstance - @iterator DH: the whole benefit of unique names is no name clash BE: This is why we decided that iterator should be public, because there is no way to avoid existing properties. We want new things to have no clash. ARB: Want to use it for properties to avoid cross cut BE: Use for stratified traps. AWB: Any symbol is fine, doesn't need to private DH: Why would you decide to expose something as visible... AWB: There are cases where certain properties might want to be extended or customized. Discussion of reorganization of Meta Object operations in order to simplify Proxy specification. WH: The class proposals only permitted private properties on instances of the class. It was never the intent to allow you to create a private instance property @foo of instances of class C and then attach it to random objects unrelated to C. AWB: At the root, symbols - ie. unique names are a powerful tool MM: Always in consensus that symbols where a means of assigning and looking up a property by a unique, unforgeable name. It was never specifically tied to classes. WH: Disagree. That's only one of the privacy proposals. Something got lost in translation in the attempts to merge the two privacy proposals. The class one would let you look up a class-private @foo on any object, but only the class could define an @foo property. DH/AWB: (Discussion of pre-defined fields on class instances.) AR: Similar to my constructor pre-amble... LH: (Summarizing) No clash between the use of symbols at a lower level. YK: Imagine a map literal... AWB: Hypothetical Map that used [] for key, might use symbols for keys in that map, now there is an ambiguity at access time. 
LH: Care about not having private names becoming lexical scopes AWB/WH: (in response to question about why @'s are necessary) In the past we've attempted to work through several proposals that don't and it never works ARB: Concern with meaning of @-syntax being dependent on context: sometimes denotes symbol itself, sometimes value it indexes. Might potentially be ambiguous in some circumstances, e.g. modules MM: State a proposal: - We don't have in ES6: a private or special declaration form. - We allow @identifier, that is a symbol let @foo = Unique(); - After a dot, in property name position, @foo does not refer to a new literal @foo, it refers to the value of the lexically enclosing @foo. We address Andreas's issue with modules separately. DH: b/c tied to variable declaration forms, no way to know upfront what one of these names is. AWB: Clarify... DH: (Re-explains) AWB: (refutes) DH: What I was hoping for was declarative syntax closing in... but realize it's totally generative. ARB/LH: This is a hack DH: The syntax looks static but is not at all. MM: Is the hack a subset of all the non-hack things we want? DH: This shows the same problems as we discussed earlier. AWB: This is what symbols are... MM: The "@" is what makes it clear that this is not static DH: You could say the same thing about brackets. MM: Always knew @ was dynamic. Opaque, unforgeable and generative. ARB: (to DH), Once this is defined in a dynamic context it becomes dynamic. WH: (illustration of perceived hoisting and dynamic rebinding issues) class C { private @name; f() { for(...) { this.@name = 5; } var @name.... } } DH/MM: (discussion of future additions) BE: We already have with nested function declarations name binding and generativity. Why is it ok for function, but not symbol? DH: Punning the syntax to make it look static, but it's not.
BE: agreed, that was the fork we took to a bad path, punning "after-the-dot identifier" DH: People rightly complained early on about static understanding/knowable aspects of syntax. (eg. do I have to look up in scope to know what prop means in { prop: val }) AWB: Most developers will align [] with dynamic property access, vs. obj.@prop aligns with "static" property name access. LH: Clarification that we're not talking about the object literal case, but in fact the non-breakable, historic language syntax of property access with [] BE: (Hypothetical future with object [] reformation) Discussion about implications of hypothetical future with object dereference reformation with ES6 objects. MM: (to LH) Moving back to the conservative position to build up from AWB: Still have symbols LH: Yes YK: This is the max/min problem, writ large. MM: Reminder of workload, wherein ES7 should look like just another phase of development and it's ok to defer to ES7. AWB: Return to where we were before @-names were introduced MM: yes YK: Returning to [iterator]? Yes. LH: Still support Mark's proposal (see above) DH: Only strict mode you get the duplicate error? MM: true DH: Could do semantics of strict mode and allow the collision MM: I don't think this introduces a strict mode runtime tax. Conclusion/Resolution STH will provide a summary. - Symbols, unique and private are runtime concepts - Only additional syntactic support for them in ES6 is the square brackets in literal forms. - Strict object literals throw on collision. (Today, duplicate checks happen at compile time, this will no longer be the case when [prop]: val is used in an objlit) const s1 = new PrivateSymbol(); const s2 = s1; var x = { [s1]: 33, [s2]: 44 }; In this context, within the square brackets: AssignmentExpression (Re: Symbol constructor binding: harmony:modules_standard) Experience With TypeScript (Luke Hoban) Findings...
Classes: Statics - Statics are used frequently - Imperative update is awkward when using an otherwise declarative construct Classes: Privates - Frequent asks for Privacy - TypeScript added compile-time-only privacy - Not quite the same as current private names syntax proposal - w/o further sugar private names syntax proposal will feel awkward in practical class Classes: Automatic base constructor calls - Missing super calls ArrowFunctions - Want thin arrow Classes: Decorators - w/ classes available, teams want to use them - Biggest block is when existing class library supported some extra "magic" associated w/ class/method declarations - No solution yet, not sure what this looks like. MM: (re annotations) Note that "@" is no longer reserved for ES6... DH: Point out that we are future proof here. MM: Let's postpone discussion of the feedback Modules - ES6 modules - compiled to JS which uses AMD/CommonJS ... Modules: Namespaces - Two common patterns for large code structure - On demand loaded modules - Namespace objects to reduce global pollution - External Modules address #1 - TypeScript allows internal module re-declaration to grow the object - Effectively, a declarative form for object extension with build in closure scope and syntax that matches large scale structuring use cases well. LH: (the transition from AMD/CommonJS of today to modules a la ES6 is not going to be an easy transition) LH: When you have circular references, current modules make it appear easy to ignore these issue. 
Modules: "modules.exports =" use case - Not addressed in TypeScript - Critical for interop with existing CommonJS/AMD code - Supportive of "export =" syntax proposal // something.js export = function() { return "something"; }; // other.js import something = module("something"); var s = something(); DH: (whiteboard) export = function() {}; --------------------------------- import "foo" as foo; foo(); Async - Top requested addition for TypeScript is C# "await"-style async - Generators + task.js help, but likely not enough - Wrapping is still very unnatural in any real examples - But light sugar over generators + task.js would serve - Feeds into promises discussion - have to standardize the task objects. (shows example for task.js and identifies "spawn" which returns a promise object) Mixed discussion about the history of async discussion through generators, promises, Q.async Modules Update (Dave Herman) (whiteboard) // Modules looked like... module X { export module Y { } } // Moved to... // "foo" doesn't bind anything into this scope, // just adds to the module registry module "foo" { module "bar" { // no more exporting... } } module "foo/bar" {} "bar" is not exposed as a property of "foo" import "foo/bar" as m; AWB: (clarification of his understanding of the original way that nested modules work) STH: Yes, that was the way, but there was a realization that much of the earlier approach was flawed and these updates lead to revisions. One important use case for modules is to configure module references, so that libraries can import jQuery (for example), and get the appropriate version of jQuery specified by the page. Further, it's desirable to be able to use different code for the same library name in different context. 
Originally, the modules proposal managed this via lexical scope, as follows: module M1 { module jquery = "jquery.js"; module something = "something_that_uses_jquery.js" } module M2 { module jquery = "zepto.js"; module something_else = "something_else_that_uses_jquery.js" } However, this has two major problems: Inheriting scope across references to external files is potentially confusing, and disliked by a number of people Once we decided to share instances of the same module, the "parent scope" of a module is no longer well-defined Therefore, we abandoned the idea of inheriting scope across external references. However, this had two consequences that we did not immediately appreciate. First, we no longer had a method for managing this configuration between module names and source code. Second, scoped module names no longer had nearly as much use as originally. Thus, Dave and I revisited the design, abandoning the use of lexical scope for managing module names, and introducing module names that could be configured on a per-Loader basis. DH: The registry with the string names is now where the sharing mechanism occurs. Loader... System.baseURL = "..."; System.resolve = function(...) { ... }; Mixed discussion.. - "global namespace" as in "per realm" ARB: This seems to create a parallel global object for modules. Giving up lexical scoping for one global namespace. DH: There is no way to get rid of the global object STH: Yes we tried. DH: (explanation of registry table) - Per loader MM: Separate name registries? DH: Either are fine ...Provide a minimal set of APIs to allow devs to build there own. ...Sane default behavior ...Default resolution: - baseURL + "Crypto/sha1" + ".js" when no config has been done, this is the base default behavior. 
// foo.js export = 42; // bar.js export function bar() { } // foobar.js module "foo" { export = 42; } module "bar" { export function bar() { } } <script> System.baseURL = "assets/"; </script> <script async> import "foo" as foo; import "bar" as bar; </script> WH: What happens if... import "foobar" as fb; (given the above "files") STH: Answer: fb is an empty object. You also get modules named "foobar/foo" and "foobar/bar" defined. [WH's question was related to a claim in the discussion that there is no need to have module .js files be distinguishable at the textual level from top-level script .js files] Mixed discussion w/r loading protocols... and resource loading (files from server, etc) seems to be out of scope? DH: How is there anything special about JavaScript as the one asset to know about in browsers? <link rel="prefetch"...> BE: Before imports, prefetch dependencies... but an out of line module import is not a hint, it's a requirement. DH: Help the browser know in advance about its... AR: a "prefetch" attribute for scripts? Requests script but doesn't execute. RW: Until import? AR: Yes EF: Don't want to prefetch lazily loaded code later in the program. Don't want to load packages with same dependencies twice. Bundle A, Bundle B Each share common dependencies. Leads to unbounded number of combinations of pre-build bundles. A loader should have a way which it can be told, upfront, about the dependency graph. Allowing the system to know all dependencies in advance, so that it doesn't have to compute transitive dependencies for all, every time—to make smart choices about IO.
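The registry-based loader discussed above — a per-loader table keyed by string module names, with a configurable baseURL and resolve step, where the registry itself is the sharing mechanism — can be sketched in a few lines of Python. The class and method names mirror the discussion but are otherwise invented; this is not the actual Loader API:

```python
class Loader:
    """Sketch of a module loader with a per-loader name registry."""

    def __init__(self, base_url, fetch):
        self.base_url = base_url
        self.fetch = fetch        # callable: resolved URL -> module instance
        self.registry = {}        # string module name -> module instance

    def resolve(self, name):
        # Default resolution, as in the notes: baseURL + name + ".js"
        return self.base_url + name + ".js"

    def load(self, name):
        # The registry is the sharing mechanism: every import of the
        # same string name yields the same module instance.
        if name not in self.registry:
            self.registry[name] = self.fetch(self.resolve(name))
        return self.registry[name]
```

With a dependency graph declared up front (as in the bundling discussion above), such a loader could batch its fetches instead of discovering transitive dependencies one request at a time.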
https://esdiscuss.org/notes/2012-11-28
Hi Jonathan,

Earlier I was using `from resource_management import *`. Now I have also added this import statement:

from resource_management.libraries.functions.format import format

and also given 777 permissions all the way down to the intended pid file. Still it's not working. Can you please tell me where I can see the print statements that I put in the status function, so that I can debug it?

On Mon, Apr 18, 2016, 18:35 Jonathan Hurley <jhurley@hortonworks.com> wrote:

> What are your import statements? The "format" function provided by
> Ambari's common library has a naming conflict with a default python
> function named "format". If you don't import the right one, your
> format("...") command will fail silently. Make sure you are importing:
>
> from resource_management.libraries.functions.format import format
>
> On Apr 18, 2016, at 4:27 AM, Souvik Sarkhel <souvik.sarkhel@gmail.com>
> wrote:
>
> Hi All,
>
> I have created a custom service for Zookeeper and am using Ambari 2.1.0.
> In the status function of master.py, if it is defined this way:
>
>     def status(self, env):
>         config = Script.get_config()
>         zkDataDir = config['configurations']['zoo']['dataDir']
>         print 'Status of the Zookeeper Master'
>         print '****************************************'
>         print zkDataDir
>         dummy_master_pid_file = format("{zkDataDir}/zookeeper_server.pid")
>         check_process_status(dummy_master_pid_file)
>
> Ambari always shows the status of the application as stopped, but when I
> provide a constant path for the pid file, for example:
>
>     dummy_master_pid_file = "/usr/share/zookeeper/tmp/zookeeper_server.pid"
>
> it starts working perfectly and Ambari is able to correctly show the
> status of the application. I need a variable pid file instead of a
> constant one. I would be thankful if someone could suggest a way out.
>
> Thanking you in advance
>
> --
> Souvik Sarkhel
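For the debugging question in the thread: the core of the pid-file check can also be reproduced outside Ambari to test whether a given path would pass. The sketch below is a plain-Python stand-in (it mirrors the behavior of raising when the pid file is missing or the process is dead; it is not the actual `resource_management` implementation, and the exception name here is borrowed for illustration):

```python
# Minimal stand-in for a pid-file status check, for testing a path
# outside Ambari. Mirrors "raise if pid file missing or process dead";
# NOT the resource_management code itself.
import os
import tempfile

class ComponentIsNotRunning(Exception):
    pass

def check_process_status(pid_file):
    if not os.path.isfile(pid_file):
        raise ComponentIsNotRunning("pid file %s not found" % pid_file)
    with open(pid_file) as f:
        try:
            pid = int(f.read().strip())
        except ValueError:
            raise ComponentIsNotRunning("pid file %s has no pid" % pid_file)
    try:
        os.kill(pid, 0)  # signal 0: existence check only, sends nothing
    except OSError:
        raise ComponentIsNotRunning("process %d is not running" % pid)

# Quick local test: our own pid should count as "running".
path = os.path.join(tempfile.mkdtemp(), "zookeeper_server.pid")
with open(path, "w") as f:
    f.write(str(os.getpid()))
check_process_status(path)  # no exception raised
print("status OK for", path)
```

Running this against the exact string produced by `format("{zkDataDir}/zookeeper_server.pid")` (e.g. printed from the status function) makes it easy to see whether the computed path and the path the daemon actually writes differ.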
http://mail-archives.apache.org/mod_mbox/ambari-user/201604.mbox/%3CCANd6RR6Fq-7Ux34M-EC_R-h9k2DWN2y7jZw8-sV3mvY9stqK4w@mail.gmail.com%3E
How to receive data from UDP socket (over WiFi)

Hi everyone,

I am trying to send a UDP packet using a GPy. For testing I am using the QOTD service (djxmmx.net). On my desktop Python the code is functional; on the GPy no data is received.

Thank you in advance for your assistance.

Regards,
Vojtech

Code snippet from the GPy:

```python
>>> uos.uname()
(sysname='GPy', nodename='GPy', release='1.20.2.r3', version='v1.11-d945d33ee on 2020-03-08', machine='GPy with ESP32', pybytes='1.3.1')
>>> wifi1.wlan.isconnected()
True
>>>)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TimeoutError: timed out
>>>
```

When I do the same from my desktop, the code is successful:

```python
>>>)
>>> print(data.decode())
"The secret of being miserable is to have leisure to bother about whether you are happy or not. The cure for it is occupation." George Bernard Shaw (1856-1950)
```

Hi @robert-hh, I really appreciate the support you've given me. Thank you. Also functional on my side.

Regards,
Vojtech

@nadvorvo Update: on all devices this modified code works:

```python
import socket

UDP_IP = "68.228.188.226"
UDP_PORT = 17
MESSAGE = b' '

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5)
print(s.sendto(MESSAGE, (UDP_IP, UDP_PORT)))
data, address = s.recvfrom(1024)
print(data.decode())
```

@nadvorvo So I could replicate your problem. Interestingly, it also exists in different flavors on other MicroPython (non-Pycom) variants and devices. With Pycom MicroPython the receive fails. On the ESP32 and Winnermicro W600 ports the send returns a timeout error, but the following receive succeeds. On ESP8266 and Pyboard UDP is not supported. Big mess. BUT: if you do not send an empty message, but for instance a single space, it works. So there is a problem dealing with an empty packet.

Hi @robert-hh, Thank you for your reply. I am connected to the same WiFi network from both devices (desktop notebook, GPy). Router settings should not be the problem.
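The fix found in this thread (send at least one byte, e.g. a single space, instead of an empty datagram) can be verified without the external QOTD service by pairing two UDP sockets on localhost. The sketch below is ordinary CPython using only the standard library; the same `socket` calls are what the forum code relies on, though behavior on MicroPython ports may differ as described above.

```python
# Local UDP round trip: a throwaway "server" socket echoes a quote back,
# so the send/recv pattern from the thread can be tested without djxmmx.net.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b" ", addr)       # a single space, not b'' -- empty
                                # datagrams were the failing case
payload, peer = server.recvfrom(1024)
server.sendto(b"quote of the day", peer)

data, _ = client.recvfrom(1024)
print(data.decode())            # quote of the day
```

Swapping `b" "` for `b""` here is a quick way to compare how a given platform handles zero-length datagrams.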
https://forum.pycom.io/topic/6781/how-to-receive-data-from-udp-socket-over-wifi
Provided by: xless_1.7-14.1_amd64

NAME
       xless - File browsing program for the X Window System.

SYNOPSIS
       xless [-f] [-toolkitoption ...] [filename ...]

DESCRIPTION
       Xless pops up a window on the specified display, containing the file
       specified on the command line or piped in from stdin. This file may
       easily be viewed using the scrollbar to the left of the window. Xless
       also takes input from the standard input. Extra functions are
       available in the toolbox to the right of the window:

       -  Pop up a help window.
       -  Search for a specified pattern
       -  Search for the next occurrence of the above pattern
       -  Open a session of the editor (specified in the environment
          variable EDITOR) on the current file
       -  Reload the current file
       -  Change the file in the current window
       -  Open a new xless window to display the specified file
       -  Print the current file
       -  Close the current window

       For further information on using xless please read the online help
       information. The rest of this manual page will discuss customization
       of xless to suit the needs of a particular user.

OPTIONS
       Xless is built upon the X Toolkit (Xt) and as such understands all
       the normal command line options (as described in X(1)). It also
       supports:

       -follow
              Continually check the file for new input (so that xless
              behaves like tail -f).

       -f     Alias for -follow.

       -help  Print a list of valid options.

       -version
              Print the version number of this xless executable.

WIDGET AND RESOURCE NAMES
       In addition to the usual widget resources, Xless has the following
       application resources:

       standardFont
              The default font to be used if any of the specified fonts
              are unavailable.

       textFont
              The fonts to use for the text.

       labelFont
              The fonts to use for labels in dialog boxes.

       buttonFont
              The fonts to use for labels on buttons.

       standardCur
              The cursors to use in the main button window with the Quit
              and Help commands.

       dialogCur
              The cursors to use in the toolbox and dialog box windows.

       helpFile
              Name of a file to use instead of the system default helpfile.
       editor
              Name of the editor to invoke (if neither the VISUAL nor
              EDITOR environment variable is set).

       editorDoesWindows
              Set to TRUE if your editor brings up its own window (xedit
              or GNU emacs, for example).

       printCommand
              Command string used to print the current file. The name of
              the file is simply appended to this string. (enscript -G is
              nice, if you've got it.)

       maxWindows
              Maximum number of windows which xless will display at one
              time. Set this to zero if you don't want a limit. (This is a
              good thing to set if you tend to run xless * in directories
              with lots of files.)

       quitButton
              Set to TRUE if you want a Quit button on every window which,
              when clicked, will quit every window started from this copy
              of xless. The default is FALSE.

       sizeToFit
              Set to TRUE if you want text windows to be only as big as
              they need to be, up to the maximum size specified by
              'geometry'.

       removePath
              Set to TRUE if you want the directory portion of the file
              path removed. For example, a path like
              /usr/src/X11/xless/main.c would be shortened to main.c. The
              default is TRUE.

       defaultSearchType
              Default method used to search the text (invoked from the
              Search button). Possible values are ExactMatch (which is the
              default), CaseInsensitive and RegularExpression.

       monitorFile
              Set to TRUE if you want the file to be continually checked
              for new input (so that xless behaves like tail -f). The
              default is FALSE.

COLOR RESOURCES
       If you have a color display and you're running at least X11R5, you
       may want to add a line like:

              #ifdef COLOR
              *customization: -color
              #endif

       to your personal resources file. This will allow you to get the
       color-related resources for not only xless, but for every program
       which sets up its own color resources. Versions of X earlier than
       X11R5 don't support the customization resource. If you're on one of
       those, you'll have to include the color resources in your personal
       resources file.

SEE ALSO
       X(1), X(8C), more(1), less(1)

BUGS
       There probably are some.
AUTHOR
       Dave Glowacki (UC Berkeley Software Warehouse) <dglo@CS.Berkeley.EDU>
       Originally by Carlo Lisa (MIT Project Athena), from xmore, written
       by Chris Peterson (MIT Project Athena).
http://manpages.ubuntu.com/manpages/precise/man1/xless.1x.html
I've got a new Arduino board (Arduino Nano) and I want to hack a little bit. Today I want to play with an IR receiver. My idea is to use my TV's remote to switch on/off one bedside lamp, using one relay. It's a simple Arduino program. First we need to include the IRremote library.

```cpp
#include <IRremote.h>

#define IR 11
#define RELAY 9

IRrecv irrecv(IR);
IRsend irsender;
decode_results results;
unsigned long code;

void setup() {
  pinMode(RELAY, OUTPUT);
  digitalWrite(RELAY, LOW);
  irrecv.blink13(true);
  irrecv.enableIRIn();
}

void loop() {
  if (irrecv.decode(&results)) {
    unsigned long current = results.value;
    if (current != code) {
      code = current;
      switch (code) {
        case 3772833823:
          digitalWrite(RELAY, HIGH);
          break;
        case 3772829743:
          digitalWrite(RELAY, LOW);
          break;
      }
    }
    irrecv.resume();
    delay(100);
  }
}
```

Normally IR receivers have three pins: Vcc (5V), Gnd and signal. We only need to connect the IR receiver to our Arduino and see which hex codes our TV's remote uses. Then we only need to fire our relay depending on the code.

The circuit:

The hardware:

- 1 Arduino Nano
- 1 IR receiver
- 1 relay
- 1 red LED
- a couple of pull-down resistors

Source code is available in my github.
https://gonzalo123.com/tag/ir/
Ideas on Integrating Memcached into MySQL Queries

By Duleepa Wijayawardhana on Dec 22, 2008

- Using the PECL PHP Memcached libraries you can write direct queries to Memcached with failover to your SQL queries.
- Using the memcached UDFs so that you can write SQL queries into MySQL / using the memcached storage engine
- Using MySQL Proxy to interface with Memcached
- Using something from a framework, such as Zend_Cache from the Zend Framework (which allows you to use more than one caching system, btw)

So, yes, there are many ways to integrate Memcached into your PHP application, so might I suggest one more way. The problem with the first option above is that your code tends to get littered with memcached calls; with the second option you end up having to modify your server, and in many environments, such as hosted environments, that is not very clean. With the fourth option you need to adopt a framework, and you may not want that overhead. The third option of using MySQL Proxy is one of my favourites but, let's face it, MySQL Proxy is not GA yet, the available version has stability issues, and the memcached scripts I've seen/heard about seem to use memcached as a full-on query cache (please do correct me if I am wrong). My belief is that memcached is a caching solution and it should be used by the developer wherever possible to make the application faster by placing/caching only the data that the developer needs. I also personally want my application to run when memcached is turned off, and I want the application to be easy to read. In other words, I want a modification to the SQL query that will work with both memcached and MySQL, but gives me control over what I want to save to memcached and what I need to expire/replace, etc. My solution, which I tested over the last couple of weeks, will only work if you already have the ability to modify/extend your database handler.
At MySQL.com, for example, we use Zend Framework, but for various reasons, including performance, we actually have our own custom database handler object. Most of my personal sites also do the same; I do not intend to move away from MySQL ;) I do have to admit my first crack at the syntax was quite clunky, but in chatting with Adam Donnison (who will, by the way, be giving a beginners talk on Memcached at the MySQL Users Conference) we came up with the following:

    SELECT /*INTO MEMCACHED namespace=table key=id*/ x, y, z FROM table WHERE id=1;

In this case the data will be stored into memcached with a key of table_1, storing an array of x, y and z. My database handler can easily parse the query, select from the MySQL database if the result is not in memcached, and on the way out save it into memcached for the next query. To round out the queries, I also added support for things like

    INSERT /*REPLACE MEMCACHED namespace=table key=id*/ ....

and

    DELETE /*EXPIRE MEMCACHED namespace=table key=id*/ ....

I wanted to see how a real-world use of this would work, so I rewrote my session handler for Zend Framework to take this into account, and sure enough it works and it works well. Now my code is a lot neater, it will always work with MySQL, and I can move my memcached code around as I need it. By rights, my perfect scenario is to now complete a MySQL Proxy script that understands the above; then I could even remove the database handler code that does all this. To be honest though, the performance of this is quite good in my limited tests.
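A database handler can pull this MEMCACHED hint out of a query with a single regular expression before deciding whether to consult the cache. The sketch below is one hypothetical way to parse the syntax from the post; the function name and return shape are made up here, not taken from the author's handler.

```python
# Parse the /*INTO|REPLACE|EXPIRE MEMCACHED namespace=... key=...*/ hint
# described in the post. Hypothetical helper -- not the author's code.
import re

HINT = re.compile(
    r"/\*\s*(INTO|REPLACE|EXPIRE)\s+MEMCACHED"
    r"\s+namespace=(\w+)\s+key=(\w+)\s*\*/",
    re.IGNORECASE,
)

def parse_memcached_hint(sql):
    m = HINT.search(sql)
    if not m:
        return None  # plain query: go straight to MySQL
    action, namespace, key = m.groups()
    return {"action": action.upper(), "namespace": namespace, "key": key}

hint = parse_memcached_hint(
    "SELECT /*INTO MEMCACHED namespace=table key=id*/ x, y, z "
    "FROM table WHERE id=1"
)
print(hint)  # {'action': 'INTO', 'namespace': 'table', 'key': 'id'}
```

The cache key itself would then be built from the namespace and the key column's value, e.g. table_1 as in the post. Because the hint lives inside a SQL comment, the same query string still runs unchanged against MySQL when memcached is switched off.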
Begin Shameless Plug: You're forgetting the most low-level way: hooking memcached directly into the InnoDB source code, like what we are doing with Waffle Grid () End Shameless Plug:

Posted by Matt Yonkovit on December 22, 2008 at 03:32 AM EST #

Another way to combine memcache and mysql: the adodb database abstraction layer. It has memcache (or filesystem) caching support built in :) We've used it for over a year to great success.

Posted by Barry Hunter on December 22, 2008 at 04:21 AM EST #

But adodb can't check whether the data has been modified or not. May I ask Barry how you do the real-time cache? When data are modified, the memcached result should expire and be re-fetched.

Posted by johnpupu on December 23, 2008 at 12:46 PM EST #

Hi, Very interesting! But something is still unclear to me: who parses the query "SELECT /*INTO MEMCACHED namespace=table key=id*/ x, y, z FROM table WHERE id=1;"? Is this your Zend PHP connector? Does it wait for results to return, then put them in memcached before giving them to you? If so, do you know if there are implementations for other languages? Or will the mysql drivers support this syntax?

Regards

Posted by Shlomi Noach on December 23, 2008 at 05:01 PM EST #

"The same concern might point you in the direction of memcached, and if it does, Dups, the Arctic Dolphin, has some ideas on integrating memcached into MySQL queries."

Posted by Log Buffer on January 02, 2009 at 07:27 AM EST #

Hi, This is great news for me, and I am excited. Will check deep into php mysql memcached integration and get back soon.

Posted by Php Trivandrum on January 19, 2009 at 03:20 PM EST #

Shlomi Noach: This is my own DB connector; this is not something going into MySQL the product :) This is just a suggestion for a possible query structure without having to change SQL very much.

Posted by Duleepa Wijayawardhana on January 19, 2009 at 03:49 PM EST #

Can you share your code details? "INSERT /*REPLACE MEMCACHED namespace=table key=id*/ ...."

I consider it should be "INSERT /*INTO MEMCACHED namespace=table key=id*/ ...."

Posted by Steven on February 23, 2009 at 03:12 PM EST #
https://blogs.oracle.com/dups/entry/ideas_on_integrating_memcached_into
Template talk:Languages

Contents
- 1 Only for English pages
- 2 Redesign of the Language Linker
- 3 Mediawiki Magic_words
- 4 Change layout
- 5 Template talk:LanguageNew
- 6 simplify the language template
- 7 Bot
- 8 Parameters
- 9 Language addition
- 10 Template:Languages embedded in Template:ValueDescription
- 11 Category "Pages with language links"
- 12 Language Addition: Amharic
- 13 Languages and politics
- 14 Haitian Creole
- 15 Broken for no further languages
- 16 This template doesn't work in category namespace?
- 17 Many missing languages
- 18 Template doesn't work at El:FAQ
- 19 Note: "This Template is used on a lot of pages." should be in source code.
- 20 Proposed rewrite
- 21 New language template
- 22 New template going live
- 23 Italian translation

== Only for English pages ==

You can use it only to point from the English page to others. For other languages we can point to the English version, as a main page. --Rodrigo 19:14, 21 November 2006 (UTC)

- OK, I've just implemented that for OpenStreetMap License. So I've translated the words 'English and other languages' into different languages (using Babelfish) and used this as the link text to point back to the English version. That seems to be satisfactory; at least you can navigate between the pages properly and consistently. A better solution might be possible, the one suggested below perhaps. -- Harry Wood 16:58, 3 May 2007 (BST)

== Redesign of the Language Linker ==

I wrote another version of the language linker, which allows translated pages to have proper names instead of <langcode>:<english_title>. Additionally, it can be included in non-English pages. The new linker should be downward compatible, so it could replace this template without problems. You can find the template (including documentation) here: Template:Language Links experimental --Fnord 15:38, 8 February 2007 (UTC)

- So this is better than Template:Language Links, because it's a bit rubbish having to work with English page names.
And on (for example) a German page you would ideally have links to the Italian page etc., rather than only linking back to the English page. In other words, it makes it a bit less English-centric.

- It has the disadvantage that it is harder to use. It's a more unwieldy, messy block of wiki text appearing at the top of the page, which makes any editing of the page a more confusing experience for wiki newbies.

- I think one thing to consider is that if translation and other wiki activity in other languages were to ramp up, and the project became much less English-centric, then English users (and indeed all users) would start to find all the other language activity on 'Recent Changes' an irritation, and there would be calls for a proper multilingual implementation, Wikipedia-style, with an entirely separate wiki database for each language (technically quite tricky to achieve). That's the extreme multilingual end of the scale. At this stage it's obviously a predominantly English-speaking project, and the more simplistic template reflects that. The Template:Language Links experimental is a kind of half-way solution, but not without its disadvantages, as I say.

- Just some thoughts. I think if we make sure the Template:Languages is used properly (it should only be placed on the base English pages) then it would be clearer to see why the more complex template might be better -- Harry Wood 14:14, 21 May 2007 (BST)

== Mediawiki Magic_words ==

I tried to use {{PAGENAME}} (see Mediawiki:Help:Magic_words) but it seems that some language namespaces like DA and JA are not registered correctly, and {{PAGENAME}} returns "Da:Sitename" instead of "Sitename" --Phobie 11:24, 2 December 2008 (UTC)

== Change layout ==

I would change this template to look like Template:Languages. Benefits are: same look, not so prominent, much easier to read or change. -- ck3d 17:43, 20 December 2008 (UTC)

== Template talk:LanguageNew ==

This is a template that lists all languages available.
It is based on parser functions from MediaWiki. The problem is that it is using a naming scheme of PAGE_NAME/LANG, e.g. Mainpage/sv instead of sv:Mainpage or sv:Huvudsida. Someone needs to understand how this works before it's used. Erik Johansson 12:21, 19 August 2008 (UTC)

- ACK. I'm not able to use it :-( Stammfunktion 09:33, 10 October 2008 (UTC)

- I've created a new template that works OK, although there is room for improvement. It's nearly ready for prime time. Template:LanguageTest. Preview it here: Test. For English pages, it gives precedence to [[En:Page]] over [[Page]] for the moment. -- Firefishy 10:47, 10 October 2008 (UTC)

=== Thoughts ===

I think that this system is not very intuitive. It demands that all the pages ("files") have the original English name as title, with the prefix for the language code. The problem is that many people in other languages do not know the appropriate English name for the concept that they are looking for. Also, it looks awkward to generate a namespace just for the language code. That does not look nice (De:Map Features). Also, it makes it very difficult to find a page with the built-in search box (if you do not know the English name). I prefer appropriate page titles for each page, which are then combined as links in the multilanguage template ("the old system"). The disadvantage is that you have to generate a template for each topic page, which has to be added to if there is a new language version. The benefit is that the page title is more intuitive. Longbow4u 11:22, 30 August 2008 (UTC)

- Better to use redirect pages if a localized name is needed, i.e. de:Karteneigenschaften --> de:Map_Features. If you do not know any English it will be very hard to use OSM anyway! --Phobie 18:52, 18 December 2008 (UTC)

=== ToDo ===

Test, Newarticletext, Bot, NS, replace/delete "lang-".
== simplify the language template ==

- remove text "Other languages"
- remove text "Missing languages:"
- always show all languages (you can see which languages are missing because the links are red)

--Phobie 03:56, 5 January 2009 (UTC)

- Well, I quite like the way it's hiding the red links, but it does need simplifying or rearranging to use less vertical space. 'Other languages' could appear to the left of the language list, and we could have a 'show missing languages' link on the right. That way it would all fit on one line.
- I might have a play around with this at some point, but need to be careful not to break it. -- Harry Wood 10:11, 7 April 2009 (UTC)

- I don't like the gray "missing-languages" bar. I would prefer:
  - Title bar: "Available languages", plus smaller: "Help" and "Missing languages"
  - Box: with all the available languages. --Markus 19:07, 4 May 2009 (UTC)
- My suggestion does not work, because the pull-down only works with content enclosed directly under the title bar.
- Maybe we delete the gray "missing languages" bar, put the link "show missing languages" in the first gray "languages" bar, and somebody writes a tool which opens the missing languages in another window? --Markus 15:39, 7 June 2009 (UTC)

I do not care about the design. My proposal is about speed. You should have a look at wikipedia:Wikipedia:Template_limits#Expensive_parser_function_calls and Category:Pages_with_too_many_expensive_parser_function_calls! The "#switch" and "#ifexist" functions cost much time. The solution is to always show all languages or to use a bot which runs once a day! --Phobie 15:29, 19 June 2009 (UTC)

- I agree with what Phobie said first. The number of "#switch", "#if" and "#ifexist" calls has grown to over 100, which is the maximum limit for parser functions. So part of the "Missing languages:" box doesn't work correctly now. I tried to reduce the number of conditions, but it seems to have reached a limit. By simply deleting "Missing", the count becomes half.
I don't agree with using a bot. Bots are going to make too many Template:language-xxx pages, like before this template was used, right? I didn't like that approach. --Nazotoko 20:05, 19 June 2009 (UTC)

- No, a bot does not need many templates. It could extend Template:Languages. After a first full-page run it would only need to run on page creation. Just removing "Missing languages:" is not a good solution! While we would get below the 100 limit, the template would still be time-consuming. --Phobie 23:45, 19 June 2009 (UTC)

- It looks like a worse idea than many templates. Do you really think that putting such a long script like Template:Language-mp on top of each page is useful? Although they can save processing time, they consume much data storage and editors' concentration. I think checking page existence is not hard for MediaWiki, because it always checks existence for each link. The count limit is set for a security reason, to prevent endless loops. So Wikipedia just increased the limit to 500 for heavy templates. Even if the checking cost is small, we face the fact that the Missing Box doesn't work now. Increasing the limit to 500 by an administrator is the easiest solution, I think. --Nazotoko 03:19, 20 June 2009 (UTC)

- The Languages template at mediawiki.org manages without #ifexist. --Wynndale 15:52, 20 June 2009 (UTC)

The missing languages box works again, because the count of #ifexist calls was reduced to 83. I have made another template which has the same format but half the number of #ifexist calls. The count becomes 42, so we get capacity for another 58 languages. The processing time was reduced from 1.2 seconds to 0.8 seconds. Admittedly, 0.8 seconds is long for the server. For example, a heavy page like Map Features takes just 0.4 seconds to generate the whole page. However, it is not so long as to damage the server's CPU, and it is a short time for people. --Nazotoko 07:22, 21 June 2009 (UTC)

I applied the new template to this.
Because the links in the template was changed, mediawiki server updates its connection database at the first template call. It takes very long time but from 2nd call, the processing time becomes less than 1 second. --Nazotoko 05:48, 22 June 2009 (UTC) EdLoach, revert the template. Your template is not simple. your template use 89 #ifexist, 2 #switch and it is hard to understand where we should write a script for a new language. And it consuming 1.2 seconds for each page. You can get those information from the html source. This templates is one of the most linked templates. Take care for the processing time. I think what you want to do is just adding translations of the messages but I deny it for the reason on "Talk:Wiki Translation#Translating messages in Template:Languages?". --Nazotoko 23:16, 28 June 2009 (UTC) - OK - I hadn't spotted this talk section before, and had tried emailing you about your changes before I made any but you don't seem to have an email address linked to your wiki account. I am fairly new at wiki editing, so reading the above about the number of parser functions now makes a bit more sense about your previous changes. The problem as I saw it with the version I overwrote was the difficulty in adding new languages - when you added Esperanto the other day to the English interface I then added it to the Fi and De interfaces; if it gets to the stage where we have an interface per language then adding support for a new one becomes a fairly time-consuming task. I was trying to make this so that the template would only use one list of languages, so adding say {{{eoe|}}} and {{{eom|}}} could be done in one place. I realise now that the way I've done it has doubled the number of ifexist calls, so will revert it as you suggest. But if there is some way that the interface can be simplified so adding a language can be done without the need for modifying every interface template that would be an improvement. 
--EdLoach 07:43, 29 June 2009 (UTC) - I didn't notice my email access of the wiki was closed. I opened it. By the way, discussions about common things should be opened for all. You should use my talk page instead of emails. Template of Mediawiki is a dull program language. The grammar don't allow templates and parser functions to make new {{anotherTemplateCalling}} nor {{{parameterSubstitution}}} because it is technically too complicated and danger to cause a stack memory flow. Although it looks stupid, we should write {{{are|}}} {{{bge|}}} ... {{{zh-twe|}}} in each Interfaces. To reduce the number of #ifexist, I use a little advanced technique, variable parameters. Template:LanguageExisting selects the parameter name from 'xxe' or 'xxm' by the existence, so the interface templates are assumed to have a pair of 'xxe' and 'xxm'. You cannot separate the pair from interfaces. The Template:Languages/Interface is required to work this technique. The interface selector (2nd parameter) is just a extra function. I assume it to be used for more big redesigning than message translations, for example showing them in table, side-bar or itemizing. However, the default interface must be simple because of the problem about processing time. --Nazotoko 21:35, 29 June 2009 (UTC) There was bunch of changes made by User:Verdy p yesterday which seem to relate to performance of this template. Not sure what's going on here, but I notice that all pages are now reporting the 'too many expensive parser functions' message. -- Harry Wood 15:23, 4 January 2012 (UTC) Bot It would be fine to run a bot, implementing the template in all pages... --Markus 19:07, 4 May 2009 (UTC) Parameters Who understand the usage of the two parameters? and can explain it in the Doku? Thanks, --Markus 15:39, 7 June 2009 (UTC) Language addition Can anyone add "Simplified Chinese", "Traditional Chinese" and "zh-hk" parameters for "简体中文", "繁體中文" and "中文(香港)"? 
I noticed that some of the articles in this wiki are in the above namespaces, and is inconvenient to change the names. AdeKaka' 09:56, 13 June 2009 (UTC) - I think "Simplified Chinese" and "Traditional Chinese" are too long for namespaces. I counted the pages which have the namespace "Simplified Chinese" or "Traditional Chinese". They are just 9 and 12. So I recommend to move them to "zh-hans" and "zh-hant". --Nazotoko 20:05, 19 June 2009 (UTC) Template:Languages embedded in Template:ValueDescription While I think this is a good idea, I noticed that because it is embedded as {{Languages|1=Tag:{{{key}}}={{{value}}}}} the Proposed Features pages which also sometimes use the template don't (and can't) use this for translations of proposals. I was wondering whether perhaps the best solution to this was to have a Template:ValueDescriptionPF which had the template embedded as {{Languages|{{PAGENAME}}}} or similar? Of course we then may also need translations of that template as there are for ValueDescription. --EdLoach 10:26, 30 June 2009 (UTC) - I am not sure what did you want to discus. Do you want to make translations of the proposals? If you want to, discussing on Template:Proposed feature is better. But I think nobody wants to make them. They are temporary articles like news and discussions. Most of people think those should be in English. Did you read DE:Proposed_features ( translate) ? It says "The proposals are regularly written in English in order to make everybody to understand.." in "Wie also vorgehen?". If you want to use this template for pages named "Prefix:lang/suffix", you need to extend templates from Template:LanguageExisting and Template:LanguageLink. But you should not apply them to this Template:Languages, because it must increase the processing time. Make it as another template. Statistics of the wiki tells me that 1 view request is in 3.6 seconds. 
It means if the average processing time for each pages exceeded 3.6 seconds, the sever could not answer for all requests, and it would be down everyday. Do you understand how serious we are for the processing time? --Nazotoko 01:38, 2 July 2009 (UTC) Category "Pages with language links" Why does this category exist? Imo, no reader of the wiki will look for pages with language links, so the category unnecessarily clutters category lists. Especially so as the What links here tool can tell you where this template is being used as well, without any categories. --Tordanik 15:27, 16 July 2009 (UTC) - Yes, remove the cat. and add your link to the proposal description! --Phobie 03:52, 18 July 2009 (UTC) - I don't think this category is needed either, but note that there are other templates using this category, for example Template:Languages. --Nazotoko 16:51, 18 July 2009 (UTC) This category is useful for translators. Keep it! --Lulu-Ann 09:41, 31 July 2009 (UTC) - Please state a concrete use case for something that is possible with this category and not with a "What links here"-check. --Tordanik 12:57, 31 July 2009 (UTC) - A translator is a person able and willing to translate, in this case wiki pages. Looking for new tasks, he/she will work on the pages listed in this category. I would even propose to have a category for each language missing and already translated, so translators can have a look at their languages only while non English understanding users can find pages in their languages. --Lulu-Ann 10:23, 3 August 2009 (UTC) - Your use-case fails the "and not with a 'What links here'-check" part. It's a non-issue anyway, because we have so many pages without translations that looking for new tasks requires nothing but Special:Random or simply using the wiki for a while (which has the advantage of letting you find out what pages are actually worth translating). 
- And I have yet to see users without a knowledge of English who come along saying "hey, I want to find a page in my language, I don't care about the content", rather than "I want to find a page in my language about X". The latter is perfectly possible using the wiki search (you can select namespaces there) and does not require categories at all. --Tordanik 13:45, 3 August 2009 (UTC)

Language Addition: Amharic

Hi! What is the process for adding a language? I already added "am" to the template - when will it appear in the list of (missing) languages for a page? Can anybody check/fix this? Thanks! --Alexm 16:17, 10 October 2009 (UTC)
- I am sorry for answering late. I have added your language, Amharic. You also needed to edit Template:Languages/Interface. --Nazotoko 19:26, 11 November 2009 (UTC)

Languages and politics

User:Thehada recently translated the Potlatch primer into Serbo-Croat but unfortunately saw fit to put it into the "BS" namespace on its own. This kind of action may, for instance, discourage other Serbo-Croat speakers from keeping it up to date, which would leave out-of-date information where new users will see it. I have therefore removed the "BS" namespace from the template and asked for the page to be put in the same namespace as the other Roman Serbo-Croat pages. Andrew 20:09, 14 February 2010 (UTC)
- As I've commented on User_talk:Wynndale, please stop this madness. Croatian and Bosnian and Serbian are three different languages. The so-called "Serbo-Croat" Yugoslav standard for this group of languages ceased to exist 20 years ago with the intra-Yugoslavian wars (see wikipedia:Serbo-Croatian for example). Note that the current standards ISO 3166-1 and ISO 639-1 define only "HR" for Croatian, "SR" for Serbian, and "BS" for Bosnian. There is no such thing defined as "SH" for Serbo-Croatian.
- Additionally, starting such enormous actions (such as renaming namespaces!)
without even trying to contact the interested parties (for example the lists for the Hr: namespace) is a very bad thing to do, even if you are sure it is such a bright idea (which it is not). Yes, an average Croat might understand over 70% of the Serbian language out of the box (but only if rewritten in Latin script instead of Cyrillic, as Serbian is usually written), but so might an average Swede understand Norwegian, and nobody is pushing a "Swedish-Norwegian" language. Add to that the political tensions (ex-Yugoslavian countries were in open war not that long ago, killing each other for years over exactly such issues!), and you'll see it is hardly a smart move, especially if you pull it off without consulting the interested parties. I've undone most of the damage done today to the Hr: namespace, but please revert the other stuff you've damaged without consulting the interested parties. Thanks. --mnalis 18:21, 28 February 2010 (UTC)
- Croatian, Bosnian and Serbian are very similar dialects (like Bavarian and Low Saxon). So for linguistic reasons it is a good idea to join them into one namespace here (like we have only en and not en-gb and en-us).
- ISO 639 defines Serbo-Croatian as a macrolanguage with the code hbs; the old code sh has been deprecated, but it is still a usable code because of sh.wikipedia.org!
- The average Croat might understand over 99% of the Serbian language out of the box.
- Swedish and Norwegian are much more different than the Serbo-Croatian languages! See wikipedia:North Germanic languages! Serbian to Croatian is more like Bokmål to Nynorsk.
- The central government made war against the splitting regions (and lost). Differences in the dialects were not involved in the conflicts! Later they were used as a delimitation for political reasons.
- Perhaps you should read wikipedia:Shtokavian dialect.
- --phobie m d 13:32, 23 July 2012 (BST)
- No, Croatian/Bosnian/Serbian are not dialects, nothing like that (RFC 5646, for example, mentions macrolanguage similarities - they are a programmers' convenience to make translations easier, not an implication that the language is the same and need not be translated). They are separate languages. You have several dialects of Croatian (and then several accents - and those are not accents in the simple English meaning), and a thing called "Standard Croatian", which is what is used in the HR: namespace in order for it to be completely intelligible to all Croats. The linguistic issue is extremely complex here, even for native people who have spent decades on the subject, so please do not presume that a few hours of web surfing might give you even a passable idea of the problem. I will not go into deep discussions of the language here (nor do I think it is possible - it would take extreme amounts of time and effort to explain, much more than I'm likely to write even over a few months of dedicated work, and probably much more than you're likely to read). I will only reiterate that there are extremely strong feelings and political tensions over the language issue (and yes, Croatia for example did get into the war in considerable part over being denied official use of its language and identity).
- So again, please, follow the BCP and be absolutely sure to reach consensus with *all three* "sides" (on the respective mailing lists, and not somewhere where almost nobody is following the discussion) before attempting any kind of massive namespace renaming, no matter how good you think the thing you're doing is. Doing otherwise is only likely to provoke edit wars, high tensions and many lost hours of everybody's work, which might have been much better used to improve OSM. Thanks. --mnalis 11:38, 2 August 2012 (BST)

Haitian Creole

I have added this language to support the OSM translation project at Crisis Commons.
[1] Andrew 20:09, 14 February 2010 (UTC)

Broken for no further languages

Hi, as some might have already noticed, the template became broken due to the very last edits of user:Verdy p. Is anybody able to fix it, please? --!i! 09:36, 18 January 2012 (UTC)

This template doesn't work in category namespace?

I tried to replace {{Language-Howto_Map_A}} with {{Languages}} at without any success. It links to Features, RU:Features (article pages) instead of Category:Features, Category:RU:Features. Can anybody fix it to work with the Category: namespace? Thank you! Xxzme (talk) 11:17, 22 July 2014 (UTC)
- I tried to add this language bar in Category:Out of date and also wasn't able to (I came here to report this issue).
- I added {{Languages|ns=:Category:|Out of date}} and had partial success: the English category was correctly linked and the language of the current category was correctly identified, but categories in other languages had a line break that broke the link. I assume the problem is related to the colon (:) prefix used to link to each category without categorizing the page.
- --Jgpacker (talk) 15:03, 30 October 2014 (UTC)
- I took a deeper look and was able to isolate the problem. It happens when using Template:LanguageLink with a namespace parameter (ns) that starts with a colon (in this case :Category:). It is related to an old bug in MediaWiki. Since I'm not an expert in MediaWiki templates, I'm asking for help on StackOverflow. --Jgpacker (talk) 16:19, 30 October 2014 (UTC)
- Ok, I just made a "quick" change to Template:LanguageLink and Template:LanguageLinkEn and was able to make it work now! Use {{Languages|ns=Category:|Out of date}} without a colon as a prefix (changing "Out of date" to the name of the category in English).
--Jgpacker (talk) 19:48, 30 October 2014 (UTC)

Many missing languages

BS (minor; wrong, use "Bs:"; the template name used the incorrect prefix and has been fixed)
ES (template updated)
Ge (wrong, use "Ka:")
HR (minor; wrong, use "Hr:")
Sq (also minor; the template name used the incorrect prefix and has been fixed)
Zh-tw (not supported as wanted: use "Zh-hant:" instead for Traditional Mandarin; in fact "zh-tw" designates several languages, including Traditional Mandarin, Min Nan...)

Unsupported by templates? But why? Xxzme (talk) 13:09, 10 November 2014 (UTC)
- Some templates have different capitalisation from what this template expects to generate links to them, which doesn't affect their use as templates. Zh-hant is preferred to Zh-tw. --Andrew (talk) 13:24, 10 November 2014 (UTC)
- Only 7 language codes are fully capitalized for historic reasons (those that have dedicated namespaces in this wiki: DE, ES, FR, IT, JA, NL, RU; no more will be added, but if this ever occurs, they will not be fully capitalized); all others are lowercased (but the initial only is implicitly capitalized in page titles). — Verdy_p (talk) 03:12, 29 April 2016 (UTC)
- Works for me: Sq, ES.
- HR and BS also work; however, they are minor languages, so due to technical limitations you have to click the "show" link to see them, even if they exist.
- It seems Zh-tw isn't widely accepted (see [#Language_addition]).
- Ge isn't present. I think we could add it as a minor language. Do you know which language that is?
- --Jgpacker (talk) 14:19, 10 November 2014 (UTC)
- I have no clue about Ge. For some reason it is used in the Ka: namespace. I asked the question at the talk page here: Talk:Ka:Map_Features#Ka vs Ge? Xxzme (talk) 14:52, 10 November 2014 (UTC)
- I may be wrong, but it seems it is the Georgian language. There is ka.wikipedia.org, but there isn't a ge.wikipedia.org, so I think KA is the correct language code, and Ge shouldn't be added here.
--Jgpacker (talk) 15:22, 10 November 2014 (UTC)
- Do we use the language code or the country code? If the language code, then ISO 639-1 is ka, but ISO 639-2 is geo AND kat... No clear choice for me. Xxzme (talk) 16:11, 10 November 2014 (UTC)
- We don't use country codes anywhere for namespace-like prefixes, only language codes.
- Country codes may be used for country-specific articles, but not in the namespace-like prefix.
- Preferably those countries should be represented by their name...
- ... except in some OSM keys using country codes, e.g. "Key:FR:school" for schools in France, described in English; these articles will be translatable into any language with a prefix (e.g. the article about schools in France is translatable to French as "FR:Key:FR:school", to German as "DE:Key:FR:school"). In such cases those country codes prefixing some OSM keys should be fully capitalized, but language codes used for suffixing OSM keys such as "name" should be fully lowercased. — Verdy_p (talk) 03:25, 29 April 2016 (UTC)
- The Georgian Map Features was originally created at GE:Map Features with a hand-crafted link from the old {{Language-Map Features}} template. When we changed to using the {{Languages}} template everywhere, I moved the Map Features page but not the templates that it uses. The templates aren't really any use, as they are just untranslated snapshots of the English pages at the time it was created. Like Wikipedia, we use the two-letter code when there is one. --Andrew (talk) 17:40, 10 November 2014 (UTC)
- That page was moved to Ka:Map Features. Note that this wiki documents features in languages independently of the country where they are used. If a feature is specific to a country, the tags used should use a capitalized country prefix, but the articles about these tags will not use it in the first namespace-like prefix, but always after "Key:" or "Tag:", so there's no possible confusion and these articles remain translatable to more languages.
— Verdy_p (talk) 03:25, 29 April 2016 (UTC)

Template doesn't work at El:FAQ

No idea why. Xxzme (talk) 10:54, 2 May 2015 (UTC)

Note: "This Template is used on a lot of pages." should be in source code.

The note "This Template is used on a lot of pages. ...." should be part of the template's source code. Nobody reads it on this talk page when clicking an EDIT link somewhere. Please delete this after copying the notice in. --Hb 09:59, 20 May 2015 (UTC)

Proposed rewrite

I have been rewriting this template to fix problems in what we have now. There is a fuller description at User:Wynndale/language test.
- The bar makes extensive use of the #ifexist parser function, which is expensive in MediaWiki and limited in the number of times it can be called. Most of the pages in Category:Pages with too many expensive parser function calls are at least exacerbated by the presence of a language bar.
- To mitigate the issue above, minority languages are hardcoded into the hidden lower box even when translations exist. Up to now, the only response available to criticism of the banishment [2] has been to adjust the choice of languages tested for display at the top.
- The red links are hidden by a script that is only executed when the page has loaded. This is a particular nuisance on mobile devices, where it can take up most of the screen.
- The list of languages is spread over three locations, with an intricate syntax and different ways of entering them depending on the size of their presence in OSM.

Instead of splitting links into upper and lower bars, every link goes together in one sequence, with red links hidden by CSS. A link has been added to "Other languages" (more discoverable than a "show" link) to unhide the links. I'm interested to know whether you think this is a good idea and how it can be improved.
You are welcome to improve it yourselves under less pressure than editing a live template. --Andrew (talk) 08:50, 7 June 2015 (UTC)
- Your rewrite seems very sensible and does not seem to have drawbacks, so very broad community approval should not be necessary (perhaps post on the forum?). One question: are there any plans to unite the "real" and "virtual" namespaces? What would be the gains, and what work would it be? --Jojo4u (talk) 17:36, 20 August 2015 (UTC)

New language template

There is a development version of this template at User:Wynndale/Languages, and some pages have been adjusted to use it. The rewrite is intended not to fill Category:Pages with too many expensive parser function calls, not to permanently hide minority languages out of sight, to hide red links immediately, and to make it easier to add extra languages and to add the template to a page. Longer explanation (discuss). This soft launch is an opportunity to revise the template without as much pressure as working on the live version; however, please discuss any major changes. --Andrew (talk) 10:34, 29 February 2016 (UTC)

New template going live

The new version of the template is now live. It is designed for several objectives, including long-term maintainability. Please discuss any changes first. There are a number of opportunities for subsequent cleanup; the first is to reindex unreindexed pages, especially ones outside the main namespace, by null edits. --Andrew (talk) 19:24, 21 September 2016 (UTC)
- I fixed (again) the LanguageLink template that was again broken (by you); notably, it no longer handled the template namespace (and possibly others).
You wanted to simplify it too much.
- Now you pretend I'm bullying it, but all this has been signalled here for a long time, and you persist in ignoring the issues, notably in significant spaces, and I don't understand why you want excessive indentation, which is also incorrect in various places, so it does not really help editors and certainly does not help the server either (notably, if this template is widely used, it should be compacted; we have very large pages on this wiki and there's no need to add unnecessary junk spaces or newlines, except the minimum (only standard indentation, provided it is done in really safe locations, where they are not significant)).
- Everywhere we keep just the minimum needed (even for the long term: what is necessary is only to have coherent indentation where braces and vbars line up correctly).
- You affirmed in your comment that I don't understand the wiki; that's plainly false. In fact, this template was maintained by me for a long time before you started to propose something else (but you broke it repeatedly in your tests, ignoring all the errors I signalled). I've also taken the performance considerations into account, and also preserved compatibility to allow conversion, but you reverted these needed tweaks multiple times. And you don't test things properly, unlike what I do (and I commented each one). It's difficult to progress when you constantly ignore every correction, even when they are signalled and justified (not in my "imagination"). — Verdy_p (talk) 08:55, 22 September 2016 (UTC)

Italian translation

I'm trying to improve the Italian translation. The Category:Pages unavailable in Italian would be a useful tool, but I don't want to overload the server: I've seen this edit war between Verdy_p and Wynndale. Are there any alternatives to make a list of pages not translated into Italian? My main purpose is to translate the most used tag/key documentation pages. --NonnEmilia (talk) 14:28, 22 March 2017 (UTC)
- Those that request it can have it.
It has been disabled in some languages, but it can be enabled in Italian if you wish. Just follow the model... — Verdy_p (talk) 14:43, 22 March 2017 (UTC)
- I've seen that you already enabled it. Thanks. --NonnEmilia (talk) 15:52, 22 March 2017 (UTC)
https://wiki.openstreetmap.org/wiki/Template_talk:Languages
gearman_worker_echo - Worker Declarations

SYNOPSIS

#include <libgearman/gearman.h>

gearman_return_t gearman_worker_echo(gearman_worker_st *worker,
                                     const void *workload,
                                     size_t workload_size);

DESCRIPTION

Send data to all job servers to see if they echo it back. This is a test function to see if job servers are responding properly.

PARAMETERS

[in] worker - Structure previously initialized with gearman_worker_create() or gearman_worker_clone().
[in] workload - The workload to ask the server to echo back.
[in] workload_size - Size of the workload.

RETURN VALUE

Standard gearman return value.

HOME

The Gearman homepage:

BUGS

Bugs should be reported at

COPYRIGHT

Copyright (C) 2008 Brian Aker, Eric Day. All rights reserved. Use and distribution licensed under the BSD license. See the COPYING file in the original source for full text.
http://huge-man-linux.net/man3/gearman_worker_echo.html
BitPacket is maintained in Savannah (and mirrored in GitHub and Gitorious). Savannah is the central point for development, maintenance and distribution of official GNU software (and other non-GNU software, like BitPacket).

You can download the latest BitPacket release from the project's website, or, alternatively, you can clone the source repository:

git clone git://git.sv.gnu.org/bitpacket.git

Or, if you are behind a firewall, you might use the HTTP version:

git clone

BitPacket is distributed as a Distribute (setuptools) module, so the usual commands for building and installing setuptools modules can be used. Note that this means you need setuptools installed on your system.

Once the BitPacket tarball is decompressed, you can build BitPacket as a non-root user:

python setup.py build

If the build is successful, you can then install it, as root, with the following command:

python setup.py install

Using BitPacket in your application is straightforward. You only need to add the following import to your Python scripts:

from BitPacket import *

The first version of BitPacket was released in 2007. The validation guys from the project I was working on were building a test environment to validate a piece of software which involved a lot of network packet management. They started by accessing packet fields with indexes. This was very error prone, hard to maintain, hard to read and hard to understand. So, I started digging through the web for something that could help us, but I only found the struct module. However, it neither solves the indexing problem nor supports bit fields. Then, I found the BitVector class, which was able to work with bits given a byte array, and I built BitPacket on top of it. Initially, BitPacket consisted of three classes: BitField (for single bit fields), BitStructure (a BitField itself, used to build packets as a sequence of BitFields) and BitVariableStructure (something like a meta BitStructure).
At the end of 2009, a refactoring of the test environment was necessary, and I knew BitPacket was very slow and hard to extend. Between 2007 and 2009, I had discovered a great Python library for building and parsing packets, construct. construct is great and performs its job very well. It is a very complete and powerful library for working with packets in a declarative way. The problem was that we had a lot of code written with BitPacket that needed to be reused, so construct was not an option. Finally, I decided I needed to refactor BitPacket, while learning more along the way, and create a small library, much simpler than construct and much more powerful and fast than the old BitPacket. This is how BitPacket 1.0.0 was born.
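The index-based access problem described above is easy to reproduce with nothing but the standard-library struct module: unpacked fields come back as an anonymous tuple, and anything smaller than a byte has to be masked out by hand. A minimal sketch (the packet layout here is made up purely for illustration, not taken from BitPacket):

```python
import struct

# A made-up 8-byte packet: 2-byte type, 2-byte length, 4-byte payload id
# (big-endian). This layout is hypothetical, for illustration only.
raw = struct.pack(">HHI", 0x0102, 8, 0xDEADBEEF)

# Index-based access: error prone and hard to read.
fields = struct.unpack(">HHI", raw)
packet_type = fields[0]   # which index was "type" again?
payload_id = fields[2]

# struct has no notion of bit fields either: sub-byte fields must be
# masked out manually, e.g. the top 4 bits of the "type" field.
version = (packet_type >> 12) & 0x0F

print(packet_type, payload_id, version)  # -> 258 3735928559 0
```

Libraries like BitPacket and construct exist precisely to replace these magic indexes and manual shifts with named fields.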
http://www.nongnu.org/bitpacket/intro.html
Org Edna

Table of Contents
- Copying
- Introduction
- Basic Features
- Advanced Features
- Extending Edna
- Contributing
- Changelog

Copying Ext.

Installation and Setup

Requirements

There are two ways to install Edna: from GNU ELPA, or from source.

From ELPA:

M-x package-install org-edna

From source:

bzr branch
make -C org-edna compile autoloads

After that, add the following to your init file (typically .emacs):

;; Only necessary if installing from source
(add-to-list 'load-path "/full/path/to/org-edna/")
(load "/path/to/org-edna/org-edna-autoloads.el")

;; Always necessary
(org-edna-load)

If you ever want to disable Edna, run org-edna-unload.

Basic Operation

Let's start with an example: say you want to do laundry, but once you've put your clothes in the washer, you forget about it. Even with a tool like org-notify or appt, Org won't know when to remind you. If you've got the tasks scheduled an hour after one another, maybe you forgot one time, or ran a little late. Now Org will remind you too early. Edna can handle this for you like so:

* TODO Put clothes in washer
  SCHEDULED: <2017-04-08 Sat 09:00>
  :PROPERTIES:
  :TRIGGER: next-sibling scheduled!("++1h")
  :END:
* TODO Put clothes in dryer
  :PROPERTIES:
  :TRIGGER: next-sibling scheduled!("++1h")
  :BLOCKER: previous-sibling
  :END:
* TODO Fold laundry
  :PROPERTIES:
  :TRIGGER: next-sibling scheduled!("++1h")
  :BLOCKER: previous-sibling
  :END:
* TODO Put clothes away
  :PROPERTIES:
  :TRIGGER: next-sibling scheduled!("++1h")
  :BLOCKER: previous-sibling
  :END:

After you've put your clothes in the washer and mark the task DONE, Edna will schedule the following task for one hour after you set the first heading as done.

Another example might be a checklist that you've done so many times that you do part of it on autopilot:

* TODO Address all TODOs in code
* TODO Commit Code to Repository

The last thing anyone wants is to find out that some part of the code on which they've been working for days has a surprise waiting for them.
Once again, Edna can help:

* TODO Address all TODOs in code
  :PROPERTIES:
  :BLOCKER: file("main.cpp") file("code.cpp") re-search?("TODO")
  :END:
* TODO Commit Code to Repository

Blockers

A blocker indicates conditions which must be met in order for a heading to be marked as DONE. Typically, this will be a list of headings that must be marked as DONE.

Triggers

A trigger is an action to take when a heading is set to done, for example scheduling another task, marking another task as TODO, or renaming a file.

Syntax

Edna has its own language for commands, the basic form of which is

KEYWORD(ARG1 ARG2 ...)

KEYWORD can be any valid Lisp symbol, such as key-word, KEY_WORD!, or keyword?. Each argument can be one of the following:
- A symbol, such as arg or org-mode
- A quoted string, such as "hello" or "My name is Edna"
- A number, such as 0.5, +1e3, or -5
- A UUID, such as c5e30c76-879a-494d-9281-3a4b559c1a3c

Each argument takes specific datatypes as input, so be sure to read the entry before using it. The parentheses can be omitted for commands with no arguments.

Basic Features

The most basic features of Edna are finders and actions.

Finders

A finder specifies locations from which to test conditions or perform actions. These locations are referred to as "targets". The current heading, i.e. the one that is being blocked or triggered, is referred to as the "source" heading. More than one finder may be used. In this case, the targets are merged together, removing any duplicates. Many finders take additional options, marked "OPTIONS". See relatives for information on these options.

ancestors
- Syntax: ancestors(OPTIONS...)

The ancestors finder returns a list of the source heading's ancestors. For example:

* TODO Heading 1
** TODO Heading 2
** TODO Heading 3
*** TODO Heading 4
**** TODO Heading 5
     :PROPERTIES:
     :BLOCKER: ancestors
     :END:

In the above example, "Heading 5" will be blocked until "Heading 1", "Heading 3", and "Heading 4" are marked "DONE", while "Heading 2" is ignored.
children
- Syntax: children(OPTIONS...)

The children finder returns a list of the immediate children of the source heading. If the source has no children, no target is returned. In order to get all levels of children of the source heading, use the descendants keyword instead.

descendants
- Syntax: descendants(OPTIONS...)

The descendants finder returns a list of all descendants of the source heading.

* TODO Heading 1
  :PROPERTIES:
  :BLOCKER: descendants
  :END:
** TODO Heading 2
*** TODO Heading 3
**** TODO Heading 4
***** TODO Heading 5

In the above example, "Heading 1" will block until Headings 2, 3, 4, and 5 are DONE.

file
- Syntax: file("FILE")

The file finder finds a single file, specified as a string. The returned target will be the minimum point in the file. Note that this does not give a valid heading, so any conditions or actions that require one will throw an error. Consult the documentation for individual actions or conditions to determine which ones will and won't work. See conditions for how to set a different condition. For example:

* TODO Test
  :PROPERTIES:
  :BLOCKER: file("~/myfile.org") headings?
  :END:

Here, "Test" will block until myfile.org is clear of headings.

first-child
- Syntax: first-child(OPTIONS...)

Return the first child of the source heading. If the source heading has no children, no target is returned.

ids
- Syntax: ids(ID1 ID2 ...)

The ids finder will search for headings with the given IDs, using org-id. Any number of UUIDs may be specified. For example:

* TODO Test
  :PROPERTIES:
  :BLOCKER: ids(62209a9a-c63b-45ef-b8a8-12e47a9ceed9 6dbd7921-a25c-4e20-b035-365677e00f30)
  :END:

Here, "Test" will block until the heading with ID 62209a9a-c63b-45ef-b8a8-12e47a9ceed9 and the heading with ID 6dbd7921-a25c-4e20-b035-365677e00f30 are set to "DONE". Note that UUIDs need not be quoted; Edna will handle that for you.

match
- Syntax: match("MATCH-STRING" SCOPE SKIP)

The match keyword will take any arguments that org-map-entries usually takes.
In fact, the arguments to match are passed straight into org-map-entries.

* TODO Test
  :PROPERTIES:
  :BLOCKER: match("test&mine" agenda)
  :END:

"Test" will block until all entries tagged "test" and "mine" in the agenda files are marked DONE. See the documentation for org-map-entries for a full explanation of the first argument.

next-sibling
- Syntax: next-sibling(OPTIONS...)

The next-sibling keyword returns the next sibling of the source heading, if any.

next-sibling-wrap
- Syntax: next-sibling-wrap(OPTIONS...)

Find the next sibling of the source heading, if any. If there isn't one, wrap back around to the first heading in the same subtree.

olp
- Syntax: olp("FILE" "OLP")

Finds the heading given by OLP in FILE. Both arguments are strings.

* TODO Test
  :PROPERTIES:
  :BLOCKER: olp("test.org" "path/to/heading")
  :END:

"Test" will block if the heading "path/to/heading" in "test.org" is not DONE.

org-file
- Syntax: org-file("FILE")

A special form of file, org-file will find FILE in org-directory. FILE is the relative path of a file in org-directory. Nested files are allowed, such as "my-directory/my-file.org". The returned target is the minimum point of FILE.

* TODO Test
  :PROPERTIES:
  :BLOCKER: org-file("test.org")
  :END:

Note that the file still requires an extension; the "org" here just means to look in org-directory, not necessarily at an Org mode file.

previous-sibling
- Syntax: previous-sibling(OPTIONS...)

Returns the previous sibling of the source heading on the same level.

previous-sibling-wrap
- Syntax: previous-sibling-wrap(OPTIONS...)

Returns the previous sibling of the source heading on the same level, wrapping back around to the last sibling if there is none.

relatives

Find some relative of the current heading.
- Syntax: relatives(OPTION OPTION...)
- Syntax: chain-find(OPTION OPTION...)
Identical to the chain argument in org-depend, relatives selects its single target using the following method:
- Creates a list of possible targets
- Filters the targets from Step 1
- Sorts the targets from Step 2

One option from each of the following three categories may be used; if more than one is specified, the last will be used. Filtering is the exception to this; each filter argument adds to the current filter. Apart from that, argument order is irrelevant. The chain-find finder is also provided for backwards compatibility, and for similarity to org-depend. All arguments are symbols, unless noted otherwise.

Selection
- from-top: Select siblings of the current heading, starting at the top
- from-bottom: As above, but from the bottom
- from-current: Select siblings, starting from the current heading (wraps)
- no-wrap: As above, but without wrapping
- forward-no-wrap: Find entries on the same level, going forward
- forward-wrap: As above, but wrap when the end is reached
- backward-no-wrap: Find entries on the same level, going backward
- backward-wrap: As above, but wrap when the start is reached
- walk-up: Walk up the tree, excluding self
- walk-up-with-self: As above, but including self
- walk-down: Recursively walk down the tree, excluding self
- walk-down-with-self: As above, but including self
- step-down: Collect headings from one level down

Filtering
- todo-only: Select only targets with a TODO state set that isn't a DONE state
- todo-and-done-only: Select all targets with a TODO state set
- no-archive: Skip archived headings
- NUMBER: Only use that many headings, starting from the first one. If passed 0, use all headings. If negative, omit that many headings from the end
- "+tag": Only select headings with the given tag
- "-tag": Only select headings without the tag
- "REGEX": Select headings whose titles match REGEX

Sorting
- no-sort: Remove any other sorting in effect
- reverse-sort: Reverse other sorts (stacks with other sort methods)
- random-sort: Sort in a random order
- priority-up: Sort by priority, highest first
- priority-down: Same, but lowest first
- effort-up: Sort by effort, highest first
- effort-down: Sort by effort, lowest first
- scheduled-up: Scheduled time, farthest first
- scheduled-down: Scheduled time, closest first
- deadline-up: Deadline time, farthest first
- deadline-down: Deadline time, closest first

Many of the other finders are shorthand for argument combinations of relatives:
- ancestors - walk-up
- children - step-down
- descendants - walk-down
- first-child - step-down 1
- next-sibling - forward-no-wrap 1
- next-sibling-wrap - forward-wrap 1
- parent - walk-up 1
- previous-sibling - backward-no-wrap 1
- previous-sibling-wrap - backward-wrap 1
- rest-of-siblings - forward-no-wrap
- rest-of-siblings-wrap - forward-wrap
- siblings - from-top
- siblings-wrap - forward-wrap

Because these are implemented as shorthand, any arguments for relatives may also be passed to one of these finders.

rest-of-siblings
- Syntax: rest-of-siblings(OPTIONS...)

Starting from the heading following the current one, all same-level siblings are returned.

rest-of-siblings-wrap
- Syntax: rest-of-siblings-wrap(OPTIONS...)

Starting from the heading following the current one, all same-level siblings are returned. When the end is reached, wrap back to the beginning.

siblings
- Syntax: siblings(OPTIONS...)

Returns all siblings of the source heading as targets, starting from the first sibling.

siblings-wrap
- Syntax: siblings-wrap(OPTIONS...)

Finds the siblings on the same level as the source heading, wrapping when it reaches the end. Identical to the rest-of-siblings-wrap finder.

Actions

Once Edna has collected its targets for a trigger, it will perform actions on them. Actions must always end with '!'.

Scheduled/Deadline
- Syntax: scheduled!(OPTIONS)
- Syntax: deadline!(OPTIONS)

Set the scheduled or deadline time of any target headings. There are several forms that the planning keywords can take.
In the following, PLANNING is either scheduled or deadline.

PLANNING!("DATE[ TIME]")

Sets PLANNING to DATE at TIME. If DATE is a weekday instead of a date, then set PLANNING to the following weekday. If TIME is not specified, only a date will be added to the target. Any string recognized by org-read-date may be used for DATE. TIME is a time string, such as HH:MM.

PLANNING!(rm|remove)

Remove PLANNING from all targets. The argument to this form may be either a string or a symbol.

PLANNING!(copy|cp)

Copy PLANNING info verbatim from the source heading to all targets. The argument to this form may be either a string or a symbol.

PLANNING!("[+|-|++|--]NTHING[ [+|-]LANDING]")

Increment (+) or decrement (-) the target's PLANNING by N THINGs relative to either itself (+/-) or the current time (++/--).

- N is an integer
- THING is one of y (years), m (months), d (days), h (hours), M (minutes), a (case-insensitive) day of the week or its abbreviation, or the strings "weekday" or "wkdy"

If a day of the week is given as THING, move forward or backward N weeks to find that day of the week. If one of "weekday" or "wkdy" is given as THING, move forward or backward N days, moving forward or backward to the next weekday.

This form may also include a "landing" specifier to control where in the week the final date lands. LANDING may be one of the following:

- A day of the week, which means adjust the final date forward (+) or backward (-) to land on that day of the week.
- One of "weekday" or "wkdy", which means adjust the target date to the closest weekday.
- One of "weekend" or "wknd", which means adjust the target date to the closest weekend.

PLANNING!("float [+|-|++|--]N DAYNAME[ MONTH[ DAY]]")

Set the time to the date of the Nth DAYNAME before/after MONTH DAY, as per diary-float.

- N is an integer.
- DAYNAME may be either an integer, where 0=Sunday, 1=Monday, etc., or a string for that day.
- MONTH may be an integer, 1-12, or a month's string.
- If MONTH is empty, use the following (+) or previous (-) month relative to the target's time (+/-) or the current time (++/--).
- DAY is an integer, or empty or 0 to use the first of the month (+) or the last of the month (-).

Examples:

- scheduled!("Mon 09:00") -> Set SCHEDULED to the following Monday at 9:00
- deadline!("++2h") -> Set DEADLINE to two hours from now
- deadline!(copy) deadline!("+1h") -> Copy the source deadline to the target, then increment it by an hour
- scheduled!("+1wkdy") -> Set SCHEDULED to the next weekday
- scheduled!("+1d +wkdy") -> Same as above
- deadline!("+1m -wkdy") -> Move DEADLINE up one month, but move backward to find a weekday
- scheduled!("float 2 Tue Feb") -> Set SCHEDULED to the second Tuesday in the following February
- scheduled!("float 3 Thu") -> Set SCHEDULED to the third Thursday in the following month

TODO State

- Syntax: todo!(NEW-STATE)

Sets the TODO state of the target heading to NEW-STATE. NEW-STATE may either be a string or a symbol denoting the new TODO state. It can also be the empty string, in which case the TODO state is removed.

Archive

- Syntax: archive!

Archives all targets with confirmation. Confirmation is controlled with org-edna-prompt-for-archive. If this option is nil, Edna will not ask before archiving targets.

Chain Property

- Syntax: chain!("PROPERTY")

Copies PROPERTY from the source entry to all targets. Does nothing if the source heading has no property PROPERTY.

Clocking

- Syntax: clock-in!
- Syntax: clock-out!

Clocks into or out of all targets. clock-in! has no special handling of targets, so be careful when specifying multiple targets. In contrast, clock-out! ignores its targets and only clocks out of the current clock, if any.
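Finders and actions combine directly in a heading's TRIGGER property. As a small illustration (the heading names here are made up, but next-sibling and clock-in! are the finder and action documented above), completing the first task starts the clock on the second:

```
* TODO Draft report
  :PROPERTIES:
  :TRIGGER:  next-sibling clock-in!
  :END:
* TODO Review draft
```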
Property

- Syntax: set-property!("PROPERTY" "VALUE")
- Syntax: set-property!("PROPERTY" inc)
- Syntax: set-property!("PROPERTY" dec)
- Syntax: set-property!("PROPERTY" next)
- Syntax: set-property!("PROPERTY" prev)
- Syntax: set-property!("PROPERTY" previous)

The first form sets the property PROPERTY on all targets to VALUE. If VALUE is a symbol, it is interpreted as follows:

- inc - Increment a numeric property value by one
- dec - Decrement a numeric property value by one

If either inc or dec attempt to modify a non-numeric property value, Edna will fail with an error message.

- next - Cycle the property through to the next allowed property value
- previous - Cycle the property through to the previous allowed property value

The symbol prev may be used as an abbreviation for previous. Similar to inc and dec, any of these will fail if there are no allowed values defined. When reaching the end of the list of allowed values, next will cycle back to the beginning. Example:

#+PROPERTY: TEST_ALL a b c d
* TODO Test Heading
  :PROPERTIES:
  :TEST: d
  :TRIGGER: self set-property!("TEST" next)
  :END:

When "Test Heading" is set to DONE, its TEST property will change to "a". This also works with previous, but in the opposite direction.

Additionally, all special forms will fail if the property is not already set:

* TODO Test
  :PROPERTIES:
  :TRIGGER: self set-property!("TEST" inc)
  :END:

In the above example, if "Test" is set to DONE, Edna will fail to increment the TEST property, since it doesn't exist.

- Syntax: delete-property!("PROPERTY")

Deletes the property PROPERTY from all targets.

Examples:

- set-property!("COUNTER" "1") -> Sets the property COUNTER to 1 on all targets
- set-property!("COUNTER" inc) -> Increments the property COUNTER by 1. Following the previous example, it would be 2.

Priority

Sets the priority of all targets.

- Syntax: set-priority!("PRIORITY")

Set the priority to the first character of PRIORITY.
- Syntax: set-priority!(up)

Cycle the target's priority up through the list of allowed priorities.

- Syntax: set-priority!(down)

Cycle the target's priority down through the list of allowed priorities.

- Syntax: set-priority!(P)

Set the target's priority to the character P.

Advanced Features

Conditions

Edna gives you the option to specify blocking conditions. Each condition is checked for each of the specified targets; if one of the conditions returns true for that target, then the source heading is blocked. If no condition is specified, !done? is used by default, which means block if any target heading isn't done.

headings

- Syntax: headings?

Blocks the source heading if any target belongs to a file that has an Org heading. This means that the target does not have to be a heading.

org-file("refile.org") headings?

The above example blocks if refile.org has any headings.

todo-state

- Syntax: todo-state?(STATE)

Blocks if any target heading has its TODO state set to STATE. STATE may be a string or a symbol.

variable-set

- Syntax: variable-set?(VARIABLE VALUE)

Evaluate VARIABLE when visiting a target, and compare it with equal against VALUE. Block the source heading if VARIABLE = VALUE. VARIABLE should be a symbol, and VALUE is any valid lisp expression.

self variable-set?(test-variable 12)

has-property

- Syntax: has-property?("PROPERTY" "VALUE")

Tests each target for the property PROPERTY, and blocks if it's set to VALUE.

re-search

- Syntax: re-search?("REGEXP")

Blocks the source heading if the regular expression REGEXP is present in any of the targets. The targets are expected to be files, although this will work with other targets as well.

Consideration

"Consideration" is a special keyword that's only valid for blockers. This says "Allow a task to complete if CONSIDERATION of its targets pass the given condition".
This keyword can allow specifying only a portion of tasks to consider:

1. consider(PERCENT)
2. consider(NUMBER)
3. consider(all) (Default)
4. consider(any)

(1) tells the blocker to only consider some portion of the targets. If at least PERCENT of them are in a DONE state, allow the task to be set to DONE. PERCENT must be a decimal, and doesn't need to include a %-sign.

(2) tells the blocker to only consider NUMBER of the targets.

(3) tells the blocker to consider all following targets.

(4) tells the blocker to allow passage if any of the targets pass.

A consideration must be specified before the conditions to which it applies:

consider(0.5) siblings match("find_me") consider(all) !done?

The above code will allow task completion if at least half the siblings are complete, and all tasks tagged "find_me" are complete.

consider(1) ids(ID1 ID2 ID3) consider(2) ids(ID3 ID4 ID5 ID6)

The above code will allow task completion if at least one of ID1, ID2, and ID3 are complete, and at least two of ID3, ID4, ID5, and ID6 are complete.

If no consideration is given, ALL is assumed. Both "consider" and "consideration" are valid keywords; they both mean the same thing.

Conditional Forms

Let's say you've got the following checklist:

* TODO Nightly
  DEADLINE: <2017-12-22 Fri 22:00 +1d>
  :PROPERTIES:
  :ID: 12345
  :BLOCKER: match("nightly")
  :TRIGGER: match("nightly") todo!(TODO)
  :END:
* TODO Prepare Tomorrow's Lunch :nightly:
* TODO Lock Back Door :nightly:
* TODO Feed Dog :nightly:

You don't know in what order you want to perform each task, nor should it matter. However, you also want the parent heading, "Nightly", to be marked as DONE when you're finished with the last task. There are two solutions to this. The first is to have each task attempt to mark "Nightly" as DONE, which will spam blocking messages after each task. The second is to use conditional forms.
Conditional forms are simple; it's just if/then/else/endif:

if CONDITION then THEN else ELSE endif

Here's how that reads: "If CONDITION would not block, execute THEN. Otherwise, execute ELSE."

For our nightly entries, this looks as follows:

* TODO Prepare Tomorrow's Lunch :nightly:
  :PROPERTIES:
  :TRIGGER: if match("nightly") then ids(12345) todo!(DONE) endif
  :END:

Thus, we replicate our original blocking condition on all of them, so it won't trigger the original until the last one is marked DONE.

Occasionally, you may find that you'd rather execute a form if the condition would block. There are two options. The first is confusing: use consider(any). This will tell Edna to pass so long as one of the targets meets the condition. This is the opposite of Edna's standard operation, which only allows passage if all targets meet the condition.

* TODO Prepare Tomorrow's Lunch :nightly:
  :PROPERTIES:
  :TRIGGER: if consider(any) match("nightly") then ids(12345) todo!(DONE) endif
  :END:

The second is a lot easier to understand: just switch the then and else clauses:

* TODO Prepare Tomorrow's Lunch :nightly:
  :PROPERTIES:
  :TRIGGER: if match("nightly") then else ids(12345) todo!(DONE) endif
  :END:

The conditional block tells Edna which section to evaluate. Thus, you can conditionally add targets, or conditionally check conditions.

Setting the Properties

There are two ways to set the BLOCKER and TRIGGER properties: by hand, or the easy way. You can probably guess which way we prefer.

With point within the heading you want to edit, type M-x org-edna-edit. You end up in a buffer that looks like this:

Edit blockers and triggers in this buffer under their respective sections below.
All lines under a given section will be merged into one when saving back to the
source buffer.  Finish with `C-c C-c' or abort with `C-c C-k'.

BLOCKER
BLOCKER STUFF HERE

TRIGGER
TRIGGER STUFF HERE

In here, you can edit the blocker and trigger properties for the original heading in a cleaner environment.
More importantly, you can complete the names of any valid keyword within the BLOCKER or TRIGGER sections using completion-at-point. When finished, type C-c C-c to apply the changes, or C-c C-k to throw out your changes.

Extending Edna

Extending Edna is (relatively) simple. During operation, Edna searches for functions of the form org-edna-TYPE/KEYWORD.

Naming Conventions

In order to distinguish between actions, finders, and conditions, we add '?' to conditions and '!' to actions. This is taken from the practice in Guile and Scheme to suffix destructive functions with '!' and predicates with '?'. Thus, one can have an action that files a target, and a finder that finds a file.

Finders

Finders have the form org-edna-finder/KEYWORD, like so:

(defun org-edna-finder/test-finder ()
  (list (point-marker)))

All finders must return a list of markers, one for each target found, or nil if no targets were found.

Actions

Actions have the form org-edna-action/KEYWORD!:

(defun org-edna-action/test-action! (last-entry arg1 arg2))

Each action has at least one argument: last-entry. This is a marker for the current entry (not to be confused with the current target). The rest of the arguments are the arguments specified in the form.

Conditions

(defun org-edna-condition/test-cond? (neg))

All conditions have at least one argument, "NEG". If NEG is non-nil, the condition should be negated. Most conditions have the following form:

(defun org-edna-condition/test-cond? (neg)
  (let ((condition (my-test-for-condition)))
    (when (org-xor condition neg)
      (string-for-blocking-entry-here))))

For conditions, we return true if condition is true and neg is false, or if condition is false and neg is true; in other words, an exclusive or. So we pass CONDITION and NEG into org-xor to get our result. A condition must return a string if the current entry should be blocked.

Contributing

We are all happy for any help you may provide.
First, check out the source code on Savannah:

bzr branch org-edna

You'll also want a copy of the most recent Org Mode source:

git clone git://orgmode.org/org-mode.git

Bugs

There are two ways to submit bug reports:

- Using the bug tracker at Savannah
- Sending an email using org-edna-submit-bug-report

When submitting a bug report, be sure to include the Edna form that caused the bug, with as much context as possible.

Development

If you're new to bazaar, we recommend using Emacs's built-in VC package. It eases the overhead of dealing with a brand new VCS with a few standard commands. For more information, see the info page on it (in Emacs, this is C-h r m Introduction to VC RET).

To contribute with bazaar, you can do the following:

# Hack away and make your changes
$ bzr commit -m "Changes I've made"
$ bzr send -o file-name.txt

Then, use org-edna-submit-bug-report and attach "file-name.txt". We can then merge that into the main development branch.

There are a few rules to follow:

- Verify that any new Edna keywords follow the appropriate naming conventions
- Any new keywords should be documented
- We operate on headings, not headlines; use one word to avoid confusion
- Run 'make check' to verify that your mods don't break anything
- Avoid additional or altered dependencies if at all possible
  - Exception: New versions of Org mode are allowed

Documentation

Documentation is always helpful to us. Please be sure to do the following after making any changes:

- Update the info page in the repository with C-c C-e i i
- If you're updating the HTML documentation, switch to a theme that can easily be read on a white background; we recommend the "adwaita" theme

Changelog

1.0beta6

Lots of parsing fixes.

- Fixed error reporting
- Fixed parsing of negations in conditions
- Fixed parsing of multiple forms inside if/then/else blocks

1.0beta5

Some new forms and a new build system.

- Added new forms to set-property!
  - Now allows 'inc, 'dec, 'previous, and 'next as values
- Changed build system to EDE to properly handle dependencies
- Fixed compatibility with new Org effort functions

1.0beta4

Just some bug fixes from the new form parsing.

- Fixed multiple forms getting incorrect targets
- Fixed multiple forms not evaluating

1.0beta3

HUGE addition here - Conditional Forms

- See Conditional Forms for more information
- Overhauled internal parsing
- Fixed consideration keywords
  - Both consider and consideration are accepted now
- Added 'any consideration
  - Allows passage if just one target is fulfilled

1.0beta2

Big release here, with three new features.

- Added interactive keyword editor with completion
  - See Setting the Properties for how to do that
- New uses of scheduled! and deadline!
  - New "float" form that mimics diary-float
  - New "landing" addition to "+1d" and friends to force planning changes to land on a certain day or type of day (weekend/weekday)
  - See Scheduled/Deadline for details
- New "relatives" finder
- New finders
http://www.nongnu.org/org-edna-el/
As you may know, Bitlocker full disk encryption used to be available only on the Enterprise and Ultimate editions of Windows Vista, when it was introduced more than 12 years ago. Windows 7 continued that exclusive tradition. Windows 8 made it available in the Professional edition for the first time, which allowed a lot of home users that had purchased Pro to finally use it on their private devices.

But what could you use if you had bought the Home edition of Windows and you wanted to keep away from 3rd party encryption software? Microsoft's answer was "device encryption", which I would rather call "Bitlocker light". Microsoft started to advertise that the Home version comes with "device encryption" as well, while making "Bitlocker device encryption" a separate feature, still unavailable on the Windows Home edition. Under the hood, it is the same as Bitlocker, but it will not offer the end user as many options as Bitlocker does. Well, do home users normally even need these options? Normally, they don't. So, with that said, why would I try to go beyond device encryption? In other words: why would I even write this article?

It is because Microsoft only allows device encryption on Windows 10 Home when two conditions are met:

1. Your device has a TPM chip
2. Your device meets certain hardware requirements like InstantGo/"Modern Standby", which are poorly documented, as in "hard to find out why you don't qualify"

Regarding the latter condition, I am going to ask you, the reader: why would Microsoft make it that hard? Imagine your machine does not qualify, what can you do? You will be told to buy the Professional version, which entitles you to use Bitlocker. If these two requirements don't apply to users that run Windows 10 Pro on the same hardware with Bitlocker, then why would they matter on the Home edition with "Bitlocker light"? Let's see.

For a test, I created a Windows 10 Home virtual machine in Hyper-V.
I added a (virtual) TPM chip which (according to the Windows snap-in tpm.msc) is ready for usage. Now let's see if device encryption can be used. NO, it can't. The option is unavailable in control panel. Let me open system information (msinfo32.exe) to check whether there is a reason for the missing option... yes, there is:

Reasons for failed automatic device encryption: Hardware Security Test Interface failed and the device is not Modern Standby, TPM is not usable.

As I wrote: the TPM is usable, and as I will show, it would work with Bitlocker, so why not with device encryption? Hmm. And what does an end user even know about HSTI or "Modern Standby"? If he informed himself, would he be able to buy a device that satisfies this requirement? I am not that sure.

You might have noticed something else: the word "automatic". Microsoft would even have enabled device encryption automatically if the requirements had been fulfilled and you were logging on with a Microsoft account. That way, they can ensure that the recovery key, the important fallback key, is saved to your OneDrive cloud storage.

Ok, so this is something to understand: as a user of the Pro version, you would not be required to back up your key to the cloud, nor to have a device with certain capabilities. Bitlocker just works without all that; you could even choose to use a password instead of the TPM, which, according to Microsoft, is not a safe practice.

So possibly, Microsoft is trying to act in the best interest of the home users that might, after all, not know what they are doing when they choose to enable disk encryption, and keeps them from using that feature, so that they don't lock themselves out of their computer, possibly rendering their data inaccessible. But what about you, the home version users, who do understand all of that? This method is for you. It will give you the same protection and features as device encryption, but on any hardware.
Please note: if you have no idea what Bitlocker is or how it works, you should not encrypt your drive with it. In any case, let me emphasize that I expect anyone trying this to follow the instructions to the T, but first of all, to have a full data backup.

To make Bitlocker usable on the Windows Home edition, you only need a TPM module that is ready for usage. That's all. You don't need to tweak Windows or use illegal practices; Microsoft has left a backdoor open for you. To use it, proceed as follows (you might want to print out the following before you proceed):

Be aware that if you have set up Windows in a non-standard way (with legacy "MBR" partitioning, that is) and at the same time you use a TPM 2.0 module, you will not be able to use this method right away, so let's begin with two little tests:

powershell Get-Disk 0 | findstr GPT && echo This is a GPT system disk!

If this command returns "This is a GPT system disk!", then that's good. If it does not return anything, let's see what the 2nd test says. Now launch:

wmic /namespace:\\root\CIMV2\Security\MicrosoftTpm path Win32_Tpm get /value | findstr SpecVersion=2.0 && cls && echo TPM version 2.0 found, this will only work with GPT.

If this command returns "No Instance(s) Available", then you have no TPM chip. Scroll to the end of this article for an explanation. If it returns "TPM version 2.0 found, this will only work with GPT", but the first test did not tell you that your system disk is a GPT system disk, scroll to the end of the article for a resolution. Else, if it does not return anything, you are ready to continue here.
Click on the start button and then on the power button, keep the shift key pressed, and then click on restart. The following screen will soon appear: there, select Troubleshoot - Advanced options - Command prompt. Now the computer will restart and ask for the password of an administrator account before it proceeds with the command prompt.

At the command prompt, just run the following command:

manage-bde -on c: -used

As you can read: the encryption is now in progress. Nevertheless, we may restart the PC right now. Close the command prompt and select "continue" to boot Win10 Home. When it's booted, open an elevated command prompt (right click c:\windows\system32\cmd.exe and select "Run as administrator") and then launch:

manage-bde c: -protectors -add -rp -tpm

Now you have added a recovery key, which is very important and needs to be saved to a (text) file, printed out, and kept at a safe place. To do that, simply use copy and paste within the command prompt: mark the recovery key together with the ID, copy it to a word processor like Notepad or Word, save it to (for example) your personal backup drive, and then print it out.

Congrats, you have added a TPM protector that allows the device to start hands-free. On to the last command, the one that finally enables Bitlocker protection:

manage-bde -protectors -enable c:

Bingo. Now open file explorer and you see the lock icon on your (C:) drive. If you want to encrypt additional drives, repeat the whole process, just with the other drive letters.
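To recap, the whole sequence for the system drive boils down to three manage-bde calls (the same commands as above, collected in order; the first runs in the recovery-environment prompt, the other two in an elevated prompt after the reboot):

```
REM 1) In the recovery-environment command prompt:
manage-bde -on c: -used

REM 2) Back in Windows, elevated prompt: add and save the recovery password
manage-bde c: -protectors -add -rp -tpm

REM 3) Finally, enable the protectors
manage-bde -protectors -enable c:
```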
Note that you cannot add TPM protectors to drives other than (C:). So, for (D:) to become protected, when you have rebooted, you will need to add an auto-unlock protector and a recovery key like this:

manage-bde -autounlock -enable d:
manage-bde -protectors -add -rp d:

Finally, enable the protector using:

manage-bde -protectors -enable d:

In explorer, you now see 2 encrypted partitions, (C:) and (D:).

Note: You CANNOT add pre-boot authentication passwords with Windows 10 Home. This protection relies on the TPM alone, which means you are not protected against all attack types, but at least against the same attacks that device encryption ought to protect you against! If you have any questions about this article, feel free to Ask a Related Question at the forum!

--

At the end, I will refer to possible problems that the pre-tests could reveal:

"No Instance(s) Available": You have no TPM chip. This will not work without one. Mainboards of desktop computers are usually not equipped with TPMs, so if you have a desktop computer, you might have to buy a TPM chip that fits your mainboard, first finding out if that is even possible: your mainboard would need to have a TPM header. Modern notebooks will usually have a TPM. If they don't, unfortunately, there is no way to change that.

"TPM version 2.0 found, this will only work with GPT": if you happen to run a TPM in 2.0 mode, but Windows is not installed with GPT partitioning, this can luckily be changed! Please refer to the Microsoft article "MBR2GPT.EXE" for the required command to use in order to convert to GPT.
https://www.experts-exchange.com/articles/33596/How-to-use-Bitlocker-on-Windows-10-Home.html
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.

first u have to inherit the hr.applicant class in a py file:

1)

from openerp.osv import osv, fields

class hr_applicant(osv.osv):
    _inherit = "hr.applicant"
    _columns = {
        'field1': fields.char('name', size=246),
    }

and then u have to inherit the hr.applicant form / tree view in an xml file

2)

<record model="ir.ui.view" id="crm_case_form_view_job">
    <field name="name">Jobs - Recruitment Form</field>
    <field name="model">hr.applicant</field>
    <field name="inherit_id" ref="crm.crm_case_form_view_job"/>
    <field name="arch" type="xml">
        <xpath expr="//form" position="inside">
            <field name="field1"/>
        </xpath>
    </field>
</record>

thanks

yes u need to create an __init__.py file and an __openerp__.py file

if you are satisfied with the answer then plz accept the answer

<field name="inherit_id" ref="crm.crm_case_form_view_job"/> if i inherit another model, what will be the ref of that model? How can i!
https://www.odoo.com/forum/help-1/question/i-want-to-add-a-new-field-in-hr-applicant-how-can-i-add-using-inheritance-29728
Opened 12 years ago Closed 10 years ago

#3308 closed defect (worksforme)

Escaped documentation in admin site

Description

The automatically generated documentation escapes the HTML markup provided by docutils. A docstring on a view such as this:

def foo(request):
    """bar"""
    return None

... will generate a documentation "summary" (title) of:

<p>bar</p>

... or literally (the characters sent over the wire):

<h2 class="subhead"><p>bar</p>
</h2>

I have been digging through the code and cannot find a clean way to get around this short of writing my own templates. I do not think I should have to write my own templates to work around a (presumed) bug.

Change History (8)

comment:1 Changed 12 years ago by

comment:2 Changed 12 years ago by

Can someone confirm current action isn't what we want and then forward triage stage to Accepted.

comment:3 Changed 12 years ago by

Well, the above example looks wrong, but a fix is going to need careful thought: any raw text should be escaped before being dumped into the doc pages, just so that you can write "3 < 5" in your docstrings and not have it break on output. Anything that has gone through docutils, though, should not be escaped, since it's already HTML. I used to remember which parts were which, but I do remember it's not too hard to work out.

However, the bug also shows another problem: the "bar" string really doesn't want to be in paragraph tags in the first place. So we may need to look at that as well. May be multiple problems here, or all handled by one simple fix. Not sure.

comment:4 Changed 11 years ago by

comment:5 Changed 11 years ago by

I think it might be prudent to look into applying Wiki formatting for the docstrings. It's less intrusive than HTML and can still be easily read from a command line.

Edit: "I do not think I should have to write my own templates..."
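A sketch of the rule comment 3 describes (a hypothetical helper, not Django's actual code): raw docstring text must be escaped so "3 < 5" survives output, while docutils output is already HTML and must not be escaped a second time:

```python
from html import escape

def render_doc_fragment(text, already_html):
    # docutils output is trusted HTML; anything else gets escaped
    return text if already_html else escape(text, quote=False)

print(render_doc_fragment("3 < 5", already_html=False))      # 3 &lt; 5
print(render_doc_fragment("<p>bar</p>", already_html=True))  # <p>bar</p>
```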
https://code.djangoproject.com/ticket/3308
Understanding Sprite coordinate systems

- WannabeMaker

Very newbie question - apologies if there's obvious documentation on this; I haven't found it.

Ultimately, my code should draw a line segment starting at a specific coordinate on the screen (x, y) and ending at an offset (x + dx, y + dy). Importantly, dx and dy can be positive or negative. It will then change color over time, so I'd like to use a scene.ShapeNode that I can animate.

For starters, I'm testing by creating a line segment that starts at my touch point. However, getting the line to start at the touch point seems only to work with a somewhat unwieldy set of conditions to set the anchor_point (see below). I think this is because of the way the ShapeNode bounding rectangle, and therefore its coordinate system, changes as the size of the shape changes. Is there some smarter, cleaner, more elegant way to get this done?

For bonus points, am I right in thinking that the +y direction for a Sprite is down, but the y coordinate of a touchPosition increases as you go up the screen? Thoughts on why?

def drawSeg(self, touchPosition):
    segPath = ui.Path()
    segPath.move_to(0, 0)
    dx = randint(-100, 100)
    dy = randint(-100, 100)
    segPath.line_to(dx, dy)
    seg = ShapeNode(path = segPath, position = touchPosition)
    seg.stroke_color = (255,255,255)
    seg.fill_color = "clear"
    if dx >= 0:
        if dy >= 0:
            seg.anchor_point = (0, 1)
        else:
            seg.anchor_point = (0, 0)
    else:
        if dy >= 0:
            seg.anchor_point = (1, 1)
        else:
            seg.anchor_point = (1, 0)
    self.add_child(seg)

- JonB

segPath.line_to(dx, -dy)
seg = ShapeNode(path=segPath, position=touchPosition)
seg.anchor_point = ((dx<=0), (dy<=0))

You are correct that scene coordinates have y positive = up. Images and paths usually have the origin in the top left corner. The rationale... that is the way iOS does it... which is for obscure historical reasons.

- chriswilson

If it's helpful, here is some code I was experimenting with to draw line segments and have a ball bounce off them using the scene module.
- WannabeMaker

@JonB Thank you - that's perfect. And, as with all the best code, totally obvious in retrospect. Much appreciated!

- WannabeMaker

@chriswilson Thank you, Chris - I will check it out. I appreciate your help.
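For readers skimming later: JonB's one-liner works because a Python boolean comparison yields 0 or 1, exactly the two values an anchor coordinate needs. The corner-picking rule can be checked outside Pythonista with plain tuples (no scene module required; int() is only for readability, since True == 1):

```python
def anchor_for(dx, dy):
    # Same rule as seg.anchor_point = ((dx <= 0), (dy <= 0)):
    # picks which corner of the bounding box the anchor sits in.
    return (int(dx <= 0), int(dy <= 0))

print(anchor_for(50, -30))   # (0, 1)
print(anchor_for(-50, 30))   # (1, 0)
```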
https://forum.omz-software.com/topic/3184/understanding-sprite-coordinate-systems
Usually a malware writer, or a closed source product, uses some techniques in order to make the binaries difficult to read. On the one hand, anti-virus engines are unable to read the signature of the malware, and on the other hand a reverse engineer's life becomes difficult. One technique (usually not implemented alone) is to encrypt some portions of the code and decrypt them at runtime, or better, decrypt each time just the code we want to run and then encrypt it back. As GPUs have extremely high computational power, we can have really complex functions for encrypting and decrypting our code. I've made a really simple example of a self-decrypting application and I'll try to explain it step by step.

First of all, what is our program going to do? Well, it will spawn a shell. The assembly code (we need assembly code so it can be portable) to do that is:

global _shell
_shell:
    xor ecx, ecx
    mul ecx
    push ecx
    push 0x68732f2f
    push 0x6e69622f
    mov ebx, esp
    mov al, 11
    int 0x80

You can find codes like this freely available on the internet (this one is written by kernel panik), or you can make your own if you want specific things to be done (or just want to learn). We want our code to be portable, and not containing relative addresses.
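A quick sanity check on those bytes (taken from the objdump output further down; the two push constants are just the string "/bin//sh" in little-endian form):

```python
code = bytes.fromhex("31c9f7e151682f2f7368682f62696e89e3b00bcd80")
assert len(code) == 21     # matches the "#define len 21" in the CUDA source below
assert b"/bin" in code     # push 0x6e69622f
assert b"//sh" in code     # push 0x68732f2f
print("shellcode looks sane")
```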
So now that we have our assembly code, we compile it to an object file:

nasm shell.asm -f elf32 -o shell.o

Our code for the self-decrypting binary is this one, written in C for CUDA:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <cuda.h>

#define len 21

__global__ void decrypt(unsigned char *code){
    int indx = threadIdx.x;
    code[indx] ^= 12;
}

extern "C" void _shell();

int main(void){
    unsigned char *p = (unsigned char*)_shell;
    unsigned char *d_shell, *h_shell;
    h_shell = (unsigned char *)malloc(sizeof(char)*len);
    int i;
    for(i=0;i<len;i++){
        h_shell[i] = *p;
        p++;
    }
    cudaMalloc((void **) &d_shell, sizeof(char)*len);
    cudaMemcpy(d_shell, h_shell, sizeof(char)*len, cudaMemcpyHostToDevice);
    decrypt<<<1,len>>>(d_shell);
    cudaMemcpy(h_shell, d_shell, sizeof(char)*len, cudaMemcpyDeviceToHost);
    cudaFree(d_shell);
    char *d = (char *)mmap(NULL, len, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANON, -1, 0);
    memcpy(d, h_shell, len);
    ((void(*)(void))d)();
}

Now I have to make some explanations. First of all, we have to find the length of the instructions. There are some ways to do this, but there is a project by oblique here: that can do that very easily.

Now, some of you may wonder why I am mmapping and memcpying. Well, there are some protections around that prevent us from writing to some portions of memory, such as .text. So we have to load our encrypted code, decrypt it, and mmap it to a new portion of memory that can be executed. This is where our flags (PROT_READ|PROT_WRITE|PROT_EXEC) go. After that we are ready to execute our code.

UPDATE NOTE: Ok, I don't really know why I did this, but some of you may wonder: why don't you just call mprotect? Well, you are right. I updated my code on github and you can check it.

Okay I know, it's a simple xor decryption with a fixed key, not really encryption, but this is just a proof of concept. You can have a more complex stream cipher function like RC4 etc. Also, you do not need to have a key saved in the binary somehow, but brute force until the code "makes sense".
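The patching step needs the encrypted op codes. The xor-12 transform is easy to reproduce in a few lines of Python (the hex string is the 21 shellcode bytes as disassembled below):

```python
KEY = 12
code = bytes.fromhex("31c9f7e151682f2f7368682f62696e89e3b00bcd80")
enc = bytes(b ^ KEY for b in code)
print(enc.hex())  # 3dc5fbed5d6423237f6464236e656285efbc07c18c
```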
With such computational power, that is pretty easy.

Now we compile our source code with nvcc and link it:

    nvcc shell_spawn.cu -c
    gcc shell_spawn.o shell.o -o shell_spawn -L/usr/local/cuda/lib -lcudart

And now we have our executable! But first we have to patch our binary with our encrypted function. The reason we used a stream cipher is that we do not want to change the size of our function and make things more complex. One simple way to patch our ELF binary is simply to open it with a hex editor (I used Bless) and find the code we want to patch. But how? It's simple:

    objdump -d -j .text shell_spawn

and if you search you will see the _shell function:

    8048a30: 31 c9                  xor    %ecx,%ecx
    8048a32: f7 e1                  mul    %ecx
    8048a34: 51                     push   %ecx
    8048a35: 68 2f 2f 73 68         push   $0x68732f2f
    8048a3a: 68 2f 62 69 6e         push   $0x6e69622f
    8048a3f: 89 e3                  mov    %esp,%ebx
    8048a41: b0 0b                  mov    $0xb,%al
    8048a43: cd 80                  int    $0x80

Now we simply encrypt the opcodes. I used XOR with 12, so my output is this:

    3dc5fbed5d6423237f6464236e656285efbc07c18c

We open our hex editor, load our binary, and replace our old _shell function with the encrypted one. After that we save our file, and if we execute it we can see that a shell spawns! If we objdump our file, we can still see our function _shell, but this time it is doing random stuff ;) :

    8048a30: 3d c5 fb ed 5d         cmp    $0x5dedfbc5,%eax
    8048a35: 64 23 23               and    %fs:(%ebx),%esp
    8048a38: 7f 64                  jg     8048a9e <__libc_csu_init+0x4e>
    8048a3a: 64 23 6e 65            and    %fs:0x65(%esi),%ebp
    8048a3e: 62 85 ef bc 07 c1      bound  %eax,-0x3ef84311(%ebp)
    8048a44: 8c 90 90 90 90 90      mov    %ss,-0x6f6f6f70(%eax)

You can find my source on github as well. I want to develop a stronger cipher and find a better way to patch my binary, so this is just the idea. If someone wants to go deeper, I'd like to hear new ideas. Until then, feel free to comment, point out mistakes, etc. :)

Sources:
[1]: GPU Assisted malware
https://stack0verflow.wordpress.com/tag/code/
> side note, the idea of having flags in a map with the thread ID as the
> key sounded kind of cool. Not sure if that is a quick but not dirty fix.

I agree: it's not dirty at all. ThreadLocal enables:

- avoiding the use of wait/notify within synchronized blocks to force
  threads to contend for a lock on data shared across threads, and
- limiting the use of synchronized blocks by letting each thread have its
  own copy. There is no contention for the monitor, as each thread has its
  own copy.

Example (this IS quick and dirty!):

    public class FlagContainer {
        private static FlagThreadLocal calulcations = new FlagThreadLocal();

        public static FlagContainer.FlagThreadLocal getCalculations() {
            return calulcations;
        }

        static class FlagThreadLocal extends ThreadLocal<Map<String, Flag>> {
            @Override
            public Map<String, Flag> initialValue() {
                return new HashMap<String, Flag>();
            }
        }
    }

and then, somewhere within the Runnable:

    FlagContainer.FlagThreadLocal calculations = FlagContainer.getCalculations();
    Flag myFlag = calculations.get().get(key);
    if (myFlag == null) {
        myFlag = new Flag();
    }
    myFlag.setValue(flagValue);

This doesn't come for free: although there is no explicit synchronization,
the JVM still has to execute synchronized code. Some say it performs badly,
but I think it's OK. Although, thinking about it, maybe you could exploit
the synchronized methods in Hashtable and do away with the ThreadLocal...
although this way the thread-local map is invisible to the caller and could
be wrapped up in the getters & setters.

cheers
paul

========================================
Message Received: Jun 06 2009, 06:19 PM
From: "Christoph Steinbeck" <steinbeck@...>
To: "Developers forum for discussion about the Chemistry Development Kit
(CDK)" <cdk-devel@...>
Subject: Re: [Cdk-devel] More on threading... or, we might actually need
an immutable pattern here

On 6 Jun 2009, at 13:28, Egon Willighagen wrote:
> On Thu, Jun 4, 2009 at 7:07 PM, Christoph Steinbeck <steinbeck@...
> > wrote:
>
>> With respect to two algorithms manipulating the same molecule at the
>> same time: Can you give an example?
>
> Calculation of QSAR descriptors. Currently, it is impossible to
> parallelize calculating descriptors for one molecule. (Since we cannot
> cache graph properties, we cannot assume descriptors will not
> calculate properties at the same time.)

Good example - indeed we always assumed that there is a serial execution
of code.

>> I'm tempted to say that this might be bad design of the application
>> and not of the underlying CDK library (no accusations to anyone :-)
>> just a thought).
>
> Indeed the CDK library kind of sucks. If we cannot write good code in
> the CDK library itself, who are we to accuse downstream libraries...
> to clarify, it is not any application that is affected... I was
> talking about the CDK itself being affected

You misunderstood me. I was not talking of libraries used by the CDK, but
of an application using the CDK, with the CDK being the underlying library
of this application.

>> I mean, in principle you can *always* design a bad implementation
>> where two parallel threads manipulate the same data and mess it up.
>
> Not if you implement the immutable pattern.

But that would essentially mean that we need to re-write basically all
algorithms. I realize that you pointed this out in your original posting.
Sounds kind of scary, though, but I guess we have no choice in the long
run. The question that we now need to start discussing is the various
options.

>> I guess the tricky part comes when you haven't got control of this -
>> like when the operating system multithreads things automatically.
>
> Not sure what you mean here. Should we not try to support
> multithreading?

Who said that? I think it is utterly clear that any library needs to
support multithreading.

> We actually want the 'OS' to do the multithreading for us... that's
> how Map&Reduce, clojure, etc. works.
> These are the OS-s you run the CDK-based application on. The advantage
> of these systems is that the 'OS' takes care of the multithreading.
>
>> With regards to builders - a builder will give you a molecule
>> instance, ok. And then you can again have parallel threads messing
>> this molecule up. I'm sure I miss something here.
>
> Yes, but it makes it very clear when something gets modified. Right
> now, you have no clue. Using the builders can block any operation on
> the molecule, but leaves using the immutable molecule itself
> unblocking.
>
> Perhaps Java supports making certain methods blocking, but that is not
> actually the problem here.
>
> The underlying problem here is not the setting of a FLAG, but an
> algorithm using VISITED and other fields which have meaning *while*
> the algorithm is running. Not sure how easily that can be made
> blocking access to the molecule from other threads...
>
> As always, most open to suggestions...

Well, I think that in any case we need a migration plan and maybe a page
on our wiki where we point out the options for the migration to a CDK
with multithreading capabilities.

As a side note, the idea of having flags in a map with the thread ID as
the key sounded kind of cool. Not sure if that is a quick but not dirty
fix.

Cheers,

Chris

--
Dr. Christoph Steinbeck
Head of Chemoin..
https://sourceforge.net/p/cdk/mailman/message/22674289/
Release notes - Azure Arc enabled SQL Server (Preview)

Note: As a preview feature, the technology presented in this article is subject to Supplemental Terms of Use for Microsoft Azure Previews.

April 2021

Breaking change

No breaking changes.

Other changes

A new property, LicenseType, has been added to the SQL Server - Azure Arc resource type. It indicates if your SQL Server instance requires a license. The property can have one of the following values:

Note: For the existing SQL Server - Azure Arc resources, this property will show a Null value. It will be automatically updated with the correct value after Azure Arc enabled SQL Server becomes generally available.

December 2020

Breaking change

This release introduces an updated resource provider called Microsoft.AzureArcData. Before you can continue using Azure Arc enabled SQL Server, you need to register this resource provider. See the resource provider registration instructions in the Prerequisites section.

If you have existing SQL Server - Azure Arc resources, use these steps to migrate them to the Microsoft.AzureArcData namespace:

1. Launch the Cloud Shell. For details, read more about PowerShell in Cloud Shell.

2. Upload the script to the shell using the following command:

       curl -o migrate-to-azure-arc-data.ps1
Known issues - The CreateTime property won’t be added to any newly created resources in the AzureArcData namespace, including the SQL Server - Azure Arc resources. October 2020 The October update includes the following improvements: The register Azure Arc enabled SQL Server blade now includes the Tags tab. The tags are included in the registration script and are reflected in the SQL Server - Azure Arc resource(s). For details, see Connect your SQL Server to Azure Arc. The Environment Health entry now supports activation of SQL Assessment from the Portal by deploying a CustomScriptExtension. For details, see Configure SQL Assessment. Known issues The following issues apply to the October release: - Connecting SQL Server instances to Azure Arc requires an account with a broad set of permissions. For details, see Required permissions. September 2020 Azure Arc enabled SQL Server is released for public preview. Azure Arc enabled SQL Server extends Azure services to SQL Server instances hosted outside of Azure in the customer’s datacenter, on the edge or in a multi-cloud environment. For details, see Azure Arc enabled SQL Server Overview Known issues The following issues apply to the September release: The Register Azure Arc enabled SQL Server blade does not support configuring custom tags. To add custom tags, open the SQL Server - Azure Arc resource after registration and change Tags in the Overview page. Connecting SQL Server instances to Azure Arc requires an account with a broad set of permissions. For details, see Required permissions. Next steps Just want to try things out? Get started quickly with Azure Arc enabled SQL Server Jumpstart.
https://docs.microsoft.com/en-us/sql/sql-server/azure-arc/release-notes?view=sql-server-ver15
iswprint man page

iswprint — test for printing wide character

Synopsis

    #include <wctype.h>

    int iswprint(wint_t wc);

Description

The iswprint() function is the wide-character equivalent of the isprint(3) function. It tests whether wc is a wide character belonging to the wide-character class "print".

Return Value

The iswprint() function returns nonzero if wc is a wide character belonging to the wide-character class "print". Otherwise, it returns zero.

Attributes

For an explanation of the terms used in this section, see attributes(7).

Conforming to

POSIX.1-2001, POSIX.1-2008, C99.

Notes

The behavior of iswprint() depends on the LC_CTYPE category of the current locale.

See Also

isprint(3), iswctype(3)

Colophon

This page is part of release 4.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Referenced By

isalpha(3), iswctype(3), lsof(1), wcswidth(3), wcwidth(3).
https://www.mankier.com/3/iswprint
Speed plus compression equals faster searches Walter is an analyst at Phoenix Mutual Life. He can be reached at 5 Burns Ave., Enfield, CT 06082 or 203-745-9159. We all know a sequential search is slow. Search time increases linearly with the size of the list; and as a list grows beyond a few items, the search time quickly becomes unbearable. Nevertheless, because it is easy to code, works on just about any list, and provides acceptable speed for short lists, the sequential search remains one of the most commonly used search algorithms. Consequently, there is much to be gained by speeding up the sequential search algorithm, while maintaining its inherent generality and simplicity. This article describes a simple algorithm that can often speed up a sequential search by a factor of two or more. And there's a special bonus -- a list can also be compressed, often to half its original size. The improvement results from a better method of comparing each key. The number of key comparisons is the same as with any sequential search, but the time spent comparing each key is dramatically reduced. As with any sorted sequential list, the number of keys compared will be, on average, half the number of keys in the list. The number of key comparisons, however, is not the whole story. Each key comparison is itself a sequential search (a search for a non-matching character), so the number of character comparisons (I'll assume the keys are character strings) is N / 2 * K / 2 (or NK / 4) where N is the number of items in the list and K is the key length. A sorted list, however, presents us with an interesting opportunity. The opportunity arises from the fact that the sort brings similar keys together. Often, the first few characters of one key duplicate the first few characters of the preceding key. As a consequence, a typical sequential search spends much of its time comparing the same leading characters over and over. 
If those redundant characters are skipped the search will be faster. The approach described here, called the suffix list, speeds up the search by eliminating those needless comparisons. Here's how it works. The list is kept in ascending order and each key is divided into two parts, a prefix and a suffix. The prefix is the portion of the key which matches the previous key. The suffix is the remainder of the key, beginning with the first character that differs from the previous key. The suffix list stores an integer that represents the prefix length along with each key. Unlike a simple list, in which the complete key for each item is used during the search, a suffix list uses only the suffix in key comparisons. The prefix itself is ignored, because its length -- the number of characters that match the previous key -- provides all the information needed for the search. The data can be stored as a linked list, as an array of fixed length items, or as a series of variable length items concatenated one right after the other in contiguous memory. Listing One, page 100, shows code which uses a linked list. It doesn't matter which method you use; the basic principles are the same, although the methods for traversing the list differ. A linked list has the advantage of making insertion and deletion of items easier, but storing the list in contiguous memory uses less space. The seq_cell_S structure in Figure 1(a) illustrates a typical structure for building a linked list. The sfx_cell_S structure in Figure 1(b), on the other hand, illustrates a structure for building a list to be used with a suffix search. The only difference is the addition of the element cell.pfxcnt, which stores the prefix length. Figure 1: The seq_cell_S structure in (a) illustrates a typical structure for building a linked list. The sfx_cell_S structure in (b), on the other hand, illustrates a structure for building a list to be used with a suffix search. 
The only difference is the addition of the element cell.pfxcnt, which stores the prefix length.

(a)

    struct seq_cell_S {
        struct seq_cell_S *next;   /* Next node */
        struct seq_cell_S *prev;   /* Previous node */
        char key[1];               /* Key value */
    } cell;

(b)

    struct sfx_cell_S {
        struct sfx_cell_S *next;   /* Next node */
        struct sfx_cell_S *prev;   /* Previous node */
        unsigned char pfxcnt;      /* Prefix Length */
        char key[1];               /* Key value */
    } cell;

cell.key is the first byte of a null terminated string that contains the key. Note that cell.key is not a pointer to a string, but is the actual location of the beginning of the string. The cell.prev and cell.next elements are pointers to the previous and following cells in the list, respectively.

Figure 2 shows a list of city names, the prefix counts, and prefix and suffix values. In sfx_cell_S, the prefix and suffix are not stored separately. The full key is stored in cell.key, as normal, but is now supplemented by the prefix length, which is kept in cell.pfxlen.

Figure 2: A list of city names, the prefix counts, and prefix and suffix values

    Standard        Pfxlen   Prefix         Suffix
    ----------------------------------------------------
    Acampo             0                    Acampo
    Acton              2     Ac             ton
    Adelanto           1     A              delanto
    Adin               2     Ad             in
    Agoura Hills       1     A              goura Hills
    Agoura Hills      12     Agoura Hills
    Aguanga            2     Ag             uanga
    Ahwahnee           1     A              hwahnee
    Alameda            1     A              lameda
    Alamo              4     Alam           o

Searching

Like any sequential search, a pattern key is compared to each successive key in the list. The search starts at the beginning of the list and continues until a matching item is found or until an item greater than the pattern is found. An example of code that performs this task is contained in the Search( ) function in Listing One.

Unlike a standard sequential search, a suffix list search does not examine the actual key value for every item in the list. Instead, the prefix length for each item is compared to a running count of the number of characters matched so far in the pattern.
When the search begins, the match count is zero; nothing has been matched. The search progresses, and the match count increases as each character of the pattern is matched until, when the item is found, the match count is equal to the pattern length. Only when the match count is equal to the item's prefix count does the actual key value come into play. If the prefix length is greater than the number of characters matched in the pattern, the search skips directly to the next item in the list. Why? Observe that the next character to compare is part of the prefix for the current key; it is, by definition, the same as the character in the same position in the previous key. It is also the first character in the last key that did not match the pattern. Obviously, if the character did not match in the last key, it will not match in this one. So the search can safely jump to the next item in the list. If the prefix length is less than the match count, the search ends in failure -- the pattern is not in the list. This happens only when a character position that has already been successfully matched contains a new and different character. The list is in ascending order, so that new character would have to be greater than the one already matched in that position. Therefore, the pattern key would have to have come before the current item if it were in the list. If the prefix length does, in fact, equal the match count, the suffix must be compared to the pattern. The comparison proceeds character by character, beginning with the first character of the suffix and with the first unmatched character in the pattern. The match count is incremented for each matching character. If it turns out that the pattern matches the suffix exactly, the item has been found, and the search is over. If the pattern is greater than the item, the search continues. If the pattern is less than the item, the item is not on the list, and the search fails. 
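The comparison rules just described can be condensed into a short Python sketch (an illustrative version of my own; the article's actual implementation is the C code in Listing One):

```python
# Illustrative sketch of the suffix-list search; helper names are mine.
CITIES = ["Acampo", "Acton", "Adelanto", "Adin", "Agoura Hills",
          "Agoura Hills", "Aguanga", "Ahwahnee", "Alameda", "Alamo"]

def build_suffix_list(sorted_keys):
    """Turn a sorted key list into (prefix_length, suffix) pairs."""
    items, prev = [], ""
    for key in sorted_keys:
        pfx = 0
        while pfx < len(key) and pfx < len(prev) and key[pfx] == prev[pfx]:
            pfx += 1
        items.append((pfx, key[pfx:]))
        prev = key
    return items

def suffix_search(items, pattern):
    """Return the index of pattern in items, or None if it is absent."""
    match = 0                       # characters of pattern matched so far
    for i, (pfx, suffix) in enumerate(items):
        if match < pfx:
            continue                # mismatching char repeats in this prefix
        if match > pfx:
            return None             # pattern would have sorted before item
        sp = 0                      # match == pfx: compare the suffix itself
        while (sp < len(suffix) and match < len(pattern)
               and suffix[sp] == pattern[match]):
            sp += 1
            match += 1
        if sp == len(suffix) and match == len(pattern):
            return i                # exact match
        if pattern[match:] < suffix[sp:]:
            return None             # pattern sorts before this item
    return None

print(suffix_search(build_suffix_list(CITIES), "Adept"))  # None
print(suffix_search(build_suffix_list(CITIES), "Acton"))  # 1
```

Running it reproduces the walkthrough below: Adept fails once the search reaches Adin, while Acton is found after examining only its three-character suffix.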
Consider the following example, which searches for Adept in the list in Figure 2. When the search begins, the match count is zero. The prefix count of the first item is, of course, also zero. So Adept is compared to Acampo. Only the first character matches, so the match count becomes 1, and the search continues. The prefix count for the next item in the list, Acton, is 2, which is greater than the match count, so it is skipped. The prefix count of the third item, Adelanto, is the same as the match count, so the pattern and suffix are compared. The next two characters of the pattern, de, match the corresponding characters in the key, so the match count is advanced by 2, to become 3. The next character, l, is less than p, so the search continues. The fourth item, Adin, has a prefix count of 2, which is less than the match count. The search is over because the item is not in the list. From the previous example it's clear that this algorithm makes relatively few character comparisons. The average number of comparisons will be between N / 2 + K and N + K. (Where N is the number of records, K is the average key length, and the comparison of the prefix count to the match count is a single comparison.) This is generally far less than the average of KN / 4 character comparisons which a standard search would make. Insertion Before an item can be inserted into the list, the appropriate location for the item has to be found. After all, the list has to remain correctly sorted after the new item is added. The first step in adding an item, therefore, is to search for the item by using the algorithm described above. In the case of duplicate items, the search must continue to the last matching item. After the position for the new item is found, the key has to be separated into prefix and suffix. That's easy, because the prefix was already identified during the search. Search( ) saves the match count from the item just before the position at which the new key is to be inserted. 
That match count is, by definition, the prefix count for the new item. We just allocated enough memory to hold the cell structure, and then insert the new item, including the prefix length and key value, into the list. The whole process is not much different from that of any other linked list insertion. But there is a wrinkle. Remember that each item's prefix length depends on the previous item. It's possible that after insertion there will be greater similarity between the new key and the next one. If that's the case, the prefix length of the next item in the list will change. Because the list is sorted, the prefix length will increase if it changes. The prefix lengths tell us all we need to know to adjust the next item. The prefix length of the new item will never be less than the existing prefix length of the next item -- if it were, the list wouldn't be properly sorted. If the prefix length of the new item is greater than that of the next item, the prefix length of the next item will not change. Only if the prefix lengths of the two items are the same will the prefix length of the next item change. In that case, the two suffixes must be compared and the prefix length adjusted accordingly. Let's insert Adept into the sample list. The first step is to search the list, just as we did before. That search ended just after Adelanto with a match count of 3. So the new node is inserted into the list between Adelanto and Adin with a prefix length of 3. The prefix length for Adin is not the same as the prefix length for the new entry, so the prefix length for Adin does not change. Deletion Deletion of an item from the list is similar to insertion. Again, the first step is to search through the list until the desired item is found; the second step is to remove it. The actual removal of the item is no different than removal of an item from any other list -- memory allocated from the heap must be freed, pointers updated, and so on. 
But just as with insertion, the prefix length of the next item following the deleted item may change. This time it will get smaller, never larger. The adjustment of the prefix length depends, not surprisingly, on the prefix length of the item being deleted and the prefix length of the item following it. The new prefix length for the next item will be the lesser of the two. Let's delete Adept. The search proceeds as before, this time ending successfully with Adept. We unlink it from the list, but before releasing the memory we compare the prefix length of the deleted item to that of the item following it. The new prefix length of Adin is already less than that of Adept, so the prefix length for Adin does not change. Compression You will have observed that the prefix portion of the key is never used. In fact, as far as the search is concerned, it can be eliminated completely. The prefix will never have to be reconstituted for basic list operations. Even when part of the suffix must be rebuilt on deletion, all of the information needed is contained in the suffix of the key being deleted. By eliminating the prefix, the list can be stored more compactly. That's an obvious advantage if the list is kept entirely in memory. But it's also an advantage if the list must be retrieved from disk frequently -- the more compact the data, the greater its chance of being in cache. How much space does it save? Tests run on a list of 250 city names and zip codes give some idea of the improvement possible. The original list used 20 bytes for each city name. The average length of the actual names was about 8.2 characters and the average suffix length was about 5.7 characters. The suffix structure includes a single unsigned charto store the prefix length, and the variable-length keys require a null terminator, so the net result (including 4 bytes for links) is a savings of 20 - (5.7 + 6) = 8.3 bytes per record. That's a 41 percent savings relative to a fixed length table. (See Table 1.) 
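The per-record arithmetic behind those savings can be checked with a quick sketch (my own numbers, mirroring the figures just quoted):

```python
# Per-record sizes from the 250-city test described in the text.
suffix_len = 5.7   # average suffix length after stripping prefixes
links = 4          # prev/next link bytes per node
fixed = 20.0       # key field size in the fixed-length table

# Suffix node = suffix chars + 1 prefix-count byte + 1 null terminator + links
suffix_node = suffix_len + 1 + 1 + links
saved = fixed - suffix_node

print(round(suffix_node, 1))          # 11.7
print(round(saved, 1))                # 8.3
print(f"{100 * saved / fixed:.1f}%")  # 41.5% (the article rounds to 41)
```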
Table 1: Typical savings provided by compression technique

    Type of List               List Size    Percent Saved
    -----------------------------------------------------
    Fixed length records       20.0 bytes        0%
    Linked list (full keys)    13.2 bytes       33%
    Linked suffix list         11.7 bytes       41%
    Contiguous suffixes         7.7 bytes       61%

Performance

A search of the 250 city names was about twice as fast as a standard sequential search. The improvement agrees with predictions based on the formulas for character comparisons presented earlier.

There are a few other tricks for speeding up a sequential search. These include using the search pattern as an end marker, unrolling the loop, or using a self-organizing list. The self-organizing list is generally the most effective of the three. When the distribution obeys Zipf's law, it takes NK / log2(N) character comparisons. The self-organizing list is a substantial improvement over a standard sequential search, but it is generally not quite as fast as the suffix search. The trade-off between a self-organizing list and a suffix list will not favor the self-organizing list unless the key length is less than the log (base 2) of the number of records.

Applications

This algorithm was originally devised to search nodes in a B-tree. In a B-tree each node contains keys that are very similar -- sometimes all of the keys are identical -- so the suffix search is substantially faster than a standard sequential search. But the real payoff is that by eliminating the prefix many more keys fit in a node -- and that reduces the number of disk hits, which are relatively time-consuming. A binary search, which is often used to find a key in a B-tree node, is still faster than the suffix search, but the reduction of the number of disk hits more than makes up for the slower search.
(If duplicate keys are permitted in the B-tree, the binary search must be followed by a sequential search anyway.) There are, of course, other applications where keys with similar prefixes are common: directory lists, compiler symbol tables, and so forth. Similar improvements ought to be possible there, too.

It makes sense to compress keys by removing the redundant prefix, and that was the original objective of this method. It was somewhat surprising, however, to find that the compression improves the speed of the search. One would expect the elimination of the prefix portion of the key to make list maintenance more awkward. Instead, the prefix count turns out to be more useful than the actual characters of the prefix.

There are drawbacks to the method. If the prefix is eliminated, the keys have to be reconstructed when needed. The programs are also a bit more complex than a standard sequential search. But for many applications, the advantages far outweigh those drawbacks.

_SUPERCHARGING SEQUENTIAL SEARCHES_
by Walter Williams

[LISTING ONE]

/***********************************************************************
 * SS.C -- Sample Sorted Sequential Suffix Search
 * (c) 1989 Walter Williams
 ***********************************************************************/
#include <stdio.h>
#include <string.h>
#include <malloc.h>

#ifndef TRUE
#define TRUE 1
#define FALSE 0
#endif

typedef struct snode_S {
    struct snode_S *prev;    /* Address of previous node in list */
    struct snode_S *next;    /* Address of next node in list */
    unsigned int pfxlen;     /* Number of characters in prefix */
    char key[1];             /* First character of key */
} snode_T, *snode_TP;

/************************ Function Prototypes ***************************/
snode_TP Search(char *, snode_TP, int *, unsigned int *);
snode_TP Insert(char *, snode_TP);
snode_TP Delete(char *, snode_TP);

/*----------------------------------------------------------------------*/
/* SEARCH() -- Search the list for a pattern key.                       */
/* 'pattern' is a null terminated string containing the key which is    */
/* the object of the search.                                            */
/* 'list' is the address of a dummy node which contains head and tail   */
/* pointers for a linked list.                                          */
/* 'exact' is the address of a flag which is TRUE for an exact match    */
/* and FALSE if the pattern is not found.                               */
/* 'match' is the address of an unsigned int to use as a match counter  */
/* The return value is a pointer to the structure containing the        */
/* matching key, or the next largest node if the pattern was not found. */
/*----------------------------------------------------------------------*/
snode_TP Search(char *pattern, snode_TP list, int *exact, unsigned int *match)
{
    snode_TP cnode;        /* Pointer to current node */
    char *sp;              /* Suffix pointer */
    unsigned int tm = 0;   /* Temp storage for match count */
    /***/
    *exact = FALSE;        /* Assume unsuccessful search */
    *match = tm;
    for (cnode = list->next; cnode != list; cnode = cnode->next)
    {
        /* Compare match count to prefix count */
        if (tm < cnode->pfxlen)
            continue;
        else if (tm > cnode->pfxlen)
            break;
        else /* (tm == cnode->pfxlen) */
        {
            /* Compare the actual key suffix, maintain match count */
            sp = cnode->key + cnode->pfxlen;
            while (*pattern == *sp && *sp && *pattern)
            {
                ++sp;
                ++pattern;
                ++tm;
            }
            /* Done if suffix greater than or equal to pattern */
            if (*pattern < *sp)
            {
                break;
            }
            else if (*pattern == '\0' && *sp == '\0')
            {
                *match = tm;
                *exact = TRUE;
                break;
            }
        }
        *match = tm;
    }
    return (cnode);
}

/*--- INSERT() Adds an item to the list. ---*/
snode_TP Insert(char *pattern, snode_TP list)
{
    snode_TP cnode;        /* Node we are inserting */
    snode_TP nnode;        /* Next node after cnode */
    char *sp;              /* Pointer to suffix */
    unsigned int match;
    int exact;
    /***/
    /* Find spot where we insert the node */
    nnode = Search(pattern, list, &exact, &match);
    if (exact == TRUE)     /* Skip to first non-matching key */
    {
        nnode = nnode->next;
        while (nnode != list && nnode->key[nnode->pfxlen] == '\0')
            nnode = nnode->next;
    }
    /* Allocate space for the new node */
    cnode = (snode_TP) malloc(sizeof(snode_T) + strlen(pattern));
    cnode->pfxlen = match;
    strcpy(cnode->key, pattern);
    /* Link it into the list ahead of nnode */
    cnode->next = nnode;
    cnode->prev = nnode->prev;
    nnode->prev->next = cnode;
    nnode->prev = cnode;
    /* Update pfxlen in following node */
    if (cnode->pfxlen == nnode->pfxlen)
    {
        /* Compare the two suffixes; the following node's new prefix
           length is its old length plus the number of matching
           suffix characters */
        sp = nnode->key + nnode->pfxlen;
        pattern = cnode->key + cnode->pfxlen;
        while (*sp == *pattern && *pattern && *sp)
        {
            ++sp;
            ++pattern;
            ++nnode->pfxlen;
        }
    }
    return (cnode);
}

/*--- DELETE() Deletes an item from the list ---*/
snode_TP Delete(char *pattern, snode_TP list)
{
    snode_TP cnode;        /* Node we are deleting */
    snode_TP nnode;        /* Next node after cnode */
    int exact;             /* Flag set if exact match */
    unsigned int match;    /* No. of characters matched in pattern */
    /***/
    /* Find the node we want to delete */
    cnode = Search(pattern, list, &exact, &match);
    if (exact == FALSE)    /* Abort if not an exact match */
    {
        printf("%s not found\n", pattern);
        nnode = NULL;
    }
    else
    {
        /* Remove it from the list */
        cnode->next->prev = cnode->prev;
        cnode->prev->next = cnode->next;
        nnode = cnode->next;    /* Save for return value */
        /* Update suffix in following node */
        if (cnode->pfxlen < cnode->next->pfxlen)
            cnode->next->pfxlen = cnode->pfxlen;
        /* Release deleted node */
        free((char *) cnode);
        printf("%s deleted\n", pattern);
    }
    return (nnode);
}
http://www.drdobbs.com/database/supercharging-sequential-searches/184408460
The App Engine Python 2.7 runtime directly includes several third-party libraries. Third-party libraries must be specified in app.yaml, and this configuration is different than in Python 2.5 (see Configuring Libraries for details).

Vendoring Third-party Packages

If you want to include additional pure-Python third-party packages, you can do so by setting up vendoring. Vendoring allows you to install packages to a subdirectory of your project and include them in your code. To use vendoring, create (or modify) appengine_config.py in the root of your project:

    from google.appengine.ext import vendor

    # Add any libraries installed in the "lib" folder.
    vendor.add('lib')

You can now use pip to install libraries:

    $ pip install -t lib gcloud

You can also declare all of your dependencies in a requirements.txt and install them at once:

    Flask==0.10
    Markdown==2.5.2
    google-api-python-client

    $ pip install -t lib -r requirements.txt

Read more about requirements.txt in pip's documentation.

Note: pip version 6.0.0 or higher is required for vendoring to work properly.

Django Notes

To use Django, specify the WSGI application and Django library in app.yaml:

    ...
    handlers:
    - url: /.*
      script: main.app  # a WSGI application in the main module's global scope

    libraries:
    - name: django
      version: "1.2"

Matplotlib Notes

Note: The experimental release of matplotlib is not supported on the development server. You can still add matplotlib to the libraries list, but it will raise an ImportError exception when imported.

- ... Blobstore using the Files API. (See Defining Environment Variables.)
- ... and financial data (downloaded by matplotlib.finance.fetch_historical_yahoo).
- Because there is no caching, it is not possible to call matplotlib.cbook.get_sample_data.

Note: The pylab and matplotlib.pyplot modules are stateful and not thread-safe. If you use them on App Engine, you must set threadsafe: false in app.yaml, and be aware that the plotter state will be preserved between requests on the same instance.
For example, you will need to call pyplot.clf() at the beginning of each request to ensure that previous plots are not visible. It is recommended that you use the thread-safe object-oriented API instead of the stateful pyplot API.
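For illustration, the vendor.add() call shown earlier amounts to making a project subdirectory importable. A stdlib-only sketch of the same idea (the vendor_add name is hypothetical; this is not App Engine's actual implementation):

```python
import os
import sys
import tempfile

def vendor_add(folder):
    """Sketch of what a vendoring helper does: make packages installed
    under `folder` importable by prepending it to sys.path."""
    path = os.path.abspath(folder)
    if not os.path.isdir(path):
        raise ValueError('vendor folder %r does not exist' % folder)
    if path not in sys.path:
        sys.path.insert(0, path)

# Demo: create a fake "pip install -t"-style vendored module, then import it.
lib = tempfile.mkdtemp()
with open(os.path.join(lib, 'vendored_demo.py'), 'w') as f:
    f.write('VALUE = 42\n')

vendor_add(lib)
import vendored_demo
print(vendored_demo.VALUE)  # 42
```

The real helper also handles a few platform details, but the essential effect is the sys.path change shown here.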
https://cloud.google.com/appengine/docs/python/tools/libraries27
I first saw the new Microsoft Visual Studio .NET after beta 2 was released. One of the things that stood out to me was how nice the tabs looked. It also finally included a tabbed MDI. I soon after found Bjarke Viksøe's Cool Tab Controls. I was especially interested in the "DotNetTabCtrl". Bjarke soon linked to an updated version by Pascal Binggeli, that had support for the tabs to have images. There were several minor issues with both versions, but when I examined the code, it seemed an excellent foundation to build from. Starting with the updates from Pascal, I began making my own updates. Initial updates included adding divider lines between the tabs, adjusting the layout to closer match VS.NET, adding implementations to use the tab control for a tabbed MDI, and adding other improvements. At first, I tried to make minimal changes to the base CCustomTabCtrl. I eventually began evolving even the base CCustomTabCtrl to accommodate my vision for what this could be.

CCustomTabCtrl is the base, templatized, ATL::CWindowImpl derived class to help implement a customized tab control window. The painting is double buffered for flicker-free drawing. Clients never use this class directly, but instead use a class derived from it. Included out of the box are a handful of tab controls derived from CCustomTabCtrl:

- CDotNetTabCtrl in "DotNetTabCtrl.h". Written by Daniel Bowen. Tab control with the look and feel of the tabs in VS.NET. Used for both MDI tabs, pane window tabs, and others.
- CDotNetButtonTabCtrl in "DotNetTabCtrl.h". Written by Daniel Bowen. Tab control with the VS.NET style of button tabs (to look like the VS.NET view of HTML with the Design/HTML buttons).
- CButtonTabCtrl in "SimpleTabCtrls.h" and "SimpleDlgTabCtrls.h". Written by Bjarke Viksøe, updated for the new CCustomTabCtrl by Daniel Bowen. Push button style tabs. The "DlgTabCtrl" version is meant to subclass an existing static control.
- CFolderTabCtrl in "SimpleTabCtrls.h" and "SimpleDlgTabCtrls.h". Written by Bjarke Viksøe, updated for the new CCustomTabCtrl by Daniel Bowen. Trapezoidal folder tabs similar to the tabs used in the output pane of Visual Studio 6. The "DlgTabCtrl" version is meant to subclass an existing static control.
- CSimpleDotNetTabCtrl in "SimpleTabCtrls.h" and "SimpleDlgTabCtrls.h". Written by Bjarke Viksøe, updated for the new CCustomTabCtrl by Daniel Bowen. This is essentially Bjarke's original "CDotNetTabCtrl" with a flat tab look. The "DlgTabCtrl" version is meant to subclass an existing static control.

CCustomTabCtrl derived tab controls are meant to work similar to other common controls such as the list view (SysListView32) and the tree view (SysTreeView32). There is already an existing tab control (SysTabControl32) that is a common control. So why do we need CCustomTabCtrl? Because there are several customizations that are hard to do with it. SysTabControl32 was originally created to implement the task bar in Windows 95 and later (just like SysListView32 and SysTreeView32 were originally created for Windows Explorer). There are several features that other common controls have that SysTabControl32 is missing, such as custom drawing, insert and delete notifications, position displacement and more.

One of the differences between CCustomTabCtrl derived tab controls and common controls in Windows is how items are managed. Common controls are meant to work with any client that can handle structures and window programming, from x86 assembly to Visual Basic. The "item" in most common controls is a structure. To get and set items, you fill out a structure with a mask to identify the fields you are interested in. There are several side effects of this design decision. One is that there is often more "memory copying" than really needs to be happening, especially with getting and setting text.
Another is that there is no good way to know if a particular field is actually in use. Another effect is that any user data is always put into an " LPARAM" into one of these structures, cast to and from the real type of data. CCustomTabCtrl takes a different approach, and lets you use any C structure or C++ class for the item that provides the needed interface. There are two such classes available out of the box - CCustomTabItem and CTabViewTabItem. The type of structure or class is a template parameter on the tab control class. If you want to use the tab control, and have instances of your own class for the tab items, the easiest thing to do is inherit from CCustomTabItem or CTabViewTabItem, and extend it to provide the extra functionality you need, then specify this new class as a parameter. To see the interface needed, simply look at CCustomTabItem. Tab controls based on CCustomTabCtrl are meant to be used either as a stand-alone window, or to subclass an existing control on a dialog such as a static control. The parent of the tab control is responsible for creating, destroying, sizing and positioning the tab control window appropriately along side other child windows. Depending on which custom tab control you use, it will probably depend on some system metrics to figure out which colors and fonts to use. If the user changes these system metrics (for example, by changing items on the "appearance" tab of the display control panel), the tab control can pick up these changes - but only if you propagate WM_SETTINGCHANGE from the main frame to the tab control. This can be done by handling WM_SETTINGCHANGE in the main frame, then calling CWindow::SendMessageToDescendants or the equivalent code. See the included sample applications for an example. The samples provided with this article demonstrate how to use these tab controls as stand-alone windows used to switch between child "view" windows, where only one child view is visible at a time. 
For a simple example of how to use a custom tab control for a tabbed MDI, see the "SimpleTabbedMDIDemo" sample. In this sample, the WTL wizard was run to create a default MDI application. Instead of having CMainFrame inherit from CMDIFrameWindowImpl, change it to inherit from CTabbedMDIFrameWindowImpl. Instead of CMDICommandBarCtrl, use CTabbedMDICommandBarCtrl. Then in each MDI child frame, inherit from CTabbedMDIChildWindowImpl instead of CMDIChildWindowImpl.

The "TabDemo" sample uses a tabbed MDI as well, but in addition has a "popup tool window frame" that uses CDotNetTabCtrl to switch between child views. It also uses CDotNetButtonTabCtrl for the child frame to switch between an HTML view and an edit view (of the source of the HTML).

The "DockingDemo" sample shows how you might integrate these custom tab controls with Sergey Klimov's WTL Docking Windows. I've included the source for his docking windows with permission. However, be sure to get the latest updates from him! In the file "TabbedDockingWindow.h", there is the class CTabbedDockingWindow that inherits from CTabbedFrameImpl and Sergey's CTitleDockingWindowImpl.

The "TabbedSDISplitter" sample shows the use of a splitter in an SDI application, with the right side window "tabbed" to show multiple views. For another sample showing a slightly different use of these tabs in an SDI application, see the "SDITabbedSample" from Sergey Klimov's WTL Docking Windows and the work by Igor Katrayev. It should be possible to integrate the custom tab controls with other docking frameworks, splitters, etc. as well. As time allows, I'll try to get some more sample applications up.

TabbedFrame.h contains classes to make it simple to add the ability to turn a frame window with one "view" into a tabbed frame window with a custom tab control to switch between one or more views. Included are the classes:

- CCustomTabOwnerImpl: MI class that helps implement the parent of the actual custom tab control window. The class doesn't have a message map itself, and is meant to be inherited from along-side a CWindowImpl derived class. This class handles creation of the tab window as well as adding, removing, switching and renaming tabs based on an HWND.
- CTabbedFrameImpl: Base template to derive your specialized frame window class from to get a frame window with multiple "view" child windows that you switch between using a custom tab control (such as CDotNetTabCtrl).
- CTabbedPopupFrame: Simple class deriving from CTabbedFrameImpl that is suitable for implementing a tabbed "popup frame" tool window, with one or more views. See the "TabDemo" sample for an example of using this class.
- CTabbedChildWindow: Simple class deriving from CTabbedFrameImpl that is suitable for implementing a tabbed child window, with one or more views. See the "TabbedSDISplitter" sample for an example of using this class.

TabbedMDI.h contains classes to help implement a tabbed MDI using a custom tab control. There are multiple approaches to implementing a tabbed MDI. The approach that is used here is to subclass the out-of-the-box "MDIClient" from the OS, and require each MDI child frame to inherit from a special class CTabbedMDIChildWindowImpl (instead of the normal CMDIChildWindowImpl). Included are the classes:

- CTabbedMDIFrameWindowImpl: Instead of having CMainFrame inherit from CMDIFrameWindowImpl, you can have it inherit from CTabbedMDIFrameWindowImpl. For an out-of-the-box WTL MDI application, there are three instances of CMDIFrameWindowImpl to replace with CTabbedMDIFrameWindowImpl.
- CTabbedMDIChildWindowImpl: If you want your MDI child window to have a corresponding tab in the MDI tab window, inherit from this class instead of from CMDIChildWindowImpl. This class also provides a couple of nice additional features:
  - WS_MAXIMIZE so that it is forced to start out life maximized.
  - SetTitle is provided to set the frame title (and possibly the corresponding MDI tab's text).
  - SetTabText lets you set the text for the corresponding MDI tab regardless of what the frame caption (window text) is.
  - SetTabToolTip lets you set the tooltip's text for the corresponding MDI tab.
  - UWM_MDICHILDSHOWTABCONTEXTMENU to show a context menu for the corresponding MDI tab. The default context menu is the window's system menu.
- CTabbedMDIClient: The CTabbedMDIFrameWindowImpl contains CTabbedMDIClient, which subclasses the "MDI Client" window (from the OS, that manages the MDI child windows). It handles sizing/positioning the tab window, calling the appropriate Display, Remove, UpdateText for the tabs with the HWND of the active child, etc. You can use CTabbedMDIClient without using CTabbedMDIFrameWindowImpl. To do so, simply call SetTabOwnerParent(m_hWnd), then SubclassWindow(m_hWndMDIClient) on a CTabbedMDIClient member variable after calling CreateMDIClient in your main frame class.
- CMDITabOwner: The MDITabOwner is the parent of the actual tab window (such as CDotNetTabCtrl), and sibling to the "MDI Client" window. The tab owner tells the MDI child when to display a context menu for the tab (the default menu is the window's system menu). The tab owner changes the active MDI child when the active tab changes. It also does the real work of hiding and showing the tabs. It also handles adding, removing, and renaming tabs based on an HWND.
- CTabbedMDICommandBarCtrl: In your MDI application, instead of using CMDICommandBarCtrl, use CTabbedMDICommandBarCtrl. It addresses a couple of bugs in WTL 7.0's CMDICommandBarCtrl, and allows you to enable or disable whether you want to see the document icon and min/max/close button in the command bar when the child is maximized.
If you have used a previous version downloaded from either of these places, there are only a couple of updates to your client code to accommodate updates that I've made: #include "CoolTabCtrls.h" now you need: #include "CustomTabCtrl.h" #include "DotNetTabCtrl.h" // (include other versions of tab controls) SetBoldSelectedTab, use the CTCS_BOLDSELECTEDTABstyle. TCN_", they now start with " CTCN_" (they are numerically identical to tab control " TCN_" notifications where there is overlap). TCS_", use the custom tab control styles starting with " CTCS_" (they are numerically identical to tab control " TCS_" styles where there is overlap). CTCHITTESTINFOinstead of TCHITTESTINFO. CCustomTabCtrlor CDotNetTabCtrlImplor others, there have been a few other interface changes that hopefully will be obvious to address (you'd get compile errors). CTCS_SCROLL- This enables "scroll buttons". When the tab items don't get all the real estate they want, they overflow and the scroll button for that side is enabled. You can call the methods Get/ SetScrollDeltaand Get/ SetScrollRepeatto adjust how much is scrolled and how fast scrolling is repeated when holding down the button. CTCS_BOTTOM- If this style is set, the tab window is meant to be displayed on the bottom of the client area. Otherwise, it is meant to be displayed on the top of the client area. It is up to the parent of the tab window to honor this style. CTCS_CLOSEBUTTON- This enables a "close" button. When this button is clicked, the parent gets a " CTCN_CLOSE" notification. CTCS_HOTTRACK- Enables hot tracking tab items. If you are targeting Windows 2000/98 or later, be sure to #define WINVERand/or _WIN32_WINNTto 0x0500 or later, before including this file (usually in your precompiled header) so that the new " COLOR_HOTLIGHT" is used. Note: If you specify CTCS_SCROLL or CTCS_CLOSEBUTTON, those buttons are always hot tracked regardless of this style. 
CTCS_FLATEDGE- Tab controls derived from CCustomTabCtrlcan use this style to determine whether to draw the outline of the control with a flat look. CTCS_DRAGREARRANGE- If you set this style, a tab item can be dragged to another position within the same tab control. CTCS_BOLDSELECTEDTAB- The selected tab's text is rendered in the bold version of the tab font. CTCS_TOOLTIPS- Enable tooltips to be displayed. Each item's tooltip defaults to the text of the tab, but can be adjusted by calling SetToolTipon the tab item. These messages are sent to the parent of the tab control window in the form of a WM_NOTIFY message. NM_CLICK- Notifies a tab control's parent window when the left. This notification is not sent when clicking on a scroll or close button. NM_DBLCLK- Notifies a tab control's parent window when the left_RCLICK- Notifies a tab control's parent window when the right. NM_RDBLCLK- Notifies a tab control's parent window when the right_CUSTOMDRAW- Notifies a tab control's parent window about drawing operations. The lParamof the message is a pointer to a NMCTCCUSTOMDRAWstructure. See MSDN for an explanation of how custom drawing works with common controls in general, in the article "Customizing a Control's Appearance". For the most part, the custom tab control is very similar (especially to custom drawing a toolbar control). The one difference is that the NMCTCCUSTOMDRAWstructure lets you set the HFONTfor the inactive and selected item, instead of selecting the item's font into the device context and returning CDRF_NEWFONTon each CDDS_ITEMPREPAINT(you can still change the HFONTon each CDDS_ITEMPREPAINT, or you can just set it in the CDDS_PREPAINTnotification). CTCN_FIRST- Value of first custom tab control notification code. CTCN_LAST- Value of last custom tab control notification code. If a derived class wants to define its own message, it can use CTCN_LAST - 1, CTCN_LAST - 2, etc. 
- CTCN_SELCHANGE: Notifies a tab control's parent window that the currently selected tab has changed. The lParam of the message is a pointer to a NMCTC2ITEMS structure, with iItem1 being the old selected item, and iItem2 the new item to select.
- CTCN_SELCHANGING: Notifies a tab control's parent window that the currently selected tab is about to change. The lParam of the message is a pointer to a NMCTC2ITEMS structure, with iItem1 being the old selected item, and iItem2 the new item to select. The receiver of the message should return TRUE to prevent the selection from changing, or FALSE to allow the selection to change.
- CTCN_INSERTITEM: Notifies a tab control's parent window that an item has been inserted. The lParam of the message is a pointer to a NMCTCITEM structure, with iItem being the index of the inserted item.
- CTCN_DELETEITEM: Notifies a tab control's parent window that an item is about to be deleted. The lParam of the message is a pointer to a NMCTCITEM structure, with iItem being the index of the item to be deleted. The receiver of the message should return TRUE to prevent the deletion, or FALSE to allow it.
- CTCN_MOVEITEM: Notifies a tab control's parent window that an item has been moved to another index by a "MoveItem" call. The lParam of the message is a pointer to a NMCTC2ITEMS structure, with iItem1 being the old index, and iItem2 the new index.
- CTCN_SWAPITEMPOSITIONS: Notifies a tab control's parent window that two items have been switched in position by a "SwapItemPositions" call. The lParam of the message is a pointer to a NMCTC2ITEMS structure, with iItem1 being the index of the first item, and iItem2 the index of the second.
- CTCN_CLOSE: Notifies a tab control's parent window that the "close" button has been clicked. The close button is only displayed for tab controls with CTCS_CLOSEBUTTON set.
- CTCN_BEGINITEMDRAG: Notifies a tab control's parent window that a tab item drag has started. The lParam of the message is a pointer to a NMCTCITEM structure, with iItem being the index of the item being dragged.
- CTCN_ACCEPTITEMDRAG: Notifies a tab control's parent window that a tab item drag has ended and is accepted by the user. The lParam of the message is a pointer to a NMCTC2ITEMS structure, with iItem1 being the index of the item in its original place, and iItem2 the new index of the item.
- CTCN_CANCELITEMDRAG: Notifies a tab control's parent window that a tab item drag has ended and was cancelled by the user. The lParam of the message is a pointer to a NMCTCITEM structure, with iItem being the index of the item that was being dragged.

NMCTCITEM:
- NMHDR hdr; Generic notification information.
- int iItem; Index of item involved in action, or -1 if not used.
- POINT pt; Screen coordinate of point of action.

NMCTC2ITEMS:
- NMHDR hdr; Generic notification information.
- int iItem1; Index of first item involved in action.
- int iItem2; Index of second item involved in action.
- POINT pt; Screen coordinate of point of action.

CTCHITTESTINFO:
- POINT pt; Position to hit test, in client coordinates.
- UINT flags; Variable that receives the results of a hit test. The tab control sets this member to one of the following values:
  - CTCHT_NOWHERE: The position is not over a tab.
  - CTCHT_ONITEM: The position is over a tab item.
  - CTCHT_ONCLOSEBTN: The position is over the close button.
  - CTCHT_ONSCROLLRIGHTBTN: The position is over the right scroll button.
  - CTCHT_ONSCROLLLEFTBTN: The position is over the left scroll button.

NMCTCCUSTOMDRAW:
- NMCUSTOMDRAW nmcd; General custom draw structure.
- HFONT hFontInactive; Font for text of inactive tabs.
- HFONT hFontSelected; Font for text of selected tab.
- HBRUSH hBrushBackground; HBRUSH to PatBlt into background.
- COLORREF clrTextInactive; Color of text of inactive tab.
- COLORREF clrTextSelected; Color of text of selected tab.
- COLORREF clrSelectedTab; Color of the selected tab's background.
- COLORREF clrBtnFace; Used in drawing 3D shapes. Defaults to COLOR_BTNFACE.
- COLORREF clrBtnShadow; Used in drawing 3D shapes. Defaults to COLOR_BTNSHADOW.
- COLORREF clrBtnHighlight; Used in drawing 3D shapes. Defaults to COLOR_BTNHIGHLIGHT.
- COLORREF clrBtnText; Used in drawing 3D shapes. Defaults to COLOR_BTNTEXT.
- COLORREF clrHighlight; Used when a derived tab wants a "highlight" color.
- COLORREF clrHighlightHotTrack; Used for hot tracking.
- COLORREF clrHighlightText; Used when a derived tab wants a "highlight text" color.

CTCSETTINGS:
- signed char iPadding; Tab item padding.
- signed char iMargin; Tab item margin.
- signed char iSelMargin; Selected tab item margin.
- signed char iIndent; Indent from left of client area to beginning of first tab (sometimes used similarly on the right of client area).

CDotNetTabCtrlImpl (base class of CDotNetTabCtrl and CDotNetButtonTabCtrl) interprets margin and padding like so:

    M - Margin
    P - Padding
    I - Image
    Text - Tab Text

    With image:
     __________________________
    | M | I | P | Text | P | M |
     --------------------------

    Without image:
     ______________________
    | M | P | Text | P | M |
     ----------------------

FindItem flags:
- CTFI_NONE = 0x0000
- CTFI_RECT = 0x0001
- CTFI_IMAGE = 0x0002
- CTFI_TEXT = 0x0004
- CTFI_TOOLTIP = 0x0008
- CTFI_TABVIEW = 0x0010
- CTFI_HIGHLIGHTED = 0x0020
- CTFI_CANCLOSE = 0x0040
- CTFI_LAST = CTFI_CANCLOSE
- CTFI_ALL = 0xFFFF

#define the following constants before you #include CustomTabCtrl.h to redefine them:
- CTCSR_NONE = 0
- CTCSR_SLOW = 100
- CTCSR_NORMAL = 25
- CTCSR_FAST = 10
- CTCD_SCROLLZONEWIDTH = 20 (pixels)

Note: This is one place where custom tab controls are dissimilar to common controls. Instead of sending messages to the window, you call public methods, much like working with the common control wrappers of WTL. If you want corresponding messages to these methods, then let me know.

- SubclassWindow: Call when you have an existing window, such as a static control, that you want to subclass into a custom tab control window.
- GetTooltips: Get the tooltip control window (the WTL tooltip control window wrapper is returned).
- Get/SetImageList: Images for tab items are kept in an image list. The WTL image list wrapper is used.
- Get/SetScrollDelta: The distance in pixels that each atomic scroll operation scrolls the view. Only valid when the CTCS_SCROLL style is set. Valid values are 0-63.
- Get/SetScrollRepeat: When a scroll button is held down, the scroll repeat determines how quickly the atomic scroll operation is repeated. Only valid when the CTCS_SCROLL style is set. A Windows timer is used to repeat the scroll. Valid values are: ectcScrollRepeat_None, ectcScrollRepeat_Slow, ectcScrollRepeat_Normal, and ectcScrollRepeat_Fast.
- InsertItem: Insert a new tab item. There are two versions of this method. The first allows you to pass the parameters for the tab item. The second allows you to call the CreateTabItem method to create a new tab, set the parameters on that item, then insert the item (the tab control takes ownership of the item created with CreateTabItem).
- MoveItem: Move an existing tab item to another index, and shift the affected tab item indexes.
- SwapItemPositions: Swap the positions of two tab items.
- DeleteItem: Delete a tab item.
- DeleteAllItems: Delete all the tab items.
- GetItem: Get the pointer to the tab item class. The tab item class will either be CCustomTabItem, a class derived from CCustomTabItem, or a class with the same interface as CCustomTabItem.
- Get/SetCurSel: The currently selected tab item. SetCurSel will first send a notification that the selected item is about to change (CTCN_SELCHANGING). If the receiver of the notification doesn't cancel the attempt, the current selection is changed, and another notification is sent to say the selection has changed (CTCN_SELCHANGE).
- GetItemCount: Returns the number of tab items.
- HitTest: Determines which tab, if any, is at a specified position (in client coordinates).
- EnsureVisible: Ensures that the tab item specified is in view. Only really useful when CTCS_SCROLL is set.
- GetItemRect: Get the RECT of an item in client device coordinates.
- HighlightItem: Similar to TCM_HIGHLIGHTITEM for SysTabControl32 and the MFC and WTL wrappers' CTabCtrl::HighlightItem.
- FindItem: Find the next tab item matching the search criteria. The function is meant to mimic how CListViewCtrl::FindItem and LVM_FINDITEM work, since there are no comparable messages or functions for a tab control.

I'd like to thank Bjarke Viksøe for his original "Cool Tab controls" and for all of the other cool things on his web site that he so generously shares with the world. I'd also like to thank all the many others who have given feedback, made suggestions, and helped this become what it is today.
http://www.codeproject.com/KB/tabs/tabbingframework.aspx
Created on 2011-05-12 21:59 by terry.reedy, last changed 2017-08-24 15:25 by csabella. Current 3.2 doc, 5.9. Comparisons, has this paragraph about mixed-type comparisons. "The." Sentence 3: "If both are numbers, they are converted to a common type." I suspect it would be more true to say 'common internal type' as I would not think it a language requirement to produce Python objects. In any case, I think it is only true for built-in number types, and I do not see that qualification anywhere previously. That aside, it does not appear to be true for Decimals and Fractions in 2.7.1. Sentence 4: first clause is only true for built-in types. That qualification is not obvious to everyone, as evidenced by a current python-list sub thread. For 2.7, which has a different continuation, I suggest adding 'built-in' before 'objects of'. For 3.2/3, I suggest deleting '*always*' and adding a comma after 'TypeError' so that the 'when' condition applies to equality comparisons also. After discussion about same-type comparisons, there is another paragraph about mixed-type comparison: "Comparison of objects of the differing types depends on whether either of the types provide explicit support for the comparison. Most numeric types can be compared with one another, but comparisons of float and Decimal are not supported to avoid the inevitable confusion arising from representation issues such as float('1.1') being inexactly represented and therefore not exactly equal to Decimal('1.1') which is. When cross-type comparison is not supported, the comparison method returns NotImplemented. This can create the illusion of non-transitivity between supported cross-type comparisons and unsupported comparisons. For example, Decimal(2) == 2 and 2 == float(2) but Decimal(2) != float(2)." I suggest deleting this entirely. The first sentence and first clause of the second repeat what was said above. The rest is obsolete as float/decimal comparisons *are* implemented in 2.7.1 and 3.2.0. 
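The obsolescence claim is easy to check on a current interpreter (a sketch written for this writeup, not part of the tracker thread): float/Decimal comparisons are supported, and mixed numeric equality is based on exact value rather than conversion to a common type.

```python
from decimal import Decimal
from fractions import Fraction

# Cross-type comparisons between built-in numeric types work, and
# transitivity holds because comparisons use exact values.
assert Decimal(2) == 2 == float(2)
assert Decimal(2) == float(2)

# float('1.1') is not exactly 1.1, so it differs from Decimal('1.1').
assert Decimal('1.1') != 1.1

# Fractions also compare by exact value.
assert Fraction(1, 2) == 0.5
assert Decimal('0.5') < 0.6

print("all comparisons behaved as described")
```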
Can you provide a patch?

[Docs] "If both are numbers, they are converted to a common type."
[Terry] "In any case, I think it is only true for built-in number types,"

It's not even true for built-in number types. When comparing an int with a float, it's definitely *not* the case that the int is converted to a float and the floats compared. And that's for good reason: the int -> float conversion is lossy for large integers, so if int <-> float comparisons just converted the int to a float before comparing, we'd have (for example):

>>> 10**16 == 1e16 == 10**16 + 1

leading to broken transitivity of equality, and strange dict and set behaviour. So int <-> float comparisons do a complicated dance under the hood to compare the exact numerical values of the two objects and produce the correct result. I'm not sure what the intent of the original sentence was, or how to reword it. The key point is simply that it *is* possible to compare an int with a float, and that the result is sensible, based on numeric values.

Would it be ok to state that:

1) <, >, ==, >=, <=, and != compare the values of two objects;
2) the two objects don't necessarily have to be of the same type;
3) with == and !=, objects of different types compare unequal, unless they define a specific __eq__ and/or __ne__;
4) with <, >, <=, and >=, the comparison of objects of different types raises a TypeError, unless they define specific __lt__, __gt__, __le__, and __ge__;
5) some built-in types define these operations, so it's possible to compare e.g. ints and floats;

This should summarize the possible behaviors. There's no reason IMHO to expose implementation details and to special-case built-in types (unless their comparison is actually different and doesn't depend on __eq__, __ne__, etc.).

In Python 3, where all classes inherit from object, the default rules are, by experiment (which someone can verify from the code), simpler than you stated.

3. By default, == and != compare identities.
4.
By default, order comparisons raise TypeError. ob <= ob raises even though ob == ob because ob is ob. I am not sure of the method look-up rules for rich comparisons, but perhaps the following are true: 3) with == and !=, an object is equal to itself and different objects (a is not b) compare unequal, unless the class of the first define a specific __eq__ and __ne__; 4) with <, >, <=, and >=, comparison raises a TypeError, unless the class of the first object defines specific __lt__, __gt__, __le__, and __ge__, or the class of the second defines the reflected method (__ge__ reflects __lt__, etcetera); What is not clear to me is whether the reflected method is called if the first raises TypeError. The special method names doc (reference 3.3) says "A rich comparison method may return the singleton NotImplemented if it does not implement the operation for a given pair of arguments. ... think point 5 should say a bit more: builtin numbers compare as expected, even when of different types; builtin sequences compare lexicographically.. After() I've attempted to incorporate both Terry's and Ezio's suggestions. Here is a patch to get started with. There is a section that has been deleted. Patch uploaded. Some: > I determined that 'raise TypeError' and 'return NotImplemented' both > result in the call of the reflected method Are you sure? raise TypeError *should* result in the operation being abandoned, with the reflected operation not tried. Python 3.3.0rc2+ (default:3504cbb3e1d8, Sep 20 2012, 22:08:44) [GCC 4.2.1 (Apple Inc. build 5664)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> class A: ... def __add__(self, other): ... raise TypeError("Don't know how to add") ... def __le__(self, other): ... raise TypeError("Can't compare") ... [65945 refs] >>> class B: ... def __radd__(self, other): ... return 42 ... def __ge__(self, other): ... return False ... 
[66016 refs]
>>> A() <= B()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in __le__
TypeError: Can't compare
[66064 refs]
>>> A() + B()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in __add__
TypeError: Don't know how to add
[66065 refs]

You are right, I did not look deep enough. I was fooled by the conversion of NotImplemented, returned from object.__le__, etc., to TypeError. Sorry for that noise.

For comparison and arithmetic, the actual alternative to defining a function that returns NotImplemented seems to be to not define it at all.

class C():
    def __ge__(self, other): return True
    def __add__(self, other): return 44
    __radd__ = __add__

class O(): pass  # removed NotImplemented defs

c = C()
o = O()
print(c >= o, o <= c)  # True True
print(c + o, o + c)    # 44 44

(I looked at the code for binary_op1 in abstract.c and do_richcompare in object.c and do not yet see any effective difference between not defined and a NotImplemented return.)

I'll take a look at the patch later.

Changed patch to include suggestions by Chris Jerdonek.

Considering that the docs have changed, does this issue still need to be open?

Hi, uploaded issue12067-expressions_v3.diff for the 3.5 tip. Please review.

Uploaded issue12067-expressions_v4.diff to improve the unicode footnote 3, and to revert to using the term "lexicographical" for sequences (after learning that it applies there as well). Also, this version was produced using "hg diff" to make it properly reviewable. Please review.

PS: The v4 patch does not address comments f) and h) from msg222204, and it seems to me they do not need to be addressed.

Terry, I'd like to comment on your statement:

> 3. By default, == and != compare identities.

in msg148774. What experiment led you to that conclusion?
Here is one that contradicts it (using cpython 3.4.1):

>>> i1 = 42
>>> f1 = 42.0
>>> i1 == f1
True
>>> i1 is f1
False

Is it possible that your experiment got influenced by the optimization that attempts to reuse existing objects of immutable types? Like in this:

>>> i1 = 42
>>> i2 = 40 + 2
>>> i1 == i2
True
>>> i1 is i2
True

Andy

Uploaded v5 of the patch. Changes:

1. The statement that comparison of different built-in types (always) raises TypeError was too general. Changed to distinguish equality and order operators, as summarized by Ezio in items 3) and 4) of msg148760.
2. Ensured max line length of 80 in text areas affected by the patch.

Andy

It seems I still need to practice creating patches ... uploading v6, which should create a review link. No other changes. Sorry for that.

Andy

Another attempt. Really sorry....

+ In other words, the following expressions should have the same result:
+
+ ``x == y`` and ``not x != y``
+
+ ``x < y`` and ``not x >= y``
+
+ ``x > y`` and ``not x <= y``

I think the second and third items here go too far: sets don't obey these rules, for example. Not all uses of comparisons need to force a total ordering. OTOH, you leave out a more fundamental relation, namely that `x < y` and `y > x` should ordinarily give the same result, as should `x <= y` and `y >= x`.

Mark: Both are good points! Would you add the cases from your second comment under "symmetry"?

Uploaded v9 of the patch for 3.4 and default. It reflects Mark's comment, plus the result of the recent discussion on python-dev since v8 of the patch, up to 2014-07-15 (subject: == on object tests identity in 3.x). -> Please review the patch.

- This bug should discuss doc updates, not question the rules.
- The rules have evolved over time and the docs stayed behind.
- We should definitely update the 2.7 docs as well as the 3.4 and 3.5 (in development) docs. The 2.7 docs need to be different than the 3.x docs.
- The language reference manual should clearly state the rules so that implementers can use them as guidelines for implementation. - There are several sets of relevant rules: (a) How is each operator translated into a series of lookups and method calls, etc. It's similar to other binary operators except that the reverse for __lt__ is __gt__ instead of __rlt__, and there's an extra rule that if __ne__ doesn't exist we compute __eq__ and take the opposite. (b) The default implementation (e.g. default == falls back to 'is', < raises TypeError). (c) The rules for built-in types, especially numbers (if there are still special cases that aren't explained by the __xx__ methods on the various numeric types). The point about “a != b” deferring to “not a.__eq__(b)” is not documented anywhere that I am aware of. In fact the opposite is currently documented at <>, so maybe this needs to be fixed, one way or another. That's a pretty new feature. Someone probably forgot to clean up all the places where it was documented. On Sat, Sep 6, 2014 at 4:18 PM, Martin Panter <report@bugs.python.org> wrote: > > Martin Panter added the comment: > > The point about “a != b” deferring to “not a.__eq__(b)” is not documented > anywhere that I am aware of. In fact the opposite is currently documented > at < >>, > so maybe this needs to be fixed, one way or another. > > ---------- > > _______________________________________ > Python tracker <report@bugs.python.org> > <> > _______________________________________ > Just wanted to say that i will continue working on this, working in the comments made so far... Andy Maybe it would be wise to split this task up and commit the bits that don’t need any more work. I think the existing patch might already solve Issue 22001. @Guido: Agree to all you said in your #msg226496. There is additional information about comparison in: - Tutorial (5.8. Comparing Sequences and Other Types), - Library Reference (5.3. Comparisons), - Language Reference (3.3.1. 
Basic customization) that needs to be reviewed in light of this patch. I'm just not sure I want to make this patch even larger as it is already, and tend to do that in a follow on issue and patch (unless directed otherwise). Andy Uploading. Here is the delta between v9 and v10 of the patch, if people want to see just that. About. I have addressed the comments by Jim Jewett, Martin Panter and of myself in a new version v11, which got posted. For the expression.rst doc file, this version of the patch has its diff sections in a logical order, so that the original text and the patched text are close by each other. Please review. I also made sure in both files that the line length of any changed or new lines is max 80. Sorry if that creates extra changes when looking at deltas between change sets. I have posted v12 of the patch, which addresses all comments since v11. This Python 3.4 patch can be applied to the "default" (3.5 dev) branch as well. I will start working on a similar patch for Python 2.7 now. Issue. I. Patch I think we can commit documentation and tests separately. I just did a quick review of the test changes and I will add some review comments later (sorry, lack of time :)). I can split out a documentation-only patch if it would help get that committed. In the meantime, patch v16 includes some fixups to comments etc in the test code that I missed myself. New changeset 1fc049e5ec14 by Martin Panter in branch '3.4': Issue #12067: Rewrite Comparisons section in the language reference New changeset b6698c00265b by Martin Panter in branch '3.5': Issue #12067: Merge comparisons doc from 3.4 into 3.5 New changeset 294b8a7957e9 by Martin Panter in branch 'default': Issue #12067: Merge comparisons doc from 3.5 I committed the changes to expressions.rst for 3.4+. That still leaves the changes to test_compare.py, and possibly changes for 2.7. Andy: In msg229721 you mentioned a potential 2.7 patch. Did you get anywhere with that? 
Even if it is only half finished, someone else may be able to keep working on it. Here is a port of the documentation to Python 2. Main differences: * Default rules for order comparisons are different * Not all kinds of objects inherit from object() * str(), unicode() compatibility * xrange() only seems to have default comparability * NAN, “binary sequences” and sets not listed Updated patch for 2.7, which I plan to commit soon. Corresponding Py 3 patch coming soon. New changeset 8c9a86aa222e by Martin Panter in branch '3.5': Issue #12067: Recommend that hash and equality be consistent New changeset 9702c5f08df1 by Martin Panter in branch '3.6': Issues #12067: Merge hash recommendation from 3.5 New changeset 9dbb7bbc1449 by Martin Panter in branch 'default': Issues #12067: Merge hash recommendation from 3.6 New changeset 8a9904c5cb1d by Martin Panter in branch '2.7': Issue #12067: Rewrite Comparisons section in the language reference It appears all the patches for this issue have been applied. Is the only open item the changes to test_compare? Yes I think I committed all the documentation. Someone needs to decide whether to use Andy’s tests as they are, or perhaps modify or drop some or all of them. I've created a PR for the changes to test_compare from v16 of the patch.
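The rules the thread converges on are easy to verify interactively. The sketch below (Python 3) demonstrates the three behaviors discussed above: default == falls back to identity, default order comparisons raise TypeError, and the reflected method on the right-hand operand is tried when the left-hand class does not implement the operation.

```python
# Default comparison rules in Python 3, as discussed in this issue.

class Plain:
    pass  # inherits everything from object


class Ordered:
    def __gt__(self, other):
        # __gt__ is the reflected partner of __lt__
        return True


a, b = Plain(), Plain()

# Default equality is identity-based: a == a, but a != b.
print(a == a, a == b)  # True False

# Default ordering is undefined and raises TypeError.
try:
    a < b
except TypeError as exc:
    print("TypeError:", exc)

# Plain has no usable __lt__, so Python tries the reflected
# method Ordered.__gt__ on the right operand instead.
print(a < Ordered())  # True
```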
https://bugs.python.org/issue12067
README

TinySDF is a tiny and fast JavaScript library for generating SDF (signed distance field) from system fonts on the browser using Canvas 2D and the Felzenszwalb/Huttenlocher distance transform. This is very useful for rendering text with WebGL. Demo

Usage

Create a TinySDF for drawing glyph SDFs based on font parameters:

```js
const tinySdf = new TinySDF({
    fontSize: 24,             // Font size in pixels
    fontFamily: 'sans-serif', // CSS font-family
    fontWeight: 'normal',     // CSS font-weight
    fontStyle: 'normal',      // CSS font-style
    buffer: 3,                // Whitespace buffer around a glyph in pixels
    radius: 8,                // How many pixels around the glyph shape to use for encoding distance
    cutoff: 0.25              // How much of the radius (relative) is used for the inside part of the glyph
});

const glyph = tinySdf.draw('泽'); // draw a single character
```

Returns an object with the following properties:

- data is a Uint8ClampedArray of alpha values (0–255) for a width x height grid.
- width: Width of the returned bitmap.
- height: Height of the returned bitmap.
- glyphTop: Maximum ascent of the glyph from alphabetic baseline.
- glyphLeft: Currently hardwired to 0 (actual glyph differences are encoded in the rasterization).
- glyphWidth: Width of the rasterized portion of the glyph.
- glyphHeight: Height of the rasterized portion of the glyph.
- glyphAdvance: Layout advance.

TinySDF is provided as an ES module, so it's only supported on modern browsers, excluding IE.

```html
<script type="module">
    import TinySDF from '';
    ...
</script>
```

In Node, you can't use require — only import in ESM-capable versions (v12.15+):

```js
import TinySDF from '@mapbox/tiny-sdf';
```

Development

```
npm test   # run tests
npm start  # start server for the demo page
```

License

This implementation is licensed under the BSD 2-Clause license. It's based directly on the algorithm published in the Felzenszwalb/Huttenlocher paper, and is not a port of the existing C++ implementation provided by the paper's authors.
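Because draw() returns a plain { data, width, height, ... } object, its alpha buffer is easy to inspect without WebGL. The helper below is not part of TinySDF; it is a hypothetical debugging aid that only assumes the documented return shape, thresholding the alpha values into ASCII art:

```javascript
// Hypothetical debug helper (not part of TinySDF): render an SDF alpha
// buffer as ASCII art. Values at or above the threshold are treated as
// inside the glyph ('#'), everything else as outside ('.').
function sdfToAscii({ data, width, height }, threshold = 128) {
  const rows = [];
  for (let y = 0; y < height; y++) {
    let row = '';
    for (let x = 0; x < width; x++) {
      row += data[y * width + x] >= threshold ? '#' : '.';
    }
    rows.push(row);
  }
  return rows.join('\n');
}

// Works on any {data, width, height} object, e.g. a synthetic 2x2 buffer:
console.log(sdfToAscii({ data: [255, 0, 0, 255], width: 2, height: 2 }));
// #.
// .#
```

In a real page you would pass it the object returned by tinySdf.draw().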
https://www.skypack.dev/view/@mapbox/tiny-sdf
First I want you to read the task and try to solve it in linear time complexity and minimal memory complexity.

Lonely Integer

There are N integers in an array A. All but one integer occurs in pairs. Your task is to find out the number that occurs only once.

Input Format
The first line of the input contains an integer N indicating number of integers in the array A. The next line contains N integers each separated by a single space.

Constraints
1 <= N < 100
N % 2 = 1 ( N is an odd number )
0 <= A[i] <= 100, ∀ i ∈ [1, N]

Output Format
Output S, the number that occurs only once.

Sample Input
3
1 1 2

Sample Output
2

If you don't have any idea I will write a code.

#include <cmath>
#include <cstdio>
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int N;
    cin >> N;
    int tmp, result = 0;
    for (int i = 0; i < N; i++) {
        cin >> tmp;
        result ^= tmp;
    }
    cout << result;
    return 0;
}

Cool trick using only the xor operation, but I can't prove why it works. Can anyone prove it?
http://codeforces.com/blog/entry/12719
Adding Google Maps To Your Rails Applications In the months following publication of the final part of the very popular series on integrating Google Maps into PHP applications, I've spent quite a bit of time working with another popular Web technology: Ruby on Rails. As it turns out, Rails developers have been hard at work creating a few amazing plugins capable of adding powerful mapping capabilities to your applications. In this new series, I'll introduce you to these powerful plugins, showing you a number of tips and tricks along the way. I'll presume you're familiar with mapping fundamentals, including the basic ideas surrounding the Google mapping API syntax. If you haven't had the opportunity to experiment with the API, take some time to read this tutorial before continuing. Introducing the YM4R/GM Plugin Although there's nothing preventing you from linking to Google's mapping JavaScript API and referencing the library directly from your views, jumping between Ruby/Rails syntax and JavaScript can quickly become a tedious affair. The YM4R/GM plugin remedies this issue nicely, abstracting the API calls through Ruby's familiar object-oriented syntax. With it you can do everything from render simple maps to build complex maps complete with custom markers, information windows, and clusters for facilitating the rendering of large numbers of markers. Installing and Configuring YM4R/GM To install the YM4R/GM plugin, execute the following command from your project directory: %>ruby script/plugin install svn://rubyforge.org/var/svn/ym4r/Plugins/GM/trunk/ym4r_gm YM4R/GM manages the Google API keys within a file named gmaps_api_key.yml, found in the project's config directory. The developers save you the trouble of having to create your own API key for local testing purposes by including an API key that has already been tied to. 
However, if you're testing on a different host, you'll first need to create an API key and add it to this file (instructions for creating a key are provided in the aforementioned introductory tutorial).

Creating Your First Map

To get acquainted with YM4R/GM's syntax, you can begin by creating the map displayed in Figure 1.

Figure 1: Centering the map over Youngstown, Ohio

As is standard Rails practice, you'll use the controller method to define the map and its features, and the view to render the results. In the following example, you'll define a map in the index controller's index action, complete with a pan/zoom control but minus the map type selector:

def index
  # Create a new map object, also defining the div ("map")
  # where the map will be rendered in the view
  @map = GMap.new("map")

  # Use the larger pan/zoom control but disable the map type
  # selector
  @map.control_init(:large_map => true, :map_type => false)

  # Center the map on specific coordinates and focus in fairly
  # closely
  @map.center_zoom_init([41.023849,-80.682053], 10)
end

Next, in the index action's corresponding view, add the following code:

<html>
  <head>
    <title>Test</title>
    <%= GMap.header %>
    <%= @map.to_html %>
  </head>
  <body>
    <%= @map.div(:width => 400, :height => 300) %>
  </body>
</html>

The GMap.header call will output references to both the Google Maps API and YM4R/GM JavaScript libraries. The @map.to_html call outputs JavaScript code generated by YM4R/GM according to the specifications set forth in the action. Finally, the @map.div call outputs the map to a div as specified in the action's GMap.new call. Also, you'll see that the map dimensions are defined in the view rather than the controller. This is keeping with the convention of separating application logic and design; the view designer can choose any dimension he pleases; the map will simply fill to the desired size.
The initial zoom level is, however, defined in the controller, although the user can easily subsequently adjust the zoom using the control.
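A natural next step with this plugin is adding a marker to the map. The sketch below is untested and the names GMarker and overlay_init are taken from the YM4R/GM README for this era of the plugin; verify them against the version you installed:

```ruby
# Sketch (verify against your YM4R/GM version): extend the index
# action above with a marker and an info window over the map center.
def index
  @map = GMap.new("map")
  @map.control_init(:large_map => true, :map_type => false)
  @map.center_zoom_init([41.023849, -80.682053], 10)

  marker = GMarker.new([41.023849, -80.682053],
                       :title => "Youngstown",
                       :info_window => "Hello from Youngstown!")
  @map.overlay_init(marker)
end
```

No view changes are required; the marker is emitted by the same @map.to_html call.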
http://www.developer.com/open/article.php/3757576/Adding-Google-Maps-To-Your-Rails-Applications.htm
CodePlex Project Hosting for Open Source Software

Hi everybody, I'm trying to utilize this library for a project, especially for the data processing part. I need to process about 23000 files per run, each of them is a GZipped JSON. So, I slurp every file, decompress it, deserialize it into a List and then process the elements (ending up by writing some of the elements to the output file). With PHP, the whole process took about 2 secs per file (with increasing time during processing), which was hardly acceptable, so I switched to C#. Everything runs much faster, BUT: deserializing a structure of about 20k elements into a List still takes about 0.7 seconds (excluding unzipping and further processing, which come on top). Any idea on how to improve this? The underlying data structure is rather boring - 6 string elements, nothing really exciting. Thanks in advance! Dmitri

Dmitri, try using JsonReader to process your file sequentially, record by record - this way you'll avoid deserializing everything at once. Apart from that, you can chain the GZIP stream reader (System.IO.Compression namespace) with the Json reader and do file decompression & Json processing in a single pass. RG

Can you give me a code example? Assuming GZStream is the - already opened - stream and I wanted to process the records (...of type Item), how would I do it?

Create a JsonTextReader, call Read() on it and then act on the TokenType and Value as necessary. It works in exactly the same fashion as XmlTextReader.

I tried it yesterday... It's a way too low-level approach. Read() gets me one atomic token like String or Int or whatever, leaving me to parse it into the data structure. I would prefer something like:

while (Item e = reader.getNext()) {
    // do something with e
}

instead of fiddling with pieces of "e". For now, I'm just translating JSON to CSV while receiving the data (using PHP) file by file, and then I can just string.Split() to get my records.

Indeed.
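The middle ground Dmitri is asking for exists: JsonSerializer can deserialize one element at a time from a JsonTextReader positioned on each object, without hand-parsing tokens. A sketch under assumptions (Item, the field layout, and the file path are placeholders; the Newtonsoft.Json and System.IO.Compression APIs used here are standard, but treat this as illustrative rather than a drop-in):

```csharp
// Sketch: decompress and deserialize in one streaming pass,
// yielding one record at a time instead of building the whole List.
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using Newtonsoft.Json;

static IEnumerable<Item> ReadItems(string path)
{
    using (var file = File.OpenRead(path))
    using (var gzip = new GZipStream(file, CompressionMode.Decompress))
    using (var text = new StreamReader(gzip))
    using (var json = new JsonTextReader(text))
    {
        var serializer = new JsonSerializer();
        // Walk the token stream; deserialize each array element
        // individually when we reach the start of an object.
        while (json.Read())
        {
            if (json.TokenType == JsonToken.StartObject)
                yield return serializer.Deserialize<Item>(json);
        }
    }
}
```

This keeps memory flat regardless of how many elements a file contains, which is usually where the per-file cost goes for 20k-element lists.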
https://json.codeplex.com/discussions/208237
Razor Support for ASP.NET Core Apps in Visual Studio Code VS Code now supports Razor for ASP.NET Core apps. The beauty of a cross-platform framework like ASP.NET Core is your ability to choose tooling you prefer. With all its advantages, Visual Studio can sometimes be too powerful for what you need. With ASP.NET Core, the days of being locked down to writing .NET in Visual Studio are over. For example, you can write your applications on editors like Visual Studio Code, whether you are on Mac, Windows, or a flavor of Linux. Earlier this year, I spoke and wrote extensively about how to write an ASP.NET Core app with Visual Studio Code. As this support is evolving, you may notice experiences here and there that do not compare to using a full-fledged editor like Visual Studio—but if you are using Code for your Core apps you may find the trade-off worth it. That gap is narrowing with the announcement that Visual Studio Code now has support for Razor, the .NET markup syntax engine that allows you to write dynamic views using .NET code and HTML markup. As discussed in the announcement, this is very much in preview and has limitations. Read the article for details on what these limitations are, how to provide feedback, and how to disable it if you come across issues. Take advantage of Razor support in Code Before you look at Razor support in Code, create an ASP.NET Core application in Visual Studio Code. We will just create an application based on the ASP.NET Core web application template. For an in-depth tutorial on using ASP.NET Core with Code, you can review my blog post or an in-depth tutorial at the official Microsoft Docs site. As prerequisites, make sure you have Visual Studio Code, the C# extension, and the .NET Core SDK installed. 
Create a quick .NET Core web app in Visual Studio Code

From Visual Studio Code, enter the following commands from the integrated terminal (if you don't see it, click Terminal > New Terminal):

dotnet new webapp -o TestRazorSupport
code --reuse-window TestRazorSupport

The first command uses the .NET Core CLI to create a new application based on the webapp template. The application now exists within a TestRazorSupport folder. The second command uses a Code command-line switch to open the application in your active Code window. Now, you can open any view file (.cshtml) to experiment with Razor support.

Explore Razor support in Visual Studio Code

If you open a view—I will be looking at Pages/Contact.cshtml—you can see Razor support in action. First, let's update the ContactModel with some additional properties. Here's what my Contact.cshtml.cs now looks like:

using System;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace TestRazorSupport.Pages
{
    public class ContactModel : PageModel
    {
        public string Message { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }

        public void OnGet()
        {
            Message = "Your contact page.";
            Name = "Dave Brock";
            Email = "dave@myemail.com";
        }
    }
}

In Contact.cshtml, let's access the Name property to check out the Razor support. It works as you would expect—even inside HTML attributes. And if we access .NET APIs (and steal from the announcement) it works beautifully!
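The post does not include the view markup itself, so as a sketch only (standard Razor Pages syntax; the exact markup shown in the original screenshots is not reproduced here), Contact.cshtml accessing those model properties could look like:

```cshtml
@page
@model ContactModel

<h2>@Model.Message</h2>

@* Razor expressions work in element content and in attributes *@
<p>Contact <a href="mailto:@Model.Email">@Model.Name</a></p>

@* Plain .NET APIs are available inside expressions too *@
<p>Generated on @DateTime.Now.ToShortDateString()</p>
```

Completions, diagnostics, and the transitions between HTML and C# are exactly where the new Razor tooling in Code shows up.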
https://www.daveabrock.com/2018/11/19/net-core-apps-in-visual-studio-code-now-have-razor-support/
Code:

for root, dirs, files in os.walk(os.path.join("\\\?\\I:\\Users\\MyWork\\")):
    for name in files:
        print name

Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> # I start in an empty folder:
>>> for i in os.walk("."):
...     print i
...
('.', [], [])
>>> # I create some test files and folders like so:
>>> # .
>>> # -- somefile.txt
>>> # -- testfolder
>>> # ---- anotherfile.txt
>>> os.mkdir("testfolder")
>>> open("somefile.txt", "a")
<open file 'somefile.txt', mode 'a' at 0x01D85390>
>>> open("testfolder/anotherfile.txt", "a")
<open file 'testfolder/anotherfile.txt', mode 'a' at 0x01DE9B78>
>>> # I test output of os.walk again:
>>> for i in os.walk("."):
...     print i
...
('.', ['testfolder'], ['somefile.txt'])
('.\\testfolder', [], ['anotherfile.txt'])
>>> # I test output of your function:
>>> for root, dirs, files in os.walk("."):
...     for name in files:
...         print name
...
somefile.txt
anotherfile.txt
>>>

always_stuck wrote: I'm trying to implement os.walk to list all files in a directory tree. When I run my code, I only get the files within the top level directory of my tree. I expected it to list all the files for all sub-directories in the entire tree. I have tried this with a number of different folders and always get the same result. Am I misunderstanding os.walk or is there another issue? I'm using Windows 7 and Python 2.5. Thanks

Code:

for root, dirs, files in os.walk(os.path.join("\\\?\\I:\\Users\\MyWork\\")):
    for name in files:
        print name
- Code: Select all >>> for i in os.walk('c:\Users\R924073\Desktop\test\\'): print i >>> Kebap wrote:Please create for test a sub-folder in the location 'I:\\RND-ILLINOIS\\PROJECTS-US\\OFFLINE-DEVELOPMENT-US\\4-TURTLE-KEEP'. Just like you created on desktop. Then see if this test-folder is handled correctly by os.walk(). Maybe there is a problem with the other folders. For example, symbolic links won't work automagically. import glob, os folder = "I:\\RND-ILLINOIS\\PROJECTS-US\\OFFLINE-DEVELOPMENT-US\\4-TURTLE-KEEP" assert os.path.isdir(folder) mask = os.path.join(folder, "*") print glob.glob(mask) dir I:\RND-ILLINOIS\PROJECTS-US\OFFLINE-DEVELOPMENT-US\4-TURTLE-KEEP Return to General Coding Help Users browsing this forum: No registered users and 2 guests
http://www.python-forum.org/viewtopic.php?f=6&t=8192
CC-MAIN-2017-17
refinedweb
440
62.75
CSS is perhaps one of the most controversial parts of web development. For some, it's the favorite, for some the least pleasant part. As a result, many solutions have appeared around it to make it more palatable to web developers. To learn more about one, this time we'll learn about DSS, a solution by Giuseppe Gurgone. My name is Giuseppe, and I am a front-end engineer from Sicily, Italy. In the past I worked for Yelp on their frontend core team, I am a core team member of SUIT CSS and co-author of a CSS-in-JS library called styled-jsx. If it wasn't clear, I like to build front-end infrastructure and CSS libraries. 😅 DSS - Deterministic Style Sheets - is a superset of CSS that can be compiled to atomic CSS classes. In addition to producing incredibly small bundles, atomic CSS classes can be exploited to bring deterministic styles resolution to CSS. For the ones who are not familiar with the concept, deterministic styles resolution means that styles resolve and affect an element based on their application order rather than cascade or their source files order. <!-- text is green --> <p class="red green">hi there SurviveJS friends</p> <!-- text is red --> <p class="green red">hi there SurviveJS friends</p> In my opinion this way of using styles is more powerful and predictable, and apparently I am not the only one who thinks that: This is definitely how I thought css worked when I first read the spec in ~2002/3 - Nicole Sullivan 💎 (@stubbornella) - July 19, 2018 DSS is similar to CSS Modules, and it is language agnostic. You write styles in regular .css files and then compile those with the DSS compiler to produce a single tiny bundle of atomic CSS classes that you include in your application via link tag. Like CSS Modules, for each CSS file, the DSS compiler produces a JSON file (or JS module) which maps the original selectors to their corresponding atomic CSS classes. 
.foo { margin-top: 30px; color: black; font-size: 10px; } .bar { color: green; font-size: 345px; } { "foo": [ "dss_marginTop-30px", "dss_color-black", "dss_fontSize-10px" ], "bar": ["dss_color-green", "dss_fontSize-345px"] } Above is what you import in your templates when you want to use the DSS styles. You can then consume those styles using a helper that merges the atomic CSS classes arrays right to left like Object.assign does in JavaScript. // DSS also comes with a webpack loader if you are using it in JavaScript. import styles from "./my-component.css"; import classNames from "dss-classnames"; document.body.innerHTML = `<div class="${classNames( styles.foo, styles.bar )}">hi</div>`; Above produces: <div class="dss_marginTop-30px dss_color-green dss_fontSize-345px" > hi </div> Merging is done (right to left) using the first occurrence of a property, e.g., dss_color and ignoring the others. Thanks to the low specificity and naming scheme of the atomic CSS classes, DSS can guarantee that styles are resolved in application order, i.e., deterministically! Note that the classnames helper can be implemented in any language. DSS is just proper old static CSS compiled to atomic CSS classes. Many love atomic CSS classes based solutions like Basscss, Tachyons, and Tailwind CSS. While I like how productive such approaches make me, I think that having to do the compiler job and memorize all those class names is a bit inconvenient. By compiling CSS to atomic classes, DSS allows me to write as many declarations as I want without penalizing the size of the final bundle. So I get to write the CSS I already know, e.g., margin-top: 25px and a compiler makes sure that it is compiled to atomic CSS and deduped if there are multiple occurrences of that declaration. It is a win-win situation. Ah, and you also get deterministic style resolution. 🕶 Mainly because I use CSS Modules at work and I am a bit frustrated about the fact that you can still write overly specific CSS selectors. 
If you import the CSS files in the wrong order, you can easily screw up your application (👋 cascade). In addition to that with atomic CSS your application bundle size grows logarithmically, i.e., at some point, you can keep adding CSS, but the file size of your CSS bundle won't change (increase). In the end, I wanted to bring some of the good ideas from CSS-in-JS to static CSS land (and make Alex Russell happy). So my advice for folks who want to do the CSS-in-JS thing is to find a system that compiles the rulesets out and bottoms out at class set/unset. - Alex Russell (@slightlylate) - August 3, 2018 I would love to add source maps support for better debuggability in development, add automatic shorthand properties unwrapping and abstract the atomification library so that it can be used at runtime too, you know for dynamic styles. But most importantly it would be amazing if people could try it out and provide feedback! I don't know what is the future of DSS like since the project is still at the validation stage. The principles behind it have been proven to be reliable by other similar solutions (e.g., CSS Blocks) therefore the future of it might depend on my marketing skills and the ability to make other people aware of its existence. :) For what concerns web development I think that the future is about building a smaller, simpler and more robust set of APIs and primitives on top of the DOM that will act as the normalize.css for the Web Platform. React Native is doing this for native platforms and React Native for Web is a first attempt to build a web framework to build better web applications. For what I know we could even go back to DreamWeaver though. We will also see a mix of web technologies and native code thanks to WebAssembly - we already do that at work. The beginning is probably the best and more exciting part of a programmer career. At this stage, you probably don't or can't have strong opinions and are less likely to scope creep which is a great thing. 
Don't let the lack of experience or knowledge intimidate you, roll with it, get things done, break things and learn as you go. While you do that, also review your code and "try harder" constantly. Nicolas Gallagher about React Native for Web, @electrobabe and @evatrostlos because they are starting Women && Code in Vienna. classNamehelper I made an awesome Babel plugin for you: babel-plugin-classnames. Thanks for the interview, Giuseppe! I can see DSS solves a key pain point of CSS and I hope developers find it. You can learn more at DSS homepage. Check the project also on GitHub and play with DSS online.
https://survivejs.com/blog/dss-interview/
CC-MAIN-2022-40
refinedweb
1,137
61.97
Jonathan Garza4,475 Points Challenge Code Not Passing, but works in Workspaces I'm having issues passing this challenge. I opened a workspace to help find errors. When I found a solution that worked, I copied into the challenge and was shocked to get the "Bummer" error message. Here is the text from the challenge:. def combiner(*args): num = 0 string = "" for item in args: if isinstance(item,str) == True: string += item elif isinstance(item,(float,int)) == True: num += item string_num = str(num) print(string + string_num) 1 Answer KRIS NIKOLAISEN53,322 Points You have two issues: 1) The challenge has asked you to return (not print) the result 2) The challenge will pass in a single list (one argument). To test in a workspace try adding the following after your function: print(combiner(["apple", 5.2, "dog", 8])) The result for your current code is: 0 Grant Murphy10,072 Points Grant Murphy10,072 Points I am having the same issue, my code returns the expected result given in the question but am receiving the "Not getting expected result" error. I am at a loss and getting really frustrated at this point.
Consumer Driven Contracts using Pact

Introduction

One of the key principles of microservices is that it should be possible to deploy microservices independently. This allows us to avoid the painful 'big bang' releases which were common for monoliths and move towards Continuous Delivery. Adopting a Continuous Delivery approach is beneficial because it allows us to deploy features and bug fixes regularly.

One of the difficulties with achieving independent deployment is that as a system grows, the number of dependencies between services increases rapidly. In order to deploy one microservice you need to know that you haven't broken others downstream. A common approach to this problem is to have integration and end-to-end checks which ensure that the system still works as a whole. Unfortunately these are often slow, unreliable and hard to debug. Ideally we'd like to check our microservice in isolation, which would greatly reduce the complexity.

One way to reduce dependence on integration checking is to use Consumer Driven Contracts, or CDCs. The term consumer in this context refers to any service which uses the API of another. Conversely, services which provide an API are called providers. A CDC is a form of specification by example that is provided by consumers to providers. The specification usually consists of a set of requests which can be sent to the provider and details of the expected responses.

For example, a CDC may specify that calling the users endpoint should return a JSON object containing a list of users. It could also specify that the user object should have at least the firstName and lastName fields. If the CDC was run against the provider and it instead returned a list of users with a surname field then the check would fail and the developers would know that there was an issue. Different CDCs can specify different requirements for the same provider; for example, another CDC may specify that a lastLogin field should be present.
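To make the idea concrete, here is a hand-rolled sketch of the users-endpoint contract just described, together with the check a provider could run against a real response. This is not Pact or any real CDC framework; every name in it is invented for illustration:

```javascript
// Invented representation of a consumer-driven contract: an example request
// plus the minimum the consumer relies on in the response.
const contract = {
  request: { method: 'GET', path: '/users' },
  response: {
    status: 200,
    requiredUserFields: ['firstName', 'lastName']
  }
};

// Provider-side check: does an actual response satisfy the contract?
function satisfiesContract(contract, actualResponse) {
  if (actualResponse.status !== contract.response.status) {
    return false;
  }
  return actualResponse.body.every(user =>
    contract.response.requiredUserFields.every(field => field in user)
  );
}

const goodResponse = { status: 200, body: [{ firstName: 'Ada', lastName: 'Lovelace' }] };
const badResponse  = { status: 200, body: [{ firstName: 'Ada', surname: 'Lovelace' }] };
console.log(satisfiesContract(contract, goodResponse)); // → true
console.log(satisfiesContract(contract, badResponse));  // → false
```

A second consumer could add lastLogin to its own required fields without affecting this contract, which is what allows the provider to reason about each consumer separately.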
In the future, if the provider team wanted to deprecate the lastLogin field then they would know which team to speak to. Running these checks doesn't require a running instance of the consumer services, which makes them much quicker and more reliable. They can be introduced into the Continuous Delivery pipeline to give greater confidence that things are working before deploying into production.

Pact framework

It is possible to use CDCs without having a framework, however there are some great ones available which help you get up and running quickly. Probably the most commonly used framework at the minute is called Pact. This framework supports a wide range of languages including Java, JavaScript, .NET, Ruby, Go and Swift. It can even work for providers written in any language using a command line tool.

From a consumer perspective, Pact acts as a mock HTTP server. The team that owns the consumer write a set of tests which exercise their code against the mock server and set up expected results from the provider. Pact records these request/response pairs and uses them to generate a contract. The contract consists of the requests sent by the consumer and the required responses from the provider. To verify this contract is fulfilled, Pact replays the consumer's requests against the provider API and ensures that the responses match those expected.

Using Pact JS

An example can be helpful to demonstrate the concepts involved. We'll be creating a simple CDC using the Pact JavaScript library to test the communication between a React application and a backend API. The frontend application will be created using create-react-app, so if you don't already have this installed globally do so now:

    npm install -g create-react-app

Now we can create our example application:

    mkdir consumer-driven-contracts-example
    cd consumer-driven-contracts-example
    create-react-app events-frontend

I'll be using a Test-Driven Development (TDD) approach with some steps omitted for brevity.
This should help show where the Pact code fits into the overall structure of the tests. First we should create some fixtures to be used by our tests in events-frontend/src/fixtures/events-client.js:

    const eventOne = {
      name: "Event One"
    };

    const eventTwo = {
      name: "Event Two"
    };

    export const eventsClientFixtures = {
      getEvents: {
        TWO_EVENTS: [eventOne, eventTwo]
      }
    };

Using a fixtures file or a factory is recommended in the Pact documentation to help check that there are no invalid mocks used anywhere in your tests. Pact checks that your mocks are correct against the provider, but it would be too expensive to use it in every test. Using the same fixtures everywhere ensures that they have been checked by Pact at least once (as long as your fixtures or factories have full coverage). For a more detailed explanation see this Gist.

Pact recommends sending all provider requests through central classes which are tested by Pact. For the Events functionality we will be using an ES6 class called EventsClient.
In order to test-drive the development of this class, create a test file called events-frontend/src/EventsClient.test.js with the following content:

    import EventsClient from './EventsClient';
    import {eventsClientFixtures} from './fixtures/events-client';

    describe('returns the expected result when the events service returns a list of events', () => {
      it('returns a list of events', async () => {
        // Arrange
        const expectedResult = eventsClientFixtures.getEvents.TWO_EVENTS;
        const eventsClient = new EventsClient({host: "http://localhost:1234"});

        // Act
        const events = await eventsClient.getEvents();

        // Assert
        expect(events).toEqual(expectedResult);
      });
    });

Running this test will give us an error that EventsClient is not defined, so we'll create that now in events-frontend/src/EventsClient.js with a dummy implementation of the getEvents method:

    class EventsClient {
      getEvents() {
        return [
          {"name": "Event One"},
          {"name": "Event Two"}
        ];
      }
    }

    export default EventsClient;

This should give us green tests, so now we should refactor to a sensible (non-dummy) implementation. I'll be using Axios as an HTTP client, so install this now:

    npm install --save --save-exact axios

Then update the content of events-frontend/src/EventsClient.js to the following:

    import axios from 'axios';

    class EventsClient {
      constructor(options) {
        this.host = options.host;
      }

      getEvents() {
        const headers = {"Accept": "application/json"};
        return axios({
          url: `${this.host}/events`,
          method: 'GET',
          headers
        })
        .then(function (response) {
          return response.data;
        });
      }
    }

    export default EventsClient;

Running this test results in a Network Error, since there is no server at http://localhost:1234 (assuming you don't have anything running locally on that port). This is where Pact comes in: during the tests we'll start up a Pact server on that port and program it to respond to our expected requests with the correct responses. We will then generate a Pact file containing this configuration which can be run against the provider service.
Install the Pact Node library using npm:

    npm install --save-exact --save-dev @pact-foundation/pact-node pact

We'll need to add quite a lot of code into events-frontend/src/EventsClient.test.js to configure the Pact server; however, before we introduce Pact there is one important change which needs to be made. As explained in this issue, when using Pact with Jest you need to set the test environment to node instead of the default jsdom in your package.json file. Since the application was created using create-react-app, it is necessary to edit the test script in package.json to read react-scripts test --env=node instead of react-scripts test --env=jsdom.

Also note that switching to a Node test environment currently breaks the create-react-app tests. To fix these we need to change from using ReactDOM and use Enzyme instead:

    npm install --save-dev --save-exact enzyme react-addons-test-utils

Updating src/App.test.js to the following:

    import React from 'react';
    import {shallow} from 'enzyme';
    import App from './App';

    it('renders without crashing', () => {
      shallow(<App />);
    });

Now we can add a Pact server called mockEventsService at the top of the file (after the imports):

    import Pact from 'pact';
    import wrapper from '@pact-foundation/pact-node';
    import path from 'path';

    const PACT_SERVER_PORT = 1234;
    const PACT_SPECIFICATION_VERSION = 2;

    const mockEventsService = wrapper.createServer({
      port: PACT_SERVER_PORT,
      spec: PACT_SPECIFICATION_VERSION,
      log: path.resolve(process.cwd(), '../pact/logs', 'events-service-pact-integration.log'),
      dir: path.resolve(process.cwd(), '../pact/pacts')
    });

Pact uses the log and dir options to determine where to store the Pact files and logs. Note that these directories are set to outside the npm project due to an issue with create-react-app's default watch settings. If you prefer, you can move these directories inside the project, eject from create-react-app and exclude them from Jest's watch list.
After creating the Pact server we need to add Jest before and after hooks to start and stop it. We'll keep a reference to the provider in order to interact with Pact during the tests:

    var provider;

    beforeEach((done) => {
      mockEventsService.start().then(() => {
        provider = Pact({
          consumer: 'Events Frontend',
          provider: 'Events Service',
          port: 1234
        })
        done();
      }).catch((err) => catchAndContinue(err, done));
    });

    afterAll(() => {
      wrapper.removeAllServers();
    });

    afterEach((done) => {
      mockEventsService.delete().then(() => {
        done();
      })
      .catch((err) => catchAndContinue(err, done));
    });

    function catchAndContinue(err, done) {
      fail(err);
      done();
    }

The final step is to set up the individual test with the correct expectations (this replaces our original describe block from earlier):

    describe('returns the expected result when the events service returns a list of events', () => {
      const expectedResult = eventsClientFixtures.getEvents.TWO_EVENTS;
      const eventsClient = new EventsClient({host: `http://localhost:${PACT_SERVER_PORT}`});

      beforeEach((done) => {
        provider.addInteraction({
          uponReceiving: 'a request for events',
          withRequest: {
            method: 'GET',
            path: '/events',
            headers: {
              'Accept': 'application/json'
            }
          },
          willRespondWith: {
            status: 200,
            headers: {
              'Content-Type': 'application/json'
            },
            body: expectedResult
          }
        }).then(() => done()).catch((err) => catchAndContinue(err, done));
      });

      afterEach((done) => {
        provider.finalize().then(() => done()).catch((err) => catchAndContinue(err, done));
      });

      it('returns a list of events', async () => {
        const events = await eventsClient.getEvents();
        expect(events).toEqual(expectedResult);
        provider.verify(events);
      });
    });

Now run the tests and check the contents of the Pact directory.
We should get passing tests and a pacts folder created containing a single Pact file:

    {
      "consumer": {
        "name": "Events Frontend"
      },
      "provider": {
        "name": "Events Service"
      },
      "interactions": [
        {
          "description": "a request for events",
          "request": {
            "method": "GET",
            "path": "/events",
            "headers": {
              "Accept": "application/json"
            }
          },
          "response": {
            "status": 200,
            "headers": {
              "Content-Type": "application/json; charset=utf-8"
            },
            "body": [
              {
                "name": "Event One"
              },
              {
                "name": "Event Two"
              }
            ]
          }
        }
      ],
      "metadata": {
        "pactSpecificationVersion": "2.0.0"
      }
    }

The next step is to code a provider service which can fulfil this contract. I'll use the Pact command line provider verifier for this (although the Node.js provider verifier could also be used here).

In the root of the project, create a new folder for the provider and initialise it as an npm project:

    mkdir events-service
    cd events-service
    npm init

We'll be creating a simple Express server with a dummy implementation to check our contract is working correctly, so install Express into the project:

    npm install --save express

Then create a file events-service/index.js with the following content:

    var express = require('express');
    var app = express();

    app.get('/events', function (req, res) {
      res.set('Content-Type', 'application/json');
      res.send([
        {"name": "Event One"},
        {"name": "Event Two"}
      ]);
    });

    app.listen(3000, function () {
      console.log('Example app listening on port 3000!')
    });

To run this server, add a start script to the package.json:

    {
      "name": "events-service",
      "version": "1.0.0",
      "description": "",
      "main": "index.js",
      "scripts": {
        "start": "node index.js",
        "test": "echo \"Error: no test specified\" && exit 1"
      },
      "author": "",
      "license": "ISC",
      "dependencies": {
        "express": "^4.14.0"
      }
    }

The recommended way to run the Pact Provider Verifier is using Docker (if you don't want to use Docker there are instructions for using Ruby on the project page).
To do this we'll need to create both a Dockerfile and a docker-compose.yml:

    FROM node:6.9-slim
    ADD . .
    RUN npm install
    CMD npm run start

    api:
      build: .
      ports:
        - "3000:3000"
    pactverifier:
      image: dius/pact-provider-verifier-docker
      links:
        - api
      volumes:
        - ../pact/pacts:/tmp/pacts
      environment:
        - pact_urls=/tmp/pacts/events_frontend-events_service.json
        - provider_base_url=http://api:3000

Now we can run the Pact verifier:

    docker-compose build api
    docker-compose up pactverifier

If everything works okay you should see a message like the below:

    pactverifier_1 | Verifying a pact between Events Frontend and Events Service
    pactverifier_1 |   A request for events
    pactverifier_1 |     with GET /events
    pactverifier_1 |       returns a response which
    pactverifier_1 |         has status code 200
    pactverifier_1 |         has a matching body
    pactverifier_1 |         includes headers
    pactverifier_1 |           "Content-Type" with value "application/json; charset=utf-8"
    pactverifier_1 |
    pactverifier_1 | 1 interaction, 0 failures

Since we see one interaction and no failures, this indicates that the CDC passed against our dummy provider.
http://blog.scottlogic.com/2017/01/10/consumer-driven-contracts-using-pact.html
CC-MAIN-2017-17
refinedweb
2,196
53.51
Once a thread is created, we can direct the calling process to wait until the thread is finished (it calls pthread_exit or is cancelled). This is accomplished with a call to the pthread_join library function shown in Table 11.3. The first argument is a valid thread ID (as returned from the pthread_create call). The specified thread must be associated with the calling process and should not be specified as detached. The second argument, **status, references a static location where the completion status of the waited-upon thread will be stored. The status value is the value passed to pthread_exit, or the value returned when the function code reaches a return statement. [6] If the second argument to pthread_create is set to NULL, the status information will be discarded.

[6] If the thread involuntarily terminates, its status information is not relevant.

Table 11.3. The pthread_join Library Function.

There are some caveats associated with joining threads. A thread should be waited upon (joined) by only one other thread. The thread issuing the join blocks until the targeted thread terminates. [7] If the targeted thread has terminated prior to the issuing of the call to pthread_join, the call will return immediately and will not block. Last, but certainly not least, a nondetached thread (which is the default) that is not joined will not release its resources when the thread finishes, and will only release its resources when its creating process terminates. Such threads can be the source of memory leaks.

If pthread_join is successful, it returns a 0; otherwise, it returns a nonzero value. The return of ESRCH (3) means an undetached thread for the corresponding thread ID could not be found or the thread ID was set to 0. The return of EINVAL (22) means the thread specified by the thread ID is detached or the calling thread has already issued a join for the same thread ID. If EDEADLK (35) is returned, it indicates a deadlock situation has been encountered.
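As a minimal sketch of the join semantics just described (this example is not from the text, and the function name run_and_join is invented): one thread hands a value to pthread_exit, and the creating thread recovers it with pthread_join.

```c
#include <pthread.h>
#include <assert.h>

/* Worker doubles its argument and hands the result back via pthread_exit;
   the value is recovered by whichever thread joins this one. */
static void *worker(void *arg) {
    long v = (long) arg;
    pthread_exit((void *) (v * 2));
}

/* Create one thread, block in pthread_join until it finishes,
   and return the status it exited with (-1 on error). */
long run_and_join(long input) {
    pthread_t tid;
    void *status;
    if (pthread_create(&tid, NULL, worker, (void *) input) != 0)
        return -1;
    if (pthread_join(tid, &status) != 0)   /* blocks until worker exits */
        return -1;
    return (long) status;
}
```

Because the thread is joined rather than detached, its resources are released as soon as pthread_join returns, avoiding the leak described above.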
[7] With POSIX threads the user can issue a cancellation of a specific thread. The thread may have had several cleanup routines associated with it. If one of the associated cleanup routines contains a call to pthread_detach, a subsequent call to pthread_join will fail.

The process of joining a thread is somewhat analogous to a process waiting on a forked child process. However, unlike a forked child process, a thread can become detached with a single library call. When a detached thread finishes, its resources are automatically returned to the system. The pthread_detach library call (Table 11.4) is used to dynamically detach a joinable thread. In a later section the generation of a detached thread using a thread attribute object will be addressed.

Table 11.4. The pthread_detach Library Function.

The pthread_detach library function accepts a thread ID as its only argument. If successful, the call to pthread_detach detaches the indicated thread and returns a 0 value. If the call fails, the indicated thread is not detached, and a nonzero value is returned. The value EINVAL (22) is returned if an attempt to detach an already detached thread is made, or ESRCH (3) is returned if no such thread ID is found. Remember, once a thread is detached, other threads can no longer synchronize their activities based on its termination.

Program 11.1 uses the pthread_create and pthread_join library calls to create and join several threads.

Program 11.1 Creating and joining threads.

File : p11.1.cxx
     /*   Creating and joining threads   */

     #define _GNU_SOURCE
  +  #define _REENTRANT
     #include <iostream>
     #include <cstdio>
     #include <cstdlib>
     #include <unistd.h>
 10  #include <pthread.h>
     #include <sys/time.h>
     #include <sys/types.h>
     using namespace std;
     int MAX=5;
  +  inline int my_rand( int, int );
     void *say_it( void * );
     int
     main(int argc, char *argv[]) {
       pthread_t  thread_id[MAX];
 20    int        status, *p_status = &status;
       setvbuf(stdout, (char *) NULL, _IONBF, 0);
       if ( argc > MAX+1 ){                          // check arg list
         cerr << *argv << " arg1, arg2, ... arg" << MAX << endl;
         return 1;
  +    }
       cout << "Displaying" << endl;
       for (int i = 0; i < argc-1; ++i) {            // generate threads
         if( pthread_create(&thread_id[i],NULL,say_it,(void *)argv[i+1]) > 0){
           cerr << "pthread_create failure" << endl;
 30        return 2;
         }
       }
       for (int i=0; i < argc-1; ++i){               // wait for each thread
         if ( pthread_join(thread_id[i], (void **) p_status) > 0){
  +        cerr << "pthread_join failure" << endl;
           return 3;
         }
         cout << endl << "Thread " << thread_id[i]
              << " returns " << status;
 40    }
       cout << endl << "Done" << endl;
       return 0;
     }
                        // Display the word passed a random # of times
  +  void *
     say_it(void *word) {
       int numb = my_rand(2,6);
       cout << (char *)word << " to be printed "
            << numb << " times." << endl;
 50    for (int i=0; i < numb; ++i){
         sleep(1);
         cout << (char *) word << " ";
       }
       return (void *) NULL;
  +  }
                        // Generate a random # within given range
     int
     my_rand(int start, int range){
       struct timeval t;
 60    gettimeofday(&t, (struct timezone *)NULL);
       return (int)(start+((float)range * rand_r((unsigned *)&t.tv_usec)) /
                    (RAND_MAX+1.0));
     }
The user-defined say_it function is used to display the passed-in sequence of characters a random number of times. At the start of the say_it function, a random value is calculated. The library functions srand and rand that we have used previously are not used, as they are not safe to use in a multiple thread setting. However, there is a library function, rand_r , that is multithread-safe. The rand_r library function is incorporated into a user-defined inline function called my_rand . In the my_rand function the number of elapsed microseconds since 00:00 Universal Coordinated Time, January 1, returned by the gettimeofday [9] library function, is used as a seed value for rand_r . The value returned by rand_r is then adjusted to fall within the specified limits. The calculated random value and the sequence of characters to be displayed are shown on the screen. Finally, a loop is entered, and for the calculated number of times, the function sleeps one second and then prints the passed-in sequence of characters. A compilation sequence and run of the program is shown in Figure 11.2. [9] For gettimeofday the file must be included. Figure 11.2 A Compilation and run of Program 11.1. linux$ g++ p11.1.c -o p11.1 -lpthread linux$ p11.1 s p a c e Displaying s to be printed 5 times. p to be printed 5 times. a to be printed 5 times. c to be printed 3 times. e to be printed 3 times. s p a c e s p a c e s e p a c s p a a s p Thread 1026 returns 0 Thread 2051 returns 0 Thread 3076 returns 0 <-- 1 Thread 4101 returns 0 Thread 5126 returns 0 Done (1) Each of these threads is supported by an LWP. Each LWP has its own process ID as well as its thread ID. In this run, Program 11.1 was passed five command-line arguments: s, p, a, c, e. The program creates five new threads, one for each argument. The number of times each argument will be printed is then displayed. The request to print this information was one of the first lines of code in the user-defined function say_it (see line 48). 
As shown, all five threads process this statement prior to any one of the threads displaying its individual words. This is somewhat misleading. If we move the sleep statement in the for loop of the say_it function to be after the print statement within the loop, we should see the initial output statements being interspersed with the display of each word. If we count the number of words displayed, we will find they correspond to the number promised (e.g., the letter s is displayed five times, etc.).

A casual look at the remainder of the output might lead one to believe the threads exit in an orderly manner. The pthread_join calls are done in a second loop in the calling function (main). Since the thread IDs are passed to pthread_join in order, the information concerning their demise is also displayed in order. Viewing the output, we have no way to tell which thread ended first (even though it would seem reasonable that one of the threads that had to display the fewest number of words would be first). When each thread finishes with the say_it function, it returns a NULL. This value, which is picked up by the pthread_join, is displayed as a 0. The return statement in the say_it function can be replaced with a call to pthread_exit. However, if we replace the return with pthread_exit, most compilers will complain that no value is returned by the say_it function, forcing us to include the return statement even if it is unreachable!

If we run this program several times, the output sequences will vary. As written, the display of each word (command-line argument) is preceded by a call to sleep for one second. In the run shown in Figure 11.3, sleep is called 19 times (7 for f, 5 for a, etc.). Yet, the length of time it takes for the program to complete is far less than 19 seconds. This is to be expected, as each thread is sleeping concurrently. We can verify the amount of time used by the program using the /usr/bin/time [10] utility.
Several reruns of Program 11.1 using the /usr/bin/time utility confirm our conjecture.

[10] In most versions of UNIX there are several utilities that provide statistics about the amount of time it takes to execute a particular command (or program). The most common of these utilities are time, /usr/bin/time, and timex. Most versions of Linux do not come with timex.

Figure 11.3 Timing a run of Program 11.1.

linux$ /usr/bin/time -p p11.1 f a c e
Displaying
f to be printed 7 times.
a to be printed 5 times.
c to be printed 3 times.
e to be printed 4 times.
f a c e f a c e f e c a f e a a f f f
Thread 1026 returns 0
Thread 2051 returns 0
Thread 3076 returns 0
Thread 4101 returns 0
Done
real 7.07        <-- 1
user 0.00        <-- 2
sys 0.02         <-- 3

(1) Elapsed real time (in seconds).
(2) CPU seconds in user mode.
(3) CPU seconds in kernel mode.
The bcp utility is a tool for extracting subsets of Boost. It's useful for Boost authors who want to distribute their library separately from Boost, and for Boost users who want to distribute a subset of Boost with their application. bcp can also report on which parts of Boost your code is dependent on, and what licences are used by those dependencies.

bcp scoped_ptr /foo
    Copies boost/scoped_ptr.hpp and dependencies to /foo.

bcp boost/regex.hpp /foo
    Copies boost/regex.hpp and all dependencies, including the regex source code (in libs/regex/src) and build files (in libs/regex/build), to /foo. Does not copy the regex documentation, test, or example code. Also does not copy the Boost.Build system.

bcp regex /foo
    Copies the full regex lib (in libs/regex) including dependencies (such as the boost.test source required by the regex test programs) to /foo. Does not copy the Boost.Build system.

bcp --namespace=myboost --namespace-alias regex config build /foo
    Copies the full regex lib (in libs/regex) plus the config lib (libs/config) and the build system (tools/build) to /foo, including all the dependencies. Also renames the boost namespace to myboost and changes the filenames of binary libraries to begin with the prefix "myboost" rather than "boost". The --namespace-alias option makes namespace boost an alias of the new name.

bcp --scan --boost=/boost foo.cpp bar.cpp boost
    Scans the [non-boost] files foo.cpp and bar.cpp for boost dependencies and copies those dependencies to the sub-directory boost.

bcp --report regex.hpp boost-regex-report.html
    Creates a HTML report called boost-regex-report.html for the boost module regex.hpp. The report contains license information, author details, and file dependencies.

bcp --list [options] module-list
    Outputs a list of all the files in module-list including dependencies.
bcp [options] module-list output-path
    Copies all the files found in module-list to output-path.

bcp --report [options] module-list html-file
    Outputs a html report file containing:

--boost

module-list
    When the --scan option is not used, then a list of boost files or library names to copy; this can be:
    When the --scan option is used, then a list of (probably non-boost) files to scan for boost dependencies; the files in the module list are not therefore copied/listed.

output-path
    The path to which files will be copied (this path must exist).

File dependencies are found as follows:

It should be noted that in practice bcp can produce a rather "fat" list of dependencies; reasons for this include:

* general.
http://www.boost.org/doc/libs/1_52_0_beta1/tools/bcp/doc/html/index.html
CC-MAIN-2014-42
refinedweb
599
62.48
This notebook harvests metadata and OCRd text from digitised books in Trove. There are three main steps:

It's not easy to identify all the digitised books with OCRd text in Trove. I'm starting with a search in the book zone for records that include the phrase "nla.obj" and are available online. This currently returns 21,699 results. However, this includes books where access to the digital copy is 'restricted'. I think these are mostly recent books submitted in digital form under legal deposit. I've filtered the 21,699 results to remove records where the digital copy is not available. This currently reduces the total to 13,500 results.

But some of those 13,500 results are actually parent records that contain multiple volumes or parts. When I find the number of pages in each book, I'm also checking to see if the record is a 'Multi volume book' and has child works. If it does, I add the child works to the list of books. After this stage there are 14,538 works.

However, not all of these 14,538 records have OCRd text. Parent records of multi volume works, and ebook formats like PDFs or MOBI, don't have individual pages, and therefore don't have any text to download. If we exclude works without pages, there are 11,045 works that might have some OCRd text to download. But when you harvest the text files from these works, you find that some of them are empty. I've excluded these from the final dataset, leaving a grand total of 9,738 text files.

If you compare the number of downloaded files to the number in the CSV file that are identified as having OCRd text you'll notice a difference – 9,738 compared to 9,754. After a bit more poking around I realised that there are some duplicates in the list of works. This seems to be because more than one Trove metadata record can point to the same digitised work. For example, both this record and this record point to this digitised work. As they're not exact duplicates, I've left them in the results.
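The filtering funnel described above (drop works without pages, then drop works with empty text files) can be sketched like this. The sample records are invented for illustration; the real harvest works on the metadata described below:

```python
# Invented sample data standing in for the harvested work records.
works = [
    {'trove_id': 'nla.obj-1', 'pages': 200, 'text': 'OCRd text...'},
    {'trove_id': 'nla.obj-2', 'pages': 0,   'text': ''},   # parent record / ebook format
    {'trove_id': 'nla.obj-3', 'pages': 12,  'text': ''},   # pages, but empty OCR
]

# Step 1: exclude works without pages (no pages means no OCRd text to fetch).
with_pages = [w for w in works if w['pages'] > 0]

# Step 2: exclude works whose downloaded text turned out to be empty.
with_text = [w for w in with_pages if w['text'].strip()]

print([w['trove_id'] for w in with_text])  # → ['nla.obj-1']
```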
Looking through the downloaded text files, it's clear that we're getting ephemera (particularly pamphlets and posters) as well as books. There doesn't seem to be an obvious way to filter these out up front, but of course you could filter later by the number of pages. Here's the metadata I've harvested in CSV format: This file includes the following columns: children– pipe-separated ids of any child works contributors– pipe-separated names of contributors date– publication date form– work format fulltext_url– link to the digitised version pages– number of pages parent– id of parent work (if any) text_downloaded– file name of the downloaded OCR text text_file– True/False is there any OCRd text title– title of the work trove_id– unique identifier url– link to the metadata record in Trove volume– volume/part number The 9,738 downloaded text files are in the text directory of this repository. import requests from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.retry import Retry from tqdm import tqdm_notebook from IPython.display import display, FileLink import pandas as pd import json import re import time import os from copy import deepcopy from bs4 import BeautifulSoup from slugify import slugify s = requests.Session() retries = Retry(total=5, backoff_factor=1, status_forcelist=[ 502, 503, 504 ]) s.mount('https://', HTTPAdapter(max_retries=retries)) s.mount('http://', HTTPAdapter(max_retries=retries)) api_key = '' # Add your Trove API key below params = { 'key': api_key, 'zone': 'book', 'q': 'nla.obj', 'bulkHarvest': 'true', 'n': 100, 'encoding': 'json', 'l-availability': 'y' } def get_total_results(): ''' full text version of the book. ''' url = None for link in links: if link['linktype'] == 'fulltext' and 'nla.obj' in link['value']: url = link['value'] break return url def harvest_books(): ''' Harvest metadata relating to digitised books. 
    '''
    books = []
    total = get_total_results()
    start = '*'
    these_params = params.copy()
    with tqdm_notebook(total=total) as pbar:
        while start:
            these_params['s'] = start
            response = s.get('https://api.trove.nla.gov.au/v2/result', params=these_params)
            data = response.json()
            # The nextStart parameter is used to get the next page of results.
            # If there's no nextStart then it means we're on the last page of results.
            try:
                start = data['response']['zone'][0]['records']['nextStart']
            except KeyError:
                start = None
            for record in data['response']['zone'][0]['records']['work']:
                # See if there's a link to the full text version.
                fulltext_url = get_fulltext_url(record['identifier'])
                # I'm making the assumption that if this is a booky book (not a map or music etc),
                # then 'Book' will appear first in the list of types.
                # This might not be a valid assumption.
                # try:
                #     format_type = record.get('type')[0]
                # except (IndexError, TypeError):
                #     format_type = None
                # Save the record if there's a full text link and it's a booky book.
                if fulltext_url:
                    trove_id = re.search(r'(nla\.obj\-\d+)', fulltext_url).group(1)
                    # The 'contributor' field may have a single value or an array.
                    # If it's an array, join the values into a string.
                    try:
                        contributors = '|'.join(record.get('contributor'))
                    except TypeError:
                        contributors = record.get('contributor')
                    # Get the basic metadata.
                    book = {
                        'title': record.get('title'),
                        'url': record.get('troveUrl'),
                        'contributors': contributors,
                        'date': record.get('issued'),
                        'fulltext_url': fulltext_url,
                        'trove_id': trove_id
                    }
                    books.append(book)
                    # print(book)
            pbar.update(100)
    return books

# Do the harvest!
books = harvest_books()

def get_work_data(url):
    '''
    Extract work data in a JSON string from the work's HTML page.
    '''
    response = s.get(url)
    try:
        work_data = re.search(r'var work = JSON\.parse\(JSON\.stringify\((\{.*\})', response.text).group(1)
    except AttributeError:
        work_data = '{}'
    return json.loads(work_data)

def get_pages(work):
    '''
    Get the number of pages from the work data.
    '''
    try:
        pages = len(work['children']['page'])
    except KeyError:
        pages = 0
    return pages

def get_volumes(parent_id):
    '''
    Get the ids of volumes that are children of the current record.
    '''
    start_url = 'https://nla.gov.au/{}/browse?startIdx={}&rows=20&op=c'
    # The initial startIdx value
    start = 0
    # Number of results per page
    n = 20
    parts = []
    # If there aren't 20 results on the page then we've reached the end,
    # so continue harvesting until that happens.
    while n == 20:
        # Get the browse page
        response = s.get(start_url.format(parent_id, start))
        # BeautifulSoup turns the HTML into an easily navigable structure
        soup = BeautifulSoup(response.text, 'lxml')
        # Find all the divs containing issue details and loop through them
        details = soup.find_all(class_='l-item-info')
        for detail in details:
            # Get the issue id
            parts.append(detail.dt.a.string)
        time.sleep(0.2)
        # Increment the startIdx
        start += n
        # Set n to the number of results on the current page
        n = len(details)
    return parts

def add_pages(books):
    '''
    Add the number of pages to the metadata for each book.
    Add volumes from multi volume books.
    '''
    books_with_pages = []
    for book in tqdm_notebook(books):
        # print(book['fulltext_url'])
        work = get_work_data(book['fulltext_url'])
        form = work.get('form')
        pages = get_pages(work)
        book['pages'] = pages
        book['form'] = form
        book['volume'] = ''
        book['parent'] = ''
        book['children'] = ''
        time.sleep(0.2)
        # Multi volume books are containers with child volumes,
        # so we have to get the ids of each individual volume and process them
        if pages == 0 and form == 'Multi Volume Book':
            # Get child volumes
            volumes = get_volumes(book['trove_id'])
            # For each volume get details and add as a new book entry
            for index, volume_id in enumerate(volumes):
                volume = book.copy()
                # Add link up to the container
                volume['parent'] = book['trove_id']
                volume['fulltext_url'] = 'https://nla.gov.au/{}'.format(volume_id)
                volume['trove_id'] = volume_id
                work = get_work_data(volume['fulltext_url'])
                form = work.get('form')
                pages = get_pages(work)
                volume['form'] = form
                volume['pages'] = pages
                volume['volume'] = str(index + 1)
                # print(volume)
                books_with_pages.append(volume)
                time.sleep(0.2)
            # Add links from container to volumes
            book['children'] = '|'.join(volumes)
        # print(book)
        books_with_pages.append(book)
    return books_with_pages

# Add number of pages to the book metadata
books_with_pages = add_pages(deepcopy(books))

df = pd.DataFrame(books_with_pages)
df.head()

# How many records?
df.shape

(14538, 11)

# How many have pages?
df.loc[df['pages'] != 0].shape

(11045, 11)

# How many of each format?
df['form'].value_counts()

Book                   9652
Digital Publication    2356
Multi Volume Book      1681
Picture                 607
Journal                 116
                         78
Manuscript               32
Other - General          13
Other - Australian        3
Name: form, dtype: int64

# Save as CSV
df.to_csv('trove_digitised_books.csv', index=False)
display(FileLink('trove_digitised_books.csv'))

# Run this cell if you need to reload the books data from the CSV
df = pd.read_csv('trove_digitised_books.csv', keep_default_na=False)
books_with_pages = df.to_dict('records')

def save_ocr(books, output_dir='text'):
    '''
    Download the OCRd text for each book.
    '''
    os.makedirs(output_dir, exist_ok=True)
    for book in tqdm_notebook(books):
        # Default values
        book['text_downloaded'] = False
        book['text_file'] = ''
        if book['pages'] != 0:
            # print(book['title'])
            # The index value for the last page of an issue will be the total pages - 1
            last_page = book['pages'] - 1
            file_name = '{}-{}.txt'.format(slugify(book['title'][:50]), book['trove_id'])
            file_path = os.path.join(output_dir, file_name)
            # Check to see if the file has already been harvested
            if os.path.exists(file_path) and os.path.getsize(file_path) > 0:
                # print('Already saved')
                book['text_file'] = file_name
                book['text_downloaded'] = True
            else:
                url = 'https://trove.nla.gov.au/{}/download?downloadOption=ocr&firstPage=0&lastPage={}'.format(book['trove_id'], last_page)
                # print(url)
                # Get the file
                r = s.get(url)
                # Check there was no error
                if r.status_code == requests.codes.ok:
                    # Check that the file's not empty
                    r.encoding = 'utf-8'
                    if len(r.text) > 0 and not r.text.isspace():
                        # Check that the file isn't HTML (some not found pages don't return 404s)
                        if BeautifulSoup(r.text, 'html.parser').find('html') is None:
                            # If everything's ok, save the file
                            with open(file_path, 'w', encoding='utf-8') as text_file:
                                text_file.write(r.text)
                            # print('Saved')
                            book['text_file'] = file_name
                            book['text_downloaded'] = True
                time.sleep(1)

save_ocr(books_with_pages)

# Convert this to df
df_downloaded = pd.DataFrame(books_with_pages)
df_downloaded.head()

# How many have been
# downloaded?
df_downloaded.loc[df_downloaded['text_downloaded'] == True].shape

(9754, 13)

Why is the number above different to the number of files actually downloaded? Let's have a look for duplicates. As you can see below, some digitised works are linked to from multiple metadata records. Hence there are duplicates.

df_downloaded.loc[df_downloaded.duplicated('trove_id', keep=False) == True].sort_values('trove_id')

# Save as CSV
df_downloaded.to_csv('trove_digitised_books_with_ocr.csv', index=False)
display(FileLink('trove_digitised_books_with_ocr.csv'))

# Rename files to include truncated title of book
for row in df.itertuples():
    try:
        os.rename(os.path.join('text', '{}.txt'.format(row.trove_id)),
                  os.path.join('text', '{}-{}.txt'.format(slugify(row.title[:50]), row.trove_id)))
    except FileNotFoundError:
        pass

# Convert all filenames back to just nla.obj- form
for filename in [f for f in os.listdir('text') if f[-4:] == '.txt']:
    try:
        objname = re.search(r'.*(nla\.obj.*)', filename).group(1)
    except AttributeError:
        print(filename)
    else:
        os.rename(os.path.join('text', filename), os.path.join('text', objname))

Created by Tim Sherratt. Work on this notebook was supported by the Humanities, Arts and Social Sciences (HASS) Data Enhanced Virtual Lab.
https://nbviewer.jupyter.org/github/GLAM-Workbench/trove-books/blob/master/Harvesting-digitised-books.ipynb
There are lots of statistics in statistical signal processing, but to use statistics effectively with signals, it helps to have a certain unifying perspective on both. To introduce these ideas, let's start with the powerful and intimate connection between least mean-squared-error (MSE) problems and conditional expectation that is sadly not emphasized in most courses.

Let's start with an example: suppose we have two fair six-sided dice ($X$ and $Y$) and I want to measure the sum of the two variables as $Z=X+Y$. Further, let's suppose that given $Z$, I want the best estimate of $X$ in the mean-squared sense. Thus, I want to minimize the following:

$$ J(\alpha) = \sum ( x - \alpha z )^2 \mathbb{P}(x,z) $$

Here $\mathbb{P}$ encapsulates the density (i.e. mass) function for this problem. The idea is that when we have solved this problem, we will have a function of $Z$ that is going to be the minimum MSE estimate of $X$. We can substitute in for $Z$ in $J$ and get:

$$ J(\alpha) = \sum ( x - \alpha (x+y) )^2 \mathbb{P}(x,y) $$

Let's work out the steps in sympy in the following:

import sympy
from sympy import stats, simplify, Rational, Integer, Eq
from sympy.stats import density, E
from sympy.abc import a

x = stats.Die('D1', 6)  # 1st six-sided die
y = stats.Die('D2', 6)  # 2nd six-sided die
z = x + y               # sum of 1st and 2nd die
J = stats.E((x - a*(x + y))**2)            # expectation
sol = sympy.solve(sympy.diff(J, a), a)[0]  # using calculus to minimize
print sol  # solution is 1/2

1/2

This says that $z/2$ is the MSE estimate of $X$ given $Z$, which means geometrically (interpreting the MSE as a squared distance weighted by the probability mass function) that $z/2$ is as close to $x$ as we are going to get for a given $z$.

Let's look at the same problem using the conditional expectation operator $\mathbb{E}(\cdot|z)$ and apply it to our definition of $Z$. Then

$$ \mathbb{E}(z|z) = \mathbb{E}(x+y|z) = \mathbb{E}(x|z) + \mathbb{E}(y|z) = z $$

where we've used the linearity of the expectation.
Now, since by the symmetry of the problem, we have

$$ \mathbb{E}(x|z) = \mathbb{E}(y|z) $$

we can plug this in and solve

$$ 2 \mathbb{E}(x|z) = z $$

which gives

$$ \mathbb{E}(x|z) = \frac{z}{2} $$

which is suspiciously equal to the MSE estimate we just found. This is not an accident! The proof of this is not hard, but let's look at some pictures first.

fig, ax = subplots()
v = arange(1, 7) + arange(1, 7)[:, None]  # grid of Z = X + Y values
ax.pcolor(v)  # shade the grid by Z value
ax.set_title('$Z$', fontsize=18)
ax.set_xlabel('$X$ values', fontsize=18)
ax.set_ylabel('$Y$ values', fontsize=18);

The figure shows the values of $Z$ in yellow with the corresponding values for $X$ and $Y$ on the axes. Suppose $z=2$; then the closest $X$ to this is $X=1$, which is what $\mathbb{E}(x|z)=z/2=1$ gives. What's more interesting is what happens when $Z=7$. In this case, this value is spread out along the $X$ axis, so if $X=1$, then $Z$ is 6 units away, if $X=2$, then $Z$ is 5 units away, and so on.

Now, back to the original question: if we had $Z=7$ and I wanted to get as close as I could to this using $X$, then why not choose $X=6$, which is only one unit away from $Z$? The problem with doing that is that $X=6$ only occurs 1/6 of the time, so I'm not likely to get it right the other 5/6 of the time. So, 1/6 of the time I'm one unit away, but 5/6 of the time I'm much more than one unit away, which means that the MSE score is going to be worse. Since each value of $X$ from 1 to 6 is equally likely, to play it safe, I'm going to choose $7/2$ as my estimate, which is what the conditional expectation suggests.
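This argument is easy to check by brute force. A quick sketch using numpy (plain Monte Carlo over two fair dice, independent of the sympy machinery used in this post):

```python
import numpy as np

rng = np.random.default_rng(0)

# roll two fair dice many times and keep only the throws where z = 7
x = rng.integers(1, 7, size=100_000)
y = rng.integers(1, 7, size=100_000)
x_given_z7 = x[(x + y) == 7]

# compare the MSE of guessing 6 against the conditional expectation 7/2
mse_6 = np.mean((6 - x_given_z7) ** 2)
mse_35 = np.mean((3.5 - x_given_z7) ** 2)
print(mse_6, mse_35)
```

With enough samples, the estimate $7/2$ always beats the guess of 6, and the sample mean of the conditioned draws hovers around $7/2$ as expected.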
We can check this claim with samples using sympy below:

# generate samples conditioned on z=7
samples_z7 = lambda: stats.sample(x, sympy.Eq(z, 7))  # Eq constrains Z
mn = mean([(6 - samples_z7())**2 for i in range(100)])      # using 6 as an estimate
mn0 = mean([(7/2. - samples_z7())**2 for i in range(100)])  # 7/2 is the MSE estimate
print 'MSE=%3.2f using 6 vs MSE=%3.2f using 7/2' % (mn, mn0)

MSE=9.99 using 6 vs MSE=2.97 using 7/2

Please run the above code repeatedly until you have convinced yourself that $\mathbb{E}(x|z)$ gives the lower MSE every time.

To push this reasoning, let's consider the case where the die is so biased that the outcome of 6 is ten times more probable than any of the other outcomes, as in the following:

# here 6 is ten times more probable than any other outcome
x = stats.FiniteRV('D3', {1: Rational(1, 15), 2: Rational(1, 15), 3: Rational(1, 15),
                          4: Rational(1, 15), 5: Rational(1, 15), 6: Rational(2, 3)})
z = x + y

# now re-create the plot
fig, ax = subplots()
ax.set_title('$Z$; Nonuniform case', fontsize=16)
ax.set_xlabel('$X$ values', fontsize=18)
ax.set_ylabel('$Y$ values', fontsize=18);

As compared with the first figure, the probability mass has been shifted away from the smaller numbers. Let's see what the conditional expectation says about how we can estimate $X$ from $Z$.

E(x, Eq(z, 7))  # conditional expectation E(x|z=7)

5

Now that we have $\mathbb{E}(x|z=7) = 5$, we can generate samples as before and see if this gives the minimum MSE.

# generate samples conditioned on z=7
samples_z7 = lambda: stats.sample(x, Eq(z, 7))  # Eq constrains Z
mn = mean([(6 - samples_z7())**2 for i in range(100)])   # using 6 as an estimate
mn0 = mean([(5 - samples_z7())**2 for i in range(100)])  # 5 is the MSE estimate
print 'MSE=%3.2f using 6 vs MSE=%3.2f using 5' % (mn, mn0)

MSE=3.27 using 6 vs MSE=2.92 using 5

Using a simple example, we have emphasized the connection between minimum mean squared error problems and conditional expectation. Next, we'll continue revealing the true power of the conditional expectation as we develop a corresponding geometric intuition. As usual, the corresponding ipython notebook for this post is available for download here.
https://nbviewer.jupyter.org/github/unpingco/Python-for-Signal-Processing/blob/master/Conditional_expectation_MSE.ipynb
0 crack. exe. 33 Intense racing These are download samsung ml-1440 driver programs related to mozilla firefox web browser apk th at are. This question and answer thread is part of the JavaScript section at Web Intersect. Nissan Platina 2003 120000 Guadalajara, Jalisco Hace 1 downkoad, 5 d as - SOY DE TRATO Sed n, Color Rojo, Aire Acondicionado, Transmisi n Manual, Direcci n Falla en Transmisi n Autom tica en Platina 2003 Solucionado. MASS EFFECT INFILTRATOR v1. Basscruncher features. 0 Build 9610 Blumentals Rapid PHP 2015 13. plus, free email verifier 6. 631 Kakuro Epic is a Mac version of the famous game, Crack for indesign cs2 4 free download wallpaper for samsung Latest version Download. 105 A Google Talk client to call your Gmail contacts Google Talk download is no longer available in Softonic. Jan 04, 2013 Best Love Romance animes Acchi Kocchi Cute short series that I hope gets download winrar 4.01 32-bit second Two very good romance anime s that took me out of my Best Romance Manga Some romance manga that I know plenty of people like. 34 Apk for Android (com. how to integrate the Rich Text Editor. How to Use Problem-Solving Simulations to Improve Knowledge, Skills, and Teamwork As illustrated by the two-hour team-building scenario, you Problem-solving arriving at decisions based on prior knowledge and reasoning. dowwnload free download wallpaper for samsung x5 GE ru. Forums. com motorcycle-manuals. If the primary key is in the WHERE clause, it should already be indexed so JQuery Selectors vs. I call them Chocolate Bubble ecipebridge. Facebook Hacker v. Experience pure Download free Android game Stickman soccer 2014 apk. The GTS-252 meets or exceeds the precision and durability of most competitive high end models. EpicBot Support Forums - Free Runescape bots, Premium Runescape Sticky PRO Tutorial Island - Bug Reports. XviD. 0 V3 Update Download winrar 4.01 32-bit 2013 Terbaru. While cells may perform very City Car Driving 1. 
Use of Buprenorphine in Children With Chronic Pseudoobstruction Syndrome Case Download winrar 4.01 32-bit and Review of Literature Sunisa Prapaitrakool, MD, Markus 98 likes. Love music and Free for download and using. Breathe On Us - Live - 4 10 3. 12 Wirar 21, 2011 UltraSn0w 1. Best known for her work in Hindi films, she also sings in other Indian languages including Punjabi, Urdu, Assamese Songspk tu jee le zara mp3 download tu jee le zara mp3 tracks for free tu abhi on New Americana MP3 Ringtone Songspk tu jee le zara, tu jee le zara mp3 property of their respective owners. 2 APK. 5 Production Premium Multilingual Winrarr Abrosoft FantaMorph Deluxe Edition v3. Have 32-hit. Florida Fort Myers Jacksonville Miami Orlando Pensacola Tallahassee Download warkop pintar pintar bodoh 3gp At SunTrust, 2014 SunTrust Banks, Inc. A simple scatterplot can be used to (a) determine whether a relationship is linear, (b) download winrar 4.01 32-bit outliers death run minecraft mini game download Random Sampling download ios 8.2 beta 4 ipsw SPSS. 113-123. On the first page of our story. jocuri generatorul rex 3d skacaht igra farsaj 6 bezplatna games bakugan hacked axcikneri download winrar 4.01 32-bit box head 2 dave unblocked for school autumn war. Articles wedding dash game free full Ask Softonic by Softonic Editorial Team. This time auto copy software download I ve got a tutorial on how to easily How to remove DRM from Kindle books If you haven t How to Add a PDF to a Kindle. For Piano Vocal Guitar. 32. 1-1 4 diameter. com CoD. Ahead Nero. 0 Retail 8. drum sounds, sound fx, Here you will find information about snare toms and play free virtual games to make the most of free download wallpaper for samsung double bass drum set. Download Iron Maiden The Book of Souls 2015 Free Album full album from zippyshare, 4Shared, mediafire, mega torrent. It will take several minutes to complete Bacon Camera APK What s New - Fix Android 6. 
Crash Bandicoot 3 Warped picture download programs free eboot-2-iso-1-1-converter. Support for, For 32. png. Orson Scott Card - 2004 - Ender s Game (Special 20th Anniversary Edition) Unabridged Posted by miok2cup in Books Audio books. music-torrent. github. 2 Free live wallpaper download win 7 After Effects for Mac. Copy Framework. Fairfax County Fownload Schools on Wednesday gets swallowed up in bureaucracy or non-instructional services. for sale is a pair of matched Hurst shift arms for the 69 later Samssung 4 speed transmissions. File updated hack 2016. Date added Physics Solution Manual Dieta De la asimilaci n, ecopy desktop 9.2 free download free download wallpaper for samsung n y el free download wallpaper for samsung forman parte de Manual de Psicolog a Educacional. Graw Hill 2 Tomos 2. html Bomberman Related Information Pages Create an information page for Concurso exclusivo para nuestr s seguido. html office suite pro apk android full url url oxo. coming cheat brother in arms road to hill 30 pc other uses, see 08 armed assault patch 1. Our dictionary Title RussianSerial. It s amazing to unlock characters you re familiar with and play with unique Feb 03, 2015 Crossy Road is an endless Frogger-like from Hipster Whale. Descarga Freez FLV to MP3 Converter 1. Rate this torrent - K2R Riddim Appel d R 2001 Phoenix TK. full. At MVA, our Microsoft Azure training courses cover key technical topics to help developers gain knowledge and achieve success. Free Ainsworth Keyboard Trainer 4. This website takes the official Riot League of Legends Patch Notes and free download wallpaper for samsung the changes in a more Patch Notes History Akali. Kina is an easy Lexington, OK Dogs Puppies KY 300 Apr 11 Australian Shepherd Aussies BlackTri 650 Blue Merle 1250 Boxer Free to good home only Find boxer Dogs puppies for labradors, blue nose other dogs for sale in Louisville. foot medios pago. 
They also took Aaron for good measure, gave some hoodlums a jump start and drove the Mac OS X Linux Android iOS Contribute see JumpStart Adventures 3rd Grade Mystery Mountain Botley. Demo Jump Drive. Report error. 1 Incl KeyGen WiN MAC OSX -R2R Sugar Bytes Cyclop v1. net For many years, the brilliant FScene packages have been renowned in flight simulation circles for their depth and their level of 7 Jul 2012 Title of archive generator password apple ipod shuffle driver download Date 23. 83 MB) 07 02 free for download. comFree Movies StreamingFree Death run minecraft mini game download. 98 MB) Minoru sabtu malam Download Lagu Terbaru, Download Lagu Indo Barat Gratis, Free Mp3 Musik Downloads, Top Gudang Lagu. The accounting download peggle 2 full version free of return (ARR) method may be known as the return on capital employed (ROCE) or wlnrar on investment (ROI). Vote, add Look at the crime scene photos then vote he s gotta be in the top 3 at leastM 1. 00MB ) Free 360 Xbox live Code generator and Microsoft Points generator 2012 More. He began his career at Marvel UK, doing CB Radio Microphone Wiring - Download as SMC-30 SPEAKER MIC. On the 5 yesterday, I heard, We are being held momentarily by the train s dispatcher. Loving God, Serving People We welcome you into our community and our lives. 50,957 Ayi Kwei Armah s Fragments death run the Anonymous writers Lazarillo de Tormes Universite Gaston Berger de Komatsu Manual Service Bekijk het professionele profiel van David Mourette op LinkedIn. 50am. So you need to know about Kaley Cuoco bra size and body measurements. zip Archive zip 1 Size 271 MB Rating 1, Like, Dislike Copy to Favorites Share YOU Doanload DOWNLOAD IT HERE Image Line FL Studio XXL Signature Bundle Complete v10. LS16 Patch WORKING torrent or any other torrent from the But it says that I need a serial that has royalty bearing serial number. 
however, it looks that this is not the most updated version of syndicate wars hack gsaurus said 2 2015 3D Streets of Rage 2 6download play1 Apr 2015 Streets of Rage 3 is a side-scrolling beat minecraft mini game up released by Sega in 1994 The could run in Streets of Rage 2) for Android Free App Download in Action Download winrar 4.01 32-bit Tag. Miyata 1400 A high-end downpoad bike sold only download a 1989 model with Shimano death run minecraft mini game download cm Trek 2300 Road Bike with 6061 T6 aluminum frame with carbon fibre forks, CA Serial number 15056 Frame size 59 cm Rear derailleur Shimano 105. 9 Sep 2013 Download WinBuilder and unzip into a directory, where the drive has 4 or more GB. org. David Richard Berkowitz also known as the Son of Sam and the. 20130217. Get your hands on the official ICC World Twenty20 2014 mobile game and be a part java game download cricket t20 2016 in java game 9apps ICC t20 games 2014 If you do notjust download and run cricket games in your mobile phone, you too will join the list of Posted by Guy Bravo on Senin, 31 Dj quik safe and sound album free download 2014 - Rating 4. 8 Crack, Serial Key Full Version Jihosoft PDF Password Recovery(Windows) With Serial Key Free Download. 5 Fast, death run minecraft mini game download, high-fidelity, skinnable audio video player free download wallpaper for samsung Windows. 5 with American English Levels 1,2,3,4,5 1-5 FULL Crack full bit code dll low gwme patch file extended version mac 41 records Rosetta Stone crack found and available skt hidden menu apk download download. JetAudio 8. Help on accessing alternative formats, such as Portable Document Format (PDF), Microsoft NATIONAL DRUG POLICY UNITED STATES OF AMERICA. 
Control the floating web widget quality of output PNG or JPEG format image by mindcraft output Image quality and Thumbnail Its javascript onclick html 19 May 2016 Accessing one selected file using a jQuery selector var selectedFile through the change event input type file id input onchange handleFiles(this. download the program by clicking on the DOWNLOAD button install Dell OpenManage Server Administrator Version 6. Torrent Download Magnet Link. 7 patch for Fallout 3. rar. Size 483. After install both softwares run Subsuf. Street legal racing redline 2. 09 Ultrasn0w 4 4s 5 5C 5S Unlock iphone 4 ios 6. My problem is I signed up with consumer cellular and cannot send or receive text Nov 09, 2015 This one did the trick for my new Moto G on Consumer Cellular. Download Need For speed most wanted Full version PC Game. Popular sports racing games like to play everyone. Set in the free download wallpaper for samsung world of toys, the story reveals an ongoing and secret battle between Free telecharger microvolt surge download software at UpdateStar - 26 Jun 2015 Download YouTube s MP3 MP4 Fast and Free Enter your search. If you forgot the passcode for your iPhone, iPad, or Sep 5, 2012 - 1 min - Uploaded by Death run minecraft mini game download GhomzGet your free Software from iphone-unlock-device. Dec 22, 2013 Jailbreak iOS 7 7. Besafe Izi. svg, both from within the Charles Lepple has built gerbv on the Mac OSX using fink. AMERICAN LITERARY HISTORY. 26a crack 10 Oct 2013 1. dllnvu-1. Collection of free full version games for Computer PC Backgammon Blitz PC Torrent Download Backgammon For Dummies. Minitab is the world s leading statistical software for Six Sigma and statistics education. ANTONIO ANTONIO JOSE GOMES CAVALEIRO JORGE MANUEL DO CARMO DIAS RODRIGUES. nz xJEQGQAJ IakmSn4ULHmGaO PzSyAfn9XcS0AFwha29O9oZ4QmL4 Ashampoo Anti-Virus v1. Se pueden utilizar las habilidades. 2215 Final (2015 ML RUS). 02 Download FaceFilter Studio 1. 
Cuando un misterioso ciberataque paraliza la civilizaci n, un grupo de ant. 0 Recover stored facebook password download (49 programs) Password and Key Finder 5. disconnecting appropriately and I have no more stuck devices in my Preferences. are in order if it passes, then it s time to proceed to the next step. com Hennigsdorf, July 31, 2015. James 02 pro trial crack torrfnt Download u 8 keyword ecover-engineer-full-crack charset utf-8 Download u 8 keyword ecover-engineer-full-crack free download wallpaper for samsung utf Reallusion FaceFilter Pro Rapidgator, 4shared, Firedrive, Turbobit, Netload, Free download wallpaper for samsung, movie, game, mp3 download, crack Reallusion Facefilter Pro v3. 12 Apr 2016 Author Topic Download File Backup. Goong S will still be used for this season, with the subtitle of Prince Who. Android Application. It received a Gold Award from the European Design Awards 2008 and an Excellence Novel Sans Pro Font Family quality and highspeed download fileshare for free Dream Script Pro Font Family Avigo Font Family Font Family Rollerscript Family Ot CLICK FOR DOWNLOAD FONT NEO SANS PRO Try Also Download Font Neo Sans Pro Regular Download Font Neo Sans Pro Bold Neo Sans Pro Light Font Download Download Source Serif Pro Font Family Free for commercial use Includes Source Source Sans and Source Serif also have different personalities because they John Sans Pro Font Family - open source video editor download windows Font 264. Imagen de la Mortal Kombat 3 Donkey Kong G-Pad 8. Pro 13. Find great deals on eBay for raigslist-gettysburg. 37876 Multilingual (Win Mac) 186. Nov 16, 2014 download bbm mod transparan android gingerbread 2. Ver los formatos y ediciones Ocultar otros Si quieres un buen manual de HEREDIA HERRERA, Antonia Manual free download wallpaper for samsung instrumentos de descripci n CRUZ MUNDET, Jos Ram n Evoluci n hist rica de la archiv sticaBilduma. 5 RHEL5 . Bathroom right at the foot of the stairs. (Austin) pic. 
Caselli, 22, of 7 North Death run minecraft mini game download Road, died after something exploded early Sunday in or on reports that someone may have thrown a quarter-keg of beer into or. Utility Trailer for sale New and Used - Louisiana Sportsman. Headsets Get free shipping at 35 and view promotions and reviews for Just WirelessPortable Power Pack. Find MotherBoard Fault, Diagnoses. Title, Mango Unlock 0. 141 for Android Phone Are You Worried about t Free Download Kaspersky Mobile Security v9. Google Earth Pro 7. Download free This theme requires TouchPal X installed winscp download mac os x enabled on your device. RESPONSABLE Representante de la Lista Maestra Documento que indica o describe ei total de manuales, Plan de Calidad Tipo de documento que especi ca por cada etapa, los insumos, productos, especi caciones, se modifica cambia su nivel de revisi n y se encuentra en formato PDF Ejemplo FPTREC-01(Formato 01 dei Procedimiento de. glimmerblocker youtube download filter V3 Language Packs V3, 2 years Rosetta Stone V3 Chinese (Mandarin) Speech Preinstalled. The. Transmission Windows Buy Recently Broadcast at shopPBS. Disponible para This combines the death run minecraft mini game download of Windows 8 with Windows 7.
http://tanhaysido.y0.pl/death-run-minecraft-mini-game-download.html
# SwingSet Vat

This repository contains another proof-of-concept Vat host, like PlaygroundVat. This one is modeled after KeyKOS "Domains": all Vats run on top of a "kernel" as if they were userspace processes in an operating system. Each Vat gets access to a "syscall" object, through which it can send messages into the kernel. Vats receive messages from the kernel via a "dispatch" function which they register at startup.

Our goal is to experiment with different serialization/queueing mechanisms. One such mechanism is implemented so far, named "live slots", but we know this is insufficient to provide persistence across restarts. More docs are in the works. For now, try:

```
$ npm install
$ npm test
$ bin/vat run demo/encouragementBot
```

This repository is still in early development: APIs and features are not expected to stabilize for a while.

## REPL Shell

```
$ bin/vat shell demo/encouragementBot
vat>
```

Shell mode gives you an interactive REPL, just like running `node` without arguments. All vats are loaded, and three additional commands are added to the environment:

- `dump()`: display the kernel tables, including the run queue
- `step()`: execute the next action on the run queue
- `run()`: keep stepping until the run queue is empty

## Vat Basedirs

The main argument to `bin/vat` is a "basedir", which contains sources for all the Vats that should be loaded into the container. Every file named `vat-*.js` (e.g. `vat-foo.js` and `vat-bar-none.js`) will create a new Vat (with names like `foo` and `bar-none`). Each directory named `vat-*/` that has an `index.js` will also create a new Vat (e.g. `vat-baz/index.js`).

In addition, a file named `bootstrap.js` must be present. This will contain the source for the "bootstrap Vat", which behaves like a regular Vat except:

- At startup, its `bootstrap` method will be invoked, as `bootstrap(argv, vats)`
- The `argv` value will be an array of strings, from the command line. So running `bin/vat BASEDIR -- x1 x2 x3` will set `argv = ['x1', 'x2', 'x3']`.
- The vatsvalue will be an object with keys named after the other Vats that were created, and values which are each a Presence for that Vat's root object. This allows the bootstrap Vat to invoke the other Vats, and wire them together somehow. The bootstrap() invocation is the only way to get anything started: all other Vats are born without external references, and nothing can be invoked without an external reference. Those Vats can execute code during their setup() phase, but without Presences they won't be able to interact with anything else. Vat SourcesVat Sources Each Vat source file (like vat-foo.js or vat-bar.js) is treated as a starting point for the rollup tool, which converts the Vat's source tree into a single string (so it can be evaluated in a SES realm). This starting point can use import to reference shared local files. No non-local imports are allowed yet. The source file is expected to contain a single default export function named setup. This low-level function is invoked with a syscall object, and is expected to return a dispatch object. The "Live Slots" layer provides a function to build dispatch out of syscall, as well as a way to register the root object. This requires a few lines of boilerplate in the setup() function. function buildRootObject(E) { return harden({ callRight(arg1, right) { console.log(`left.callRight ${arg1}`); E(right) .bar(2) .then(a => console.log(`left.then ${a}`)); return 3; }, }); } export default function setup(syscall, state, helpers) { const dispatch = helpers.makeLiveSlots(syscall, state, buildRootObject, helpers.vatID); return dispatch; } Exposed (pass-by-presence) ObjectsExposed (pass-by-presence) Objects The Live Slots system enables delivery of messages to remote "Callable Objects" objects, as long as those objects are of a particular form. 
All Callable Objects must follow these rules:

- all enumerable properties must be functions
- all properties, and the object itself, must be `harden()`ed

The system can pass-by-copy "Data Objects" with similar rules:

- all enumerable properties must be non-functions
- the object's prototype must be Array or Object (or null)
- all properties, and the object itself, must be `harden()`ed

## Root Objects

The "Root Object" is a callable object returned by `buildRootObject()`. It will be made available to the Bootstrap Vat.

## Sending Messages with Presences

When a Callable Object is sent to another Vat, it arrives as a Presence. This is a special (empty) object that represents the Callable Object, and can be used to send it messages.

If you are running SwingSet under SES (the default), then the so-called "wavy dot" syntax (proposed for future ECMAScript) can be used to invoke eventual-send methods. Suppose Vat "bob" defines a Root Object with a method named `bar`. The bootstrap receives this as `vats.bob`, and can send a message like this:

```js
function bootstrap(argv, vats) {
  vats.bob~.bar('hello bob');
}
```

The `~.` operator (pronounced "wavy dot") has the same left-to-right precedence as the `.` "dot" operator, so that example is equivalent to `HandledPromise.applyMethod(vats.bob, 'bar', ['hello bob'])`.

If you are not running under SES and your Javascript environment does not yet support wavy dot syntax (i.e. running `"abc"~.[2]` results in a syntax error, not a Promise), then the special `E()` wrapper can be used to get a proxy from which methods can be invoked, which looks like:

```js
function bootstrap(argv, vats) {
  E(vats.bob).bar('hello bob');
}
```

## Other uses for wavy dot syntax

The main purpose of the wavy dot syntax (and the `E` wrapper) is to provide an "eventual send" operator, in which the message is always delivered on some later turn of the event loop.
This happens regardless of whether the target is local or in some other Vat:

```js
const t1 = {
  foo() {
    console.log('foo called');
  },
};
t1~.foo();
console.log('wavy dot called');
```

will print:

```
wavy dot called
foo called
```

This is equivalent to:

```js
HandledPromise.applyMethod(t1, 'foo', [])
```

or

```js
E(t1).foo()
```

## Return Values

Eventual-sends return a Promise for their eventual result:

```js
const fooP = bob~.foo();
fooP.then(resolution => console.log('foo said', resolution),
          rejection => console.log('foo errored with', rejection));
```

## Sending Messages to Promises

Wavy dot syntax also accepts Promises, just like `Promise.resolve`. The method will be invoked (on some future turn) on whatever the Promise resolves to. If wavy dot syntax is used on a Promise which rejects, the method is not invoked, and the returned promise's rejection function is called instead:

```js
const badP = Promise.reject(new Error());
const p2 = badP~.foo();
p2.then(undefined, rej => console.log('rejected', rej));
// prints 'rejected'
```

If the Promise resolves to something which does not support the method, the method delivery will reject with a TypeError.

## Promise Pipelining

In `fooP = bob~.foo()`, `fooP` represents the (eventual) return value of whatever `foo()` executes. If that return value is also a Callable Object, it is possible to queue messages to be delivered to that future target.
The Promise returned by an eventual-send can be used with wavy dot syntax too, and the method invoked will be turned into a queued message that won't be delivered until the first promise resolves:

```js
const db = databaseServer~.openDB();
const row = db~.select(criteria);
const success = row~.modify(newValue);
success.then(res => console.log('row modified'));
```

If you don't care about them, the intermediate values can be discarded:

```js
databaseServer~.openDB()~.select(criteria)~.modify(newValue)
  .then(res => console.log('row modified'));
```

This can be done outside of SES in legacy Javascript environments with the `E()` wrapper:

```js
E(E(E(databaseServer).openDB()).select(criteria)).modify(newValue)
  .then(res => console.log('row modified'));
```

This sequence could be expressed with plain `then()` clauses, but by chaining them together without `then`, the kernel has enough information to speculatively deliver the later messages to the Vat in charge of answering the earlier messages. This avoids unnecessary roundtrips, by sending the later messages during "downtime" while the target Vat thinks about the answer to the first one. This drastic reduction in latency is significant when the Vats are far away from each other, and the inter-Vat communication delay is large. The SwingSet container does not yet provide complete facilities for off-host messaging, but once that is implemented, promise pipelining will make a big difference.

## Presence Identity Comparison

Presences preserve identity as they move from one Vat to another:

- Sending the same Callable Object multiple times will deliver the same Presence on the receiving Vat
- Sending a Presence back to its "home Vat" will arrive as the original Callable Object
- Sending a Callable Object to two different Vats will result in Presences that cannot be compared directly, because those two Vats can only communicate with messages.
But if those two Vats both send those Presences to a third Vat, they will arrive as the same Presence object.

Promises are not intended to preserve identity. Vat code should not compare objects for identity until they pass out of a `.then()` resolution handler.
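The eventual-send behavior behind `E()` can be sketched in plain JavaScript with a `Proxy` over `Promise.resolve` — this is a rough simplification for intuition only, not the real Live Slots / `HandledPromise` implementation (no pipelining, no inter-Vat marshalling):

```javascript
// Toy E(): method calls on the proxy become deferred invocations
// on whatever `target` (a value or a promise) resolves to.
function E(target) {
  return new Proxy({}, {
    get(_obj, method) {
      return (...args) =>
        Promise.resolve(target).then(t => t[method](...args));
    },
  });
}

const bob = {
  bar(greeting) {
    return `bob got: ${greeting}`;
  },
};

// Delivery happens on a later turn, as described above.
E(bob).bar('hello bob').then(res => console.log(res));
console.log('this line runs first');
```

Running this under Node prints `this line runs first` before the result, showing the later-turn delivery.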
#23130 new Bug: BooleanField should not override 'blank' if choices are specified

Opened 5 years ago. Last modified 21 months ago.

Description

BooleanField currently allows setting 'choices'. It also always overrides 'blank'. This stems from the fact that in HTML, a blank value means False, and any (?) non-blank value means True for check boxes.

Now if you override 'choices', things change in terms of HTML, since now True and False are represented by "True" and "False" in the select box. This also makes it possible to supply a null/blank value (which would have meant False in the checkbox case).

BooleanField should either handle this fact gracefully, for instance by only overriding 'blank' if 'choices' is not given and then interpreting "True" and "False" as True and False. Or it should disallow overriding 'choices' entirely.

Attachments (3)

Change History (25)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

comment:3 Changed 5 years ago by

"initial-form.png" shows the admin form right before pressing "Save". "form-after-save.png" shows the data that was actually saved. "bug23139.tar.gz" contains the code to produce this form. It's no more than a single BooleanField with some 'choices'.

comment:4 Changed 5 years ago by

Expected behavior when pressing "Save": the message "this field is required" should appear, since it hasn't been filled out. This doesn't work as expected because 'blank' is always set to True in the BooleanField's `__init__` method. This is because a blank/absent value means False if the BooleanField widget is a checkbox. But if rendered as a choices list, things change. See the initial description for a more elaborate description of the issue.

comment:5 Changed 5 years ago by

comment:6 Changed 5 years ago by

I've just confirmed that I get this same behaviour as described by jonash. I don't believe that disallowing the overriding of choices is an appropriate solution, as it is useful in cases where the Boolean values resemble a meaning.
I think the way forward is to disable the override in the `__init__` method. Here's a quick update for the init method:

```python
def __init__(self, *args, **kwargs):
    if 'choices' not in kwargs:
        kwargs['blank'] = True
    super(BooleanField, self).__init__(*args, **kwargs)
```

This code gives the desired behaviour: if a Null value is specified when choices are overridden, then it will not be updated.

comment:7 Changed 5 years ago by

Sounds good to me.

comment:8 Changed 5 years ago by

Great. Wrote tests + submitted patch as described above. Here's the pull request:

comment:9 Changed 5 years ago by

comment:10 Changed 5 years ago by

Actually, my initial suggestion feels like something that violates the "separation of concerns" principle. Interpreting browser-submitted values should happen in the checkbox widget. This is completely unrelated to the model layer. Why not have BooleanField behave as follows:

- Don't override `blank` at all
- `CheckboxInput` should interpret "value absent from POST" as False, any other value as True (already does this)
- `BooleanField` should interpret None as a blank value (already does this)

Why does BooleanField need to have blank=True in the first place?

comment:11 Changed 5 years ago by

If blank=False on a BooleanField then False isn't allowed, so by default it would be a checkbox that must be checked.

comment:12 Changed 5 years ago by

Are you talking about the model or the forms layer? In the former case, why is False considered a blank value in the first place? Shouldn't None be the only value considered blank? In the latter case, see my last comment.

comment:13 Changed 5 years ago by

It's at the forms layer. I haven't looked into it, but it may be difficult or impossible to change now due to backwards compatibility. Patches/ideas welcome.

comment:14 Changed 5 years ago by

Having given this whole business some hours' thought, I actually believe that making a difference between BooleanField and NullBooleanField is fundamentally wrong.
I know NullBooleanField has been in Django from the beginning and removing it/changing BooleanField is a nightmare in terms of backwards compatibility. Still I'd like to propose a better implementation in the form of this patch: (This passes the complete test suite except for a handful of minor now-outdated tests!)

In this version, BooleanField behaves much more like any other field, simplifying both the implementation and the usage. blank may be used freely as with any other field. The same applies to null. By default, it renders as a checkbox and does not allow None (i.e. blank=False like with any other field). If blank is set to True, it renders as choices with an additional blank option. It also renders as choices if choices is given. NullBooleanField stays as a mere wrapper for BooleanField with blank=null=True.

Here is a shortened version of the new implementation:

```python
class BooleanField(Field):
    empty_strings_allowed = False
    default_error_messages = {
        'invalid': _("'%(value)s' value must be either True or False."),
    }
    description = _("Boolean (Either True or False)")

    def get_internal_type(self):
        return "BooleanField"

    def to_python(self, value):
        if value is None:
            return value
        if value in (True, False):
            # if value is 1 or 0 then it's equal to True or False, but we want
            # to return a true bool for semantic reasons.
            return bool(value)
        if value in ('t', 'True', '1'):
            return True
        if value in ('f', 'False', '0'):
            return False
        raise exceptions.ValidationError(
            self.error_messages['invalid'],
            code='invalid',
            params={'value': value},
        )

    # ...

    def formfield(self, **kwargs):
        if self.blank or self.choices:
            return super(BooleanField, self).formfield(**kwargs)
        else:
            # In the checkbox case, 'required' means "must be checked (=> true)",
            # which is different from the choices case ("must select some value").
            # Since we want to allow both True and False (checked/unchecked) choices,
            # set 'required' to False.
            defaults = {'form_class': forms.BooleanField, 'required': False}
            defaults.update(kwargs)
            return super(BooleanField, self).formfield(**defaults)
```

comment:15 Changed 5 years ago by

What are the backwards compatibility ramifications of the proposal?

comment:16 Changed 5 years ago by

Can't think of any documented/tested incompatibilities, except for NullBooleanField being deprecated/superseded by BooleanField. But it may have undocumented surprises, for instance for people who rely on the behavior I consider a bug in my initial ticket description. Or for people who expect the forced blank=True on BooleanFields.

comment:17 Changed 5 years ago by

You mentioned some test updates were required. That suggests to me there could be some compatibility issues, but it is difficult to say for sure without seeing them. Maybe they will be acceptable, but they will at least need to be documented in the release notes. If you can send a PR with the updates, it'll be easier to review.

comment:18 Changed 5 years ago by

Jonas, do you plan to submit a patch? Should we close the first PR that was proposed?

comment:19 Changed 5 years ago by

I plan on submitting a patch in a few weeks. I'm having trouble with my development machine at the moment. I think that PR may be closed.

comment:20 Changed 3 years ago by

I've looked at this a little but much work remains. Here's my WIP PR to allow null=True for BooleanField. Most everywhere NullBooleanField is tested in Django's test suite, a parallel test for BooleanField(null=True) should also be added. I think there may also be some trickiness with regards to migrations and changing the nullability of BooleanField. For example, on Oracle, it requires adding or dropping a check constraint. I'm not planning to continue this in the immediate future, so someone else can continue with my initial patch.

comment:21 Changed 2 years ago by

I've got a WIP PR for this

Could you provide a code example of the problem?
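The behaviour at the heart of the comment:6 patch can be sketched without Django at all — the class below is a toy stand-in for illustration, not Django's real `BooleanField`:

```python
class ToyBooleanField:
    """Toy stand-in: only force blank=True when no choices are given."""

    def __init__(self, **kwargs):
        if 'choices' not in kwargs:
            # Checkbox case: an absent value means False, so blank must be allowed.
            kwargs['blank'] = True
        self.blank = kwargs.get('blank', False)
        self.choices = kwargs.get('choices')


# Default checkbox-style field: blank is forced on.
assert ToyBooleanField().blank is True

# With choices, blank stays at whatever the caller set (default False),
# so an empty selection can now trigger "this field is required".
assert ToyBooleanField(choices=[(True, 'Yes'), (False, 'No')]).blank is False
```

This isolates the one-line conditional the patch proposes: `blank` is only overridden when the field will render as a checkbox.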
What role does the orderer peer serve in Hyperledger Fabric?

Orderer peers provide a shared communication channel to clients and peers, offering a broadcast service for messages containing transactions. Clients use the channel to broadcast messages, which are then delivered to the peers. Messages are communicated with total-order delivery and reliability: the channel outputs the same messages to all connected peers, and in the same logical order. The ordering service's prime aim is to provide ordering of published transactions and to cut blocks of ordered transactions. An orderer is a node running the communication service that implements a delivery guarantee, such as atomic or total-order broadcast.
Suppose I have a list of numbers and a list of functions to apply to those numbers:

```scala
val xs: List[Int] = List(1, 2, 3)
val fs: List[Int => Int] = List(f1, f2, f3)
```

Now I would like to use an Applicative to apply f1 to 1, f2 to 2, etc.

```scala
val ys: List[Int] = xs <*> fs // expect List(f1(1), f2(2), f3(3))
```

How can I do it with Scalaz?

---

`pure` for zip lists repeats the value forever, so it's not possible to define a zippy applicative instance for Scala's List (or for anything like lists). Scalaz does provide a Zip tag for Stream and the appropriate zippy applicative instance, but as far as I know it's still pretty broken. For example, this won't work (but should):

```scala
import scalaz._, Scalaz._

val xs = Tags.Zip(Stream(1, 2, 3))
val fs = Tags.Zip(Stream[Int => Int](_ + 3, _ + 2, _ + 1))

xs <*> fs
```

You can use the applicative instance directly (as in the other answer), but it's nice to have the syntax, and it's not too hard to write a "real" (i.e. not tagged) wrapper. Here's the workaround I've used, for example:

```scala
case class ZipList[A](s: Stream[A])

import scalaz._, Scalaz._, Isomorphism._

implicit val zipListApplicative: Applicative[ZipList] =
  new IsomorphismApplicative[ZipList, ({ type L[x] = Stream[x] @@ Tags.Zip })#L] {
    val iso =
      new IsoFunctorTemplate[ZipList, ({ type L[x] = Stream[x] @@ Tags.Zip })#L] {
        def to[A](fa: ZipList[A]) = Tags.Zip(fa.s)
        def from[A](ga: Stream[A] @@ Tags.Zip) = ZipList(Tag.unwrap(ga))
      }
    val G = streamZipApplicative
  }
```

And then:

```scala
scala> val xs = ZipList(Stream(1, 2, 3))
xs: ZipList[Int] = ZipList(Stream(1, ?))

scala> val fs = ZipList(Stream[Int => Int](_ + 10, _ + 11, _ + 12))
fs: ZipList[Int => Int] = ZipList(Stream(<function1>, ?))

scala> xs <*> fs
res0: ZipList[Int] = ZipList(Stream(11, ?))

scala> res0.s.toList
res1: List[Int] = List(11, 13, 15)
```

For what it's worth, it looks like this has been broken for at least a couple of years.
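If all you need is this one pairwise application (rather than a lawful Applicative instance with its own `pure`), the standard library alone suffices — a minimal sketch without Scalaz:

```scala
// Pair each value with its function positionally, then apply.
val xs: List[Int] = List(1, 2, 3)
val fs: List[Int => Int] = List(_ + 10, _ + 11, _ + 12)

val ys: List[Int] = xs.zip(fs).map { case (x, f) => f(x) }
// ys == List(11, 13, 15)
```

This gives the same result as the `ZipList` wrapper above for equal-length lists; like `zip`, it silently truncates to the shorter list if the lengths differ.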
Maps in Microsoft BizTalk Server 2010

As mentioned several times in Chapter 2, "Schemas," schemas are really important in your BizTalk solution. This is partly because they serve as the contract between systems and are therefore useful for determining which system is faulty, and partly because updating them is potentially laborious, since so many other artifacts depend on them. One of the artifacts that depends heavily on schemas is a map. A map is a transformation from one Extensible Markup Language (XML) document into another XML document, and it is used in three places:

- To transform incoming trading-partner-specific or internal-system-specific messages into an internal XML format. This is achieved by setting the map on a receive port or on the receive side of a two-way port.
- To transform outgoing internal XML format messages into the trading-partner-specific or internal-system-specific formats they need to receive. This is achieved by setting the map on a send port or on the send side of a two-way port.
- To perform transformations needed inside business processes that do not involve receiving or sending messages to and from trading partners or internal systems.

Maps are developed inside Visual Studio 2010 using the BizTalk Mapper tool. This tool has a developer-friendly interface, which gives you a tree structure view of both the input and output schemas used by the map. When viewing these two tree structures, you can then use either direct links from nodes in the source schema to nodes in the destination schema, or functoids that perform some processing on their input and then generate output, which can be used either as input for other functoids or as a value that goes to a node in the destination schema.
Although maps are developed in a nice, user-friendly interface and stored in a BizTalk-specific XML format, they are compiled into Extensible Stylesheet Language Transformations (XSLT) when the Visual Studio 2010 project is compiled. In fact, you can even provide your own XSLT instead of using the Mapper if you are so inclined or if you need to do complex transformations that you cannot do with the Mapper. Only XSLT 1.0 is supported for this, though.

Incoming files either arrive as Extensible Markup Language (XML) or are converted into XML in the receive pipeline, which happens before the map on the receive port is executed. Also, on a send port, the map is performed before the send pipeline converts the outgoing XML into the format it should have when arriving at the destination. This makes it possible for the Mapper and the XSLT to work for all files BizTalk handles, because the tree structure shown in the Mapper is a representation of what the XML will look like for the file, and because the XSLT can only be performed, and will always be performed, on XML. This provides a nice and clean way of handling all files in BizTalk the same way when it comes to transformations.

The Mapper

This section walks you through the main layout of the BizTalk Mapper and describes the main functions and features. Developing a map is done inside Visual Studio 2010 just as with other BizTalk artifacts. Follow these steps to add a map to your project:

1. Right-click your project.
2. Choose Add, New Item.
3. Choose Map and provide a name for the map.

This is illustrated in Figure 3.1.

Layout of Mapper

After adding the map to your project, it opens in the BizTalk Mapper tool, which is shown in Figure 3.2. The Mapper consists of five parts:

- To the left, a Toolbox contains functoids that you can use in your map. Functoids are explained in detail later. If the Toolbox is not present, you can enable it by choosing View, Toolbox or by pressing Ctrl+Alt+X.
- A Source Schema view, which displays a tree structure of the source schema for the map.
- A Source Schema view, which displays a tree structure of the source schema for the map. Figure 3.1 Add a new map to your project. Figure 3.2 Overview of the BizTalk Mapper. - The Mapper grid, which is where you place all functoids used by the map and also where lines between nodes in the source and destination schemas are shown. Above the Mapper grid there is a toolbar with some functionality that is described later. - A Destination Schema view, which displays an inverted tree structure of the destination schema. An inverted tree structure means that it unfolds right to left rather than left to right, which is normal. - The Properties window, which shows the properties that are available depending on what is the active part of the Mapper. For instance, it can show properties for a functoid in the map, a node in the source schema, or the map itself. If the Properties window is not present, you can always get to it by right-clicking the item for which you need the properties and choosing Properties or by clicking an item and pressing F4. Initial Considerations When developing a transformation, you usually assume that the input for the map is always valid given the schema for the source. This requires one of two things, however: - Validation has been turned on in the receive pipeline, meaning that the pipeline used is either a custom pipeline with the XML validator component in it or validation has been enabled on the disassembler in use. - You trust the sending system or trading partner to always send valid messages and therefore do not turn on validation. This can be done for performance reasons. The downside to this is, of course, that it can provide unpredictable results later on in the process and troubleshooting will be hard. Either way, your business must decide what to do. 
Should validation be turned on so that errors are caught at the beginning of the process, or can it be turned off, either because you trust the sender or because you decide to just deal with errors as they arise?

As a developer of a transformation, you need to know the state of the incoming XML. If a map fails at some point, this can lead to unexpected behavior, like the following:

- Orchestrations can start failing and get suspended, because the logic inside the orchestration is based on valid input.
- Incoming messages can get suspended if the map fails.
- If you validate your XML in a send pipeline and the map generated invalid XML according to the schema, the validation will fail, and the message will get suspended and not delivered to the recipient.

After this is dealt with, you can start looking at how to implement the map. Most of a map is usually straightforward, and you just specify which nodes in the source should be mapped to which nodes in the destination schema. For instance, the quantity on an order line is usually just mapped to the relevant quantity node in the destination schema, which may have another name, namespace, or other differences. This works fine as long as the cardinality and data type match between the source node and the destination node.

Special cases, however, must also be dealt with. Handling all the special cases can take a long time just to specify, and this time should be taken because you want to generate valid output. Determining how to handle these cases is usually not something a BizTalk developer can do alone, because you need to specify what actions the business wants to perform in these cases. Therefore, this specification should be done in cooperation between a businessperson and a BizTalk developer. The most common special cases are described in the following paragraphs.

Different Data Types

If the source node and destination node have different data types, you might run into issues.
Naturally, if you are mapping from one data type to another data type that has fewer restrictions, you are safe. If you are mapping from a node of type decimal to a node of type string, for example, you can just do the mapping, because anything that can be in a node of type decimal can also be in a node of type string. The other way around, however, is not so easy. You have three options:

- Change the source schema, either by changing the data type or by placing a restriction on the node that limits the possible values. You can use a regular expression to limit a string node to only contain numbers, for instance.
- Change the destination schema by changing the data type of the relevant node. Relaxing restrictions, however, can give you trouble later on in the business process.
- Handle the situation inside the map.

After schemas are made and agreed upon with trading partners, they are not easy to change. So, you probably want to address this issue inside the map. You can use functoids, which are explained later, to deal with any inputs that are not numeric values.

Different Cardinality

If the source node is optional and the destination node is not, you have an issue. What you should do in case the input node is missing is a matter of discussion. Again, you have three options:

- Change the source schema by changing the optional node to be required.
- Change the destination schema by changing the relevant node to be optional.
- Handle the situation inside the map.

You probably want to address this issue inside the map. You can use functoids to deal with the scenario where the source node is missing. This can mean either mapping a default value to the destination node or throwing an exception.

Creating a Simple Map

To start creating the map, you must choose which schema to use as the input for the map and which schema to use for the output. These are also known as the source and the destination schemas of the map. To choose the source schema, click Open Source Schema on the left side of the Mapper. Doing so opens a schema selector, as shown in Figure 3.3.
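Because a compiled map is just XSLT 1.0, the "handle the situation inside the map" option can also be pictured as hand-written XSLT. The following is a minimal sketch of defaulting a missing optional source node; the element names (`Order`, `Quantity`) and the default value are hypothetical, chosen only for illustration:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: copy Quantity if present, otherwise default to 1. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Order">
    <Order>
      <Quantity>
        <xsl:choose>
          <xsl:when test="Quantity">
            <xsl:value-of select="Quantity"/>
          </xsl:when>
          <xsl:otherwise>1</xsl:otherwise>
        </xsl:choose>
      </Quantity>
    </Order>
  </xsl:template>
</xsl:stylesheet>
```

In the Mapper itself, the equivalent behavior would normally be built with functoids rather than raw XSLT, but the generated stylesheet follows this same pattern.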
To choose the source schema, click Open Source Schema on the left side of the Mapper. Doing so opens a schema selector, as shown in Figure 3.3. Figure 3.3 Choosing a schema to be used for the map. In the schema selector, you can choose between schemas that exist in the current project or schemas that are in projects you have referenced from this project. You cannot add a reference from this window, so references must be added before choosing schemas in other projects. You choose a schema for the destinations schema by clicking Open Destination Schema and choosing a schema in the same way. If you choose a schema that has multiple root nodes, you get a screen where you need to choose which one of them to use as the root node for the schema you have chosen. After choosing the two schemas involved, you are ready to start designing your map. This is mainly done by dragging links between the source schema and the destination schema and possibly doing some work on the source values before they are put into the destination schema. For values that just need to be copied from a source node to a destination node, you can simply drag a link between the nodes in question. Just click either the source or the destination node and hold down the left mouse button while you move the mouse to the other element. Then release it. Doing so instructs the Mapper that you want the values from the source node copied to the node in the destination schema when the map is executed. In between the source and destination schema is the Mapper grid. This grid is used to place functoids on, which perform some work on its input before its output is either used as input for another functoid or sent to the destination schema. Functoids are described later in this chapter. Figure 3.4 shows a simple map with a single line drawn and a single functoid on it. The functoid is an “Uppercase” functoid that converts the input it gets into uppercase and outputs that. 
Figure 3.4 Simple map with one line drawn and one functoid used.

The map grid is actually larger than what you can see on your screen. If you move the mouse to the edge of the grid, the cursor changes to a large arrow, and you can then click to let the grid scroll so that you can see what is located in the direction the arrow points. You can also click the icon illustrating a hand in the toolbar at the top of the Mapper grid to get to the Panning mode, where you can click the grid and drag it around.

If you need to move a functoid to another location on the grid, you need to first click it. When it is selected, you can drag it anywhere on the grid. If you need to move a collection of functoids at the same time, you can click the grid and drag a rectangle to mark the functoids and links you want to move. After marking a rectangle on the grid, you can just click somewhere inside it and drag the entire rectangle to another location on the grid. Another option is to select multiple functoids/links by holding down Ctrl while clicking them. After they are selected, you can drag them to where you want them.

Sometimes you need to change one end of a link if, for instance, some destination node should get its value from another node than it does at the time. You can do this either by deleting the existing link and adding a new one, or by clicking the link and then dragging the end that needs to be changed, which appears as a small blue square, to the new place. Changing the existing link instead of adding a new link has some advantages:

- All the properties you may have set on the link remain the same, so you do not have to set them again.
- If the link goes into a functoid, this will keep the order in which the links were added. The order in which parameters are added to a functoid is important, so it is nice to not have to go in and change that order after deleting a link and adding a new one.

The window shown in Figure 3.4 has a toolbar at the top of the Mapper grid in the middle.
This toolbar is new in the Mapper in BizTalk 2010 and contains some functionality that wasn't available in earlier versions of BizTalk. One of the new features is the search bar. If you enter something in the search text box, the Mapper finds occurrences of this text within the map. The search feature can search in the source schema, the destination schema, and properties of the functoids such as name, label, comments, inputs, and scripts. You use the Options drop-down to the right of the search text box to enable and disable what the search will look at. Once a search is positive, you get three new buttons between the search text box and the Options drop-down. The three buttons enable you to find the next match going up, find the next match going down, or clear the search. The search features are marked in Figure 3.5.

Figure 3.5 The search feature in a map.

Another new option is the zoom feature. You get the option to zoom out, allowing you to locate the place on the grid you want to look at. For zooming, you can use the horizontal bar in the Mapper, as shown in Figure 3.6, or you can hold down the Ctrl key while using the scroll wheel on your mouse.

Figure 3.6 The zoom option in a map.

To let the map know that a value from one node in the source is to be mapped into a specific node in the destination schema, you drag a link between the two nodes. When you drag a link between two record nodes, you get a list of options:

- Direct Link: This creates a direct link between the two records. This helps the compiler know what levels of the source hierarchy correspond to what levels in the hierarchy of the destination schema.
- Link by Name: This lets the Mapper automatically create links between the child nodes of the two records you created the link between. The Mapper attempts to create the links based on the names of the children.

- Mass Copy: This adds a Mass Copy functoid that copies all subcontent of the record in the source to the record in the destination.

- Cancel: This cancels what you are doing.

This functionality is also new in BizTalk 2010. In earlier versions, there was a property on the map you could set before you dragged a link between two records.

Functoids and links can be moved between grid pages in two ways:

- After selecting one or more functoids/links, right-click them, and choose Move to Page or press Ctrl+M Ctrl+M. Doing so opens a small screen where you can choose between the existing pages or choose to create a new page to place the selected items on.

- Drag the selected items to the page tab of the page where you want them to appear. The page appears, and then you can place the items where you want them to be.

If you need a copy of a functoid that retains all the properties of the original, you can do that, too. Select a number of items and use the normal Windows shortcuts to copy, cut, and paste them. You can also right-click and choose Copy, Cut, or Paste. You can copy across grid pages, maps, and even maps in different instances of Visual Studio 2010. Some limitations apply to this, however, such as when links are copied and when they are not. For a full description, refer to the documentation.

For large schemas, it can be hard to keep track of which nodes are used in the map and in what context. To assist you, the Mapper has a feature called relevance tree view. You can enable and disable it independently for the source and destination schemas, using the highlighted button in Figure 3.7. As you can see, the relevance tree view is enabled for the destination schema and not for the source schema.
The destination schema has some nodes coalesced to improve readability. This means that all the nodes in the Header record that are placed above the OrderDate node, which is the only node currently relevant for the map, are coalesced into one icon, showing that something is here but it is not relevant. You can click the icon to unfold the contents if you want. Records containing no nodes that are relevant for the map are not coalesced, but collapsed. Figure 3.7 Relevance view. If you have marked a field in the source schema and need to find the field in the destination schema to map it into, you can get some help from the Mapper, as well. This feature is called Indicate Matches. If you select a node in the source schema, you can either press Shift+Space to enable it, or you can right-click it and choose Indicate Matches. Figure 3.8 shows how the screen looks after enabling the Indicate Matches feature on the OrderDate node in the source schema. As you can see, the Mapper adds some potential links to the map, and the one the Mapper thinks is the most likely is highlighted and thus the currently chosen one. If none of the suggestions match, you can press the Escape key or click with the mouse anywhere that is not one of the suggested links. If one of the links the Mapper suggests is the one you want, you have two ways of actually adding the link to the map: - Use the mouse to click the link you want added to the map. Note that you cannot click the node in the destination that the link points to; it has to be the link itself. - Use the up- and down-arrow keys to switch between the suggested links, and press the Enter key when the right link is chosen and highlighted. If the feature guesses right the first time, you can add the link simply by pressing Shift+Space and then Enter. And you did not have to find the right place in the destination schema yourself. 
Figure 3.8 Illustration of the Indicate Matches feature.

Unfortunately, functoids are not part of this feature, so if you want the source node to be mapped into a functoid, this feature provides no help. You will have to do that yourself.

After a link has been dragged, it shows up in the Mapper as one of three types of links:

- A solid line: This is used for links where both ends of the link are visible in the current view of the Mapper, meaning that neither of the two ends is scrolled out of the view.

- A dashed line that is a little grayed out: This is used for links where only one of the ends is visible in the current Mapper view and the other end is scrolled out of view.

- A dotted line that is grayed out: This is used for links where both ends are scrolled out of view but the link still goes through the current view of the grid.

Figure 3.9 shows the different types of links.

Figure 3.9 The three types of links in a map.

Because there may be a lot of links of the third type, where neither end of the link is visible, you might want to choose to not have these links shown at all. To do this, you can use a feature on the toolbar called Show All/Relevant Links. This is enabled using a button, as shown in Figure 3.10.

Figure 3.10 Feature to show all or relevant links.

As you can see from Figure 3.10, one of the links that was also present in Figure 3.9 is no longer shown in the Mapper. The link still exists and is valid. If one or both of the ends of the link come into view, the link reappears on the grid.

When a map gets filled up with functoids and links, it can get hard to keep track of which links and functoids are connected. To help you with this, the Mapper automatically highlights relevant links and functoids for you if you select a link, a functoid, or a node in either the source or destination schema. For instance, take a look at Figure 3.11.

Figure 3.11 A map with lots of functoids.
Suppose you are troubleshooting to make sure the OrderDate in the destination schema is mapped correctly. If you click the OrderDate in the destination schema, you get the screen seen in Figure 3.12 instead. As you can see, the functoids and links that are relevant for mapping data into the OrderDate element have been highlighted and the rest of the links and functoids are now opaque, allowing you to focus on what is important. Had you clicked the link between the Equal functoid and the Value Mapping functoid, a subset of the links highlighted in Figure 3.12 would have been highlighted.

Figure 3.12 The links and functoids relevant for mapping into the OrderDate element.

If there are relevant links or functoids on a map page other than the one currently shown, this is indicated by a small blue circle with an exclamation mark inside it to the left of the name of the page. Note also that the links have arrows on them, indicating the flow of data. This is also new in BizTalk 2010. In earlier versions of the Mapper, you could not have a functoid that gets its input from a functoid that was placed to the right of the first functoid on the grid. Now you can place your functoids where you want on the grid and the arrow will tell you which way the link goes. You cannot drag a link from a functoid to a functoid placed to the left of the first functoid, but after the link has been established, you can move the functoids around as you please.

Another feature is the Auto Scrolling feature. This feature, which is enabled and disabled using the button shown in Figure 3.13, allows the Mapper grid to autoscroll to find relevant functoids given something you have selected. If all the functoids had been out of sight in the Mapper grid and you then clicked the OrderDate in the destination schema with this feature enabled, the grid would autoscroll to the view shown in Figure 3.14. The Auto Scroll feature also applies to other parts of the map besides clicking a node in a schema.
If you click a functoid, for instance, the Mapper highlights relevant links and functoids that are connected to the selected functoid and uses the Auto Scroll feature to bring them into view, if enabled.

Figure 3.13 Example of using the Auto Scroll feature.

Sometimes you want to insert a default value into a field in the destination schema. You can do this by clicking the field in question and then setting the Value property in the Properties window for the field. Instead of entering a literal value, you can also use the drop-down to select the <empty> option. This lets the Mapper create an empty field in the output. As explained in Chapter 4, it can be useful to have a way of setting values in messages outside a transformation. Also, empty elements are needed for property demotion, as explained in Chapter 2, “Schemas.” If you choose to set a default value in a field in the destination schema in the map, you can no longer map any values to this node in the destination. If you open the Extensible Markup Language Schema Definition (XSD) and set a default value on the field in the schema itself instead of setting it in the Mapper, the Mapper uses this value, but you are allowed to map a value to the field, which overwrites the value set in the XSD. Unfortunately, there is no built-in support for using the default value from the XSD if the field that is mapped to a node is optional and not present in the source at runtime. You have to do this with an If-Then-Else sort of structure, as discussed later in this chapter. If you choose to set a default value in a field in the source schema, this value is used only when generating instances for testing the map from within Visual Studio 2010. The value is not used at runtime when the map is deployed. If you click the map grid, you can see the properties of the map in the Properties window. If this window is not present, you can right-click the grid and choose Properties or just click the grid and press F4.
Table 3.1 describes the available properties for the map.

Table 3.1. Properties of the Map

If you click a link in the map grid, you can see and change some properties of the link. If the Properties window is not present, you can right-click the link and choose Properties or click the link and press F4. Table 3.2 describes the properties for links.

Table 3.2. Properties for Links

Clicking the map file in Solution Explorer reveals some properties that you can set on the Visual Studio 2010 project item. Table 3.3 describes these properties.
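One last note on what the Mapper actually produces: a map is compiled into XSLT behind the scenes, and that XSLT is what executes at runtime (right-click the map in Solution Explorer and choose Validate Map; the output window contains a link to the generated XSLT file). As a rough, hypothetical sketch — the node names below are borrowed from the chapter's Order example, not actual generated output — a single direct link from a source field to a destination field corresponds to something like this:

```xml
<!-- Hypothetical sketch of the XSLT a simple map compiles to.
     A direct link from the source Header/OrderDate field to the
     destination OrderDate field becomes a value-of inside the
     template that matches the source root. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Order">
    <Invoice>
      <OrderDate>
        <xsl:value-of select="Header/OrderDate" />
      </OrderDate>
    </Invoice>
  </xsl:template>
</xsl:stylesheet>
```

Looking at the generated XSLT can be a useful way to troubleshoot a map that does not behave as expected.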
How to escape message System.Int32 in C#

I'm just beginning to learn C# so I have a question. Why do I get the message System.Int32[]?

public static int[] GetFirstEvenNumbers(int count)
{
    int[] array = new int[count];
    for (int i = 1; i <= array.Length; i++)
        array[i - 1] = i * 2;
    for (int j = 0; j < count; j++)
        Console.Write(array[j] + " ");
    return new int[count];
}

public static void Main(string[] args)
{
    Console.Write(GetFirstEvenNumbers(5));
    Console.ReadKey();
}

What you Write to the console has type int[]. This type, int[], does not override ToString(). Therefore the behavior it inherits from object is used, and that is to simply return the full name of its type, which is System.Int32[]. What you want to do is create a string consisting of all the entries in the array, separated with some symbol. You can use string.Join for that. For example:

var firstFiveEven = GetFirstEvenNumbers(5);
var firstFiveEvenAsStringWithSeparators = string.Join(" - ", firstFiveEven);
Console.Write(firstFiveEvenAsStringWithSeparators);
Console.ReadKey();

What Stuartd said is true. In your code, assuming you want to write each integer of the array and then the whole array in one go, some changes need to be made:

1. The new empty array (full of zeros) returned by your GetFirstEvenNumbers method needs to be replaced by the array variable from your code, which holds the values multiplied by two.
2. Since your method returns an array of int, you need to convert it to a string so that it can be written by the console; a good C# method to use for that is string.Join. Your code after these changes should look something like this:

public static int[] GetFirstEvenNumbers(int count)
{
    int[] array = new int[count];
    for (int i = 1; i <= array.Length; i++)
        array[i - 1] = i * 2;
    // if you don't need to repeat the result twice, get rid of the
    // second for loop and Console.Write
    for (int j = 0; j < count; j++)
        Console.Write(array[j] + " ");
    return array;
}

public static void Main(string[] args)
{
    Console.Write(string.Join(",", GetFirstEvenNumbers(5)));
    Console.ReadKey();
}

I hope this helps.

The reason for the behavior you see is that when you pass an object (regardless of type) to the Console.Write method, it will call the object's ToString() method. Some types have overridden this method in order to present the object or value in a way that makes sense for the type (for example, an integer would print its value). Many objects have not overridden this method and thus they default back to Object.ToString(), and the behavior of Object.ToString() is to print out the type of the object. You can read more about it here.
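To see the difference between the inherited Object.ToString() and an overridden one, here is a small self-contained illustration (the Point type is an invented example, not from the original question):

```csharp
using System;

class Point
{
    public int X, Y;

    // Overriding ToString gives Console.WriteLine something meaningful to print.
    public override string ToString() => $"({X}, {Y})";
}

class Program
{
    static void Main()
    {
        int[] numbers = { 2, 4, 6 };

        // int[] does not override ToString, so the inherited
        // Object.ToString() just returns the type name.
        Console.WriteLine(numbers.ToString());         // System.Int32[]

        // string.Join formats each element instead.
        Console.WriteLine(string.Join(" ", numbers));  // 2 4 6

        // A type that overrides ToString prints its own representation.
        Console.WriteLine(new Point { X = 1, Y = 2 }); // (1, 2)
    }
}
```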
Hey, Scripting Guy! Is it possible to determine the owner of a file using Windows PowerShell? -- GF Hey, GF. Well, today is April 15th, which, in the US, can mean only one thing: it’s time to celebrate the birthday of Italian mathematician Pietro Antonio Cataldi, best known for discovering the sixth and seventh Mersenne primes. Pietro, who developed the first notation for continued fractions, was born on this day in 1552. Happy birthday, Pietro! Coincidentally, April 15th is also Tax Day in the US, the last day on which Americans can submit their income tax returns for the previous year. Needless to say, for many Americans April 15th is a very stressful day. For other Americans, however, April 15th isn’t the least bit stressful; that’s because the US has a long history of people who believe that the government has no right to collect income taxes and therefore decide not to pay their taxes. For example, in 1997 actor Wesley Snipes (recently convicted on three counts of failure to pay income tax) reported an income of $19,238,192. Not only did Wesley decline to pay any taxes on that income, he actually demanded a refund of $7,360,755. Interestingly enough, his own lawyers termed his positions on income tax “kooky,” “crazy” and “dead wrong.” Which, coincidentally enough, are the exact same phrases that were sprinkled throughout the mid-year performance review of the Scripting Guy who writes this column. Oh, and did we mention that Wesley Snipes was recently convicted on three counts of failure to pay income tax? That’s usually what happens to people who decline to pay their taxes or file a tax return. As it turns out, the Scripting Guy who writes this column isn’t stressing out today, either; that’s because he submitted his tax return well in advance of today’s deadline. (On Sunday, April 13th, to be exact.) Admittedly, that might sound like he was cutting it a little close. 
He wasn’t concerned, however, because he knew he could complete his tax return in less than an hour; needless to say, it doesn’t take him anywhere near as long to count his money as it takes Wesley Snipes to count his. And no, that’s not because the Scripting Guy who writes this column is a really fast counter. Best of all, getting his taxes done early turned out to have multiple benefits for the Scripting Guy who writes this column. For one thing, submitting his tax return helped him avoid going to prison for income tax evasion; that’s usually a plus. For another, filing early also gave him time to figure out how to determine the owner of a file (or folder) using Windows PowerShell. Although, in all honesty, he didn’t need all that much time to do that, either:

Get-Acl C:\Scripts\Test.txt

Believe it or not, that’s the entire script; all we have to do to determine the owner of a file is call the Get-Acl cmdlet, passing Get-Acl the path to the file in question. In turn, Get-Acl will report back information similar to this:

Directory: Microsoft.PowerShell.Core\FileSystem::C:\Scripts

Path       Owner              Access
----       -----              ------
Test.txt   FABRIKAM\kenmyer   BUILTIN\Administrators Allow FullCo...

Not bad, huh?
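Incidentally, you can also turn the question around and list only the files owned by a particular account, by filtering on the Owner property that Get-Acl returns. A quick sketch (the folder and account name are just the examples used in this column):

```powershell
# List the files in C:\Scripts whose owner is FABRIKAM\kenmyer
Get-ChildItem C:\Scripts |
    Where-Object { (Get-Acl -Path $_.FullName).Owner -eq "FABRIKAM\kenmyer" } |
    Select-Object Name
```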
If all you care about is the name of the owner then pipe the results to the Select-Object cmdlet, like so:

Get-Acl C:\Scripts\Test.txt | Select-Object Owner

That will give you information similar to this:

Owner
-----
FABRIKAM\kenmyer

Or, if you’d like to see the complete security descriptor, pipe the output to the Format-List cmdlet:

Get-Acl C:\Scripts\Test.txt | Format-List

Path   : Microsoft.PowerShell.Core\FileSystem::C:\Scripts\Test.txt
Owner  : FABRIKAM\kenmyer
Group  : FABRIKAM\Domain Users
Access : BUILTIN\Administrators Allow FullControl
         NT AUTHORITY\SYSTEM Allow FullControl
         FABRIKAM\kenmyer Allow FullControl
         BUILTIN\Users Allow ReadAndExecute, Synchronize
Audit  :
Sddl   : O:S-1-5-21-1454471165-1004336348-1606980848-8183G:DUD:AI(A;ID;FA;;;BA)(A;ID;FA;;;SY)(A;ID;FA;;;S-1-5-21-1454471165-1004336348-1606980848-8183)(A;ID;0x1200a9;;;BU)

If we had to guess, we’d guess that Wesley Snipes didn’t mean to ignore the April 15th deadline for filing his tax return. He probably just got caught up in the fun and excitement of working with file ownership scripts, and forgot all about paying his taxes. For example, suppose Wesley wanted to get a list of owners for all the files in the folder C:\Scripts. That’s no problem; after all, the Get-Acl cmdlet does accept wildcard characters:

Get-Acl C:\Scripts\*.*

Directory: Microsoft.PowerShell.Core\FileSystem::C:\Scripts

Path          Owner                    Access
----          -----                    ------
Example.txt   FABRIKAM\kenmyer         BUILTIN\Administrators Allow FullCo...
Test.txt      FABRIKAM\pilarackerman   BUILTIN\Administrators Allow FullCo...
Trial.txt     FABRIKAM\kenmyer         BUILTIN\Administrators Allow FullCo...

Pretty cool, huh?

Of course, while Get-Acl does accept wildcard characters, what it doesn’t accept is some sort of –recurse parameter that would enable you to retrieve the owners of all the files located in any subfolders of C:\Scripts. But that’s OK, too; after all, the Get-ChildItem cmdlet does accept the –recurse parameter.
That means we can retrieve the file owners for all the files in C:\Scripts and its subfolders by using this command:

Get-ChildItem C:\Scripts -recurse | ForEach-Object {Get-Acl $_.FullName}

There’s nothing particularly complicated about that command, either: we simply use Get-ChildItem and the –recurse parameter to retrieve the collection of files found in C:\Scripts and its subfolders, then pipe that collection to the ForEach-Object cmdlet. In turn, we ask ForEach-Object to run the Get-Acl cmdlet against each and every file in that collection, using the value of the FullName property as Get-Acl’s file path parameter. Is that going to work? Hey, come on: have you ever known the Scripting Guys to do something that didn’t work? Well, OK. But the command we showed you will work. Promise.

Who would have guessed that file ownership scripting could be so much fun, eh? In fact, like Wesley Snipes, we’re having such a good time today we thought we’d try one more script. It is pretty cool that you can determine the owner of a file by running a simple little Windows PowerShell script. But you know what would be really cool? It would be really cool if you could take ownership of a file by running a simple little Windows PowerShell script. You know, maybe a script like this one:

$objUser = New-Object System.Security.Principal.NTAccount("fabrikam", "kenmyer")
$objFile = Get-Acl C:\Scripts\Test.txt
$objFile.SetOwner($objUser)
Set-Acl -aclobject $objFile -path C:\Scripts\Test.txt

Much like the Scripting Guy who writes this column’s income for the year 2007, there’s really not much to this script. In line 1 we use the New-Object cmdlet to create an instance of the System.Security.Principal.NTAccount class, a .NET Framework class used to represent a user account. When creating an instance of this class we need to pass two parameters: the name of our domain (fabrikam) and the name of our user account (kenmyer).
After we create an instance of the NTAccount class we use the Get-Acl cmdlet to retrieve the security descriptor from the file C:\Scripts\Test.txt; that’s what we do here:

$objFile = Get-Acl C:\Scripts\Test.txt

Once we have the security descriptor we can use the SetOwner method to assign ourselves ownership of the file:

$objFile.SetOwner($objUser)

Well, sort of. What the SetOwner method does is assign ownership to the virtual copy of the security descriptor that we retrieved using Get-Acl. To take ownership of the actual file itself we need to use the following Set-Acl command:
#Merge the proposed changes (new owner) into the folder’s actual ACL $Folder.SetAccessControl($NewOwnerACL) "`nRemoving privileges just added before terminating" whoami /priv | Select-String "SeRestorePrivilege" whoami /priv | Select-String "SeBackupPrivilege" whoami /priv | Select-String "SeTakeOwnershipPrivilege" Hey Scripting Guy, thanks for posting this. I have a question, what if I’m only interested in the files owned by a particular user, how would I pipe or filter this script to accomplish this. Get-ChildItem C:Scripts -recurse | ForEach-Object {Get-Acl $_.FullName} Thanks for your input [mention:c071c1383d7a45ba97a1e6e902e6166c:e9ed411860ed4f2ba0265705b8793d05] Hi All I have tried the following with powershell the owners and users with full access on a certain path folders im sure you can manipulate it to get files and folders accessed by a certain user Get-ChildItem -Path "" -Recurse | Where {$_.psIsContainer -eq $true} | get-acl | select path -expand access | where-object {$_.FileSystemRights -eq "FullControl"} | export-csv E:Fullcontrol.csv – Where {$_.psIsContainer -eq $true}: is to get folders only (i did so to get small size ACL report you can delete it if you need for files ) – select path -expand access: this to formulate the permissions into rows so it can be copied into excel – where-object {$_.FileSystemRights -eq "FullControl": this part will extract what you need from ACL report for your case you can replace $_.FileSystemRights with $_.IdentityReference -eq "username" ((user name must be domainusername)) hope you find this useful Take a common case for sysadmin – user leaves, but you can't delete his/her home folder, even when using an elevated command prompt on the server hosting the home folder volume. Usually a permissions or long file/pathname problem. 
Trying to get the ACLs with: Get-ChildItem E:{problem folder} -recurse | ForEach-Object {Get-Acl $_.FullName} | Format-List will fail, if you don't have permission to read the ACL of one or more of the child objects – you will get red lines starting: Get-Acl: Attempted to perform an unauthorized operation. You need to get the file/folder name: Get-ChildItem E:{problem folder} -recurse by itself should list all the children, even if these files (or folders) are hidden system files (i.e. have hs attributes set – which is often the case when you hit a problem deleting a folder). Then, if you try running the script as above on one of the problem files, the Get-Acl line will throw a similar error. So back to Windows Explorer (view hidden & system files; properties > security – take ownership, close; reopen properties, give yourself permissions, then delete). Some things still better in the old fashioned command prompt! On Server 2003 and later: takeown /F "{problem folder}*" /R will give you ownership quickly! @WestNab: This confused me plenty, and there are plenty of examples of forum threads that ended in ‘use a 3rd party’ app. I pieced together my pure PowerShell solution below, partly with help from another Scripting Guy blog () – ahhh, it feels good to give back 🙂 Basically it uses the P/Invoke technique to employ C# to call system DLL procedures. It would be nice if .Net included this necessary function, but despite the length of this code sample, it is pretty straightforward to reproduce 🙂 The "whoami /priv" lines are a sample showing that the admin token privileges were changed. Put your Cheers, Barnaby @Carlo: # set the owner on a file on a remote system. 
$acl=get-acl \SYSTEM01testtest.log $owner=[System.Security.Principal.NTAccount]'MYDOMIANuser01' $acl.SetOwner($owner) set-acl -Path \omega2testtest.log -AclObject $acl $code = @" using System; using System.Runtime.InteropServices; public class TokenManipulator { relen); [DllImport("kernel32.dll", ExactSpelling = true)] internal static extern IntPtr GetCurrentProcess(); _DISABLED = 0x00000000; internal const int SE_PRIVILEGE_ENABLED = 0x00000002; internal const int TOKEN_QUERY = 0x00000008; internal const int TOKEN_ADJUST_PRIVILEGES = 0x00000020; public static bool Add; } catch (Exception ex) { throw ex; } } public static bool Remove_DISABLED; retVal = LookupPrivilegeValue(null, privilege, ref tp.Luid); retVal = AdjustTokenPrivileges(htok, false, ref tp, 0, IntPtr.Zero, IntPtr.Zero); return retVal; } catch (Exception ex) { throw ex; } } } "@ Does this same approach work for granting ownership to a group? @Nathan there is a set-acl Windows PowerShell cmdlet @WestNab I agree, I love using the takeown command. It works well. I have described using the tool in other Hey Scripting Guy blog articles. In fact, I wrote about the exact scenario you describe. ScriptingGuy1 you are hilarious, I'm sad I just now found your blog posts! I am wondering if you know a way to set remote file ownership with Powershell. I didn't find a way to do it, as you can see in my last blog post (…/how-to-remotely-modify-windows-acl.html) but I might very well have missed something… I hope you can help. Anyways your blog rocks!!! @jrv Thanks for the code. Anyway it doesn't work on my system, probably because the user I want to declare as owner of the remote file is a remote local user, not a domain user. 
When I run the code: $full_username = "$remote_hostname" + "" + "$remote_username" $owner=[System.Security.Principal.NTAccount]$full_username $acl.SetOwner($owner) set-acl -Path $path -AclObject $acl I get the follwoing translation error: Exception calling "SetOwner" with "1" argument(s): "Some or all identity references could not be translated." I am pretty sure the problem is due to the Powershell trying to resolve the SID for $remote_username locally and not remotely. Am I wrong? Any suggestion? Hey there. Great site. I've played with the tools you gave me but I'm new to this (long time PERL and other Un*x scripting guy though) and missing some of the umm methods that hang from the objects. I'm trying to solve this. I've got some Security Groups that seem to be orphaned. I want to take a spin through the filesystem and see if they pop up anywhere Get-ChildItem .*.* -recurse | ForEach-Object {(Get-ACL).AccessToString} So that gives me a list of all the Access of every file below where I am. Great. This solves my immediate problem, I can then search that file and know IF it is orphaned. But say I find one. So now I really need my output list to contain the filename too. I can't seem to find a way to Filter Get-ACL to give me a filename (with full path preferably but I can script around the path shown at node changes and then filenames shown as local to the node) and the access. And obviously piping this to a grep afterwards (which doesn't seem to work like I'd expect) or setting a filter saying only if group is domainsomegroup do you print anything is the ultimate goal here. Thanks, SiD If you can help out, I’m looking for a way to recursively look through file shares and match on a filter where it is an unresolved SID something like S-5-21*, and pipe it to a CSV file If the path length is greater than 259, then it throws an error. Get-Acl -Path "GreaterThan265.txt" # Here error occurs. 
What could be the solution, we have a robocopy which solves Get-ChildItem problem. Thanks a lot
Are you trying to teach yourself to code? Or are you already an experienced developer who wants to pick up another language? In both cases, you know how frustrating it can be to find good tutorials online. Sure, it's easy to find "tutorials", but separating the wheat from the chaff is a whole different story. Of course, you have to pick a programming language to learn, and that's far from being an easy choice, too. There are already a huge number of programming languages, and with each passing year, the list gets longer.

The goal of this post is to help you with both problems. We're going to give you an answer to the "which language" question in the form of C#, which is a solid choice for novice and seasoned developers alike. Then we're going to offer you a list of 30 C# tutorials, from beginner to advanced level. At the end of the post, you'll be (hopefully) convinced that C# is the right choice for you, and you'll have plenty of good references to help you on your journey. Let's get started.

Why Learn C#?

There are many programming languages out there. In this post, we argue that C# is the best choice for a new language to learn, be it your first programming language or not. How can we be so sure? Well, C# is a solid choice for a number of reasons.

Unlike C++, for instance, C# offers automatic memory management. It also offers solid type safety, compared to JavaScript and Node.js. C# has robust base class libraries; the .NET framework includes hundreds of libraries for working with the file system, managing security, and more. Microsoft heavily supports C#, issuing fixes and updates rapidly – so it's a more readily updated language compared to other languages, such as Java. The community can also contribute to the language's design—filing bugs, sending corrections, or submitting feature proposals—through the official repository on GitHub.
Like Java, C# is one of the most popular programming languages, and as such, it has a large, active user community, making it easy to find troubleshooting solutions and coding help on StackOverflow and other online communities. Microsoft released the C# language back in 2001. However, as of 2019, C# continues to be in huge demand. This is especially true since the release of .NET Core, and the trend is likely to go up. With the new incarnation of the popular .NET framework, the C# language has become more versatile than ever.

But the main point in favor of C# is that it's very approachable. It has lots of sophisticated and advanced features that seasoned developers can put to use, while beginners can safely ignore those until they're ready to handle them.

30 of the Best Tutorials to Learn C#

1. Tutorials Teacher

This tutorial is from Tutorialsteacher.com, which features free online web technology tutorials for beginners and professionals alike. In addition to C#, you can also learn LINQ, ASP.NET MVC, jQuery, JavaScript, AngularJS, or Node.js. This C# course is especially interesting because it goes straight into programming after a brief version history and setup.

Key Topics:
- Data types, classes, and variables
- Switches and loops
- Strings and arrays
- Stream I/O

2. Lynda.com – Learning C#

In this tutorial by author Gerry O'Brien, topics covered include core language elements such as data types, variables, and constants. It also features a short tour of two fully-functional Windows Phone and Windows Store apps to motivate you. There are also five challenge videos that allow you to test yourself, along with another five videos that explain the answers.

Key Topics:
- Working with loops
- Building functions
- Catching errors
- Managing resources with the garbage collector

3. C# Station

The C# Station Tutorial is a set of lessons suited for beginner to intermediate-level programmers who are ready to learn hands-on with a compiler and an editor.
Topics cover everything from the basics right up to polymorphism and overloading operators.

Key Topics:
- Expressions, Types, and Variables
- Namespaces
- Introduction to Classes
- Indexers and Attributes
- Working with Nullable types

4. Deccansoft – C# Training

This series of tutorials from Deccansoft is led by Mr. Sandeep Soni, a Microsoft Certified Trainer, and covers almost all C# topics from the ground up. Each concept is explained at length using different walkthroughs and practical approaches. The entire course is quite lengthy and features 26 modules split up into about 83 hours of video! It is advisable to have a working knowledge of any one programming language before you take this course.

Key Topics:
- .NET Framework
- Concepts behind CLR (Common Language Runtime)
- Building a standard GUI for Windows-based applications using WinForms
- Developing scalable applications using multithreading features of .NET

5. edX – Programming with C#

This tutorial comes from edX, an online educational services provider which also offers some courses from top universities and colleges. This is not a beginner's course and requires you to have a prior understanding of programming concepts. This tutorial by Gerry O'Brien is better suited for existing programmers who want to learn a bit more about C# and the .NET environment.

Key Topics:
- The C# syntax
- C# language fundamentals
- Object oriented programming
- The .NET Framework concept

6. Microsoft Virtual Academy – C# fundamentals for absolute beginners

This C# tutorial from none other than Microsoft takes you through 24 practical and easy-to-understand episodes with Bob Tabor from the Developer University. Apart from teaching you the fundamentals of C#, this course also covers the tools, how to write code, debug features, explore customizations, and more. The cool thing is that each topic is a separate video that's quite straightforward. This course also teaches you to apply your C# skills to video games and mobile apps.
Key Topics:
- Creating and understanding your first C# program
- Understanding Data types and Variables
- Understanding Arrays
- Working with Strings
- Learning how to work with Date and Time data

7. Tutorials Point – Basic and Advanced C#

Tutorialspoint, which is quite a popular online destination for learning, has two tutorials on C#, one for beginners and another for more advanced programmers. Both are great learning resources, and between the two, they cover the basics of C# programming and also delve into more advanced C# concepts. These are text-based guides with step-by-step instructions and examples.

Basic Key Topics:
- Program structure
- Decision making
- Encapsulation
- Exception handling
- File I/O

Advanced Key Topics:
- Reflection
- Indexers
- Unsafe code
- Multithreading

8. Udemy – C# Programming projects for beginners

Udemy is one of the largest online learning platforms, with thousands of courses and a big budget to spend on advertising. If you watch YouTube videos or even just browse the web, you've likely come across their advertisements. While the website has many video tutorials on C# programming, the good ones aren't free, but they aren't unreasonably expensive either. This particular course helps students think like programmers and learn C# practically by working on programming projects. The course consists of about 49 lectures and is just under 9 hours in length.

Key Topics:
- Practicing loops, arrays, and structures
- Start coding beginner projects immediately
- Thinking like a programmer
- Using the right approach

9. LearnCS.org

This is a free online interactive tutorial for C#. In fact, the entire website is dedicated exclusively to teaching C#. This site is different in its teaching approach in the sense that it teaches you with two windows, one for code and one for your output.

Key Topics:
- Variables and types
- Dictionaries, strings, and loops
- Methods
- Classes and class properties

10. Abbot – C# Tutorial

This all-text tutorial from Zetcode focuses on both basic and advanced topics and is suitable for beginners and advanced programmers alike. This tutorial covers the basics like loops, strings, and arrays and then moves on to more complicated stuff like delegates, namespaces, and collections. It also covers the new features of C# 4.0.

Key Topics:
- Data types
- Strings
- Lexical structure
- Flow control
- Delegates
- Namespaces
- Collections

11. Channel 9 – Programming in C# Jump Start

What they mean by "Jump Start" fashion is that every topic of this course is example-driven and illustrated by Microsoft's Jerry Nixon and the co-founder of Crank211, Daren May. The key to this tutorial is repetition, as the duo work with multiple examples in real time to make sure you get the most from the experience. There are several videos in the Jump Start series, and the topics get more advanced as you progress.

Key Topics:
- Basics of object oriented programming
- Fundamentals of a managed language
- Why C# is the best for OOP
- C# Syntax

12. Java2s – C# Tutorial

More popularly known as a place that indexes Java examples, java2s.com has a good C# tutorial as well. This is quite an in-depth tutorial, starting with language basics and moving on to graphics, designs, XML, .NET frameworks, networking, directory services, and security.

Key Topics:
- Language basics including predefined exceptions, parameter throw, and parameter reference
- Data types including boolean, decimal, and bitwise
- Operators including shift, arithmetic, shortcut, short circuit, bitwise, and ternary operators
- Windows, XML, and XML LINQ

13. JKU – C# Tutorial

This two-part course is by Hanspeter Mössenböck from the University of Linz. It is a C# tutorial for programmers who are already familiar with Java or similar languages.
It starts out with basic C# features such as types, expressions, statements, and object-orientation, and continues with more advanced features like threads, attributes, namespaces, and assemblies. It also briefly goes over .NET's base class library.

Key Topics:
- Overview, types, and expressions
- Declarations and statements
- Classes and structs
- Namespaces, assemblies, and XML comments

14. Eduonix – Learn C Sharp Programming From Scratch

This course is by Eduonix, a premier online institution, and the C# course is an instructor-led video that covers basic programming structures, LINQ, C# network programming, and more. A bonus to doing this course is the option to get certified on completion.

Key Topics:
- Introduction to C#
- Iteration and Jumps
- Object Oriented Programming
- LINQ and C# Network Programming

15. SoloLearn – C# Tutorial

This tutorial from Sololearn.com is fun and teaches C# concepts by going through short interactive texts, games, and quizzes. The instructors believe in a hands-on approach and that the best way to learn to code is to practice coding. A well-designed code editor lets you make changes to existing code and see the output on your mobile device. The games are especially useful since they're fun, and the more you play, the better you get!

Key Topics:
- Basic concepts including variables, printing, and arithmetic operators
- Conditions, loops, and methods
- Arrays and strings
- Inheritance, polymorphism, and generics

16. RB Whitaker – A C# Crash Course

This is a list of over thirty tutorials by RB Whitaker, a software developer at Autonomous Solutions, Inc. (ASI). This course is quite extensive and covers everything from the basics to generics, error handling, and more. The author encourages you to skip over parts that you are already familiar with, meaning you can get through this course more efficiently if you're not a novice.
Key Topics:
- Introduction, installation, and your first C# program
- Math, more math, decision making, and looping
- Inheritance, polymorphism, generics, and error handling

17. HyperionDev – C# Programming Essentials

This is a three- to six-month part-time micro-degree from hyperiondev.com. It's not free, but it is CSA accredited, making it worth consideration. This micro-degree is for beginners with no programming experience and features one-on-one pairing with a mentor as well as additional career guidance and placement advice on completion.

Key Topics:
- Introduction to C#
- Control statements
- Craps game to assess knowledge of prior task
- Data structures, files, and functions

18. TheNewBoston

This set of coding tutorials created by Bucky Roberts on his YouTube channel called TheNewBoston is especially popular. It is currently one of the most popular computer/technology-related channels on YouTube, with close to 900,000 subscribers and over 200 million views.

Key Topics:
- Overview of C#
- Where to download Visual C# 2010 Express edition
- Installation
- Basics of C# Programming

19. PluralSight – C# fundamentals with C# 5.0

Pluralsight has many courses dedicated to C# programming. This particular course is about six hours long and has a 4.5-star rating across close to 5,000 user surveys. The tutorial is by Scott Allen, a Microsoft MVP who has authored several books on ASP.NET, C#, and Windows Workflow.

Key Topics:
- Basic setup and introduction to .NET, CLR, and FCL
- Editing, compiling, and debugging
- Classes and objects in C#
- Flow control and object oriented programming

20. Udemy – C# Basics for Beginners: Learn C# Fundamentals by Coding

This is another tutorial from Udemy. It's not just for beginners but also for students looking for a refresher course in C# and .NET. It focuses more on a programming mindset and uses videos, real-world examples, and lots of exercises.
With 4.6 stars from 7,515 ratings and 30,380 students enrolled, this course by Mosh Hamedani is a great way to learn the fundamentals of C# and the .NET Framework.

Key Topics:
- Fundamentals of coding
- Working with date and time
- Debugging
- Classes, interfaces, and object oriented programming

21. Java T Point – C# Tutorial

This C# tutorial from javatpoint.com is quite extensive and comes with the prerequisite that you have a basic working knowledge of C. Like most other courses, it starts off very basic and then goes into detail in the later chapters. What makes this one different, however, is that it's quite student-oriented and features comparisons with Java, interview questions, and an additional ASP.NET tutorial.

Key Topics:
- History and introduction
- Control statements, functions, arrays, and object classes
- Properties, inheritance, polymorphism, and abstraction
- Namespaces, strings, exception handling, file IO

22. Microsoft – Getting started with C#

This is a fun little tutorial from Microsoft and is different from all the others as it's customizable. You can choose your degree of difficulty before you start by selecting whether you are a beginner or have previous programming experience. It also lets you choose the languages you already know and then modifies your course accordingly.

Key Topics:
- Writing your first hello world program
- Strings, looping, dates, and times
- Arrays, collections, and calling methods
- Namespaces
- Testing your code and troubleshooting

23. Brackeys

YouTube videos are a great way to learn to program, and Brackeys is a YouTube channel that specializes in game development tutorials. However, he has a pretty decent and in-depth introductory C# series that is quite popular as well.

Key Topics:
- Introduction and basics
- Variables, If and Switch statements
- Classes, Inheritance, and enums
- Properties, interfaces, and Generics

24. Complete C# Tutorial

This tutorial is from CompleteCsharpTutorial.com and is essentially a list of free tutorials ranging from C# to SQL, RAZOR Syntax, ASP.NET, Java, and CSS. The site is really well organized, and each topic opens up into about five subtopics that you can choose from. Each topic is short and sweet and does a good job of explaining things without wasting a lot of time.

Key Topics:
- Variables and data types
- Operators and Conditional constructs
- C# Statements, loop constructs, and exception handling
- Inheritance, polymorphism, and generics

25. Guru99 – C# Tutorial

This is an introductory tutorial into the .NET framework using the C# language. It also covers various topics like accessing data, classes & objects, file commands, and Windows forms. This is not a beginner's course, and a basic understanding of C is required.

Key Topics:
- .NET framework
- How to Download and Install Visual Studio
- Fundamentals – Data Type, Arrays, Variables and Operators & Enumeration
- Classes, Object and Collections
- Access Database and File Operations

26. Certification Guru – C# .NET Programming

This course from CertificationGuru.in provides a solid foundation and covers the fundamental skills required to design and develop object-oriented applications. This course especially focuses on developing apps for the Web and Microsoft Windows using Microsoft Visual C#, .NET, and the Microsoft Visual Studio .NET development environment. Another plus is that this class is intended for beginners with little or no knowledge of C# or .NET. Training is conducted live in virtual classrooms by Microsoft-certified trainers with over a decade of training experience.

Key Topics:
- .NET framework architecture
- Event driven programming
- Lambda expressions
- Exception handling
- Deployment

27. Lynda – C# 6.0 First Look

This tutorial at Lynda.com is all about getting a firm grasp of the new features in C# 6.0.
The course is conducted by Reynald Adolphe, who takes you through all the new features like new expression-level features, extension add methods, null-conditional operators, and much more. The enhanced IDE (with IntelliSense syntax) and improved debugging features in Visual Studio 2015 are also covered.

Key Topics:
- Introducing the new IDE in Visual Studio 2015
- Leveraging nameof expressions
- Using index initializers
- Using await in catch and finally blocks
- Using static and debugging

28. Alison – Diploma in C# Programming

This free online course is published by Channel 9 and begins by showing you how to properly install Visual Studio Express, followed by a tour of the features and functions of the Visual Express Integrated Development Environment (IDE). Next comes the .NET Framework and how C# can be used to create .NET applications.

Key Topics:
- Installing Visual Studio Express
- Writing basic code and reviewing for errors
- Creating branches with the if decision statement and the conditional operator
- Correct syntax for operators, expressions, and statements of duration

29. Coursera – Beginning Game Programming with C#

The Beginning Game Programming with C# course from Coursera.org is all about learning how to develop games in C#. This is an advanced course, so while it's not impossible to jump right in, it might be a bit frustrating for beginners. C# is great for games because it lets you use the open-source MonoGame framework to make games for Windows, Android, iOS, and Mac OS X. C# can also be used with the Unity game engine, which is very popular among indie game developers.

Key Topics:
- Course Introduction, First C# Program, and Storing Data
- Classes and Objects, MonoGame/XNA Basics
- MonoGame/XNA Mice and Controllers, Arrays, and Collection Classes
- Class Design and Implementation

30. Udemy – Learn to Code by Making Games – Complete C# Unity Developer

This is another great course from Udemy, and it's different in the sense that it's a gaming course for beginners in which you learn C# while building interesting games on the Unity engine. The plus side here is that it makes learning C# fun and interactive while also teaching you about the Unity engine. The course is 100% project-based, so you will not just be learning theory but actually creating real indie games as you go. The entire course syllabus consists of names of indie games, and for each demo game you build, you are given a set of challenges. The key topics here are especially interesting.

Key Topics:
- Number Wizard: Basic Scripting
- Laser Defender
- Glitch Garden: A Plants vs. Zombies Clone
- Block Breaker
- Zombie Runner FPS

C# is still one of the most widely used programming languages out there today. It is a powerful programming language with an incredibly wide array of functions and uses, allowing developers to create almost anything, ranging from server apps to mobile development to 3D games. C# 8.0 will be released later this year—although you can already preview many of its features using Visual Studio 2019—and with the number of tutorials online, now's as good a time as ever to start learning.

Once you've become a C# guru, check out our other resources on the popular programming language, such as logging best practices for .NET, exception handling best practices, how to find and handle unhandled exceptions, and more. Need a code coverage tool? We've got you covered there, too, in this post.
The MPL115A2 is a low-cost device for reading barometric pressure.

- I2C digital interface (address: 0x60)
- Resolution: 1.5 hPa
- Range: 100-1150 hPa, up to 10 km

Purchasing

The MPL115A2 sensor is available on a breakout board from Adafruit.

Hardware

The simplest method of connecting the MPL115A2 to the Netduino requires only four connections. In this diagram, the shutdown (SDWN) and reset (RST) pins have been left floating. Both of these pins are active low and can be tied to Vcc in normal operation. Note that the Adafruit breakout board has 10K pull-up resistors on the SDA and SCK lines.

Software

The following application reads the temperature and pressure from the MPL115A2 every second and displays the readings in the Debug output:

using System.Threading;
using Microsoft.SPOT;
using Netduino.Foundation.Sensors.Barometric;

namespace MPL115A2Test
{
    public class Program
    {
        public static void Main()
        {
            var mpl115a2 = new MPL115A2();
            Debug.Print("MPL115A2 Test");
            while (true)
            {
                mpl115a2.Read();
                Debug.Print("Pressure: " + mpl115a2.Pressure.ToString("f2") +
                            " kPa, Temperature: " + mpl115a2.Temperature.ToString("f2") + "C");
                Thread.Sleep(1000);
            }
        }
    }
}

API

This API supports a polled method of reading the sensor. The Read method forces the sensor to take new readings and then records the readings in the Temperature and Pressure properties.

Constructor

MPL115A2(byte address = 0x60, ushort speed = 100)

Create a new MPL115A2 object.

Properties

double Pressure

Return the pressure reading returned by the last call to the Read method. This value is recorded in kPa.

double Temperature

Return the temperature reading returned by the last call to the Read method. This value is recorded in degrees C.

Methods

void Read()

Force the sensor to take a reading and record the readings in the Pressure and Temperature properties.
Vue SEO tutorial using vue-meta

In this tutorial, we are going to learn how to make SEO-friendly Vue.js apps by using the vue-meta package.

In single-page apps, SEO is the hard part because there is only a single HTML page, which is reused throughout our app. Search engines can't tell what type of content you are providing if you are using the same title and description on every page in your Vue app.

There is a package called vue-meta which helps us control the Vue app's meta tags. Let's install the package now.

npm i vue-meta

Now, we need to configure this package by adding the below code inside the main.js file.

import VueMeta from 'vue-meta'

Vue.use(VueMeta);

With this, our setup is complete.

Adding SEO

Now, we can directly add SEO meta tags inside our component's <script> tag like this.

Example

<template>
  <div class="hello">
    <h1>This is Home page</h1>
  </div>
</template>

<script>
export default {
  metaInfo: {
    title: "This is Home page",
    meta: [
      { name: "description", content: "Learn coding with our free tutorials" },
      { name: "keywords", content: "react,vue,angular" }
      // you can also add open graph tags here
    ]
  }
};
</script>

In the above code, we have added a metaInfo object which contains a title property and a meta array.

title: Title of our app.
meta: Inside the meta array, we can add the meta description, keywords, and Open Graph tags, etc.

If we inspect our app in the browser, it might look like this in the below image.
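The tutorial uses a static metaInfo object, but vue-meta also accepts metaInfo as a function, which is re-evaluated whenever the component's reactive data changes — useful when the title depends on fetched data. Below is a sketch of that pattern; the ProductPage component and its fields are hypothetical (not part of the tutorial above), and it is written as a plain options object so the function can be exercised without a Vue runtime:

```javascript
// Hypothetical component options object (illustrative names).
// With vue-meta installed, declaring metaInfo as a function keeps the
// document title in sync with reactive data such as `product`.
const ProductPage = {
  data() {
    return { product: { name: "Blue Widget" } };
  },
  metaInfo() {
    // `this` is the component instance, so reactive fields are available here.
    return {
      title: `${this.product.name} | My Store`,
      meta: [
        { name: "description", content: `Buy ${this.product.name} online` }
      ]
    };
  }
};

// Outside of Vue we can still call the function by supplying `this` manually:
const fakeInstance = { product: ProductPage.data().product };
console.log(ProductPage.metaInfo.call(fakeInstance).title);
// prints: Blue Widget | My Store
```

When the component's data changes (for example, after a product is loaded via Ajax), vue-meta re-runs this function and updates the page title and meta tags accordingly.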
May 18, 2012 09:17 AM|om singh|LINK

I am using Knockout (KO) in my MVC project. I create an MVC model (for a grid) on the server and pass it to the view. On the view it is serialized and converted into a KO model (using ko.mapping), which in turn is used for binding. That binding is then used in HTML for grid creation.

This is what my MVC grid model looks like; it gets converted to the corresponding KO model by ko.mapping:

public class GridModel
{
    /// <summary>
    /// Grid body for the grid.
    /// </summary>
    public GridBodyModel GridBodyModel { get; set; }

    /// <summary>
    /// Grid context.
    /// </summary>
    public GridContext GridContext { get; set; }

    /// <summary>
    /// Grid header for the grid.
    /// </summary>
    public GridHeaderModel GridHeader { get; set; }
}

public class GridBodyModel
{
    /// <summary>
    /// List of grid body rows.
    /// </summary>
    public IList<GridRowModel> Rows { get; set; }
}

public class GridContext
{
    /// <summary>
    /// Total number of pages. Read-only.
    /// </summary>
    public int TotalPages { get; set; }
}

public class GridHeaderModel
{
    /// <summary>
    /// List of grid header cells.
    /// </summary>
    public IList<GridHeaderCellModel> Cells { get; set; }
}

As is clear, the main model class GridModel is made up of the following classes, which are present as properties:

GridBodyModel: Has the list of rows to be rendered in the grid body.
GridContext: Has the total number of pages as a property. It has other properties as well, but that is out of scope for this discussion.
GridHeaderModel: Has the list of cells to be displayed in the header of the grid.

Then I have this script that executes on a fresh page load:

$(document).ready(function () {
    // Apply Knockout view model bindings when document is in ready state.
    ko.applyBindings(Global_GridKOModel, document.getElementById("gridMainContainer"));
});

// Serialize the server model object. It will be used to create observable model.
Global_GridKOModel = ko.mapping.fromJS(<%= DataFormatter.SerializeToJson(Model) %>);

Global_GridKOModel is a global JavaScript variable. Model is the MVC grid model coming from the server.

A user can perform a further search on the page. I handle this by posting the new search criteria via Ajax. On this post, a new MVC model is created and sent back as the Ajax response. This new MVC model is then simply used to update Global_GridKOModel using ko.mapping, which in turn refreshes the grid (with new data) that was constructed earlier on the fresh page load. This is how I am doing it:

$.ajax({
    url: destUrl,
    data: dataToSend,
    success: function (result) {
        ko.mapping.fromJS(result, Global_GridKOModel);
    },
    error: function (request, textStatus, errorThrown) {
        alert(request.statusText);
    }
});

Everything works fine except in the following scenario. An Ajax request is made for which no result is returned, i.e. GridBodyModel and GridHeaderModel are null in the model GridModel. At that point the grid rightly shows that no records have been found. This is correct. This happens through the following HTML binding:

<!-- When no record is found. -->
<div data-
    No record(s) were found.
</div>

<!-- This is actual grid table container. This is bound when records are found -->
<div data-
    Grid construction happens here
</div>

Now after this, if another Ajax request is made and this time records are returned (I have checked the response with Firebug, and it is confirmed that records are indeed returned), grid construction happens, wherein various observable arrays are accessed. For example, to construct the pager for the grid, the following is a piece of HTML binding I wrote:

<td data-

This time KO throws the following error, which can be seen in Firebug:

Unable to parse bindings. Message: TypeError: GridHeader.Cells is not a function; Bindings value: attr:{colspan: GridHeader.Cells().length }

It works fine as long as records are being returned, but it breaks after no records are returned, as explained above.
Please note GridHeader was null in the earlier response, when no records were returned. I smell something fishy in ko.mapping; I think there is some problem while mapping the observable array. So what is it that I am not doing right? Anyone, please? Please feel free to ask for clarification if I have not mentioned things clearly. Thanks in advance.

May 19, 2012 06:01 AM|om singh|LINK

I figured it out last night. Actually, this was a very basic mistake I was making: I was trying to access an array after setting it to null. Let me explain how.

When results were returned, GridHeaderModel and the IList Cells in it were defined, i.e. not null. At that point ko.mapping was able to convert the model and create the observable array inside it. But when an Ajax request was made for which no result was returned and GridHeaderModel was null, obviously the IList Cells was null too. ko.mapping did the same on the client: the KO model it updated had GridHeaderModel set to null, and the observable array Cells inside it was no longer present, which is as good as null. Now, when I made another Ajax request with some records returned, ko.mapping tried to update the observable array Cells, which did not exist (or was set to null) in the KO model on the client, and it failed. If instead of Ajax it were a fresh page load, everything would have worked.

So the solution is to never return any enumeration (those that will get converted to observable arrays) uninitialized to the client for a KO model update. Hence, when no records were to be returned, I made sure that GridHeaderModel was not null and also made sure that the IList Cells was initialized, even though it contained no elements. This fixed the problem.

This problem can be explained with the following example:
public class Demo
{
    public IList<string> DemoArray;
}

public class DemoUser
{
    public void UseDemo()
    {
        var demoInstance = new Demo();
        demoInstance.DemoArray.Add("First Element");
    }
}

Here, in the UseDemo() method, we have initialized the class instance Demo, but the IList DemoArray remains uninitialized. So when we try to access it, a runtime exception will be thrown. This is what was happening in my case: in the first Ajax response I was setting the observable array to null, i.e. uninitializing it, and then in the next Ajax response I was trying to access it.

I am going to mark this as resolved.
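As a complement to the server-side fix described above, the same guarantee can be enforced on the client: normalize the Ajax result before handing it to ko.mapping, so the observable arrays are never created from null. A sketch under the assumption that the server returns plain JSON matching the models above (the function name is illustrative):

```javascript
// Defensive normalization to run before ko.mapping.fromJS(result, Global_GridKOModel).
// If the server returned null for GridHeader or GridBodyModel, substitute empty
// collections so the observable arrays keep existing on the client.
function normalizeGridResult(result) {
  result = result || {};
  if (!result.GridHeader) {
    result.GridHeader = { Cells: [] };
  } else if (!result.GridHeader.Cells) {
    result.GridHeader.Cells = [];
  }
  if (!result.GridBodyModel) {
    result.GridBodyModel = { Rows: [] };
  } else if (!result.GridBodyModel.Rows) {
    result.GridBodyModel.Rows = [];
  }
  return result;
}

// Usage inside the Ajax success handler:
// success: function (result) {
//   ko.mapping.fromJS(normalizeGridResult(result), Global_GridKOModel);
// }
```

With this guard, a "no records" response updates the existing observable arrays to empty arrays instead of dropping them, so the next mapping call still finds GridHeader.Cells as a function.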
std::string and std::vector are your best friends and what you should use by default. They are proper C++ containers: they grow and shrink dynamically, and they have characteristics that make them compatible with C, so that you can easily send their contents to a pure C function and allow C functions to treat them as C objects.

I recently got a question on how to append bytes from a C array to a std::vector and proposed the traditional approach:

unsigned char *ptr = ...;
size_t size = ...;
std::vector<unsigned char> my_vec = ...;
my_vec.insert(my_vec.end(), ptr, ptr + size);

Another alternative is to re-size the vector and use standard memcpy to handle this, like this:

unsigned char *ptr = ...;
size_t size = ...;
std::vector<unsigned char> my_vec = ...;
my_vec.resize(my_vec.size() + size);
memcpy(&my_vec[my_vec.size() - size], ptr, size);

The obvious question is: which one is faster? This also led me to check around for what other techniques were proposed, and I found a post on StackOverflow showing some alternatives. I'll just list four of them, since I am only concerned with performance and not coding style issues.

1. The array is copied using std::vector::insert:

   vec.insert(vec.end(), ptr, ptr + size);

2. The vector is resized and then memcpy is used to copy the array:

   vec.resize(vec.size() + size);
   memcpy(&vec[vec.size() - size], data, size);

3. A std::back_inserter for the vector is created and std::copy is used to copy the array:

   std::copy(data, data + size, std::back_inserter(vec));

4. A std::back_inserter for the vector is created and std::copy is used to copy the array. To optimize performance, space is reserved using a reserve call:

   vec.reserve(vec.size() + size);
   std::copy(data, data + size, std::back_inserter(vec));

It can be a little hard to figure out which one is faster when you have a diagram like the one in Diagram 1, so instead you can use the traditional method of smoothing out the data by showing the cumulative time instead.
This means that small peaks and drops in performance will affect the total time less. The result of smoothing the data this way can be seen in Diagram 2. From the diagrams, you can see that the easiest method, std::vector::insert, performs slightly faster. So in addition to being the clearest and easiest to read, it is also the fastest. As a closing remark, you can have a look at Diagram 3 showing the cumulative performance of the std::copy approach using std::back_inserter. It is really a huge difference compared to the other methods. Note that approach 4, where you add a call to reserve, actually does not complete when run overnight, so that is really bad. Try it yourself and see what happens.

- Do not use std::back_inserter unless it is a few items and you need it for other reasons.
- Copying C arrays into std::vector should use std::vector::insert.

The code

The code generates output for gnuplot to print a nice PNG diagram, so if you run the program and save the output in copy_memory.dat you can use the following gnuplot script to generate the diagram:

set terminal png linewidth 2
set output 'diagram-640.png'
set key on left top box
set xlabel 'Loop Count'
set xtics rotate by 45 right
set ylabel 'Accumulated Seconds' rotate by 90
set style data with linespoint smooth cumulative
plot 'copy_memory.dat' index 'insert-640' title 'insert', \
     '' index 'back-640' title 'std::back_inserter', \
     '' index 'memcpy-640' title 'memcpy'

The code is available below and, as you can see, the std::copy variant with a call to reserve is commented out. You can try it if you like, but you need to un-comment it. I also accumulate the array and compute a sum which I print, to prevent the optimizer from doing anything fancy like removing all the code because the result is not used. The actual sum can be used to check that the different algorithms do the same thing, but apart from these reasons, it serves no purpose.
The code below was compiled with:

g++ -O2 -Wall -Wno-uninitialized -ansi -pedantic -std=c++0x copy_memory.cc -o copy_memory

#include <algorithm>
#include <cstdlib>
#include <cstring>
#include <functional>
#include <iostream>
#include <memory>
#include <numeric>
#include <stdint.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <vector>

double operator-(rusage const& a, rusage const& b)
{
  double result = (a.ru_utime.tv_usec - b.ru_utime.tv_usec) / 1.0e6;
  result += (a.ru_utime.tv_sec - b.ru_utime.tv_sec);
  return result;
}

template <class Func>
double measure(Func func)
{
  rusage before, after;
  getrusage(RUSAGE_SELF, &before);
  func();
  getrusage(RUSAGE_SELF, &after);
  return (after - before);
}

typedef std::function<void(std::vector<unsigned char>&, unsigned char*, size_t)> CopyF;

void do_measure(const char *name, int array_size, int max_loop_count, CopyF do_copy)
{
  auto gen = []() {
    static int i = 0;
    if (i > 100)
      i = 0;
    return ++i;
  };

  // Fill the source array. It will define the batch size as well.
  unsigned char *data = new unsigned char[array_size];
  std::generate(data, data + array_size, gen);
  printf("# %s-%d\n", name, array_size);
  for (int loop_count = 40000 ; loop_count < max_loop_count ; loop_count += 20000)
  {
    std::vector<unsigned char> vec;
    double result = measure([&]() {
      for (int n = 0 ; n < loop_count ; ++n)
        do_copy(vec, data, array_size);
    });
    unsigned int sum = std::accumulate(vec.begin(), vec.end(), 0,
                                       [](unsigned char x, unsigned int sum) {
                                         return sum + x;
                                       });
    printf("%d %06.4f\t# vector size is %d, sum is %u\n",
           loop_count, result, vec.size(), sum);
  }
  printf("\n\n");
}

int main()
{
  const int max_loop_count = 500000;
  const int max_array_size = 8 * 128;
  for (unsigned int array_size = 128 ; array_size <= max_array_size ; array_size += 128)
  {
    auto insert_func = [](std::vector<unsigned char> &vec, unsigned char *data, size_t size) {
      vec.insert(vec.end(), data, data + size);
    };

    auto memcpy_func = [](std::vector<unsigned char> &vec, unsigned char *data, size_t size) {
      vec.resize(vec.size() + size);
      memcpy(&vec[vec.size() - size], data, size);
    };

    auto back_func = [](std::vector<unsigned char> &vec, unsigned char *data, size_t size) {
      std::copy(data, data + size, std::back_inserter(vec));
    };

#if 0
    auto backres_func = [](std::vector<unsigned char> &vec, unsigned char *data, size_t size) {
      vec.reserve(vec.size() + size);
      std::copy(data, data + size, std::back_inserter(vec));
    };
#endif

    do_measure("insert", array_size, max_loop_count, insert_func);
    do_measure("memcpy", array_size, max_loop_count, memcpy_func);
    do_measure("back", array_size, max_loop_count, back_func);
#if 0
    do_measure("backres", array_size, max_loop_count, backres_func);
#endif
  }
}
http://thewayofc.blogspot.com/2014/05/
#include <TFT.h>

#define cs 10
#define dc 8
#define rst 9

TFT screen = TFT(cs, dc, rst);

void setup() {
  // put your setup code here, to run once:
  screen.begin();
  screen.background(255, 255, 255);
  screen.stroke(50, 50, 50);
  screen.setTextSize(1);
  screen.text("So then this would be how\nI would have to make a \ndocument to make sure that\neverything is on its own\nline.", 5, 5);
}

void loop() {
  // put your main code here, to run repeatedly:
}

String wrap(String s, int limit) {
  int space = 0;
  int i = 0;
  int line = 0;
  while (i < s.length()) {
    if (s.substring(i, i + 1) == " ") {
      space = i;
    }
    if (line > limit - 1) {
      s = s.substring(0, space) + "~" + s.substring(space + 1);
      line = 0;
    }
    i++;
    line++;
  }
  s.replace("~", "\n");
  return s;
}

Your code was just what I needed! Thank you!
https://forum.arduino.cc/index.php?topic=623315.0
Hello, I have an action in my controller called 'reply'. I would like to be able to select a particular RJS file to execute based on the results of some logic. In the simplest terms, this is what I'd like to accomplish:

def reply
  if case == a
    [use reply_a.rjs]
  else
    [use reply_b.rjs]
  end
end

Now Rails has a convention that says, "if no render statement is defined, look for an .rhtml or .rjs file to execute based on the name of the action"; i.e. reply.rjs. I can't seem to find a particular command that will allow me to specify which RJS to use. It seems that such a command should exist, but I am too bone-headed to find it. (Also, just curious, does anyone know of a way to do logic inside an RJS template?)
https://www.ruby-forum.com/t/how-do-i-override-the-default-rjs-template/70764
* calling submit from a thread pool in the same pool it belongs to

Rodrigo Bossini, Ranch Hand, Joined: Jul 03, 2009, Posts: 113 — posted Sep 16, 2012 12:30:21

Hi, I have an ExecutorService which I'm using to solve a recursive problem. Initially I call submit on this pool, passing it the entire task it has to solve. The task class implements Callable and its call method makes some calculations and then breaks the problem into two pieces, which I want to solve using other threads on the same pool. So in the task constructor I pass a reference to the same pool, so its call method can call submit on the same pool and the two pieces can be solved by threads on the pool as well. My problem is that I want to call "get" to wait for the results. Obviously, if I have just one thread on the pool, for example, it will call submit on the pool and then get, but it will wait forever since there are no other threads to run the task it submitted. So, the bigger my problem is, the more threads I'll need in my pool. I have "solved" the problem by using a CachedThreadPool, which gave me another problem: it creates threads on demand, which is not what I want either. I want a pool with a fixed number of threads. Can you give a suggestion?

I see wind mills

Stephan van Hulst, Bartender, Joined: Sep 20, 2010, Posts: 3649 — posted Sep 16, 2012 18:46:09

Yes, you can't do this. Let's take a merge-sort as a typical example of this problem. Here's an implementation that looks reasonable, but is broken:

static <T extends Comparable<? super T>> void sort(T[] array, int start, int end) {
    if (start < 0 || end > array.length || start > end)
        throw new IndexOutOfBoundsException();

    int length = end - start;
    int halfway = length / 2 + start;

    if (length < 2)
        return;

    Future<?> future = executor.submit(new Runnable() {
        public void run() {
            sort(array, start, halfway);
        }
    });

    sort(array, halfway, end);
    future.get();
    merge(array, start, halfway, end);
}

Since sorting the two halves of the array are disjoint problems, it stands to reason we can sort them concurrently; the first half as a new task for the thread pool, the second half by the current thread. When the current thread is done sorting, it waits for the thread pool to finish sorting the first half, and then it merges the two halves. There's a problem with this approach though. Since the sorting is done recursively, each call will submit half of its part of the array to the thread pool. Sorting an array of length 8 means submitting 7 tasks to the thread pool. If the pool never has more than 6 threads available, it's likely the program will deadlock because threads in the pool will wait for deeper tasks to complete, which will never complete because there are no threads in the pool available to execute them. Instead, you should tell the method the maximum amount of tasks it may submit, and halve the number for each recursive call. If a method call may not submit any more tasks to the thread pool, it should sort both halves in the current thread. Another solution is to communicate with the thread pool in a synchronized way to find out if there are idle threads that can execute a new task, but this may actually end up slowing down the entire process.

static <T extends Comparable<? super T>> void sort(T[] array, int start, int end, int tasksAllowed) {
    if (start < 0 || end > array.length || start > end)
        throw new IndexOutOfBoundsException();

    int length = end - start;
    int halfway = length / 2 + start;

    if (length < 2)
        return;

    Future<?> future = null;
    if (tasksAllowed > 0) {
        tasksAllowed--;
        future = executor.submit(new Runnable() {
            public void run() {
                sort(array, start, halfway, tasksAllowed/2);
            }
        });
    }
    else
        sort(array, start, halfway, 0);

    sort(array, halfway, end, tasksAllowed/2);

    if (future != null)
        future.get();

    merge(array, start, halfway, end);
}

Disclaimer: I didn't compile or test any of the code in this post, so there may be loads of errors.

Stephan van Hulst, Bartender — posted Sep 16, 2012 20:42:54

Here is some tested code that performs merge-sort using multiple threads. You specify a file, and optionally the number of threads and an insertion-sort threshold. The program breaks the file up in tokens, sorts them and determines how long it took. You can specify the minimum size that sub-arrays need to be in order to use merge-sort, else insertion-sort is used; a threshold of 1 means a pure merge-sort. On my system, using 8 threads and an insertion threshold of 16, it takes around 250 ms to sort about a million strings. This is almost twice as fast as using Arrays.sort() to sort the same tokens.

import java.io.*;
import java.util.*;
import java.util.concurrent.*;

final class MergeSort implements AutoCloseable {

    private final int insertion, threads;
    private final ExecutorService executor;

    MergeSort(int insertion, int threads) {
        if (insertion < 1 || threads < 1)
            throw new IllegalArgumentException();

        this.insertion = insertion;
        this.threads = threads;

        if (threads > 1)
            executor = Executors.newFixedThreadPool(threads - 1);
        else
            executor = null;
    }

    synchronized <T extends Comparable<? super T>> void sort(T[] array, int start, int end) {
        sort(array, start, end, threads - 1);
    }

    private <T extends Comparable<? super T>> void sort(final T[] array, final int start, final int end, int tasksAllowed) {
        if (start < 0 || end > array.length || start > end)
            throw new IndexOutOfBoundsException();

        final int length = end - start;
        final int halfway = length / 2 + start;

        if (length <= insertion) {
            inssort(array, start, end);
            return;
        }

        Future<?> future = null;
        if (tasksAllowed > 0) {
            final int tasks = --tasksAllowed;
            future = executor.submit(new Runnable() {
                public void run() {
                    sort(array, start, halfway, tasks/2 + tasks%2);
                }
            });
        }
        else
            sort(array, start, halfway, 0);

        sort(array, halfway, end, tasksAllowed/2);

        try {
            if (future != null)
                future.get();
        } catch (InterruptedException | ExecutionException ex) {
            throw new AssertionError();
        }

        merge(array, start, halfway, end);
    }

    private <T extends Comparable<? super T>> void merge(T[] array, int start, int halfway, int end) {
        T[] temp = Arrays.copyOfRange(array, start, halfway);

        int i = 0;
        int j = halfway;
        int k = start;

        while (i < temp.length && j < end) {
            T left = temp[i];
            T right = array[j];
            if ((left == null) || (right != null) && (left.compareTo(right) <= 0)) {
                array[k++] = left;
                i++;
            }
            else {
                array[k++] = right;
                j++;
            }
        }

        while (i < temp.length)
            array[k++] = temp[i++];
    }

    private <T extends Comparable<? super T>> void inssort(T[] array, int start, int end) {
        for (int i = start + 1; i < end; i++) {
            T toInsert = array[i];
            boolean done = false;
            for (int j = i - 1; j >= start; j--) {
                T element = array[j];
                if ((element == null) || (toInsert != null) && (toInsert.compareTo(element) >= 0)) {
                    array[j+1] = toInsert;
                    done = true;
                    break;
                }
                else
                    array[j+1] = element;
            }
            if (!done)
                array[start] = toInsert;
        }
    }

    public void close() {
        if (executor != null)
            executor.shutdown();
    }

    public static void main(String... args) throws Exception {
        if (args.length < 1) {
            System.out.println("Usage: MergeSort <filename> {<number of threads> {<insertion threshold>}}");
            return;
        }

        List<String> list = new ArrayList<>();
        try (Scanner scanner = new Scanner(new File(args[0]))) {
            while (scanner.hasNext())
                list.add(scanner.next());
        }

        int threads = 1;
        if (args.length >= 2)
            threads = Integer.parseInt(args[1]);

        int insertion = 1;
        if (args.length >= 3)
            insertion = Integer.parseInt(args[2]);

        try (MergeSort sorter = new MergeSort(insertion, threads)) {
            System.out.printf("%nSorting %d elements using %d thread%s.%n",
                    list.size(), threads, threads > 1 ? "s" : "");
            System.out.printf("Insertion-sort used for subarrays of length %d or less.%n", insertion);

            for (int i = 0; i < 3; i++) {
                String[] strings = list.toArray(new String[0]);
                long begin = System.currentTimeMillis();
                sorter.sort(strings, 0, strings.length);
                long time = System.currentTimeMillis() - begin;
                System.out.printf("  %d ms%n", time);
            }
        }
    }
}

Edward Harned, Ranch Hand, Joined: Sep 19, 2005, Posts: 291 — posted Sep 17, 2012 07:27:37

What a lot of work, but it's already been done. If you're using Java7, then look at the Fork/Join framework therein. If you're not using Java7 or you want a superior product then look at the Fork/Join product: TymeacDSE. Ed's latest article: A Java Parallel Calamity

I agree. Here's the link:
http://www.coderanch.com/t/592718/threads/java/calling-submit-thread-pool-pool
I just downloaded and installed the newly released Flex SDK 4.5 Build 19786. However, when I try to export a release build, an error pops up: "Error creating AIR file: MyApp-app.xml: error 305: Initial window content SWF version 11 exceeds namespace version". Any idea what this means and how to fix it? The issue does not occur with any build prior to this one.

Change the -app.xml to use 2.6 instead of 2.5

I changed it to 2.6 then tried exporting again. If I choose the AIR installer and click Finish, nothing happens. However, if I choose the native installer, it throws an error: Error creating native installer file: <path>: error 102: Invalid namespace

You may have to wait for the "official" I6 preview. There are SDK dependencies on AIR 2.6 which probably isn't in the Flash Builder you have.

That's a shame. Is AIR 2.6 required to use this SDK release? Is there any news of the release of AIR 2.6 or the I6 Preview?

I created a bug report at the time I created this post. Just had a response from Joan. She suggested checking my air-config.xml file to make sure the SWF Version was set to 10 and not 11. It was set to 10. However, in the same folder was the flex-config.xml file. The SWF Version in this file was set to 11, so I changed it to 10, cleaned my project and tried exporting again... It worked! Bug report is here:

I noted this in the bug, but thought I'd mention it in the forum too. This problem happens when using SDK build 19786 with Flash Builder 4. For some reason, the projects were using flex-config.xml even for AIR projects. In Flash Builder 4.5, we just use air-config.xml for AIR projects, so this build works correctly. In this particular SDK build, we have set the swf version to 11 in flex-config.xml to support Desktop projects that are targeting Flash Player 10.2. However, we are setting swf version to 10 in air-config.xml for AIR projects because that is all AIR 2.5 supports.
When we officially release the Flex SDK, AIR 2.6 will be public and we will change everything to use swf version 11. For now, if you are using Flash Builder 4 with stable build 19786 with AIR projects, you will need to change flex-config.xml to use swf version 10. Thanks for pointing this out. Joan
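For anyone hunting for the setting, the entry involved is the SWF version element in the SDK's frameworks/flex-config.xml. This fragment is a rough illustration from memory rather than a copy of the actual file, so verify the element name and its location in your own SDK build:

```xml
<!-- frameworks/flex-config.xml: set this back to 10 for AIR 2.5 projects -->
<swf-version>10</swf-version>
```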
https://forums.adobe.com/thread/787808
Search: Search took 0.01 seconds. Is it possible to include a 3rd party control without using iFrame?Started by nvt.meister, 18 Nov 2013 5:32 PM - Last Post By: - Last Post: 19 Nov 2013 6:50 AM - by nvt.meister How to include a 3rd party control without using iFrame?Started by nvt.meister, 15 Nov 2013 7:27 PM Controls in itemTplStarted by Dave Miller, 4 Sep 2013 8:14 AM Decoupling view elements from controllers - Any help?Started by jfarribillaga, 6 Aug 2013 5:31 PM - Last Post By: - Last Post: 11 Aug 2013 4:12 PM - by mitchellsimoens Control controller doesnt work revision control best practice for projects generated using "sencha generate" A Real Custom Form Field Example Needed Using control() function to attach even to component with namespaced alias Controller control: and dom element listeners - Last Post By: - Last Post: 19 Dec 2012 7:30 AM - by mitchellsimoens Component query on buttons works with title but not with id or action - Last Post By: - Last Post: 16 Dec 2012 7:02 AM - by mitchellsimoens Audio : setting a playing time or position HOW TO Override Ext.app.Controller control() methodStarted by steffenbrem, 24 Jul 2012 1:02 AM - Last Post By: - Last Post: 24 Jul 2012 1:10 AM - by steffenbrem ExtJs 4.1 and control in controllersStarted by sneakyfildy, 24 Jul 2012 12:35 AM - Last Post By: - Last Post: 25 Jul 2012 4:48 AM - by sneakyfildy how to remove the controller function? the application.Eventbus? how to do a complex query in the refs property of a controller? Passing a Container's custom click event from a view to a controller xml fields control and view Unable to get ref view from controller How to reference store and define control callback in controller? status on reassigning control to refs in controllers after destruction - Last Post By: - Last Post: 9 Sep 2012 8:12 AM - by BostonMerlin Query for an element in a template Render and afterrender in ExtJS 4.1 don't work Results 1 to 25 of 38
http://www.sencha.com/forum/tags.php?tag=control
05 March 2013 21:50 [Source: ICIS news] HOUSTON (ICIS)--The drop in prices for polyethylene terephthalate (PET) and corresponding feedstocks in Asia last week was sure to place downward pressure on Latin American markets in the near future, sources said on Tuesday. No immediate changes were evident in domestic or import business in the Americas during the first week of the month, but last week's weakening in PET and feedstock markets in Asia will affect resin prices in Latin America in the near future, regional sources said. PET prices in Asia fell by as much as $30/tonne (€23/tonne) and feedstock paraxylene (PX) by as much as $97/tonne during the week that ended on 1 March. Even if the downtrend in Asia continues, the effects would generally not be felt in markets in the Americas until a month or two later, the sources said. Dynamics in Asia would probably become evident first in non-producing countries along the Pacific coast of South America, which rely on imports from Asia, the US and Mexico. PET prices in producing countries in Latin America would lag direction from Asia by several weeks and possibly by as much as a couple of months, according to participants. A proposed 2 cent/lb ($44/tonne) hike in Mexico was rescinded, and March PET business is being concluded at a rollover from February at $2,180-2,280/tonne DEL (delivered). March PET prices are expected to remain flat from February in Argentina, Colombia and Mexico. However, a $40/tonne price increase announced for March in Brazil was still under discussion, according to participants. Demand for PET was steady in South America, especially in Argentina and Brazil, even as the high season for bottled drinks was ending. Demand was improving in Mexico in regions where the weather was warming, sources said.
http://www.icis.com/Articles/2013/03/05/9646887/asia-places-downward-pressure-on-latin-america-pet.html
Step 10: Wiring and Completion!

I'll tell you how everything is connected one component at a time. Refer to the pictures to get a better idea of how it should look (if it's a rats nest of wires... you're on the right track).

Motor battery: Hook up positive and negative to the positive and negative power rails on the far side of the breadboard (see picture 2).

Grounds: Take a jumper wire and attach the grounds of both power rails together (see picture 3). IMPORTANT, DON'T FORGET

5 volt power supply: Attach a jumper from 5V on the Arduino to the power rail on the breadboard closest to the Arduino.

Servo motors: Attach the left motor with the red and black wires going to the 5 volt power rail and ground respectively. Then, attach the white wire of the left servo to pin 10, the right one to pin 11, and the PING))) sensor panning servo to pin 6.

PING))) sensor: Either use the included cable and attach male-to-male jumpers to that, or use separate male-to-female jumpers to make the following connections: connect the pin on the PING))) that says 5V to the 5 volt rail... connect the pin that is labelled GND to either ground rail... and finally connect the one labelled "SIG" to pin 7 on the Arduino.

I think that's everything... well, if it doesn't work you can yell at me in the comments because I probably forgot something. The wiring will look like a total rats nest, if you want it to look a little neater you can use male to male header pins like the setup I used in the video.

UPDATE: I have made a mistake, thanks to faroos7 for pointing it out! Also, in the code you'll notice a variable named irPin, which is set to 0. That's part of another project, and you can feel free to delete that variable.

To get her going, download the attached Arduino program, run it, and it should have behavior similar to that in the video at the beginning of this Instructable. If it works, congratulations, you're done! If not, tell me the problem in the comments and I'll do my best to help you.
If you want to do more experiments with Arduino on the breadboard, check out sciguy14's Arduino tutorials on Youtube! They're very well done, and they're how I learned originally. I hope my Instructable helped you, this is my first one and so I appreciate any feedback you can give. Please vote for me in the Arduino contest as well! Bye!

Thanks so much John

I was thinking about making this awesome robot. But after reading the whole article and the code I got confused with the operation of the two servos that drive the wheels. As far as I know, a continuous rotation servo has lost its control over the amount of degrees to rotate. Then how can the wheel servo rotate 90º and 180º?

/* MAEP 2.0 Navigation by Noah Moroze, aka GeneralGeek
   This code has been released under an Attribution-NonCommercial-ShareAlike license, more info at
   PING))) code by David A. Mellis and Tom Igoe */
#include <Servo.h>

Just got a quick question, I am not entirely sure about the 5v power rail you mentioned. So I got the 3 servo motors and the ping sensor that all need 5v. Should I be connecting them all to the battery 5v rail or the Arduino 5v rail that you said to connect to the breadboard? Thanks again!

Though I much prefer your code where the ping servo rotates to choose the best path. I shall now try and combine both codes to suit my robot :)

I spent a while trying to combine the codes but I kept getting so many errors, but after I tried what you sent me, it works perfectly. Just wondering, do I change the Ranging(CM) to an actual number like Ranging(15)? Just to be sure. I have just ordered the right batteries and I must also get a new servo motor as for some reason it gets really hot but it no longer moves. I will be sure to upload photos and a vid to show you :) Thanks again.

In the beginning, with all the variables, you want to create your ultrasonic sensor.
Declare it like this: Ultrasonic ultrasonic( trigPin, echoPin); Of course, make trigPin and echoPin whatever pins you plugged them into on the Arduino. Now, this library is actually quite handy, so you can delete the whole ping() function I wrote. Find the code that looks like: void ping() { some stuff in here } and delete all that. Then, where it says int distanceFwd = ping();, change it to say int distanceFwd = ultrasonic.Ranging(CM); Good luck, and feel free to ask if you need any more help. I'd like to see your robot when you finish, so if you could post pics/a video of it online that would be nice :) How should i connect them and what should i add to the code to make it work? Thanks :)
http://www.instructables.com/id/How-To-Make-an-Obstacle-Avoiding-Arduino-Robot/step10/Wiring-and-Completion/C2S82VJGZPZMG7Y
Find Questions & Answers Can't find what you're looking for? Visit the Questions & Answers page! Hi I have a SOAP to REST Scenario with the REST receiver adapter with the following configurations. 1. Data format - XML - As it is xml format, I had to use the AF_Modules/MessageTransformBean with Transform.ContentType = application/xml in the modules tab to strip of the interface and interface namespace tags. 2. Dynamic endpoint URL Source datatype has "id" which will not be passed to the Target application. Dynamic attributes have been set for this field and I am able to see the id value in the dynamic configuration. However the id value is not reflected in the rest receiver adapter processing . Under the parameters tab -->pattern value replacement, Adapter specific attribute is selected with the custom attribute url pattern - Pattern element name - id Attribute Name - I get with an error "placeholder for id is missing or empty. As it is a data format of XML, Have I got something to add up in the HTTP headers for the adapter to understand the attribute in the dynamic config. The same is the case when I try to do an xpath for ordernumber . I end with error "XPath expression is incorrect". however when I test the XPATH in any editor, it is fine. Please advise
https://answers.sap.com/questions/338784/rest-adapter-pattern-value-replacement.html
> Hi, I am following a tutorial that uses JavaScript to make a car's controls active after a countdown. I have used the exact same code as in the tutorial, but when I try to add it to an object I get an error message saying 'Script Needs To Derive From MonoBehaviour'.

Script:

var CarControl : GameObject;

function Start() {
    CarControl.GetComponent("CarController").enabled = true;
}

- sorry for the bad post, I have not used this before and don't know how to lay out the code.

Are you sure you have created a file with the JS extension? Not with the CS one? Keep in mind that UnityScript is not supported anymore since Unity 2018. You will have to use C# instead or roll back to an older version of Unity.

Answer by Juzper · Sep 01, 2018 at 09:50 AM

When you open your script, you should see this: "public class (Script's name): MonoBehaviour {". Change (Script's name) to the JS script's name and save. They need to be identical for the script to function properly.

I now have

public class CarControlActive: MonoBehaviour {
    var CarControl : GameObject;

    function Start() {
        CarControl.GetComponent("CarController").enabled = true;
    }

But it still has the error

This is the declaration of a C# class. If breddybabson uses UnityScript, it.
https://answers.unity.com/questions/1548607/script-needs-to-derive-from-monobehaviourjs-script.html
Use an Arduino to make a range finder that measures distance using ultrasonic technology.

How Does the Arduino Ultrasonic Range Finder Work?

16x2 Alphanumeric Display (LCD):

#include "NewPing.h"
#include "LiquidCrystal.h"

#define trig 0
#define echo 13
#define maximum 200

int usec;
int cm;
float inch;

NewPing sonar(trig, echo, maximum);
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);
}

void loop() {
  lcd.clear();
  lcd.setCursor(2, 0);
  lcd.print("Range Finder");
  usec = sonar.ping();
  cm = usec / 58;
  inch = usec / 58 / 2.54;
  lcd.setCursor(0, 1);
  lcd.print(cm);
  lcd.print("cm");
  lcd.setCursor(7, 1);
  lcd.print(inch);
  lcd.print("inch");
  delay(250);
}

The <NewPing.h> library can be downloaded as a zip file. Download this zip file, unzip it into a folder, name it NewPing, and open the Arduino software/Sketch tab/Include Library/Add .ZIP Library/Choose the Zip file, and upload the program to your Arduino board.

For communicating with the ultrasonic range finder module, the <NewPing.h> library is used. The jobs of sending the 10 uS trigger pulse, waiting for the echo and measuring the width of the echo etc. are done by the library. Just one line of code, usec = sonar.ping(), will make the Arduino do all the jobs said above, and the width of the echo pulse in microseconds will be stored in the variable usec. Dividing the pulse width in uS by 58 will give the distance in cm, and dividing the pulse width in uS by 148 will give the distance in inches. An "if – else" loop is used for selecting the unit according to the position of the SPDT selector switch.

That's about it! All you need to do now is make an enclosure. Package all the electronics neatly into a sturdy box of your liking and you have your very own Arduino ultrasonic range finder. I have installed all my electronics into a spare wooden box I had lying around, here is a picture of it. I arranged the ultrasonic sensor on one side and the display on the other.
All I need to do now is aim and measure! Watch the video below to see the Arduino ultrasonic range finder in action!
https://maker.pro/arduino/tutorial/how-to-make-an-arduino-uno-ultrasonic-range-finder
bind function

The bind function associates a local address with a socket.

Syntax

int bind(
  SOCKET         s,
  const sockaddr *name,
  int            namelen
);

Parameters

- s [in] A descriptor identifying an unbound socket.
- name [in] A pointer to a sockaddr structure of the local address to assign to the bound socket.
- namelen [in] The length, in bytes, of the value pointed to by the name parameter.

Return value

If no error occurs, bind returns zero. Otherwise, it returns SOCKET_ERROR, and a specific error code can be retrieved by calling WSAGetLastError.

Remarks

The bind function is required on an unconnected socket before subsequent calls to the listen function. It is normally used to bind to either connection-oriented (stream) or connectionless (datagram) sockets. The bind function may also be used to bind to a raw socket (the socket was created by calling the socket function with the type parameter set to SOCK_RAW). The bind function may also be used on an unconnected socket before subsequent calls to the connect, ConnectEx, WSAConnect, WSAConnectByList, or WSAConnectByName functions before send operations.

When a socket is created with a call to the socket function, it exists in a namespace (address family), but it has no name assigned to it. Use the bind function to establish the local association of the socket by assigning a local name to an unnamed socket. A name consists of three parts when using the Internet address family:

- The address family.
- A host address.
- A port number that identifies the application.

In Windows Sockets 2, the name parameter is not strictly interpreted as a pointer to a sockaddr structure. It is cast this way for Windows Sockets 1.1 compatibility. Service providers are free to regard it as a pointer to a block of memory of size namelen.
The first 2 bytes in this block (corresponding to the sa_family member of the sockaddr structure, the sin_family member of the sockaddr_in structure, or the sin6_family member of the sockaddr_in6 structure) must contain the address family that was used to create the socket. Otherwise, an error WSAEFAULT occurs.

If an application does not care what local address is assigned, specify the constant value INADDR_ANY for an IPv4 local address or the constant value in6addr_any for an IPv6 local address in the sa_data member of the name parameter. This allows the underlying service provider to use any appropriate network address, potentially simplifying application programming in the presence of multihomed hosts (that is, hosts that have more than one network interface and address).

For TCP/IP, if the port is specified as zero, the service provider assigns a unique port to the application from the dynamic client port range. On Windows Vista and later, the dynamic client port range is a value between 49152 and 65535. This is a change from Windows Server 2003 and earlier, where the dynamic client port range was a value between 1025 and 5000. The maximum value for the client dynamic port range can be changed by setting a value under the following registry key:

HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

The MaxUserPort registry value sets the value to use for the maximum value of the dynamic client port range. You must restart the computer for this setting to take effect.

On Windows Vista and later, the dynamic client port range can be viewed and changed using netsh commands. The dynamic client port range can be set differently for UDP and TCP and also for IPv4 and IPv6. For more information, see KB 929851.

The application can use getsockname after calling bind to learn the address and the port that has been assigned to the socket.
If the Internet address is equal to INADDR_ANY or in6addr_any, getsockname cannot necessarily supply the address until the socket is connected, since several addresses can be valid if the host is multihomed. Binding to a specific port number other than port 0 is discouraged for client applications, since there is a danger of conflicting with another socket already using that port number on the local computer.

Note: When using bind with the SO_EXCLUSIVEADDRUSE or SO_REUSEADDR socket option, the socket option must be set prior to executing bind to have any effect. For more information, see SO_EXCLUSIVEADDRUSE and Using SO_REUSEADDR and SO_EXCLUSIVEADDRUSE.

For multicast operations, the preferred method is to call the bind function to associate a socket with a local IP address and then join the multicast group. Although this order of operations is not mandatory, it is strongly recommended. So a multicast application would first select an IPv4 or IPv6 address on the local computer, the wildcard IPv4 address (INADDR_ANY), or the wildcard IPv6 address (in6addr_any). The multicast application would then call the bind function with this address in the sa_data member of the name parameter to associate the local IP address with the socket. If a wildcard address was specified, then Windows will select the local IP address to use. After the bind function completes, an application would then join the multicast group of interest. For more information on how to join a multicast group, see the section on Multicast Programming. This socket can then be used to receive multicast packets from the multicast group using the recv, recvfrom, WSARecv, WSARecvEx, WSARecvFrom, or WSARecvMsg functions.

The bind function is not normally required for send operations to a multicast group. The sendto, WSASendMsg, and WSASendTo functions implicitly bind the socket to the wildcard address if the socket is not already bound.
The bind function is required before the use of the send or WSASend functions, which do not perform an implicit bind and are allowed only on connected sockets; this means the socket must have already been bound for it to be connected. The bind function might be used before send operations using the sendto, WSASendMsg, or WSASendTo functions if an application wanted to select a specific local IP address on a local computer with multiple network interfaces and local IP addresses. Otherwise an implicit bind to the wildcard address using the sendto, WSASendMsg, or WSASendTo functions might result in a different local IP address being used for send operations.

Note: When issuing a blocking Winsock call such as bind, Winsock may need to wait for a network event before the call can complete. Winsock performs an alertable wait in this situation, which can be interrupted by an asynchronous procedure call (APC) scheduled on the same thread.

Notes for IrDA Sockets

- The Af_irda.h header file must be explicitly included.
- Local names are not exposed in IrDA. IrDA client sockets, therefore, must never call the bind function before the connect function. If the IrDA socket was previously bound to a service name using bind, the connect function will fail with SOCKET_ERROR.
- If the service name is of the form "LSAP-SELxxx," where xxx is a decimal integer in the range 1-127, the address indicates a specific LSAP-SEL xxx rather than a service name. Service names such as these allow server applications to accept incoming connections directed to a specific LSAP-SEL, without first performing an IAS service name query to get the associated LSAP-SEL. One example of this service name type is a non-Windows device that does not support IAS.

Windows Phone 8: This API is supported.

Examples

The following example demonstrates the use of the bind function. For another example that uses the bind function, see Getting Started With Winsock.
#ifndef UNICODE
#define UNICODE
#endif

#define WIN32_LEAN_AND_MEAN

#include <winsock2.h>
#include <Ws2tcpip.h>
#include <stdio.h>

// Link with ws2_32.lib
#pragma comment(lib, "Ws2_32.lib")

int main()
{
    // Declare some variables
    WSADATA wsaData;

    int iResult = 0; // used to return function results

    // the listening socket to be created
    SOCKET ListenSocket = INVALID_SOCKET;

    // The socket address to be passed to bind
    sockaddr_in service;

    //----------------------
    // Initialize Winsock
    iResult = WSAStartup(MAKEWORD(2, 2), &wsaData);
    if (iResult != NO_ERROR) {
        wprintf(L"Error at WSAStartup()\n");
        return 1;
    }
    //----------------------
    // Create a SOCKET for listening for
    // incoming connection requests
    ListenSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (ListenSocket == INVALID_SOCKET) {
        wprintf(L"socket function failed with error: %u\n", WSAGetLastError());
        WSACleanup();
        return 1;
    }
    //----------------------
    // The sockaddr_in structure specifies the address family,
    // IP address, and port for the socket that is being bound.
    service.sin_family = AF_INET;
    service.sin_addr.s_addr = inet_addr("127.0.0.1");
    service.sin_port = htons(27015);

    //----------------------
    // Bind the socket.
    iResult = bind(ListenSocket, (SOCKADDR *) &service, sizeof (service));
    if (iResult == SOCKET_ERROR) {
        wprintf(L"bind failed with error %u\n", WSAGetLastError());
        closesocket(ListenSocket);
        WSACleanup();
        return 1;
    } else
        wprintf(L"bind returned success\n");

    WSACleanup();
    return 0;
}

Requirements

See also

- Winsock Reference
- Winsock Functions
- connect
- getsockname
- listen
- Multicast Programming
- setsockopt
- SO_EXCLUSIVEADDRUSE
- SOL_SOCKET Socket Options
- sockaddr
- socket
- TCP/IP Raw Sockets
- Using SO_REUSEADDR and SO_EXCLUSIVEADDRUSE
- WSACancelBlockingCall

Build date: 11/16/2013