Send an Email Through the Amazon SES SMTP Interface with Java
This example uses the Eclipse IDE and the JavaMail API to send email through Amazon SES using the SMTP interface.
Before you perform the following procedure, complete the setup tasks described in Before You Begin with Amazon SES and Send an Email Through Amazon SES Using SMTP.
To send an email using the Amazon SES SMTP interface with Java
In a web browser, go to the JavaMail Github page. Under Downloads, choose javax.mail.jar to download the latest version of JavaMail.
Important
This tutorial requires JavaMail version 1.5 or later.
Create a project in Eclipse by performing the following steps:
Start Eclipse.
In Eclipse, choose File, choose New, and then choose Java Project.
In the Create a Java Project dialog box, type a project name and then choose Next.
In the Java Settings dialog box, choose the Libraries tab.
Choose Add External JARs.
Browse to the folder in which you downloaded JavaMail. Choose the file
javax.mail.jar, and then choose Open.
In the Java Settings dialog box, choose Finish.
In Eclipse, in the Package Explorer window, expand your project.
Under your project, right-click the src directory, choose New, and then choose Class.
In the New Java Class dialog box, in the Name field, type AmazonSESSample, and then choose Finish.
Replace the entire contents of AmazonSESSample.java with the following code:
import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

public class AmazonSESSample {

    static final String FROM = "SENDER@EXAMPLE.COM";   // Replace with your "From" address. This address must be verified.
    static final String TO = "RECIPIENT@EXAMPLE.COM";  // Replace with a "To" address. If your account is still in the
                                                       // sandbox, this address must be verified.
    static final String BODY = "This email was sent through the Amazon SES SMTP interface by using Java.";
    static final String SUBJECT = "Amazon SES test (SMTP interface accessed using Java)";

    // Supply your SMTP credentials below. Note that your SMTP credentials are different from your AWS credentials.
    static final String SMTP_USERNAME = "YOUR_SMTP_USERNAME";  // Replace with your SMTP username.
    static final String SMTP_PASSWORD = "YOUR_SMTP_PASSWORD";  // Replace with your SMTP password.

    // Amazon SES SMTP host name. This example uses the US West (Oregon) Region.
    static final String HOST = "email-smtp.us-west-2.amazonaws.com";

    // The port you will connect to on the Amazon SES SMTP endpoint. We are choosing port 25 because we will use
    // STARTTLS to encrypt the connection.
    static final int PORT = 25;

    public static void main(String[] args) throws Exception {

        // Create a Properties object to contain connection configuration information.
        Properties props = System.getProperties();
        props.put("mail.transport.protocol", "smtps");
        props.put("mail.smtp.port", PORT);

        // Set properties indicating that we want to use STARTTLS to encrypt the connection.
        // The SMTP session will begin on an unencrypted connection, and then the client
        // will issue a STARTTLS command to upgrade to an encrypted connection.
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.starttls.required", "true");

        // Create a Session object to represent a mail session with the specified properties.
        Session session = Session.getDefaultInstance(props);

        // Create a message with the specified information.
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress(FROM));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(TO));
        msg.setSubject(SUBJECT);
        msg.setContent(BODY, "text/plain");

        // Create a transport.
        Transport transport = session.getTransport();

        // Send the message.
In AmazonSESSample.java, replace the following values:
SENDER@EXAMPLE.COM—Replace with your "From" email address. You must verify this address before you use it. For more information, see Verifying Email Addresses and Domains in Amazon SES.
RECIPIENT@EXAMPLE.COM—Replace with your "To" email address. If your account is still in the sandbox, you must verify this address before you use it. For more information, see Moving Out of the Amazon SES Sandbox.
In AmazonSESSample.java, replace YOUR_SMTP_USERNAME and YOUR_SMTP_PASSWORD with your SMTP user name and password. (Note that your SMTP credentials are not the same as your AWS credentials.) If you are using an Amazon SES SMTP endpoint other than email-smtp.us-west-2.amazonaws.com, you need to change HOST in AmazonSESSample.java to the endpoint you want to use. For a list of Amazon SES endpoints, see Regions and Amazon SES.
Save AmazonSESSample.java.
To build the project, choose Project and then choose Build Project. (If this option is disabled, then you may have automatic building enabled.)
To start the program and send the email, choose Run and then choose Run again.
Review the program's console output to verify that the sending was successful. (You should see "Email sent!")
Sign in to the email client of the recipient address. You will find the message that you sent.
Hi there! I'm a designer and I'm trying to improve my plugin:
Right now I'm trying to implement the following feature:
— Run "Redraw chart" command
— Inside "Redraw_chart.js" I should run "Line chart" command.
How can I do that?
I tried:
function actionWithType(context,type) {
var controller = context.document.actionsController();
if (controller.actionWithName) {
return controller.actionWithName(type);
} else if (controller.actionWithID) {
return controller.actionWithID(type);
} else {
return controller.actionForID(type);
}
}
actionWithType(context,"MSRunPluginAction").runPluginCommandWithIdentifier("com.sketchapp.examples.chart/lineChart");
But it did not work.
Inside "Redraw_chart.js" I should run "Line chart" command
Since this is your own command, why not just importing it and calling it directly?
mathieudutour
Since I have 13 types of charts (13 commands), each command is its own JS file, and each of those already imports 3 common JS files.
When I try to import a few JS files in "Redraw chart", only the first one works:
function areaChart (context){
@import 'areaChart.js';
};
function barChartHorizontal (context){
@import 'barChartHorizontal.js';
};
function barChartVertical (context){
@import 'barChartVertical.js';
};
I thought the problem was that each JS file contains "@import 'common.js', 'parameters.js'"
mathieudutour if the plugin is someone else's, how do I run it? Can you give me some examples or article links, thank you very much
@pavelkuligin The best solution here is to switch to skpm and modern JS. Don't use the default @import statement for modules - it tends to be a huge source of weird errors and all. 🙂
turbobabr I would really appreciate it if you could send me examples (or links to articles) on how to use modules in modern JS :-)
Have a look at the template of skpm:
With skpm you can do stuff like
import areaChart from './areaChart'
as you would do with regular javascript
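Since the thread's answer is to import your own commands and call them directly, here is a minimal, dependency-free sketch of that approach. All names here (charts, redrawChart, the context shape) are illustrative assumptions; in a real skpm plugin each chart function would live in its own module and be pulled in with `import areaChart from './areaChart'`.

```javascript
// Minimal sketch: dispatch to chart commands directly instead of going
// through actionsController. The chart functions here are stand-ins; in a
// real skpm plugin they would be imported from their own modules.
const charts = {
  area: context => `area chart for ${context.doc}`,
  barHorizontal: context => `horizontal bar chart for ${context.doc}`
}

function redrawChart (context, type) {
  const draw = charts[type]
  if (!draw) throw new Error(`unknown chart type: ${type}`)
  return draw(context)
}

console.log(redrawChart({ doc: 'report' }, 'area'))
```

With this shape, "Redraw chart" never needs actionsController at all — it just calls the same functions the individual commands export.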
Quasar v1 and websockets, any luck?
Hello,
I would like to know if anyone has ever tried this plugin with Quasar:
If yes would you have a good example on how to set it up please?
For now I have created a boot file with:
import VueSocketio from 'vue-socket.io-extended';
import io from 'socket.io-client';
import store from '../store';

// "async" is optional
export default async ({ Vue }) => {
  Vue.use(VueSocketio, io('wss://echo.websocket.org'), { store });
};
I am not sure how to integrate a basic send/receive using Vuex; if you have an example it would be great, thank you.
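For what it's worth, vue-socket.io-extended's Vuex integration works by committing a mutation named `SOCKET_<EVENT>` (the event name uppercased) for every incoming socket event. Below is a dependency-free sketch of that convention so the data flow is visible; the store shape and event names are illustrative assumptions, and a real app would declare the mutation in its Vuex store instead of this plain object.

```javascript
// Simulates how vue-socket.io-extended routes socket events into Vuex:
// an incoming event "message" is committed as the mutation SOCKET_MESSAGE.
const store = {
  state: { messages: [] },
  mutations: {
    SOCKET_MESSAGE (state, payload) {
      state.messages.push(payload)
    }
  },
  commit (type, payload) {
    this.mutations[type](this.state, payload)
  }
}

// Roughly what the plugin does when the server emits "message":
function onSocketEvent (eventName, payload) {
  store.commit(`SOCKET_${eventName.toUpperCase()}`, payload)
}

onSocketEvent('message', 'hello from echo server')
console.log(store.state.messages)
```

Sending stays on the socket instance itself (e.g. `this.$socket.client.emit(...)` in a component); only the receive side flows through the store.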
Description
Most.js is a toolkit for reactive programming. It helps you compose asynchronous operations on streams of values and events, e.g. WebSocket messages, DOM events, etc., and on time-varying values, e.g. the "current value" of an input field.
Most.js alternatives and similar libraries
Based on the "Reactive Programming" category.
Alternatively, view Most.js alternatives based on common mentions on social networks and blogs.
- RxJs (9.4/9.4) — A reactive programming library for JavaScript
- MobX (9.3/8.3, L4) — Simple, scalable state management.
- Cycle.js (7.7/3.5, L1) — A functional and reactive JavaScript framework for predictable code
- Bacon (6.7/2.7) — Functional reactive programming library for TypeScript and JavaScript
- Highland (5.2/0.0, L4) — High-level streams library for Node.js and the browser
- Kefir (4.1/3.6) — A Reactive Programming library for JavaScript
- concent (3.2/8.7) — State management tailored for react; it is simple, predictable, progressive and efficient.
- Refract (2.9/0.0) — Harness the power of reactive programming to supercharge your components
- Cycle.js (react-native) — Cycle.js driver that uses React Native to render
- Dragonbinder (0.9/0.0) — 1kb progressive state management library inspired by Vuex.
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
README
Monadic streams for reactive programming
Starting a new project?
Strongly consider starting with @most/core. It is the foundation of the upcoming most 2.0, has improved documentation, new features, better tree-shaking build characteristics, and simpler APIs. Updating from @most/core to most 2.0 will be non-breaking and straightforward.
Using most 1.x already on an existing project?
You can keep using most 1.x, and update to either @most/core or most 2.0 when you're ready. See the upgrade guide for more information.
What is it?
Learn more
Simple example
Here's a simple program that displays the result of adding two inputs. The result is reactive and updates whenever either input changes.
First, the HTML fragment for the inputs and a place to display the live result:
<form> <input class="x"> + <input class="y"> = <span class="result"></span> </form>
Using most.js to make it reactive (a reconstructed sketch — the original snippet did not survive extraction; the selectors match the HTML fragment above):

import { fromEvent, combine } from 'most'

const xInput = document.querySelector('input.x')
const yInput = document.querySelector('input.y')
const resultNode = document.querySelector('.result')

const toNumber = e => Number(e.target.value)
const renderResult = result => { resultNode.textContent = result }

const add = (x, y) => x + y

const x = fromEvent('input', xInput).map(toNumber)
const y = fromEvent('input', yInput).map(toNumber)

combine(add, x, y).observe(renderResult)
More examples
You can find the example above and others in the Examples repo.
Get it
Requirements
Most requires ES6 Promise. You can use your favorite polyfill, such as creed, when, bluebird, es6-promise, etc. Using a polyfill can be especially beneficial on platforms that don't yet have good unhandled rejection reporting capabilities.
Install
As a module:
npm install --save most
// ES6
import { /* functions */ } from 'most'
// or
import * as most from 'most'
// ES5 var most = require('most')
As window.most:
bower install --save most
<script src="most/dist/most.js"></script>
As a library via cdn :
<!-- unminified --> <script src=""></script>
<!-- minified --> <script src=""></script>
Typescript support:
- If your tsconfig is targeting ES6, you do not need to do anything as typescript will include a definition for Promise by default.
- If your tsconfig is targeting ES5, you need to provide your own Promise definition. For instance es6-shim.d.ts
Interoperability
Most.js streams are compatible with Promises/A+ and ES6 Promises. They also implement the Fantasy Land and Static Land Semigroup, Monoid, Functor, Apply, Applicative, Chain and Monad specifications.
Reactive Programming.
Why most.js for Reactive Programming?
High performance
A primary focus of most.js is performance. The perf test results indicate that it is achieving its goals in this area. Our hope is that by publishing those numbers, and showing what is possible, other libs will improve as well.
Modular architecture.
Simplicity
Aside from making combinators less "obviously correct", complexity can also lead to performance and maintainability issues. We felt a simple implementation would lead to a more stable and performant lib overall.
Integration
Most.js integrates with language features, such as promises, iterators, generators, and asynchronous generators.
Promises

Most.js streams interoperate with promises; for example, you can reduce a stream to a promise:

import { from } from 'most'

// Logs 10, the result of adding 1 + 2 + 3 + 4
from([1, 2, 3, 4])
  .delay(1000)
  .reduce((result, y) => result + y, 0)
  .then(result => console.log(result))
You can also create a stream from a promise:
import { fromPromise } from 'most'

// Logs "hello"
fromPromise(Promise.resolve('hello'))
  .observe(message => console.log(message))
Generators

You can create an event stream from an ES6 generator:

import { from } from 'most'

function* allTheIntegers () {
  let i = 0
  while (true) {
    yield i++
  }
}

// Log the first 100 integers
from(allTheIntegers())
  .take(100)
  .observe(x => console.log(x))
Asynchronous Generators
You can also create an event stream from an asynchronous generator, a generator that yields promises:
import { generate } from 'most'

function* allTheIntegers (interval) {
  let i = 0
  while (true) {
    yield delayPromise(interval, i++)
  }
}

const delayPromise = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms))

// Log the first 100 integers, at 1 second intervals
generate(allTheIntegers, 1000)
  .take(100)
  .observe(x => console.log(x))
The natural choice today for a programming language for writing a new piece of software in the Microsoft environment is C#. For various reasons, some parts of the project may be written in other languages (existing packages, third party, performance, etc.). In this short article, I'll suggest what seems to me like a good, easy to manage, alternative for interoperation between managed and unmanaged code.
Microsoft got jealous of Java and created its own virtual machine, the CLR. Compilation of code in C# generates a machine code suitable for this virtual machine. Microsoft went one step further so that the code of other languages, such as C++, compiles to the same machine language. It is possible therefore to call from a DLL or an executable in C# to a DLL in C++, and vise versa. Each team uses its preferred programming language and it all fits together in the linkage.
Code that runs in the .NET environment is called a managed code. Code that runs as in old days is called unmanaged code. The "management" brings a lot of advantages such as "garbage collection" of memory that was allocated but will never be used again. The disadvantages of the "management" are lesser performance, and the difficulty to interact with existing libraries in unmanaged code.
Microsoft brings a friendly solution which exists to the best of my knowledge only for C++. The solution enables calling from managed code to unmanaged code and vise versa, It Just Works! The only limitation is that there will be no use of "managed" data types in the unmanaged code. Hence it is possible to pass basic data types such as int, double, and data types that were defined as "not managed". In order to use that feature, we'll create the new DLL, or executable, as a C++ project for CLR (managed), and then we'll surround unmanaged code with:
int
double
#pragma unmanaged
#pragma managed
In a project, I defined some classes in C# and created a DLL to hold those. The classes were intended to present a configuration for a real-time program. We decided that the real-time loops and logic will be written in unmanaged C++ code. I've created another DLL which was written in C++ and was also using the CLR. That way the DLL written in C++ could reference the DLL written in C# and use the classes representing the configuration. The program itself wrapping everything was written in C#. Hence the program is familiar with the classes in the C# DLL. It passes them to the C++ DLL, which is also familiar with those classes. There is still one problem left. We need to translate "managed" data types to "unmanaged" data types, in order to call unmanaged code. The work involved here is either trivial or Sisyphean, yet it will be a no-brainer. There is another alternative, which is working with unmanaged data types throughout both the unmanaged and managed code. I suggest that explicit translation is more convenient and self explanatory and is done at the last moment on the border between managed code and unmanaged code.
Following is a code snippet:
public ref class CConfig
{
public:
CConfig (int _a, String ^_str, double _d);
void doTheStuff ();
int m_a;
String ^m_str;
double m_d;
};
#include <stdio.h>
#include <stdlib.h>
#include <vcclr.h>
#include <iostream>

using namespace System;  // needed for String^ and Console
void someMoreInManaged ()
{
Console::WriteLine("C++ Managed – someMoreInManaged");
}
#pragma unmanaged
void doTheStuff1 (int _a, double _d, const wchar_t* const _str) {
    std::cout << "C++ unmanaged – doTheStuff1" << std::endl;
    std::cout << _a << std::endl;
    printf_s("%S\n", _str);
    std::cout << _d << std::endl;
    someMoreInManaged();
}
#pragma managed
CConfig::CConfig (int _a, String ^_str, double _d) {
m_a = _a;
m_str = _str;
m_d = _d;
}
void CConfig::doTheStuff () {
Console::WriteLine("C++ Managed – doTheStuff");
// Pin memory so GC can't move it while
// native function is called – MSDN documentation wchar_t <-> String
pin_ptr<const wchar_t> wch = PtrToStringChars(m_str);
doTheStuff1 (m_a, m_d, wch);
}
This article explains how to check whether a number is a palindrome. First, I will explain what a number is. Then, I will explain what "reverse" means and what a positive integer is. Next, I will explain what a palindrome is and demonstrate how to check whether a number is a palindrome. Finally, I will write the logic of a C program to check whether a number is a palindrome and explain it with its output.
Table of contents:
- What is a string?
- What is a number?
- What is a positive integer?
- What do you mean by Reverse?
- What is palindrome?
- Demonstration to check a number is it palindrome or not.
- Logic to check if a number is palindrome or not
- C program to check number is palindrome
- Explanation of the C program to check number is palindrome with output.
- Conclusion
What is a String?
A string is a combination of characters, which may be digits from 0 to 9 or alphabets from A to Z. The repetition of characters does not matter. For instance, the string Conax uses 5 characters, i.e. C, o, n, a, and x.
What is a number?
A number is also a string, but it includes only digits, a decimal point and some mathematical signs such as -, +, i, etc. A number is an object that uses digits to perform mathematical tasks. Calculus, a branch of mathematics, includes many kinds of numbers such as integers, whole numbers, real numbers, and imaginary numbers. Here, I will discuss numbers that are palindromes.
What is a positive integer?
A positive integer is a combination of digits only. It is also known as a natural number. The sequence from 1 to infinity is known as the natural number sequence. For instance, 323 is a positive integer.
What do you mean by Reverse?
Reverse means to move backward, turned toward the direction opposite to the original. For instance, if our number is 321 then its reverse is 123.
What is a Palindrome?
An integer is a palindrome if the reverse of that number is equal to the original number. In other words, a palindrome is a value whose reversed form is the same as the original. For instance, the reverse of the number 323 is also 323, so we can say that 323 is a palindrome.
Demonstration of checking whether a number is a palindrome
Logic to check whether a number is a palindrome
- Step 1: The user is asked to enter an integer. The number is stored in variable n.
- Step 2: We then assign this number to another variable, originalN.
- Step 3: Then, the reverse of n is computed digit by digit and stored in reversedN.
- Step 4: If originalN is equal to reversedN, the number entered by the user is a palindrome.
C program to check whether a number is a palindrome
#include <stdio.h>

int main() {
    int n, reversedN = 0, remainder, originalN;

    printf("Please! Enter an integer: ");
    scanf("%d", &n);

    originalN = n;

    /* Reverse the digits of n. */
    while (n != 0) {
        remainder = n % 10;
        reversedN = reversedN * 10 + remainder;
        n /= 10;
    }

    if (originalN == reversedN)
        printf("%d is a palindrome.\n", originalN);
    else
        printf("%d is not a palindrome.\n", originalN);

    return 0;
}
The output of the program:

Please! Enter an integer: 323
323 is a palindrome.
Explanation of the C program to check whether a number is a palindrome, with output
In the successful compilation of the program, a message is displayed on the screen: Please! Enter an integer:
The program reads the number entered by the user and stores it in a variable, then computes the number's reverse and stores it in another variable. If the two variables are equal, the entered number is a palindrome; otherwise, it is not.

Conclusion:
A number is a palindrome when it reads the same after its digits are reversed. The C program above performs exactly this check by reversing the input arithmetically and comparing the result with the original number.
Now that DTDs have been made obsolete by their lack of namespace support, XML Schemas (W3C) are the only viable document type definition language for XML going forward.
Schema support should include:
- Schema XML document validation
- Schema editor (eventually a converter to/from DTD)
- XML document editor using Schema for completion / on the fly error checking
Dependencies:
- XML Schema aware parser featuring XNI interface.
- Be able to recognize that an XML document represents an XML schema (by namespace) and handle it accordingly.
Handling specifics:
The parser must know that a document is an XML schema to be able to apply constraints. This can be achieved by using a wrapper document that references the schema. The parser will get that document as the source to parse.
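For illustration, such a schema-referencing instance document could look like the sketch below — the namespace URI, element names, and the orders.xsd file name are all hypothetical:

```xml
<!-- Hypothetical instance document. The parser can recognize it as
     schema-constrained via the declared namespace and schemaLocation. -->
<order xmlns="http://example.com/orders"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://example.com/orders orders.xsd">
  <item quantity="2">Widget</item>
</order>
```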
Must have feature.
In 3.4 timeframe is planned only validation part.
Because you changed Target Milestone this feature was automatically
removed from planned 3.4 features
().
TM is again 3.4, you can file new RFE for Enhanced XML Schema support.
Schema checking and validation implemented.
Other support aspects such as completion, designer, etc. will be a
part of next releases.
implemented in 3.4
but in 4.0 seems not very reliable, I enter new issue for it
the issue was #47479, but this is a dup of an already filed bug
Africa Gets Its Own Web Address (bbc.com) 89
Africa now has the unique web address .africa, equivalent to the more familiar .com, following its official launch by the African Union. From a report on BBC:
Too long, didn't type (Score:2)
Too long, didn't type. Why didn't they just steal ".af" (Afghanistan today, but common abbreviation for Africa)?
Re:Too long, didn't type (Score:5, Funny)
Re:Too long, didn't type (Score:5, Informative)
Re: (Score:3)
Re: (Score:2)
That would be I *bless* the rains down in Africa.
Lol, this is my Dad's favourite song, and for the last 30 years we've been singing 'missed' until Iast year when I was learning to play the song and found the real lyrics. I actually think missed sounds better, as the song has a bit of a sombre tone, about longing and missed opportunities, and missing something huge like the rains in a dry continent sort of resonates with that. Blessed just have the same ring to it.
Re: (Score:2)
Re: (Score:1)
Chrome says:
This site can’t be reached
This site on the company, organisation or school intranet has the same URL as an external website.
Try contacting your system administrator.
ERR_ICANN_NAME_COLLISION
Re: (Score:2)
lynx.africa
Re: (Score:2)
Too long, didn't type. Why didn't they just steal ".af" (Afghanistan today, but common abbreviation for Africa)?
Cause then every domain would be "as fuck", which could possibly cause confusion.
Great! (Score:2)
An easy way to filter out those Nigerian Prince scam emails!
racists (Score:1)
Ouch, just wait till the racists find out. There's going to be some very bad websites out there...
Who exactly are "the racists" you refer to? (Score:2, Insightful)
Who exactly are "the racists" that you're referring to?
Would you consider black Africans who host a website at a
.africa domain that promotes anti-white, anti-Asian, anti-Indian, or anti-Amerindian sentiment, for example, as being among "the racists"?
Re: (Score:1)
Re: (Score:2)
If having their precious vanity domain amuses some people, it certainly won't be the dumbest idea ICANN has dabbled in; but it's hard to make a good case for a TLD that is geographic, rather than vaguely tied to a concept, like
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
It's not the worst new top-level domain. Not even close to the worst.
.science .stream .men .party .top .study .click .gdn .date .webcam .tips .expert .watch .wiki .fail .cool .wtf .xyz .gripe
Same as .com (Score:5, Informative)
Re: (Score:2)
Liquid Crystal Displays
Re: (Score:2)
Lactose Constrained Diets?
"web address" (Score:1)
Good thing you simplified the concept of a TLD to help the Slashdot audience of brain damaged 6 year olds understand that it's "equivalent to the more familiar
.com".
I TOTOly get it but.... (Score:1, Redundant)
So who is going to register Iblesstherainsdownin.africa ?
what nonsense (Score:4, Insightful)
trying to imply there is any kind of unity between the North African Arab countries and the others...yeah right
Re: (Score:3)
TLDs haven't been used properly anyway. It's a waste.
But that's what you get when you have the legacy of an American-built, American-centric system, designed with imperfect foresight, and there's too much invested to wipe and reload.
Re: (Score:2)
we got a good flexible TLD system that people can use in traditional way or with recent additions.
countries have TLD if they want to use them. the USA put those in a long time ago. And plenty of other product/concept specific ones added if anyone wants to be under them which was international effort
70 percent of the global internet traffic is carried through the USA anyway, fine that they had historic heavy influence on it. The USA built something great and useful for the world.
Re: (Score:3)
The USA gave us a dual usage-based / geo-politically based domain system.
It really ought to be have been solely geo-politically based with a byte or two's worth of flags to indicate content type, and domains restricted to appropriate use.
[domain].[state/province].[nation].[super-national grouping]. With tiered DNS that assumes most of that for you if you leave it out. And you know what? Something to distinguish the domain from the other parts so you could have arbitrary numbers of sub-domain categorizati
another TLD to block in Postfix (Score:2)
Since the only thing (network wise) that comes out of Africa is spam and other crap, blocking this will be 100% perfect compression.
I spent 5 minutes trying pronounce that name (Score:2)
Having done so, I can now conclude my reading of TFS with a proud sense of accomplishment, though I never finished it.
also in news (Score:1)
nigeria just got assigned
.scam domain
Enough already with the TLDs (Score:5, Insightful)
I wish them luck, but I'm not sure it makes a lot of sense to be creating yet another top-level domain.
For example, a mobile phone company could create mobile.africa to show its Africa-wide presence, or a travel company could set up travel.africa.
So they'll sell off a few hundred generic words to speculators, but I predict few others will be buying in. Many of the new gTLDs created over the past couple of years are either shutting down, or jacking up domain prices [domainincite.com] into the multi-hundred dollar per year range just to stay in operation. Keeping a TLD alive isn't cheap, and it turns out there's not much demand for all of this namespace after all. When you can't amortize your TLD's infrastructure cost across millions of customers, you wind up having to price each domain so high that nobody's going to buy one.
Re: (Score:2)
Re: (Score:3)
It makes no more or less sense than the
.eu TLD.
It makes a shitload more sense than every other TLD that has come out in the past 3 years.
parked domains (Score:1)
Poor DNS configuration (Score:4, Informative)
# host 0abaa55f4b4b5f8a9a55d1fe33f49a.africa
0abaa55f4b4b5f8a9a55d1fe33f49a.africa has address 127.0.53.53
0abaa55f4b4b5f8a9a55d1fe33f49a.africa mail is handled by 10 your-dns-needs-immediate-attention.africa.
Great, they have some wildcard garbage going on instead of properly returning NXDOMAIN.
Re: (Score:2)
At least their wildcard bullshit points at localhost, which is better than some ad server, or some malware hosting site (but I repeat myself). It would be worse.
Can we use .js for North America? (Score:1)
That's not a web address... (Score:1)
It's a top level domain. Which is pedantic on some level but...sigh. Whatever, this stopped being news for nerds a while ago.
Re: (Score:3)
Here are just a few of the ones I block (plus a few that aren't listed):
moncler
The Summary is Blatently Wrong (Score:5, Interesting).
Re: (Score:3)
I don't know where /. gets their editors, but they're definitely getting dumber and dumber as the years go by.
You are blatantly wrong (Score:4, Informative).
You don't have a clue. A cursory Google search would tell you that it's operated by a South African company (ZACR), which was awarded control by ICANN following a lengthy legal dispute with a Kenyan competitor (DCA).
Re: (Score:2)
. In 10 years 99.999999999% of the domains on this TLD will not even involve an African company or individual.
In 10 years 99.999999999% of the domains on this TLD will not even involve an African company or individual.
.africa domains as there are now. .asia TLD was released and the discussion was had about whether we register a bunch of names to secure them. We decided it was a gimmick and didn't bother, and it turns out everyone else must've thought the same thing. You see the odd .asia domain from time to time, but for the size of the continent, and the amount of business they do, they are almost non-existent.
I was working in China when the
Too much trouble (Score:2)
I guess it was too much trouble to list the fucking domain in the summary, eh?
Obama can finally have a website (Score:2)
I know, I know, but it is funny.
"web address"? (Score:2)
You guys hire complete morons now, huh?
Also, grats on the clickbait tactic of not telling us what the TLD actually is in the headline.
You suck.
Re: (Score:2)
She is his ex-wife.
Separation (Score:1)
I got the following question about a month ago concerning CGI application map (scriptmap) configuration on IIS6.
Hi David, I'm having some issues configuring python CGI scripting for IIS 6. I was wondering if you have a canned response or web link giving detail to configuring Python on IIS 6. If not, I need some help to configure IIS properly. 1. I can add python.exe as a WSE but I cannot get it to be a recognized ext. type. So you have to enable all unknown cgi WSE's. 2. I don't know what settings from IIS 5.0 still apply to IIS 6.0 for python cgi's 3. Do you have a customized iisext.vbs(including parameters) script that successfully configures IIS 6.0 WSE's for python? 4. Are there any incompatibility issues that simply can't be resolved?
In general, you have the following choices to configure Web Service Extensions. Now, you need to be aware that with each configuration choice, you are using a different sort of namespace and slightly different encoding/syntax rules. This is how all systems work no matter the operating system; you just have to learn to live with and adapt to it. Some of the special characters that come into play in this situation are space, quote, and ampersand -- because they are encoded differently in various namespaces and have different meanings. For example, spaces are used to delimit parameters on the commandline, yet are mere whitespace to XML that can be normalized away. Quotes are used to delimit attribute values in XML but delimit commandline parameters containing spaces on the commandline; hence you need to &quot;-escape them in XML if you wish to preserve their meaning. Ampersands delimit commandlines and denote parameter entities in XML, so you need to &amp;-escape them in XML.
iisext.vbs /AddFile "C:\Program Files\Python\Python.exe %s" 1 Python 1 Python
.py -> "C:\Program Files\Python\Python.exe" "%s"
C:\Program Files\Python\Python.exe "%s"
Note where you have quotes (") and where you do/do not have them. Remember that editing metabase.xml requires that you either:
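As an illustration of the &quot;-escaping rule above, the following sketch uses Python's standard library to show how the example scriptmap commandline would have to be encoded inside a quoted XML attribute in metabase.xml. This only illustrates the encoding itself, not what iisext.vbs does internally:

```python
from xml.sax.saxutils import escape

# The scriptmap commandline from the example above.
cmd = '"C:\\Program Files\\Python\\Python.exe" "%s"'

# Inside a quoted XML attribute value, the double quotes themselves
# must be written as &quot; entities (and any bare & as &amp;).
encoded = escape(cmd, {'"': "&quot;"})
print(encoded)
```

Running this prints the commandline with every literal quote replaced by its entity, which is the form the value must take in the XML attribute.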
//David | http://blogs.msdn.com/b/david.wang/archive/2005/04/20/iis6-cgi-web-service-extension.aspx | CC-MAIN-2015-48 | refinedweb | 356 | 68.06 |
:hey,
:
:I'd like to implement device cloning on open, as that would be of great
:use for pcm:
:
:We already have vchans, i.e. we can play multiple sound sources mixed at
:the same time, but programs need to open /dev/dsp0.{0,1,2,3} to do this.
:
:It would be much more convenient if programs just could use /dev/dsp and
:the pcm driver hands out virtual channels as long as they are available.
:
:For this to work, we'd have to change the vnode in the fp. I'm a little
:bit confused how complicated this devices stuff is in the kernel.

    You don't actually have to change the vnode in the fp. It should be
possible to clone and route the correct device with the existing
infrastructure by fixing the VOP_OPEN/VOP_CLOSE code. VOP_OPEN and
VOP_CLOSE are a huge mess because they operate on vnodes that have
already been created (as you probably noticed), and they are an even
bigger mess when you open a device because now the code is trying to
track an open count for shared access in the vnode AND in the device
structure.

:Why do we need to route everything over vnodes anyways? Whatever.
:
:Is there a way to accomplish this? Anybody got some ideas?
:
:cheers
:  simon

    Well, the crux of the problem is system calls like fchown() and
fchmod(), expecting the access and modified times to be updated in the
filesystem for device accesses, and system calls like read() and
write() which you really want to have go direct to the device.

    In my opinion, the real problem is simply an overabundance of
hacks. We have fileops as a hack to bypass the overhead of vnode
operations which in turn are overloaded on top of other vnode functions
(specfs) to convert the vnode I/O calls into device I/O calls due to
the bad design decision of giving the two totally different I/O
mechanisms (e.g. VOP_READ/VOP_WRITE vs VOP_STRATEGY). We have huge
hacks in VOP_OPEN/VOP_CLOSE. It's a real mess.

    If you want to go about solving it I have some suggestions!
    * Make VOP_OPEN/VOP_CLOSE explicit operations and allow them to
      return a different vnode than the one supplied. The idea here is
      that namecache resolution can resolve to a vnode as it currently
      does, but that this is considered DIFFERENT from actually opening
      the file or device, and can be a DIFFERENT vnode from the one you
      get when you actually do an open().

    * The namecache remains unchanged. The vnode stored in the
      namecache is, e.g. the vnode representing the device inode as it
      exists in the filesystem.

    * Have room for both the namespace vnode and the actual opened
      vnode in the file descriptor (struct file in sys/file.h). This
      allows us to get rid of all the specfs hacks that each and every
      filesystem (such as UFS) has to do to merge filesystem operations
      with device operations.

    * Get rid of fileops. Make everything go through the vnode
      subsystem. Sockets, pipes, devices, everything. Have a temporary
      shim for devices so we don't have to rewrite all the device
      drivers as part of this stage of the work (i.e. keep specfs in
      some form or another).

    A file descriptor would represent a vnode, period. Actually two
vnodes: the namespace vnode (i.e. the file/device inode in the
filesystem), and the operations vnode (i.e. the actual open file or
device, which in your case could be a different vnode representing the
'clone'). All operations would run through VOPs. System calls and
filesystem operations such as read or write would first go through the
operations vnode and if the operation is not supported would then back
off to the namespace vnode.

    So, for example, if you open() a device and try to do a fchmod()
of the resulting file descriptor, it would try to do the operation on
the opened device vnode which would return ENOTSUP, then back off and
use the namespace vnode which of course would work. If the two vnodes
happen to be the same (such as for a normal file), it winds up being a
simple degenerate case.
    Namespace-specific system calls such as chown() would only run
through the namespace vnode since there would not be an operations
vnode (or a file descriptor for that matter) for such operations.

    Would you like to take on such a project? I believe it could be
done in mid-sized committable pieces.

						-Matt
					Matthew Dillon
					<dillon@xxxxxxxxxxxxx>
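The fchmod() back-off Matt describes — try the opened (operations) vnode first, and fall back to the namespace vnode when the operation is unsupported — can be modeled with a small sketch. All names and the errno value below are hypothetical stand-ins for illustration, not actual kernel code:

```python
ENOTSUP = 45  # stand-in errno value for "operation not supported"

class Vnode:
    """Hypothetical stand-in for a kernel vnode."""
    def __init__(self, supports_chmod):
        self.supports_chmod = supports_chmod

    def vop_chmod(self, mode):
        # A device vnode would reject chmod; a filesystem inode accepts it.
        return 0 if self.supports_chmod else ENOTSUP

class File:
    """A file descriptor carrying both vnodes, as proposed above."""
    def __init__(self, namespace_vp, operations_vp):
        self.namespace_vp = namespace_vp    # the device inode in the filesystem
        self.operations_vp = operations_vp  # the actually opened device (or clone)

def fchmod_like(fp, mode):
    error = fp.operations_vp.vop_chmod(mode)      # try the opened vnode first
    if error == ENOTSUP:
        error = fp.namespace_vp.vop_chmod(mode)   # back off to the namespace vnode
    return error

inode = Vnode(supports_chmod=True)    # filesystem inode: accepts chmod
clone = Vnode(supports_chmod=False)   # opened device clone: rejects it
print(fchmod_like(File(inode, clone), 0o644))  # 0 -- handled by the namespace vnode
```

If the two vnodes are the same object, the first call simply succeeds and the fallback never runs — the "simple degenerate case" mentioned above.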
Closures in Python
December 2006
From the newsgroup:
Q. I don’t understand why while a nested function perfectly matches the definition of closure, it is not closure simply because it is not used by external world.
Like so many other computing terms, the word “closure” is used in different ways by different people.
Strictly speaking, a closure is simply a function with free variables, where the bindings for all such variables are known in advance. Some early languages didn’t have “closed” functions; the bindings for free variables were left open, and were determined at runtime. And languages that had both “open” and “closed” functions needed some way to distinguish between the two, so people started referring to the latter as “closures”.
But in Python, as well as in most other modern languages, all functions are “closed” — i.e. there are no “open” free variables — so the use of the term has morphed from “a function for which all free variables have a known binding” to “a function that can refer to environments that are no longer active” (such as the local namespace of an outer function, even after that function has returned). And since that is somewhat difficult to implement, and programmers don’t like to hide things that are hard to implement, people still like to use the term to distinguish between closed functions of kind 1 and closed functions of kind 2. As in this [newsgroup] thread, they sometimes argue that when you’re using a closed function of kind 2 in a specific way, it’s not quite as much of a closure as when you use it in another way. Heck, some people even argue that languages that don’t support closed functions of kind 3 (a kind that Python currently doesn’t support) don’t really have closures at all.
But as a language user, you can actually forget about all this — all you need to know is that in Python, all functions are closed, and free variables bind to variable names in lexically nested outer scopes. | http://effbot.org/zone/closure.htm | CC-MAIN-2013-48 | refinedweb | 344 | 60.08 |
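For example, here is a Python function of “kind 2” — the inner function keeps referring to the outer function’s local namespace even after the outer function has returned (the names are just illustrative):

```python
def make_adder(n):
    def add(x):
        return x + n  # n is a free variable, bound in make_adder's scope
    return add

add5 = make_adder(5)   # make_adder has returned, but its binding for n lives on
print(add5(3))         # 8
```

Each call to make_adder produces a distinct closure with its own binding for n.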
Xavier Guardiola wrote:
> Yes that's what it should be, but I forgot to mention that I assign
> different weights to different fields as well as different weights to
> different documents. So I may end up with a doc not having all the terms but
> the highest score.
> That's why I don't see a trivial way of getting the results in the desired
> order (first those with all terms and then the rest)...
Try overriding Similarity.coord(int,int).
You might use something like:
private static double POWER = 3.0;
public float coord(int overlap, int maxOverlap) {
int missing = maxOverlap - overlap; // # of query terms missing
return (float)Math.pow(1.0 / (missing + 1), POWER);
}
Thus, a hit missing one query term would have its score multiplied by
1/8, hits missing two terms would get 1/27th the score, and so on.
Adjust POWER to suit. With high-enough POWER you can pretty much
guarantee that all documents with any missing terms are ranked below any
with all query terms.
Tell me how it goes,
Doug
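Doug's numbers can be sanity-checked by evaluating the same formula outside Java — the snippet below (Python, purely for illustration) mirrors the override above:

```python
POWER = 3.0

def coord(overlap, max_overlap):
    # Same formula as the Java override: (1 / (missing + 1)) ** POWER
    missing = max_overlap - overlap   # number of query terms missing
    return (1.0 / (missing + 1)) ** POWER

print(coord(3, 3))  # 1.0   -- all query terms present
print(coord(2, 3))  # 0.125 -- one term missing: 1/8
print(coord(1, 3))  #       -- two terms missing: 1/27
```

Raising POWER makes the penalty for missing terms steeper, which is how you push partial matches below complete ones.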
---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org | http://mail-archives.apache.org/mod_mbox/lucene-java-user/200305.mbox/%3C3EC55B19.70705@lucene.com%3E | CC-MAIN-2017-13 | refinedweb | 199 | 64.1 |
I totally understand where you’re coming from. It’s always frustrating to start working with a new piece of complex software, because all you want to know is where to start working, and it is often hard to figure out. Obviously documentation needs to be improved to address this. In the meantime, let’s see if I can get you up and running.
First of all, modules exist for two main reasons:
1.) To override some of the core configuration settings

2.) To provide a namespace where you can put code
Most of VuFind 2’s settings are found in module/VuFind/config/module.config.php. This contains all of the rules for routing URLs, constructing resources used within the code, etc., etc.
So, to get this going, you first need to create a local module if you haven’t already. The install.php program included with VuFind 2 can do this for you. Let’s assume you call your module VuFindLocal. Now you should have a couple of important things:
1.) module/VuFindLocal/config/module.config.php, where you can override settings

2.) module/VuFindLocal/src/VuFindLocal, where you can put code namespaced to VuFindLocal – the framework will know how to automatically load this code when it is referenced
With that setup in place, you can simply override the configuration of the Symphony driver. Currently, it looks like this in the VuFind module configuration:
'ils_driver_plugin_manager' => array(

    /* … */

    'invokables' => array(

        /* … */

        'symphony' => 'VuFind\ILS\Driver\Symphony',

    ),

),
This just tells VuFind, “when somebody asks for the symphony ILS driver, create an instance of the VuFind\ILS\Driver\Symphony class.”
So you can more or less copy this configuration into the array in your VuFindLocal module, but change the namespace of the class to VuFindLocal. You’ll end up with:
$config = array(
'ils_driver_plugin_manager' => array(
'invokables' => array(
'symphony' => 'VuFindLocal\ILS\Driver\Symphony',
)
)
);
Now you can edit module/VuFindLocal/src/VuFindLocal/ILS/Driver/Symphony.php:
<?php
namespace VuFindLocal\ILS\Driver;
class Symphony extends \VuFind\ILS\Driver\Symphony
{
/* … */
}
It’s a bit much at first, but really you’re just editing two files – the configuration that says what to load, and the file that is being loaded.
Changing the workflow for holds is obviously going to be a bigger project – that’s a particularly complicated part of the code – but please let me know if you need help finding anything or if you have suggestions for improvement.
- Demian
From: Chanel Wheeler [mailto:Chanel.Wheeler@yavapai.us]
Sent: Monday, December 03, 2012 4:08 PM
To: Demian Katz; vufind-tech (vufind-tech@lists.sourceforge.net)
Subject: RE: VuFind 2 module creation
I fully appreciate what the ZF2 framework is intended to accomplish at large (and wherever possible I did not hack core code in VuFind 1 – I extended classes, created my own theme, etc.). The problem I’m finding is that instead of putting 98% of my energy into making the customizations I need, I’m putting all that energy and more into trying to understand how in the world to set up all the necessary directories, files, classes that have to be created by default (which confuses me because isn’t one of the points of a framework to take away the busy work?), and VuFind specific class extensions. I don’t care how ZF2 works; I don’t want to be a ZF2 master. I just want to override existing VuFind functionality and add new functionality. </soapbox> (That was partly to relieve some frustration but also to point out where some VuFind developers will be coming from, especially those of us that can only carve out 3 or 4 hours on a good week to work on VuFind customizations.)
Two things that pop to mind immediately that I need to do are to extend the Symphony driver and modify how title/copy holds are implemented in the catalog currently. I expect that the Symphony driver modification will be straightforward once I can make sense of how I get a ZF 2 module constructed within the VuFind context (for example, ZF 2 shows the Controller directory as being at the custom module’s root but the VuFind core components have it underneath “src” -- I don’t know if that means it has to go in src).
I don’t have a clue how to go about modifying the title/copy hold logic – that’s where I would start playing with the core code to see what objects come to play. (I haven’t even begun to try to figure out VuFind core.) But again even if I figure that out, I’m baffled as to the extent and structure of stuff I need to set up in a Module to add my modifications. If I could see a working add-on Module (that does anything) to see how the pieces are put together (independent of ZF2 theory), it would go a long way to getting started on making my own customizations. And if I could just make a Module which injected “Hello World” into the home page, I ‘d be feeling way more confident.
I would compare it to when my father was teaching me how to drive (manual, of course). He went through great explanations of how the gears work and how the clutch releases the gear, blah blah blah. That didn’t help one iota in the actuality of driving a stick. In fact, only after I got the hang of driving a stick did the theory behind the practice start to have any meaning. (I would add that it would have been more helpful if he’d driven the car while explaining the actions he was doing, showed how to recover from a near stall, etc.)
Am I making any sense?
chanel
From: Demian Katz [mailto:demian.katz@villanova.edu]
Sent: Monday, December 03, 2012 1:04 PM
To: Chanel Wheeler; vufind-tech (vufind-tech@lists.sourceforge.net)
Subject: RE: VuFind 2 module creation
VuFind 1 makes it very easy to just dive in and start editing things, but this sometimes makes merging in future upgrades more difficult. The purpose of the VuFind 2 add-on module system is to allow you to isolate your customizations from the VuFind core code, which should make long-term maintenance easier. This adds some up-front overhead in exchange for long-term benefits. It should also be fairly easy once you know where all the pieces go, though obviously there’s a learning curve to reach that point (and it’s a bumpy learning curve, what with the software still being in active development and things periodically shifting around).
However, keep in mind that all of the VuFind 2 module/local directory stuff is optional. I certainly recommend using it, but nothing is stopping you from hacking the core code directly, just like you did with VuFind 1.
In the interest of learning the new code and becoming more comfortable, perhaps a useful approach would be to implement some changes directly to get a feel for how things fit together, and then reimplement them in a local module once you are more comfortable with the system. I suspect that the reason you’re finding this difficult is that you are essentially trying to learn so many independent things at the same time (core VuFind architecture, and the extension system… not to mention all the other things you mention). I’m happy to help break this down into smaller pieces, especially since any conversations we have now can probably help contribute to future documentation that makes this easier for everyone.
Perhaps if you could provide an example of something specific you are interested in doing with VuFind, I can provide some pointers about how to ease into the code. If I can find the time, it might even make sense for me to write a detailed blog post with a working example… but even if I can’t carve that out of my schedule right away, we can start this iteratively.
- Demian
From: Chanel Wheeler [mailto:Chanel.Wheeler@yavapai.us]
Sent: Monday, December 03, 2012 2:47 PM
To: vufind-tech (vufind-tech@lists.sourceforge.net)
Subject: [VuFind-Tech] VuFind 2 module creation
I’m simultaneously trying to absorb object orientation (been functional since the 80s), MVC, ZF2, and VuFind 2. (I hate to say it but VuFind 2 is exponentially more difficult to customize than VuFind 1 was.) I really need a VuFind 2 Hello World module about now. Are there any VuFind 2 add-on modules out there that I can install to see an applied example?
Thanks,
chanel
Chanel Wheeler
Library Network Programmer/Analyst
Yavapai Library Network
1120 Commerce Dr.
Prescott, AZ 86305
Phone: (928) 442-5741
chanel.wheeler@yavapai.us | http://sourceforge.net/p/vufind/mailman/attachment/FAA7DF3F09441B4DA93A34DF74596140086F9E%40VUEX14MB1.vuad.villanova.edu/1/ | CC-MAIN-2014-52 | refinedweb | 1,469 | 57.4 |
# statemachine.tcl --
#     Script to implement a finite state machine
#
# Version information:
#     version 0.1: initial implementation, april 2002

namespace eval ::FiniteState {
    variable statemachine

    namespace export defineMachine evalMachine resetMachine
}

# defineMachine --
#     Define a finite state machine and its transitions
#
# Arguments:
#     machine       Name of the machine (a variable)
#     states        List of states
#
# Result:
#     None
#
# Side effects:
#     The variable "machine" is filled with the definition
#
# Notes:
#     The list of states can only contain the commands initialState
#     and state. No others are allowed (no check though)
#
#     For instance:
#     defineMachine aMachine {
#         initialState 1
#         state 1 {
#             "A" 2 {puts "To state 2"}
#             "B" 3 {puts "To state 3"}
#         }
#         state 2 {
#             "C" 1 {puts "To state 1"}
#             "D" 3 {puts "To state 3"}
#         }
#         state 3 {
#             "E" 3 {exit}
#         }
#     }
#
proc ::FiniteState::defineMachine { machine states } {
    upvar $machine machinedef

    set machinedef {}
    set first_name {}

    set maxarg [llength $states]
    for { set idx 0 } { $idx < $maxarg } { incr idx } {
        set arg [lindex $states $idx]
        switch -- $arg {
            "state" {
                set statename   [lindex $states [incr idx]]
                set transitions [lindex $states [incr idx]]
                lappend machinedef $statename $transitions
            }
            "initialState" {
                set first_state [lindex $states [incr idx]]
            }
            default {
            }
        }
    }

    #
    # First two items are reserved: the initial state and the current
    # state. By storing them in the same list we can pass the
    # information around in any way needed.
    #
    set machinedef [concat $first_state $first_state $machinedef]
}

# evalMachine --
#     Evaluate the input and go to the next state
#
# Arguments:
#     machine       Name of the machine (a variable)
#     input         The input to which to react
#
# Result:
#     None
#
# Side effects:
#     The machine's state is changed and the action belonging to the
#     transition is executed.
#
proc ::FiniteState::evalMachine { machine input } {
    upvar $machine machinedef

    set current_state [lindex $machinedef 1]

    #
    # Look up the state's transitions
    #
    set states      [lrange $machinedef 2 end]
    set idx         [lsearch $states $current_state]
    set transitions [lindex $states [incr idx]]

    set found 0
    foreach {pattern newstate action} $transitions {
        if { $pattern == $input } {
            uplevel $action
            set found 1
            break
        }
    }

    if { $found } {
        set machinedef [lreplace $machinedef 1 1 $newstate]
    } else {
        #error "Input ($input) not found for state $current_state"
        # Or rather: ignore
    }
}

# resetMachine --
#     Reset the machine's state
#
# Arguments:
#     machine       Name of the machine (a variable)
#
# Result:
#     None
#
# Side effects:
#     The machine's state is changed to the initial state.
#
proc ::FiniteState::resetMachine { machine } {
    upvar $machine machinedef

    set initial_state [lindex $machinedef 0]
    set machinedef    [lreplace $machinedef 1 1 $initial_state]
}

#
# Define a simple machine to test the code:
# A furnace that needs to keep the same temperature, so the heating
# may be on or off
#
namespace import ::FiniteState::*

defineMachine heater {
    initialState off
    state off {
        "too_cold" on { set heating $heat_capacity }
    }
    state on {
        "too_hot" off { set heating 0 }
    }
}

set time          0.0
set dt            0.1
set temp_amb     20.0
set temp         $temp_amb
set temp_ideal  200.0
set exch          0.3
set heating       0.0
set heat_capacity 500.0

while { $time < 10.0 } {
    evalMachine heater \
        [expr {$temp <= $temp_ideal ? "too_cold" : "too_hot"}]
    set time [expr {$time + $dt}]
    set temp [expr {$temp + $dt*($exch*($temp_amb - $temp) + $heating)}]
    puts [format "%4.1f %7.3f %5.1f %s" $time $temp $heating [lindex $heater 1]]
}
Theo Verelst (who put 'anonymous' here? I sure didn't.): The formal definition of a finite state machine is that it has a countable number of (thus discrete) states, where it holds that the new state and output are computed from the old state with the inputs, which imo is relatively easy to program in any language:
 set state on
 proc newstate {input} {
     global state
     switch $state \
         off {return off} \
         on  {if [string match $input on] {set state on} {set state off} }
 }
 newstate on
 puts $state
 newstate off
 puts $state

Though of course not at all always easy to analyse.

Lars H: The problem is seldom to encode the transition function, which is really all you did, but to use the finite state machine as a control structure. A goto command is in some sense the optimal (although not necessarily the most convenient) way to encode this, but in some languages that isn't easy to do. Also, your definition of a finite state machine is incorrect; countable allows the smallest infinite sets (such as that of the integers, or that of all finite words on a finite alphabet), but in a finite state machine the set of states must be (surprise?) finite.

Theo Verelst: Which goes to show you're not a computer designer; in hardware, there is no such thing as the goto or whatever it is which executes the what you call state transition, there is a mapping between the current and the next state, and how you achieve the implementation of that mapping could even be a ROM. Or of course an Arithmetic Logical Unit. Integers in Tcl and computers are always bounded by word size, though I'd easily agree that it is interesting mathematically to make them a list and let imagination take over.

A control structure. Hmm. The whole pentium or whatever CPU one uses, even when not a von Neumann, and possibly even when outside the broad Turing machine boundaries, is full of logical circuits which can be interpreted as 'control structures', which form the basis of the functioning of the processor; the software control structure, living at a higher level of aggregation but a very much lower level of actual control choices being made per second, can be made or seen as one wants on the basis of what conditionals or pointer-like structures one may want to apply from the instruction set.
In that respect it is no doubt of interest to include direct referencing (say like in traps) or some mathematical construct to arrive at some piece of code or a single instruction, in an easy, cheap and overseeable way. Or a lookup table. Or list referencing, which I consider pleasing.

The problem or limitation is that the FSM model is probably not the most natural to program in, and in its kind, which is suitable for message processing, it is limited, for instance when faced with non-determinate network problems, which it cannot formally resolve unambiguously or even define generally (like a petri net for instance). It is a design model which allows you to separate state and functions, and the idea is that the state is known, and usually limited, and therefore overseeable. And that every transition is between two well defined states at well defined time instances.

I guess it will be some time before our computers will have states which can cruise surfing Riemann surfaces somehow infinitely accurately.

I think countable means just that; I wouldn't say an integer number approaching infinity is countable, but I guess that is a matter of definition. Possibly even an interesting one.

AM: The mathematical definition of countable encompasses both finite and infinite sets. Discrete sets encompass both countable and uncountable sets; as an example of the latter, consider the set obtained by removing all rational numbers from a finite-length interval of the real line. What you are left with is a discrete, uncountable set.
(I shamefully erased a remark about the Cantor set).

TV: 'Finite length interval' those words at least have a nice ring to it in the context..

AM: Quite intentional :)

TV 2003-11-18: Coming across this page because I do know where I'm going with fsm, process algebra and that in the realm of software (and did so a decade ago at least, and about 3 decades ago about hardware, where the concept is common good for the advanced), I can't agree to not making the remark that the above is like discussing the existence of distinguishable particles when pondering on the existence of a Fock or Hilbert space. Apparently no expert speaking.

DKF: Two points
- goto is merely an artefact of the process of flattening a state graph onto linear memory.
- It should be possible to map between a Turing Machine and any other state machine representation that admits an infinite state space of sufficient complexity. I think the "sufficient complexity" bit refers to the requirement to be able to represent more than a single non-negative number of arbitrary magnitude.
   {red green red green}

For a 4-light variation, where clearly red, green and orange are the possible values of each substate, so that we have a total of 3^4

 % expr int(pow(3,4)+0.5)
 81

possible overall distinguishable (very countable and finite) states. The number of actually occurring states given the transition function, possibly regarding symmetry, is less; or let's say, the domain of the next-state function is more limited than that, because certain states are never reached in a legal and correct traffic light controller. To give this finite state machine, with rigid state transition timing scheme, inputs, one may have a reset button, 'request green' buttons or detection loops, and maybe a police-supervisor-override control input.

Two examples of state machines, one infinite, the other finite.

A computer in basic form as it is visible to a normal programmer is, apart from distinct deviations because of networks and hardware failures or special circuits, made in actual fact to act as a perfect state machine of the traffic light variation: with a limited number of states. The states are countable, and the total number of them is the number formed by the possible combinations of all the bits stored in it, that is in the memory and the processor and add-ons' normal and special registers. The hardware machinery acts as a complex but predictable state transition function on that state. For most current processors, that principle can even be found in actual practice where the chips can be tested with what is called JTAG, where all the bits of its state are serialized and can be read out and written to as a single (long) bitstring.

State changes are applied by the hardware of the computer machinery, which computes the new state of all the memory bits (including the internal, possibly programmer-invisible ones) from the old state of all memory bits at every tick of the hardware clock.

Taking this model as a starting point, and wanting to use state machine related reasoning on the running of computer programs, leads to seeing a jump instruction, which is normally the low level basis for goto, as forcing a particular piece of the overall computer state, namely the program counter, into a new state, different from the slightly incremented normally expected new state.

I recall some pages on state machines:

- Finite State Machine Main definitions.
- Finite State Machine a tcl example indeed
- Flip Flop to see why it is imperative to not tolerate the thinking that connecting functions together always yields a function; in this case, the joining of the two simplest two-variable functions which exist gives a combined behaviour which actually has a state
- mathematical foundation of computers and programs Might be good reading, it shows a page from the actual pentium datasheet
- state of dutch politics machine is a dynamical or expanding state machine with sort of fun, probably useless, application.
UKo 2006-06-14: Just added a little bit of makeup to the output of the example.
AMG: I cleaned up some corrupted character encoding. I hope no one minds that I substituted in straight quotes to avoid future problems. TV accidentally used the encoding wrong back in November 2003, and things went downhill from there. Careful, guys...
The Rappture project includes a FSM program.

The Statenet package creates a finite-state network of linked states, specifically for use as a Hidden Markov Model for speech recognition. [1]

Building a Respirometer using FSM, Tcl and Arduino
See Also
- A tiny state machine
- Implements goto
- Tcllib's grammar_fa
- Moore Type State Models
- Mealy Type State Models | http://wiki.tcl.tk/3283 | CC-MAIN-2017-04 | refinedweb | 1,935 | 51.31 |
In the coming months you will learn to write a lot of code in Python yourself. Many of these tasks are general and would be repetitive if you wanted to implement them entirely in your own code. In this tutorial you’ll learn how to execute code written by other people, so you can focus on the things that are most interesting for you.
Learning goals
In this tutorial you will learn how to:
- Start a Python interpreter
- Run a script
- Import and use external code
Review: Using Python
If you attended FSE 2016 you’ve already written and executed code in Python. This section is a quick refresher of some of the things you learned and is designed to get you up to speed even if you couldn’t come to FSE 2016.
Python is Full Stack Embedded’s go-to language. There are several reasons for this:
- It’s easy to learn
- It can call low-level routines, letting you interact directly with sensors from a script or program
- Even though it’s easy to start with Python, its good design lends itself well to writing more complex programs
- A lot of material on Python is open source so you can learn from it
- The Python community places a lot of emphasis on good design, so if you’re looking for resources you’ll often find stuff that’s built well and is approachable
Python’s focus on good programming is so entrenched in the language’s design that you can find a poem about it in an Easter Egg hidden in the language itself:
>>> import this
But we’re getting ahead of ourselves – understanding this requires understanding a little bit more about using Python. We’ll get to that in a few seconds.
This tutorial will assume that you are working in a Linux system. The Python code you’ll see will work just as well in Windows or Macintosh as it will in Linux, but at Full Stack Embedded we work mainly with Linux. If you’re working with a Windows computer, you can install Python directly into your system or install it into a virtual machine that runs on top of your operating system. Otherwise, consider installing Linux next to Windows or just working on the Raspberry Pi!
Starting the interpreter
In order to execute code in Python – whether you are running a script that you saved to a file or just typing code into the terminal – you’ll need to start the Python interpreter. You can do this by executing the command
python. You’ll see something like this:
pi@raspberrypi:~$ python Python 2.7.12 (default, Jul 01 2016, 15:34:22) [GCC] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>>
As you’ll remember from the previous tutorial, the top line is the command prompt. This is where the Pi is asking us to enter a command, which we did:
python. After hitting Enter, the next two lines tell us what version of Python we’re running and where we can find more information. The next line, which only shows
>>>, is the command prompt from inside Python. Here you can enter any line of Python code and it will be executed immediately (with the exception of nested code, such as for loops, if clauses, etc.). Try entering a few lines of code and seeing what happens. If you’d like more information, see our presentation on using Python in the course materials from FSE 2016.
In order to exit Python, you can either press Ctrl+D or use the function
exit().
Running a script
If you have a script saved as a file that you’d like to execute, just pass it as an argument to python when you’re starting the interpreter from the command line, like this:
pi@raspberrypi:~$ python my_script.py
This would execute the script
my_script.py.
You can name your script anything you want, but if you use the file extension
.py it helps programs and people see at a glance that the file you’re referring to contains Python code.
Using external code
Oftentimes, you’ll use code that you didn’t write yourself. This isn’t cheating – many programs have to do things like open files, establish HTTP connections, etc. There’s no use in everybody implementing this on their own. Python’s philosophy says that you shouldn’t have to worry about these details – it comes “batteries included”, meaning that there are libraries for doing just about every basic task there is. You can read about the packages included in Python’s standard library here. That’s also a good place to look for general details about how Python works. As you can see, there are a lot of packages that can help you do just about anything – work with data stored in different formats, perform computations on times and dates, schedule events, collect data, do math, and much, much more.
The import statement
You can access external code by
importing it. Here are a few examples:
import x from x import y from x import y, z
Let’s take this apart. How does this work?
The
import keyword tells Python to take a package and expose the functions and objects it contains to the current session. They can then be reached by addressing the package’s namespace. This is easier to understand with an example:
>>> # Remember, everything after '#' is a comment. >>> # Python doesn't execute it - it's only informative. >>> >>> # The math package contains several math functions >>> import math >>> # Now we can use the math package to do some computations >>> math.log(9, 3) 2.0 >>> math.sqrt(9) 3.0
This example
imports the
math package into the current session, which contains, among other things, the functions
log and
sqrt. These compute the logarithm and square root of a number, respectively. They are accessed by invoking the
math namespace and using the
. operator to use an object inside of that namespace. Finally, the name of the function we want to use is entered, followed by parentheses, which include any arguments we want to pass to the function. The result the function returns is, as you can see, what would be expected.
If you are using a function from within a package frequently, like for example if you wanted to compute the square root of a lot of numbers in your code, it might be tedious to always have to reference the package it belongs to. You can get around this difficulty by importing the function directly into your namespace, like this:
>>> # This imports the object "sqrt" without importing "math" >>> # No other functions are imported >>> from math import sqrt >>> # You can also import multiple packages simultaneously: >>> from math import sqrt, log >>> sqrt(9) 3.0 >>> log(9, 3) 2.0
You can use as many import statements as you want. Generally, imports occur at the very top of a script, so that it’s easy to see what external dependencies the script has.
Exercise
Now it’s your turn. Write some code that uses functions or classes available from the standard library. Remember, if you’re not sure how to use a given package, read the documentation (each exercise links to the relevant page) and look for examples. The standard library is very well documented and often has examples right next to the objects in question.
- Import the datetime package. Use the date class in order to compute how old you are. Hint: You can use the date.today method in order to easily get the current day. Try finding out how many years and days old you are. Here’s an example (this will be the only solution posted in this tutorial):
>>> import datetime >>> now = datetime.date.today() >>> # Let's see what that looks like >>> now datetime.date(2017, 2, 1) >>> birthday = datetime.date(1939, 10, 27) >>> birthday datetime.date(1939, 10, 27) >>> age = now - birthday >>> age datetime.timedelta(28222) >>> age.days 28222 >>> years = age.days / 365 >>> years 77
- Use the uniform function from the random package to generate 100 random numbers between -10 and 10.
- Use the shutil package to copy a file, rename a file and create a zipfile from a folder.
- Explore the other packages available in the standard library. Get a feel for what’s available and try experimenting with at least one other package available there – perhaps you’re interested in unit tests? Working with temporary files? Saving data between Python sessions? Sending emails? In Python, this is all set up for you ahead of time.
Further reading
If you’re looking for further challenges, try reading about the following more advanced topics:
- How imports work, and how you can import your own code
- Installing new packages with pip
- Writing your own package that others can install and import | https://fullstackembedded.com/tutorials/working-with-external-code-in-python/ | CC-MAIN-2018-47 | refinedweb | 1,475 | 70.43 |
Re: random number problem
From: Paul Hsieh (qed_at_pobox.com)
Date: 07/12/04
- ]
Date: 12 Jul 2004 03:17:15 -0700
"copx" <invalid@invalid.com> wrote:
> <Jens.Toerring@physik.fu-berlin.de> schrieb:
> > copx <invalid@invalid.com> wrote:
> [snip]
> > > random_int = (rand() * max) + min
>
> > > And I think this additional step destroys the uniformity..
> >
> > Why should it do that? It's a simple linear transformation.
> > If you have uniformly distributed points drawn on an elastic
> > band and then stretch it they stay uniformly ditributed.
> > And a simple multiplication does nothing else (and adding an
> > offset doesn't change anything).
>
> Really? Well, I guess you're right because I really don't
> know sh*t about mathematics (I'm just a hobby coder)
> and you sound like you know what you're talking about.
First of all, to have this nice linear property you really do need a
floating point random number generator. The C language doesn't have
one of those. If you do the integer transformation as suggested
(whether or not you use the upper bits) your distribution will be
wrong if it does not divide evenly into (RAND_MAX + 1).
On my C compiler RAND_MAX is defined to the very tiny number: 32767.
Since you say you shy away from math, lets just prove it with a
program:
=============================================================================
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#ifdef USE_MERSENNE_TWISTER
#include <limits.h>
#include "mt.h"
void srand_TEST (unsigned long s) {
init_genrand (s);
}
unsigned long rand_TEST (void) {
return genrand_int32 ();
}
#define RAND_MAX_TEST ULONG_MAX
#else
#define srand_TEST srand
#define rand_TEST rand
#define RAND_MAX_TEST RAND_MAX
#endif
#define AVERAGE_SAMPLES_PER_BUCKET (1000)
int main () {
int i, d = 0, n, minb, maxb;
static unsigned long * bucket;
double scale, a2;
scanf ("%d", &d);
if (d <= 0) {
fprintf (stderr, "Enter a positive integer\n");
exit (-1);
}
scale = d / (RAND_MAX_TEST + 1.0);
bucket = (unsigned long *) malloc (sizeof (unsigned long) * d);
for (i=0; i < d; i++) {
bucket[i] = 0;
}
srand_TEST (time (NULL) + clock () + d);
n = d * AVERAGE_SAMPLES_PER_BUCKET;
for (i=0; i < n; i++) {
int r = rand_TEST () * scale;
bucket[r]++;
}
/* Some statistical tests */
minb = n;
maxb = 0;
a2 = 0;
for (i=0; i < d; i++) {
if (minb > bucket[i]) minb = bucket[i];
if (maxb < bucket[i]) maxb = bucket[i];
a2 += ((double)bucket[i]) * bucket[i];
}
a2 = a2 / d - (AVERAGE_SAMPLES_PER_BUCKET*AVERAGE_SAMPLES_PER_BUCKET);
printf ("min bucket: %d\n", minb);
printf ("max bucket: %d\n", maxb);
printf ("last bucket: %d\n", bucket[d-1]);
printf ("variance: %g (supposed to be about %d)\n", a2,
AVERAGE_SAMPLES_PER_BUCKET);
free (bucket);
return 0;
}
=============================================================================
For most small numbers (less then 1000) you enter, when you run the
program, the important number that it outputs, the variance, is around
1000, which is what theory says it should be for a uniform
distribution. But as soon as you start trying out numbers in excess
of 1000 you will see some disturbing results in the variance (actually
it depends on how close the number is to one substantially divisible
into 32768 -- compare the result of 4096 with 4000). It just
increases and increases, until you approach the value of RAND_MAX
itself, when it finally converges back to about 1000. Numbers beyond
RAND_MAX are clearly useless.
If you want a *serious* random number generator, without thinking too
hard about it, use the "Mersenne Twister" which can be located here:
It passes the test above for much larger integer values, as well as
coming with a real number random generator. If you want to *study*
random number generators, I suggest you do a search in google groups
for "George Marsaglia".
-- Paul Hsieh
- ] | http://coding.derkeiler.com/Archive/General/comp.programming/2004-07/0944.html | crawl-002 | refinedweb | 588 | 59.64 |
I have a home work assignment that I am having trouble
with, if anyone could help me in any way that would be
cool. I am not asking you to code it for me. just asking
for tips. because here is what I have so far. And its all screwed
up. I just need some help.
import javax.swing.JOptionPane;
public class salary {
public static void main ( String args [] ){
int item1,
item2,
item3,
item4,
itemCounter = 0,
total = 0;
string itemNumber;
double sum;
itemNumber = Integer.ParseInt(JOptionPane.showInputDialog("Please
enter the Item number (-1 when done):"));
while ( itemNumber != -1 ) {
total = total + itemNumber;
itemCounter = itemCounter + 1;
itemNumber = Integer.ParseInt(JOptionPane.showInputDialog("Please
enter the Item number (-1 when done):"));
switch ( itemNumber ) {
case item1:
}
}
System.exit (0);
}
}
The problem is this: Develop a Java Application that inputs
one salesperson's items sold for the last week and calculates
and displays that salesperson's earnings. There is no limit
to a the number of items sold by a salesperson.
The items are 1 at $239.99, 2 at $129.75, 3 at 99.95, and 4 at 350.89.
The salesperson gets $200.00 plus 9% of the total earnings.
Forum Rules
Development Centers
-- Android Development Center
-- Cloud Development Project Center
-- HTML5 Development Center
-- Windows Mobile Development Center | http://forums.devx.com/showthread.php?25769-need-your-point-of-view-about-the-best-JDBC-driver&goto=nextoldest | CC-MAIN-2016-26 | refinedweb | 214 | 61.02 |
I John username is john from China cd
I am Frank username is frank from France cd
And I need to get only the name, username and country and create a spreadsheet with these results. Three columns with headers
I have tried many codes. This is my last one, but it is sending just the last name, username and country.
import xlwt result = [] with open("text.txt") as origin_file: for line in origin_file: if 'username' in line: result.append(line.split(' ')[2]) #result.append(int(line)) #print(len(result)) # Display all string elements in list. for st in result: row = st print(row) result2 = [] with open("text.txt") as origin_file: for line in origin_file: if 'username' in line: result2.append(line.split(' ')[5]) for st2 in result2: row2 = st2 print(row2) result3 = [] with open("text.txt") as origin_file: for line in origin_file: if 'username' in line: result3.append(line.split(' ')[7]) for st3 in result3: row3 = st3 + "\n" print(row3) workbook = xlwt.Workbook() worksheet = workbook.add_sheet('Test') style_string = "font: bold on" style = xlwt.easyxf(style_string) worksheet.write(0, 0, 'Name', style=style) worksheet.write(0, 1, 'Username', style=style) worksheet.write(0, 2, 'Country', style=style) worksheet.write(1, 0, row) worksheet.write(1, 1, row2) worksheet.write(1, 2, row3) workbook.save('test.xls') | https://www.daniweb.com/programming/software-development/threads/516336/python-3-7-filter-text-file-lines-and-set-results-into-an-excel-cell | CC-MAIN-2018-43 | refinedweb | 215 | 71.31 |
snsapi 0.7.1
lightweight middleware for multiple social networking services
A cross-platform middleware for Social Networking Services (SNS):
- Unified interfaces and data structures.
- The building block of a user-centric meta social network.
- Near-zero infrastructure requirements.
- Play with your social channels like a hacker.
Lightning Demo 1 – Read Twitter Timeline
Step 1.
Register user and developer on Twitter. Apply for application keys and access tokens.
Step 2.
Save the following codes to mytest.py in the root dir of this project:
from snscli import * nc = new_channel('TwitterStatus') nc['app_key'] = 'Your Consumer Key from dev.twitter.com' nc['app_secret'] = 'Your Consumer Secret from dev.twitter.com' nc['access_key'] = 'Your Access Token from dev.twitter.com' nc['access_secret'] = 'Your Access Token Secret from dev.twitter.com' add_channel(nc) print home_timeline()
Filling your app credentials in the above script: app_key, app_secret, access_key, access_key.
Step 3.
Try it by python mytest.py. You will see your home timeline from twitter.
Remarks
SNSApi unifies the interfaces of all SNS such that retrieving new messages from all other platforms are the same:
- Create a new channel configuration and add_channel it.
- Invoke a single home_timeline() to obtain an aggregated timeline from all channels in a batch.
Lightning Demo 2 – Backup Your Data
Step 1.
Configure a channel.json file with two channels:
- One is called “myrenren” and it interfaces with Renren (an OSN in China).
- The other is called “mysqlite” and it interfaces with a SQLite3 DB.
See one example channel.json configuration.
Step 2.
Save the following codes to backup.py in the root dir of this project:
from snsapi.snspocket import SNSPocket sp = SNSPocket() sp.load_config() sp.auth() ml = sp['myrenren'].home_timeline() for m in ml: sp['mysqlite'].update(m)
Step 3.
Try it by python backup.py. Now your timeline of Renren (latest 20 messages by default) is backed up to the SQLite DB. You can run this script on a regular basis to backup data from all kinds of SNS.
Remarks
SNSApi unifies the data structures of all SNS so as to enable flexible/ programmable inter-operation between those services:
- Backup one message in SQLite is just “update a status” there.
- In order to read those messages, just invoke home_timeline of your SQLite channel.
- The data in SQLite DB are ready for further analysis. For example, I remember someone said that “snsapi is awesome”. Who posted it? I can not recall. Now, enter sqlite and use one line of command to get the answer: select * from message where text like '%snsapi%';.
- You can also use EMail or RSS to distribute your statuses and follow the updates of your friends.
- When there are new platforms, it’s just one configuration away to use them. The intervention from app developer is not needed.
Lightning Demo 3 – An Ad-Hoc DSN
Decentralized Social Network (DSN) is the next paradigm of social networking. Current centralized services have a lot of problems, e.g. Spying for free.
SNSApi is just a middleware to offload your burden in interfacing with different platforms. Now, try to build something without worrying about the interfacing detials.
See RSoc Community Page if you are interested.
Supported Platforms
Enther the interactive shell by python -i snscli.py. Get the supported platforms as follows:
Supported platforms: * Email * FacebookFeed * RSS * RSS2RW * RSSSummary * RenrenBlog * RenrenFeed * RenrenPhoto * RenrenShare * RenrenStatus * RenrenStatusDirect * SQLite * SinaWeiboBase * SinaWeiboStatus * SinaWeiboWapStatus * TencentWeiboStatus * TwitterStatus * ...
More platforms are coming! Please join us!
- Clone and install dependencies via pip. Then you are ready to go. See installation guide if you need more detailed information. See troubleshooting page if you encounter problems in your initial tests.
- We have several demo apps in this repo. You can start with them and see how to use those classes of SNSAPI.
- Users who don’t want to write Python or other non-Python programmers can start with our command-line-interface (snscli.py). The official SNSAPI website should get your started quickly along this line. This CLI can allow interfacing with other languages using STDIN/ STDOUT.
- Users who are not comfortable with CLI can use the graphical-user-interface (snsgui.py). See more user interfaces.
Resources
- SNSApi Website: maintained by @hupili; welcome to report problems to admin, or send pull request to website repo directly.
- SNSApi Website (CN): maintained by @xuanqinanhai.
- SNSApi doc: automatically generated from code using Sphinx; also available as inline doc using help(XXX) from Python shell.
- SNSApi Github Wiki: editable by all GitHub users; welcome to share your experience.
- SNSApi Google Group: The most efficient way to get help, discuss new ideas and organize community activities. Please join us!
License
All materials of this project are released to public domain, except for the followings:
- snsapi/third/*: The third party modules. Please refer to their original LICENSE. We have pointers in snsapi/third/README.md for those third party modules.
Other
- Old version of this readme in Chinese
Build Status
- master:
- dev:
- Author: Pili Hu
- Requires oauth2 (>=1.5.211), httplib2 (>=0.8), python_dateutil (>=2.1), lxml (>=2.3.2), nose (>=1.3.0)
- Provides snsapi
- Categories
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- Intended Audience :: End Users/Desktop
- Intended Audience :: Science/Research
- License :: Public Domain
- Natural Language :: English
- Programming Language :: Python :: 2.7
- Topic :: Communications :: Chat
- Topic :: Internet
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: hupili
- DOAP record: snsapi-0.7.1.xml | https://pypi.python.org/pypi/snsapi/0.7.1 | CC-MAIN-2016-26 | refinedweb | 890 | 52.46 |
.
Content Publishing
Traditionally, Dynamicweb has been about rendering full pages with HTML that has been rendered at the server by pages, modules, items, etc. However, more and more, web sites and applications require other means to interact with data on the server. It’s very common nowadays to have some jQuery code that calls out to the server to get some data without reloading the complete page. Examples of this include product and filter data to perform client side filtering (as demonstrated here) and dynamic login pages.
Dynamicweb 8.4 introduced a few new features that make this easier than ever. You can now return different content types such as JSON or XML with just a few simple steps:
- Create a layout file that is set up to return nothing but the page content (e.g. content coming from paragraphs and modules)
- Create a page that contains the content you want to return (for example, a paragraph that contains a paragraph with the Product Catalog module to return product data as JSON)
- Configure the page to use the new layout file and mark its content type to be something like application/json
With that setup, you can call into this page from client side code such as jQuery or Angular to get the data and display it on the client.
Below you find the steps you need to go through to make this work and expose Ecommerce product data as JSON. You can use the exact same concept to expose other data such as Ecommerce filters, users (useful for a client side maps implementation), and everything else that can be published on a page. It’s assumed you’re running this code on a site that has Ecommerce enabled if you want to follow along.
1. Create a new layout file in your site’s Designs folder and call it json.html. Add the following code to it:
<!--@DwContent(contentmain)--><!--@If(1=2)--><div id="content-main" class="dwcontent" title="Main content" data- </div><!--@EndIf-->
Note: as of Dynamicweb 8.5 you don’t need to use this “If-hack” anymore. You can then use the new unwrap setting to tell Dynamicweb to exclude the wrapping <div> element. This means you can simplify your code to the following:
<div class="dwcontent" id="content-main" title="Main content" data-</div>
For more details, see:
This layout file can be used to expose any type of data and is not limited to this Ecommerce example.
2. Create a new template for an ecommerce product list in the folder Templates/Designs/DesignName/eCom/ProductList. Call it something like product-list-json.html and add code like this:
{ "pageid": "123", //ID of the details page; could come from an Area setting "numberofproducts": "<!--@Ecom:ProductList.PageProdCnt-->", "pagenum": "<!--@Ecom:ProductList.CurrentPage-->", "products": [<!--@LoopStart(Products)--> <!--@If(Products.LoopCounter>1)-->,<!--@EndIf--> { "id": "<!--@Ecom:Product.ID.JSEncoded()-->", "name": "<!--@Ecom:Product.Name.JSEncoded()-->", "price": "<!--@Ecom:Product.Price.JSEncoded()-->", "link": "<!--@Ecom:Product.LinkGroup.Clean.JSEncoded()-->", "description": "<!--@Ecom:Product.ShortDescription.JSEncoded()-->", <!--@If(Ecom:Product.ImageSmall.Default.Clean<>'')-->" image": "/Files/Files/<!--@Ecom:Product.ImageSmall.Default.Clean.Replace("/Files", "").Replace("/Files", "").JSEncoded()-->",<!--@Else-->"image": "",<!--@EndIf()--> "smallImage": "<!--@Ecom:Product.CategoryField.Screens.SmImage.Value.Clean-->", "languageId": "<!--@Ecom:Product.LanguageID-->", "variantId": "<!--@Ecom:Product.VariantID-->" } <!--@LoopEnd(Products)--> ] }
This code returns a list of products as well as some additional data that could be used for client side paging. Obviously, this is just a sample; you can use all the tags and concepts you can use in regular templates. You could also convert this template to Razor and get even more flexibility.
3. Create a new page in Dynamicweb and give it a name like GetProducts. I prefer to put pages like this in a folder called Ajax but this is not required:
4. Open the Properties dialog for this new page and select the JSON layout file from the dropdown list in the Layout section. Under Content type, choose
application/json:
5. Add a new paragraph to the page and as the paragraph template, choose ModuleOnly. If you don’t have this template, create it now (in the Paragraphs folder) and add the following code:
<!--@ParagraphModule-->
This step is important as you only want the output from the module. Any other output, such as additional HTML, will break the JSON format the page will return.
6. Add the Product Catalog module to this paragraph. Select one or more groups of products you want to expose as JSON. For the Product List template,
select the template you created in step 2:
7. At the bottom of this settings screen, make sure you set Show on paragraph to another page in your site that displays products.
This way, the link to the product details page will point to a normal page with the Product Catalog on it, and not to the current, JSON-only page.
8. Save all your changes. If you now request the page, you should see the data being returned as JSON, as shown in the following image:
9. You can now make a request for this page from client side code using whatever AJAX technology you prefer. Here’s an example that uses Angular to retrieve the products and display them as a list. Note that this code features in-line JavaScript for the Angular app and controller. In a real app, you would move this code to a separate JavaScript file.
<script src="//code.angularjs.org/1.2.15/angular.min.js"></script> <script src="//code.angularjs.org/1.2.15/angular-sanitize.min.js"></script> <h2 class="page-title">Products</h2> <hr> <div ng- <div ng- <div ng- <div class="row"> <div class="product-thumbnail span4"> <img ng- </div> <div class="product-summary span7"> <h4 class="item-title"> <a href="{{product.link}}">{{product.name}}</a> </h4> <div ng-</div> <br /> <a href="{{product.link}}">View details</a> </div> </div> </div> </div> </div> <script> var ProductsModule = angular.module('ProductsModule', ['ngSanitize']); ProductsModule.factory('products', function ($http) { var returnValue = {}; returnValue.getProducts = function (callback) { return $http.get('/getproducts').success(callback); }; return returnValue; }); function ProductsController($scope, $http, products) { $scope.products = []; products.getProducts(successProducts); function successProducts(data) { $scope.products = data.products; } } </script>
On a web site based on the Solution Set, this produces the following output:
What's cool about this solution is that all the Dynamicweb functionality continues to work. This means for example you can still use query string parameters against the Ecommerce catalog to filter the list of products, provide sorting and paging information and so on. | https://devierkoeden.com/articles/content-publishing | CC-MAIN-2019-39 | refinedweb | 1,080 | 54.02 |
Hot questions for Using Neural networks in linear algebra
Question:
This is my custom extension of one of Andrew Ng's neural networks from the Deep Learning course, where instead of producing 0 or 1 for binary classification I'm attempting to classify multiple examples.
Both the inputs and outputs are one hot encoded.
With not much training I receive an accuracy of
'train accuracy: 67.51658067499625 %'
How can I classify a single training example instead of classifying all training examples?
I think a bug exists in my implementation: an issue with this network is that the training examples (train_set_x) and output values (train_set_y) both need to have the same dimensions, or an error related to the dimensionality of the matrices is raised. For example, using:
train_set_x = np.array([ [1,1,1,1],[0,1,1,1],[0,0,1,1] ])
train_set_y = np.array([ [1,1,1],[1,1,0],[1,1,1] ])
returns error :
ValueError                                Traceback (most recent call last)
<ipython-input-11-0d356e8d66f3> in <module>()
     27 print(A)
     28
---> 29 np.multiply(train_set_y,A)
     30
     31 def initialize_with_zeros(numberOfTrainingExamples):
ValueError: operands could not be broadcast together with shapes (3,3) (1,4)
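This broadcast failure can be reproduced in isolation: numpy compares shapes from the trailing dimension, and (3,3) against (1,4) fails because 3 and 4 are neither equal nor is one of them 1, whereas (3,3) against (1,3) broadcasts fine. A tiny illustration, separate from the question's code:

```python
import numpy as np

a = np.zeros((3, 3))
b = np.zeros((1, 4))

# Trailing dimensions 3 vs 4 are incompatible, so this raises ValueError.
try:
    np.multiply(a, b)
except ValueError as e:
    print(e)

# Trailing dimensions 3 vs 3 match, so the single row of c is
# stretched over all three rows of a.
c = np.zeros((1, 3))
print(np.multiply(a, c).shape)  # (3, 3)
```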
network code :
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from scipy import ndimage
import pandas as pd
%matplotlib inline

train_set_x = np.array([ [1,1,1,1],[0,1,1,1],[0,0,1,1] ])
train_set_y = np.array([ [1,1,1,0],[1,1,0,0],[1,1,1,1] ])

numberOfFeatures = 4
numberOfTrainingExamples = 3

def sigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s

w = np.zeros((numberOfTrainingExamples , 1))
b = 0

A = sigmoid(np.dot(w.T , train_set_x))
print(A)

np.multiply(train_set_y,A)

def initialize_with_zeros(numberOfTrainingExamples):
    w = np.zeros((numberOfTrainingExamples , 1))
    b = 0
    return w, b

def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T , X) + b)
    cost = -(1/m)*np.sum(np.multiply(Y,np.log(A)) + np.multiply((1-Y),np.log(1-A)), axis=1)
    dw = ( 1 / m ) * np.dot( X, ( A - Y ).T ) # consumes ( A - Y )
    db = ( 1 / m ) * np.sum( A - Y )          # consumes ( A - Y ) again
    # cost = np.squeeze(cost)
    grads = {"dw": dw, "db": db}
    return grads, cost

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = True):
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(w, b, X, Y)
        dw = grads["dw"]
        db = grads["db"]
        w = w - (learning_rate * dw)
        b = b - (learning_rate * db)
        if i % 100 == 0:
            costs.append(cost)
        if print_cost and i % 10000 == 0:
            print(cost)
    params = {"w": w, "b": b}
    grads = {"dw": dw, "db": db}
    return params, grads, costs

def model(X_train, Y_train, num_iterations, learning_rate = 0.5, print_cost = False):
    w, b = initialize_with_zeros(numberOfTrainingExamples)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost = True)
    w = parameters["w"]
    b = parameters["b"]
    Y_prediction_train = sigmoid(np.dot(w.T , X_train) + b)
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))

model(train_set_x, train_set_y, num_iterations = 20000, learning_rate = 0.0001, print_cost = True)
Update: A bug exists in this implementation in that the training example pairs (train_set_x, train_set_y) must have the same dimensions. Can anyone point me in the direction of how the linear algebra should be modified?
Update 2 :
I modified @Paul Panzer's answer so that the learning rate is 0.001 and the train_set_x, train_set_y pairs are unique:
train_set_x = np.array([ [1,1,1,1,1],[0,1,1,1,1],[0,0,1,1,0],[0,0,1,0,1] ])
train_set_y = np.array([ [1,0,0],[0,0,1],[0,1,0],[1,0,1] ])

grads = model(train_set_x, train_set_y, num_iterations = 20000, learning_rate = 0.001, print_cost = True)

# To classify single training example :
print(sigmoid(dw @ [0,0,1,1,0] + db))
This update produces the following output:
-2.09657359028
-3.94918577439
[[ 0.74043089  0.32851512  0.14776077  0.77970162]
 [ 0.04810012  0.08033521  0.72846174  0.1063849 ]
 [ 0.25956911  0.67148488  0.22029838  0.85223923]]
[[1 0 0 1]
 [0 0 1 0]
 [0 1 0 1]]
train accuracy: 79.84462279013312 %
[[ 0.51309252  0.48853845  0.50945862]
 [ 0.5110232   0.48646923  0.50738869]
 [ 0.51354109  0.48898712  0.50990734]]
Should print(sigmoid(dw @ [0,0,1,1,0] + db)) produce a vector that, once rounded, matches the corresponding train_set_y value, [0,1,0]?
Modifying it to produce a column vector (wrapping [0,0,1,1,0] in a numpy array and taking the transpose):
print(sigmoid(dw @ np.array([[0,0,1,1,0]]).T + db))
returns :
array([[ 0.51309252],
       [ 0.48646923],
       [ 0.50990734]])
Again, rounding these values to the nearest whole number produces the vector [1,0,1] when [0,1,0] is expected.
Are these the wrong operations for producing a prediction for a single training example?
Answer:
Your difficulties come from mismatched dimensions, so let's walk through the problem and try and get them straight.
Your network has a number of inputs, the features; let's call their number N_in (numberOfFeatures in your code). And it has a number of outputs which correspond to different classes; let's call their number N_out. Inputs and outputs are connected by the weights w.
Now here is the problem. Connections are all-to-all, so we need a weight for each of the N_out x N_in pairs of outputs and inputs. Therefore, in your code the shape of w must be changed to (N_out, N_in). You probably also want an offset b for each output, so b should be a vector of size (N_out,), or rather (N_out, 1) so it plays well with the 2d terms.
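To make the shapes concrete, here is a minimal sketch (the sizes and variable names are illustrative, not the full modified code): with N_in = 5 features and N_out = 3 classes, one matrix product handles a whole batch or a single example.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

N_in, N_out = 5, 3                      # features, classes (illustrative sizes)
X = np.array([[1,1,1,1,1],
              [0,1,1,1,1],
              [0,0,1,1,0],
              [0,0,1,0,1]]).T           # shape (N_in, m): one column per sample
w = np.zeros((N_out, N_in))             # one row of weights per output class
b = np.zeros((N_out, 1))                # one offset per output class

A = sigmoid(w @ X + b)                  # shape (N_out, m): batch prediction
print(A.shape)                          # (3, 4)

x_single = np.array([[0,0,1,1,0]]).T    # shape (N_in, 1): a single example
a_single = sigmoid(w @ x_single + b)    # shape (N_out, 1): one class score each
print(a_single.shape)                   # (3, 1)
```

With these shapes, predicting for a single example is just the batch forward pass applied to one column.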
I've fixed that in the modified code below and I tried to make it very explicit. I've also thrown a mock data creator into the bargain.
Re the one-hot encoded categorical output: I'm not an expert on neural networks, but I think most people understand it so that classes are mutually exclusive, so each sample in your mock output should have a single one and the rest zeros.
Side note:
At one point a competing answer advised you to get rid of the 1-... terms in the cost function. While that looks like an interesting idea to me, my gut feeling (Edit: now confirmed using a gradient-free minimizer; use activation="hybrid" in the code below. The solver will simply maximize all outputs which are active in at least one training example.) is that it won't work just like that, because the cost will then fail to penalise false positives (see below for a detailed explanation). To make it work you'd have to add some kind of regularization. One method that appears to work is using the softmax instead of the sigmoid. The softmax is to one-hot what the sigmoid is to binary. It makes sure the output is "fuzzy one-hot".
Therefore my recommendation is:
- If you want to stick with sigmoid and not explicitly enforce one-hot predictions, keep the 1-... term.
- If you want to use the shorter cost function, enforce one-hot predictions, for example by using softmax instead of sigmoid.
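A quick way to see the difference between the two activations is to check what a column of class outputs sums to; this small sketch (raw scores made up) illustrates why softmax yields a "fuzzy one-hot" while sigmoid does not:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def softmax(z):
    z = z - z.max(axis=0, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

z = np.array([[2.0], [1.0], [0.5]])        # raw scores for 3 classes, 1 sample

# sigmoid treats each class independently; the column need not sum to 1
print(sigmoid(z).sum(axis=0))

# softmax couples the classes; each column sums to exactly 1
print(softmax(z).sum(axis=0))
```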
I've added an
activation="sigmoid"|"softmax"|"hybrid" parameter to the code that switches between models. I've also made the scipy general purpose minimizer available, which may be useful when the gradient of the cost is not at hand.
Recap on how the cost function works:
The cost is a sum over all classes and all training samples of the term
-y log(y') - (1-y) log(1-y')
where y is the expected response, i.e. the one given by the "y" training sample for the input (the "x" training sample). y' is the prediction, the response the network with its current weights and biases generates. Now, because the expected response is either 0 or 1 the cost for a single category and a single training sample can be written
-log(y')    if y = 1
-log(1-y')  if y = 0
because in the first case (1-y) is zero, so the second term vanishes, and in the second case y is zero, so the first term vanishes. One can now convince oneself that the cost is high if
- the expected response y is 1 and the network prediction y' is close to zero
- the expected response y is 0 and the network prediction y' is close to one
In other words the cost does its job in punishing wrong predictions. Now, if we drop the second term
(1-y) log (1-y') half of this mechanism is gone. If the expected response is 1, a low prediction will still incur a cost, but if the expected response is 0, the cost will be zero, regardless of the prediction, in particular, a high prediction (or false positive) will go unpunished.
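A quick numerical check of that asymmetry, with made-up predictions:

```python
import numpy as np

def term(y, y_pred):
    """Full cross-entropy term for one class and one sample."""
    return -y * np.log(y_pred) - (1 - y) * np.log(1 - y_pred)

def short_term(y, y_pred):
    """The shortened cost with the (1-y) log(1-y') part dropped."""
    return -y * np.log(y_pred)

# expected 1, predicted low (a miss): high cost under both variants
print(term(1, 0.01), short_term(1, 0.01))

# expected 0, predicted high (a false positive):
# the full cost punishes it, the shortened cost is blind to it
print(term(0, 0.99), short_term(0, 0.99))
```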
Now, because the total cost is a sum over all training samples, there are three possibilities.
- all training samples prescribe that the class be zero: then the cost will be completely independent of the predictions for this class and no learning can take place
- some training samples put the class at zero, some at one: then, because "false negatives" or "misses" are still punished but false positives aren't, the net will find the easiest way to minimize the cost, which is to indiscriminately increase the prediction of the class for all samples
- all training samples prescribe that the class be one: essentially the same as in the second scenario will happen, only here it's no problem, because that is the correct behavior
And finally, why does it work if we use
softmax instead of
sigmoid? False positives will still be invisible on their own. But it is easy to see that the sum over all classes of the softmax is one. So I can only increase the prediction for one class if at least one other class is reduced to compensate. In particular, there can be no false positive without a false negative, and the cost will detect the false negative.
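This coupling is easy to demonstrate numerically (values made up):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 1.0, 1.0])
before = softmax(z)         # uniform: every class at 1/3

z[0] += 2.0                 # push class 0 up
after = softmax(z)

print(before)
print(after)                # class 0 rose, so the others necessarily fell
```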
On how to get a binary prediction:
For binary expected responses rounding is indeed the appropriate procedure. For one-hot I'd rather find the largest value, set that to one and all others to zero. I've added a convenience function,
predict, implementing that.
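In isolation, that argmax-to-one-hot step looks like this (a standalone sketch with made-up scores):

```python
import numpy as np

def one_hot_prediction(scores):
    """Set the largest entry of each column to 1 and the rest to 0."""
    hot = np.argmax(scores, axis=0)
    out = np.zeros(scores.shape, dtype=int)
    out[hot, np.arange(scores.shape[1])] = 1
    return out

scores = np.array([[0.2, 0.7],
                   [0.5, 0.1],
                   [0.3, 0.2]])   # 3 classes, 2 samples

# column 0 peaks at class 1, column 1 peaks at class 0
print(one_hot_prediction(scores))
```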
import numpy as np
from scipy import optimize as opt
from collections import namedtuple

# First, a few structures to keep ourselves organized
Problem_Size = namedtuple('Problem_Size', 'Out In Samples')
Data = namedtuple('Data', 'Out In')
Network = namedtuple('Network', 'w b activation cost gradient most_likely')

def get_dims(Out, In, transpose=False):
    """extract dimensions and ensure everything is 2d
    return Data, Dims"""
    # gracefully accept lists etc.
    Out, In = np.asanyarray(Out), np.asanyarray(In)
    if transpose:
        Out, In = Out.T, In.T
    # if it's a single sample make sure it's n x 1
    Out = Out[:, None] if len(Out.shape) == 1 else Out
    In = In[:, None] if len(In.shape) == 1 else In
    Dims = Problem_Size(Out.shape[0], *In.shape)
    if Dims.Samples != Out.shape[1]:
        raise ValueError("number of samples must be the same for Out and In")
    return Data(Out, In), Dims

def sigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s

def sig_cost(Net, data):
    A = process(data.In, Net)
    # binary cross-entropy, summed over classes and averaged over samples
    return -(data.Out * np.log(A) + (1 - data.Out) * np.log(1 - A)).sum(axis=0).mean()

def sig_grad(Net, Dims, data):
    A = process(data.In, Net)
    return dict(dw = (A - data.Out) @ data.In.T / Dims.Samples,
                db = (A - data.Out).mean(axis=1, keepdims=True))

def sig_ml(z):
    return np.round(z).astype(int)

def sof_ml(z):
    hot = np.argmax(z, axis=0)
    z = np.zeros(z.shape, dtype=int)
    z[hot, np.arange(len(hot))] = 1
    return z

def softmax(z):
    z = z - z.max(axis=0, keepdims=True)
    z = np.exp(z)
    return z / z.sum(axis=0, keepdims=True)

def sof_cost(Net, data):
    A = process(data.In, Net)
    logA = np.log(A)
    return -(data.Out * logA).sum(axis=0).mean()

sof_grad = sig_grad

def get_net(Dims, activation='softmax'):
    activation, cost, gradient, ml = {
        'sigmoid': (sigmoid, sig_cost, sig_grad, sig_ml),
        'softmax': (softmax, sof_cost, sof_grad, sof_ml),
        'hybrid': (sigmoid, sof_cost, None, sig_ml)}[activation]
    return Network(w=np.zeros((Dims.Out, Dims.In)), b=np.zeros((Dims.Out, 1)),
                   activation=activation, cost=cost, gradient=gradient,
                   most_likely=ml)

def process(In, Net):
    return Net.activation(Net.w @ In + Net.b)

def propagate(data, Dims, Net):
    return Net.gradient(Net, Dims, data), Net.cost(Net, data)

def optimize_no_grad(Net, Dims, data):
    def f(x):
        Net.w[...] = x[:Net.w.size].reshape(Net.w.shape)
        Net.b[...] = x[Net.w.size:].reshape(Net.b.shape)
        return Net.cost(Net, data)
    x = np.r_[Net.w.ravel(), Net.b.ravel()]
    res = opt.minimize(f, x, options=dict(maxiter=10000)).x
    Net.w[...] = res[:Net.w.size].reshape(Net.w.shape)
    Net.b[...] = res[Net.w.size:].reshape(Net.b.shape)

def optimize(Net, Dims, data, num_iterations, learning_rate, print_cost=True):
    w, b = Net.w, Net.b
    costs = []
    for i in range(num_iterations):
        grads, cost = propagate(data, Dims, Net)
        dw = grads["dw"]
        db = grads["db"]
        w -= learning_rate * dw
        b -= learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
        if print_cost and i % 10000 == 0:
            print(cost)
    return grads, costs

def model(X_train, Y_train, num_iterations, learning_rate=0.5, print_cost=False, activation='sigmoid'):
    data, Dims = get_dims(Y_train, X_train, transpose=True)
    Net = get_net(Dims, activation)
    if Net.gradient is None:
        optimize_no_grad(Net, Dims, data)
    else:
        grads, costs = optimize(Net, Dims, data, num_iterations, learning_rate, print_cost=True)
    Y_prediction_train = process(data.In, Net)
    print(Y_prediction_train)
    print(data.Out)
    print(Y_prediction_train.sum(axis=0))
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - data.Out)) * 100))
    return Net

def predict(In, Net, probability=False):
    In = np.asanyarray(In)
    is1d = In.ndim == 1
    if is1d:
        In = In.reshape(-1, 1)
    Out = process(In, Net)
    if not probability:
        Out = Net.most_likely(Out)
    if is1d:
        Out = Out.reshape(-1)
    return Out

def create_data(Dims):
    Out = np.zeros((Dims.Out, Dims.Samples), dtype=int)
    Out[np.random.randint(0, Dims.Out, (Dims.Samples,)), np.arange(Dims.Samples)] = 1
    In = np.random.randint(0, 2, (Dims.In, Dims.Samples))
    return Data(Out, In)

train_set_x = np.array([
    [1,1,1,1,1], [0,1,1,1,1], [0,0,1,1,0], [0,0,1,0,1]])
train_set_y = np.array([
    [1,0,0], [1,0,0], [0,0,1], [0,0,1]])

Net1 = model(train_set_x, train_set_y, num_iterations=20000, learning_rate=0.001, print_cost=True, activation='sigmoid')
Net2 = model(train_set_x, train_set_y, num_iterations=20000, learning_rate=0.001, print_cost=True, activation='softmax')
Net3 = model(train_set_x, train_set_y, num_iterations=20000, learning_rate=0.001, print_cost=True, activation='hybrid')

Dims = Problem_Size(8, 100, 50)
data = create_data(Dims)
model(data.In.T, data.Out.T, num_iterations=40000, learning_rate=0.001, print_cost=True, activation='softmax')
model(data.In.T, data.Out.T, num_iterations=40000, learning_rate=0.001, print_cost=True, activation='sigmoid')
Question:
I'm new to the Julia programming language, and still learning it by writing code that I've already written in Python (or, at least, tried out in Python).
There is an article which explains how to make a very simple neural network:.
I tried the code in this article out in Python, and it's working fine. However, I haven't used linear algebra operations in Python before (like dot). Now I'm trying to translate this code to Julia, but there are some things I can't understand. Here is my Julia code:
using LinearAlgebra

synaptic_weights = [-0.16595599, 0.44064899, -0.99977125]::Vector{Float64}

sigmoid(x) = 1 / (1 + exp(-x))

sigmoid_derivative(x) = x * (1 -x)

function train(training_set_inputs, training_set_outputs, number_of_training_iterations)
    global synaptic_weights
    for (iteration) in 1:number_of_training_iterations
        output = think(training_set_inputs)
        error = training_set_outputs .- output
        adjustment = dot(transpose(training_set_inputs), error * sigmoid_derivative(output))
        synaptic_weights = synaptic_weights .+ adjustment
    end
end

think(inputs) = sigmoid(dot(inputs, synaptic_weights))

println("Random starting synaptic weights:")
println(synaptic_weights)

training_set_inputs = [0 0 1 ; 1 1 1 ; 1 0 1 ; 0 1 1]::Matrix{Int64}
training_set_outputs = [0, 1, 1, 0]::Vector{Int64}

train(training_set_inputs, training_set_outputs, 10000)

println("New synaptic weights after training:")
println(synaptic_weights)

println("Considering new situation [1, 0, 0] -> ?:")
println(think([1 0 0]))
I've already tried to initialize vectors (like synaptic_weights) as:
synaptic_weights = [-0.16595599 ; 0.44064899 ; -0.99977125]
However, the code is not working. More exactly, there are 3 things that are not clear to me:
- Do I initialize vectors and matrices in the right way (is it equal to what the original author does in Python)?
- In Python, the original author uses + and - operators where one operand is a vector and the other is a scalar. I'm not sure whether this means element-wise addition or subtraction in Python. For example, is (vector+scalar) in Python equal to (vector.+scalar) in Julia?
When I try to run the Julia code above, I get the following error:
ERROR: LoadError: DimensionMismatch("first array has length 12 which does not match the length of the second, 3.")
Stacktrace:
 [1] dot(::Array{Int64,2}, ::Array{Float64,1}) at C:\Users\julia\AppData\Local\Julia-1.0.3\share\julia\stdlib\v1.0\LinearAlgebra\src\generic.jl:702
 [2] think(::Array{Int64,2}) at C:\Users\Viktória\Documents\julia.jl:21
 [3] train(::Array{Int64,2}, ::Array{Int64,1}, ::Int64) at C:\Users\Viktória\Documents\julia.jl:11
 [4] top-level scope at none:0
in expression starting at C:\Users\Viktória\Documents\julia.jl:28
This error comes when the function think(inputs) tries to compute the dot product of inputs and synaptic_weights. In this case, inputs is a 4x3 matrix and synaptic_weights is a 3x1 matrix (vector). I know that they can be multiplied, and the result will be a 4x1 matrix (vector). Doesn't this mean that their dot product can be computed?
Anyway, that dot product can be computed in Python using the numpy package, so I guess there is a certain way that it can also be computed in Julia.
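For reference, the numpy behaviour the question refers to can be checked directly: in numpy, dot on a 2d array is matrix multiplication, which corresponds to * in Julia rather than LinearAlgebra.dot. This sketch reuses the question's numbers:

```python
import numpy as np

inputs = np.array([[0, 0, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [0, 1, 1]])                 # 4 samples x 3 features
weights = np.array([-0.16595599, 0.44064899, -0.99977125])

# (4, 3) "dot" (3,) is a matrix-vector product in numpy,
# yielding one pre-activation per sample
z = inputs.dot(weights)
print(z.shape)   # (4,)
```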
For the dot product, I also tried to make a function that takes a and b as arguments, and tries to compute their dot product: first, computes the product of a and b, then returns the sum of the result. I'm not sure whether it's a good solution, but the Julia code didn't produce the expected result when I used that function, so I removed it.
Can you help me with this code, please?
Answer:
Here is the code adjusted to Julia:
sigmoid(x) = 1 / (1 + exp(-x))

sigmoid_derivative(x) = x * (1 -x)

think(synaptic_weights, inputs) = sigmoid.(inputs * synaptic_weights)

function train!(synaptic_weights, training_set_inputs, training_set_outputs,
                number_of_training_iterations)
    for iteration in 1:number_of_training_iterations
        output = think(synaptic_weights, training_set_inputs)
        error = training_set_outputs .- output
        adjustment = transpose(training_set_inputs) * (error .* sigmoid_derivative.(output))
        synaptic_weights .+= adjustment
    end
end

synaptic_weights = [-0.16595599, 0.44064899, -0.99977125]

println("Random starting synaptic weights:")
println(synaptic_weights)

training_set_inputs = Float64[0 0 1 ; 1 1 1 ; 1 0 1 ; 0 1 1]
training_set_outputs = Float64[0, 1, 1, 0]

train!(synaptic_weights, training_set_inputs, training_set_outputs, 10000)

println("New synaptic weights after training:")
println(synaptic_weights)

println("Considering new situation [1, 0, 0] -> ?:")
println(think(synaptic_weights, Float64[1 0 0]))
There are multiple changes so if some of them are not clear to you please ask and I will expand on them.
The most important things I have changed:
- do not use global variables as they will significantly slow down the performance
- make all arrays have
Float64 element type
- in several places you need to do broadcasting with
. (e.g. the
sigmoid and
sigmoid_derivative functions are defined in such a way that they expect to get a number as an argument, therefore when we call them
. is added after their name to trigger broadcasting)
- use standard matrix multiplication
* instead of
dot
The code runs around 30x faster than the original implementation in Python. I have not squeezed out maximum performance for this code (it currently does a lot of allocations which could be avoided), as that would require rewriting its logic a bit, and I guess you wanted a direct reimplementation.
Question:
I use NumPy to implement neural networks. I prefer NumPy because it is more convenient to prepare data with Python; however, I am concerned that NumPy is not as fast as C++ libraries.
Answer:
NumPy is implemented in C. So most of the time you are just calling C, and for some functionality, optimized Fortran functions or subroutines. Therefore, you will get decent speed with NumPy for many tasks, provided you vectorize your operations. Don't write
for loops over NumPy arrays. Of course, hand-optimized C code can be faster. On the other hand, NumPy contains a lot of already-optimized algorithms that might be faster than not-so-optimal C code written by less experienced C programmers.
You can gradually move from Python to C with Cython, and/or use Numba for JIT compilation to machine or GPU code.
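As a small illustration of the vectorization advice (array size chosen arbitrarily):

```python
import numpy as np

x = np.arange(100_000, dtype=np.float64)

# Don't do this: an explicit Python loop over array elements
def loop_sum_of_squares(a):
    total = 0.0
    for v in a:
        total += v * v
    return total

# Do this: one vectorized expression that runs in compiled C
vectorized = np.dot(x, x)

# both compute the same quantity; the vectorized form is much faster
print(vectorized)
```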
Question:
Suppose that
x is a vector with shape
(a,),
T is a tensor with shape
(b, a, a).
If I want to compute
(x^T)Tx, I can do it using
x.dot(w.dot(x).transpose()).
For example:
x = np.array([1.,2.,3.,4.,5.])
w = np.array([[[1.,2.,3.,4.,5.],
               [1.,2.,3.,4.,5.],
               [1.,2.,3.,4.,5.]],
              [[1.,2.,3.,4.,5.],
               [1.,2.,3.,4.,5.],
               [1.,2.,3.,4.,5.]]])
x.dot(w.dot(x).transpose())
But what if I want to decompose
T into two tensors
P and
Q (a low-rank representation) with shapes
(b,a,r) and
(b,r,a), where
r << a, so that each
a*a matrix in
T is decomposed into
a*r and
r*a factors, which greatly reduces the amount of data? Then how do I do the computation of
(x^T)PQx with numpy?
Answer:
Your example has problems.
x.shape
(5,)

w.shape
(2,3,5)

x.dot(w.dot(x).transpose())
ValueError: matrices are not aligned
But to use your description:
`x` `(a,)`, `T` `(b,a,a)`; `(x^T)Tx`
I like to use
einsum (Einstein summation) when thinking about complex products. I think your
x'Tx is:
np.einsum('i,kij,j->k', x, T, x)
T decomposed into:
P
(b,a,r),
Q
(b,r,a);
np.einsum('kir,krj->kij', P,Q) == T
together the expressions are:
np.einsum('i,kir,krj,j->k', x, P, Q, x)
einsum isn't the best when the dimensions are large, since the combined iteration space of
k,i,j,r may be large. Still it is a useful way to think about the problem.
I think it can be rewritten as 3
dots:
P1 = np.einsum('i,kir->kr', x, P)
Q1 = np.einsum('krj,j->kr', Q, x)
np.einsum('kr,kr->k', P1, Q1)
A sample calculation:
In [629]: a,b,r = 5,3,2

In [630]: x=np.arange(1.,a+1)

In [632]: P=np.arange(b*a*r).reshape(b,a,r)

In [633]: Q=np.arange(b*a*r).reshape(b,r,a)

In [635]: T=np.einsum('kir,krj->kij',P,Q)

In [636]: P
Out[636]:
array([[[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        ...
        [24, 25],
        [26, 27],
        [28, 29]]])

In [637]: Q
Out[637]:
array([[[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9]],
       ...
       [[20, 21, 22, 23, 24],
        [25, 26, 27, 28, 29]]])

In [638]: T
Out[638]:
array([[[   5,    6,    7,    8,    9],
        [  15,   20,   25,   30,   35],
        [  25,   34,   43,   52,   61],
        [  35,   48,   61,   74,   87],
        ...
        [1105, 1154, 1203, 1252, 1301],
        [1195, 1248, 1301, 1354, 1407],
        [1285, 1342, 1399, 1456, 1513]]])

In [639]: T.shape
Out[639]: (3, 5, 5)

In [640]: R1=np.einsum('i,kij,j->k',x,T,x)
...
In [642]: R1
Out[642]: array([  14125.,  108625.,  293125.])

In [643]: R2=np.einsum('i,kir,krj,j->k',x,P,Q,x)

In [644]: R2
Out[644]: array([  14125.,  108625.,  293125.])

In [645]: P1=np.einsum('i,kir->kr',x,P)

In [646]: Q1=np.einsum('krj,j->kr',Q,x)

In [647]: R3=np.einsum('kr,kr->k',P1,Q1)

In [648]: R3
Out[648]: array([  14125.,  108625.,  293125.])

In [649]: P1
Out[649]:
array([[  80.,   95.],
       [ 230.,  245.],
       [ 380.,  395.]])

In [650]: Q1
Out[650]:
array([[  40.,  115.],
       [ 190.,  265.],
       [ 340.,  415.]])
The last set of calculations can be done with
dot
In [656]: np.dot(x,P)
Out[656]:
array([[  80.,   95.],
       [ 230.,  245.],
       [ 380.,  395.]])

In [657]: np.dot(Q,x)
Out[657]:
array([[  40.,  115.],
       [ 190.,  265.],
       [ 340.,  415.]])

In [658]: np.dot(np.dot(x,P),np.dot(Q,x).T)
Out[658]:
array([[  14125.,   40375.,   66625.],
       [  37375.,  108625.,  179875.],
       [  60625.,  176875.,  293125.]])
But we want just the diagonal of the last
dot. A simpler sum of products is better:
In [661]: (P1*Q1).sum(axis=1)
Out[661]: array([  14125.,  108625.,  293125.])
If you're like me, when you first started out with Go, you may have felt a temptation to use a testing framework. I sure did. Beginner's mistake. A huge one, in fact. I started out using gocheck. To be sure, I don't mention any of this to denigrate gocheck's authors; rather, I just point this out as a matter of preference as my familiarity with the ecosystem grew. This is to say, once I became more familiar with Go and its idioms, the need for these withered. The standard library combined with the idioms just delivered.
So why bring this up? Every day, we face the question about how we want to architect our tests and reason with their output. Go, with its anonymous (unnamed) types and struct literals naturally gravitates itself toward simplicity and conciseness. Table-driven tests are a natural consequence—maybe an end of history moment for the ecosystem. Anyway, I digress.
What I want to focus on in this post is not strictly table-driven tests but rather a few details that make tests in Go easy to reason with, pain free, and maintainable. This will be a chatty post, but I want you to bear with me and experience my anecdotes first hand. For background, I encourage you to read Dave Cheney's venerable piece on table-driven tests if you are unfamiliar with table tests. What I will not do in this post (perhaps in another) is discuss how to design testable code or build correct tests for your systems. My motivations are selfish: the easier it is to test and test somewhat correctly, the more it will be done. Our ecosystem will flourish with high-quality, and our peers can hold one another to high standards. Let's get started before I fall off the ivory tower!
Value Comparisons and Reflection
I come to the table with a lot of Java experience under my belt. With that, comes a strong revulsion toward reflection, because it makes reasoning with a system's design inordinately difficult. Worse, when used incorrectly, it introduces terrible faults by bypassing the compiler's type safety guarantees. You could call this baggage, if you will. When I first saw pkg/reflect, I nearly vomited in my mouth of horror should I ever need to use this thing. Thusly I avoided it—much to my detriment, as I will try to convince you.
Custom Assertions / Testing Framework
In the course of writing tests, I went through several iterations. As I mentioned above when starting out with Go, I wrote custom assertion mechanisms using gocheck. That stopped after needing to perform a couple refactorings and discovered that the maintenance cost to keep the framework-derived tests up-to-date exceeded the cost of the base refactor. What followed?
Iteratively Checking Components of Emitted Output
Iteratively checking components of emitted output—and let me tell you how not pretty that is. Suppose we have a function that consumes
[]int and emits
[]int with each slice element's value incremented by one:
// Increment consumes a slice of integers and returns a new slice that contains
// a copy of the original values but each value having been respectively
// incremented by one.
func Increment(in []int) []int {
	if in == nil {
		return nil
	}
	out := make([]int, len(in))
	for i, v := range in {
		out[i] = v + 1
	}
	return out
}

(Source)
How would you test this under the iterative approach? It might look something like this with a table-driven test:
func TestIterative(t *testing.T) {
	for i, test := range []struct {
		in, out []int
	}{
		{},
		{in: []int{1}, out: []int{2}},
		{in: []int{1, 2}, out: []int{2, 3}},
	} {
		out := Increment(test.in)
		if lactual, lexpected := len(out), len(test.out); lactual != lexpected {
			t.Fatalf("%d. got unexpected length %d instead of %d", i, lactual, lexpected)
		}
		for actualIdx, actualVal := range out {
			if expectedVal := test.out[actualIdx]; expectedVal != actualVal {
				t.Fatalf("%d.%d got unexpected value %d instead of %d", i, actualIdx, actualVal, expectedVal)
			}
		}
	}
}

(Source)
What do you notice here aside from how terrible it is? I can't really recall how I thought of this incarnation was ever a good idea, but I suspect it was on a sleepless night. Anyway, if it doesn't appear terrible, let's make a quick inventory of what's deficient:
- That's a lot of boilerplate to write.
- The boilerplate is fragile.
- When the test does fail, it is damn hard to find out exactly which table row failed: All we get are the two index variables at the beginning of the format string. Pray for what happens if the test's table exceeds one page in length! Double pray it isn't late at night, when you'd go cross-eyed staring at the output.
It turns out that there is more wrong with this than what I enumerated. Don't fear. We'll get to that soon. (Our goal is to make the tests so tip-top that Gunnery Sgt. Hartman would smile.)
Hand-Written Equality Test
A later thought was, why not create an equality helper or a custom type for
[]int? Let's try that out and see how well that goes:
// Hold your horses, and ignore sort.IntSlice for a moment.
type IntSlice []int

func (s IntSlice) Equal(o IntSlice) bool {
	if len(s) != len(o) {
		return false
	}
	for i, v := range s {
		if other := o[i]; other != v {
			return false
		}
	}
	return true
}

(Source)
…, which is then used as follows:
func TestIntSliceIterative(t *testing.T) {
	for i, test := range []struct {
		in, out []int
	}{
		{},
		{in: []int{1}, out: []int{2}},
		{in: []int{1, 2}, out: []int{2, 3}},
	} {
		if out, testOut := IntSlice(Increment(test.in)), IntSlice(test.out); !testOut.Equal(out) {
			t.Fatalf("%d. got unexpected value %s instead of %s", i, out, test.out)
		}
	}
}

(Source)
You're probably asking, "what's wrong with that? Looks reasonable." Sure, this works, but …
- We've created and exported a new method receiver for a new type. Was this really necessary for users? Would a reasonable user need to use
IntSlice.Equal, ever? If you look at the generated documentation, it is an extra item in the inventory, thusly creating further cognitive burden if the method is not really useful outside of the test. We can do better than this.
- All of the fragility and error-prone remarks from the previous case still apply. We've just shifted the maintenance to dedicated function to perform the work.
"OK, but couldn't you have just made
IntSlice.Equal unexported with
IntSlice.equal," the peanut gallery protests? Yes, but that still does not represent an optimal solution when compared with what follows.
Using pkg/reflect
So, where am I going with this? pkg/reflect offers this helpful facility known as reflect.DeepEqual. Take a few minutes to read its docstring carefully. I'll wait for you. The takeaway is that overwhelmingly,
reflect.DeepEqual does the right thing for you for most correct public API design styles:
- Primitive types: string, integers, floating point values, and booleans.
- Complex types: maps, slices, and arrays.
- Composite types: structs.
- Pointers: The underlying values of the pointer and struct fields.
- Recursive values:
reflect.DeepEqualmemos what it has visited!
Let's take what we've learned and apply it to the test:
import (
	"reflect"
	"testing"
)

func TestReflect(t *testing.T) {
	for i, test := range []struct {
		in, out []int
	}{
		{},
		{in: []int{1}, out: []int{2}},
		{in: []int{1, 2}, out: []int{2, 3}},
	} {
		if out := Increment(test.in); !reflect.DeepEqual(test.out, out) {
			t.Fatalf("%d. got unexpected value %#v instead of %#v", i, out, test.out)
		}
	}
}

(Source)
Boom! You can delete
type IntSlice and
IntSlice.Equal—provided there is no actual user need for them. Remember: It is usually easier to expand an API later versus taking something away.
One bit of advice: pkg/reflect enables you to apply test-driven development for most APIs immediately when combined with table-driven tests. This is a great opportunity to validate the assumption that
reflect.DeepEqual actually works correctly for the expected-versus-actual test. There is little worse than over-reliance on something that yields a false sense of confidence. The onus is on you to know your tools.
Cases When Not to Use reflect.DeepEqual
Surely there's a downside? Yep, there are; nothing good comes without caveats:
- The type or package already exposes an equality test mechanism. This could be from code that you import and use versus author yourself. A notable example you should be aware of is the goprotobuf library's
proto.Equalfacility to compare two messages. Usually there is a good reason. Defer to the author's judgement.
- The comparison of actual versus expected involves a type or composition that is incompatible with
reflect.DeepEqual. Channels are an obvious example, unless you are expecting a
nilchannel on both sides!
- The type that is being compared has transient state. Transient state may manifest itself in unexported fields. This raises the question of whether the transient state is important. For instance, it could exist for memoization, like of a hash for an immutable type that is lazily generated.
- You are functionally using a
nilslice as an empty slice in your code:
reflect.DeepEqual([]int{}, []int(nil)) == false.
Needless to say, if any of the previous apply, exercise extreme caution.
Ordering of Test Local Values: Actual and Expected
It turns out that we aren't done yet. (If you thought we were, you'd end up as happy as Pvt. Gomer Pile during footlocker inspection). Go has a convention with modern tests to place
actual before
expected. (I highly encourage everybody to visit that link and study and practice its content!) Let's clean up our mess from above:
func TestReflectReordered(t *testing.T) {
	for i, test := range []struct {
		in, out []int
	}{
		{},
		{in: []int{1}, out: []int{2}},
		{in: []int{1, 2}, out: []int{2, 3}},
	} {
		if out := Increment(test.in); !reflect.DeepEqual(out, test.out) {
			t.Fatalf("%d. got unexpected value %s instead of %s", i, out, test.out)
		}
	}
}

(Source)
Why call this to attention? The convention exists; and when it is followed, the faster it is for a non-maintainer to reason with somebody else's code.
Naming Test Local Variables
The Code Review Comments Guide outlines some interesting ideas that ought to be adopted as convention (note that each subsequent bullet point builds on the previous):
Input should be named
in. For instance, if we were testing a string length function, each test table row's signature could be
struct { in string, len int }. Admittedly this is easiest to achieve when the tests' input is unary.
When it is not unary, sometimes grouping inputs in the table definition to a struct named
insuffices. Suppose that we are building a table test for a quadratic function:
struct { in struct { a, b, c, x int }, y int }.
Expected output should be named
want. Our string length example becomes
struct { in string, want int }; whereas, the quadratic becomes
struct { in struct { a, b, c, x int }, want int }.
If the tested component's type signature has a multiple value result, you could take an approach similar to the multiple arity input case and group the output into a struct named
want. A table test row for an implementation of a io.Reader could look like
struct { in []byte, want struct { n int, err error } }.
- The actual value (i.e., the side effect) being tested should be named
got.
What does our example above look like after these rules are applied?
func TestReflectRenamed(t *testing.T) {
	for i, test := range []struct {
		in, want []int
	}{
		{},
		{in: []int{1}, want: []int{2}},
		{in: []int{1, 2}, want: []int{2, 3}},
	} {
		if got := Increment(test.in); !reflect.DeepEqual(got, test.want) {
			t.Fatalf("%d. got unexpected value %s instead of %s", i, got, test.want)
		}
	}
}

(Source)
Formatting Error Messages
How you format your tests' error messages is an important but oft-neglected topic, one that has practical benefit. Why is that?
- Your failure messages indicate where an anomaly has occurred and why. Think about this for a moment. In the table-tests above, where is conveyed in the initial indices in the print format string. Why is conveyed through the remark of actual versus expected.
- Your failure messages have an inherent time-to-decode cost for the user. The longer it takes, the more difficult maintenance, refactorings, and reiterations become. It should take no more than two seconds for a non-maintainer reading the failure message to know on what input the test failed! This needn't mean the external parties understand why.
If your test failure messages do not fulfill the points above, they have failed the human requirements! For sake of demonstration, the test failure messages above in this post intentionally fail these criteria!
Format Expressions
Let's take a quick diversion down format string lane… What happens if your test fails above for input of type
x and the message is emitted to the console? Would you be able to figure out which table test row is responsible for the failure quickly?
The answer to this depends on the behavior of the
type that backs
in,
want, and
got. Does the type formally implement fmt.Stringer? What is the format expression?
If you are lazy and just rely on the default
fmt.Stringer behavior and use
%s, you may get some results that are hard to read. Consider this example below:
package main

import "fmt"

type Record struct {
	GivenNames []string
	FamilyName string
	Age        int
	Quote      string
}

func main() {
	rec := Record{[]string{"Donald", "Henry"}, "Rumsfeld", 82, `….`}
	fmt.Printf("%%s %s\n", rec)
	fmt.Printf("%%v %v\n", rec)
	fmt.Printf("%%#v %#v\n", rec)
}

(Source)
emits
%s {[Donald Henry] Rumsfeld %!s(int=82) ….}
%v {[Donald Henry] Rumsfeld 82 ….}
%#v main.Record{GivenNames:[]string{"Donald", "Henry"}, FamilyName:"Rumsfeld", Age:82, Quote:"…."}
Compare these emissions for a moment.
%s doesn't perform so well. Things can get even worse; suppose
Record implements
fmt.Stringer and the result is too verbose or convoluted to differentiate table rows?
func (r Record) String() string {
	return fmt.Sprintf("[Record: %s %s]", r.GivenNames[0], r.FamilyName)
}
Note how that
fmt.Stringer omits a bunch of fields. Suppose we have multiple table records of Donald Rumsfeld with minute differences. We'd be one very sad Pvt. Gomer Pile if any test failed.
My advice: stick to using
%#v for printing out
in,
want, and
got. You can easily differentiate the output and hopefully find the record in the test table quickly. This also prevents
%s and
fmt.Stringer from tripping you up if the code comes from a third-party! It is worth the effort.
Content of the Test Error Message
If you are still reading, thank you for bearing through this long post. You'll come out ahead writing better tests. We're now on the final topic: how to make the test error messages useful.
For consistency, prefer using a format like this for pure or semi-pure tests that exercise a function:
t.Errorf("YourFunction(%#v) = %#v; want %#v", in, got, want)
The output is concise and obvious. With clear ways of differentiating between test cases, there is no need to keep that stupid index variable in the format string. Let's now take that test we've been polishing and show what the final output should look like:
func TestReflectPristine(t *testing.T) {
	for _, test := range []struct {
		in, want []int
	}{
		{},
		{in: []int{1}, want: []int{2}},
		{in: []int{1, 2}, want: []int{2, 3}},
	} {
		if got := Increment(test.in); !reflect.DeepEqual(got, test.want) {
			t.Fatalf("Increment(%#v) = %#v; want %#v", test.in, got, test.want)
		}
	}
}

(Source)
With luck, your tests will be easy to decipher and you won't find yourself in a world of shit.
In closing, I put together this blog post largely as a form of penance for my mistakes while learning Go, and for the side effect that my learner's bad habits rubbed off on the people I was working with. Patterns are contagious—just like misinformation. Here's to hoping that this makes up for it:
Hail the Right and Just, Cmdr. Pike, By whose work Unmaintainable code is defeated, for practicality Has now overflowed upon all of the world.
In the next posting, I will discuss some more testing patterns and focus less on style. Until then, I bid you good testing!
| http://blog.matttproud.com/2014/09/go-testing-easy-polish-for-world-class.html | CC-MAIN-2017-13 | refinedweb | 2,729 | 66.54 |
Ruby Array Exercises: Check whether a given array of integers contains two 6's next to each other, or there are two 6's separated by one element
Ruby Array: Exercise-39 with Solution
Write a Ruby program to check whether a given array of integers contains two 6's next to each other, or there are two 6's separated by one element, such as {6, 2, 6}.
Ruby Code:
def check_array(nums)
  i = 0
  while i < nums.length
    if nums[i] == 6
      if nums[i + 1] == 6
        return true
      elsif i < nums.length - 2 && nums[i + 2] == 6
        return true
      end
    end
    i = i + 1
  end
  return false
end

print check_array([6, 3, 6, 5]), "\n"
print check_array([6, 6, 5, 9]), "\n"
print check_array([6, 4, 5, 6]), "\n"
Output:
true true false
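As a design note, the same check can be written with Enumerable#each_cons, which yields sliding windows over the array; the helper name below is just for illustration:

```ruby
def check_array_idiomatic(nums)
  # two 6's next to each other ...
  return true if nums.each_cons(2).any? { |a, b| a == 6 && b == 6 }
  # ... or two 6's separated by exactly one element
  nums.each_cons(3).any? { |a, _, c| a == 6 && c == 6 }
end

puts check_array_idiomatic([6, 3, 6, 5])  # true
puts check_array_idiomatic([6, 6, 5, 9])  # true
puts check_array_idiomatic([6, 4, 5, 6])  # false
```

Arrays shorter than the window size simply yield no windows, so the edge cases fall out for free.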
I am trying to make a recurrent scheduling software for patient scheduling. I am a doctor and new to programming.
Each appointment slot is for 15mins. starting from xam to ypm everyday from Monday to Friday.
In the front end we provide the patient details and the no. of days (d) to be booked.
So… if I plan to start the treatment on a particular day, I give the system a preferred date and the system lets me know the available slots for the day and then books the same slot (preferably) for the next d days.
If there are no slots on the available days, it should give back the nearest slot available in the coming days.
How do I approach this?
I am trying to make a recurrent scheduling software for patient scheduling. I am a doctor and new to programming.
Well, this is a pretty open-ended question, but I’ll try and at least get you started down the right path.
First thing you’ll need is a model to store your appointments. You mentioned a front-end that supplies some details, so I’m not sure what other system(s) you have in place, but my first thought would be to create an Appointment model that would store some patient identifier (ID #, whatever you use) and a start and end datetime for each appointment you have booked. You’ll also need to code in the bounds of your schedule - the earliest and latest times you can schedule an appointment for.
Then you need to write some logic that will handle things like identifying all open timeslots for a particular day, and seeking forward in the calendar for the next available timeslot. So if you fed the system 25-May-2020 as the preferred date, you can query the database for any appointments scheduled for that date and display all of the possible times that aren’t already booked. You’ll also need logic to handle the multiple days aspect - if you are trying to book an appointment where d = 5 days, you’ll need to look at all times available today that are also available over the next 4 days.
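In rough Python, that multi-day check might look like this (the slot numbers and the booked_slots stand-in are invented for illustration; a real version would query your appointment table and handle weekends):

```python
from datetime import date, timedelta

ALL_SLOTS = set(range(36, 68))  # e.g. 09:00-17:00 in 15-minute slots

def booked_slots(day):
    """Stand-in for a database query of already-booked slot numbers."""
    demo = {date(2020, 5, 25): {36, 37}, date(2020, 5, 26): {36}}
    return demo.get(day, set())

def slots_free_for_run(start, d):
    """Slot numbers free on `start` and on each of the following d - 1 days."""
    free = set(ALL_SLOTS)
    for offset in range(d):
        free -= booked_slots(start + timedelta(days=offset))
    return sorted(free)

print(slots_free_for_run(date(2020, 5, 25), 2)[:3])  # [38, 39, 40]
```

The intersection is just repeated set subtraction: anything booked on any day of the run drops out of the candidate set.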
Stuff like this seems trivially easy at first glance, but once you get into the weeds you can see how it can get pretty tricky.
I would Google “conference room scheduling algorithms” - there are a lot of examples of algos that deal with comparing calendars and finding free times/conflicts as this is a common question for programming courses/interview prep.You might find this video interesting - it’s a former Google engineer doing a mock interview with a college student and they use a calendar algorithm as the problem to work through. You can see some of the thought process and logic involved in working with blocks of time:
As a new programmer, this is a pretty complex case to muscle through (and, if I’m being honest, you’d probably be better off designing how you want the system to work and passing that off to a freelancer to code it for you) – but I don’t want to discourage you, and kudos to trying to learn how to do this on your own.
Working with dates is not trivial, and depending on how robust you want to make the system, you might also have to work around holidays, vacation days, cancellations, rescheduling, etc…
If you want some more specific advice, post what you’ve tried and where you’re getting stuck. Are you still in the planning phase? Are you halfway through building the thing and are having a specific problem?
Hope that helps a little bit at least…
-Jim
I am trying to make a recurrent scheduling software for patient scheduling. I am a doctor and new to programming.
Hi, great that you are thinking outside of the box and trying to find practical uses of your developing programming skills
In the front end we provide the patient details and the no. of days (d) to be booked.
You probably shouldn’t enter any patient details in anything you build until you’ve spent a lot of time studying how Python or databases work in general, and data security aspects in particular. You might want to collaborate with a data security expert, and you’d probably want to have some kind of legal expert involved. Patient details are highly sensitive information and you must be absolutely certain that you can store them in secure ways that are not vulnerable to data leaks on your/your company’s part, or intrusion (hacking) from external agents. You could end up in a lot of trouble if you aren’t very careful about these things, so please refrain from entering any patient details for now. What you could do is try to exchange patient details for a generic name, like ‘patient 1’, ‘patient 2’, et c. But even then, if you write meeting information in the time slots, like say ‘chemotherapy session 1’, ‘chemotherapy session 2’… then the information might still be considered sensitive. Please try to make the system as non-sensitive as possible. Build something that you would be totally fine with anyone seeing, because you should assume that bad actors could gain access to it if they try hard enough.
Do you know how base Python works? If not, this book is often recommended as a good start:
If you know Python very well, but don’t know Django yet, you probably want to start by going through the official Django tutorial, here’s a video on YouTube that helps you through it (I haven’t watched the whole thing myself, but what I did watch seemed helpful) That is, if Django is the most appropriate tool for what you want to do. It might not be, since you probably shouldn’t share the project over the web (again, unless you use very thorough security measures), and Django is a web framework.
If you don’t know Python and Django well already there’s a risk that your experience trying to build this might be very frustrating. Which would be a shame! If you find it too hard to tackle this now, please give more basic tasks a go and try to work up your experience before giving this a go again.
I see that as I was writing, jimwritescode already gave more practical advice than I could anyway, so I’ll leave the post like this. I hope it doesn’t sound too negative and that it’s helpful. Happy coding
Edit: I should add that there are different opinions on whether or not you should practice base Python before starting to learn Django. If you don’t know either one but want to get going with Django right away, you can try the django girls tutorial. I’m not allowed to post more than two links as a new user, so you can just do an online search for ‘django girls tutorial’.
Thanks @jimwritescode and @datalowe for the inputs.
I do have some basic knowledge of both.
I have made the basic framework of the program too (i think). But I think i cant get my head around the relationships in django even after lots of hours trying to learn it.
I am stuck at making the timeslots and linking the patient data to the same. And yes… i will be anonymizing the names.
I have made 2 classes - Patients and Rooms. ( i have not yet made a Doctor class, but do plan to add it later and they will have a username and password for logging in).
Well… the issue is i work for a public hospital which is quite cash strapped and low in staff. We deal with a huge patient load and the patients experience quite a bit of discomfort due to the inherent inefficiencies of the system including long waiting periods. I just want to try to see I i could help in smoothing up the process and reduce the discomfort to the patients.
This is my models.py. Here ‘Machines’ = the rooms. special techniques are the
from django.db import models
from django.contrib.auth.models import User
from multiselectfield import MultiSelectField
from django.urls import reverse


SPECIAL_TECHNIQUES = (
    ('none', 'NONE'),
    ('abc', 'ABC'),
    ('tbi', 'TBI'),
    ('tset', 'TSET'),
    ('srs_cone', 'SRS cone'),
    ('srs_apex', 'SRS APEX'),
    ('csi', 'CSI'),
)

SITES = (
    ('none', 'NONE'),
    ('head_and_neck', 'Head and neck'),
    ('brain', 'Brain'),
    ('thorax', 'Thorax'),
    ('pelvis', 'Pelvis'),
    ('extremity', 'Extremity'),
)


class Machine(models.Model):
    name = models.CharField(max_length=50)
    location = models.CharField(max_length=50)
    special_techniques = models.CharField(max_length=10, choices=SPECIAL_TECHNIQUES, default="none")
    site = MultiSelectField(choices=SITES, default='none')

    def __str__(self):
        return self.name

    def get_absolute_url(self):
        return reverse("sched_app_1:detail", kwargs={'pk': self.pk})


class Patient(models.Model):
    name = models.CharField(max_length=50)
    Mr_no = models.CharField(max_length=10, unique=True)
    age = models.PositiveIntegerField()
    machine = models.ForeignKey(Machine, related_name='patients', on_delete=models.DO_NOTHING)
    treatment_site = MultiSelectField(choices=SITES, default='none')
    special_techniques = models.CharField(max_length=10, choices=SPECIAL_TECHNIQUES, default="none")
    treatment_start = models.DateField()

    def __str__(self):
        return self.name
this is my views.py
from django.shortcuts import render
from . import forms
from .forms import PatientDataEntryForm
from django.views.generic import (View, TemplateView, ListView, DetailView,
                                  CreateView, UpdateView, DeleteView)
from django.http import HttpResponse
from . import models
from django.urls import reverse_lazy


class IndexView(TemplateView):
    template_name = 'sched_app_1/index.html'


class MachineListView(ListView):
    context_object_name = 'machinelist'
    model = models.Machine
    template_name = 'sched_app_1/machinelist.html'


class MachineDetailView(DetailView):
    context_object_name = 'machine_detail'
    model = models.Machine
    template_name = 'sched_app_1/machinedetails.html'


class MachineCreateView(CreateView):
    fields = ('name', 'location')
    model = models.Machine


class MachineUpdateView(UpdateView):
    fields = ('name', 'location')
    model = models.Machine


class MachineDeleteView(DeleteView):
    model = models.Machine
    success_url = reverse_lazy("sched_app_1:list")


class PatientDetailView(DetailView):
    context_object_name = 'patient_detail'
    model = models.Patient
    template_name = 'sched_app_1/patient_details.html'


def patient_register(request):
    registered = False
    if request.method == 'POST':
        patient_details_form = PatientDataEntryForm(data=request.POST)
        if patient_details_form.is_valid():
            ptdet = patient_details_form.save()
            ptdet.save()
            registered = True
        else:
            print(patient_details_form.errors)
    else:
        patient_details_form = PatientDataEntryForm()
    return render(request, 'sched_app_1/patient_registration.html',
                  {'patient_details_form': patient_details_form})
Forgive me if all this looks too stupid.
As you rightly said. I am finding it difficult to work with calender. I will look into the video you have send.
Thanks again
Hey guys… any help with this?
If I were implementing this, I’d be encoding time slots as integers where the individual time slots were 15-minute intervals starting at midnight. (In other words, 00:00 would = 0, 00:15 = 1, 00:30 = 2, 01:00 = 4, and so on.) (Makes it easy to define different lengths of days - you could have a calendar identifying the first and last slot of each day.)
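That encoding might be sketched like this (constant and function names are illustrative):

```python
from datetime import time

SLOT_MINUTES = 15  # one slot = 15 minutes, slot 0 starts at midnight

def time_to_slot(t: time) -> int:
    """00:00 -> 0, 00:15 -> 1, 01:00 -> 4, and so on."""
    return (t.hour * 60 + t.minute) // SLOT_MINUTES

def slot_to_time(slot: int) -> time:
    minutes = slot * SLOT_MINUTES
    return time(hour=minutes // 60, minute=minutes % 60)

print(time_to_slot(time(0, 30)))   # 2
print(time_to_slot(time(1, 0)))    # 4
print(slot_to_time(34))            # 08:30:00
```

Because slots are plain integers, "first slot of the day" and "last slot of the day" become two small numbers per calendar entry, and range checks are cheap comparisons.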
What I’m not seeing in your description is the concurrency of appointments. Can you only handle 1 person per time slot? Or is it one person per “special technique” per time slot? Or something else? (This all affects the data model needing to be created.)
Assuming for the moment that it’s actually one person per technique per time slot, I would then have an “Appointment” model with columns for the following:
date, time_slot, technique, patient
This will facilitate the various queries needing to be written to identify what slots are open for each technique.
But I’ll also echo what others have said - this type of scheduling application is not trivial, and likely to be frustrating for someone just starting out. It’s almost certainly beyond what you can expect from the type of assistance that can be provided here - these types of applications, in practice, are why the professional consulting firms exist.
Thanks @KenWhitesell.
The rooms can handle only one person per time slot (a slot of 15 mins).
Thanks for the inputs.
Hi there!
That’s great that you are a doctor yourself, so you really understand the pain of end-users haha.
Here is a great tutorial on creating appointment scheduling software, a list of ready-made software, and some other tips. All the info is based on several years of healthcare software development. | https://forum.djangoproject.com/t/making-a-recurrent-appointment-scheduling-software-with-django-and-python/2547 | CC-MAIN-2022-21 | refinedweb | 1,999 | 64.81 |
OpenCV HOG is not detecting people
Hi,
Please have a look at the below code
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <stdio.h>
#include <string.h>
#include <ctype.h>

using namespace cv;
using namespace std;

void help()
{
    printf(
        "\nDemonstrate the use of the HoG descriptor using\n"
        "  HOGDescriptor::hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());\n"
        "Usage:\n"
        "./peopledetect (<image_filename> | <image_list>.txt)\n\n");
}

int main()
{
    Mat img;
    char _filename[1024];

    img = imread("C:/Users/yohan/Desktop/dogwalker.jpg");

    HOGDescriptor hog;
    hog.setSVMDetector(HOGDescriptor::getDefaultPeopleDetector());
    namedWindow("people detector", 1);

    for(;;)
    {
        vector<Rect> found, found_filtered;
        double t = (double)getTickCount();
        // run the detector with default parameters. to get a higher hit-rate
        // (and more false alarms, respectively), decrease the hitThreshold and
        // groupThreshold (set groupThreshold to 0 to turn off the grouping completely).
        hog.detectMultiScale(img, found, 0, Size(8,8), Size(32,32), 1.05, 2);
        t = (double)getTickCount() - t;
        printf("\tdetection time = %gms\n", t*1000./cv::getTickFrequency());

        size_t i, j;
        for( i = 0; i < found.size(); i++ )
        {
            Rect r = found[i];
            for( j = 0; j < found.size(); j++ )
                if( j != i && (r & found[j]) == r)
                    break;
            if( j == found.size() )
                found_filtered.push_back(r);
        }
        for( i = 0; i < found_filtered.size(); i++ )
        {
            Rect r = found_filtered[i];
            // the HOG detector returns slightly larger rectangles than the real objects,
            // so we slightly shrink the rectangles to get a nicer output.
            r.x += cvRound(r.width*0.1);
            r.width = cvRound(r.width*0.8);
            r.y += cvRound(r.height*0.07);
            r.height = cvRound(r.height*0.8);
            rectangle(img, r.tl(), r.br(), cv::Scalar(0,255,0), 3);
        }
        imshow("people detector", img);
        int c = waitKey(0) & 255;
        break;
    }
    return 0;
}
This is the OpenCV code for detecting humans. But I noticed that this do not detect people in most cases, for an example, please have a look at the below image.
If you run the above code on this, this will not detect the person. What is wrong here? Please help.
Actually, your step of found to found_filtered is probably screwing your detections up, not to mention the fact that it is completely useless since you are using the grouping parameter already. Remove that part and it will work just fine in 2.4.9.
@StevenPuttemans: Thanks for the reply and very sorry for the delay of my reply, I do not get email notifications! anyways, I did not understand what you mentioned. Mind providing a code sample?
Email notifications do not work on this forum, even if you ask to send them ... it is a known bug. I cannot provide full code samples. You have a piece of code like this:

for( i = 0; i < found.size(); i++ )
{
    Rect r = found[i];
    for( j = 0; j < found.size(); j++ )
        if( j != i && (r & found[j]) == r)
            break;
    if( j == found.size() )
        found_filtered.push_back(r);
}

But afaik you do not need anything like this to filter out detections. Cut it out and retry your code!
GHC/Coercible
From HaskellWiki
This page contains additional information about Coercible and augments the documentation and the ICFP 2014 paper. This is a feature that first appeared in GHC 7.8.1 and will likely evolve further.
1 The problem
Given a newtype

newtype HTML = MkHTML String

we can convert between HTML and String with

toHTML :: String -> HTML
toHTML s = MkHTML s

fromHTML :: HTML -> String
fromHTML (MkHTML s) = s

and these conversions are free, i.e. they have no run-time cost. But how do we get from [String] to [HTML]? We can write

toHTMLs :: [String] -> [HTML]
toHTMLs = map MkHTML

but the execution of map incurs a cost at run-time.
2 Using Coercible

The solution available since GHC-7.8.1 is to use coerce from the module Data.Coerce:

import Data.Coerce

toHTMLs :: [String] -> [HTML]
toHTMLs = coerce

It works like unsafeCoerce, i.e. it has no run-time cost, but the type checker ensures that it really is safe to use it. If you use it illegally, like in

unsafeCoerce :: a -> b
unsafeCoerce = coerce

the type checker rejects the program.

The type of coerce is

Coercible a b => a -> b

and the instances of the "type class" Coercible (which behaves almost like a regular type class) ensure that Coercible s t is only solvable if s and t have the same run-time representation.
3 Interesting things to note
3.1 Using newtypes internally and externally differently

You can unwrap a newtype using coerce; NT s can be coerced to NT t only if Coercible s t holds:

newtype NT a = MkNT ()
type role NT representational

Nevertheless, as long as the constructor MkNT is in scope, we can do

coerce :: NT Bool -> NT Int

if we wish to do so. (This does not yet work in GHC-7.8, as a bug in GHC was fixed only later.)
3.2 Using datatypes internally and externally differently

A similar goal can be achieved for data types, but at a slight expense of convenience. Say you want to export a data type Set a that your users must not coerce freely. The way to go is to add a role annotation:

module Set (Set) where

data Set a = ....
type role Set nominal

But then not even you can coerce Set s to Set t, and you might have valid reasons to do so!

You can solve this by adding a wrapper newtype:

module Set (Set) where

data InternalSet a = ....
newtype Set a = MkSet (InternalSet a)
type role Set nominal

As long as MkSet is in scope, Coercible (Set s) (Set t) will reduce to Coercible (InternalSet s) (InternalSet t), which – assuming InternalSet's parameter is inferred as representational – reduces to Coercible s t as desired. In external code, where MkSet is not in scope, the constraint Coercible (Set s) (Set t) will not be solvable – the role annotation on Set prevents coercing under it, and the newtype unwrapping cannot be used as MkSet is not in scope.
3.3 Recursive newtypes

Recursive newtypes pose a general challenge for GHC's solver for Coercible constraints. If all newtype constructors are in scope, the solver sometimes tries to normalize away all occurrences of newtypes and then show that these normal forms are Coercible. (If a constructor is not in scope, the solver will not unwrap that newtype.) When a newtype is recursive, this process ends with an error, before GHC loops forever. It is believed that solving Coercible constraints in the presence of recursive newtypes is an undecidable problem (see here), and so this behavior is somewhat reasonable. If you need to use Coercible with recursive newtypes and the solver is failing you, it might be worth thinking a bit about its implementation and working around that. For example, you might need to write a type-restricted synonym for coerce in a module that specifically does not import certain constructors, just to control the solver. Of course, if you think that you have an easy case to solve, feel free to post your example in a bug report.
dataflash.h File Reference

Function library for dataflash AT45DB family.
#include <cfg/compiler.h>
#include <kern/kfile.h>
#include <fs/battfs.h>
Go to the source code of this file.
Detailed Description

Function library for dataflash AT45DB family.

- Version: dataflash.h 2541 2009-04-17 14:00:57Z batt
Definition in file dataflash.h.
Define Documentation
Select bits 2-5 of status register.
These bits indicate device density (see datasheet for more details).
Definition at line 124 of file dataflash.h.
Enumeration Type Documentation
Data flash opcode commands.
Definition at line 129 of file dataflash.h.
Memory definitions.
List of supported memory devices by this drive. Every time we call dataflash_init() we check device id to ensure we choose the right memory configuration. (see dataflash.c for more details).
Definition at line 77 of file dataflash.h.
Function Documentation
Run the dataflash memory test.
Definition at line 171 of file dataflash_hwtest.c.
To test the dataflash driver you could use these functions.

To use these functions make sure to include in your makefile the drv/dataflash_test.c source.

(see drv/dataflash_test.c for more detail)
Definition at line 111 of file dataflash_hwtest.c.
End a dataflash Test.
(Unused)
Definition at line 193 of file dataflash_hwtest.c. | http://doc.bertos.org/2.2/dataflash_8h.html#3cee1707173f93c4e2a23dd95bad4a92 | crawl-003 | refinedweb | 207 | 63.46 |
1. Introduction
In the last article, we saw Logging and getting function call stack information. In this article, we will see "Debugger Attributes", which control debugging behaviour and provide a rich experience to the debugging user. An "Attribute" is a tag defined over elements like classes, functions, assemblies, etc. These tags determine how the elements should behave at run time. Let us see the debugging attributes specified below with a simple example:
- DebuggerBrowsable Attribute
- DebuggerDisplay Attribute
- DebuggerHidden Attribute
2. About the Example
The example used in this article is shown in the screenshot below:
In each button click handler, the debugger breakpoint is invoked dynamically and hence it is advised to launch the application through "VisualStudio" with F5 (i.e.) start the application through the menu option Debug->Start Debugging. Once you download this example application, watch the video which demonstrates the usage of each attribute.
3. The Book Class
First a class called Book is defined to demonstrate the debugging attributes. This class has three private members and constructor to initialize those private members. The code for the class is given below:
using System;
using System.Collections.Generic;
using System.Text;

namespace DebugAttrib
{
    //Sample 01: Default Book Class
    class Book
    {
        private int m_bookid;
        private String m_bookname;
        private String m_author;

        public Book(Int32 idno, String bookname, String Author)
        {
            m_bookid = idno;
            m_bookname = bookname;
            m_author = Author;
        }
    }
}
4. Default Behaviour of Book Class
The “Default Class” button click handler examines the behaviour of the above-specified book class. In the form file, to break into the code dynamically for debugging, the Diagnosis namespace is used. The Code is given below:
//Sample 02: Required NameSpace
using System.Diagnostics;
The “Default Class” button click handler creates the instance of Book in order to examine the default behaviour. The code is below:
//Sample 03: Class Default in Auto Window
private void btnDefault_Click(object sender, EventArgs e)
{
    Debugger.Break();
    Book bk = new Book(110, "C++ for Starters", "Rob Kati");
}
When you debug the class instance "bk", you can see all the member information in the debugger window. Also, the debugger shows the Namespace and Class Name at the instance level. Have a look at the below video to know the default behaviour of the Book Class Instance:
Video 1: Default Book Class Behaviour
5. Book class behaviour with ToString Override
Once you override the "ToString() Method" in the class, the debugger knows how to translate the class in the string format. During the debug time, the ToString implementation provided by us will be invoked by the debugger. Have a look at the below code:
//Sample 04a: Override ToString
public override string ToString()
{
    return string.Format("{0}[{1}] by {2}", m_bookname, m_bookid, m_author);
}
In the above code, a string is formed based on the members present in the class and that string is returned to the caller. In the debugger output, we can now see the meaningful information against the class Instance Name instead of the default "Namespace.ClassName" as it is already displayed under the "Type" column.
6. DebuggerBrowsable Attribute
There are various windows like Auto, Quick watch etc can browse the class information and examine the values in each member. The "DebuggerBrowsable Attribute" controls what information can be browse-able through the debugger. To demonstrate the debugger browsable attribute, the default Book class is shown in the previous section is modified and the modified version of the book class is Book1. Have a look at the Book1 class below:
//Sample 05: Copy Pasted Book class. Look @ 4.x for the Modifications
class Book1
{
    //Sample 5.1: Change all private member as public
    public int m_bookid;
    public String m_bookname;
    public String m_author;

    //Sample 5.2: Add a Private Member and Check Attribute
    //Sample 7.0: Add the Attribute
    [DebuggerBrowsable(DebuggerBrowsableState.Never)]
    private String m_publisher;

    //Sample 5.3: Add new member to constructor
    public Book1(Int32 idno, String bookname, String Author, String publication)
    {
        m_bookid = idno;
        m_bookname = bookname;
        m_author = Author;

        //Sample 5.4: Initialize the Private Member
        m_publisher = publication;
    }

    //Sample 04b: Override ToString
    public override string ToString()
    {
        return string.Format("{0} by {1} from {2}", m_bookname, m_author, m_publisher);
    }
}
In the above code, notice the member “m_publisher” is marked with DebuggerBrowsable attribute. Here, we set "DebuggerBrowsableState.Never" through the DebuggerBrowsable attribute. The never state informs the runtime that the member should not be browse-able while debugging any instance of the class Book1. There are other browse-able states and those are listed below:
- Collapsed - Shows the element as collapsed.
- Never - Never show the element.
- RootHidden - Do not display the root element; display the child elements if the element is a collection or array of items.
In the Main form, “Browsable” button click handler is provided to test the class Book1. The code is given below:
//Sample 06: Debug and check Browsable attribute
private void btnBrowse_Click(object sender, EventArgs e)
{
    Debugger.Break();
    Book1 bk = new Book1(110, "C++ for Starters", "Rob Kati", "KPB");
}
Video 2: Browsable Attribute and effect of ToString Override
7. DebuggerDisplay Attribute
The debugger display attribute can be marked for elements like class, functions, properties etc. Consider the below screen shot:
Here, the "DebuggerDisplay Attribute" is defined for the class member m_author. The attribute is marked as 1 in the above picture and the attribute takes a string. Note that Debugger display string is accessing the class member within curly basis (Marked as 2). At runtime, the value will be substituted from the actual variable m_author (marked as 3). To examine the above attribute, the basic Book class is modified to have Book2 class. In Book2 class, all three data members are marked with DebuggerDisplay attribute. The code is shown below:
//Sample 08: Changed Class Name as Book2
class Book2
{
    //Sample 09: Add Attributes to Private Member
    [DebuggerDisplay("Book ID is {m_bookid}.")]
    private int m_bookid;

    [DebuggerDisplay("Book Title is {m_bookname}")]
    private String m_bookname;

    [DebuggerDisplay("Written by {m_author}")]
    private String m_author;

In the Main form, the Click event for the button "Debugger Display" is handled. The event handler code is listed below:

//Sample 10: Debugger Display Attribute
private void btnDebuggerDisplay_Click(object sender, EventArgs e)
{
    Debugger.Break();
    Book2 bk = new Book2(110, "C++ for Starters", "Rob Kati");
}
Video 3: DebuggerDisplay Attribute in action
8. DebuggerHidden Attribute
When you mark a member function with this attribute the debugger will not stop on that method; that means you can’t keep a breakpoint on that method. To check this attribute, the Book2 class is modified to have a new method called GetPrice which returns different price based on the month-Range in a year. Since the GetPrice is marked with "DebuggerHidden Attribute" we can’t debug this particular function in the Book2 class. The code is given below:
//Sample 11: Get Price of the book
//Sample 13: Add the Hidden Attribute
[DebuggerHidden()]
public double GetPrice()
{
    DateTime today = DateTime.Now;
    if (today.Month > 0 && today.Month < 5)
        return 500.50;
    else if (today.Month > 4 && today.Month < 10)
        return 805.00;
    else
        return 200.20;
}
The “Hide Function” button click event handler creates the Instance of Book2 and makes a call to GetPrice. You can examine the function to see how debugging is prohibited for the GetPrice. Below is code for button click event handler:
//Sample 12: Debugger Hidden Attribute
private void btnDebuggerFnHide_Click(object sender, EventArgs e)
{
    Debugger.Break();
    Book2 bk = new Book2(110, "C++ for Starters", "Rob Kati");
    double price = bk.GetPrice();
    Debugger.Log(0, "Information", string.Format("Price={0}", price));
}
Summary: The global keyword lets a function assign to a global name in our program. A global name lives on its module object; as each module has a single instance, any changes to the module object get reflected everywhere.
Problem: Given a function; how to use a global variable in it?
Example:
def foo():
    # Some syntax to declare the GLOBAL VARIABLE "x"
    x = 25  # Assigning the value to the global variable "x"

def func():
    # Accessing global variable defined in foo()
    y = x + 25
    print("x=", x, "y=", y)

foo()
func()
Expected Output:
x= 25 y= 50
In the above example, we have been given a function named foo() which defines a global variable x, such that the value of x can be used inside another function named func(). Let us have a quick look at how we can use the global keyword to resolve our problem.
Solution: Using The Global Keyword
We can use the global keyword as a prefix to any variable in order to make it global inside a local scope.
def foo(): global x x = 25 def func(): y = x+25 print("x=",x,"y=",y) foo() func()
Output:
x= 25 y= 50
Now that we already know our solution, we must go through some of the basic concepts required for a solid understanding of our solution. So, without further delay let us discuss them one by one.
Variable Scope In Python
The scope of a variable is the region or part of the program where the variable can be accessed directly. Let us discuss the different variable scopes available in Python.
❖ Local Scope
When a variable is created inside a function, it is only available within the scope of that function and ceases to exist if used outside the function. Thus the variable belongs to the local scope of the function.
def foo(): scope = "local" print(scope) foo()
Output:
local
❖ Enclosing Scope
An enclosing scope occurs when we have nested functions. When the variable is in the scope of the outside function, it means that the variable is in the enclosing scope of the function. Therefore, the variable is visible within the scope of the inner and outer functions.
Example:
def foo(): scope = "enclosed" def func(): print(scope) func() foo()
output:
enclosed
In the above example, the variable
scope is inside the enclosing scope of the function
foo() and available inside the
foo() as well as
func() functions.
❖ Global Scope
A global variable is a variable that is declared in a global scope and can be used across the entire program; that means it can be accessed inside as well outside the scope of a function. A global variable is generally declared outside functions, in the main body of the Python code.
Example:
name = "FINXTER" def foo(): print("Name inside foo() is ", name) foo() print("Name outside foo() is :", name)
Output:
Name inside foo() is FINXTER Name outside foo() is : FINXTER
In the above example,
name is a global variable that can be accessed inside as well as outside the scope of the function foo(). Let’s check what happens if you try to change the value of the global variable
name inside the function.
name = "FINXTER" def foo(): name = name + "PYTHON" print("Name inside foo() is ", name) foo()
Output:
Traceback (most recent call last): File "main.py", line 8, in <module> foo() File "main.py", line 4, in foo name = name + "PYTHON" UnboundLocalError: local variable 'name' referenced before assignment
We get an
UnboundLocalError in this case, because Python treats
name as a local variable inside
foo() and
name is not defined inside
foo(). If you want to learn more about the UnboundLocalError and how to resolve it, please read it in our blog tutorial here.
❖ Built-In Scope
The built-in scope is the widest scope available in python and contains keywords, functions, exceptions, and other attributes that are built into Python. Names in the built-in scope are available all across the python program. It is loaded automatically at time of executing a Python program/script.
Example:
x = 25 print(id(x))
Output:
140170668681696
In the above example, we did not import any module to use the functions
print() or
id(). This is because both of them are in the built-in scope.
Having discussed the variable scopes in Python, let us discuss about a couple of very important keywords in Python in relation to the variable scopes.
Use Global Variables Inside A Function Using The global Keyword
We already read about the global scope where we learned that every variable that is declared in the main body and outside any function in the Python code is global by default. However, if we have a situation where we need to declare a global variable inside a function as in the problem statement of this article, then the global keyword comes to our rescue. We use the
global keyword inside a function to make a variable global within the local scope. This means that the global keyword allows us to modify and use a variable outside the scope of the function within which it has been defined.
Now let us have a look at the following program to understand the usage of the
global keyword.
def foo(): global name name = "PYTHON!" print("Name inside foo() is ", name) foo() name = "FINXTER "+name print("Name outside foo() is ", name)
Output:
Name inside foo() is PYTHON! Name outside foo() is FINXTER PYTHON!
In the above example, we have a global variable name declared inside the local scope of function foo(). We can access and modify this variable outside the scope of this variable as seen in the above example.
❃ POINTS TO REMEMBER
- A variable defined outside a function is global by default.
- To define a global variable inside a function we use the
globalkeyword.
- A variable inside a function without the
globalkeyword is local by default.
- Using the
globalkeyword for a variable that is already in the global scope, i.e., outside the function has no effect on the variable.
Global Variables Across Modules
In order to share information across Python modules within the same piece of code, we need to create a special configuration module, known as config or cfg module. We have to import this module into our program. The module is then available as a global name in our program. Because each module has a single instance, any changes to the module object get reflected everywhere.
Let us have a look at the following example to understand how we can share global variables across modules.
Step 1: config.py file is used to store the global variables.
Step 2: modify.py file is used to change global variables.
Step 3: main.py file is used to apply and use the changed values of the global variable.
Output After Executing
main.py
The nonlocal Keyword
The
nonlocal keyword is useful when we have a nested function, i.e., functions having variables in the enclosing scope. In other words if you want to change/modify a variable that is in the scope of the enclosing function (outer function), then you can use the
nonlocal keyword.
Example:
def foo(): a = 25 print("Value of 'a' before calling func = ",a) def func(): nonlocal a a=a+20 print("Value of 'a' inside func = ",a) func() print("Value of 'a' after exiting func = ",a) foo()
Output:
Value of 'a' before calling func = 25 Value of 'a' inside func = 45 Value of 'a' after exiting func = 45
From the above example it is clear that if we change the value of a
nonlocal variable the value of the
local variable also changes.
Conclusion
The key points that we learned in this article are:
- Variable Scopes:
- Local Scope
- Enclosing Scope
- Global Scope
- Built-in Scope
- Important Keywords:
- The
globalKeyword
- How to use a global variable inside a function?
- How to use a global variable across modules?
- The
nonlocalKeyword
I hope you found this article useful and you can easily apply the above concepts in your code.! | https://blog.finxter.com/how-to-use-global-variables-in-a-python-function/ | CC-MAIN-2021-43 | refinedweb | 1,316 | 68.2 |
Ivan Voras wrote: > Laurent Pointal wrote: > >> The ugly part is the 'tmp' name, try to choose a name with a proper >> meaning about what it is really, and it become clean and readable: >> >> filerefs = some.big.structure.or.nested.object.with.file.references >> filerefs.> filerefs.> filerefs.use_quotes = True >> >> Isn't it ? > > Well, no, but this might be due to personal tastes. At least, I don't > think it's better then some other alternatives. For example, in C99 > you can do: > > static struct option_s foo_option = { > . }; > > At least to me, this looks even better than the Pascal's syntax. > > So basically, what you're saying is you don't like namespace prefixes at all? Keeping your namespaces separate will help the clarity of your code immensely, unless, arguably, you're doing heavy numerical processing, goes the argument in a recent thread. Probably what will help you the most is not a fancy trick for getting rid of the namespace, but getting over your aversion to them. That will make you a better programmer, in the long run. Debugging will be easier, people will enjoy working with your code more. Clarity is beautiful. Objectively so. Not just some lame "in the eye of the beholder" kind of beautiful. Cheers, Cliff | http://mail.python.org/pipermail/python-list/2007-September/462541.html | CC-MAIN-2013-20 | refinedweb | 208 | 64.91 |
Suppose we have a list integers representing the data. We have to check whether it is valid UTF-8 encoding or not. One UTF-8 character can be 1 to 4-byte long. There are some properties −
For 1-byte character, the first bit is a 0, followed by its unicode code.
For n-bytes character, the first n-bits are all 1s, the n+1 bit is 0, followed by n-1 bytes with most significant 2 bits being 10.
So the encoding technique is as follows −
So if the input is like [197, 130, 1], this represents octet sequence 11000101 10000010 00000001, so this will return true. It is a valid utf-8 encoding for a 2-bytes character followed by a 1-byte character.
To solve this, we will follow these steps −
cnt := 0
for i in range 0 to size of data array
x := data[i]
if cnt is 0, then
if x/32 = 110, then set cnt as 1
otherwise when x/16 = 1110, then cnt = 2
otherwise when x/8 = 11110, then cnt = 3
otherwise when x/128 is 0, then return false
otherwise when x /64 is not 10, then return false and decrease cnt by 1
return true when cnt is 0
Let us see the following implementation to get better understanding −
#include <bits/stdc++.h> using namespace std; class Solution { public: bool validUtf8(vector<int>& data) { int cnt = 0; for(int i = 0; i <data.size(); i++){ int x = data[i]; if(!cnt){ if((x >> 5) == 0b110){ cnt = 1; } else if((x >> 4) == 0b1110){ cnt = 2; } else if((x >> 3) == 0b11110){ cnt = 3; } else if((x >> 7) != 0) return false; } else { if((x >> 6) != 0b10) return false; cnt--; } } return cnt == 0; } }; main(){ Solution ob; vector<int> v = {197,130,1}; cout << (ob.validUtf8(v)); }
[197,130,1]
1 | https://www.tutorialspoint.com/utf-8-validation-in-cplusplus | CC-MAIN-2021-39 | refinedweb | 307 | 77.16 |
I am new to python and hence falcon. I started developing a RESTful API and falcon so far is great for it. There is some other requirement to serve a static web page and I dont want to write an app or spawn a server for that.
Is it possible from the falcon app to serve the static web page?
First and most important, I have to say that you don't want to do that. What you should do is have a nginx server on top of your Falcon app, and serve any static file directly from nginx (and redirect the API calls to Falcon).
This being said, you can serve static files easily from Falcon. This is the code you are looking for:
import falcon class StaticResource(object): def on_get(self, req, resp): resp.status = falcon.HTTP_200 resp.content_type = 'text/html' with open('index.html', 'r') as f: resp.body = f.read() app = falcon.API() app.add_route('/', StaticResource())
You may want to set the file name as a parameter in the url, and get it in your resource, so your static resource can serve any requested file from a directory. | https://codedump.io/share/H4PnbSBdhWID/1/how-to-serve-a-static-webpage-from-falcon-application | CC-MAIN-2017-51 | refinedweb | 192 | 73.27 |
#include <CGAL/Triangulation_2.h>
CGAL::Triangulation_cw_ccw_2.
Inherited by CGAL::Constrained_triangulation_2< Traits, Tds, Itag >, CGAL::Delaunay_triangulation_2< Traits, Tds >, and CGAL::Regular_triangulation_2< Traits, Tds >.
The class
Triangulation_2 is the basic class designed to handle triangulations of set of points \( { A}\) in the plane.
Such a triangulation has vertices at the points of \( { A}\) and its domain covers the convex hull of \( { A}\). It can be viewed as a planar partition of the plane whose bounded faces are triangular and cover the convex hull of \( { A}\). The single unbounded face of this partition is the complementary of the convex hull of \( {.
Triangulation_2implements this point of view and therefore considers the triangulation of the set of points as a set of triangular, finite and infinite faces. Although it is convenient to draw a triangulation as in figure Triangulation_ref_Fig_infinite_vertex, note that the
infinite vertexhas counterclockwise order. The neighbor of a face are also indexed with 0,1,2 in such a way that the neighbor indexed by \( i\) is opposite to the vertex with the same index.
The triangulation class offers Triangulation_ref_Fig_neighbors).
Traversal of the Triangulation
A triangulation can be seen as a container of faces and vertices. Therefore the triangulation provides several iterators and circulators that allow to traverse it completely or partially.
Traversal of the Convex Hull.
I/O
The I/O operators are defined for
iostream. The format for the iostream is an internal format.
The information output in the
iostream is:
The index of an item (vertex of face) is the rank of this item in the output order. When dimension \( <\) 2, the same information is output for faces of maximal dimension instead of faces.
Implementation \(.
TriangulationTraits_2
TriangulationDataStructure_2
TriangulationDataStructure_2::Face
TriangulationDataStructure_2::Vertex
CGAL::Triangulation_data_structure_2<Vb,Fb>
CGAL::Triangulation_vertex_base_2<Traits>
CGAL::Triangulation_face_base_2<Traits>
specifies which case occurs when locating a point in the triangulation.
CGAL::Triangulation_2<Traits,Tds>
Copy constructor.
All the vertices and faces are duplicated. After the copy,
*this and
tr refer to different triangulations: if
tr is modified,
*this is not.
returns a range of iterators over all faces.
All_faces_iteratoris
Face, the value type of
All_face_handles::iteratoris
Face_handle
returns a range of iterators over all vertices.
All_vertices_iteratoris
Vertex, the value type of
All_vertex_handles::iteratoris
Vertex_handle
Returns \( i+1\) modulo 3.
Compute the circumcenter of the face pointed to by f.
This function is available only if the corresponding function is provided in the geometric traits.
Returns \( i+2\) modulo 3.
returns a range of iterators over finite faces.
Finite_faces_iteratoris
Face, the value type of
Finite_face_handles::iteratoris
Face_handle
returns a range of iterators over finite vertices.
Finite_vertices_iteratoris
Vertex, the value type of
Finite_vertex_handles::iteratoris
Vertex_handle
Exchanges the edge incident to
f and
f->neighbor(i) with the other diagonal of the quadrilateral formed by
f and
f->neighbor(i).
fand
f->neighbor(i)are finite faces and their union form a convex quadrilateral.
Starts at the first edge of
f incident to
v, in counterclockwise order around
v.
fis incident to vertex
v.
Starts at face
f.
fis incident to vertex
v.
Starts at the first vertex of
f adjacent to
v in counterclockwise order around
v.
fis incident to vertex
v.
true if the line segment from
va to
vb includes an edge
e incident to
va.
If
true,
vbr becomes the other vertex of
e,
e is the edge
(fr,i) where
fr is a handle to the face incident to
e and on the right side
e oriented from
va to
vb.
Same as
locate() but uses inexact predicates.
This function returns a handle on a face that is a good approximation of the exact location of
query, while being faster. Note that it may return a handle on a face whose interior does not contain
query. When the triangulation has dimension smaller than 2,
start is returned.
Inserts point
p in the triangulation and returns the corresponding vertex.
If point
p coincides with an already existing vertex, this vertex is returned and the triangulation remains unchanged.
If point
p is on an edge, the two incident faces are split in two.
If point
p is strictly inside a face of the triangulation, the face is split in three.
If point
p is strictly outside the convex hull,
p is linked to all visible points on the convex hull to form the new triangulation.
At last, if
p is outside the affine hull (in case of degenerate 1-dimensional or 0-dimensional triangulations),
p is linked all the other vertices to form a triangulation whose dimension is increased by one. The last argument
f is an indication to the underlying locate algorithm of where to start.
Inserts the points in the range
[first,last) in the given order, and returns the number of inserted points.
inserts the points in the iterator range
[first,last) in the given order, and returns the number of inserted points..
Inserts vertex v in edge
i of
f.
vlies on the edge opposite to the vertex
iof face
f.
Inserts vertex
v in face
f.
Face
f is modified, two new faces are created.
vlies inside face
f.
Inserts a point which is outside the convex hull but in the affine hull.
fpoints to a face which is a proof of the location of
p, see the description of the
locatemethod above.
as above.
In addition, if
true is returned, the edge with vertices
va and
vb is the edge
e=(fr,i) where
fr is a handle to the face incident to
e and on the right side of
e oriented from
va to
vb.
as above.
In addition, if
true is returned, fr is a handle to the face with
v1,
v2 and
v3 as vertices.
Checks the combinatorial validity of the triangulation and also the validity of its geometric embedding.
This method is mainly a debugging help for the users of advanced features.
This function returns a circulator that allows to visit the faces intersected by the line
pq.
If there is no such face the circulator has a singular value.
The starting point of the circulator is the face
f, or the first finite face traversed by
l , if
f is omitted.
The circulator wraps around the infinite vertex: after the last traversed finite face, it steps through the infinite face adjacent to this face then through the infinite face adjacent to the first traversed finite face then through the first finite traversed face again.
pand
qmust be different points.
f != nullptr, it must point to a finite face and the point
pmust be inside or on the boundary of
f.
If the point
query lies inside the convex hull of the points, a face that contains the query in its interior or on its boundary is returned.
If the point
query lies outside the convex hull of the triangulation but in the affine hull, the returned face is an infinite face which is a proof of the point's location:
querylies to the left of the oriented line \( pq\) (the rest of the triangulation lying to the right of this line).
queryand the triangulation lie on either side of
p.
If the point
query lies outside the affine hull, the returned
Face_handle is
nullptr.
The optional
Face_handle argument, if provided, is used as a hint of where the locate process has to start its search.
Same as above.
Additionally, the parameters
lt and
li describe where the query point is located. The variable
lt is set to the locate type of the query. If
lt==VERTEX the variable
li is set to the index of the vertex, and if
lt==EDGE
li is set to the index of the vertex opposite to the edge. Be careful that
li has no meaning when the query type is
FACE,
OUTSIDE_CONVEX_HULL, or
OUTSIDE_AFFINE_HULL or when the triangulation is \( 0\)-dimensional.
returns the same edge seen from the other adjacent face.
returns the index of
f in its \( i^{th}\) neighbor.
returns the vertex of the \( i^{th}\) neighbor of
f that is opposite to
f.
If there is no collision during the move, this function is the same as
move_if_no_collision .
Otherwise,
v is removed and the vertex at point.
Assignment.
All the vertices and faces are duplicated. After the assignment,
*this and
tr refer to different triangulations: if
tr is modified,
*this is not.
Returns on which side of the oriented boundary of
f lies the point
p.
fis finite.
Removes the vertex from the triangulation.
The created hole is re-triangulated.
vmust be finite.
Removes a vertex of degree three.
Two of the incident faces are destroyed, the third one is modified.
vis a finite vertex with degree three.
Returns the line segment formed by the vertices
ccw(i) and
cw(i) of face
f.
ccw(i)and
cw(i)of
fare finite.
Returns the line segment corresponding to edge
e.
eis a finite edge.
Returns the line segment corresponding to edge
*ec.
*ecis a finite edge.
Returns the line segment corresponding to edge
*ei.
*eiis a finite edge.
This is an advanced function.
This method is meant to be used only if you have done a low-level operation on the underlying tds that invalidated the infinite vertex. Sets the infinite vertex..
creates a new vertex
v and use it to star the hole whose boundary is described by the sequence of edges
[edge_begin, edge_end).
Returns a handle to the new vertex.
This function is intended to be used in conjunction with the
find_conflicts() member functions of Delaunay and constrained Delaunay triangulations to perform insertions.
same as above, except that the algorithm first recycles faces in the sequence
[face_begin, face_end) and create new ones only when the sequence is exhausted.
This function is intended to be used in conjunction with the
find_conflicts() member functions of Delaunay and constrained Delaunay triangulations to perform insertions.
The triangulations
tr and
*this are swapped.
This method should be used instead of assignment of copy construtor. if
tr is deleted after that.
Returns the triangle formed by the three vertices of
f.
Inserts the triangulation into the stream
os.
Point.
Reads a triangulation from stream
is and assigns it to the triangulation.
Point. | https://doc.cgal.org/5.0/Triangulation_2/classCGAL_1_1Triangulation__2.html | CC-MAIN-2021-49 | refinedweb | 1,690 | 56.96 |
collective.logbook 0.7
Advanced Persistent Error Log
Introduction
collective.logbook add-on provides advanced persistent error logging for open source Plone CMS.
Installation
These instructions assume that you already have a Plone 3 buildout that’s built and ready to run.
Edit your buildout.cfg file and look for the eggs key in the instance section. Add collective.logbook to that list. Your list will look something like this:
eggs = collective.logbook
Enable via Site Setup > Add ons.
Usage
Settings
See Site Setup for log book settings.
Inspecting errors
After install, go to
The errors are logged there. You can tune some parameters.
Testing
collective.logbook provides a view error-test which Site managers can access to generate a test traceback.
First visit @@error-test and make sure the error appears in @@logbook view.
Note
You might need to turn on both Logbook enabled and Large site in Logbook Site Setup. This may be a bug regarding new Plone versions and production mode.
Web hooks
collective.logbook provides ability to HTTP POST error message to any web service when an error happens in Plone. This behavior is called a web hook.
Use cases
- Showing Plone errors real-time in Skype chat
- Routing errors to different websites and services via Zapier
In Site Setup > Logbook you can enter URLs where HTTP POST will be asynchronously performed on a traceback. HTTP POST payload is an message from Logbook, containing a link for further information.
Note
Currently repeated errros (same traceback signature) are not POST’ed again. You will receive message only once unless until you clear logbook contents in @@logbook management view.
Motivation
For anonymous users Plone generates an Error Page which contains an error number. But what to do with this error number?
You have to log into your plone site, go to the ZMI, check the error_log object and probably construct the url by hand to get the proper error with this error number, like:
If you are lucky, you will find the error. If not, and the number of occured errors exceeded the number of exceptions to keep, or maybe a cronjob restarted your zope instance, then….
Hmm, not really smooth this behaviour.
Wouldn’t it be better to have a nice frontend where you can paste the error number to a field and search for it? Keep all log persistent, also when zope restarts? Keep only unique errors and not thousand times the same Error? Get an email when a new, unique error occured, so you know already what’s going on before your customer mails this error number to you?
If you think that this would be cool, collective.logbook is what you want:)
Under the Hood
No, you won’t get DOOOOMED when you install collective.logbook :)
SiteErrorLog Patch
collective.logbook patches the raising method of Products.SiteErrorLog.SiteErrorLog:
from Products.SiteErrorLog.SiteErrorLog import SiteErrorLog _raising = SiteErrorLog.raising def raising(self, info): enty_url = _raising(self, info) notify(ErrorRaisedEvent(self, enty_url)) return enty_url
The patch fires an ‘ErrorRaisedEvent’ event before it returns the enty_url. The entry url is the link to the standard SiteErrorLog like:
The patch gets _only_ then installed, when you install collective.logbook over the portal_quickinstaller tool and removes the patch, when you uninstall it.
You can also deactivate the patch over the logbook configlet of the plone control panel.
Log Storage
The default storage is an annotation storage on the plone site root:
<!-- default storage adapter --> <adapter for="*" factory=".storage.LogBookStorage" />
The default storage adapter creates 2 PersistentDict objects in your portal. One ‘main’ storage and one ‘index’ storage, which keeps track of referenced errors.
The storage will be fetched via an adapter lookup. So the more specific adapter will win. Maybe an SQL storage with SQLAlchemy would be nice here:)
Notify Event
When a new unique error occurs, an INotifyTraceback event gets fired. An email event handler is already registered with collective.logbook:
<subscriber for=".interfaces.INotifyTraceback" handler=".events.mailHandler" />
This handler will email new tracebacks to the list of email adresses specified in the logbook configlet of the plone control panel.
Properties
collective.logbook installs 2 Properties in your application root:
- logbook_enabled
- logbook_log_mails
These properties take the values you enter in logbook configlet in the plone control panel.
The first one checks if logbook logging is disabled or not when you restart your instance:
def initialize(context): """ Initializer called when used as a Zope 2 product. """ app = context._ProductContext__app enabled = app.getProperty("logbook_enabled", False) if enabled: monkey.install_monkey()
The latter one is used to email new tracebacks to these email addresses.
The properties get uninstalled when you uninstall collective.logbook via the quickinstaller tool.
Unit Tests
The product contains some unit tests.
more to come…
Changelog
0.7 (2014-06-12)
- Fixed tests [ramonski]
- Add Plone4.3-compatibility. [WouterVH]
- Added Web Hook support [miohtama, sevanteri]
0.6 (2011-11-28)
- Log exceptions within exception handler [jfroche]
- Move delete all button, add show all button, show error message if error was not found. [jfroche]
- Add option that disable browsing stored errors. This option become useful if you have a site with many errors. [jfroche]
0.5 (2011-08-16)
- Move storage to OOBTree to avoid that logging error transactions get bigger and bigger. Add upgradehandler accordingly. [gotcha]
- Fix saving configuration. [gotcha]
- Logging initialization at startup time was broken with Zope 2.13 at least. [gotcha]
- Added support for i18n [macagua]
- Added support for Spanisn translation [macagua]
- Move the mail notifier into a view to use a template for better HTML email handling. [rossp]
- Include the REQUEST HTML for more useful debugging and troubleshooting. [rossp]
0.4 (2010-08-02)
- Add “z3c.autoinclude.plugin” entry point, so in Plone 3.3+ you can avoid loading the ZCML file. [WouterVH]
- expose send mail exception to the log message this fixes [naro]
- fixed email notification for plone4 since MailHost.send signature changed (see Upgrade Information) [fRiSi]
- store and show user and date for referenced errors too this fixes [fRiSi]
0.3.1 (2009-03-18)
- the error handler now starts a new transaction before saving it into the logbook [ramonski]
0.3 (2009-03-17)
- 0.2 release was broken, sorry for this re-release [ramonski]
- fixed issues which caused some ugly Database Conflict errors [ramonski]
- removed all Zope2.app() stuff [ramonski]
- mail handler stops when no emails specified [ramonski]
- fixed uninstall method of properties [ramonski]
0.2 (2009-03-17)
- added a configlet for plone control panel [ramonski]
- added a default notify traceback email handler [ramonski]
- added propert install/uninstall methods for the SiteErrorLog patch [ramonski]
- added 2 properties in the application root [ramonski]
0.1 - Unreleased
- Initial release [ramonski]
- Downloads (All Versions):
- 41 downloads in the last day
- 148 downloads in the last week
- 831 downloads in the last month
- Author: Ramon Bartl
- License: GPL
- Categories
- Package Index Owner: ramonski, jfroche, frisi, gotcha, mpeeters
- Package Index Maintainer: jfroche, gotcha
- DOAP record: collective.logbook-0.7.xml | https://pypi.python.org/pypi/collective.logbook/0.7 | CC-MAIN-2015-18 | refinedweb | 1,154 | 58.38 |
Important: Please read the Qt Code of Conduct -
Why does a covered MouseArea still react
I have one Rectangle filled with a MouseArea that gets covered by another Rectangle without a MouseArea, but the covered MouseArea still responds.
Why does a covered element still get events? Is there a way to stop this?
import QtQuick 2.0 Rectangle { width: 300 height: 200 Rectangle { x: 0 y: 0 color: "red" width: 100 height: 100 MouseArea { anchors.fill: parent onPressed: { console.log("Pressed") } onReleased: { console.log("Released") } } } Rectangle { x: 0 y: 0 color: "blue" width: 100 height: 100 } }
- sierdzio Moderators last edited by
Rectangle is transparent to mouse events. It does not receive, handle or stop them.
If you want to stop the covered area from getting mouse events, add another MouseArea in your top rectangle and catch the mouse there. Or disable the bottom mouse area (
enabled: false) then the top rectangle is visible. | https://forum.qt.io/topic/89139/why-does-a-covered-mousearea-still-react | CC-MAIN-2021-21 | refinedweb | 153 | 66.33 |
I am trying to create a script tool to do a search but i am having a little trouble putting it together and i am not sure if i am going about it the right way. I would like to have a script tool that can search for a subdivision name inside arcmap in the toolbox. I have attached my python code i have and working with, one other thing i would like this script to do is use a wildcard. For example say a subdivision that i am searching for attribute is "County Side Estates" but I would like to just enter "County" and select all subs with "County" but i am not sure how to accomplish this. Any help would be gratefully appreciated.
Thanks.
The error i am getting with my current code.
Traceback (most recent call last):
File "C:\GIS\Python Scripts\ZoomToParcelScript6Eb.py", line 9, in <module>
for row in cursor:
RuntimeError: Underlying DBMS error [[Microsoft][SQL Server Native Client 10.0][SQL Server]Incorrect syntax near 'ACRES'.] [vector.DBO.Subdivision_boundaries]
Failed to execute (ZoomToSubScript).
import arcpy mxd = arcpy.mapping.MapDocument('CURRENT') df = arcpy.mapping.ListDataFrames(mxd, "Layers") [0] lyr = arcpy.mapping.ListLayers(mxd, "Subdivision Boundaries")[0] whereClause = arcpy.GetParameterAsText(0) with arcpy.da.SearchCursor(lyr, ("PLAT_NAME"), whereClause) as cursor: for row in cursor: query_str = whereClause #= '{0}'.format(row[1]) arcpy.SelectLayerByAttribute_management(lyr, "NEW_SELECTION", query_str) df.extent = row[0].extent df.scale = df.scale * 5 arcpy.RefreshActiveView() #arcpy.mapping.ExportToPDF(mxd, "C:/test/" + fc + "_" + str(row[1]) + "_" + str(row[2]) + ".pdf")
Here are my script parameters.
Greetings,
It looks like your whereclause is incomplete. Try:
whereClause = "PLAT_NAME LIKE '%" + arcpy.GetParameterAsText(0) + "%'"
Regards,
Tom | https://community.esri.com/thread/159413-create-a-search-with-user-getparameterastext-using-arcpydasearchcursor | CC-MAIN-2020-40 | refinedweb | 279 | 52.76 |
<ScrollRestoration />
This component will emulate the browser's scroll restoration on location changes after loaders have completed to ensure the scroll position is restored to the right spot, even across domains.
You should only render one of these and it's recommended you render it in the root route of your app:
import { ScrollRestoration } from "react-router-dom";

function RootRouteComponent() {
  return (
    <div>
      {/* ... */}
      <ScrollRestoration />
    </div>
  );
}
getKey
Optional prop that defines the key React Router should use to restore scroll positions.
<ScrollRestoration
  getKey={({ location, matches }) => {
    // default behavior
    return location.key;
  }}
/>
By default it uses `location.key`, emulating the browser's default behavior without client side routing. The user can navigate to the same URL multiple times in the stack and each entry gets its own scroll position to restore.
Some apps may want to override this behavior and restore position based on something else. Consider a social app that has four primary pages:
If the user starts at "/home", scrolls down a bit, clicks "messages" in the navigation menu, then clicks "home" in the navigation menu (not the back button!) there will be three entries in the history stack:
1. /home
2. /messages
3. /home
By default, React Router (and the browser) will have two different scroll positions stored for `1` and `3` even though they have the same URL. That means as the user navigates from `2` → `3` the scroll position goes to the top instead of restoring to where it was in `1`.
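The effect of the key choice can be simulated in plain JavaScript. The two Maps below are an illustration only, not React Router's internal storage:

```javascript
// Simulate saving scroll positions under two different key schemes.
const byKey = new Map();   // default: keyed by location.key
const byPath = new Map();  // alternative: keyed by location.pathname

const save = (location, y) => {
  byKey.set(location.key, y);
  byPath.set(location.pathname, y);
};

// The three history entries from the example above.
const e1 = { key: "k1", pathname: "/home" };
const e2 = { key: "k2", pathname: "/messages" };
const e3 = { key: "k3", pathname: "/home" };

save(e1, 850); // user scrolled down on /home
save(e2, 0);   // messages stayed at the top

// Navigating to entry 3: its key is new, so the default scheme finds
// nothing and the page starts at the top...
console.log(byKey.get(e3.key));       // undefined
// ...but keying by pathname restores entry 1's position.
console.log(byPath.get(e3.pathname)); // 850
```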
A solid product decision here is to keep the user's scroll position on the home feed no matter how they got there (back button or new link clicks). For this, you'd want to use the `location.pathname` as the key.
<ScrollRestoration
  getKey={({ location, matches }) => {
    return location.pathname;
  }}
/>
Or you may want to only use the pathname for some paths, and use the normal behavior for everything else:
<ScrollRestoration
  getKey={({ location, matches }) => {
    const paths = ["/home", "/notifications"];
    return paths.includes(location.pathname)
      ? // home and notifications restore by pathname
        location.pathname
      : // everything else by location like the browser
        location.key;
  }}
/>
When navigation creates new scroll keys, the scroll position is reset to the top of the page. You can prevent the "scroll to top" behavior from your links:
<Link preventScrollReset={true} />
See also:
<Link preventScrollReset>
Without a server side rendering framework like Remix, you may experience some scroll flashing on initial page loads. This is because React Router can't restore scroll position until your JS bundles have downloaded, data has loaded, and the full page has rendered (if you're rendering a spinner, the viewport is likely not the size it was when the scroll position was saved).
Server Rendering frameworks can prevent scroll flashing because they can send a fully formed document on the initial load, so scroll can be restored when the page first renders. | https://beta.reactrouter.com/en/dev/components/scroll-restoration | CC-MAIN-2022-40 | refinedweb | 469 | 60.04 |
fam 1.1.0
Simple Python ORM for CouchDB and Sync Gateway

# fam
A simple Python ORM for CouchDB and Couchbase Sync Gateway.
Fam is a work in progress, growing as the needs of my current project dictate. It is not a feature-complete ORM, but it is useful if you, like me, have highly relational data in a couch-type db. I use it to support a web app that sits side by side with a mobile application using Sync Gateway.
Fam adds a type and namespace to each document:
- **type** - A lower case string derived from the class name
- **namespace** - An opaque string to help avoid class name clashes and allow versioning of classes
And uses them to provide:
- A class to bind methods to documents
- Automatic generation of design documents for relationships between classes
- Lookup of related documents
- Validation of documents
- Document life-cycle callbacks for creation, updates and deletion
- Optional cascading deletion through relationships
You can define a fam class like this:
```python
NAMESPACE = "mynamespace"
class Dog(FamObject):
use_rev = True
additional_properties = True
fields = {
"name": StringField(),
"owner_id": ReferenceTo(NAMESPACE, "person", cascade_delete=True)
}
def talk(self):
return "woof"
```
and then use it to create a document like this:
```python
dog = Dog(name="fly")
db.put(dog)
```
## Installation
You can install fam from pypi with `pip install fam`
## Databases
fam has wrappers for connecting to different databases:
- CouchDB
- Couchbase Sync Gateway
These wrapper classes do very little except remember the location of the database and send requests, relying on the python requests library to provide connection pooling.
To use fam you have to first create a class mapper passing in your classes eg:
```python
from fam.mapper import ClassMapper
mapper = ClassMapper([Dog, Cat, Person])
```
and then create a db wrapper using the mapper, the address of the database and the name of the database/bucket
```python
db = CouchDBWrapper(mapper, database_url, database_name)
```
This means that documents accessed though the db will be associated with their relative classes.
You can then write or update the relational design documents in the database from the classes in the mapper like this:
```python
db.update_designs()
```
An instance of a database wrapper provides these methods for adding and removing fam objects from databases
- **db.put(an_object)** - Puts this object into the database
- **db.get(key)** - Gets the object with this key from the database
- **db.delete(an_object)** - Removes this object from the database
- **db.delete_key(key)** - Removes the object with this key from the database
## Classes
Fam classes are defined as inheriting from fam.blud.FamObject like this:
```python
class Cat(FamObject):
use_rev = False
additional_properties = False
fields = {
"name": StringField(),
"legs": NumberField(),
"owner_id": ReferenceTo(NAMESPACE, "person")
}
```
With three class attributes
- **use_rev** - A boolean, True by default, which if true uses the default rev/cas collision protection of Couch DBs but if false always forces a document update as if this mechanism didn't exist
- **additional_properties** - A boolean, false by default, which if true lets you add arbitrary additional top level attributes to an object and if false will throw an exception when you try.
- **fields** - A dict of named fields that map to the top level attributes of the underlying json documents. See below for use.
FamObject also provides six callbacks that occur as documents are saved and deleted
- **pre_save_new_cb(self)**
- **post_save_new_cb(self)**
- **pre_save_update_cb(self, old_properties)**
- **post_save_update_cb(self)**
- **pre_delete_cb(self)**
- **post_delete_cb(self)**
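The life cycle can be illustrated with a self-contained stub. This toy base class only mimics the hook ordering; it is not fam's actual implementation:

```python
# Toy stand-in for FamObject showing when the hooks fire; fam's real
# base class does considerably more (types, namespaces, validation).
class StubFamObject(object):
    def pre_save_new_cb(self):
        pass

    def post_save_new_cb(self):
        pass

    def save_new(self, db):
        self.pre_save_new_cb()         # runs before the document is written
        db[self.key] = dict(self.properties)
        self.post_save_new_cb()        # runs after the write succeeds


class Dog(StubFamObject):
    def __init__(self, key, name):
        self.key = key
        self.properties = {"name": name}

    def pre_save_new_cb(self):
        # e.g. stamp a field just before the first save
        self.properties["created"] = True


db = {}
Dog("dog_1", "fly").save_new(db)
print(db["dog_1"])  # {'name': 'fly', 'created': True}
```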
## Fields
There are several types of field defined in fam.blud that map to json types
- **BoolField**
- **NumberField**
- **StringField**
- **ListField**
- **DictField**
When defining a fam class you instantiate each of fields for the class and give it a name eg `"address": StringField()`
### ObjectField Fields
An ObjectField is an instance of another python object. The class of the object must be provided when defining the field. The class has to provide an instance method `to_json` and a class method `from_json` so fam can serialise and deserialise it successfully.
This is an example of a representation of a duration of time:
```python
"duration": ObjectField(cls=TimeDuration)
...
class TimeDuration(object):
def __init__(self, nom=0, denom=0, count=0):
self.nom = nom
self.denom = denom
self.count = count
def to_json(self):
return {
"nom": self.nom,
"denom": self.denom,
"count": self.count,
}
@classmethod
def from_json(cls, as_json):
return cls(**as_json)
...
```
### ReferenceTo Fields
ReferenceTo is really just a string field that is the key of another document. ReferenceTo fields are defined with the namespace and name of the type of the referenced document.
```python
"owner_id": ReferenceTo(NAMESPACE, "person")
```
The name should always end with `_id`; this indicates that it is a reference, and it also supports fam's lookup of related objects. This lets you access related documents directly: for example, `dog.owner_id` will return the key of the owner document, but `dog.owner` will return an instance of the Owner class for that document.
### ReferenceFrom Fields
ReferenceFrom fields are quite different: they have no representation within the json document. Instead they use the automatically created design documents to find a collection of documents with the associated ReferenceTo field, so ReferenceFrom fields only work with an existing ReferenceTo field. They are defined with the namespace and the type that the reference is from, and the name of the ReferenceTo field in that type.
```python
"dogs": ReferenceFrom(NAMESPACE, "dog", "owner_id")
```
This gives a way to model one-to-one and one-to-many relationships. In practice I find I tend to model immutable one-to-many relationships internally as lists of keys within documents and mutable ones with fam view lookups. I also create mutable one-to-one and many-to-many relationships with small join documents with compound keys, and I write extra views by hand for more complex indexing.
## Field Options
There are five optional arguments when creating a field:
- **required** - A boolean, false by default that asserts that this field must be present.
- **immutable** - A boolean, false by default asserts that you cannot change the value of ths field once it has been set.
- **default** - A default value for this field that will be returned on read if this field is absent from the underlying json. None by default.
- **cascade_delete** - Only applies to ReferenceTo and ReferenceFrom fields. A boolean, false by default, which if true will delete the object the reference points to when this object is deleted.
- **unique** - The thing about uniqueness in a distributed data set is that it cannot be guaranteed, so this assertion is weaker than you would get in a monolithic dataset. This said, it is still sometimes useful. It is a boolean, false by default, which if true will raise an exception when you try to add a document with a non-unique field to a database using fam. It also helps provide the classmethod `get_unique_instance`, which can be used like this:
```python
Cat.get_unique_instance(db, "email", "tiddles@glowinthedark.co.uk")
```
## Validation
Fam now uses JSON Schema to validate documents. Fam's mapper generates schemata dynamically from the class definitions and uses them to validate documents.
You can get the mapper to write out its internal schemata by calling `mapper.validator.write_out_schemata(directory)`.
## Writing Views
Couch views are fragments of JavaScript stored in design documents in the database. Fam automatically generates some design documents for you (those describing relationships between documents), but the chances are you will want to create some other views to help search for documents. Fam takes a minimalist approach to design documents. It provides two things: firstly, JavaScript parsing in the mapper, so you can write design documents in JavaScript rather than JSON (which is nasty); you write JS and it turns them into JSON. Secondly, a simple method on the fam db object to query views.
You can use it like this:
Write JavaScript versions of your design documents, with the views as vars in the global namespace, in files with the desired name of the design document, e.g.
```javascript
var cat_legs = {
map: function(doc){
if(doc.type == "cat"){
emit(doc.legs, doc)
}
}
}
```
Saved in a file called `animal_views.js`. Then pass the paths to your JavaScript design documents to the constructor for the mapper:
```python
mapper = ClassMapper([Dog, Cat, Person], designs=[".../animal_views.js"])
```
You can then query these views like this: `db.view(viewpath, **kwargs)`, where viewpath is a composite of design name and view name, `design_name/view_name`, and kwargs are the normal view query attributes for either CouchDB or Sync Gateway (they differ slightly), e.g.:
```python
cats_with_three_legs = db.view("animal_views/cat_legs", key=3)
```
## String Formats
The StringField can easily be extended to define strings of data in certain formats. Currently there are two in fam.string_formats: EmailField and DateTimeField.
## Write Buffer
This is a context-managed in-memory object buffer. Reads pass through it so the same Python object always represents the same db doc,
and document writes are only saved back to the database when the context manager closes.
This replaces the old cache; it is aliased to it so existing code won't break.
```python
from fam.buffer import buffered_db
# create a database db as usual
# then create an in memory cache in front of it
with buffered_db(db) as bdb:
# now use bdb instead of db
dog = Dog(name="fly")
bdb.put(dog)
# dog2 will be the exact same python object as dog
dog2 = bdb.get(dog.key)
#when the context closes the docs are saved back to db
```
## Sync Function ACLs
Although I am a big fan of Couchbase Sync Gateway, I feel that the sync function is a little overburdened with responsibilities,
so I template some portions of my sync function that protect access to writing documents.
To support this I have added declarative acls in an additional class attribute on FamObjects. It looks like this:
```python
acl = [
CreateRequirement(role=ANYONE, owner=True),
UpdateRequirement(role=NO_ONE, fields=["channels", "project_id", "immutable_name", "owner_name"]),
UpdateRequirement(role=ANYONE, owner=True, fields=["name"]),
DeleteRequirement(role=ANYONE, owner=True)
]
```
This will not be useful for everyone or in all situations, as it necessarily limits the flexibility of how the sync function works. It isn't fully documented here and still requires a clear understanding of how the sync function works, so tread carefully.
There is then a function in `fam.acl.writer` which takes two templates, a top level one for the json config function and inner one for the js sync function, and a mapper, to generate a complete config file.
```python
write_sync_function(template_path, output_path, sync_template_path, mapper)
```
The templating is crude, using simple string replacement to add a collection of the requirements to the js. Have a look at the function to see what it does. You can then apply the normal sync function checks with a function something like this:
```javascript
function check(a_doc, req){
if(req === undefined){
requireRole([]);
return;
}
if(req.owner !== undefined){
if(a_doc.owner_name === undefined){
throw("owner_name not given");
}
requireUser(a_doc.owner_name);
}
if(req.withoutAccess === undefined){
requireAccess(a_doc.channels);
}
if(req.user !== undefined){
requireUser(req.user);
}
if(req.role !== undefined){
requireRole(req.role);
}
}
```
## To Do?
Some possible further features:
- Optional class attribute **schema** to give better control over document validation.
- Pass schemata to sync gateway's sync function to enforce typed validation on document creation and update.
- Author: Paul Harter
- License: LICENSE
- Package Index Owner: paulharter
- DOAP record: fam-1.1.0.xml | https://pypi.python.org/pypi/fam/1.1.0 | CC-MAIN-2017-13 | refinedweb | 1,870 | 52.49 |
From: SourceForge.net (noreply_at_[hidden])
Date: 2006-01-09 17:24:02
Patches item #1157160, was opened at 2005-03-04 23:23
Submitted By: Steven Weiss (fotzor)
>Assigned to: Beman Dawes (beman_dawes)
Summary: patch for boost::filesystem
Initial Comment:
Hi,
when you try to get the branch path of
"c:\\some_dir\\some_file.txt"
under Windows, the branch path should be "c:\\some_dir".
What is returned instead is "c:", which isn't correct.
So here's the fix:
in path_posix_windows.cpp:
std::string::size_type leaf_pos( const std::string & str,
    std::string::size_type end_pos ) // end_pos is past-the-end position
// return 0 if str itself is leaf (or empty)
{
    if ( end_pos && str[end_pos-1] == '/' ) return end_pos-1;

    std::string::size_type pos( str.find_last_of( '/', end_pos-1 ) );
#   ifdef BOOST_WINDOWS
    if ( pos == std::string::npos ) pos = str.find_last_of( '\\', end_pos-1 ); // ADDED
    if ( pos == std::string::npos ) pos = str.find_last_of( ':', end_pos-2 );
#   endif

    return ( pos == std::string::npos // path itself must be a leaf (or empty)
#   ifdef BOOST_WINDOWS
        || (pos == 1 && (str[0] == '/' || str[0] == '\\')) // or share // ADDED
#   endif
        ) ? 0 // so leaf is entire string
        : pos + 1; // or starts after delimiter
}
mfg steven
----------------------------------------------------------------------
Comment By: Steven Weiss (fotzor)
Date: 2005-03-05 00:24
Logged In: YES
user_id=1124235
I forgot something:
in branch_path() you must also change the condition of the if to:
if ( end_pos && (m_path[end_pos-1] == '/' || m_path[end_pos-1] == '\\')
    && !detail::is_absolute_root( m_path, end_pos ) )
        --end_pos;
Otherwise it doesn't work properly. Perhaps there are other functions depending on leaf_pos() that must be adjusted as well.
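For comparison, the behavior the patch is aiming for matches Python's ntpath module, which applies Windows path rules on any platform. This is only an illustration of the expected result, not part of the fix:

```python
import ntpath

# Expected Windows semantics for the example in the report:
path = "c:\\some_dir\\some_file.txt"
print(ntpath.dirname(path))   # c:\some_dir   (the branch path)
print(ntpath.basename(path))  # some_file.txt (the leaf)
```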
----------------------------------------------------------------------
You can respond by visiting:
-------------------------------------------------------
_______________________________________________
Boost-bugs mailing list
Boost-bugs_at_[hidden]
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2006/01/99066.php | CC-MAIN-2019-22 | refinedweb | 325 | 65.93 |
Python is well known for its easy syntax, fast implementation, and, most importantly, broad support for multiple data structures. Lists are one of those data structures in Python which help to store large amounts of sequential data in a single variable. As huge data is stored under the same variable, it is sometimes quite difficult to manually identify whether a given element is present in a list, and if yes, how many times. Therefore, in this article, we will study the various ways to count the number of occurrences in a list in Python. To recall the concepts of Python lists in detail, visit our article "3 Ways to Convert List to Tuple".
How to Count the Number of Occurrences in the List?
There are six ways by which you can count the number of occurrences of an element in a list. Let us study them all in brief below:
1) Using count() method
count() is the built-in method by which Python counts occurrences in a list. It is the easiest among all the methods used to count occurrences. The count() method takes one argument, i.e., the element whose number of occurrences is to be counted.
For example:
sample_list = ["a", "ab", "a", "abc", "ab", "ab"]

print(sample_list.count("a"))
print(sample_list.count("ab"))
Output
2
3
2) Using a loop
Another simple approach to counting occurrences is by using a loop with a counter variable. Here, the counter variable increases its value by one each time it encounters the given element while traversing the list. At last, the value of the counter variable displays the number of occurrences of the element.
For example:
def countElement(sample_list, element):
    count = 0
    for item in sample_list:
        if item == element:
            count += 1
    return count

sample_list = ["a", "ab", "a", "abc", "ab", "ab"]
element = "ab"
print('{} has occurred {} times'.format(element, countElement(sample_list, element)))
Output
ab has occurred 3 times
3) Using countof() method
The operator module from the Python standard library provides the countOf() method, which returns the number of occurrences of an element in a list. This method takes two arguments, i.e., the list in which the count needs to be performed and the element which needs to be counted. Moreover, you have to import the operator module at the beginning of the program using the "import" keyword, as shown below:
For example:
import operator as op

sample_list = ["a", "ab", "a", "abc", "ab", "ab"]
print(op.countOf(sample_list, "a"))
Output
2
4) Using counter() method
Python possesses a built-in module named collections, including multiple utilities to ease your programming. One such utility is the Counter class, where elements are stored as dictionary keys and their counts as values.

Therefore, the Counter class helps you find the total number of occurrences of a given element inside the given list, taking one parameter: the list in which the elements are to be counted. Remember that you have to import the collections module to use Counter, as shown in the below example:
For example:
from collections import Counter

sample_list = ["a", "ab", "a", "abc", "ab", "ab"]
print(Counter(sample_list))
c = Counter(sample_list)
print(c["a"])
Output
Counter({'ab': 3, 'a': 2, 'abc': 1})
2
5) Using pandas library
Pandas is a widely used Python library, highly popular for data analysis and data manipulation. It is an open-source tool with a large range of features and is widely used in domains like machine learning and artificial intelligence. To learn more about pandas, please visit our article "Numpy vs Pandas."
Pandas possesses a wide range of default methods, one of which is the value_counts() method. The value_counts() method operates on a pandas Series, i.e., a one-dimensional array with axis labels.

To count the occurrences of elements using pandas, you have to convert the given list into a Series and then use the value_counts() method, which returns counts in descending order. From this, you can easily note that the first entry is always the most frequently occurring element.
Check out the below example for a better understanding of the Pandas library
For example:
import pandas as pd

sample_list = ["a", "ab", "a", "abc", "ab", "ab"]
count = pd.Series(sample_list).value_counts()
print(count["a"])
Output
2
6) Using loops and dict in python
This is the most traditional method by which Python counts occurrences in a list, using a loop, a conditional statement, and a dictionary. In this method, you create an empty dictionary and then iterate over the list. Next, check whether the element from the list is already present in the dictionary. If yes, increase its value by one; otherwise, introduce a new key in the dictionary and assign 1 to it. Repeat the same process until all the elements in the list are visited.

Remember that this method is quite different from the earlier method using the loop and the counter variable: that method does not make use of a dictionary data structure, whereas this one does. At last, print the count of occurrences of each element, as shown in the below example:
For example:
sample_list = ["a", "ab", "a", "abc", "ab", "ab"]

def countOccurrence(a):
    k = {}
    for j in a:
        if j in k:
            k[j] += 1
        else:
            k[j] = 1
    return k

print(countOccurrence(sample_list))
Output
{'a': 2, 'ab': 3, 'abc': 1}
Conclusion
Programming is all about reducing manual tasks and shifting to automation. Counting the occurrences of elements in a large dataset manually is quite a tedious and time-consuming task. Therefore, Python provides various methods by which you can count occurrences easily and quickly with a few lines of code, as shown in the article above. It is recommended to learn and understand all these methods to make your programming effective and efficient.
Hello everyone,
I'm very new to Arduino (and my math skills aren't very good), so I didn't know exactly what to look for to solve this problem.

I'm using a Pro Micro and I have this code for a simple analog reading of two potentiometers. In my experiments with analogRead I came to know the issue of spikes in the signal. I have looked for a solution for this, and of the many solutions I've read about (I didn't try the 100nF cap method yet - I'm waiting for the capacitors to arrive) I found the RunningAverage library, which seemed like the best solution around.

I first tried the example code of the library with a single potentiometer. It worked very well. Now, when I'm connecting two potentiometers at two different analog inputs, despite my expectation that RunningAverage would work separately for each reading, the RA result in the Serial.println is acting as an addition/subtraction formula: it shows only one reading as the result of pot1+pot2 (or pot1-pot2 when the analog reading of one of the two is increased). So, my question is, is there any way to apply RunningAverage as a spike filter for each of the readings separately?
Many many thanks !!!
#include <RunningAverage.h>

const int knobs = 2;
int inputs[knobs] = {A1, A2};
RunningAverage myRA(10);
int samples = 0;

void setup() {
}

void loop() {
  for (int i = 0; i < knobs; i++) {
    // Serial.println(inputs[i]); - works separately and shows the pin number
    // Serial.println(analogRead(inputs[i])); - shows the reading of the two pins separately
    long rn = analogRead(inputs[i]);
    myRA.addValue(rn * 0.001);
    samples++;
    Serial.print("Running Average: ");
    Serial.println(myRA.getAverage(), 3);
    if (samples == 300) {
      samples = 0;
      // myRA.clear();
    }
    delay(50);
  }
}
#include <qiconview.h>
A QIconView can display and manage a grid or other 2D layout of labelled icons. Each labelled icon is a QIconViewItem. Items (QIconViewItems) can be added or deleted at any time; items can be moved within the QIconView. Single or multiple items can be selected. Items can be renamed in-place. QIconView also supports drag and drop.
Each item contains a label string, a pixmap or picture (the icon itself) and optionally a sort key. The sort key is used for sorting the items and defaults to the label string. The label string can be displayed below or to the right of the icon (see ItemTextPos).
The simplest way to create a QIconView is to create a QIconView object and create some QIconViewItems with the QIconView as their parent, set the icon view's geometry and show it. For example:
QIconView *iv = new QIconView( this );
QDir dir( path, "*.xpm" );
for ( uint i = 0; i < dir.count(); i++ ) {
    (void) new QIconViewItem( iv, dir[i], QPixmap( path + dir[i] ) );
}
iv->resize( 600, 400 );
iv->show();
The QIconViewItem call passes a pointer to the QIconView we wish to populate, along with the label text and a QPixmap.
When an item is inserted the QIconView allocates a position for it. Existing items are rearranged if autoArrange() is TRUE. The default arrangement is LeftToRight -- the QIconView fills up the left-most column from top to bottom, then moves one column right and fills that from top to bottom and so on. The arrangement can be modified with any of the following approaches:
- Call setArrangement(), e.g. with TopToBottom, which will fill the top-most row from left to right, then move one row down and fill that row from left to right and so on.
- Construct each QIconViewItem using a constructor which allows you to specify which item the new one is to follow.
- Call setSorting() or sort() to sort the items.
The spacing between items is set with setSpacing(). Items can be laid out using a fixed grid using setGridX() and setGridY(); by default the QIconView calculates a grid dynamically. The position of items' label text is set with setItemTextPos(). The text's background can be set with setItemTextBackground(). The maximum width of an item and of its text are set with setMaxItemWidth() and setMaxItemTextLength(). The label text will be word-wrapped if it is too long; this is controlled by setWordWrapIconText(). If the label text is truncated, the user can still see the entire text in a tool tip if they hover the mouse over the item. This is controlled with setShowToolTips().
Items which are selectable may be selected depending on the SelectionMode; the default is
Single. Because QIconView offers multiple selection it must display keyboard focus and selection state separately. Therefore there are functions to set the selection state of an item (setSelected()) and to select which item displays keyboard focus (setCurrentItem()). When multiple items may be selected the icon view provides a rubberband, too.
When in-place renaming is enabled (it is disabled by default), the user may change the item's label. They do this by selecting the item (single clicking it or navigating to it with the arrow keys), then single clicking it (or pressing F2), and entering their text. If no key has been set with QIconViewItem::setKey() the new text will also serve as the key. (See QIconViewItem::setRenameEnabled().)
You can control whether users can move items themselves with setItemsMovable().
Because the internal structure used to store the icon view items is linear, no iterator class is needed to iterate over all the items. Instead we iterate by getting the first item from the {icon view} and then each subsequent ( QIconViewItem::nextItem()) from each item in turn:
for ( QIconViewItem *item = iv->firstItem(); item; item = item->nextItem() ) do_something( item );
delete. All the items can be deleted with clear().
The QIconView emits a wide range of useful signals, including selectionChanged(), currentChanged(), clicked(), moved() and itemRenamed().
draganddrop
Definition at line 265 of file qiconview.h. | http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQIconView.html | CC-MAIN-2018-22 | refinedweb | 669 | 63.9 |
05 Feb 2021 03:39 AM
Hi
where i can find a step by step procedure to monitor Cloud Run in Google public Cloud with Dynatrace ?
This post... is not very cler
regards
05 Feb 2021 03:55 AM
I guess here's how you can start:...
05 Feb 2021 04:03 AM
Hi
i need specific step to step information, when i connect on google cloud console for cloud run i cannot create a dynatrace namespace as written in the documentation....
05 Feb 2021 04:33 AM
Have you configured your kubectl for Google cloud?
05 Feb 2021 04:44 AM
I don't like to guess Radoslaw, i need a procedure to monitor Cloud Run on Google Platform.
I don't understand if i have to use the
regards
05 Feb 2021 04:47 AM
And I don't know what environment you have to help you. Maybe reach out to Dynatrace ONE so they guide you?
07 May 2021 07:41 AM | https://community.dynatrace.com/t5/Dynatrace-Open-Q-A/How-can-i-monitor-a-Google-Cloud-Run-environment-in-Dynatrace/m-p/121516/highlight/true | CC-MAIN-2021-31 | refinedweb | 165 | 77.27 |
scala-poolscala-pool
scala-pool is a Scala library for object pooling. The library provides an API and different pool implementations that allow:
- blocking/non-blocking object acquisition
- object invalidation
- capping the number of pooled objects
- creating new objects lazily, as needed
- health checking
- time-based pool eviction (idle instances)
- GC-based pool eviction (soft and weak references)
- efficient thread-safety
InstallationInstallation
scala-pool is currently available for Scala 2.13, 2.12, and 2.11, the latest version is
0.4.3.
To use it in an existing SBT project, add the following dependency to your
build.sbt:
libraryDependencies += "io.github.andrebeat" %% "scala-pool" % "0.4.3"
The latest snapshot version is also available:
libraryDependencies += "io.github.andrebeat" %% "scala-pool" % "0.5.0-SNAPSHOT"
It might be necessary to add the Sonatype OSS Snapshot resolver:
resolvers += Resolver.sonatypeRepo("snapshots")
Currently, the library has no external dependencies apart from the Java and Scala standard libraries.
JDK7 supportJDK7 support
This library relies on features only available in Java 8 (
java.util.concurrent.atomic.LongAdder),
the versions published on Sonatype are compiled with JDK 8. This library provides support for JVM 7
if it is compiled with JDK 7 (only for Scala 2.11). If you need to use this library on JVM 7 then
you should compile and package it yourself.
UsageUsage
The basic usage of the pool is shown below:
import io.github.andrebeat.pool._ // Creating a `Pool[Object]` with a capacity of 2 instances val pool = Pool(2, () => new Object) // Acquiring a lease on an object from the pool (blocking if none available) val lease = pool.acquire() // Using the lease lease { o => println(o) } // The object is returned to the pool at this point
All of the different pool features are exposed in the
Pool companion object
apply method:
Pool( capacity: Int, // the maximum capacity of the pool factory: () => A, // the function used to create new objects in the pool referenceType: ReferenceType, // the reference type of objects in the pool maxIdleTime: Duration, // the maximum amount of the time that objects are allowed // to idle in the pool before being evicted reset: A => Unit, // the function used to reset objects in the pool // (called when leasing an object from the pool) dispose: A => Unit, // the function used to destroy an object from the pool healthCheck: A => Boolean) // the predicate used to test whether an object is // healthy and should be used, or destroyed otherwise
It is also possible to get a value from a lease and release it (or invalidate) manually.
import io.github.andrebeat.pool._ // Creating a `Pool[Object]` with a capacity of 2 instances val pool = Pool(2, () => new Object) // Getting the value from the lease val obj = lease.get() // There are currently no objects on the pool pool.size // res0: Int = 0 // But its capacity is 2 (objects are created lazily) pool.capacity // res1: Int = 2 // There's 1 live object pool.live // res2: Int = 1 // And its currently leased pool.leased // res3: Int = 1 // Releasing our lease back to the pool lease.release // Its now in the pool waiting to be reused pool.size // res4: Int = 1 // Closes this pool, properly disposing of each pooled object and // releasing any resources associated with the pool pool.close()
The API is documented in depth in the Scaladoc.
LicenseLicense
scala-pool is licensed under the MIT license. See
LICENSE
for details. | https://index.scala-lang.org/andresilva/scala-pool/scala-pool/0.4.3?target=_2.13 | CC-MAIN-2022-05 | refinedweb | 564 | 53.71 |
GNU debug code, replaces standard behavior with debug behavior.
Macros and namespaces used by the implementation outside of debug wrappers to verify certain properties. The __glibcxx_requires_xxx macros are merely wrappers around the __glibcxx_check_xxx wrappers when we are compiling with debug mode, but disappear when we are in release mode so that there is no checking performed in, e.g., the standard library algorithms.
Based on operator<.
Definition at line 735 of file debug/forward_list.
Based on operator<.
Definition at line 721 of file debug/forward_list.
Based on operator<.
Definition at line 728 of file debug/forward_list.
See std::forward_list::swap().
Definition at line 742 of file debug/forward_list. | http://gcc.gnu.org/onlinedocs/gcc-4.7.2/libstdc++/api/a01578.html | CC-MAIN-2017-39 | refinedweb | 108 | 53.68 |
basket-client 0.3.
Usage
Are you looking to integrate this on a site for email subscriptions? All you need to do is:
import basket
basket.subscribe(‘<email>’, ‘<newsletter>’, <kwargs>)
You can pass additional fields as keyword arguments, such as format and country. For a list of available fields and newsletters, see the basket documentation.
Are you checking to see if a user was successfully subscribed? You can use the debug-user method like so:
import basket
basket.debug_user(‘<email>’, ‘<supertoken>’)
And it return full details about the user. <supertoken> is a special token that grants you admin access to the data. Check with the mozilla.org developers to get it.
Settings
- BASKET_URL
- URL to basket server, e.g.
If you’re using Django you can simply add this setting to your settings.py file. Otherwise basket-client will look for this value in a BASKET_URL environment variable. The default is.
Change Log
v0.3.4
- Fix issue with calling subscribe with an iterable of newsletters.
- Add request function to those exposed by the basket` module.
v0.3.3
- Add get_newsletters API method for information on currently available newsletters.
- Handle Timeout exceptions from requests.
-.4.xml | https://pypi.python.org/pypi/basket-client/0.3.4 | CC-MAIN-2016-50 | refinedweb | 196 | 61.93 |
Running Trac on IIS 6 using PyISAPIe
Contributed by: Aa`Koshh
Configuration:
- Windows Server 2003
- Python 2.6
- IIS 6
- Trac 0.12
I have multiple Django sites running on the server based on the instructions at, which suggests using PyISAPIe, so that each site can use its own application pool, which is necessary to support Django configuration. Using this setup, each site can be restarted independently from the others. I wanted Trac to be served up the same way, and with a few modifications, it works, more or less.
- To support multiple sites, create a directory somewhere local to IIS and copy the PyISAPIe.dll and the Http module into it. This will only serve Trac.
- Follow the instructions on the Django site to set up a virtual directory based on the above copy of the dll, up until when the Info.py file works in its own application pool. Don't forget to add read permission on the dll to network service.
- Modify Isapy.py in the Http directory to call the Trac WSGI handler:
# $URL: $ from trac.web.main import dispatch_request from Http.WSGI import RunWSGI from Http import Env import os os.environ['TRAC_ENV'] = r"\\myserver\path\to\trac\env" os.environ['PYTHON_EGG_CACHE'] = r"\\myserver\path\to\trac\env\eggs" # This is how the WSGI module determines what part of the path # SCRIPT_NAME should consist of. If you configure PyISAPIe as # a wildcard map on the root of your site, you can leave this # value as-is. # Base = "/Trac" # This is an example of what paths might need to be handled by # other parts of IIS that still come here first. This value's # default of "/media" assumes that you've mapped a virtual # directory to Django's admin media folder and so expect the # files to be served by the static file handler. # Exclude = ["/Trac/chrome/common", "/chrome/common"] # The main request handler. Handler = dispatch_request def Request(): PathInfo = Env.PATH_INFO.lower() # Check for anything we know shouldn't be handled by Python and # pass it back to IIS, which in most cases sends it to the static # file handler. if not PathInfo.startswith(Base.lower()): return True for Excl in Exclude: if PathInfo.startswith(Excl.lower()): return True return RunWSGI(Handler, Base=Base)
- Edit the WSGI.py module in the Http directory and uncomment REMOTE_USER in the
IsapeEnvAutolist so that Trac can pick up the logged in user.
- When there is an error, Trac tries to present its error.html template, but also passes the exception info to the WSGI function
start_response. PyISAPIe, however, re-raises the exception even though it was handled, so for example unhandled paths result in an IIS Internal Error page and a stack trace. I edited the WSGI.py again so that if Trac supports an exception info but headers are also present, no exception is raised to IIS:
def StartResponse(Status, Headers, ExcInfo = None): if ExcInfo and not Headers : # only raise exception to IIS if Trac is not about to present the error page try: raise ExcInfo[0], ExcInfo[1], ExcInfo[2] finally: ExcInfo = None Status = int(Status.split(" ",1)[0]) Header(Status = Status) for NameValue in Headers: Lname = NameValue[0].lower() if Lname == "content-length": Header(Length = int(NameValue[1])) continue elif Lname == "connection": if NameValue[1].lower() == "close": Header(Close = True) continue Header("%s: %s" % NameValue) return Write
- After changes are made to either Python modules or the trac.ini file, right click on the application pool and click recycle. The site should display at this point.
- I had some problems with static files (Trac css and Javascript) over 4K in size not being served. The exception seems to be raised when the content length header is set in
StartResponseabove, maybe it is sent twice. I ended up setting up an IIS virtual directory to serve the htdocs directory of Trac and added the htdocs_location = /trac_media entry to trac.ini
- To support authentication, create an empty directory in the base Trac virtual directory in IIS and name it "login". Set directory security to Digest Authentication (it works behind Nginx, whereas Integrated Windows Authentication does not).
- I had some trouble with the admin account: when I accessed the site directly by IP and logged in with my Windows credentials, the remote user was "domain\user", but through our router it became "DOMAIN\user" with capital letters, and the roles defined were picked up only for the one that I explicitly granted it for with trac-admin. Whichever appears on the site, add permissions for that username like this:
trac-admin $ENV permission add DOMAIN\user TRAC_ADMIN
- The header and footer did not appear on Trac pages (it did in the standalone server). I had to modify layout.html and remove the fallback tag at the bottom of the document like this:
<xi:include</xi:include> <xi:include<xi:fallback /></xi:include> </html>
- I really hope nothing new will come up…
Last modified 8 years ago Last modified on Jun 27, 2011, 1:18:29 PM | https://trac.edgewall.org/wiki/TracOnWindowsIisPyISAPIe | CC-MAIN-2019-47 | refinedweb | 837 | 54.42 |
2.3. Scalars, Vectors, Matrices, and Tensors¶
Now that you can store and manipulate data, let us briefly review the subset of basic linear algebra that you will need to understand and implement most of models covered in this book. Below, we introduce the basic mathematical objects in linear algebra, expressing each both represented\).
In MXNet code, a scalar is represented by an
ndarray with just one
element. In the next snippet, we instantiate two scalars and perform
some familiar arithmetic operations with them, namely addition,
multiplication, division, and exponentiation.
from mxnet import np, npx npx.set_np() x = np.array(3.0) y = np.array(2.0) x + y, x * y, x / y, x ** y
(array(5.), array(6.), array(1.5), array})\).
In MXNet, we work with vectors via \(1\)-dimensional
ndarrays.
In general
ndarrays can have arbitrary lengths, subject to the
memory limits of your machine.
x = np.arange(4) x
array([3]
array Python array, we can access the length of an
ndarray by calling Python’s built-in
len() function.
len(x)
4
When an
ndarray represents a vector (with precisely one axis), we
can also access its length via the
.shape attribute. The shape is a
tuple that lists the length (dimensionality) along each axis of the
ndarray. For
ndarrays with just one axis, the shape has just
one element.
x.shape
an
ndarray to refer to the number of axes that an
ndarray has.
In this sense, the dimensionality of an
ndarray’s some axis will
be the length of that axis.
2.3.3. Matrices¶
Just as vectors generalize scalars from order \(0\) to order
\(1\), matrices generalize vectors from order \(1\) to order
\(2\). Matrices, which we will typically denote with bold-faced,
capital letters (e.g., \(\mathbf{X}\), \(\mathbf{Y}\), and
\(\mathbf{Z}\)), are represented in code as
ndarrays with
\(2\) in MXNet by specifying a
shape with two components \(m\) and \(n\) when calling any of
our favorite functions for instantiating an
ndarray.
A = np.arange(20).reshape(5, 4) A
array([[:
In code, we access a matrix’s transpose via the
T attribute.
A.T
array([[ 0., 4., 8., 12., 16.], [ 1., 5., 9., 13., 17.], [ 2., 6., 10., 14., 18.], [ 3., 7., 11., 15., 19.]])
As a special type of the square matrix, a symmetric matrix \(\mathbf{A}\) is equal to its transpose: \(\mathbf{A} = \mathbf{A}^\top\).
B = np.array([[1, 2, 3], [2, 0, 4], [3, 4, 5]]) B
array([[1., 2., 3.], [2., 0., 4.], [3., 4., 5.]])
B == B.T
array([[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]).
2.3.4. Tensors¶
Just as vectors generalize scalars, and matrices generalize vectors, we
can build data structures with even more axes. Tensors give us a generic
way of describing
ndarrays with an arbitrary number of axes.
Vectors, for example, are first-order tensors, and matrices are
second-order tensors. Tens.
Tensors will become more important when we start working with images,
which arrive as
ndarrays with 3 axes corresponding to the height,
width, and a channel axis for stacking the color channels (red, green,
and blue). For now, we will skip over higher order tensors and focus on
the basics.
X = np.arange(24).reshape(2, 3, 4) X
array([[[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], [[12., 13., 14., 15.], [16., 17., 18., 19.], [20., 21., 22., 23.]]])
2.3.5. Summary¶
Scalars, vectors, matrices, and tensors are basic mathematical objects in linear algebra.
Vectors generalize scalars, and matrices generalize vectors.
In the
ndarrayrepresentation, scalars, vectors, matrices, and tensors have 0, 1, 2, and an arbitrary number of axes, respectively.
2.3.6. tensor
Xof shape (\(2\), \(3\), \(4\)) in this section. What is the output of
len(X)?
For a tensor
Xof arbitrary shape, does
len(X)always correspond to the length of a certain axis of
X? What is that axis? | https://www.d2l.ai/chapter_preliminaries/scalar-tensor.html | CC-MAIN-2019-47 | refinedweb | 656 | 63.09 |
<import addon="script.module.youtube.dl" version="14.810.0"/>
import YDStreamExtractor
YDStreamExtractor.disableDASHVideo(True) #Kodi (XBMC) only plays the video for DASH streams, so you don't want these normally. Of course these are the only 1080p streams on YouTube
url = "" #a youtube ID will work as well and of course you could pass the url of another site
vid = YDStreamExtractor.getVideoInfo(url,quality=1) #quality is 0=SD, 1=720p, 2=1080p and is a maximum
stream_url = vid.streamURL() #This is what Kodi (XBMC) will play
choices = []
if vid.hasMultipleStreams():
for s in vid.streams():
title = s['title']
choices.append(title)
index = some_function_asking_the_user_to_choose(choices)
vid.selectStream(index) #You can also pass in the the dict for the chosen stream
stream_url = vid.streamURL()
#This will return True if the URL points (probably) to a video without actually fetching all the stream info.
YDStreamExtractor.mightHaveVideo(url)
import YDStreamUtils
import YDStreamExtractor
YDStreamExtractor.disableDASHVideo(True)
url = ""
vid = YDStreamExtractor.getVideoInfo(url,quality=1)
path = "/directory/where/we/want/the/video"
with YDStreamUtils.DownloadProgress() as prog: #This gives a progress dialog interface ready to use
try:
YDStreamExtractor.setOutputCallback(prog)
result = YDStreamExtractor.downloadVideo(vid,path)
if result:
#success
full_path_to_file = result.filepath
elif result.status != 'canceled':
#download failed
error_message = result.message
finally:
YDStreamExtractor.setOutputCallback(None)
(2014-08-11 12:55)Koying Wrote: Was thinking that this was really missing
Is there already "frontend" addons to you module, to the best of your knowledge?
Pretty sure a youtube frontend would allow VEVO, by instance...
(2014-08-11 19:19)ruuk Wrote: I better get going and submit it to the official repository
(2014-08-11 19:45)ruuk Wrote: I also submitted this to the official repository. I'll update here when it is accepted.
(2014-08-11 20:47)Kib Wrote: (2014-08-11 19:45)ruuk Wrote: I also submitted this to the official repository. I'll update here when it is accepted.
I am sure we would love to have this in the official repo, but the request hasn't come in yet. Are you sure you sent it to the correct address?
(2014-08-19 12:56)mujunk Wrote: Pardon my ignorance. What is the Average Joe required to do with this? I have installed your repo but couldnt find this under video addons. | https://forum.kodi.tv/showthread.php?tid=200877 | CC-MAIN-2017-30 | refinedweb | 381 | 52.76 |
Answered by:
Call webservice from mobile handled c#
Question
- am trying to consume a web service via smart device project in c#.
I have web service on my server that respond via windows application fine. (The web service uploads any file type to the server). This is my goal to send files from handled pda.
But from smart device, I can add the web reference but in the code I can't recognize the service.
Someone know why?
[EDIT]
You can see image:
also, the error is: The type or namespace name 'Uploader' does not exist in the namespace 'SmartDeviceProject1' (are you missing an assembly reference?)
So... can't tell exacly what is the problem because i'm not so strong in mobile.
[/EDIT]
Answers
All replies
Hi,
If you click on the Uploader web reference and then click on the Show All Files button, do you see a Reference.cs file?
Thanks
Paul Diston
Hi,
The Show All Files button is located in the Solution Explorer window, at the top, to the left of the Refresh button.
In the Reference.cs file, there is the namespace that the Uploader web service is sitting in, you need to use this namespace in your Form1.cs.
Hope this helps.
Paul Diston
Hi,
All the files there with yellow exclamation mark.
I have 2 questions:
1. Why I can't change Uploader namespace to project namespace? and if yes, how? do I need to change the webservice itself?
2. Why there isn't automatically file reference.cs when i create the webservice? on the solution.
thanks.
hi bk-1,
I check it in my local machine,it works.you can follow below steps to check it again:
1.right-click->add service reference->click'Advanced'->click'Add Web reference'->please write URL in textbox->Enter'Go'.
please refer to more:()
hope this help you.
if so,please remember to mark the replies as answer if they help and unmark them if they provide no help. | https://social.msdn.microsoft.com/Forums/en-US/492de26a-f318-4136-ab71-dd543ef3c9ba/call-webservice-from-mobile-handled-c?forum=windowsmobiledev | CC-MAIN-2020-40 | refinedweb | 333 | 76.32 |
WinRT sockets provide a low-level socket interface for use with TCP and UDP sockets. The features are exposed in the
Windows.Networking.Sockets namespace. WinRT sockets also use classes in the
Windows.Networking namespace to provide access to managed
Hostname and
EndPoint classes. WinRT sockets are needed if other higher-level WinRT networking APIs (web access, Atompub, JSON, and background HTTP/FTP transfer) don't meet the requirements of your app. WinRT sockets are the building blocks a developer can use to build apps that use TCP or UDP protocols to access services or peers. Some typical examples might be Voice over IP (VoIP), instant messaging, mail clients (IMAP, POP, SMTP), and database clients. A very simple example using sockets might be an app that sends status messages to an SMTP server or an app that logs status to a SYSLOG server.
The WinRT API for sockets provides a simple API for sockets programming with three primary classes:
StreamSocket: A TCP stream socket.
DatagramSocket: A UDP datagram socket.
StreamSocketListener: A listener for incoming network connections using a TCP stream socket.
The
StreamSocket and
DatagramSocket classes implement the core TCP and UDP sockets. The
StreamSocketListener class is a helper class that implements a TCP listener. Once a connection request is received and accepted, a
StreamSocket object is created.
Associated with each of these classes are related
Control and
Information classes that provide socket control data and information for each of the primary classes. All WinRT socket methods are designed to be asynchronous and not block the UI thread. They even have
Async appended in the method name to make this clear.
Developers should be aware of some limitations placed on Windows Store Apps using sockets in Windows 8:
- TCP and UDP sockets are supported. Raw sockets and other types of sockets are not supported. So, for example, you can't write a ping tool that needs access to ICMP or ICMPv6.
- Sandboxing prevents a Windows Store app that uses sockets from accessing IPv4 or IPv6 loopback addresses on the device. This limitation is removed for apps running under the Windows debugger in Visual Studio to allow testing of client/server apps on the same device.
- The
StreamSocketclass has built-in support for SSL for client apps, but not for servers. It would be much more difficult to implement a TCP server that uses SSL because the app would need to implement all the SSL protocols.
- When a networking app using WinRT sockets loses focus, the app can't continue to send or receive packets. There are some special features (
Windows.Networking.Sockets.ControlChannelTrigger) that can be used to support background networking with a
StreamSocket, but they are complex.
Windows Store Apps and Lifetimes
There are some important implications when developing apps using WinRT. Each Windows Store app is run in sandbox by the OS for security and protection. This prevents a rogue app from accessing data or information from another app except through controlled mechanisms that require approval of the apps. The sandbox prevents apps from sharing sockets.
Windows Store apps must declare in a manifest what capabilities they plan to use. Several of these capabilities deal specifically with network access (Internet client, Internet client/server, or Intranet client/server). When the app is installed, the user must approve the use of these capabilities.
Windows Store apps operate under a very different app lifecycle model than Windows 8 desktop apps (and applications on Windows 7 and earlier). When a Windows Store app loses focus (that is, gets dragged away from the foreground and replaced by another app), the app is not multitasked as on previous versions of Windows. The app doesn't get any more CPU cycles and may be purged from memory. Before the OS switches to the new app, the existing app gets notified and has a short time to save state before it is cut off from further execution. Many network operations often take time to complete. So this app lifecycle model for Windows Store apps has important implications for the design and implementation of network apps. Also, when used on mobile devices, network connectivity may be lost. Caching network content becomes an important feature to include.
High Performance Winsock and Registered I/O
At the other end of the Winsock continuum, Windows 8 and Windows Server 2012 introduced new APIs for Winsock registered I/O (RIO) extensions. These APIs, which are usable only by desktop apps, are a set of Winsock extensions designed to lower latency and jitter and improve the network performance of network servers. While these APIs can be used by both clients and servers, network performance improvements, when compared with using traditional Winsock APIs, are likely to occur only on heavily-loaded network servers. The greatest improvements would be on servers that handle a large number of small network packets (~1K), which is typical of database servers and some applications used in the financial services industries.
The goal of Winsock and network stacks on other operating systems has always been to minimize data copying. For sends and receives, the regular Winsock APIs and the core TCP/IP stack use network buffers that must be pinned and unpinned by the system. The OS management of these buffers used by Winsock requires CPU cycles and user-mode to kernel-mode transitions to lock and unlock the buffers in memory. There are also CPU cycles associated with the methods used for I/O completion. | http://www.drdobbs.com/jvm/jvm/the-new-socket-apis-in-windows-8/240148403?pgno=2 | CC-MAIN-2015-35 | refinedweb | 908 | 53.51 |
HTML::FormHandler::Field - base class for fields
version 0.40016
Instances of Field subclasses are generally built by HTML::FormHandler from 'has_field' declarations or the field_list, but they can also be constructed using new for test purposes (since there's no standard way to add a field to a form after construction).
use HTML::FormHandler::Field::Text; my $field = HTML::FormHandler::Field::Text->new( name => $name, ... );
In your custom field class:
package MyApp::Field::MyText; use HTML::FormHandler::Moose; extends 'HTML::FormHandler::Field::Text'; has 'my_attribute' => ( isa => 'Str', is => 'rw' ); apply [ { transform => sub { ... } }, { check => ['fighter', 'bard', 'mage' ], message => '....' } ]; 1;
This is the base class for form fields. The 'type' of a field class is used in the FormHandler field_list or has_field to identify which field class to load from the 'field_name_space' (or directly, when prefixed with '+'). If the type is not specified, it defaults to Text.
See HTML::FormHandler::Manual::Fields for a list of the fields and brief descriptions of their structure.
The name of the field. Used in the HTML form. Often a db accessor. The only required attribute.
The class or type of the field. The 'type' of HTML::FormHandler::Field::Money is 'Money'. Classes that you define yourself are prefixed with '+'.
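For example, in a form class (the field names here are illustrative):

```perl
package MyApp::Form::Account;
use HTML::FormHandler::Moose;
extends 'HTML::FormHandler';

# 'type' selects the field class; the default is 'Text'
has_field 'title';                                   # HTML::FormHandler::Field::Text
has_field 'balance' => ( type => 'Money' );          # HTML::FormHandler::Field::Money
has_field 'notes' => ( type => '+MyApp::Field::MyText' );  # your own class, prefixed with '+'

1;
```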
If the name of your field is different than your database accessor, use this attribute to provide the accessor.
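A hypothetical example, where the database column is named differently from the form field:

```perl
# The form field is 'user_name', but the row object's accessor is 'username'
has_field 'user_name' => ( accessor => 'username' );
```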
The name of the field with all parents:
'event.start_date.month'
The field accessor with all parents.
The full_name plus the form name if 'html_prefix' is set.
By default we expect an input parameter based on the field name. This allows you to look for a different input parameter.
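For instance, if the submitted parameter name cannot match the field name (the parameter name here is an assumption):

```perl
# Look for the submitted value under the 'search-term' parameter
# instead of the field name 'search_term'
has_field 'search_term' => ( input_param => 'search-term' );
```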
Set the 'inactive' attribute to 1 to make this field inactive. If 'inactive' is not set, or is set to 0, the field is 'active'. This provides a way to define fields in the form and selectively set them to inactive. There is also an '_active' attribute, for internal use, which indicates that the field has been activated/inactivated on 'process' by the form's 'active'/'inactive' attributes.
You can use the is_inactive and is_active methods to check whether this particular field is active.
if( $form->field('foo')->is_active ) { ... }
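A short sketch of declaring an inactive field and activating it for a single 'process' call:

```perl
# Declared inactive; not processed or rendered unless activated
has_field 'admin_note' => ( type => 'TextArea', inactive => 1 );

# Later, activate it for a particular request via the form's 'active' attribute
$form->process( active => ['admin_note'], params => $params );
```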
The input string from the parameters passed in.
The value as it would come from or go into the database, after being acted on by inflations/deflations and transforms. Used to construct the
$form->values hash. Validation and constraints act on 'value'.
See also HTML::FormHandler::Manual::InflationDeflation.
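A sketch of how 'input', 'value', and 'fif' relate, using an inflation/deflation pair (the date format and formatter module are assumptions; see the InflationDeflation manual for the full set of hooks):

```perl
use DateTime::Format::Strptime;  # assumed formatter

has_field 'start_date' => (
    type           => 'Text',
    inflate_method => \&inflate_date,   # input string -> DateTime stored in 'value'
    deflate_method => \&deflate_date,   # DateTime -> string used for 'fif'
);

sub inflate_date {
    my ( $self, $value ) = @_;
    return DateTime::Format::Strptime->new( pattern => '%Y-%m-%d' )
        ->parse_datetime($value);
}

sub deflate_date {
    my ( $self, $value ) = @_;
    return $value->ymd;
}
```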
Values used to fill in the form. Read only. Use a deflation to get from 'value' to 'fif' if an inflator was used. Use the 'fif_from_value' attribute if you want to use the field 'value' to fill in the form.
[% form.field('title').fif %]
Initial value populated by init_from_object. You can tell if a field has changed by comparing 'init_value' and 'value'. Read only.
Input for this field if there is no param. Set by default for Checkbox and Select fields, since an unchecked checkbox or unselected pulldown does not return a parameter.
A reference to the containing form.
A reference to the parent of this field. Compound fields are the parents for the fields they contain.
Returns the error list for the field. Also provides 'num_errors', 'has_errors', 'push_errors' and 'clear_errors' from Array trait. Use 'add_error' to add an error to the array if you want to use a MakeText language handle. Default is an empty list.
Add an error to the list of errors. The error message will be localized using the '_localize' method. See also HTML::FormHandler::TraitFor::I18N.
return $field->add_error( 'bad data' ) if $bad;
Compound fields will have an array of errors from the subfields.
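A minimal sketch of adding an error during validation (the age check itself is illustrative):

```perl
# In a form class: a per-field validation method, called automatically
sub validate_age {
    my ( $self, $field ) = @_;
    $field->add_error('Age must be at least 18')
        if defined $field->value && $field->value < 18;
}

# After processing, inspect the field's error list
unless ( $form->validated ) {
    for my $err ( $form->field('age')->errors ) {
        warn $err;
    }
}
```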
Set the method used to localize.
The 'element_attr' hashref attribute can be used to set arbitrary HTML attributes on a field's input tag.
has_field 'foo' => ( element_attr => { readonly => 1, my_attr => 'abc' } );
Note that the 'id' and 'type' attributes are not set using element_attr. Use the field's 'id' attribute (or 'build_id_method') to set the id.
The 'label_attr' hashref is for label attributes, and the 'wrapper_attr' is for attributes on the wrapping element (a 'div' for the standard 'simple' wrapper).
A 'javascript' key in one of the '_attr' hashes will be inserted into the element as-is.
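Putting the three hashrefs together (the particular HTML attributes chosen here are illustrative):

```perl
has_field 'email' => (
    element_attr => { placeholder => 'you@example.com', autocomplete => 'off' },
    label_attr   => { class => 'required' },
    wrapper_attr => { id => 'email-wrapper' },
);
```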
The following are used in rendering HTML, but are handled specially.
label - Text label for this field. Defaults to ucfirst field name. build_label_method - coderef for constructing the label wrap_label_method - coderef for constructing a wrapped label id - Useful for javascript (default is html_name. to prefix with form name, use 'html_prefix' in your form) build_id_method - coderef for constructing the id render_filter - Coderef for filtering fields before rendering. By default changes >, <, &, " to the html entities disabled - Boolean to set field disabled
The order attribute may be used to set the order in which fields are rendered.
order - Used for sorting errors and fields. Built automatically, but may also be explicitly set
The following are discouraged. Use 'element_attr', 'label_attr', and 'wrapper_attr' instead.
css_class - instead use wrapper_attr => { class => '...' } input_class - instead use element_attr => { class => '...' } title - instead use element_attr => { title => '...' } style - instead use element_attr => { style => '...' } tabindex - instead use element_attr => { tabindex => 1 } readonly - instead use element_attr => { readonly => 'readonly' }
Rendering of the various HTML attributes is done by calling the 'process_attrs' function (from HTML::FormHandler::Render::Util) and passing in a method that adds in error classes, provides backward compatibility with the deprecated attributes, etc.
attribute hashref class attribute wrapping method ================= ================= ================ element_attr element_class element_attributes label_attr label_class label_attributes wrapper_attr wrapper_class wrapper_attributes
The slots for the class attributes are arrayrefs; they will coerce a string into an arrayref. In addition, these 'wrapping methods' call a hook method in the form class, 'html_attributes', which you can use to customize and localize the various attributes. (Field types: 'element', 'wrapper', 'label')
sub html_attributes { my ( $self, $field, $type, $attr ) = @_; $attr->{class} = 'label' if $type eq 'label'; return $attr; }
The 'process_attrs' function will also handle an array of strings, such as for the 'class' attribute.
A hashref containing flags and strings for use in the rendering code. The value of a tag can be a string, a coderef (accessed as a method on the field) or a block specified with a percent followed by the blockname ('%blockname').
Retrieve a tag with 'get_tag'. It returns a '' if the tag doesn't exist.
This attribute used to be named 'widget_tags', which is deprecated.
This string is used when rendering the input tag as the value for the type attribute. It is used when the form has the is_html5 flag on.
The 'widget' attribute is used in rendering, so if you are not using FormHandler's rendering facility, you don't need this attribute. It is used in generating HTML, in templates and the rendering roles. Fields of different type can use the same widget.
This attribute is set in the field classes, or in the fields defined in the form. If you want a new widget type, create a widget role, such as MyApp::Form::Widget::Field::MyWidget. Provide the name space in the 'widget_name_space' attribute, and set the 'widget' of your field to the package name after the Field/Form/Wrapper:
has_field 'my_field' => ( widget => 'MyWidget' );
If you are using a template based rendering system you will want to create a widget template. (see HTML::FormHandler::Manual::Templates)
Widget types for some of the provided field classes:
Widget : Field classes -----------------------:--------------------------------- Text : Text, Integer Checkbox : Checkbox, Boolean RadioGroup : Select, Multiple, IntRange (etc) Select : Select, Multiple, IntRange (etc) CheckboxGroup : Multiple select TextArea : TextArea, HtmlArea Compound : Compound, Repeatable, DateTime Password : Password Hidden : Hidden Submit : Submit Reset : Reset NoRender : Upload : Upload
Widget roles are automatically applied to field classes unless they already have a 'render' method, and if the 'no_widgets' flag in the form is not set.
You can create your own widget roles and specify the namespace in 'widget_name_space'. In the form:
has '+widget_name_space' => ( default => sub { ['MyApp::Widget'] } );
If you want to use a fully specified role name for a widget, you can prefix it with a '+':
widget => '+MyApp::Widget::SomeWidget'
For more about widgets, see HTML::FormHandler::Manual::Rendering.
password - prevents the entered value from being displayed in the form writeonly - The initial value is not taken from the database noupdate - Do not update this field in the database (does not appear in $form->value)
See also the documentation on "Defaults" in HTML::FormHandler::Manual::Intro.
Supply a coderef (which will be a method on the field) with 'default_method' or the name of a form method with 'set_default' (which will be a method on the form). If not specified and a form method with a name of
default_<field_name> exists, it will be used.
Provide an initial value just like the 'set_default' method, except in the field declaration:
has_field 'bax' => ( default => 'Default bax' );
FormHandler has flipped back and forth a couple of times about whether a default specified in the has_field definition should override values provided in an initial item or init_object. Sometimes people want one behavior, and sometimes the other. Now 'default' does *not* override.
If you pass in a model object with
item => $row or an initial object with
init_object => {....} the values in that object will be used instead of values provided in the field definition with 'default' or 'default_fieldname'. If you want defaults that override the item/init_object, you can use the form flags 'use_defaults_over_obj' and 'use_init_obj_over_item'.
You could also put your defaults into your row or init_object instead.
This is deprecated; look into using 'use_defaults_over_obj' or 'use_init_obj_over_item' flags instead. They allow using the standard 'default' attribute.
Allows setting defaults which will override values provided with an item/init_object. (And only those. Will not be used for defaults without an item/init_object.)
has_field 'quux' => ( default_over_obj => 'default quux' );
At this time there is no equivalent of 'set_default', but the type of the attribute is not defined so you can provide default values in a variety of other ways, including providing a trait which does 'build_default_over_obj'. For examples, see tests in the distribution.
See also HTML::FormHandler::Manual::Validation.
Flag indicating whether this field must have a value
For DB field - check for uniqueness. Action is performed by the DB model.
messages => { required => '...', unique => '...' }
Set messages created by FormHandler by setting in the 'messages' hashref. Some field subclasses have additional settable messages.
required: Error message text added to errors if required field is not present. The default is "Field <field label> is required".
Field values are validated against the specified range if one or both of range_start and range_end are set and the field does not have 'options'.
The IntRange field uses this range to create a select list with a range of integers.
In a FormHandler field_list:
age => { type => 'Integer', range_start => 18, range_end => 120, }
Fields that contain 'empty' values such as '' are changed to undef in the validation process. If this flag is set, the value is not changed to undef. If your database column requires an empty string instead of a null value (such as a NOT NULL column), set this attribute.
has_field 'description' => ( type => 'TextArea', not_nullable => 1, );
This attribute is also used when you want an empty array to stay an empty array and not be set to undef.
It's also used when you have a compound field and you want the 'value' returned to contain subfields with undef, instead of the whole field to be undef.
Use the 'apply' keyword to specify an ArrayRef of constraints and coercions to be executed on the field at validate_field time.
has_field 'test' => ( apply => [ 'MooseType', { check => sub {...}, message => { } }, { transform => sub { ... lc(shift) ... } } ], );
See more documentation in HTML::FormHandler::Manual::Validation.
An action to trim the field. By default this contains a transform to strip beginning and trailing spaces. Set this attribute to null to skip trimming, or supply a different transform.
trim => { transform => sub { my $string = shift; $string =~ s/^\s+//; $string =~ s/\s+$//; return $string; } } trim => { type => MyTypeConstraint }
Trimming is performed before any other defined actions.
There are a number of methods to provide finely tuned inflation and deflation:
Inflate to a data format desired for validation.
Deflate to a string format for presenting in HTML.
Modify the 'default' provided by an 'item' or 'init_object'.
Modify the value returned by
$form->value.
Another way of providing a deflation method.
Another way of providing an inflation method.
Normally if you have a deflation, you will need a matching inflation. There are two different flavors of inflation/deflation: one for inflating values to a format needed for validation and deflating for output, the other for inflating the initial provided values (usually from a database row) and deflating them for the 'values' returned.
See HTML::FormHandler::Manual::InflationDeflation.
This is the base class validation routine. Most users will not do anything with this. It might be useful for method modifiers, if you want code that executed before or after the validation process.
This field method can be used in addition to or instead of 'apply' actions in custom field classes. It should validate the field data and set error messages on errors with
$field->add_error.
sub validate { my $field = shift; my $value = $field->value; return $field->add_error( ... ) if ( ... ); }
Supply a coderef (which will be a method on the field) with 'validate_method' or the name of a form method with 'set_validate' (which will be a method on the form). If not specified and a form method with a name of
validate_<field_name> exists, it will be used.
Periods in field names will be replaced by underscores, so that the field 'addresses.city' will use the 'validate_addresses_city' method for validation.
has_field 'my_foo' => ( validate_method => \&my_foo_validation ); sub my_foo_validation { ... } has_field 'title' => ( isa => 'Str', set_validate => 'check_title' );
FormHandler Contributors - see HTML::FormHandler
This software is copyright (c) 2012 by Gerda Shank.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~gshank/HTML-FormHandler-0.40016/lib/HTML/FormHandler/Field.pm | CC-MAIN-2017-04 | refinedweb | 2,292 | 56.55 |
See also: IRC log
<chris> anybody know the callin number and passcode?
<kinetik> Zakim: IPcaller.a is kinetik
<quinnirill> case sensitive :)
<kinetik> zakim: IPcaller.a is kinetik
<chris> phone number?
<tmichel> *Bridge US:* +1-617-761-6200 (Zakim)
<kinetik> thanks
<quinnirill> np
<chris> tmichel: thanks - and the code?
<tmichel> the passcode is 26631#
Al is talking about the different APIs we have and asking to open up discussion.
<chris> thanks :)
Jo suggests a useful starting point might be to ask a question about implementation.
<tmichel> chris has joined ?
<chris> yes
Jo suggests we need to consider the use cases - streaming and synthesis.
Chris Rogers joins and asks for a brief summary.
Jo asks whether a specific concrete implementation could serve the two different use cases.
Jo thinks the APIs can be made to coincide. But can an implementation work in both worlds.
Chris Rogers says, I recently posted some example code integrating the audio API with the streaming API / Web RTC implementation.
Chris R says there has to be some point of integration between these different APIs.
The public web rtc implementation looks good to Chris for setting up p2p connections and dealing with them at a high level, and he's given some examples in the last couple of days showing how the web audio API could work with it.
He thinks the implementation would be possible in the next couple of months in Chrome.
The distinction is the objects don't have to be the same.
Joe asks what the arguments are for for unification.
Chris Rogers says that a unified API would be difficult to develop the larger it gets (adding audio event scheduling to the p2p api for example)
Namespace collision, fragile base class problem and so on.
Al asks roc for his ideas.
roc says he's addressed quite a lot of issues by email, asks whether to recap.
Joe would like roc to talk a little about Chris's suggestions.
quinnirill: I think that's JoeB
<Alistair> that was JoeB
<quinnirill> yeah, just trying to identify who is echoing :)
roc says we don't yet have a concrete namespace collision problem.
<quinnirill> ty :)
roc can't remember an occasion when we've split objects on the web to make them simpler
roc, but it's hard to be sure of how things will evolve.
Al says it seems that we have different apis for different use cases
Audio API for games, synthesis and so on.
And RTC for streaming.
It seems to make sense for RTC to have some audio features, and it would make sense to see if we could share some core features.
Maybe a basic audio spec for mixing and panning that we could share, then pass the output of that to a seperate mixing level.
But that might lead to redundency
Chris is talking about Al's questions about controlling the audio of each tab.
For mixing and panning Chris says that's the bread-and-butter of the Audio API.
It's able to take and mix / change the volume and pan from a number of sources, including the web rtc apis/
Al is asking whether it would make sense to split the API into two differnent APIs, while recognising that that runs contrary to roc's opinion.
He says it would make sense to share functionality at some level.
JoeB is asking that if the mixing part of the API is simple, why can't it just be spliced in to the audio path at the point where it's needed
Al understands the argument, clarifies that the split might not be in code necessarily but a split in the standardisation effort.
Al has talked to musicians and they would like the synthesis/effects pinned down to the sample accuracy.
But because we have the push from RTC for communications in the browser, and could use things like compression and noise reduction, it seems like we could take the set of most-used features and put that in one deliverable
and the more complicated features a bit further out.
Chris Rogers(?) thinks that phasing the support levels of APIs in general is a great idea.
Chris doesn't think we need to standardise to the sample level when we role out, just as web graphics specs don't specify rendering down to the pixel level.
roc - authors don't generally care about graphics anti-aliasing in general, whereas musicians maybe do care as Al points out.
If we're producing things that sound the same we'll be ok, but if they're different then not.
Chris thinks we can produce something that sounds the same across implementations. Most things are specified to the sample level in the Web Audio API (except perhaps the compression stuff which is however following established principles. There's no agreed 'standard' for compressors)
Al thinks the point about starting out without full precision is good one.
Chris says that the impulse response of the convolution filters exactly defines how those effects work.
And mixing and panning, and filters can all be specified precisely in a mathematical sense
Al asks roc how far he's thought about rendering audio and what the signal path would be.
roc - wants to integrate the framework with media elements, that's what he's working on now.
Then he wants to test it with hundreds of streams at the same time and see what kind of performance he can get.
Al - that would be interesting to see.
Al asks roc to talk about pulling the audio api into the streaming api.
roc doesn't have a lot of experience with effects, and would like to see the effects made available in a common proposal.
roc's looking at a simple mixing effect at the moment.
Al asks Chris Rogers about copyright issues around his API
Chris says it's all open source.
Chris would like to see some more agreement with roc to take the Web Audio code as is and move it into the Gecko code base to integrate it there.
As an alternative to roc working to reimplement from scratch.
roc says most of the effort so far has been on syncronisation issues, blocking issues and so on.
And they'll remain issues even if we take a bridging approach.
Roc - does Web Audio have the ability to notice that streams have stopped?
Chris - is the issue around syncronising audio elements in the case of buffer underruns.
Roc - it's also about syncing filters and mixers and so on, so you don't filter silence while you're waiting for rebuffering.
Chris - if it's only going to pause for a second or so you can keep running the filter even with silence going through it.
Roc - but that might not work if there's an element that you're waiting for.
Al asks if we can add handlers for these things and let developers worry about it.
Roc - you have to handle it in real time which is very tricky, and is better handled by the browser.
Chris thinks that that kind of work needs to be done in the browser, is not sure about the syncing of effects though. Would like to discuss that further later.
Chris - we're talking about streaming html element streams, a kind of stream that can be blocked and we could work on the html media element apis to allow syncronisation of streams.
Roc mentions the media controller proposal which does some of that.
JoeB - by bridging the audio graph to the rtc api then the developer would have control of how granular they wanted to handle the blocking and syncronisation issues.
Chris agrees. In the audio context there is no "blocking" everything is a continuos stream, which may be silent for periods.
Chris thinks the syncing should be in the HTML element or the controller proposal (with which he's not so familiar)
<roc>
(the proposal roc mentioned at 20:51)
Joe is talking about treating the whole graph as something that blocks.
Chris proposes that if the blocking and syncing is handled externally, and if something is blocked it just inputs silence into the audio API graph, with no notion of transmitting blocking information to the audio api.
The audio api doesn't need to care about it.
Roc mentions that if you want audio to be in sync with video that's not what you want.
<roc> oops dropped off
Chris disagrees - it doesn't mean that we'd lose syncronisation, it would just process silence.
<roc> yeah I'm back
<tmichel> RRSagent help
<quinnirill> tmichel: publish minutes maybe?
tmichel: is that what you wanted to do?
<quinnirill> haha, that's an interesting command
Chris - imagine you have a video tag with transport controlls. The audio is going through a reverb. If the video freezes the audio stops but the tail of the reverb carries on.
Chris - when you hit play again, the audio would go back through the reverb.
<tmichel> right I wanted to make the minutes public. thanks.
If you didn't want the reverb tail to sound, you could use javascript to turn off the reverb at the point that "pause" was pressed.
tmichel: I'll publish them at the end of the call.
<tmichel> THey are already been published ...
tmichel: yeah, I think we need to add some metadata at the end.
roc so the latency of the filter isn't an issue?
chris - for the most part no, the effects built in the audio api don't have a latency.
chris sometimes with a delay you'd mix wet and dry, that's not latency as such - it's part of the effect.
Al closes the meeting for today. Let's keep the discussion going on the thread.
<scribe> Scribe: Chris Lowis
<scribe> ScribeNick: chrislo
Alistair: still there?
<Alistair> yes
<Alistair> chrislo: thanks so much for scribing, i really appreciated itr
This is scribe.perl Revision: 1.136 of Date: 2011/05/12 12:01:43 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: Chris Lowis Found ScribeNick: chrislo WARNING: No "Topic:" lines found. Default Present: [IPcaller], +1.978.314.aaaa, chrislo, quinnirill Present: [IPcaller] +1.978.314.aaaa chrislo quinnirill Got date from IRC log name: 18 Oct 2011 Guessing minutes URL: People with action items: WARNING: Possible internal error: join/leave lines remaining: <tmichel> chris has joined ? WARNING: Possible internal error: join/leave lines remaining: <tmichel> chris has joined ?] | http://www.w3.org/2011/10/18-audio-minutes.html | CC-MAIN-2016-30 | refinedweb | 1,757 | 71.14 |
Microcontroller Programming » Python Help
Dear all,
I am trying to create a GUI for a project i want to do. In theory its really simple, I just want to be able to change the intensity of a high powered LED using PWM by changing the value of the OCR1BL register from 0 to 255.
The thing is I know absolutely nothing about python. So could anyone recommend me the following:
Many thanks
Kemil
So the first question is do you have the microcontroller part using PWM and say a potentiometer working, forgetting python?
Then where does the GUI come in? Do you want to have a graphical slider that you move on the pc that would brighten the led connected to the micro.
There are threads here in the forum on setting up python. As far as how to setup the GUI go to the python forums.
You will get other replies here, but to get in-depth help you will probably need to go to a python-specific forum.
Ralph
Hi Ralph,
Yes - 'a graphical slider that you move on the pc that would brighten the led connected to the micro' is exactly what im after.
I am confident i have the hardware side of things set up. As im using High power LEDs i have to use some LED drivers which are connected to a power supply unit. The LED driver has a dim connection which is where the microcontroller with its PWM port comes into play.
The problem is i dont know how to interface the MCU with my PC running the python GUI (or how to make the GUI itself). How does the python script update the OCR1BL register on the MCU? i guess im looking for a blow-by-blow explanation of the interaction between the two.
Thanks
Kemil
I think I started out with the python code for the Weight Scale and the
PS2 Keyboard Tutorial. The weight scale shows you how to show a graph using Python and the PS2 Keyboard shows you how to talk to a micro using UART.
I am not a Python programmer but those two projects should help you to get going.
Building a GUI is a very interesting idea so please keep us posted on your progress.
Ralph
Hi Kemil,
Here are a few things to get you started.
Python Download site. I am pointing you directly to the python 2.6 download which is the latest version that is still compatible with PyGame
PyGame download site. PyGame is a module for Python that makes it easier to write GUIs. It's one of our favorites although there are others out there. Make sure you get Python.
The PyGame website has a whole bunch of tutorials that are great to get started, and their documentation is very usable.
Definitely do take a look at the links Ralph pointed you to.
Humberto
Thanks Humberto,
Can i just confirm, before i start spending time building this GUI, that it is theoretically possible to change the brightness on an LED through a GUI on my PC, whilst the microcontroller is running?
It would be absolutely possible. The GUI on your PC would just output serial data to the micro that it would interpret to the various brightness levels.
Rick
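A minimal sketch of that PC-side idea, no hardware needed to try it (pyserial is assumed to be installed for the real thing; the port name "COM4" and the helper name `frame_brightness` are just placeholders for illustration):

```python
# Sketch: turn a 0-255 slider value into one line of ASCII that the
# micro can parse. The serial port details are assumptions, not requirements.

def frame_brightness(value):
    """Clamp a slider value to 0-255 and frame it as an ASCII line."""
    value = max(0, min(255, int(value)))
    return ("%d\n" % value).encode("ascii")

# With real hardware you would open the port once and write frames to it:
# import serial
# ser = serial.Serial("COM4", 115200)          # port name is an assumption
# ser.write(frame_brightness(slider_value))

print(frame_brightness(300))   # out-of-range input gets clamped to 255
```

Keeping the clamping and framing in one small function like this makes it easy to test without the micro attached.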
Does anyone know any good tutorials on how a python script running on the pc interfaces with the microcontroller?
kemil
Hi all,
Ive just been playing around with the tempsensor project to get a feel for how serial connections work and have run into a problem. Using Putty (im using windows vista) I can get the temperature reading to be displayed on my PC. However when it comes to using the Python script from the meat thermometer tutorial it doesn't work properly.

Im still trying to learn python and am not good enough yet to figure out the problem.
This is what my command prompt shows when i initially run pc-pygame.py
As you can see some temperature values come up but the graph remains unaltered.
Then after about a minute this comes up on my command prompt
I assume (although i have no idea) that there is something wrong in the python code because i can get values on putty. However, I got the python code straight from the nerdkits website and only changed the line:

self.s = serial.Serial("COM4", 115200) (changed version)
As this is the port i am using and the baud rate, as stated in the servo tutorial, should be 115200.
Any Ideas of what is going on anyone?
Many Thanks
Hi all,
This is where i am up to with my little project.
I have written the python code which produces a little GUI shown in the photo below.
As you can see the slider goes from 0 to 255, which corresponds to the different OCR1BL levels, which will determine the PWM duty cycle and hence the brightness of the LED.
On the command prompt in the same picture you can see that i have managed to print out the values which show up on the slider. My thinking is that i could just put this variable called brightness into the piece of code:
ser = serial.Serial("COM4", 115200, timeout=1)
time.sleep(1.5)
ser.write(brightness)
which would send it to the microcontroller.
I got this from the web but not sure if its this simple.
I am now stuck on how i am meant to set OCR1BL equal to brightness whilst the microcontroller is running. Am i meant to use an interrupt? If so, how is this done?
Cheers
You have gotten pretty far, now is the time you get into the real meat of your project. Varying the brightness of an LED through PWM is not a very hard thing to do, but wrapping your head around some of the concepts can be tricky. You will have to structure your program in such a way that the main loop reads a new value from the serial port, and then simply sets the value of the PWM duty cycle right after. No interrupts should be necessary unless you want to do other things while waiting for the serial value to be read. Hope that helps.
Thanks Humberto,
I think my problem is i dont conceptually understand what is happening to the data as it is sent over the serial port, i.e what form is the data sent in etc.
the code im am using to send and receive my data is as follows:
On the python side i have the lines
ser = serial.Serial("COM4", 115200)
ser.write(brightness)
which im hoping is sending the vairable 'brightness' over the serial port
Whilst the code on the MCU is:
int main(void)
{
  volatile uint8_t brightness;

  DDRB |= (1<<PB2);
  TCCR1A = (1<<COM1B1)|(1<<1);
  TCCR1B = 1;
  OCR1BH = 0;

  uart_init();
  FILE uart_stream = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);
  stdin = stdout = &uart_stream;

  while(1) {
    scanf_P(PSTR("%u"), &brightness);
    OCR1BL = brightness;
  }
  return 0;
}
This, i hope, is receiving the data and assigning the 'brightness' variable to OCR1BL.
However, the code above doesnt work, and im not sure why not.
As you can see ive set up the PWM register and declared an unsigned integer called brightness, and i think i have set up the serial port with the uart_init(); command and the 2 lines underneath.
Then in the while loop i use the scanf command to capture the data coming over the serial port... im pretty sure this part is wrong... but i dont know how to make it right.
Anyone have any ideas of what the MCU code should look like?
Thanks
I have modified the C code. I can get the LED to turn on using the scroller but it does not change the intensity.
new C code is:
int main(void)
{
  uart_init();
  FILE uart_stream = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);
  stdin = stdout = &uart_stream;

  char brightness;

  DDRB |= (1<<PB2);
  TCCR1A = (1<<COM1B1)|(1<<1);
  TCCR1B = 1;
  OCR1BH = 0;

  while(1) {
    brightness = uart_read();
    OCR1BL = brightness;
  }
  return 0;
}
GOT IT TO WORK!
for all those who are interested. i altered the python code bit to:
ser = serial.Serial("COM4", 115200)
ser.write(str(int(brightness)))
ser.write("\n")
and the c code is
int main(void)
{
  uart_init();
  FILE uart_stream = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);
  stdin = stdout = &uart_stream;

  int brightness;
  int16_t newbright;

  DDRB |= (1<<PB2);
  TCCR1A = (1<<COM1B1)|(1<<1);
  TCCR1B = 1;
  OCR1BH = 0;

  while(1) {
    //brightness = uart_read();
    newbright = scanf_P(PSTR("%d"), &brightness);
    OCR1BL = brightness;
  }
  return 0;
}
If anyone wants to optimise it or suggest any improvements then be my guest.
kemil
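For anyone wondering why the `str(int(brightness))` plus `"\n"` framing works: scanf_P with "%d" on the micro consumes ASCII digit characters and stops at whitespace, so each slider value has to arrive as decimal digits terminated by a newline. A quick way to convince yourself without hardware (the helper names here are made up for illustration):

```python
# Sketch of why str(int(brightness)) + "\n" works: the AVR's scanf_P("%d")
# reads ASCII digit characters until it hits whitespace, so each slider move
# must arrive as decimal digits terminated by a newline.

def frames_for(values):
    """Build the exact byte stream the Python GUI would send for a list of moves."""
    return b"".join(("%d\n" % v).encode("ascii") for v in values)

def parse_like_scanf(stream):
    """Split on whitespace and convert each token, roughly as repeated %d would."""
    return [int(tok.decode("ascii")) for tok in stream.split()]

stream = frames_for([0, 128, 255])
assert parse_like_scanf(stream) == [0, 128, 255]
```

This is also why the bare `ser.write(brightness)` attempt failed: pyserial needs the value as characters on the wire, not a raw Python integer.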
Thanks kemil, that will come in handy one of these days. Would you post a link to your whole project, or at least the basic python code to make the slider, the serial communications and then the mcu code?
I do not understand your logic of passing the variable "brightness" as a string instead of just passing the brightness value, but you got it to work, that's great.
Is it possible to send and receive data over the serial port at the same time? I want to combine the Light dimmer widget with the temp sensor project, but this would require me to send data to the MCU whilst also reading from it..
Anyone know how i should go about doing this?
Isn't that what USART does?
or is it UART that does the simultaneous communications.
Hi Kemil,
Ralph is right, you should be able to read and write through the serial port at the same time with no problem. On the computer end you will have to make sure the same process is doing both the reading and the writing, as only one program can keep the device open at a time, but other than that it should be straightforward.
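One way to picture that on the PC side: a single port object owned by one process, used for both directions. The `FakeSerial` class below is a stand-in for `serial.Serial` so the sketch runs without hardware — it just echoes bytes back. With real hardware you would replace it with one shared `serial.Serial(...)` instance:

```python
# One process, one port object, both directions. FakeSerial stands in for
# serial.Serial so this runs without hardware; it simply echoes what it got,
# playing the role of a micro that replies to each command.

class FakeSerial(object):
    def __init__(self):
        self._buffer = b""

    def write(self, data):
        # A real micro would parse this and reply; the echo fakes that reply.
        self._buffer += data

    def readline(self):
        line, _, rest = self._buffer.partition(b"\n")
        self._buffer = rest
        return line + b"\n"

ser = FakeSerial()        # with hardware: serial.Serial("COM4", 115200, timeout=1)
ser.write(b"42\n")        # send a threshold down to the micro...
reply = ser.readline()    # ...and read the response on the very same object
assert reply == b"42\n"
```

The point is that the write and the read go through the same open handle, rather than two programs (or two scripts) each trying to open COM4.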
Hi Humberto,
When u guys say use USART instead, does this mean i have to do something similar to what u have done with UART in the libnerdkits folder? has anyone written a function similar to that of uart_init() but for USART... even after reading the data sheet i dont know what bits on what registers to turn on and off in order to do what i want.
Thanks
Kemill
Just tried it with the uart_init from libnerdkits and it works, but i have a bigger problem.
Im using a modified version of the tempsensor code where ive just added an if statement which turns on an led if the temp is above a certain value.
Ive written a short python code which allows me to change the minimum temperature which turns on the led:
(the python code)
def callback_origin():
    data_inp_1 = enter_data_1.get()
    label_2_far_right = Label(root, text=data_inp_1)
    label_2_far_right.grid(row=1, column=3)
    ser.write(str(int(data_inp_1)))
    ser.write("\n")
    x = ser.readline()
    print x

but1 = Button(root, text="press to transfer",
              command=callback_origin).grid(row=5, column=0)

root.mainloop()
This kind of worked, but it wouldn't give me the continuous stream of data i was looking for (i was able to change the data_inp_1 value in the if statement, if (temp_avg > data_inp_1), though, which is good)... instead it only gave me the temp reading when i pressed the button... i think this is because the print command is only called when the button is pushed.
so i tried moving the ser.readline and the print x commands but doing this made the gui not even come up!
I looked through the pc-pygame python code for the meat thermometer project and it uses the threading module... im not sure how to implement this module and i have a hunch its what i need to do. i tried playing around with it to no avail...
Does anyone know what i need to do to be able to print out a continuous stream of temperatures?
Kemill
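The pc-pygame code mentioned above uses exactly this pattern: a background thread blocks on readline() and hands each line to the rest of the program through a queue, so the GUI loop never blocks. A minimal version of that idea — `FakeLines` is a made-up stand-in for the pyserial port so the sketch runs anywhere:

```python
# Sketch: a reader thread blocks on readline() and queues each line;
# the GUI thread can then poll the queue whenever it likes without
# ever blocking. FakeLines stands in for serial.Serial here.
import threading

try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2, as used in this thread

class FakeLines(object):
    """Stand-in for serial.Serial: yields a few lines, then empty strings."""
    def __init__(self, lines):
        self._lines = list(lines)
    def readline(self):
        return self._lines.pop(0) if self._lines else b""

def reader(port, out_queue):
    while True:
        line = port.readline()
        if not line:            # fake source exhausted; a real port would block
            break
        out_queue.put(line.strip())

port = FakeLines([b"75.2 degrees F\r\n", b"75.4 degrees F\r\n"])
q = queue.Queue()
t = threading.Thread(target=reader, args=(port, q))
t.start()
t.join()

readings = [q.get() for _ in range(q.qsize())]
assert readings == [b"75.2 degrees F", b"75.4 degrees F"]
```

In the real GUI you would start the thread once, let it run for the life of the program, and have the Tkinter/pygame loop call q.get() (or check q.empty()) on each pass.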
UPDATE:
I think ive narrowed down the problem... its definitely something to do with my code, shown below: i have changed my python code so i just print out the temp values. This ONLY works successfully when i comment out the scanf_P line in my code. When i uncomment it and run the same code it doesn't work.
The question is how do i simaltaneously send and receive data from the mcu.. whos got the code to do so?
;
int data_inp_1;

// holder variables for temperature data
uint16_t last_sample = 0;
double this_temp;
double temp_avg;
uint8_t i;

while(1) {
  // take 100 samples and average them!
  ADMUX = 0;
  temp_avg = 0.0;
  for(i=0; i<100; i++) {
    last_sample = adc_read();
    this_temp = sampleToFahrenheit(last_sample);
    // add this contribution to the average
    temp_avg = temp_avg + this_temp/100.0;
  }

  if (temp_avg > data_inp_1) { //|| temp_avg < 85){
    // LED as output
    DDRB |= (1<<PB4);
    // turn on LED
    PORTB |= (1<<PB4);
  } else {
    // turn off LED
    PORTB &= ~(1<<PB4);
  }

  // write message to LCD
  //lcd_home();
  //lcd_write_string(PSTR("ADC: "));
  //lcd_write_int16(last_sample);
  //lcd_write_string(PSTR(" of 1024 "));
  //lcd_line_two();
  //fprintf_P(&lcd_stream, PSTR("Temperature: %.2f"), (temp_avg-32)*.5556);
  //lcd_write_data(0xdf);
  //lcd_write_string(PSTR("C "));

  // write message to serial port
  printf_P(PSTR("%.2f degrees F\r\n"), temp_avg);
  scanf_P(PSTR("%d"), &data_inp_1);
}
return 0;
}
Hi kemil,
scanf is a blocking I/O call, which means the code will wait to read something before moving on. This is probably what is causing your confusion. What you can probably do is use the uart_char_is_waiting() function of uart.c to check whether there is anything waiting on the serial port, and only call scanf if there is something to read.
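The same check-before-read idea applies on the PC side too. A sketch in plain Python, with a Queue standing in for the serial port so the snippet runs anywhere (in real pyserial code, ser.inWaiting() plays the role of uart_char_is_waiting()):

```python
try:
    import Queue as queue   # Python 2
except ImportError:
    import queue            # Python 3

inbox = queue.Queue()
inbox.put("72.25")          # pretend one temperature reading arrived

def poll(q):
    """Return the next waiting item, or None; never blocks."""
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

print(poll(inbox))          # -> 72.25 (something was waiting)
print(poll(inbox))          # -> None (nothing waiting, and we did not block)
```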
That solved one of the problems, but now I have another! As previously mentioned, I want to be able to send commands to the MCU while at the same time getting temp readings from it. The thing is, I can't seem to get a continuous stream of data from it unless I use an infinite while loop, and it seems that if I do that I can no longer change the temperature range (I have a little function which just turns on an LED if the temperature is above a certain value, which I can adjust through my small GUI). I need to be able to continuously read the serial data while doing other things.
Thanks Kemil
Here's my python code:
import thread   # (the rest of this setup block was lost in the post; a
                # pyserial line ending in "timeout=1)" followed)
ser = serial.Serial(..., timeout=1)

#class FeederThread(threading.Thread):
#    def run(self):
def callback_origin():
    #while 1:
    data_inp_1 = enter_data_1.get()
    label_2_far_right = Label(root, text=data_inp_1)
    label_2_far_right.grid(row=1, column=3)
    ser.write(str(int(data_inp_1)))
    ser.write("\n")
    prevVal = None
    # Read the serial value
    ser.flushInput()
    serialValue = ser.readline().strip()
    # Catch any bad serial data:
    try:
        if serialValue != prevVal:
            # Print the value if it differs from the prevVal:
            print "New Val: ", serialValue
            prevVal = serialValue
    except ValueError:
        pass

but1 = Button(root, text="press to transfer",
              command=callback_origin).grid(row=5, column=0)
root.mainloop()
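For reference, the threading approach kemil hints at can be sketched like this: a background thread does the blocking reads and hands lines to the GUI thread through a queue. A short list of fake readings stands in for ser.readline(), so the sketch runs without any hardware:

```python
import threading
try:
    import Queue as queue   # Python 2
except ImportError:
    import queue            # Python 3

def reader(source, q):
    """Background thread: do the blocking reads here, never in the GUI."""
    for line in source:
        q.put(line)

fake_serial = ["71.50", "71.75", "72.00"]   # stand-in for ser.readline()
q = queue.Queue()
t = threading.Thread(target=reader, args=(fake_serial, q))
t.daemon = True            # don't keep the program alive for this thread
t.start()
t.join()                   # only for this demo; a GUI would poll instead

readings = [q.get() for _ in range(3)]
print(readings)            # -> ['71.50', '71.75', '72.00']
```

In the actual Tkinter program, the main thread would drain the queue with q.get_nowait() from a root.after(...) callback instead of calling t.join().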
Opened 16 months ago
Closed 10 months ago
#21597 closed Bug (wontfix)
(2006, 'MySQL server has gone away') in django1.6 when wait_timeout passed
Description (last modified by aaugustin)
EDIT -- THE SOLUTION TO THIS PROBLEM IS EXPLAINED IN COMMENT 29. JUST DO WHAT IT SAYS. THANK YOU!
In Django 1.6, when MySQL's wait_timeout has passed, DB access causes the (2006, 'MySQL server has gone away') error.
This was not the case in django 1.5.1
I've noticed this error when using workers that run the django code (using gearman).
To reproduce:
Set the timeout to low value by editing /etc/mysql/my.cnf
add the following under [mysqld]
wait_timeout = 10
interactive_timeout = 10
Then
% python manage.py shell
>>> # access DB
>>> import django.contrib.auth.models
>>> print list(django.contrib.auth.models.User.objects.all())
>>> import time
>>> time.sleep(15)
>>> print list(django.contrib.auth.models.User.objects.all())
Now you get the error.
Simple solution I found on the web is to call django.db.close_connection() before the access
>>> import django.db
>>> django.db.close_connection()
>>> print list(django.contrib.auth.models.User.objects.all())
works ok.
Attachments (1)
Change History (45)
Changed 16 months ago by anonymous
comment:1 Changed 16 months ago by aaugustin
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 16 months ago by ekeydar@…
Not using this.
comment:3 Changed 16 months ago by aaugustin
This might be a side effect of using autocommit. Since Django 1.5 kept a transaction open in that case, MySQL couldn't close the connection. Now it can.
Could you try the following snippet to confirm my hypothesis?
>>> import django.contrib.auth.models
>>> from django.db import transaction
>>> with transaction.atomic():
...     print list(django.contrib.auth.models.User.objects.all())
...     import time
...     time.sleep(15)
...     print list(django.contrib.auth.models.User.objects.all())
You shouldn't get a timeout when you run this on Django 1.6.
comment:4 Changed 16 months ago by anonymous
I ran into the same issue after upgrading to Django 1.6. In my case it affects long-running (multiple days) WorkerThreads, although I am not using any wait_timeout setting. It looks like the connection just times out without Django noticing/handling it.
Code which is triggering this:
cursor = connections['legacydb'].cursor()
cursor.execute(query)
result = cursor.fetchall()
cursor.close()
The above code works for a few hours and after a while the OperationalError 2006 is triggered on cursor.execute(). This behavior was not present in early Django versions and is a critical regression (aka: code which used to work now raises an Exception → severity should be set to release blocker).
BTW: The legacydb in the example does not support transactions (MyISAM).
comment:5 Changed 16 months ago by claudep
AFAIK the default timeout on MySQL is 28800 seconds (8 hours).
comment:6 Changed 16 months ago by claudep
- Triage Stage changed from Unreviewed to Accepted
comment:7 Changed 16 months ago by claudep
Query comparison (as in OP example, without using transaction.atomic()):
comment:8 Changed 16 months ago by aigarius
Could this actually be a failure to re-establish a connection to MySQL after receiving the 2006 error? I get the same symptoms in production now when a MySQL server is restarted - all workers start getting 2006 errors and do not recover. Only gunicorn restart helps then. You could reproduce it locally by calling "print list(django.contrib.auth.models.User.objects.all())" every second and then restarting MySQL server. The connection does not recover even after the server is back up and accepting connections. This was tested with CONN_MAX_AGE > 0 but lower than wait_timeout on the MySQL server.
comment:9 Changed 16 months ago by aigarius
For now we are using the following workaround for long-running workers:
from django.db import connection

...

def is_connection_usable():
    try:
        connection.connection.ping()
    except:
        return False
    else:
        return True

...

def do_work():
    while(True): # Endless loop that keeps the worker going (simplified)
        if not is_connection_usable():
            connection.close()
        try:
            do_a_bit_of_work()
        except:
            logger.exception("Something bad happened, trying again")
        sleep(1)
comment:10 Changed 15 months ago by err
Maybe we can validate that connection is usable in ensure_connection ?
something like this:
*** Django-1.6.1/django/db/backends/__init__.py    2014-01-23 16:57:15.927687924 +0400
--- /usr/local/lib/python2.7/dist-packages/django/db/backends/__init__.py    2014-01-23 16:56:21.000000000 +0400
***************
*** 119,125 ****
      """
      Guarantees that a connection to the database is established.
      """
!     if self.connection is None or not self.is_usable():
          with self.wrap_database_errors:
              self.connect()
--- 119,125 ----
      """
      Guarantees that a connection to the database is established.
      """
!     if self.connection is None:
          with self.wrap_database_errors:
              self.connect()
are there any caveats?
comment:11 Changed 15 months ago by err
or even close connection (if not usable) in ensure_connection
comment:12 Changed 14 months ago by andreis
Hi,
to me it seems to be a bug. Old Django would close every connection right away; Django 1.6 checks against CONN_MAX_AGE:
- It gets CONN_MAX_AGE from DATABASES, sets close_at:
max_age = self.settings_dict['CONN_MAX_AGE']
self.close_at = None if max_age is None else time.time() + max_age
- Actually the code above affects close_if_unusable_or_obsolete, which closes the connection if 'self.close_at is not None and time.time() >= self.close_at'
- close_if_unusable_or_obsolete itself is being called by close_old_connections, which in turn is a request handler for signals.request_started and signals.request_finished.
We have a worker, which is effectively a Django app but it doesn't process any HTTP requests. In fact that makes all connections persistent, because close_old_connections never gets called.
Please advise.
Thanks
comment:13 Changed 14 months ago by jeroen.pulles@…
Hi,
Without transactions you hit the Gone Away if the sleep is longer than MySQL's wait_timeout:
mysql> set global wait_timeout=10;
>>> import django.contrib.auth.models
>>> import time
>>> print list(django.contrib.auth.models.User.objects.all())
>>> time.sleep(15)
>>> print list(django.contrib.auth.models.User.objects.all())
According to MySQL/python documentation this should not be a problem. If you add the INTERACTIVE bit to the client connection flags in db/backends/mysql/base.py, you regain the MySQL drivers' auto-reconnect feature and everything works as before (I think that in Django 1.5 you ran into trouble with a transaction that ran longer than wait_timeout too).
from:
    kwargs['client_flag'] = CLIENT.FOUND_ROWS
to:
    kwargs['client_flag'] = CLIENT.FOUND_ROWS | CLIENT.INTERACTIVE
I haven't looked into the origins of this line, but maybe it is the real culprit for the recent Gone Away issues.
comment:14 Changed 14 months ago by andreis
Hi Jeroen!
It seems like adding CLIENT.INTERACTIVE flag just tells the driver to switch from checking on wait_timeout to taking interactive_timeout into account. I set interactive_timeout=10 and was able to reproduce this problem.
Both of these values are 8 hours by default, but once your code has been inactive for that long, mysql drops the connection and the client fails next time it tries to access some data. It looks perfectly right to catch this error in the code, call django.db.close_connection() every time or whatever, but I think that maybe connection persistence logic needs a bit of fine-tuning so that we can control persistence without relying on signals.request_started/request_finished.
comment:15 Changed 13 months ago by andreis
Hey folks! Any thoughts on this matter?
Thanks
comment:16 Changed 13 months ago by jeroen.pulles@…
I've checked with my existing long running processes on Django 1.5 installations, running the same mysql-python and libmysqlclient.so, with tcpdump:
They do set the interactive flag on the MySQL connection. That explains for me why I never experienced the Gone Away's before. Django doesn't notice that the underlying connection went away and came back.
I haven't had enough time to find out (dig down deep enough) what makes this flag appear on the connection in the 1.5 situation and what changed in 1.6 that is relevant to this problem. (a) My suspicion is that it isn't directly related to the connection persistence mechanism. (b) To me it doesn't seem to be in any way related to the transactions mechanisms: My transactions happen fast enough and it's fine that things break if the transaction takes longer than wait_timeout (e.g. more than two minutes); The same application that works fine in 1.5 also works in 1.6 with the interactive flag set.
JeroenP
comment:17 Changed 13 months ago by err
- Owner changed from nobody to err
- Status changed from new to assigned
comment:18 Changed 13 months ago by err
I've submitted a pull request.
comment:19 Changed 13 months ago by timo
- Cc timo added
- Has patch set
When I run the MySQL tests using djangocore-box, I get the traceback below when the test suite concludes. The PR above resolves this error.
Traceback (most recent call last):
  File "/django/tests/runtests.py", line 374, in <module>
    options.failfast, args)
  File "/django/tests/runtests.py", line 216, in django_tests
    test_labels or get_installed(), extra_tests=extra_tests)
  File "/django/django/test/runner.py", line 149, in run_tests
    self.teardown_databases(old_config)
  File "/django/django/test/runner.py", line 124, in teardown_databases
    connection.creation.destroy_test_db(old_name, self.verbosity)
  File "/django/django/db/backends/creation.py", line 452, in destroy_test_db
    self._destroy_test_db(test_database_name, verbosity)
  File "/django/django/db/backends/creation.py", line 466, in _destroy_test_db
    % self.connection.ops.quote_name(test_database_name))
  File "/django/django/db/backends/utils.py", line 59, in execute
    return self.cursor.execute(sql, params)
  File "/django/django/db/utils.py", line 94, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/django/django/db/backends/utils.py", line 57, in execute
    return self.cursor.execute(sql)
  File "/django/django/db/backends/mysql/base.py", line 128, in execute
    return self.cursor.execute(query, args)
  File "/home/vagrant/.virtualenvs/py2.7/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/home/vagrant/.virtualenvs/py2.7/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
django.db.utils.OperationalError: (2006, 'MySQL server has gone away')
comment:20 Changed 13 months ago by aaugustin
- Patch needs improvement set
I believe this patch breaks the transactional integrity guarantees made by transaction.atomic.
Django mustn't attempt to reconnect until the connection has been dropped properly, possibly after waiting for the exit of an atomic block.
comment:21 Changed 13 months ago by anonymous
A fix for this is still badly required, as this is a horrible regression. Please mark this bug as a 'release blocker', as it breaks existing code and makes upgrading to Django 1.6 impossible for quite a few people.
comment:22 follow-up: ↓ 24 Changed 12 months ago by aaugustin
- Severity changed from Normal to Release blocker
Unless I missed something, this only affects people that have management commands running for more than 8 hours (or MySQL's connection timeout if it has been changed from the default). Is that correct?
comment:23 Changed 12 months ago by andreis
Hi, this is absolutely true for me. However, I've managed to come up with a workaround: we've wrapped some of our code with decorators which implicitly call the close_old_connections() routine, so that Django now respects the CONN_MAX_AGE parameter to some extent.
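The decorator workaround described above can be sketched generically; here close_stale stands in for django.db.close_old_connections, so the snippet runs outside a Django project:

```python
import functools

def fresh_connection(close_stale):
    """Decorator factory: run `close_stale` (a stand-in for
    django.db.close_old_connections) before every call, mimicking
    what the request_started signal does for web requests."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            close_stale()
            return fn(*args, **kwargs)
        return wrapper
    return decorator

closed = []   # record each simulated connection cleanup

@fresh_connection(lambda: closed.append(1))
def handle_job(job):
    return "done: %s" % job

print(handle_job("resize-image"))   # -> done: resize-image
print(len(closed))                  # -> 1
```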
comment:24 in reply to: ↑ 22 Changed 12 months ago by anonymous
Unless I missed something, this only affects people that have management commands running for more than 8 hours (or MySQL's connection timeout if it has been changed from the default). Is that correct?
As far as I can tell I think this is correct. At least this is the place where I was hitting this bug in my environment (before downgrading to 1.5 again because of this).
comment:25 Changed 12 months ago by jazeps.basko@…
Maybe this helps someone like me who uses Django and SQLAlchemy. I was experiencing this problem every morning with an API (not used for several hours at night) which had CONN_MAX_AGE=3600 and a MySQL server with wait_timeout = 28800. I even changed the settings used by manage.py (to run migrations and collect static assets) to not use persistent connections (CONN_MAX_AGE=0), but the problem persisted. Then I noticed that this was actually happening in the code where SQLAlchemy accesses the DB, so I googled and found this post: I followed the instructions in the blog post. Now that my SQLAlchemy does not handle connections (it just uses whatever Django supplies), the problem is gone.
comment:26 Changed 12 months ago by matteius@…
Having the same problem in Django 1.6.3 in a process that is run indefinitely via a management command. I am attempting a build now where at the start of each loop I call:
db.close_old_connections()
We'll see if this solves the problem or not.
comment:27 Changed 12 months ago by aaugustin
My biggest problem is that I have no idea why this worked on 1.5.x :(
comment:28 Changed 12 months ago by aaugustin
- Owner changed from err to aaugustin
comment:29 Changed 12 months ago by aaugustin
- Resolution set to wontfix
- Status changed from assigned to closed
Actually this is the intended behavior after #15119. See that ticket for the rationale.
If you hit this problem and don't want to understand what's going on, don't reopen this ticket, just do this:
- RECOMMENDED SOLUTION: close the connection with from django.db import connection; connection.close() when you know that your program is going to be idle for a long time.
- CRAPPY SOLUTION: increase wait_timeout so it's longer than the maximum idle time of your program.
In this context, idle time is the time between two successive database queries.
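The recommended solution amounts to tracking idle time yourself and closing before a potentially stale query. A minimal sketch, with close standing in for django.db.connection.close and an injectable clock so it runs and tests without a database:

```python
import time

class ConnectionGuard:
    """Close the DB connection whenever the gap since the last query
    could have exceeded MySQL's wait_timeout; Django then reconnects
    lazily on the next query."""
    def __init__(self, close, wait_timeout, now=time.time):
        self.close = close
        self.wait_timeout = wait_timeout
        self.now = now
        self.last_use = self.now()

    def before_query(self):
        if self.now() - self.last_use > self.wait_timeout:
            self.close()          # drop the stale connection
        self.last_use = self.now()

# Simulate a worker idle for 9 hours against an 8-hour wait_timeout:
ticks = [0.0, 9 * 3600.0]         # construction time, then 9 hours later
def fake_clock():
    return ticks.pop(0) if len(ticks) > 1 else ticks[0]

closed = []
guard = ConnectionGuard(close=lambda: closed.append(1),
                        wait_timeout=8 * 3600, now=fake_clock)
guard.before_query()
print(len(closed))                # -> 1: the stale connection was closed
```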
comment:30 Changed 11 months ago by pembo13
Could someone give a little more context to this issue? I ran into it the very first morning after upgrading from 1.5.x to 1.6.x, I am guessing because the app was idle for a few hours (2am to 8am). I had 'CONN_MAX_AGE' set to 15mins, but have had wait_timeout and interactive_timeout set to 5mins.
So in my case, I don't exactly know when my program is going to be idle for a long time, though my wait_timeout didn't seem that low. I've preemptively adjusted my wait_timeout to 30 mins and my interactive_timeout to 60 mins, but a bit more explanation of the issue would be appreciated. I don't yet see the relation to #15119.
Also, this issue should probably be mentioned in the docs somewhere.
comment:31 Changed 11 months ago by jeroen.pulles@…
My preferred solution is to increase the wait_timeout to 86400 (24 hr) on sessions from processes that are long-lived. Otherwise I have to do a close before any blocking call to other systems, e.g. redis, which may block for a month or return in a split second. These blocking calls are mostly in loops; under load they are repeatedly called. I am not about to add a connection.close() call in those code paths. (And this is where the MySQL reconnect behavior worked fine: it only kicked in when there was a timeout.)
comment:32 Changed 11 months ago by victorgama
- Cc victorgama added
I just opened a pull request that may address this issue:
@aaugustin can you review it?
comment:33 Changed 11 months ago by aaugustin
I believe this patch breaks the transactional integrity guarantees made by transaction.atomic.
Django mustn't attempt to reconnect until the connection has been dropped properly, possibly after waiting for the exit of an atomic block.
(Yes, I've just copy-pasted comment 20, because that's the answer to all naive reconnection attempts. If this bug was that easy to fix, I would have done it.)
comment:34 Changed 10 months ago by pembo13
What is the correct solution for management commands that may take longer than the regular web queries? This issue was closed with more questions than answers.
comment:35 Changed 10 months ago by aaugustin
I already answered in comment 29.
If people read the answer instead of asking the same question again and again, it would remain more visible.
comment:36 Changed 10 months ago by aaugustin
comment:37 Changed 10 months ago by aaugustin
comment:38 Changed 10 months ago by germanoguerrini
I know I'll be hated for this, but it actually happens even for regular web queries. We installed 1.6.5 on one of our production servers yesterday and we received a hundred or so tracebacks from some of our views. I can't do the math, but it's probably less than 1%. Still, with 1.5 we had no issues.
To give you an idea, the problem arises both from select and insert statements. They could take maybe fractions of a second. I triple-checked the MySQL variables and wait_timeout is 28800, while connection_timeout is 10 seconds (which, by the way, should raise a Lost connection to MySQL server).
I tried switching from CONN_MAX_AGE = 0 to CONN_MAX_AGE = 2 and nothing changed, so maybe it's not related with the new persistent connection mechanism.
The traceback is slightly different from the one in comment 19:
Traceback (most recent call last):
  [...]
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/models/query.py", line 96, in __iter__
    self._fetch_all()
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/models/query.py", line 857, in _fetch_all
    self._result_cache = list(self.iterator())
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/models/query.py", line 220, in iterator
    for row in compiler.results_iter():
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 713, in results_iter
    for rows in self.execute_sql(MULTI):
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
    cursor.execute(sql, params)
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
    return self.cursor.execute(sql, params)
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/backends/util.py", line 53, in execute
    return self.cursor.execute(sql, params)
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 124, in execute
    return self.cursor.execute(query, args)
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
    self.errorhandler(self, exc, value)
  File "/home/django/VIRTUALENVS/multi/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
comment:39 Changed 10 months ago by aaugustin
- Resolution wontfix deleted
- Status changed from closed to new
That's a different issue and a good reason to reopen the ticket until we figure it out.
Do you have any way to check if MySQL drops connections while a view function is executing? In that case, Django 1.5 reopens a connection automatically (possibly breaking transactional integrity) while Django 1.6 raises an error.
comment:40 Changed 10 months ago by germanoguerrini
I was able to reproduce the error. First of all, while the global wait_timeout was 28800, the session value was just 30 seconds.
That means that if the time between two queries in the same view is longer than that, or if a view runs just one query but after 30 seconds from its invocation (for example because of a very long computation in a middleware or a forced sleep), it raises the infamous MySQL has gone away.
For example:
def my_view(request):
    import time
    time.sleep(31)
    print User.objects.get(pk=1)
    return HttpResponse()
and I think this is the expected behavior.
The problem is that we run a pretty sophisticated monitoring application (namely New Relic) and the longest trace I can see is a 3.5-second view, which was probably a glitch from MySQL (it took 3.1 seconds to select a user using his pk).
For now, we solved the most severe view error by closing the connection at the beginning of the view (forcing the creation of a new one), but I'll keep monitoring the situation and I'll get back to you, because I'm starting to think that it has to do with our particular configuration. Django 1.6 has been out for too long for us to be the first to notice that.
comment:41 Changed 10 months ago by giuliettamasina
- Cc markus.magnuson@… added
comment:42 Changed 10 months ago by aaugustin
- Severity changed from Release blocker to Normal
It has been suggested to call close_if_unusable_or_obsolete before handling each HTTP request. However, I don't think that makes much sense. If the connection to MySQL times out between two HTTP requests, you should either stop using persistent connections (with CONN_MAX_AGE = 0) or increase wait_timeout. I don't like the idea of adding overhead to Django to compensate for inconsistent settings.
What I don't understand is that you tried CONN_MAX_AGE = 0 and that didn't help. I fail to see why closing the connection before each request would help if closing it after each request does nothing. Theoretically closing after a request is a safer behavior than closing before the next request -- you never run into timeouts. Could you double-check your tests?
comment:43 Changed 10 months ago by germanoguerrini
We closed the connection at the beginning of the view, not at the beginning of the request. So, presumably, the timeout happened inside a middleware but it didn't bubble up before the first query in the view.
We ended up increasing wait_timeout (we kept it at a fairly low value as we had a large number of sleeping queries) and that solved the issue.
As far as I'm concerned I think that is exactly the expected behavior and as such the ticket can be closed.
comment:44 Changed 10 months ago by aaugustin
- Resolution set to wontfix
- Status changed from new to closed
Cool. Thanks for taking the time to investigate and report your results!
Are you using persistent connections (CONN_MAX_AGE > 0)? | https://code.djangoproject.com/ticket/21597 | CC-MAIN-2015-18 | refinedweb | 3,733 | 50.33 |
Hi A (which one is your first name???)

On Wed, Apr 25, 2007 at 08:17:18AM -0700, Anand Avati wrote:
> Hi Steffen,
> answers inline.

Thanks for your almost exhaustive answer, see my comments in-line as well :)

> > - The two example configs are a bit confusing. In particular, I suppose I
> >   don't have to assign different names to all 15 volumes? Different
> >   ports are only used to address a certain sub-server?
>
> are you referring to different protocol/client volume names in the
> client spec file? if so, yes, each volume for a server should have a
> separate name. there can only be one volume with a given name in a
> graph (read: spec file)

> > - This would mean I could use the same glusterfs-server.vol for all
> >   storage bricks?
>
> yes, the same glusterfs-server.vol can be used for all the servers.

... so all volumes on the servers can have the same name, since they are
referred to by the *client-side* volume definitions (volume brick${serverid};
option remote-host ${serverip}; option remote-subvolume brick). OK, that's
becoming clear now.

> > - The "all-in-one" configuration suggests that servers can be clients at
> >   the same time? (meaning, there's no real need to separately build
> >   server and client)
>
> the same machine can run the glusterfs server and the client.

Which is indeed very nice, and allows for uniform namespaces across the whole
cluster (and even for abusing compute nodes as storage bricks if necessary).

> > - The instructions to add a new brick (reproduce the directory tree with
> >   cpio) suggest that it would be possible to form a GluFS from already
> >   existing separate file servers, each holding part of the "greater
> >   truth", by building a unified directory tree (only partly populated)
> >   on each of them, then unifying them using GluFS. Am I right?
>
> you are right! [...]

[...] If that works, I could slowly grow the backup FS (use one brick until
it's nearly full, add another one ...)
This brings up another wishlist item: it should be possible to migrate all
data off one server (set the server "read-only", then read every file and
re-write it to another location)...

> > - Would it still be possible to access the underlying filesystems, using
> >   NFS with read-only export?
>
> will be possible.

This is great in particular for the transition phase.

> > - Is there a Debian/GNU version already available, or someone working
> >   on it?
>
> I recently saw a post about someone working on it -
> I'm in touch with Christian...

> > - Are there plans to implement "relaxed" RAID-1 by writing identical
> >   copies of the same file (the same way AFR does) to different servers?
>
> I do not quite understand what difference you are asking from the
> current AFR? do you mean relaxed as in, make the copy after the file
> is closed? please explain more clearly. [...]

Remark: if one of those servers was destroyed beyond repair, the additional
copy would be lost - so another wishlist item would be to check for replica
counts, and re-establish redundancy in a background process.

> > More to come...
>
> awaiting :)

That's really nice. If all developers were that nice, this world could be a
place to live in :-) Don't worry, I will show up again as soon as I have read
all the other stuff, and finished my regular work...
On 01/07/2012 at 15:17, xxxxxxxx wrote:
Hi, I cannot find any way to enable, disable (grey out) or even hide user controls for my plugin.
For example, I have a checkbox named "Automatic". When this is checked, I want a spline control to be either hidden, or greyed out.
I have gotten as far as being able to programmatically change the value of a checkbox in the Execute event, without reloading the plugin and without restarting C4D:
def Execute(self, tag, doc, op, bt, priority, flags):
    tag[1005] = True
This is all.
Considering how simple it is to change a control's value, I wish it were possible to do this:
tag[1005].Enabled = False
or
tag[1005].Visible = False
But (of course) it has to be way more complicated than this. And I also might want to hide / grey out the group the control in question belongs to, but I have not found a way to access the group itself, at all. Any help is much appreciated!
There is a thread here, but I have not succeeded in making anything here work in my own plugin.
-Ingvar
On 01/07/2012 at 15:36, xxxxxxxx wrote:
You need to look at the function GetDEnabling. Cinema calls this to determine if a control should be enabled or disabled (greyed out). Return true if enabled, false if not. So in your case you would test if the passed ID is for your spline object, if it is you check the status of the checkbox and return the correct value.
You can also show or hide controls but it's more tricky and in C++ you need the function GetDDescription. I don't think it's in the Python API yet, unless they've added an easier way to do it (sorely needed).
On 02/07/2012 at 04:17, xxxxxxxx wrote:
Thanks spedler! This is the right solution.
But still not that easy to find out about.
Here is my code:
def GetDEnabling(self, node, id, t_data, flags, itemdesc):
    if (id[0].id == MY_SUPER_SPLINE):
        return node[SOME_CHECKBOX] == 1
    return True
Firstly, it is not obvious to me how to get the id out of the id - lol :))
The id that comes as an argument actually consists of 3 numbers. And to get the one I want, I had to search the Internet, yes. id[0].id is what I need. Not obvious at all, and no example in the doc showing it to me.
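For anyone else puzzled by this: the id argument is a DescID, a small stack of DescLevel entries, and indexing it gives one level whose .id member is the plain integer. A toy model of why the code reads id[0].id (plain Python, with made-up classes standing in for c4d.DescID and c4d.DescLevel, since the real API needs C4D running):

```python
class DescLevel(object):
    """Toy stand-in for c4d.DescLevel: just carries an integer id."""
    def __init__(self, id, dtype=0, creator=0):
        self.id, self.dtype, self.creator = id, dtype, creator

class DescID(object):
    """Toy stand-in for c4d.DescID: an indexable stack of DescLevels."""
    def __init__(self, *levels):
        self._levels = list(levels)
    def __getitem__(self, i):
        return self._levels[i]

did = DescID(DescLevel(1006))
print(did[0].id)   # -> 1006, hence the id[0].id in GetDEnabling
```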
Then, the C4D docs about this event handler are strange. First they say the method should return true to set the control enabled, and false to set it disabled (grayed out). And in fact, that works. But in the same document I am told to "Then make sure to include a call to the parent at the end:" and also this: "Note: It is recommended that you include a call to the base class method as your last return."
With this example:
return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
I wish the documentation could make up its mind. Am I supposed to return True or False, or am I supposed to return the call to this function?
Anyhow, I was not able to implement this call at all; I have no idea how to do it, and the docs give me no example. "NodeData is unknown" is the message I get.
Without the help from you guys, I would have been stuck. This is a good example of the lack of examples in the docs. I wish they had shown sample code, like the one I posted at the beginning of this message. It would have saved me lots of time!
-Ingvar
On 02/07/2012 at 06:48, xxxxxxxx wrote:
Yes, the documentation should be improved because it's currently confusing.
The note:
"It is recommended that you include include a call to the base class method as your last return."
Should be:
"If the passed id element is not processed, you should include a call to the base class method as your last return:"
And here's an example:
def GetDEnabling(self, node, id, t_data, flags, itemdesc):
    if (id[0].id == MY_SPLINE_ID):
        return node[MY_CHECKBOX_ID] == 1
    else:
        return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
On 02/07/2012 at 06:53, xxxxxxxx wrote:
The other thing which could be improved are the examples. Some have clearly been ported from the C++ original, but not completely. For example, Ingvar wants a GetDEnabling example, and you can find one in the C++ SDK DoubleCircle plugin, but the Python port is missing that bit. Presumably the call wasn't in the API when it was ported, so they really need bringing up to date.
On 02/07/2012 at 08:32, xxxxxxxx wrote:
In:
return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
what is "NodeData" supposed to be?
I've tried "self", "node" and the Object Class but no luck.
Cheers
Lennart
On 02/07/2012 at 08:53, xxxxxxxx wrote:
Originally posted by xxxxxxxx
And here's an example:
def GetDEnabling(self, node, id, t_data, flags, itemdesc):
    if id[0].id == MY_SPLINE_ID:
        return node[MY_CHECKBOX_ID] == 1
    else:
        return NodeData.GetDEnabling(node, id, t_data, flags, itemdesc)
Originally posted by xxxxxxxx
Good. As I wrote in my post further up, and as Lennart points out, what is NodeData supposed to be? Have you tried this yourself? The problem is - the example you gave won't run here.
While I am on the air - is there a way to disable (grey out) a whole group? Group IDs do not appear in this GetDEnabling handler, unfortunately. Only the controls themselves.
-Ingvar
On 02/07/2012 at 10:20, xxxxxxxx wrote:
@ingvarai:
NodeData is supposed to be the NodeData class from the c4d.plugins module. Also, Yannick forgot to pass self as the first argument.
else:
    return c4d.plugins.NodeData.GetDEnabling(self, node, id, t_data, flags, itemdesc)
This might be confusing when you've been working with other languages before, but this is how Python works.
Personally, I prefer using the super() method. As all classes in the c4d module inherit from object, you won't get problems with it.
else:
    return super(MySubclass, self).GetDEnabling(node, id, t_data, flags, itemdesc)
Here's some code that should help you understand.
class Superclass(object):
    def bar(self):
        print "Superclass.bar()"

class Subclass(Superclass):
    def bar(self):
        print "Subclass.bar()"
        Superclass.bar(self)
        super(Subclass, self).bar()

o = Subclass()
o.bar()
print
Superclass.bar(o)
Subclass.bar(o)
super(Subclass, o).bar()

This prints:

Subclass.bar()
Superclass.bar()
Superclass.bar()

Superclass.bar()
Subclass.bar()
Superclass.bar()
Superclass.bar()
Superclass.bar()
-Nik
On 02/07/2012 at 10:56, xxxxxxxx wrote:
Thanks Niklas, one step closer, but I still get a "TypeError: argument 5".
Looking in the SDK, it says "flags" is not used, so I removed it, but then
got: "TypeError: GetDEnabling() takes exactly 5 arguments (6 given)".
It's only five given.....
Maxon for heavens sake, give a -working- example.
Not only is the SDK a nightmare, but this is the official plugin support.
Ingvar you will soon learn that coding for Cinema is not a walk in the park.
On 02/07/2012 at 11:37, xxxxxxxx wrote:
@tca:
Hm, I actually didn't execute the version I corrected from Yannick's post, but I can't spot the error now
by just looking at it. I'll dig in right away.
Yes, indeed. I hope to have enough influence on MAXON, being a beta tester now, to make them
enhance the SDK.
Cya in a minute,
-Nik
PS: Not used doesn't mean it does not require the argument.
On 02/07/2012 at 12:29, xxxxxxxx wrote:
@tca, ingvar, Yannick:
I can't get the parent-call to work. (I've never needed it, so I didn't know it doesn't work until now) I guess it's just a bug in the Py4D API. Yannick or Sebastian should be able to give us more information.
If the official support does not have a solution for the parent-call, I'd suggest just to ignore the advice in the SDK and return False instead of the return-value of the parent-call.
def GetDEnabling(self, node, id, t_data, flags, itemdesc):
    rid = id[0].id
    if rid == c4d.MYTAG_VALUE:
        # ...
        return False
    elif rid == ...:
        # ...
        return True
    return False
@Lennart
> Ingvar you will soon learn that coding for Cinema is not a walk in the park.
Which is illustrated by this:
"Maxon for heavens sake, give a -working- example."
When Maxon cannot do it right themselves, I somehow accept that I myself have problems.. I am asking myself to what extent it is my way of thinking, to what extent the documentation is written in an unusual way and so forth. But I am starting to realize that the way the docs are laid out is unusual to me, and that this is part of the source of my problems. I spend waaaaaaaaay too much time carrying out even the simplest tasks. So it is partly bad or even wrong documentation, and partly me that is not familiar with the way it is documented.
And I have written plugins before. I wrote several for Sony Vegas, the video NLE. What a breeze! The SDK has a full list of Classes, their properties, their methods and mostly everything works on the first or second attempt. Ok, sleeves up, I must get used to the docs. But I often feel blindfolded, tapping around in the darkness..
And if you want C4D to crash - I mean really crash, you can do this:
def GetDEnabling(self, node, id, t_data, flags, itemdesc):
    if(id == 1003):
        Return False
:)))
-Ingvar
On 02/07/2012 at 12:38, xxxxxxxx wrote:
Nik,
how do you do this:
c4d.MYTAG_VALUE:
I have never gotten this to work.
I must redefine MYTAG_VALUE in the pyp file.
I often see the c4d.XXXX. What does it mean, and how do you do it?
Another question:
Is it possible to disable a whole group using Python? (With several user controls)
I see that C4D can do it, with the built in controls.
-Ingvar
@ingvar:
I totally agree with you. Once I started with Py4D, I had experience with COFFEE so it wasn't that hard, because I did already understand the principles of c4d. I've never written any code before COFFEE, but I think it is even harder for people that are already used to something else, especially something better.
To your "crashy" code: Why should this crash Cinema 4D? ^^ IMHO the easiest way to crash it, is this:
op.InsertUnder(op)
or even
import types
types.FunctionType(types.CodeType(0, 0, 0, 0, "KABOOM", (), (), (), "", "", 0, ""), {})()
That even works for every Python Interpreter
edit :
This only works for descriptions, not for dialogs. When C4D detects a new plugin, it recomputes the "symbolcache". But if you later add new symbols to your description, it might happen that C4D does not add them to the "symbolcache". [citation needed, information out of experience]
You can delete the file under %appdata%/MAXON/Cinema 4D RXX/prefs/symbolcache to fix this. Note that it's called coffeesymbolcache under R12.
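If you end up doing this often, the deletion can be scripted. This is only a convenience sketch based on the paths mentioned above; the prefs-folder layout is the only assumption, and the function name is made up:

```python
import glob
import os

def delete_symbol_caches(prefs_root):
    """Delete any symbolcache/coffeesymbolcache files under a MAXON prefs tree.

    prefs_root is the MAXON folder, e.g. os.path.expandvars("%appdata%/MAXON").
    The cache file is called "symbolcache" in newer releases and
    "coffeesymbolcache" under R12. Returns the list of removed paths.
    """
    removed = []
    for name in ("symbolcache", "coffeesymbolcache"):
        # One version folder per release, e.g. "Cinema 4D R12/prefs/..."
        for path in glob.glob(os.path.join(prefs_root, "*", "prefs", name)):
            os.remove(path)
            removed.append(path)
    return removed
```

Run it with Cinema 4D closed, so the cache is rebuilt cleanly on the next start.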
Hm, I don't know actually. Only when Cinema 4D asks you about the group in GetDEnabling. Otherwise, you'd have to figure out what ID's are in the group you'd like to disable.
The whole GetDEnabling thingy is a little overcomplicated, imho. Why not just make something like SetDEnabling(id, status)?! Just like Enable() of the GeDialog class..
On 02/07/2012 at 13:07, xxxxxxxx wrote:
Nik,
_> You can delete the file under _
Worked! Thank you!
> Only when Cinema 4D asks you about the group in GetDEnabling
Unfortunately it does not. Groups are not iterated, only user controls.
> The whole GetDEnabling thingy is a little overcomplicated
Probably prepared for a more complicated future..
Lots of things seem overcomplicated to me..
-Ingvar
On 02/07/2012 at 13:37, xxxxxxxx wrote:
Thanks Niklas for checking.
On 03/07/2012 at 01:29, xxxxxxxx wrote:
Originally posted by xxxxxxxx
The whole GetDEnabling thingy is a little overcomplicated, imho. Why not just make something like SetDEnabling(id, status)?! Just like Enable() of the GeDialog class..-Nik
My guess is that a lot of the SDK is ancient and comes from early versions of Cinema. To rewrite it now would be a gigantic task because this goes right to the core of Cinema's GUI.
If you think GetDEnabling is bad, wait until you have to use GetDDescription (C++ only ATM). This one requires you to use undocumented function calls that aren't even in the C++ docs.
On 04/09/2017 at 00:45, xxxxxxxx wrote:
ahhm... this thread and others about GetDEnabling() are a bit confusing!
i.e. the sdk example does not use the (above discussed) return c4d.plugins.NodeData.GetDEnabling() call
so, is there a way to ghost a userdata entry (in python tag space)?
for example i have a simple op[c4d.USERDATA_ID,5] = "whatever"
now i want to ghost (or unghost) this field... (maybe just to disallow user interactions)
i guess you need to hook GetDEnabling() in like message() ? but then, if i add
def GetDEnabling(self, node, id, t_data, flags, itemdesc) : print "hi there!"
to my python tag, it doesn't get called at all. i do not get any console output....
is GetDEnabling even possible in python tags?
On 06/09/2017 at 05:40, xxxxxxxx wrote:
Hi,
neither GetDEnabling() nor GetDDescription() can be overridden in a Python Tag (talking about the scripting tag here, not a TagData plugin implemented in Python). So, no, it's unfortunately not possible to disable parameters in a Python tag. But it should be fairly easy to translate a Python tag into a TagData plugin written in Python (script code goes into Execute()).
On 06/09/2017 at 23:12, xxxxxxxx wrote:
ok, thanks for clarifying this, andreas! | https://plugincafe.maxon.net/topic/6436/6907_enable--disable--hide-user-controls | CC-MAIN-2021-49 | refinedweb | 2,362 | 74.19 |
Collect a sequence of APFilter objects and apply them in turn when a value is found for an attribute/property.
#include <ie_exp_RTF_AttrProp.h>
Useful when you wish to mutate some of the values for either the attributes or properties but you do not wish to have to track the memory of your mutation. This class contains a single std::string cache so the caller can rely on the return value being sane until the next call to operator(). Many APFilter functors can be called on the same attr/prop and the result is: push_back(f1); push_back(f2); this->operator()( name, value ) == f2( name, f1(name, value ))
If there are no filter objects then the degenerate case is to just return the szValue given directly. Thus an APFilterList object without any filtering should not present a significant performance overhead. Since this is a template, the compiler has the option to inline the code and perhaps the empty() test on the filterlist also so that the cost becomes very very minimal.
Added by monkeyiq in June 2011 in order to delete part of the markup inside the revision attribute when a copy and paste is happening. Specifically, the markers indicating whether a paragraph is deleted need to be removed for pasted content, as that content is considered fresh and the content will have been coalesced.
Note that you can apply this filterlist to Attributes or Properties depending on where you source the pValue from.
Usage:
    APFilterList al;
    al.push_back( f2 );
    // ... somehow get szName and pValue from an AP
    return al( szName, pValue );

Where f2 is a filter like this:
    struct APFilterDropParaDeleteMarkers
    {
        std::string operator()( const gchar * szName, const std::string& value ) const
        {
            if( !strcmp( szName, "foo" ))
                return "bar";
            return value;
        }
    };
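The composition contract above — no filters returns the value unchanged, and with f1 then f2 pushed the call computes f2(name, f1(name, value)) — can be restated with a small Python analog. This is purely illustrative and not the AbiWord API; the names are invented:

```python
class FilterList:
    """Python analog of APFilterList: apply (name, value) filters in push order."""

    def __init__(self):
        self._filters = []

    def push_back(self, f):
        self._filters.append(f)

    def __call__(self, name, value):
        # With no filters this degenerates to returning value unchanged;
        # with f1 then f2 pushed it computes f2(name, f1(name, value)).
        for f in self._filters:
            value = f(name, value)
        return value


def drop_markers(name, value):
    # Analog of the APFilterDropParaDeleteMarkers example above.
    if name == "foo":
        return "bar"
    return value
```

The empty-list case is the cheap degenerate path the C++ comment emphasizes: the loop body never runs and the input value is handed straight back.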
Created on 2017-10-15 06:22 by pdox, last changed 2021-07-15 12:46 by iritkatriel. This issue is now closed.
Ensure that every function pointer in every PyTypeObject is set to a non-NULL default, to spare the cost of checking for NULL with every use. As a basic example, consider PyNumber_Negative:
PyObject *
PyNumber_Negative(PyObject *o)
{
    PyNumberMethods *m;

    if (o == NULL) {
        return null_error();
    }

    m = o->ob_type->tp_as_number;
    if (m && m->nb_negative)
        return (*m->nb_negative)(o);

    return type_error("bad operand type for unary -: '%.200s'", o);
}
If "tp_as_number" and "nb_negative" were always guaranteed non-NULL, then the function could omit the second if statement, and invoke the function pointer directly. To maintain the existing behavior, the default nb_negative function would be set to the following:
PyObject *
nb_negative_default(PyObject *o)
{
    return type_error("bad operand type for unary -: '%.200s'", o);
}
This removes two NULL checks from PyNumber_Negative. Many other operators and builtins would be able to benefit in the same way.
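The pattern being proposed — replace a NULL check at every call site with a shared default that raises the usual error — can be illustrated in pure Python. This is only an analogy for the C-level change, not CPython code; the class and function names are invented:

```python
class TypeSlots:
    """Toy analog of a PyTypeObject whose nb_negative slot is never NULL."""

    def __init__(self, name, nb_negative=None):
        self.name = name
        # Instead of leaving the slot empty, install a default that produces
        # the usual TypeError, so callers never have to test for None.
        self.nb_negative = nb_negative or self._nb_negative_default

    def _nb_negative_default(self, obj):
        raise TypeError("bad operand type for unary -: '%s'" % self.name)


def number_negative(slots, obj):
    # Mirrors the slimmed-down PyNumber_Negative: no slot checks at all.
    return slots.nb_negative(obj)
```

As with the shared default PyNumberMethods structure mentioned later in the thread, one default callable can be shared by every non-numeric type, keeping the per-type memory cost negligible.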
I believe it would also make sense to inline the 'as' structures (tp_as_async, tp_as_number, tp_as_sequence, tp_as_mapping, and tp_as_buffer), instead of having them be pointers. This would save one pointer dereference with each use. But this would have API compatibility consequences, so for this change it is off the table.
1. This will increase the size of type object. Most types are not numbers, not collections and don't support the buffer protocol, thus save memory for tp_as_number, tp_as_sequence, tp_as_mapping and tp_as_buffer. This also can increase the time of type creation.
2. This will make harder testing that the particular protocol is supported (e.g. see PyIndex_Check or PyObject_CheckBuffer). And this will break any third-party code that does this directly, without using C API.
3. Calling function instead of executing inlined code after checking can increase stack consumption and slow down execution (see the usage of tp_descr_get).
serhiy.storchaka:
1) Where tp_as_number would normally be NULL, instead it would point to a fixed PyNumberMethods structure containing the default functions. This would make the memory increase negligible, as all non-number types would use the same structure.
2) If this is behavior we want to formally support, then we should provide macros for it. (all they must do is compare against the default pointer value, rather than NULL). Are there particular extension(s) you suspect may be doing this?
3) This has to be handled on a case-by-case basis. I would not remove inlined optimizations. If there are some fields that truly benefit from remaining NULL, those can be left alone. I would like to focus on the functions for which the common case (in code without type errors) is to make the indirect call.
For all the reasons Serhiy mentioned, I recommend rejecting this request. This was an intentional and long-standing design decision. A substantial part of the Python ecosystem depends on it. | https://bugs.python.org/issue31791 | CC-MAIN-2021-39 | refinedweb | 477 | 56.25 |
Exploiting multi-cycle false paths in the performance optimization of sequential circuits
Info
- Publication number: US5448497A
- Authority: US
- Grant status: Grant
This invention relates to semiconductor integrated circuits and more particularly to the design of such circuits with the aid of computers.
As integrated circuits have increased in complexity and as there has developed a need for integrated circuits customized to a particular application, computer-aided design (CAD) has become an important technology. Moreover, to achieve fast turn around time in the design phase, an important technique in CAD has been logic synthesis for the design of integrated circuits. In this technique, the digital circuit to perform a specific application is first synthesized in block schematic form as an assembly of functional elements, such as AND and OR gates, and memory elements, such as registers. Various computer programs are available for this process.
While this approach can provide fast turn around times and a circuit that is usually efficient in its use of surface area and that also lends itself readily to testing, the circuit is often not especially high in performance, typically because it usually includes unnecessarily long paths that serve to slow the speed of the circuit. The slower the speed of a circuit, the longer the clock period that needs to be used with the circuit, and the slower the rate at which the circuit can perform the processing. Generally, a technique for improving the performance of an integrated circuit synthesized in this manner is subsequently to modify the circuit specifically to shorten the paths thereof that introduce long delays.
The bulk of the work done in the area of performance optimization of digital circuits has focused on combinational logic circuits which are circuits, free of memory elements, such as registers, that make a circuit dependent on its prior history. Circuits that include such memory elements are generally described as sequential circuits. While recognizing the fact that these prior techniques vary significantly in terms of their approaches towards the problem of designing faster combinational logic circuits, we will collectively classify them as being combinational speedup or combinational resynthesis techniques to recognize the fact that they focus on combinational logic circuits. Combinational speedup techniques have been directly applied to sequential logic circuits by considering the combinational part between the memory elements; a speedup of the combinational part can directly translate into a reduction of the clock period. However, this approach does not exploit any information derived from the sequential behavior of the circuit. An alternative approach, termed retiming is described in the paper entitled "Optimizing Synchronous circuitry by Retiming", published in Advanced Research in VLSI: Proceedings of the Third Caltech Conference, pp. 23-36, Computer Science Press, 1983. This approach recognizes the sequential behavior of the circuit and attempts to minimize the clock period of the circuit by repositioning the memory elements. Combinational speedup and retiming can be viewed as two ends of the spectrum; combinational speedup works only on the combinational logic and ignores the memory elements, retiming focuses only on the memory elements and ignores the nature of the combinational logic. This naturally led to work that attempted to combine the two ends of this spectrum. 
In the approach termed retiming and resynthesis, as described in the paper entitled "Retiming and Resynthesis: Optimizing Sequential Networks With Combinational Techniques", published in IEEE Transactions on Computer-Aided Design, Vol. CAD-10 No. 1, pp. 74-84, January, 1991, it was shown how the two could be combined for a restricted class of sequential circuits. Subsequently, in a paper entitled "Performance Optimization of Pipelined Circuits", published in Proceedings of the International Conference on Computer-Aided Design, November, 1990, pp. 410-413, it was demonstrated how retiming and resynthesis could be optimally combined for the performance optimization of pipelined circuits. The most notable limitation of this approach was the restriction on the class of circuits that could be handled.
The present invention relates to a design methodology, applicable more generally to sequential circuits, to improve the speed of the sequential circuit without changing its function. Basically, the methodology includes the following steps. Beginning with a circuit whose speed is to be improved, one first prepares for analysis a virtual circuit formed by cascading N copies of the original circuit over N time frames, where N is at least two. The memory elements are omitted in the virtual circuit. Next, one does a timing analysis (taking into account false paths) of the virtual circuit to identify the length L of the longest true path. Then one removes from the original circuit those paths that were longer than length L, making allowance for any fanouts along the long paths over the N time frames, to obtain a modified circuit that is fanout-free. This generally includes making multiple copies of the gates that are involved in fanouts, such that one copy lies in a path that is not false which can be retained, while the other copy lies in a false path that can be discarded. Then conventional techniques are used to remove any combinational or sequential redundancies from the modified circuit. Finally, the resulting redundancy-free circuit is retimed in conventional manner to reduce the delay and to obtain the desired faster circuit.
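The key new step — cascading N copies of the combinational logic with the memory elements removed — can be sketched abstractly by treating the combinational part as a function from (state, input) to (next state, output). This is a toy illustration only; real implementations operate on gate-level netlists and run a false-path-aware timing analysis on the unrolled logic, which is omitted here:

```python
def unroll(comb, n):
    """Cascade n copies of a circuit's combinational part over n time frames.

    comb maps (state, input) -> (next_state, output), i.e. it is the logic
    between the registers. The returned function is purely combinational in
    (initial_state, input_sequence): the intermediate registers are gone,
    which is the 'virtual circuit' the multi-cycle timing analysis runs on.
    """
    def cascaded(state, inputs):
        outputs = []
        for i in range(n):
            # Frame i: feed frame i-1's next-state directly into copy i.
            state, out = comb(state, inputs[i])
            outputs.append(out)
        return state, outputs
    return cascaded
```

By construction, evaluating the cascade agrees with clocking the sequential machine n times, so any analysis of the cascade is an analysis of the machine's n-cycle behavior.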
An important new element of this new methodology is cascading N copies (N being greater than one) of the original circuit and using such a cascaded circuit to analyze the timing properties of the original circuit. This element should have application in other methodologies involving sequential circuits. Usually, it will be desirable to eliminate the memory elements in the cascaded circuit, but this may not always be necessary. When the memory elements are not eliminated, the timing analysis tool would need to be modified appropriately.
The invention will be better understood from the following more detailed description taken with the accompanying drawing.
FIGS. 1-3 show circuits useful in a discussion of false paths and their removal. This discussion is helpful in explaining the present invention.
FIG. 4 is an example of a circuit that is to be modified to reduce its delay and so to increase its speed.
FIG. 5 shows the combinational parts of the circuit of FIG. 4 cascaded for two-cycle behavior.
FIG. 6 is the circuit of FIG. 5 after the two-cycle false paths have been removed and fanout compensation added for the two-cycle false paths removed.
FIG. 7 shows the circuit of FIG. 6 after the first connections of these paths have been set to a constant value.
FIG. 8 shows the circuit of FIG. 7 after final retiming to shorten its delay.
FIG. 9 is a flowchart of the basic process of the invention.
It will be helpful to begin with discussion on false paths for an understanding of the principles of the invention. False paths in combinational logic circuits have been studied extensively, and for a full discussion, reference is made to a paper by S. Devadas, K. Keutzer, and S. Malik entitled "Delay Computation in Combinational Circuits: Theory and Algorithms", that appeared in Proceedings of the International Conference on Computer-Aided Design, November, 1991.
The circuit 10 in FIG. 1 will serve to illustrate the notion of false paths as well as their removal in combinational circuits. It includes a series of gates, including buffers 11 and 12, AND gate 13 and OR gate 14. The integers inside the gates in this circuit represent the assumed delays of the gates. We will consider the floating mode operation of the circuit. In this mode, the state of the circuit is considered to be unknown when a given input vector is applied. This is a pessimistic assumption; it does not underestimate the length of the longest true path. In addition, we allow for monotone speedup, i.e. the analysis remains valid even when one or more gates in the circuit speed up to switch faster than their specified delay values. In this context a path is false when for each possible value of a primary input vector v one of the following two things happens:
1. At the inputs to some gate along the path in question, the signal value on the path presents a non-controlling value while an off-path signal (also referred to as a side-input) presents a controlling value. A controlling value for a gate is a value that determines the output value for a gate independent of the other inputs to a gate, for example a 0 for an AND gate. A non-controlling value cannot determine the output value by itself, for example, a 1 for an AND gate. Thus, the off-path signal controls the output.
2. Both the on-path signal and the off-path signal appear to present controlling values, but the off-path signal presents the controlling value first (this is referred to as an early input), thereby determining the output of the gate. FIG. 1 illustrates both these conditions. Consider the path of length 4 from terminal 15, the input to buffer 11, to terminal 17, the output of OR 14, by way of gates 11, 13 and 14. Consider all possible assignments to a and b, the two inputs, at terminals 15 and 16, respectively.
a=0, b=0: The second condition listed above occurs at the AND gate 13.
a=0, b=1: The first condition listed above occurs at the OR gate 14.
a=1, b=0: The first condition listed above occurs at the AND gate 13.
a=1, b=1: The second condition listed above occurs at the OR gate 14.
Thus, this path is false. As used hereinafter and in the claims, a "false path" is a path that satisfies the two conditions set forth above.
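The four-case enumeration above can be checked mechanically. The sketch below assumes a plausible structure and delays for the FIG. 1 circuit (on-path buffer delay 2, AND and OR delays 1, off-path buffer delay 1 — chosen so the on-path delays sum to the stated path length of 4), since the exact figures are not reproduced here; it reports, for each input vector, which falseness condition blocks the path:

```python
# Assumed gate delays: the on-path buffer + AND + OR sum to the stated
# path length of 4, and the off-path buffer (b into the OR gate) is
# faster than the on-path route to that gate.
D_BUF_A, D_AND, D_OR, D_BUF_B = 2, 1, 1, 1

def why_false(a, b):
    """Return the condition blocking the path a -> buffer -> AND -> OR."""
    # At the AND gate: on-path signal is buffer(a), side input is b (at t=0).
    on_t, on_v = D_BUF_A, a
    if b == 0 and on_v == 1:
        return "cond1@AND"   # side input controls, on-path is non-controlling
    if b == 0 and on_v == 0 and 0 < on_t:
        return "cond2@AND"   # both controlling, side input arrives first
    # At the OR gate: on-path is AND(buffer(a), b), side input is buffer(b).
    on_t, on_v = on_t + D_AND, a & b
    off_t, off_v = D_BUF_B, b
    if off_v == 1 and on_v == 0:
        return "cond1@OR"
    if off_v == 1 and on_v == 1 and off_t < on_t:
        return "cond2@OR"
    return None              # the path would be sensitized by this vector

results = {(a, b): why_false(a, b) for a in (0, 1) for b in (0, 1)}
```

Under these assumed delays the four results reproduce the four bullet cases above, and no input vector ever sensitizes the path — exactly the floating-mode falseness argument.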
It is well known that if all paths of length at least δ are false in a circuit, then these paths can be removed from the circuit while preserving its logical functionality and guaranteeing that its true delay does not increase. This is accomplished by first making these paths fanout free; a path is said to be fanout free if each gate along the path has exactly one output connection. This process may involve the duplication of some gates in the circuit. This is illustrated in FIG. 2 where buffer 11A and AND gate 13A have been added as copies of gates 11 and 13. Next, the first connection along each of these paths may be set to either constant value 0 or 1. Removing the first connection of each of these paths effectively removes these paths from the circuit without changing its functionality. When the "a" input to the AND gate 13 on the path from terminals 15 to 17 is set to a 0, this permits the removal of the first connection 15 as well as the AND gate 13. The resulting circuit is shown in FIG. 3. Note that the removal of paths of length at least δ in the circuit (all of which were false), results in the longest path in the circuit to be of length less than δ.
There now can be explained the principles of the invention and more particularly their applicability to the problem of increasing the speed of sequential circuits. FIG. 4 is an example of a sequential circuit 20 that is to be reconfigured to operate at a faster speed without change in functionality. The circuit 20 includes the AND gate g5 of which one input is applied by input terminal i3 and the other by way of register r3. The output of AND gate g5 is one input to the NAND gate g4, the other input of which is from register r2. The output of NAND gate g4 supplies one input to OR gate g1 the other input of which is from terminal i2. The output of NAND gate g4 is also connected to output terminal o4. The output of OR gate g1 is supplied to the input of the register r1 the output of which is connected both as an input to AND gate g7 and as an input to OR gate g3 . The output of AND gate g7 is connected both as an input to register r1 and as an output to terminal o1. The terminal o3 is also connected both to an input of OR gate g2 and to an input of OR gate g3. The output of OR gate g2 is the input to register r2 whose output is the other input to OR gate g3. The output of OR gate g3 is an input to AND gate g6, the other input of which is supplied by terminal i5. The output of AND gate g6 is supplied to terminal o2.
The initial (or starting) state of the circuit is <r1=0, r2=0, r3=0>. For simplicity let the delay of each gate in this circuit be one time unit and let the propagation delay along a path be the sum of the gate delays encountered along the path. Since all the gates are two-input gates, this is a reasonable delay model. (Dependence of gate delay on its load can be handled and is an orthogonal issue.) Let all the primary inputs be available at the clock edge and let all the primary outputs be required only at the clock edge. Thus, the smallest feasible clock period for this circuit is 3, to allow for the delay through g5, g4, and g1. This circuit has the following properties:
It has no sequential redundancies; hence no connection/gate can be removed without changing its functionality.
Retiming cannot reduce the clock period to below 3.
There are no false paths in the combinational part of the circuit. This includes consideration of unreachable states of the circuit. The state <r1=0, r2=0, r3=1> is an unreachable state, i.e. there is no sequence of inputs that can drive the machine to this state. It may be possible that for a given path in the circuit to be true, it is required that the machine be in an unreachable state. Since that will never be possible, this path will never be exercised. Thus, unreachable states must be taken into account while determining the truth or falsity of paths. In this example, there is no path that needs the unreachable state in order to be exercised. Thus, for each path in the circuit, starting from the initial state, there exists some sequence of input vectors that will exercise the path.
Our goal is to reduce the clock period needed for this circuit to 2.
Consider the operation of the circuit over two clock periods (or cycles). Conceptually, this can be visualized by considering two copies of the combinational part of this circuit cascaded, with the registers removed, as shown in FIG. 5. In this circuit, the same reference characters have been used for the corresponding gates in the two versions cascaded with either a plus or minus sign added to the new terminals, as described in the retiming and resynthesis paper. This is similar to the notion of considering multiple time frames in the test generation for sequential logic circuits.
Consider the two broken line paths beginning at the two inputs of AND circuit g5 on the left copy of the circuit and passing through NAND gate g4, OR gate g1, OR gate g3 in the right hand copy of the circuit, and AND gate g6 to output o2 +.
These paths are two-cycle paths, i.e. they span two cycles of operation for the circuit. Also, these are the only two-cycle paths of length 5, if we assume as before a delay of one unit per gate. What is interesting about these paths is that they are false. To see why this is so, observe that for any of these paths to be true o3 must present a 1 at the input of NAND gate g4 along the path since it is an early side input. However, if this happens, the output of g2 is also a 1 and this is presented to g3 in the second time frame. For any of these paths to be true, the connection from g2 to g3 in the second time frame must present a 0 since it is an early side input. Thus, both the paths in question are false. Note that the single cycle segments of both these paths are true. The three single-cycle paths: (1) from the output of register r3 to x, the output of OR g1, (2) from input i3 to z, and (3) from x to o2+ are each individually true.
Since the two paths are false, they can be removed without changing the functional behavior of the circuit as viewed over two cycles. However, before their removal, these paths must be made fanout free.
Actually, the major steps of the procedure, viz. identifying and removing two-cycle false paths, followed by a retiming can be directly done on the original circuit itself. The two-cycle behavior is only needed to conceptually understand what is going on. It should be noted that this technique can be extended to any number of cycles (or time frames).
We know from the foregoing that both of the longest two-cycle paths (of length 5), r3, g5, g4, g1, r1, g3, g6, o2 and i3, g5, g4, g1, r1, g3, g6, o2 are false in the circuit of FIG. 4. As explained before, they are false because to sensitize these paths, a 1 is required on the output of register r2 in the first clock cycle, and a 0 in the second clock cycle. This is not possible because of the OR gate g2.
Given the knowledge that all the two-cycle paths of length 5 are false, we generate the circuit of FIG. 6 in which both two-cycle paths of length 5 are fanout free. This circuit was obtained in the following manner from the initial circuit shown in FIG. 4: We began from the final terminal in the longest paths, namely o2, and proceeded towards the circuit inputs along the longest paths. The first point at which a fanout is encountered is the output of register r1. The fanout of r1 to g3 lies on the two-cycle paths of length 5, and the fanout of r1 to g7 does not. Since the goal is to make the two-cycle paths of length 5 fanout free, we duplicate r1 into r1 and r'1. The initial state of r'1 is the same as the initial state of r1, i.e. 0. The gate g1 now fans out to r1 and r'1. The same step is repeated on the gates g1, g4 and g5 in that order to finally obtain the circuit in FIG. 6.
In the circuit of FIG. 6, the two-cycle paths of length 5, i.e. r3, g'5, g'4, g'1, r'1, g3, g6, o2 and i3, g'5, g'4, g'1, r'1, g3, g6, o2, are not only false, they are also fanout free. As a result both stuck-at-0 and stuck-at-1 faults on the fanout of r3 to g'5 are sequentially redundant because their effect cannot be propagated beyond two time frames. Similarly, the stuck-at faults on the fanout of i3 to g'5 are also sequentially redundant (independent of the faults on the fanout of r3 to g'5). Therefore, we can replace each of the two wires by constant values. We choose to replace both the wires by the constant 1. Propagating the constant values provides the circuit of FIG. 7. This result is a more general form of the observation made in a prior art paper that a stuck-at fault on the first link of a fanout-free combinational false path is combinationally redundant.
In particular, if in the circuit of FIG. 6 the inputs provided at inputs from i3 and r3 are fixed at 1, it is assured that the output of AND gate g'5 is 1. In turn, this permits the output of NAND gate g'4 to be dependent only on the input from register r2. As a consequence, the gates g'5 and g'4 are eliminated and there is added the inverter gate g10 whose input is supplied by register r2 and whose output provides an input to OR gate g'1, as shown in FIG. 7.
Notice that the longest two-cycle path in this circuit is of length 4, and is true. The clock-period of this circuit is still 3, but is reduced to 2 by retiming, as shown in FIG. 8. Retiming is done in the manner described in the earlier mentioned paper published in Advanced Research in VLSI: Proceedings of the Third Caltech Conference, pp. 23-36, Computer Science Press, 1983. As seen in FIG. 8, in the retimed circuit, register r1 has been replaced from the output path of OR g1 and register r1 "' has been included in an input lead of OR gate g1 and a new register r1 " has been added to the other input lead of OR g1.
The initial state of the retimed circuit is <r3 =0, r2 =0r', 1 =0, r"1 =0, r"'1 =0>. Notice two things:
All the circuits in FIGS. 4-8 are functionally identical.
The circuit of FIG. 8 has 9 gates and 5 registers, while the original circuit (FIG. 4) has 7 gates and 3 registers.
As mentioned previously, it should be noted that this technique can be extended to false paths over any number of cycles, when required.
The details of this multi-cycle false path utilization algorithm that operates on a single copy of the circuit will now be discussed.
The delay optimization algorithm operates on a single copy of the circuit. The algorithm is general in the sense that it can utilize false paths over an arbitrary number of time frames to reduce the clock-period. The basic outline of the overall algorithm is given below in C-like syntax familiar to workers in logic synthesis:
______________________________________reduce-clock-period(ckt, num.sub.-- tf)/* ckt satisfies the property that it has no false paths overnum.sub.-- tf - 1 time frames */casc.sub.-- ckt = make-cascade(ckt, num.sub.-- tf);δ = timing-analysis(casc.sub.--ckt);ckt = remove-multi-cycle-false-paths(ckt, num.sub.-- tf, δ);ckt = remove-combinational-redundancies(ckt);ckt = remove-sequential-redundancies(ckt);ckt = retime-for-delay(ckt);return (ckt);}______________________________________
The corresponding flowchart is shown in high level form in FIG. 9.
As indicated in block 90, there is first prepared the sequential circuit C whose delay is to be reduced by eliminating the unnecessary longest paths that serve as the limiting factor in the length of clock cycle that can be used. In the example described this corresponds to FIG. 4.
Next, as indicated by block 91, the circuit C in its combinational form is cascaded over at least two cycles and the resulting circuit C' is subjected to a timing analysis to identify the length L of the longest true path in known fashion, as pointed out previously. Then as indicated in FIG. 6, the circuit C is reconfigured to provide any fanout needed in any false paths longer than L and then such false paths are removed from the reconfigured circuit to obtain a circuit C" as indicated by block 93. Then as indicated by block 94, any combinational or sequential redundancies are removed (in the example there were no such redundancies) and finally, as indicated in block 95, the circuit is retimed in conventional fashion to provide the final circuit shown in FIG. 8.
In practice, the relative order of the retiming step and the step of removing combinational or sequential redundancies is not critical and either may be performed first.
A more technical description of the algorithm might be as follows. The input to the algorithm is the sequential circuit (ckt) to be optimized for delay and the number of time frames (num-- t f) over which the multi-cycle false paths have to be identified and removed. In the first step, a combinational circuit (casc-- ckt) is generated by unfolding ckt num-- tf times. There are no registers on the wires that connect successive time frames. In this combinational circuit, there is no logical correlation between the values at any pair of primary inputs, and all primary outputs are distinct from each other. The timing analysis algorithm that determines δ, the length of the longest true path, will work on this circuit. The timing analysis algorithm determines δ by iteratively checking if a given value of δ is correct. This transformation is shown in FIG. 5. For purposes of the timing analysis algorithm, the arrival times on the primary inputs are fixed as follows: Let αi be the arrival time at primary input i. Let in be a primary input i in the nth time frame, let δ be the value of the longest path being checked for correctness, the arrival time at in, αi, is set to (αi1 +δ×(n-1)/num-- tf). Note that casc-- ckt is generated only for the purposes of timing analysis. In the second step of the algorithm, timing analysis is performed on casc-- ckt to compute δ, the length of the longest true path in casc-- ckt. Once the timing analysis has been carried out, casc-- ckt is discarded.
In the next step, the multi-cycle false path removal algorithm is invoked. The algorithm operates on a single copy of the circuit, and is novel in that it only duplicates that part of the sequential circuit necessary to make the long multi-cycle false paths fanout free. Once the long false paths have been made fanout free, the first connection of each fanout-free long false path is set to a constant (either of 0 or 1), and the constant is propagated as far as possible. This algorithm is based on the algorithm for false-path removal in combinational circuits described a paper presented at the ACM/SIGDA Workshop on Timing Issues in the Specification and Synthesis of Digital Systems, March 1992, entitled "Circuit structure relations to redundancy and delay: The KMS Algorithm Revisited". An outline of remove-multi-cycle-false-paths() is presented in the pseudo-code below:
__________________________________________________________________________remove-multi-cycle-false-paths(ckt, num.sub.-- tf, δ)/* In each time frame tf, for each gate g, compute the set of all pathlengths a .sub.g.sup.tfstarting from the first time frame to the output of g in the time frametf. */gate.sub.-- list = list of all gates ordered from circuit inputs tocircuit outputs;for (tf = 1; tf <= num.sub.-- tf; tf ++) {foreach.sub.-- gate g in gate.sub.-- list {if (g is a primary input) {a .sub.g.sup.tf = arrival time at input g in time frame tf;} else if (g is a latch && tf ! = 1) {fanin = gate feeding the latch g;a .sub.g.sup.tf = a.sup.tf-1 .sub.fanin;} else if (g is a latch && tf == 1) {a .sub.g.sup.tf = {0};} else {a .sub.g.sup.tf = { };foreach.sub.-- fanin fanin of gate g { /* d .sub.i.sup.j is the delay from the output of gate i to the output of gate j */ a .sub.g.sup.tf = a .sub.g.sup.tf ∪ {t + d.sup.g .sub.fanin. vertline.t a.sup.tf .sub.fanin};}}}}for (tf = num.sub.-- tf; tf >= 1; tf --) {gate.sub.-- list = list of all gates ordered from circuit outputs tocircuit inputs;/* In the time frame tf, for each gate g, compute the set of all pathlengths e .sub.g.sup.tffrom the output of g in the time frame tf to the circuit outputs in thenum.sub.-- tf.sup.th time framforeach.sub.-- gate g in gate.sub.-- list {e .sub.g.sup.tf = { };foreach.sub.-- fanout fanout of gate g { if (fanout is a primary output) { e .sub.g.sup.tf = e .sub.g.sup.tf ∪ {0}; } else if (fanout is a latch && tf == num.sub.-- tf) { e .sub.g.sup.tf = e .sub.g.sup.tf ∪ {0}; } else if (fanout is a latch && tf != num.sub.-- tf) { e .sub.g.sup.tf = e .sub.g.sup.tf ∪ e.sup.tf+1 .sub.fanout; r } else { e .sub.g.sup.tf = e .sub.g.sup.tf ∪ {t + d .sub.g.sup.fanou t|t e.sup.tf .sub.fanout}; }}}/* Now, duplicate gates so that the parts of the paths longer thanδ that are traversedduring the tf.sup.th clock tick will not have any fanout */foreach.sub.-- gate g in gate.sub.-- list {foreach.sub.-- time t in 
ascending order in a .sub.g.sup.tf { if (t + min(e .sub.g.sup.tf ) ≦ δ && t + max(e .sub.g.sup.tf) > δ) { /* Gate g must by duplicated. If g is a latch, the latch is duplicated */ g' = duplicate.sub.-- gate(g); a.sup.tf .sub.g' = a .sub.g.sup.tf; e.sup.tf .sub.g' = e .sub.g.sup.tf - {t.sub.e |t.sub.e e .sub.g.sup.tf, t + t.sub.e ≦ δ}; e .sub.g.sup.tf = e .sub.g.sup.tf - {t.sub.e |t.sub.e e 1.sub.g.sup.tf, t + t.sub.e > δ}; /* Now distribute the fanout */ ; foreach.sub.-- fanout fanout of gate g { if (fanout is a latch && t + min(e.sup.tf+1 .sub.fanout) > δ) { replace connection from g to fanout by g' to fanout; }else if (t + min(e.sup.tf .sub.fanout) + d .sub.g.sup.fanout > δ) { replace connection from g to fanout by g' to fanout; } } }}}}Set constants on the first edge of all paths longer than δ innum.sub.-- tf time frames;Propagate constants as far as possible;/* A constant is propagated through a latch only if it is the same as theinitial value of the latchreturn(ckt);}__________________________________________________________________________
Once the long multi-cycle false paths have been removed, combinational and sequential redundancies are removed from the circuit to recover area. The circuit is then retimed to reduce the clock period.
The procedure described can be shown not to alter the functionality of the circuit.
It is to be understood that the specific example described is merely illustrative of the general principles of the invention and that typically the circuit that needs to be configured is more complex than that described. | https://patents.google.com/patent/US5448497?oq=flatulence | CC-MAIN-2018-09 | refinedweb | 5,133 | 60.85 |
I'm a newbie to Java and to EJB. Want to create a simple EJB.
So I've created an EJB Project via Eclipse (Eclipse Java EE IDE Juno
Service Release 1 Build ID: 20120920-0800). Then I've added a Stateless
Session Bean there with Remote and Local interfaces. Then I added a simple
method that just returns a sum of 2 numbers. Here is the code:
Test.java:
packag
I am looking to add some dynamics to our corporate website. This is a
secondary role so I'd rather not be spending a ton of time on it.
At this point, all I need is a simple PHP script where a non-technical
user can pull up and manage the records in a MySQL table. There's only one
table of data to be managed; it's just that it will be accessed and updated
quite frequently.
How to create simple post/get API making simple calling to GAE database
using Google App Engine? Like create DB item retrive and delete. How to
acsess it after you created it?
I am trying to show Grouped Products list based on attributes of its
associated simple products. Right now i am doing like below
- Create a collection of simple products and add attribute
filters like color,brand etc., like below$productCollection =
Mage::getModel('catalog/product')->getCollection()
->addStoreFilter(Mage::app()->getStore())
I am a PHP programmer who is having to do some work in the android
development environment. I have 2 books on this and have tried 30 search
engine topics and still have not found just a simple example of everything
that you need to do to place a working hyperlink in a Java android
application. I just need a very simple but complete ingredient for
doing so. I have the 2.2 android developme
Does anyone have any simple JEXL examples using a loop. I am looking to
iterate around a simple object arraylist to output various string values?
I didn't want to cause anyone to
waste their time trying to fix this code, since I've discovered the issue.
Everyone who tried to help will get upvotes for it.
Here's the
issue I discovered the issue myself and feel incredibly stupid for making
this mistake.
The mistake is in my HTML code. I accidentally
named a bunch of the fields to
I want to create an object, let's say a Pie.
class Pie
def initialize(name, flavor) @name = name
@flavor = flavor end end
But a Pie can
be divided in 8 pieces, a half or just a whole Pie. For the sake of
argument, I would like to know how I could give each Pie object a price per
1/8, 1/4 or per whole. I could do this
I used to get it to send an email, but this seems to only work half of
the time.
I just need a relatively fail-safe way to send a
~20KB plain text message to somewhere I, the developer, can access on the
Internet.
Any suggestions would be greatly appreciated.
Thanks.
ok so i have have this
{"status":0,"id":"7aceb216d02ecdca7ceffadcadea8950-1","hypotheses":[{"utterance":"hello
how are you","confidence":0.96311796}]}
and at the
moment i'm using this shell command to decode it to get the string i need,
echo $x | grep -Po '"utterance":.*?[^]"' | sed -e s/://g -e
s/utterance//g -e 's/"//g' | http://bighow.org/tags/simple/1 | CC-MAIN-2017-47 | refinedweb | 571 | 72.66 |
import sys sys.path.append('../code') from init_mooc_nb import * init_notebook() from holoviews.core.options import Cycle %output size=120 pi_ticks = [(-np.pi, r'$-\pi$'), (0, '0'), (np.pi, r'$\pi$')] def ts_modulated_wire(L=50): """Create an infinite wire with a periodic potential Chain lattice, one orbital per site. Returns kwant system. Arguments required in onsite/hoppings: t, mu, mu_lead, A, phase The period of the potential is 2*pi/L. """ omega = 2 * np.pi / L def onsite(site, p): x = site.pos[0] return 2 * p.t - p.mu + p.A * (np.cos(omega * x + p.phase) + 1) def hopping(site1, site2, p): return -p.t sym_lead = kwant.TranslationalSymmetry([-L]) lat = kwant.lattice.chain() syst = kwant.Builder(sym_lead) syst[(lat(x) for x in range(L))] = onsite syst[lat.neighbors()] = hopping return syst def modulated_wire(L=50, dL=10): """Create a pump. Chain lattice, one orbital per site. Returns kwant system. L is the length of the pump, dL is the length of the clean regions next to the pump, useful for demonstration purposes. Arguments required in onsite/hoppings: t, mu, mu_lead, A, omega, phase """ def onsite(site, p): x = site.pos[0] return 2 * p.t - p.mu + p.A * (np.cos(p.omega * x + p.phase) + 1) lead_onsite = lambda site, p: 2 * p.t - p.mu_lead def hopping(site1, site2, p): return -p.t lat = kwant.lattice.chain() syst = kwant.Builder() syst[(lat(x) for x in range(L))] = onsite syst[lat.neighbors()] = hopping sym_lead = kwant.TranslationalSymmetry([-1]) lead = kwant.Builder(sym_lead) lead[lat(0)] = lead_onsite lead[lat.neighbors()] = hopping syst.attach_lead(lead) syst.attach_lead(lead.reversed()) return syst def total_charge(value_array): """Calculate the pumped charge from the list of reflection matrices.""" determinants = [np.linalg.det(r) for r in value_array] charge = np.cumsum(np.angle(np.roll(determinants, -1) / determinants)) charge = charge - charge[0] return charge / (2 * np.pi):18.042348.
MoocVideo("gKZK9IGY9wo", src_location='3.1-intro', res='360')
Previously, when studying the topology of systems supporting Majoranas (both the Kitaev chain and the nanowire), we were able to calculate topological properties by studying the bulk Hamiltonian $H(k)$.
There are two points of view on this Hamiltonian. We could either consider it a Hamiltonian of an infinite system with momentum conservation$$H = H(k) |k\rangle\langle k|,$$
or we could equivalently study a finite system with only a small number of degrees of freedom (corresponding to a single unit cell), and a Hamiltonian which depends on some continuous periodic parameter $k$.
Of course, without specifying that $k$ is the real space momentum, there is no meaning in bulk-edge correspondence (since the edge is an edge in real space), but the topological properties are still well-defined.
Sometimes we want to know how a physical system changes if we slowly vary some parameters of the system, for example a bias voltage or a magnetic field. Because the parameters change with time, the Hamiltonian becomes time-dependent, namely$$H = H(t).$$
The slow adiabatic change of parameters ensures that if the system was initially in the ground state, it will stay in the ground state, so that the topological properties are useful.
A further requirement for topology to be useful is the periodicity of time evolution:$$H(t) = H(t+T).$$
The period can even go to $\infty$, in which case $H(-\infty) = H(+\infty)$. The reasons for the requirement of periodicity are somewhat abstract. If the Hamiltonian has parameters, we're studying the topology of a mapping from the space of parameter values to the space of all possible gapped Hamiltonians. This mapping has nontrivial topological properties only if the space of parameter values is compact.
For us, this simply means that the Hamiltonian has to be periodic in time.
Of course, if we want systems with bulk-edge correspondence, then in addition to $t$ our Hamiltonian must still depend on the real space coordinate, or the momentum $k$.
In the image below (source: Chambers's Encyclopedia, 1875, via Wikipedia) you see a very simple periodic time-dependent system, an Archimedes screw pump.
The changes to the system are clearly periodic, and the pump works the same no matter how slowly we use it (that is, change the parameters), so it is an adiabatic tool.
What about a quantum analog of this pump? Turns out it is just as simple as you would think.
Let's take a one-dimensional region, coupled to two electrodes on both sides, and apply a strong sine-shaped confining potential in this region. As we move the confining potential, we drag the electrons captured in it.
So our system now looks like this:
# def f(x): if x < 0.0: return mu_lead if x >= 0.0(2.0, 1.25, 5, 0, head_width=0.15, head_length=1.0, fc='k', ec='k') plt.xlabel('$x$') plt.ylabel('$U(x)$') plt.xticks([]) plt.yticks([]) plt.show()
It is described by the Hamiltonian$$H(t) = \frac{k^2}{2m} + A [1 - \cos(x/\lambda + 2\pi t/T)].$$
As we discussed, if we change $t$ very slowly, the solution will not depend on how fast $t$ varies.
When $A \gg 1 /m \lambda^2$ the confining potential is strong, and additionally if the chemical potential $\mu \ll A$, the states bound in the separate minima of the potential have very small overlap.
The potential near the bottom of each minimum is approximately quadratic, so the Hamiltonian is that of a simple Harmonic oscillator. This gives us discrete levels of the electrons with energies $E_n = (n + \tfrac{1}{2})\omega_c$, with $\omega_c = \sqrt{A/m\lambda^2}$ the oscillator frequency.
We can quickly check how continuous bands in the wire become discrete evenly spaced bands as we increase $A$:
p = SimpleNamespace(t=1, mu=0.0, phase=0.0, A=None) syst = ts_modulated_wire(L=17) def title(p): return "Band structure, $A={:.2}$".format(p.A) kwargs = {'ylims': [-0.2, 1.3], 'xticks': pi_ticks, 'yticks': [0, 0.5, 1.0], 'xdim': r'$k$', 'ydim': r'$E$', 'k_x': np.linspace(-np.pi, np.pi, 101), 'title': title} holoviews.HoloMap({p.A: spectrum(syst, p, **kwargs) for p.A in np.linspace(0, 0.8, 10)}, kdims=[r'$A$'])
So unless $\mu = E_n$ for some $n$, each minimum of the potential contains an integer number of electrons $N$.
Electron wave functions from neighboring potential minima do not overlap, so when we change the potential by one time period, we move exactly $N$ electrons.
question = "Why are some levels in the band structure flat while some are not?" answers = ["The flat levels are the ones whose energies are not sensitive to the offset of confining potential.", "Destructive interference of the wave functions in neighboring minima suppresses the dispersion.", "The flat levels are localized deep in the potential minima, " "so the bandwidth is exponentially small.", "The flat levels correspond to filled states, and the rest to empty states."] explanation = ("The dispersion of the bands in a perodic potential appears " "when the wave functions from neighboring minima overlap.") MoocMultipleChoiceAssessment(question=question, answers=answers, correct_answer=2, explanation=explanation)
As we already learned, integers are important, and they could indicate that something topological is happening.
At this point we should ask ourselves these questions: Is the number of electrons $N$ pumped per cycle topological, or can we pump any continuous amount of charge? How important is it that the potential well of the pump is deep?
To simplify the counting let's "dry out" the pump: We can define a procedure that empties the middle region, and pushes $n_L$ extra electrons to the left and $n_R$ electrons to the right.
For example, we can do this:
# Same plot as above, but now with an extra rectangular barrier in the # middle, and with arrows both ways showing that the barrier widens. # Plot of the potential in the pumping system as a function of coordinate. # Some part of the leads is shown with a constant potential. # Regions with E < 0 should be shaded to emulate Fermi sea. # a = 4.5 b = 6.5 top = 1.2 def f(x): if x < 0.0: return mu_lead if x >= 0.0 and x <= a: return mu + A * (1.0 - np.cos(x / lamb)) if x > a and x < b: return top if x >= b(a, 1.05, -1, 0, head_width=0.1, head_length=0.4, fc='k', ec='k') plt.arrow(b, 1.05, +1, 0, head_width=0.1, head_length=0.4, fc='k', ec='k') plt.xlabel('$x$') plt.ylabel('$U(x)$') plt.xticks([]) plt.yticks([]) plt.show()
A reverse of this procedure does the reverse of course, so it reduces the number of charges on the left and right sides.
Now here comes the trick:
When the middle region is emptied, the two sides are completely disconnected, and so the number of electrons on either side must be integer for every eigenstate of the Hamiltonian.
Next, if we performed the manipulation adiabatically, then if we start in an eigenstate of the Hamiltonian, we will also end in an eigenstate of the Hamiltonian. This is a consequence of the adiabatic theorem.
In light of 1. and 2., we conclude that in the process of drying the middle out, we pumped an integer number of charges.
Finally, adiabatic manipulation is only possible if the Hamiltonian stays gapped at all times.
Bonus: In our argument we didn't use the shape or the strength of the potential, so it applies universally to any possible pump.
So without doing any calculations, we can conclude that:
The number of electrons pumped per cycle of a quantum pump is an integer as long as the bulk of the pump is gapped. Therefore it is a topological invariant.
The expression for the pumped charge in terms of the bulk Hamiltonian $H(k, t)$ is complicated.
It's an integral over both $k$ and $t$, called a Chern number or in other sources a TKNN integer. Its complexity is beyond the scope of our course, but is extremely important, so we will have to study it... next week.
There is a much simpler way to calculate the same quantity using scattering formalism. From the previous two weeks, recall that we may infer the presence or absence of Majoranas at an end of a system by calculating either $Q = \textrm{sign}[\textrm{Pf}\,H(0)\,\textrm{Pf}\,H(\pi)]$ or $Q=\textrm{sign}\det r$, where $r$ is the reflection matrix from one end of the Majorana wire.
In order to derive the scattering expression, we need to understand how the pumped charge manifests in the reflection matrix.
Let's start from the case when there's just one mode in the reservoir. We'll count the charge pumped by making the reservoir finite but very large.
Now all the levels in the reservoir are quantized, and are standing waves, so they are equal weight superpositions of waves going to the left $\psi_L$ and to the right $\psi_R$,$$ \psi_n = \psi_L(x) + \psi_R(x) \propto \exp(ik_n x) + \exp(-ik_n x + i\phi), $$
where the wave number $k_n$ is of course a function of energy. The relative phase shift $\phi$ is necessary to satisfy the boundary condition at $x=0$, where $\psi_L = r \psi_R$, and so $\exp(i \phi) = r$. The energies of the levels are determined by requiring that the phases of $\psi_L$ and $\psi_R$ also match at $x = -L$.
Now, what happens when we pump one extra charge into the reservoir? All the energy levels are shifted up by one, that is $E_n \rightarrow E_{n+1}$, and accordingly the wave functions also change $\psi_n \rightarrow \psi_{n+1}$.
We conclude that the charge can only be pumped as the reflection phase $\phi$ advances by $2\pi$.
It's very easy to generalize our argument to many modes. For that we just need to sum all of the reflection phase shifts, which means we need to look at the phase of $\det r$.
We conclude that there's a very compact relation between charge $dq$ pumped by an infinitesimal change of an external parameter and the change in reflection matrix $dr$:$$ dq = \frac{d \log \det r}{2\pi i} = \operatorname{Tr}\frac{r^\dagger dr }{ 2 \pi i}. $$
While we derived this relation only for the case when all incoming particles reflect, and $r$ is unitary, written in form of trace it also holds if there is transmission.¹
Let's check if this expression holds to our expectations. If $||r||=1$, this is just the number of times the phase of $\det r$ winds around zero, and it is certainly an integer, as we expected.
We're left with a simple exercise.
We know now how to calculate the pumped charge during one cycle, so let's just see how it works in practice.
The scattering problem in 1D can be solved quickly, so let's calculate the pumped charge as a function of time for different values of the chemical potential in the pump.
%%opts Path.Q (color=Cycle(values=['r', 'g', 'b', 'y'])) %%opts HLine (color=Cycle(values=['r', 'g', 'b', 'y']) linestyle='--') def plot_charge(mu): energy = 0.0 phases = np.linspace(0, 2*np.pi, 100) p = SimpleNamespace(t=1, mu=mu, mu_lead=mu, A=0.6, omega= .3) syst = modulated_wire(L=100).finalized() rs = [kwant.smatrix(syst, energy, args=[p]).submatrix(0, 0) for p.phase in phases] wn = -total_charge(rs) title = '$\mu={:.2}$'.format(mu) kdims = [r'$t/T$', r'$q/e$'] plot = holoviews.Path((phases / (2 * np.pi), wn), kdims=kdims, label=title, group='Q') return plot[:, -0.5:3.5](plot={'xticks': [0, 1], 'yticks': [0, 1, 2, 3]}) kwargs = {'ylims': [-0.2, 1.3], 'xticks': pi_ticks, 'yticks': [0, 0.5, 1.0], 'xdim': r'$k$', 'ydim': r'$E$', 'k_x': np.linspace(-np.pi, np.pi, 101), 'title': lambda p: "Band structure, $A={:.2}$".format(p.A)} p = SimpleNamespace(t=1, mu=0.0, phase=0.0, A=0.6) syst = ts_modulated_wire(L=17) mus = [0.1, 0.3, 0.6, 0.9] HLines = holoviews.Overlay([holoviews.HLine(mu) for mu in mus]) spectrum(syst, p, **kwargs) * HLines + holoviews.Overlay([plot_charge(mu) for mu in mus]).relabel('Pumped charge')
In the left plot, we show the band structure, where the different colors correspond to different chemical potentials. The right plot shows the corresponding pumped charge. During the pumping cycle the charge may change, and the relation between the offset $\phi$ of the potential isn't always linear. However we see that after a full cycle, the pumped charge exactly matches the number of filled levels in a single potential well.
As a final mental exercise about pumps, let's think about what happens if we disconnect the leads and consider the spectrum of a closed system.
As the periodic potential moves, it tries to increase the energies of all the states at the right of the system and reduce the energy of all the states to the left (that's what pumping does after all).
So there should be states crossing the bulk band gap. Let's see if it's true.
p = SimpleNamespace(t=1, mu=0.0, mu_lead=0, A=0.6, omega=0.3, phase=None) syst = modulated_wire(L=110).finalized() phases = np.linspace(0, 2*np.pi, 251) en = [np.linalg.eigvalsh(syst.hamiltonian_submatrix(args=[p])) for p.phase in phases] en = np.array(en) ticks = {'xticks': [0, 1], 'yticks': [0, 0.5, 1]} kdims = [r'$t/T$', r'$E$'] holoviews.Path((phases / (2*np.pi), en), kdims=kdims)[:, 0:1.2](plot=ticks)
Indeed, the levels in the bulk stay flat and have a high degeneracy, but we see that there are also single levels that get pushed across the gap. Since the bulk is homogeneous, these states have to be localized at the edge.
Of course, since we have a finite system, the charge cannot be pumped forever from one end into the other. So the pumping breaks down when you see the edge states crossing the bulk bands. At these moments the charge can flow back through the bulk.
question = ("What happens to the dependence of the reflection phase shift on time if we " "remove one of the reservoirs and leave the other one?") answers = ["It becomes constant.", "For most of the cycle it stays the same, but there appear " "sharp jumps such that the total winding becomes zero.", "Nothing changes, since the two ends of the pump are " "far apart from each other, and the pump is not conducting.", "The reflection phase gets a new time dependence with zero winding, unrelated to the original one."] explanation = ("The total pumped charge must become equal to zero since there's nowhere to place the charge, but " "since the pump is insulating, the phase cannot change " "for most of the cycle unless a sharp resonance appears") MoocMultipleChoiceAssessment(question=question, answers=answers, correct_answer=1, explanation=explanation)
MoocVideo("6lXRAZ7hv7E", src_location='3.1-summary', res='360')
Questions about what you learned? Ask them below
MoocDiscussion('Questions', 'Quantum pumps')
Discussion Quantum pumps is available in the EdX version of the course. | https://nbviewer.jupyter.org/url/topocondmat.org/notebooks/w3_pump_QHE/pumps.ipynb | CC-MAIN-2019-18 | refinedweb | 2,852 | 66.33 |
Digital voltmeter using Arduino UNO (Range: 0-50 volts) using SIMULINO UNO
This is a simple project showing you how to make a digital voltmeter with an Arduino, where the readings are displayed on a 20x4 Liquid Crystal Display (LCD).
The proposed voltmeter design can read up to 50V, using the Arduino's built-in analogue-to-digital conversion.
The Arduino microcontroller is equipped with a 10-bit analogue-to-digital converter (ADC). This means the Arduino can distinguish 2^10 = 1024 discrete voltage levels. In this project, we measure input voltages in the range of 0 to 50V by using a voltage divider. It is very simple to use an Arduino as a voltmeter: the Arduino UNO has six analog pins (A0-A5) for reading analog input values. The circuit consists of two resistors, one LCD display, and an Arduino, which is the brain of the digital voltmeter. The two resistors act as a voltage divider; the node of the divider is connected to analog pin A0 of the Arduino, which reads the input voltage. A ground connection is established between the Arduino and the external voltage source.
You cannot feed 50V directly into an Arduino I/O pin; you need a resistor divider network that converts the 0-50V range into 0-5V. The 5.1V Zener diode in the figure prevents V_AN0 (Vin) from rising above 5.1V if the input voltage goes much above 50V. This protects the Arduino board.
The Arduino's built-in 10-bit ADC can thus be used to build a 0 to 50 volt digital voltmeter. An LCD display connected to the Arduino UNO displays the measured voltage. This voltmeter can read only DC voltage.
Code Design in Arduino IDE 1.8.0:
So, here’s the code you need for displaying the voltage value on the LCD using the Arduino board in Proteus ISIS:
#include "LiquidCrystal.h"
// initialize the library with the numbers of the interface pins
// lcd(RS, E, d4, d5, d6, d7)
LiquidCrystal lcd(2, 3, 4, 5, 6, 7);
float voltage = 0.0;
float V=0.0;
float R1 = 100000.0; // resistance of R1 (100K)
float R2 = 10000.0; // resistance of R2 (10K)
int analog_value= 0;
void setup()
{
Serial.begin(9600);
Serial.println("DIGITAL VOLTMETER");
lcd.begin(20, 4); // set up the LCD's number of columns and rows
lcd.setCursor (0,0);
lcd.print(" LET'S THINK BINARY "); // Print a message to the LCD.
lcd.setCursor (0,1);
lcd.print(" "); // Print a message to the LCD.
lcd.setCursor(1,2);
lcd.print("Digital Voltmeter");
}
void loop()
{
// read the value at analog input pin A0
//and store it in the variable analog_value
analog_value = analogRead(A0);
voltage = (analog_value * 5.0) / 1024.0;
V = voltage / (R2/(R1+R2));
if (V < 0.1)
{
V = 0.0;//statement to quash undesired reading !
}
// Display the voltage in the serial monitor
Serial.print ("Voltage: ");
Serial.print (V);
Serial.println (" VOLTS");
// Display the voltage in the LCD
lcd.setCursor(0, 3);
lcd.print("Voltage= ");
lcd.print(V);
lcd.setCursor(15,3);
lcd.print("VOLTS");
// Wait 2 seconds between the measurements
delay (2000);
}
Digital voltmeter using Arduino UNO Range:0-50 volt Using SIMULINO UNO (Schematic Diagram)
Results :
Compile the Arduino code and get the hex file from it.
For simulating with PROTEUS ISIS hit run button and then you will get following results:
So, now we can see that the LCD is displaying exactly the same values as are shown on the voltmeter. If you change the value of the variable resistor, the voltage shown on the LCD changes accordingly.
This digital voltmeter using Arduino UNO board can read voltage only between 0-50 volt.
SIGVEC(3B) SIGVEC(3B)
NAME
     sigvec - 4.3BSD software signal facilities

SYNOPSIS
     #include <signal.h>

     struct sigvec {
         int (*sv_handler)(int, int);
         int sv_mask;
         int sv_flags;
     };

     int sigvec(int sig, struct sigvec *vec, struct sigvec *ovec);

DESCRIPTION
     Sigvec specifies and reports on the way individual signals are to be
handled in the calling process. If vec is non-zero, it alters the way
the signal will be treated - default behavior, ignored, or handled via a
routine - and the signal mask to be used when delivering the signal if a
handler is installed. If ovec is non-zero, the previous handling
information for the signal is returned to the user. In this way (a NULL
vec and a non-NULL ovec) the user can inquire as to the current handling
of a signal without changing it. If both vec and ovec are NULL, sigvec
will return -1 and set errno to EINVAL if sig is an invalid signal (else
0), allowing an application to dynamically determine the set of signals
     supported by the system.

     A signal mask defines the set of signals currently blocked from
     delivery to a process.  It may be changed by a sigblock(3B) or
     sigsetmask(3B) call, or when a signal is delivered to the
     process.  When a handler is invoked, a new mask is formed for the
     duration of the handler's execution as the union of the current
     signal mask, the signal being delivered, and the signal
     mask associated with the handler to be invoked.
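     As a rough modern illustration (not part of the original manual), the
     deferred delivery of a blocked signal can be observed via the POSIX
     signal API exposed by Python's signal module; this sketch uses
     SIGUSR1 and pthread_sigmask, not sigvec itself:

```python
# Illustration only: a blocked signal stays pending and is delivered
# to the handler only once it is unblocked.
import os
import signal

delivered = []

def handler(signum, frame):
    delivered.append(signum)

signal.signal(signal.SIGUSR1, handler)                        # install handler

signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})    # block SIGUSR1
os.kill(os.getpid(), signal.SIGUSR1)                          # now pending
assert delivered == []                                        # blocked => deferred

signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})  # deliver pending
for _ in range(100000):          # give the interpreter a chance to run
    if delivered:                # the Python-level handler
        break
print(delivered)
```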
Sigvec assigns a handler for a specific signal. If vec is non-zero, it
specifies a handler routine and mask to be used when delivering the
specified signal. Further, if the SV_ONSTACK bit is set in sv_flags, the
system will deliver the signal to the process on a signal stack,
specified with sigstack(2b).
For a list of valid signal numbers and a general description of the
signal mechanism, please see signal(5).
Once a signal handler is installed, it remains installed until another
sigvec call is made, or an execve(2) is performed. The default action
for a signal may be reinstated by setting sv_handler to SIG_DFL; this
default is termination with a core image for signals marked [1]. If
sv_handler is SIG_IGN the signal is subsequently ignored, and pending
instances of the signal are discarded.
SIGKILL will immediately terminate a process, regardless of its state.
Processes which are stopped via job control (typically <ctrl>-Z) will not
act upon any delivered signals other than SIGKILL until the job is
restarted. Processes which are blocked via a blockproc(2) system call
     will unblock if they receive a signal which is fatal (i.e., a non-job-control signal).
After a fork(2) the child inherits all handlers, the signal stack and the
signal masks, but not the set of the pending signals.
     The exec(2) routines reset all caught signals to default action, clear
all handler masks and reset all signals to be caught on the user stack.
Ignored signals remain ignored; the blocked signal mask is unchanged and
pending signals remain pending.
The mask specified in vec is not allowed to block SIGKILL, SIGSTOP, or
SIGCONT. This is enforced silently by the system.
     A 0 value indicates that the call succeeded.  A -1 return value
     indicates that an error occurred and errno is set to indicate the reason.
sigvec is a library routine (executing in user space): if either vec or
ovec points to memory that is not a valid part of the process address
space, the process will receive a memory fault (SIGSEGV) signal and
terminate (unless it has installed a handler for SIGSEGV). If the
invalid pointer is the result of using a REFERENCE instead of a POINTER,
the compiler will issue a warning.
     sigvec will fail and no new signal handler will be installed if one of
     the following occurs:

     [EINVAL]       sig is not a valid signal number.

     [EINVAL]       An attempt is made to ignore or supply a handler for
                    SIGKILL or SIGSTOP.

SEE ALSO
     sigblock(3B), sigsetmask(3B), sigpause(3B), sigvec(3B); see
     signal(5) for a more detailed description of the behavior.
WARNING (IRIX)
The 4.3BSD and System V signal facilities have different semantics.
Using both facilities in the same program is strongly discouraged and
will result in unpredictable behavior.
Multiple Kernels
As we've seen before, SYCL kernels are launched asynchronously. To retrieve the results of a computation, we must either run the destructor of the buffer that manages the data or create a host accessor. A question comes up - what if we want to execute multiple kernels over the same data, one after another? Surely we must then manually synchronize the accesses? Luckily, we barely have to do anything. The SYCL runtime will guarantee that dependencies are met and that kernels which depend on others' results will not launch until the ones they depend on have finished.
All of this is managed under the hood and controlled through buffers and accessors. It is deterministic enough for us to be able to know exactly what will happen. Let's see an example:
Executing interdependent kernels
#include <array>
#include <iostream>
#include <numeric>

#include <CL/sycl.hpp>

namespace sycl = cl::sycl;

int main(int, char**) {
   sycl::queue q(sycl::default_selector{});

   std::array<int, 16> a_data;
   std::array<int, 16> b_data;
   std::iota(a_data.begin(), a_data.end(), 1);
   std::iota(b_data.begin(), b_data.end(), 1);

   sycl::buffer<int, 1> a(a_data.data(), sycl::range<1>(16));
   sycl::buffer<int, 1> b(b_data.data(), sycl::range<1>(16));
   sycl::buffer<int, 1> c(sycl::range<1>(16));
   sycl::buffer<int, 1> d(sycl::range<1>(16));

   <<Read A, Write B>>
   <<Read A, Write C>>
   <<Read B and C, Write D>>
   <<Write D>>

   auto ad = d.get_access<sycl::access::mode::read>();
   for (size_t i = 0; i < 16; i++) {
      std::cout << ad[i] << " ";
   }
   std::cout << std::endl;

   return 0;
}
In this example, we submit four command groups. Their operations are not particularly important. What matters is which buffers they write to and read from:
Read A, Write B
q.submit([&] (sycl::handler& cgh) {
   auto aa = a.get_access<sycl::access::mode::read>(cgh);
   auto ab = b.get_access<sycl::access::mode::discard_write>(cgh);
   cgh.parallel_for<class kernelA>(
      sycl::range<1>(16),
      [=] (sycl::item<1> item) {
         ab[item] = aa[item] * 2;
      }
   );
});
Read A, Write C
q.submit([&] (sycl::handler& cgh) {
   auto aa = a.get_access<sycl::access::mode::read>(cgh);
   auto ac = c.get_access<sycl::access::mode::discard_write>(cgh);
   cgh.parallel_for<class kernelB>(
      sycl::range<1>(16),
      [=] (sycl::item<1> item) {
         ac[item] = aa[item] * 2;
      }
   );
});
Read B and C, Write D
q.submit([&] (sycl::handler& cgh) {
   auto ab = b.get_access<sycl::access::mode::read>(cgh);
   auto ac = c.get_access<sycl::access::mode::read>(cgh);
   auto ad = d.get_access<sycl::access::mode::discard_write>(cgh);
   cgh.parallel_for<class kernelC>(
      sycl::range<1>(16),
      [=] (sycl::item<1> item) {
         ad[item] = ab[item] + ac[item];
      }
   );
});
Write D
q.submit([&] (sycl::handler& cgh) {
   auto ad = d.get_access<sycl::access::mode::read_write>(cgh);
   cgh.parallel_for<class kernelD>(
      sycl::range<1>(16),
      [=] (sycl::item<1> item) {
         ad[item] /= 4;
      }
   );
});
As we can see, some buffers are reused between the kernels with different access modes, while others are used independently. The order in which the SYCL runtime schedules the kernels will mirror this usage.
The first two kernels will be scheduled concurrently, because they do not depend on each other. Both of them read from the same buffer (A), but they do not write to it. Since concurrent reading is not a data race, that part is independent. Then, they also write to different buffers, so writes do not conflict. The runtime is aware of all this and will exploit it for maximum parallelism.
The third kernel is not independent - it reads from the buffers B and C into which the first two kernels write. Hence, it will wait for them to finish and be scheduled immediately after that.
Finally, the fourth kernel does not read anything that a previous kernel wrote, but it does write to the same data - the D buffer. Since mutating shared state in parallel is a data race, this kernel has to wait for the third one to finish and will execute only then.
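The runtime's reasoning can be mimicked outside SYCL. The sketch below (plain Python, with the kernel and buffer names from the example used purely as labels) derives each kernel's dependencies from its read/write sets using the three classic hazards: read-after-write, write-after-write and write-after-read:

```python
# Kernels in submission order: (name, buffers read, buffers written).
kernels = [
    ("kernelA", {"a"}, {"b"}),
    ("kernelB", {"a"}, {"c"}),
    ("kernelC", {"b", "c"}, {"d"}),
    ("kernelD", {"d"}, {"d"}),
]

def dependencies(kernels):
    """Map each kernel to the earlier kernels it must wait for."""
    deps = {name: set() for name, _, _ in kernels}
    for i, (name, reads, writes) in enumerate(kernels):
        for prev, prev_reads, prev_writes in kernels[:i]:
            if (reads & prev_writes          # read-after-write
                    or writes & prev_writes  # write-after-write
                    or writes & prev_reads): # write-after-read
                deps[name].add(prev)
    return deps

deps = dependencies(kernels)
print(sorted(deps["kernelC"]))  # -> ['kernelA', 'kernelB']
print(sorted(deps["kernelD"]))  # -> ['kernelC']
```

kernelA and kernelB end up with no dependencies, which is exactly why the runtime may schedule them concurrently.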
Our program outputs the correct results:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
In this case we have a well-defined execution order, since all kernels are submitted from the same thread. What if we have a multi-threaded application, with
submit calls being made on several threads? The queue is thread-safe, and the order in which kernels are executed will be decided by the order of submission. If you want to guarantee a specific order between kernels submitted from different threads, you have to synchronize this manually and make
submit calls in the right order - otherwise it could be random, depending on which thread happens to execute its operation on the queue first. | https://developer.codeplay.com/products/computecpp/ce/guides/sycl-guide/multiple-kernels | CC-MAIN-2020-16 | refinedweb | 774 | 56.76 |
On 01/18/11 14:57, Eli Zaretskii wrote: >> From: Paul Eggert <address@hidden> > >> * Look for new symbols in config.in that need to be configured >> manually. > > Can you post a list of those new symbols, please? You can see a complete list of all the symbols I recently changed by running this: bzr diff -r102854..102889 src/config.in (If this isn't easy for you to run, please let me know and I'll send out the full list.) Most of these symbols, I expect, you won't need to worry about, since they default to assuming that a feature is absent, and that assumption will be correct for Microsoft platforms. Since I don't know Microsoft, I don't know exactly which symbols actually need to be worried about. However, I suggest looking at these symbols more carefully, as they may need to be defined. The other new symbols, I expect, you don't need to define. /* Define to 1 if GCC-style __attribute__ ((__aligned__ (expr))) works. */ #undef HAVE_ATTRIBUTE_ALIGNED /* Define to 1 if strtold conforms to C99. */ #undef HAVE_C99_STRTOLD /* Define to `__inline__' or `__inline' if that's what the C compiler calls it, or to nothing if 'inline' is not supported under any name. */ #ifndef __cplusplus #undef inline #endif /* Define to the equivalent of the C99 'restrict' keyword, or to nothing if this is not supported. Do not define if restrict is supported directly. */ #undef restrict Also, you no longer need to worry about the following symbols, since they were removed from config.in: /* Define to 1 if the mktime function is broken. */ #undef BROKEN_MKTIME /* Define to 1 if you have the `mktime' function. */ #undef HAVE_MKTIME /* Define to compiler's equivalent of C99 restrict keyword. Don't define if equivalent is `__restrict'. */ #undef __restrict | http://lists.gnu.org/archive/html/emacs-devel/2011-01/msg00624.html | CC-MAIN-2016-40 | refinedweb | 297 | 65.12 |
On 2/22/13, Branko Čibej <brane@wandisco.com> wrote:
> On 22.02.2013 11:41, Jure Zitnik wrote:
>> On 2/22/13 3:03 AM, Branko Čibej wrote:
[...]
>>> I have to say that for once I completely agree with Olemis. NULL table
>>> column, and empty UI prefix equals "it all looks exactly like it used to
>>> before the migration" and it also can't collide with any existing
>>> product names or prefixes.
>>>
>>> Furthermore, doing it this way, if a user installs Bloodhound but
>>> doesn't want to bother with product namespaces, everything will Just
>>> Work.
>>>
>> I agree with the 'Just Work' part, I don't agree with tickets in
>> global scope.
>>
>>.
>
[...]
>
> Under the hood, this can be a completely different thing than the global
> scope.
>
jftr , I confirm this is what I had in mind since the beginning .
>
> P.S.: By the way, do we test upgrades from Trac to BH? If not, why not? :)
>
We should . That's another argument against performing MP upgrade in
EnvironmentStub.__init__ method (i.e. at the beginning of one such
test the env would be upgraded already o.O ) . In general , testing
upgrades will be easier after committing BEP-5 and refactoring
MultiproductSystem procedure accordingly . AFAICR franco included some
test cases for upgrades so it seems it all (or good part of it ;) will
come in a single package .
@franco : coluld you please confirm ?
--
Regards,
Olemis. | http://mail-archives.apache.org/mod_mbox/bloodhound-dev/201302.mbox/%3CCAGMZAuMNRhYTq3xg=SA0mTNOfq=JOWksPvJcX4=MiivMZTXYQA@mail.gmail.com%3E | CC-MAIN-2018-30 | refinedweb | 234 | 75.61 |
cimmof(1) cimmof(1)
NAME cimmof - compile MOF files into the CIM Repository
SYNOPSIS cimmof -h | --help
cimmof --version
cimmof [ -w ] [ -E ] [ -uc ] [ -aE | -aV | -aEV ] [ -I path ] [ -n namespace ] [ --namespace namespace ] [ --xml ] [ --trace ] [ mof_file ... ]
Remarks Only a superuser or user with write access to the default or specified namespace can run the cimmof command to compile MOFs in the CIM Reposi- tory.
Superclasses must be compiled before subclasses, else the compile will fail.
It is strongly recommended that MOF files include all necessary sub- classes, so they can compile properly even if certain classes are not in the CIM Repository.
DESCRIPTION
       The cimmof command is the command-line interface to the MOF compiler; it compiles MOF files into CIM classes and instances that are stored in the CIM Repository. Paths to MOF files referenced by include pragmas can be supplied with the -I command line option.
The -n option can be used to specify a CIM Repository namespace in which the CIM classes and instances will be compiled. If this option is not specified, the default namespace is root/cimv2 (with the exception of provider registration schemas).
For provider registration schemas, if the -n option is not specified, the default namespace is root/PG_InterOp. If the -n option is specified, the namespace specified must be root/PG_InterOp; otherwise, the error message "The requested operation is not supported." is returned. For provider MOFs, the namespace specified must match one of the namespaces specified in the PG_ProviderCapabilities class schema definition.
Options The cimmof command recognizes the following options:
-aE Allow Experimental Schema changes.
-aEV Allow both Experimental and Version Schema changes.
-aV Allow both Major and Down Revision Schema changes.
-E Syntax check only
-h, --help Display command usage information.
-I path Specify the path to included MOF files. This path may be relative or absolute.
If the input MOF file has include pragmas and the included files do not reside in the current directory, the -I option must be used to specify a path to them on the cimmof command line.
-n Override the default CIM Repository namespace. The namespace specified must be a valid CIM namespace name. For the definition of a valid CIM namespace name, refer to the Administrator's Guide. For provider registration schemas, the namespace specified must be root/PG_InterOp.

--namespace Override the default CIM Repository namespace. The namespace specified must be a valid CIM namespace name. For the definition of a valid CIM namespace name, refer to the Administrator's Guide. For provider registration schemas, the namespace specified must be root/PG_InterOp.
--trace Trace to file (default to stdout)
-uc Allow update of an existing class definition.
--version Display CIM Server version.
-w Suppress warning messages.
When compiling the MOF files, if there are CIM elements (such as classes, instances, properties, or methods) defined in the MOF files which already exist in the CIM Repository, the cimmof command returns warning messages. The -w option can be used to suppress these warning messages.
--xml Output XML only, to stdout. Do not update reposi- tory.
EXIT STATUS The cimmof command returns one of the following values:
0 Successful completion 1 Error
When an error occurs, an error message is written to stderr and an error value of 1 is returned.
USAGE NOTES The cimmof command requires that the CIM Server is running. If an operation requires more than two minutes to be processed, the cimmof command prints a timeout message and returns an error value.
DIAGNOSTICS Error trying to create Repository in path localhost:5988: Cannot con- nect to: localhost:5988 Failed to set DefaultNamespacePath.
The CIM Server is not running. Start the CIM Server with the cimserver command and re-run cimmof. For more information, see the Administrator's Guide.
EXAMPLES
       To compile a MOF file into the default namespace in the CIM Repository, issue the cimmof command with no options.
cimmof processInfo.mof
Compile the MOF files into the "root/application" namespace.
cimmof -nroot/application test1.mof test2.mof
Compile the MOF file defined in the directory ./MOF with the name CIM- Schema25.mof, and containing include pragmas for other MOF files also in the ./MOF directory.
cimmof -w -I./MOF MOF/CIMSchema25.mof
Display Usage Info for the cimmof command.
cimmof -h
SEE ALSO cimserver(1).
cimmof(1) | http://man.linuxtool.net/centos6/u7/man/1_cimmof.html | CC-MAIN-2019-43 | refinedweb | 665 | 57.57 |
Before you begin coding, I would like to explain how I organized the project, what tools I used for its development, how I tested it, and where to find the appropriate files.
I used an integrated development environment (IDE) called Java Development Environment (JDE), which is a nice add-on to Emacs. Written by Paul Kinnucan, JDE incorporates great features, like an integrated debugger, code completion, and comment generation.
I also used Ant 1.1 to build this project. Ant is a powerful scripting tool, similar to the make utility. But unlike make, Ant easily defines rules by using an XML file. Because it's written in Java, Ant is portable. If you are unfamiliar with the Ant tool, read Michael Cymerman's excellent article, "Automate your build process using Java and Ant" (JavaWorld, October 20, 2000).
Listing 1 provides the content of my Ant file,
build.xml:
If you use my Ant file, don't forget to change the base directory at the top of the file to your base directory. Refer to Figure 1 for more details on this project's directory structure.
Every Java programmer should complete some form of unit testing. Unit testing not only tests your code, it can also provide excellent examples on how to use your code. In this series, all the code examples come from test fixtures.
To test my code, I used JUnit 3.2, a high-quality testing framework. I specifically like JUnit's ability to test code as either a black box, as the code-user will see it, or a white box, from the inside. The two testing modes differ based on the location of your test fixtures. If you place your test fixtures in the same package as your test program, then you perform white-box testing; you can do black-box testing by placing your test fixtures in another package. In this series, the test fixtures will feature black-box testing.
For the framework's file organization, see Figure 1. In the root directory named
PrintFrameWork, you will find all the subdirectory files related to this project.
Figure 1. Project file structure
Table 1 defines each directory:
Now that you know the files' location, you can begin coding. Start by implementing all the measurement classes. If you need to review the framework's design, refer to UML Diagram 1.
You must implement the measurement classes because every other class in the framework relies on them.
The
PFUnit class forms the heart of the measurement system; only the
getPoints() and
setPoints() methods are left
abstract. Place your code that converts the measurement unit to points in
getPoints(), and the code that converts from points to the measurement unit, in the
setPoints() method.
All the basic math operations have been implemented in
PFUnit. The math methods support either a
double value or a
PFUnit class as their input. When you pass a
double value in parameters, the method assumes that the value is in the measurement unit represented by the class. For example,
a
double value passed to one of the math methods in the
PFInchUnit class is assumed to be in inches. The implementation of
PFUnit is in Listing 2.
The next code segment shows how easily you can implement your own measurement system. This segment implements the
PFInchUnit class.
 1|package com.infocom.print;
 2|
 3|/**
 4| * Class: PFInchUnit <p>
 5| *
 6| * @author Jean-Pierre Dube <jpdube@videotron.ca>
 7| * @version 1.0
 8| * @since 1.0
 9| * @see PFUnit
10| */
11|
12|public class PFInchUnit extends PFUnit {
13|
14|  //--- Private constants declarations
15|  private final static int POINTS_PER_INCH = 72;
16|
17|
18|  /**
19|   * Constructor: PFInchUnit <p>
20|   *
21|   */
22|  public PFInchUnit () {
23|
24|  }
25|
26|
27|  /**
28|   * Constructor: PFInchUnit <p>
29|   *
30|   * @param parValue a value of type double
31|   */
32|  public PFInchUnit (double parValue) {
33|
34|    super (parValue);
35|
36|  }
37|
38|
39|  /**
40|   * Method: getPoints <p>
41|   *
42|   * Return the result of the conversion from
43|   * inches to points.
44|   *
45|   * @return a value of type double
46|   */
47|  public double getPoints () {
48|
49|    return (getUnits () * POINTS_PER_INCH);
50|
51|  }
52|
53|
54|  /**
55|   * Method: setPoints <p>
56|   *
57|   * @param parPoints a value of type double
58|   */
59|  public void setPoints (double parPoints) {
60|
61|    setUnits (parPoints / POINTS_PER_INCH);
62|
63|  }
64|}// PFInchUnit
Line 49 in the code above returns the units (inches) converted to points. There are 72 points per inch, so converting inches
to points requires simply multiplying the inch value by 72. The
PFUnit library features two other measurement unit classes:
PFCmUnit and
PFPointUnit.
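For comparison, the same points-as-common-currency idea can be sketched outside Java. This small Python version mirrors getPoints()/setPoints() for inches and centimetres (an illustration only, not part of the framework):

```python
POINTS_PER_INCH = 72.0
POINTS_PER_CM = 72.0 / 2.54      # one inch is exactly 2.54 cm

class InchUnit:
    """Value stored in inches; points used as the exchange format."""
    def __init__(self, value=0.0):
        self.units = value

    def get_points(self):
        return self.units * POINTS_PER_INCH

    def set_points(self, points):
        self.units = points / POINTS_PER_INCH

class CmUnit:
    """Value stored in centimetres."""
    def __init__(self, value=0.0):
        self.units = value

    def get_points(self):
        return self.units * POINTS_PER_CM

    def set_points(self, points):
        self.units = points / POINTS_PER_CM

# Converting between units always goes through points, as in PFUnit:
one_inch = InchUnit(1.0)
as_cm = CmUnit()
as_cm.set_points(one_inch.get_points())
print(round(as_cm.units, 2))  # -> 2.54
```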
Next, implement the
PFPoint,
PFSize, and
PFRectangle classes. Use those classes to represent geometrical coordinates with the aforementioned measurement system.
The
PFPoint class has more functionality than the point classes included in the Java Print API. For example,
PFPoint can add and subtract another
PFPoint. Since the math operations are repetitive, I decided to include the code in the class itself; it also improves our object-oriented
design.
To represent a size in the framework, use the
PFSize class. Its only noteworthy feature is the
scale() method, which allows you to scale the size by two factors: one for the width and one for the height.
scale() is useful for zooming pages.
Question:
I have a CSV file with several entries, and each entry has 2 unix timestamp formatted dates.
I have a method called
convert(), which takes in the timestamp and converts it to
YYYYMMDD.
Now, since I have 2 timestamps in each line, how would I replace each one with the new value?
EDIT: Just to clarify, I would like to convert each occurrence of the timestamp into the
YYYYMMDD format. This is what is bugging me, as
re.findall() returns a list.
Solution 1:
I assume that by "unix timestamp formatted date" you mean a number of seconds since the epoch. This assumes that every number in the file is a UNIX timestamp. If that isn't the case you'll need to adjust the regex:
import re, sys

# your convert function goes here

regex = re.compile(r'(\d+)')

for line in sys.stdin:
    sys.stdout.write(regex.sub(lambda m: convert(int(m.group(1))), line))
This reads from stdin and calls convert on each number found.
The "trick" here is that
re.sub can take a function that transforms from a match object into a string. I'm assuming your convert function expects an int and returns a string, so I've used a lambda as an adapter function to grab the first group of the match, convert it to an int, and then pass that resulting int to convert.
Solution 2:
If you know the replacement:
p = re.compile( r',\d{8},')
p.sub( ','+someval+',', csvstring )
if it's a format change:
p = re.compile( r',(\d{4})(\d\d)(\d\d),')
p.sub( r',\3-\2-\1,', csvstring )
EDIT: sorry, just realised you said python, modified above
Solution 3:
Not able to comment on your question, but did you take a look at Python's csv module?
Solution 4:
I'd use something along these lines. A lot like Laurence's response but with the timestamp conversion that you requested and takes the filename as a param. This code assumes you are working with recent dates (after 9/9/2001). If you need earlier dates, lower 10 to 9 or less.
import re, sys, time

regex = re.compile(r'(\d{10,})')

def convert(unixtime):
    return time.strftime("%Y%m%d", time.gmtime(unixtime))

for line in open(sys.argv[1]):
    sys.stdout.write(regex.sub(lambda m: convert(int(m.group(0))), line))
EDIT: Cleaned up the code.
Sample Input
foo,1234567890,bar,1243310263 cat,1243310263,pants,1234567890 baz,987654321,raz,1
Output
foo,20090213,bar,20090526 cat,20090526,pants,20090213 baz,987654321,raz,1 # not converted (too short to be a recent)
MDI Form in C Sharp
This is a brief introduction to a Multiple Document Interface (MDI) example in C#. If you don't know about MDI, see the Tek Eye article What is MDI Form? The MDI User Interface. A quick summary: MDI is both a style of User Interface (UI) and the name given to the type of window component in an application (the MDI parent form or MDI child form). The MDI UI design style is no longer popular; however, for complex business, industry and science software (especially on large displays) it is a useful design pattern.
Two major Microsoft UI technologies can be used in C# for Windows desktop applications: WinForms (short for Windows Forms) and Windows Presentation Foundation (WPF). Both are well established, though WinForms is the earlier technology. This tutorial is not concerned with WinForms vs WPF or the relative merits of one over the other. The C# MDI example discussed here uses WinForms because MDI functionality is built in, without the need for additional components, and the code is straightforward and easy to understand. This article concentrates on the practicality of WinForms over the beauty of WPF. (It assumes that Microsoft Visual Studio is installed and that a simple C# WinForms project can be created, in which case an MDI program can be developed.)
A C# MDI Project in Visual Studio
An example text editing MDI application is used to show MDI in action. This text editor for Windows shows the classic MDI features. It supports multiple documents and the text editing Windows can minimised, maximised, and dragged around within a larger parent window. Start a new Visual C# Windows Forms Application, here it is called Many Notes. With the Many Notes project highlighted in Visual Studio use the File menu or context menu (normally right-click) and use the Add option. Selecting MDI Parent Form add a new Windows Form named MDIParent.cs.
Open the Progam.cs file and change Form1 to MDIParent (or the name given to the MDI parent form). I.e.
Application.Run(new Form1()); is changed to
Application.Run(new MDIParent());. The project now has a basic MDI skeleton in place with a menu strip, toolbar and status strip. Run the application (use Start). Each click on the document icon (New) will generate a new child form. These forms can be rearranged, maximised, and minimised within the MDI parent. The Windows menu provides options to rearrange all the windows (cascade, tile vertically, tile horizontally).
Adding Simple Text Editing to Each Form
Open MDIParent and change the title, setting the Text property, e.g. to Many Notes. Notice that the IsMdiContainer property is set to true. Now rename Form1.cs to FormNote and select Yes to the question asking to rename all references. Open FormNote and set the Text property (form title) to New Note.
Drop a TextBox on to FormNote and set the Name to TextNote. Set the text box Multiline property to true. Drag the text box to fill the form and set the Anchor to all four sides (Top, Bottom, Left, Right).
In the MDIParent.cs file change the ShowNewForm function to create new instances of FormNote, i.e. change
Form childForm = new Form(); to
Form childForm = new FormNote();. When the application is now started all the new windows have their own text editing ability. (Note: The TextBox control is very limited for text editing. For a proper note taking application a custom text editor control is more suitable, for example the TFEdit control.)
Uniquely Identify Each Child Form
To help easily identify each child form when the application is running it can be given a unique identifier. Add a public property to the form using
public long Id { get; set; }. Now if a specific child form needs special processing it can be found by iterating through all the child forms in the MDI parent code. This is useful when the application is closed and additional checks need to be made for each form (such as saving data).
private void DoFormWork(long FormId)
{
    foreach (Form form in this.MdiChildren)
    {
        if (((FormNote)form).Id == FormId)
        {
            // do specific form processing
        }
    }
}
Remembering a File Name
As for defining the Id property, a file name can be stored for each child form:
using System.Windows.Forms;

namespace ManyNotes
{
    public partial class FormNote : Form
    {
        public long Id { get; set; }
        public string FileName { get; set; } = string.Empty;

        public FormNote()
        {
            InitializeComponent();
        }
    }
}
See Also
- Walkthrough: Creating an MDI Form with Menu Merging and ToolStrip Controls
- The TFEdit (Text File Edit) component for WinForms .NET applications.
- View the Tek Eye full Index for other articles.
Author: Daniel S. Fowler
example of my problem:
a program is made of three files, 1.py , 2.py , 3.py ,
i start the program from 1.py and use 'import' for the other two,
# 1.py import 2, 3
my question is, how can i use stuff in 1 and 2 from 3 , i can't get it to work no matter what i try, python just complains about no such global variable, it must be possible somehow! ,
i've tried global statement but that just seems to work inside the same file and from that file containing the imports,
arch + gentoo + initng + python = enlisy
Offline
1 imports 3
and you want 3 to import 1
oh boy..,
arch + gentoo + initng + python = enlisy
Offline
ok if I understand correctly now, you want some 'local' variables to be included in a seperate file because you cannot stand the one file being too long.
well If I were in your shoes (those english phrases are so funny), I would just make those variables global if this doesn't work (consider posting some code):
[nk@Freud tmp]€ cat fo.py a='1' [nk@Freud tmp]€ cat gab.py from fo import * print a [nk@Freud tmp]€ python gab.py 1
Offline
arrrrrgghhh!!!
i know that way zeppelin but how do i access variables the other way around,
like in my example above, i import a few files in 1.py , when i start 1.py i start a few classes in 2 and 3 , the problem is now how those classes can access variables from inside 1,
arch + gentoo + initng + python = enlisy
Offline
Yeah calm down .....
post some code
Have you read the docs ?
Mr Green
Offline
when i start 1.py i start a few classes in 2 and 3,
Listen, I want to help you, I don't understand this setence though. Either post some of you code (or a link to it), or I hope someone understands you better.
The only thing I 've understand is that (in 1.py) you import 2, 3
and then that you have some classes in 2.py and 3.py and that those classes want to use variables from the 1.py.
again if you don't post the structure of 2.py and 3.py all you're gonna say is aaaaaaaarghhh!
1.py a=1 2.py from ga import * class yeah: print a
now if 1.py has those variables in a class then you should make class yeah a subclass of the class in 1.py
if you have those variables in 1.py in a class and you also have them private, you need to write in the the class of 1.py a getter (and maybe a setter) too.
I know, I confused you on purpose because of your 'argh'. Post some code of 2.py and 3.py if the above don't work.
Also read … l6u351.htm
ps. I'm a greek and xerxes was beaten the hell out by my (ancient) ancenstors. So you can keep saying 'aaaaaargh' , I just want to thank Mr. Green,
well me again, here I understand differently.
so 1.py has
import 3
but you also want 3.py to use 1?
THE ONLY THE WAY for 3 to use 1 is to import it
and then one file imports the other which import the first (that's why I said oh boy)
if you really have this problem, then
1.py *should* just keep (store) variables/settings or whatever and NOT import 3
then 3.py can easily import 1 and use it
I believe this is what you wanted. if not, well I could be an idiot, I let others judge
Offline
i've rearranged my code a bit and it's working but i haven't solved the initial problem yet, maybe it's not possible to do it that way,
Edit: i don't wan't to import 1 to 3, i just want access to a few objects in 1 from 3, not the code,
arch + gentoo + initng + python = enlisy
Offline
I don't wan't to import 1 to 3, i just want access to a few objects in 1 from 3, not the code,
so your main code is in 3.py and you need some vars (or other staff from 1.py)
in 3.py
from 1 import *
of course tha implies that in 1.py you don't import 3! (not only it won't work, but also won't help you write good code). It remains a mystery that you don't post some code so all could understand better. At least it seems now we(I) understand that your main file is 3.py and you want stuff from 1. import those stuff and you're done
if it doesn't work, and you don't want to make your code public, pm me with a link to your code, so I can help you ON THE EXACT CODE.
Offline
i think you've been right all the time zeppelin, i did an experiment now as you said in your last post, use one file to store variables and you can use it in all files you want using import, this way i get access to the exact same objects, my code is massive and it wouldn't make any sense to post it,
my initial question is probably not possible to solve but i can explain it one more time,
#1.py import 3 class Foo: __init__(): #here i start a class from 3 MyClass = 3.Foo2() #here is a variable that i want to use as a global self.bar = someobject()
in my case the class from 3 is gtk stuff,
when i use that i want to use self.bar from 1 but maybe it's not possible,
my question is stupid from the beginning!!!
arch + gentoo + initng + python = enlisy
Offline
well if you want that variable to be a global one you shouldn't make it self.bar
or else all the classes that you want to use this var, should be childs of class Foo (the class in 1.py)
do in 1.py
global bar
and then
bar = someobject()
Offline
.... isnt it as simple as....
import 2, 3
print 2.variablename
print 3.anothervariable
at least, thats what I've done with my applications. btw, 123 isnt really great naming. and im not sure if you're allowed to.
iphitus
Offline
hey xerxes....
I don't think people really understand your issue (and I don't claim to either) - but I will say this:
If you have a design that won't work due to the rules, then the design may be flawed... there has to be a different way to accomplish the same result within the rules of python...
perhaps making a separate class simply to contain some passed around variables?
# 4.py class Vars: __init__(): self.someVar = '' self.anotherVar = [] self.bar = someobject() # 1.py import 3 from 4 import * class Foo: __init__(): if(Vars.someVar == '') Vars.someVar = 2.Foo2() self.bar = Vars.bar
Now keep in mind, I'm not a python god... but from what I can gather, that would work.... maybe
Offline
This may (or may not) be helpful:
==> m1.py <== import m2 print 'in m1' x = 99 m2.importme(locals()) print 'from m1, m2.x=',m2.x m2.showme() ==> m2.py <== x = 42 print 'in m2' def importme(mod_locals): global m1x m1x = mod_locals['x'] def showme(): print 'from m2, m1x=',m1x
And here's the output from running m1.py:
in m2 in m1 from m1, m2.x= 42 from m2, m1x= 99
Offline
wow alterkacker show this code to a normal person (
) and he'll go back to C
Offline
Normal people? Sorry, don't believe I know any. And if I did I certainly wouldn't expect to find them here.
Anyhow, I don't think it's quite that bad. The 'locals()' function returns the dictionary (in the pythonic sense) of local variables in module m1, and module m2 can then use that dictionary to refer back to those variables. In fact, routine 'importme()' could even do something like:
mod_locals['x'] += 1
to alter the value of variable 'x' in module m1. From the googling I've done, this is a perfectly kosher technique. I've even tried adding a variable to m1's namespace with something like:
mod_locals['newvar'] = 999
in importme(); though one source I found said not to do that for reasons I don't remember.
Offline
Pages: 1 | https://bbs.archlinux.org/viewtopic.php?pid=58574 | CC-MAIN-2016-30 | refinedweb | 1,437 | 82.14 |
This topic discusses steps you can take to troubleshoot and fix problems with the
Cassandra datastore. Cassandra is a
persistent datastore
that runs in the
cassandra component of the
hybrid runtime architecture.
See also
Runtime service configuration overview.
Cassandra pods are stuck in the Pending state
Symptom
When starting up, the Cassandra pods remain in the Pending state.
Error message
When you use
kubectl to view the pod states, you see that one or more
Cassandra pods are stuck in the
Pending state. The
Pending state indicates that Kubernetes is unable to schedule the pod
on a node: the pod cannot be created. For example:
kubectl get pods -n namespaceNAME READY STATUS RESTARTS AGE adah-resources-install-4762w 0/4 Completed 0 10m apigee-cassandra-0 0/1 Pending 0 10m ...
Possible causes
A pod stuck in the Pending state can have multiple causes. For example:
Diagnosis
Use
kubectl
to describe the pod to determine the source of the error. For example:
kubectl -n namespace describe pods pod_name
For example:
kubectl -n apigee describe pods apigee-cassandra-0
The output may show one of these possible problems:
- If the problem is insufficient resources, you will see a Warning message that indicates insufficient CPU or memory.
- If the error message indicates that the pod has unbound immediate PersistentVolumeClaims (PVC), it means the pod is not able to create its Persistent volume.
Resolution
Insufficient resources
Modify the Cassandra node pool so that it has sufficient CPU and memory resources. See Resizing a node pool for details.
Persistent volume not created
If you determine a persistent volume issue, describe the PersistentVolumeClaim (PVC) to determine why it is not being created:
- List the PVCs in the cluster:
kubectl -n namespace get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cassandra-data-apigee-cassandra-0 Bound pvc-b247faae-0a2b-11ea-867b-42010a80006e 10Gi RWO standard 15m ...
- Describe the PVC for the pod that is failing. For example, the following command describes the PVC bound to the pod
apigee-cassandra-0:
kubectl apigee describe pvc cassandra-data-apigee-cassandra-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 3m (x143 over 5h) persistentvolume-controller storageclass.storage.k8s.io "apigee-sc" not found
Note that in this example, the StorageClass named
apigee-scdoes not exist. To resolve this problem, create the missing StorageClass in the cluster, as explained in Change the default StorageClass.
See also Debugging Pods.
Cassandra pods are stuck in the CrashLoopBackoff state
Symptom
When starting up, the Cassandra pods remain in the CrashLoopBackoff state.
Error message
When you use
kubectl to view the pod states, you see that one or more
Cassandra pods are in the
CrashLoopBackoff state.
This state indicates that Kubernetes is unable to create the pod. For example:
kubectl get pods -n namespaceNAME READY STATUS RESTARTS AGE adah-resources-install-4762w 0/4 Completed 0 10m apigee-cassandra-0 0/1 CrashLoopBackoff 0 10m ...
Possible causes
A pod stuck in the
CrashLoopBackoff state can have multiple causes. For example:
Diagnosis
Check the Cassandra error log to determine the cause of the problem.
- List the pods to get the ID of the Cassandra pod that is failing:
kubectl get pods -n namespace
- Check the failing pod's log:
kubectl logs pod_id -n namespace
Resolution
Look for the following clues in the pod's log:
Data center differs from previous data center
If you see this log message:
Cannot start node if snitch's data center (us-east1) differs from previous data center
- Check if there are any stale or old PVC in the cluster and delete them.
- If this is a fresh install, delete all the PVCs and re-try the setup. For example:
kubectl -n namespace get pvc
kubectl -n namespace delete pvc cassandra-data-apigee-cassandra-0
Truststore directory not found
If you see this log message:
Caused by: java.io.FileNotFoundException: /apigee/cassandra/ssl/truststore.p12 (No such file or directory)
Verify the key and certificates if provided in your overrides file are correct and valid. For example:
cassandra: sslRootCAPath: path_to_root_ca-file sslCertPath: path-to-tls-cert-file sslKeyPath: path-to-tls-key-file
Node failure
Symptom
When starting up, the Cassandra pods remain in the Pending state. This problem can indicate an underlying node failure.
Diagnosis
- Determine which Cassandra pods are not running:
$ kubectl get pods -n your_namespace NAME READY STATUS RESTARTS AGE cassandra-0 0/1 Pending 0 13s cassandra-1 1/1 Running 0 8d cassandra-2 1/1 Running 0 8d
- Check the worker nodes. If one is in the NotReady state, then that is the node that has failed:
kubectl get nodes -n your_namespace NAME STATUS ROLES AGE VERSION ip-10-30-1-190.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-1-22.ec2.internal Ready master 8d v1.13.2 ip-10-30-1-36.ec2.internal NotReady <none> 8d v1.13.2 ip-10-30-2-214.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-2-252.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-2-47.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-3-11.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-3-152.ec2.internal Ready <none> 8d v1.13.2 ip-10-30-3-5.ec2.internal Ready <none> 8d v1.13.2
Resolution
- Remove the dead Cassandra pod from the cluster.
$ kubectl exec -it apigee-cassandra-0 -- nodetool status
$ kubectl exec -it apigee-cassandra-0 -- nodetool removenode deadnode_hostID
- Remove the VolumeClaim from the dead node to prevent the Cassandra pod from attempting to come up on the dead node because of the affinity:
kubectl get pvc -n your_namespace
kubectl delete pvc volumeClaim_name -n your_namespace
- Update the volume template and create PersistentVolume for the newly added node. The following is an example volume template:
apiVersion: v1 kind: PersistentVolume metadata: name: cassandra-data-3 spec: capacity: storage: 100Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage local: path: /apigee/data nodeAffinity: "required": "nodeSelectorTerms": - "matchExpressions": - "key": "kubernetes.io/hostname" "operator": "In" "values": ["ip-10-30-1-36.ec2.internal"]
- Replace the values with the new hostname/IP and apply the template:
kubectl apply -f volume-template.yaml | https://cloud.google.com/apigee/docs/hybrid/v1.1/ts-cassandra | CC-MAIN-2022-27 | refinedweb | 1,034 | 52.6 |
//**************************************
// Name: Multiplication Table in C
// Description:Here is a very short and simple program that I wrote using C programming language I called this program multiplication table using C. In this program I am using CodeBlocks as my text editor and Dev C++ as my C compiler. The code I'm just using two for loop or nested loop to generate multiplication table. I hope you will find my work useful in your quest in learning C programming language.
If you have some questions please send me an email at jake.r.pomperada@gmail.com and jakerpomperada@yahoo.com.
My mobile number here in the Philippines is 09173084360.
// By: Jake R. Pomperada
//**************************************
#include <stdio.h>
main()
{
int x=0,y=0;
printf("\t\t Multiplication Table");
printf("\n\n");
for ( x=1; x<=12; x++) {
printf("\n");
for (y=1; y<=12; y++) {
printf(" %3d " ,x * y);
}
}
printf("\n\n");
}
Other 70. | http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=13791&lngWId=3 | CC-MAIN-2018-30 | refinedweb | 152 | 66.84 |
Flask TodoMVC: Dataset
This is the third article in the Flask TodoMVC Tutorial, a series that creates a Backbone.js backend with Flask for the TodoMVC app. In the first article, we created a Flask app using the Backbone.js example as a starting point. In the second article, we added server side synchronization. In this article, we add a database backend using dataset.
We will begin with where we left off in the previous article. If you would like to follow along, but do not have the code, it is available on GitHub.
# Optional, use your own code if you followed part 2 $ git clone $ cd flask-todomvc $ git checkout -b dataset part2
Now that we have the code, we are ready to begin.
Install dataset
We are going to use dataset because we love the icon and are feeling lazy.
We need to install it. If you followed part 1 and are using
pip and
virtualenvwrapper,
it looks something like this:
$ workon flask-todomvc $ pip install dataset
Easy enough.
Replace todo list with dataset table
In the previous article we used a Python list to store our todo items. We are going to replace this list with a table in a SQLite database managed by dataset.
Open
server.py and use dataset to connect to a SQLite database named
todos.db and
retrieve the todo table.
import dataset db = dataset.connect('sqlite:///todos.db') todos = db['todos']
That's really all there is to it. The database and table will be loaded or created automatically.
What about the schema? That is also automatically managed by
dataset. These three lines will:
- Connect to a SQLite database, specified by the path, creating the file if it does not exist.
- Load the schema for the
todostable. If the table does not exist, it will be created with an id column.
Let's modify
todo_create to store the items in this table.
@app.route('/todos/', methods=['POST']) def todo_create(): todo = request.get_json() todos.insert(todo) return _todo_response(todo)
So what's going on here?
- Backbone.js posts the new todo item as a JSON document
- We retrieve this JSON request as a dict using Flask
get_json
- Since dataset accepts a dict, we can pass this directly to insert. Again, dataset will detect and add any new columns automatically.
That's it.
NOTE: In a production app you would want to add validation. You may have noticed that we never defined nor identified any fields for our todo items. This is because we are trusting the client to validate any input posted to our server. This is, of course, a bad idea in any production app.
Now that we can add new items, let's modify the
index to load them from the table.
@app.route('/') def index(): _todos = list(todos.all()) return render_template('index.html', todos=_todos)
We retrieve all items from the table and wrap them in a list so the result set is JSON serializable.
Next modify update and delete.
@app.route('/todos/<int:id>', methods=['PUT', 'PATCH']) def todo_update(id): todo = request.get_json() todos.update(todo, ['id']) return _todo_response(todo) @app.route('/todos/<int:id>', methods=['DELETE']) def todo_delete(id): todos.delete(id=id) return _todo_response({})
Here we are updating or deleting the item identified by the id passed with the JSON document. Again, we would add more validation in production.
Finally modify
todo_read.
@app.route('/todos/<int:id>') def todo_read(id): todo = _todo_get_or_404(id) return _todo_response(todo) ... def _todo_get_or_404(id): todo = todos.find_one(id=id) if todo is None: abort(404) return todo
We simply used dataset
find_one to find the item with given id and aborting with
a 404 if not found.
Conclusion
In this article, we used dataset to add database persistence to our todo app. Our todo list will now survive a server restart. We did neglect to add validation or any access control. We may revisit these oversights in future articles. In the next article we will setup unit testing.
The code is available on GitHub with tag dataset or compared to previous article. | http://simplectic.com/blog/2014/flask-todomvc-dataset/ | CC-MAIN-2018-47 | refinedweb | 680 | 68.06 |
Apache Troubleshooting
Just as there are many ways to configure Apache, there are many ways in which things can go wrong. Luckily, most errors fall into a few basic categories. Here are some of the common problems that Rackspace sees with some of our own customers' controlled Apache servers, along with simple solutions that will get you backup in no time:
Permission errors when viewing a user's site-Permissions are set incorrectly on the public web directory. You can either change the web directory's group to the apache group or define the directory so that it is world executable. If on a Red Hat, Fedora, or Debian system, you can use UPGs to allow the user apache into the site owner's web content. See the File Permissions section earlier in the chapter for more information.
Can't access you can get web files at but not at, then you have the wrong or missing ServerAlias setting for that virtual host. The correct block of code should be as follows:
<VirtualHost 10.1.1.1> DocumentRoot /home/bob/web/html/ ServerName example.com ServerAlias </VirtualHost>
Apache won't start or generates run time errors-If httpd is not starting correctly, you may have mistakes in the httpd.conf file that prevent the Apache process from starting properly. Check /var/log/messages with the command grep httpd:/var/log/messages to see if the problem can be easily tracked down (these messages are also echoed to the console). If Apache starts but you continue to get run time errors such as broken graphic images or inoperative links, then the run time errors will be logged to /var/log/httpd/error_log. Use tail to watch this log file in real time as you use a web browser to generate the errors:
# tail -f /var/log/httpd/error_log
DNS problems-At least 50 percent of the time, new and migrating website problems are DNS related. Be sure you check DNS from the top of the DNS namespace to down. Use the whois command to discover who owns and does DNS for a given domain: whois example.com|grep -iA3 "server". After you know the authoritative DNS server(s), query it directly for the FQDN or URL you're interested in: dig [email protected] Make sure that there are valid CNAME records (aliases) or A records (IPs) for each of the URLs defined by a virtual host's ServerName and ServerAlias directives.
A new virtual site doesn't work-Jorge Arrieta, a Server Administrator and RHCE at Rackspace, has found that if you've configured a new virtual host but can't pull up the site in a browser, there are several possible causes. First, restart the server if you did not do so after adding the new host. Also, make sure that the httpd service is configured to have back up after reboots (via chkconfig). Next, make sure that the DNS record points to the appropriate IP address, remembering that any DNS changes may take some time to propagate. If DNS looks good, issue the command
# httpd -S
to parse httpd.conf and list the configured VirtualHost entries. If the new virtual host is not listed, then it is not configured correctly and you will have to correct the configuration or look through /var/log/messages for related errors. | http://codeidol.com/community/nix/apache-troubleshooting/6273/ | CC-MAIN-2017-39 | refinedweb | 558 | 58.82 |
Kind Projector
Dedication
Overview
One piece of Scala syntactic noise that often trips people up is the use of type projections to implement anonymous, partially-applied types. For example:
// partially-applied type named "IntOrA"
type IntOrA[A] = Either[Int, A]

// type projection implementing the same type anonymously (without a name).
({type L[A] = Either[Int, A]})#L
Many people have wished for a better way to do this.
The goal of this plugin is to add a syntax for type lambdas. We do this by rewriting syntactically valid programs into new programs, letting us seem to add new keywords to the language. This is achieved through a compiler plugin performing an (un-typed) tree transformation.
One problem with this approach is that it changes the meaning of (potentially) valid programs. In practice this means that you must avoid defining the following identifiers:
Lambdaand
λ
?,
+?, and
-?
Λ$
α$,
β$, ...
If you find yourself using lots of type lambdas, and you don't mind reserving those identifiers, then this compiler plugin is for you!
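For a concrete sense of what the plugin buys you, here is a sketch of defining a Functor instance for Either with the left type fixed, a common case for type lambdas. The minimal Functor trait below is assumed only for illustration; in practice you would use the one from your FP library:

// Minimal Functor trait, assumed here only for illustration.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// With kind-projector, Either[Int, ?] replaces the noisy
// ({type L[A] = Either[Int, A]})#L type projection shown above.
val eitherFunctor: Functor[Either[Int, ?]] =
  new Functor[Either[Int, ?]] {
    def map[A, B](fa: Either[Int, A])(f: A => B): Either[Int, B] =
      fa.right.map(f)
  }

With this in scope, eitherFunctor.map(Right(2))(_ + 1) evaluates to Right(3).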
Using the plugin
Kind-projector supports Scala 2.10, 2.11, 2.12, and 2.13.0-RC1.
To use this plugin in your own projects, add the following lines to your
build.sbt file:
resolvers += Resolver.sonatypeRepo("releases")

addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.10")

// if your project uses multiple Scala versions, use this for cross building
addCompilerPlugin("org.spire-math" % "kind-projector" % "0.9.10" cross CrossVersion.binary)

// if your project uses both 2.10 and polymorphic lambdas
libraryDependencies ++= (scalaBinaryVersion.value match {
  case "2.10" =>
    compilerPlugin("org.scalamacros" % "paradise" % "2.1.0" cross CrossVersion.full) :: Nil
  case _ =>
    Nil
})
Note: for multi-project builds, put the addCompilerPlugin clause into the settings section of each sub-project.
For maven projects, add the plugin to the configuration of the maven-scala-plugin (remember to use
_2.10,
_2.11 or
_2.12 as appropriate):
<plugin>
  <groupId>net.alchim31.maven</groupId>
  <artifactId>scala-maven-plugin</artifactId>
  ...
  <configuration>
    <compilerPlugins>
      <compilerPlugin>
        <groupId>org.spire-math</groupId>
        <artifactId>kind-projector_2.11</artifactId>
        <version>0.9.4</version>
      </compilerPlugin>
    </compilerPlugins>
  </configuration>
</plugin>
That's it!
Versions of the plugin earlier than 0.6.2 require a different resolver. For these earlier releases, use this:
resolvers += "bintray/non" at ""
Inline Syntax
The simplest syntax to use is the inline syntax. This syntax resembles Scala's use of underscores to define anonymous functions like
_ + _.
Since underscore is used for existential types in Scala (and it is probably too late to change this syntax), we use
? for the same purpose. We also use
+? and
-? to handle covariant and contravariant types parameters.
Here are a few examples:
Tuple2[?, Double]        // equivalent to: type R[A] = Tuple2[A, Double]
Either[Int, +?]          // equivalent to: type R[+A] = Either[Int, A]
Function2[-?, Long, +?]  // equivalent to: type R[-A, +B] = Function2[A, Long, B]
EitherT[?[_], Int, ?]    // equivalent to: type R[F[_], B] = EitherT[F, Int, B]
As you can see, this syntax works when each type parameter in the type lambda is only used in the body once, and in the same order. For more complex type lambda expressions, you will need to use the function syntax.
Function Syntax
The more powerful syntax to use is the function syntax. This syntax resembles anonymous functions like
x => x + 1 or
(x, y) => x + y. In the case of type lambdas, we wrap the entire function type in a
Lambda or
λ type. Both names are equivalent: the former may be easier to type or say, and the latter is less verbose.
Here are some examples:
Lambda[A => (A, A)]              // equivalent to: type R[A] = (A, A)
Lambda[(A, B) => Either[B, A]]   // equivalent to: type R[A, B] = Either[B, A]
Lambda[A => Either[A, List[A]]]  // equivalent to: type R[A] = Either[A, List[A]]
Since types like
(+A, +B) => Either[A, B] are not syntactically valid, we provide two alternate methods to specify variance when using function syntax:
- Plus/minus:
(+[A], +[B]) => Either[A, B]
- Backticks:
(`+A`, `+B`) => Either[A, B]
(Note that unlike names like
?,
+ and
- do not have to be reserved. They will only be interpreted this way when used in parameters to
Lambda[...] types, which should never conflict with other usage.)
Here are some examples with variance:
λ[`-A` => Function1[A, Double]]          // equivalent to: type R[-A] = Function1[A, Double]
λ[(-[A], +[B]) => Function2[A, Int, B]]  // equivalent to: type R[-A, +B] = Function2[A, Int, B]
λ[`+A` => Either[List[A], List[A]]]      // equivalent to: type R[+A] = Either[List[A], List[A]]
The function syntax also supports higher-kinded types as type parameters. The syntax overloads the existential syntax in this case (since the type parameters to a type lambda should never contain an existential).
Here are a few examples with higher-kinded types:
Lambda[A[_] => List[A[Int]]]  // equivalent to: type R[A[_]] = List[A[Int]]
Lambda[(A, B[_]) => B[A]]     // equivalent to: type R[A, B[_]] = B[A]
Finally, variance annotations on higher-kinded sub-parameters are supported using backticks:
Lambda[`x[+_]` => Q[x, List]]  // equivalent to: type R[x[+_]] = Q[x, List]
Lambda[`f[-_, +_]` => B[f]]    // equivalent to: type R[f[-_, +_]] = B[f]
The function syntax with backtick type parameters is the most expressive syntax kind-projector supports. The other syntaxes are easier to read at the cost of being unable to express certain (hopefully rare) type lambdas.
Type lambda gotchas
The inline syntax is the tersest and is often preferable when possible. However, there are some type lambdas which it cannot express.
For example, imagine that we have
trait Functor[F[_]].
You might want to write
Functor[Future[List[?]]], expecting to get something like:
type X[a] = Future[List[a]]
Functor[X]
However,
? always binds at the tightest level, meaning that
List[?] is interpreted as
type X[a] = List[a], and that
Future[List[?]] is invalid.
In these cases you should prefer the lambda syntax, which would be written as:
Functor[Lambda[a => Future[List[a]]]]
Other types which cannot be written correctly using inline syntax are:
Lambda[a => (a, a)](repeated use of
a).
Lambda[(a, b) => Either[b, a]](reverse order of type params).
Lambda[(a, b) => Function1[a, Option[b]]](similar to example).
(And of course, you can use
λ[...] instead of
Lambda[...] in any of these expressions.)
Under The Hood
This section shows the exact code produced for a few type lambda expressions.
Either[Int, ?]
({type Λ$[β$0$] = Either[Int, β$0$]})#Λ$

Function2[-?, String, +?]
({type Λ$[-α$0$, +γ$0$] = Function2[α$0$, String, γ$0$]})#Λ$

Lambda[A => (A, A)]
({type Λ$[A] = (A, A)})#Λ$

Lambda[(`+A`, B) => Either[A, Option[B]]]
({type Λ$[+A, B] = Either[A, Option[B]]})#Λ$

Lambda[(A, B[_]) => B[A]]
({type Λ$[A, B[_]] = B[A]})#Λ$
As you can see, names like
Λ$ and
α$ are forbidden because they might conflict with names the plugin generates.
If you dislike these unicode names, pass
-Dkp:genAsciiNames=true to scalac to use munged ASCII names. This will use
L_kp in place of
Λ$,
X_kp0$ in place of
α$, and so on.
Polymorphic lambda values
Scala does not have built-in syntax or types for anonymous function values which are polymorphic (i.e. which can be parameterized with types). To illustrate that, consider both of these methods:
def firstInt(xs: List[Int]): Option[Int] = xs.headOption

def firstGeneric[A](xs: List[A]): Option[A] = xs.headOption
Having implemented these methods, we can see that the second just generalizes the first to work with any type: the function bodies are identical. We'd like to be able to rewrite each of these methods as a function value, but we can only represent the first method (
firstInt) this way:
val firstInt0: List[Int] => Option[Int] = _.headOption
val firstGeneric0 <what to put here???>
(One reason to want to do this rewrite is that we might have a method like
.map to which we'd like to pass an anonymous function value.)
Several libraries define their own polymorphic function types, such as the following polymorphic version of
Function1 (which we can use to implement
firstGeneric0):
trait PolyFunction1[-F[_], +G[_]] {
  def apply[A](fa: F[A]): G[A]
}

val firstGeneric0: PolyFunction1[List, Option] =
  new PolyFunction1[List, Option] {
    def apply[A](xs: List[A]): Option[A] = xs.headOption
  }
It's nice that
PolyFunction1 enables us to express polymorphic function values, but at the level of syntax it's not clear that we've saved much over defining a polymorphic method (i.e.
firstGeneric).
Since 0.9.0, Kind-projector provides a value-level rewrite to fix this issue and make polymorphic functions (and other types that share their general shape) easier to work with:
val firstGeneric0 = λ[PolyFunction1[List, Option]](_.headOption)
Either
λ or
Lambda can be used (in a value position) to trigger this rewrite. By default, the rewrite assumes that the "target method" to define is called
apply (as in the previous example), but a different method can be selected via an explicit call.
In the following example we are using the polymorphic lambda syntax to define a
run method on an instance of the
PF trait:
trait PF[-F[_], +G[_]] {
  def run[A](fa: F[A]): G[A]
}

val f = Lambda[PF[List, Option]].run(_.headOption)
It's possible to nest this syntax. Here's an example taken from the wild of using nested polymorphic lambdas to remove boilerplate:
// without polymorphic lambdas, as in the slide
def injectFC[F[_], G[_]](implicit I: Inject[F, G]) =
  new (FreeC[F, ?] ~> FreeC[G, ?]) {
    def apply[A](fa: FreeC[F, A]): FreeC[G, A] =
      fa.mapSuspension[Coyoneda[G, ?]](
        new (Coyoneda[F, ?] ~> Coyoneda[G, ?]) {
          def apply[B](fb: Coyoneda[F, B]): Coyoneda[G, B] = fb.trans(I)
        }
      )
  }

// with polymorphic lambdas
def injectFC[F[_], G[_]](implicit I: Inject[F, G]) =
  λ[FreeC[F, ?] ~> FreeC[G, ?]](
    _.mapSuspension(λ[Coyoneda[F, ?] ~> Coyoneda[G, ?]](_.trans(I)))
  )
Kind-projector's support for type lambdas operates at the type level (in type positions), whereas this feature operates at the value level (in value positions). To avoid reserving too many names the
λ and
Lambda names were overloaded to do both (mirroring the relationship between types and their companion objects).
Here are some examples of expressions, along with whether the lambda symbol involved represents a type (traditional type lambda) or a value (polymorphic lambda):
// type lambda (type level)
val functor: Functor[λ[a => Either[Int, a]]] = implicitly

// polymorphic lambda (value level)
val f = λ[Vector ~> List](_.toList)

// type lambda (type level)
trait CF2 extends Contravariant[λ[a => Function2[a, a, Double]]] { ... }

// polymorphic lambda (value level)
xyz.translate(λ[F ~> G](fx => fx.flatMap(g)))
One pattern you might notice is that when λ occurs immediately within [] it is referring to a type lambda (since [] signals a type application), whereas when it occurs after = or within () it usually refers to a polymorphic lambda, since those tokens usually signal a value. (The () syntax for tuple and function types is an exception to this pattern.)
The bottom line is that if you could replace a λ-expression with a type constructor, it's a type lambda, and if you could replace it with a value (e.g. new Xyz[...] { ... }) then it's a polymorphic lambda.
Polymorphic lambdas under the hood
What follows are the gory details of the polymorphic lambda rewrite.
Polymorphic lambdas are a syntactic transformation that occurs just after parsing (before name resolution or typechecking). Your code will be typechecked after the rewrite.
Written in its most explicit form, a polymorphic lambda looks like this:
λ[Op[F, G]].someMethod(<expr>)
and is rewritten into something like this:
new Op[F, G] { def someMethod[A](x: F[A]): G[A] = <expr>(x) }
(The names A and x are used for clarity; in practice, unique names will be used for both.)
This rewrite requires that the following are true:
- F and G are unary type constructors (i.e. of shape F[_] and G[_]).
- <expr> is an expression of type Function1[_, _].
- Op is parameterized on two unary type constructors.
- someMethod is parametric (for any type A it takes F[A] and returns G[A]).
For example, Op might be defined like this:
trait Op[M[_], N[_]] { def someMethod[A](x: M[A]): N[A] }
The entire λ-expression will be rewritten immediately after parsing (and before name resolution or typechecking). If any of these constraints are not met, then a compiler error will occur during a later phase (likely type-checking).
Here are some polymorphic lambdas along with the corresponding code after the rewrite:
val f = Lambda[NaturalTransformation[Stream, List]](_.toList)

val f = new NaturalTransformation[Stream, List] {
  def apply[A](x: Stream[A]): List[A] = x.toList
}

type Id[A] = A
val g = λ[Id ~> Option].run(x => Some(x))

val g = new (Id ~> Option) {
  def run[A](x: Id[A]): Option[A] = Some(x)
}

val h = λ[Either[Unit, ?] Convert Option](_.fold(_ => None, a => Some(a)))

val h = new Convert[Either[Unit, ?], Option] {
  def apply[A](x: Either[Unit, A]): Option[A] = x.fold(_ => None, a => Some(a))
}

// that last example also includes a type lambda.
// the full expansion would be:
val h = new Convert[({type Λ$[β$0$] = Either[Unit, β$0$]})#Λ$, Option] {
  def apply[A](x: ({type Λ$[β$0$] = Either[Unit, β$0$]})#Λ$[A]): Option[A] =
    x.fold(_ => None, a => Some(a))
}
Unfortunately the type errors produced by invalid polymorphic lambdas are likely to be difficult to read. This is an unavoidable consequence of doing this transformation at the syntactic level.
Building the plugin
You can build kind-projector using SBT 0.13.0 or newer.
Here are some useful targets:
compile: compile the code
package: build the plugin jar
test: compile the test files (no tests run; compilation is the test)
console: launch a REPL with the plugin loaded so you can play around
You can use the plugin with scalac by specifying it on the command-line. For instance:
scalac -Xplugin:kind-projector_2.10-0.6.0.jar test.scala
Known issues & errata
When dealing with type parameters that take covariant or contravariant type parameters, only the function syntax is supported.
Here's an example that highlights this issue:
def xyz[F[_[+_]]] = 12345

trait Q[A[+_], B[+_]]

// we can use kind-projector to adapt Q for xyz
xyz[λ[`x[+_]` => Q[x, List]]] // ok

// but these don't work (although support for the second form
// could be added in a future release).
xyz[Q[?[+_], List]]   // invalid syntax
xyz[Q[?[`+_`], List]] // unsupported
There have been suggestions for better syntax, like [A, B]Either[B, A] or [A, B] => Either[B, A] instead of Lambda[(A, B) => Either[B, A]]. Unfortunately this would actually require modifying the parser (i.e. the language itself), which is outside the scope of this project (at least until there is an earlier compiler phase to plug into).
Others have noted that it would be nicer to be able to use _ for types the way we do for values, so that we could use Either[Int, _] to define a type lambda the way we use 3 + _ to define a function. Unfortunately, it's probably too late to modify the meaning of _, which is why we chose to use ? instead.
Future Work
As of 0.5.3, kind-projector should be able to support any type lambda that can be expressed via type projections, at least using the function syntax. If you come across a type for which kind-projector lacks a syntax, please report it.
Disclaimers
Kind-projector is an unusual compiler plugin in that it runs before the typer phase. This means that the rewrites and renaming we are doing are relatively fragile, and the author disclaims all warranty or liability of any kind.
(That said, there are currently no known bugs.)
If you are using kind-projector in one of your projects, please feel free to get in touch to report problems (or a lack of problems)!
Maintainers
The project's current maintainers are:
All code is available to you under the MIT license, available at and also in the COPYING file. | https://index.scala-lang.org/non/kind-projector/kind-projector/0.9.10?target=_2.12 | CC-MAIN-2021-39 | refinedweb | 2,744 | 55.34 |
CS::Math::Noise::Module::Curve Class Reference
[Modifier Modules]
Noise module that maps the output value from a source module onto an arbitrary function curve.
#include <cstool/noise/module/curve.h>
Detailed Description
Noise module that maps the output value from a source module onto an arbitrary function curve.
This noise module maps the output value from the source module onto an application-defined curve. This curve is defined by a number of control points; each control point has an input value that maps to an output value. Refer to the following illustration:
To add the control points to this curve, call the AddControlPoint() method.
Since this curve is a cubic spline, an application must add a minimum of four control points to the curve. If this is not done, the GetValue() method fails. Each control point can have any input and output value, although no two control points can have the same input value. There is no limit to the number of control points that can be added to the curve.
This noise module requires one source module.
Definition at line 80 of file curve.h.
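The curve mapping described above can be sketched in plain Python. This is an illustrative reimplementation of the cubic-spline idea, not the actual Crystal Space code; the function names are made up for this sketch:

```python
def cubic_interp(n0, n1, n2, n3, a):
    # Cubic interpolation between n1 and n2, with n0/n3 as outer guide points.
    p = (n3 - n2) - (n0 - n1)
    q = (n0 - n1) - p
    r = n2 - n0
    s = n1
    return p * a ** 3 + q * a ** 2 + r * a + s

def map_through_curve(points, value):
    # points: (input, output) pairs sorted by input, all inputs distinct,
    # and at least four of them -- mirroring the preconditions stated above.
    assert len(points) >= 4
    index_pos = len(points)
    for i, (inp, _) in enumerate(points):
        if value < inp:
            index_pos = i
            break
    clamp = lambda i: max(0, min(len(points) - 1, i))
    i1, i2 = clamp(index_pos - 1), clamp(index_pos)
    if i1 == i2:                       # outside the curve: clamp to the edge
        return points[i1][1]
    i0, i3 = clamp(index_pos - 2), clamp(index_pos + 1)
    in1, in2 = points[i1][0], points[i2][0]
    a = (value - in1) / (in2 - in1)
    return cubic_interp(points[i0][1], points[i1][1],
                        points[i2][1], points[i3][1], a)

points = [(-1.0, -1.0), (-0.5, -0.5), (0.5, 0.5), (1.0, 1.0)]
print(map_through_curve(points, 0.0))   # 0.0 on this identity-like curve
```

The "minimum of four control points, no duplicate inputs" requirement documented above is exactly what keeps this interpolation well-defined.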
Constructor & Destructor Documentation
Constructor.
Destructor.
Member Function Documentation
Adds a control point to the curve.

- Precondition: No two control points have the same input value.

It does not matter in which order these points are added.
Deletes all the control points on the curve.

- Postcondition: All points on the curve are deleted.
Determines the array index in which to insert the control point into the internal control point array.

- Returns: The array index in which to insert the control point.
- Precondition: No two control points have the same input value.

By inserting the control point at the returned array index, this class ensures that the control point array is sorted by input value. The code that maps a value onto the curve requires a sorted control point array.
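That index search can be sketched with Python's bisect module, assuming the same sorted-by-input invariant (illustrative only, not the class's actual C++):

```python
import bisect

def find_insertion_pos(inputs, new_input):
    # inputs must already be sorted; bisect returns the index at which
    # inserting new_input keeps the list sorted by input value.
    pos = bisect.bisect_left(inputs, new_input)
    if pos < len(inputs) and inputs[pos] == new_input:
        raise ValueError("duplicate control point input value")
    return pos

inputs = [-1.0, -0.5, 0.5, 1.0]
inputs.insert(find_insertion_pos(inputs, 0.0), 0.0)  # list stays sorted
```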
Returns a pointer to the array of control points on the curve.

- Returns: A pointer to the array of control points.
Before calling this method, call GetControlPointCount() to determine the number of control points in this array.
It is recommended that an application does not store this pointer for later use since the pointer to the array may change if the application calls another method of this object.
Definition at line 119 of file curve.h.
Returns the number of source modules required by this noise module.

- Returns: The number of source modules required by this noise module.
Implements CS::Math::Noise::Module::Module.
Definition at line 132 of file curve.
Inserts the control point at the specified position in the internal control point array.
To make room for this new control point, this method reallocates the control point array and shifts all control points occurring after the insertion position up by one.
Because the curve mapping algorithm used by this noise module requires that all control points in the array must be sorted by input value, the new control point should be inserted at the position in which the order is still preserved.
The documentation for this class was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1 | http://www.crystalspace3d.org/docs/online/api-2.0/classCS_1_1Math_1_1Noise_1_1Module_1_1Curve.html | CC-MAIN-2015-06 | refinedweb | 539 | 56.35 |
Episode 4 · July 2, 2014
A glimpse into the "magic" behind the params hash and the various ways it gets populated
The params hash in your Rails application is a concept that a lot of people have trouble with, so I'm going to talk about it and try to debunk the magic that sort of happens with it, because it's actually really simple.
The params hash is a collection of data that has come through your application in that request. This data might come from various places, we might submit a form and that sends over some data, your URL might contain a chunk of the URL that has some data that you want, and you also might have at the end of the URL "GET request parameters", so those are the common ways to get data into the params hash, and Rails just knows to collect those things and put it in there nicely for you to work with. So let's talk about how we go about doing that.
The simplest thing that we can take a look at is GET request parameters, in our config/routes.rb we'll delete the comments and add a simple route
get "/blog", to: "blog#index"
I don't have a blog controller yet, or an index action, so we'll need to create that real quick, so let's do that
app/controllers/blog_controller.rb
class BlogController < ApplicationController def index end end
app/views/blog/index.html.erb
<h1>Blog!</h1>
So when we visit our /blog route, when we visit our Rails application we will see that file. So we'll run our Rails server, and then open up in our browser, localhost:3000/blog and this will show us the blog path.
So we got the view that we created, and we're able to see it, and you can see in our logs that we have "Started GET /blog", so that's all working correctly. Now, a GET request parameter is what you might have seen before in Google URLs and other places around the web, where there's a question mark and then a variable, something like test=1. The page request is actually localhost:3000/blog, but then this (?test=1) extra information is our query parameters. These query parameters basically give extra data; you might use them to render your page, you might not, you can totally ignore them, just like this page did. It didn't blow up or anything, because it can ignore this information unless you want to use it. If you look at your Rails logs again, now we have
Parameters: {"test"=>"1"}
and in our previous request, we didn't have any parameters, so Rails is taking these parameters and pulling them out. Now when it writes the word parameters and gives you this hash, this is actually exactly what's in your params hash. So when you go into your blog_controller.rb, if you have the word params in your action, that's accessing this variable, so they show you exactly what's there so that you can work with it. This is not magic by any means, when you say:
def index params[:test] end
it's going to pull the string of "1" out, and if we say
def index @test = params[:test] end
we can say @test and pass this into our view, we can go to blog/index.html.erb and print out:
<h1>Blog!</h1> <%= @test %>
and when we show our page, we can see that it's "1". If we change the URL to "2", it changes. So what this is doing is reading the test variable from the params hash, pulling out the value for that key, the test key, and then we're just going to save it in a variable. You can use the square brackets and a symbol to match the name, and that will return the correct value. So it's really simple for GET parameters: they get passed into the params hash and there you go, you have data. This is really useful when you want to say /blog?page=3, and so on. It allows you to filter what the page displays; you're not really changing what this page does, but you might filter how this page works. So these are really good for filters, and that is a GET request parameter. The next one we'll take a look at is route parameters. If we go into our routes.rb file and add a new route, and we say get "/blog/:id", we put the colon in to indicate to Rails that the part typed into that section of the URL should be taken and saved into the params hash under the id name.
So when we say we want to view individual blog posts, so we have
get "/blog/:id", to: "blog#show", and with this URL, we can take that stuff that's passed in that section after blog, save it to the ID key inside of the params hash, and then go look it up. So if we go into our blog_controller.rb and we add a show and let's just not actually do a lookup in the database, let's just make this very simple, so we'll have:
app/views/blog/show.html.erb
<h1>Here's my blog post!</h1>
When we visit "/blog/(insert anything here)", you can put in numbers, you could put in words, you name it, they will get passed in to your Rails application, so now you can see that the parameters have the id key and value and ID now is "blabla", and when we passed in "1", it was "1", so you can use this basically to say in this case, we don't want the user to put in "id=1", we actually want the URL to look prettier, so we want it to be more purposeful and this tells the browser and google and other things that this is a different page, there's something else that they're looking at for every one of these, and when you add a query parameter on there, like "page=1", the search engine knows that you're still on the blog like the index page, but you're just looking at a few other different posts on that page, so it's still generally the same page, they've just decided to split it up, and when you do separate URL's, these signify that they're separate pages and they're completely different, so this is good for us for when we pass in the ID or the name of the blog post we can just do a lookup in our database with ActiveRecord, pull that out and display it on the page, so you can edit those, or access those rather in your blog controller in the exact same way, so if you want params[:id], which you'll see often in your Rails examples, if you have a blog post model, which I haven't created one, you could find one though, doing this, and that would just load up your params id number into the .find method and that will go look up the database record based upon the number in the URL, so by default they use numbers, eventually you'll probably change them over to using words and looking them up by slugs, but we can see this just by naming a variable called ID and an instance variable, and then we can print it out here on the page as well just like we did before with tests. So in this case, we can see the number "1", we can change this to "asdf", and so on.
Now here's something interesting, we can have multiple ones, so you can have test in here as well, and you can have the test query parameter in there too, you can have
@test = 1, and if we decide to print out test, we can see both of them. So both are filtering in, Rails knows that if they come from a query parameter, we should put that in, and if they come in from a route parameter, we should put those into the params hash as well. And you can see the parameters are now
{test => 1, id => "asdf"}, so they're both being funneled into that which is awesome.
The last way we get data into our application's params is by submitting data through some other sort of request. We've talked about GET requests so far, and putting stuff in the URL is pretty much how you get data into a GET request. Now POST, PATCH, PUT, and DELETE requests are all designed so that you submit data to the server, and Rails will take that and automatically put it into the params hash. Rails knows to pull from all three of these places.
We're going to use an example of a new blog post, so we're going to say that if you post to the blog URL, we're going to go to the "blog#create" action, so this should create a blog post, or at least that is our intention. What we need now is to create a forum that submits a POST request to this URL, and I'm going to do this on the blog/index.html.erb, so we're just going to create a regular form tag here:
<h1>Blog!</h1> <%= @test %> <%= form_tag blog_path do %> <div> <%= text_field_tag :title %> </div> <div> <%= text_area_tag :body %> </div> <%= submit_tag %> <% end %>
If we refresh the blog page now we have a title field and a body field, I didn't put labels on it because it's simple and we're just submitting data over to the server, so if we actually just call these "Post Title" and "Post body", so when we create this post, or submit the form, it's going to send a post request and we're going to get "unknown action". First thing we need to do is go into the controllers/blog_controller.rb, create an action called "create",
def create redirect_to action: :index end
It's not going to do anything, we're going to be able to see the request in the logs, when we resubmit it, so we'll come back to the same page, and we'll be able to look at the logs to see what's happened. "Started POST "/blog"" is our POST request to the blog path, and it went there, it created the POST request, it submitted data just like we set it to, and when we created the form here, and I said title and body, I made those up, so I wanted to have a params value title and a params value of body and that's what it did. You just dive these names, so ideally with Rails you make these names match what's in your database models, so that it can automatically match them up and you don't have to do it yourself, but for this example, we're just submitting over data and you can see now that the names are going to affect how it shows up in the params hash. So if we modified :title for :post_title, we can see that if we were to refresh this page and put in different text, in our logs we can see that this time post_title comes across, so you are free to name these anything you want, the utf8 and authenticity_token are automatically included with Rails form tags, and they allow it to force your browser into using utf-8, because they use a check mark utf-8 character and then the authenticity token is to allow CSRF protection, basically to keep your forms a little safer. So the ones that we're interested in are the ones we submitted, and the commit here is actually the text for the button that we created ("Create Post"), so if you actually need that information, you can see that the value of a commit button or submit button is going to be the text that is displayed, so that allows you to actually do stuff based upon the button that is submitted if you need it, it's unlikely you do, but it's there in case.
Submitting data in a form is very simple, it's very similar to what the other ID URL things, and the query parameters are, so you're just naming things whatever you want to name them, submitting them to the server, and Rails knows those locations to pull them out, and then cleanly organize them in your params hash. So every time that a request comes over and you're missing data or something, check what params are listed here so that you can see that it really did come over. Maybe you missed it, or maybe it's ignoring it after it came over. So it really came over but you did something wrong in your code. The way that the params hash works is actually very simple, there's the three typical methods of data coming into your server, and that just aggregates all of them together, so generally try not to duplicate names, you don't want to have body in the URL and in your POST form, so you will probably get something overridden, and that will be bad, so don't duplicate the names, but aside from that, there's really nothing to worry about. This is very simple and you can access all of these variables really easily.
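The aggregation described above is conceptually just a merge of the three sources into one dictionary, which is also why duplicated names clobber each other. A toy Python sketch of the idea (not Rails internals; the values mirror this episode's examples):

```python
# Hypothetical request data from the three sources discussed above.
query_params = {'test': '1'}                        # from ?test=1
route_params = {'id': 'asdf'}                       # from /blog/:id
form_params = {'title': 'Hello', 'body': 'World'}   # from a POST body

params = {}
for source in (query_params, route_params, form_params):
    params.update(source)   # a later source overwrites duplicate keys

print(params)
```

This is exactly why reusing a name across sources is risky: whichever source is merged last wins.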
Transcript written by Miguel
I'm working on a small function, that gives my users a picture of how occupied the CPU is.
I'm using

cat /proc/loadavg

to read the load, and I'd like a command that generates an arbitrary amount of it, something like:

makecpudosomething 30
I didn't understand very well if you want to generate arbitrary CPU load or CPU utilization. Yes, they are different things indeed. I'll try to cover both problems.
First of all: load is the average number of processes in the running, runnable or waiting for CPU scheduler queues in a given amount of time, "the one that wants your CPU" so to speak.
So, if you want to generate arbitrary load (say 0.3) you have to run a process for 30% of the time and then remove it from the run queue for 70% of the time, moving it to the sleeping queue or killing it, for example.
You can try this script to do that:
export LOAD=0.3
while true
do
    yes > /dev/null &
    sleep $LOAD
    killall yes
    sleep `echo "1 - $LOAD" | bc`
done
Note that you have to wait some time (1, 10 and 15 minutes) to get the respective numbers to come up, and it will be influenced by other processes in your system. The more busy your system is the more this numbers will float. The last number (15 minutes interval) tends to be the most accurate.
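If you'd rather read those averages programmatically than shell out to cat, Python exposes the same three numbers (a POSIX-only sketch):

```python
import os

# Same three values as /proc/loadavg: the 1-, 5- and 15-minute averages.
one, five, fifteen = os.getloadavg()
print("1m=%.2f 5m=%.2f 15m=%.2f" % (one, five, fifteen))
```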
CPU usage is, instead, the amount of time for which CPU was used for processing instructions of a computer program.
So, if you want to generate arbitrary CPU usage (say 30%) you have to run a process that is CPU bound 30% of the time and sleeps 70% of it.
I wrote an example to show you that:
#include <stdlib.h>
#include <unistd.h>
#include <err.h>
#include <math.h>
#include <sys/time.h>
#include <stdarg.h>
#include <sys/wait.h>

#define CPUUSAGE 0.3     /* set it to a 0 < float < 1 */
#define PROCESSES 1      /* number of child worker processes */
#define CYCLETIME 50000  /* total cycle interval in microseconds */
#define WORKTIME (CYCLETIME * CPUUSAGE)
#define SLEEPTIME (CYCLETIME - WORKTIME)

/* returns t1-t2 in microseconds */
static inline long timediff(const struct timeval *t1, const struct timeval *t2)
{
    return (t1->tv_sec - t2->tv_sec) * 1000000 + (t1->tv_usec - t2->tv_usec);
}

static inline void gettime (struct timeval *t)
{
    if (gettimeofday(t, NULL) < 0)
    {
        err(1, "failed to acquire time");
    }
}

int hogcpu (void)
{
    struct timeval tWorkStart, tWorkCur, tSleepStart, tSleepStop;
    long usSleep, usWork, usWorkDelay = 0, usSleepDelay = 0;

    do
    {
        usWork = WORKTIME - usWorkDelay;
        gettime (&tWorkStart);
        do
        {
            sqrt (rand ());
            gettime (&tWorkCur);
        }
        while ((usWorkDelay = (timediff (&tWorkCur, &tWorkStart) - usWork)) < 0);

        if (usSleepDelay <= SLEEPTIME)
            usSleep = SLEEPTIME - usSleepDelay;
        else
            usSleep = SLEEPTIME;

        gettime (&tSleepStart);
        usleep (usSleep);
        gettime (&tSleepStop);
        usSleepDelay = timediff (&tSleepStop, &tSleepStart) - usSleep;
    } while (1);

    return 0;
}

int main (int argc, char const *argv[])
{
    pid_t pid;
    int i;

    for (i = 0; i < PROCESSES; i++)
    {
        switch (pid = fork ())
        {
        case 0:
            _exit (hogcpu ());
        case -1:
            err (1, "fork failed");
            break;
        default:
            warnx ("worker [%d] forked", pid);
        }
    }
    wait(NULL);
    return 0;
}
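If the C version is more than you need, the same duty-cycle idea (work for 30% of each 50 ms cycle, sleep for the rest) fits in a few lines of Python. This is a rough sketch for one process on one core, without the drift correction above:

```python
import time

CPU_USAGE = 0.3    # target fraction of one core
CYCLE = 0.05       # seconds per work+sleep cycle

def hog_cpu(cycles):
    for _ in range(cycles):
        start = time.perf_counter()
        # Busy-wait for CPU_USAGE of the cycle: this burns CPU.
        while time.perf_counter() - start < CYCLE * CPU_USAGE:
            pass
        # Sleep for the rest of the cycle: this is idle time.
        time.sleep(CYCLE * (1.0 - CPU_USAGE))

hog_cpu(2)   # two cycles, roughly 0.1 s total
```

Note that the Python interpreter's own overhead makes the actual utilization less precise than the C program's.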
If you want to eat up a fixed amount of RAM you can use the program in cgkanchi's answer.
Resolved: Visual Rendering Defects in Flex Data Controls
Data binding remains a core feature of Flex visual components when we want to render data in a Grid- or List-based control. We can even bind a custom component as the item renderer for a Grid or List to display dynamic data. The custom component might consist of just a dynamic image or text, or even a combination of both.
Issue: The main problem we faced with custom component item renderers is the misbehavior of the List and Grid controls. The custom items do not render or refresh when the container is scrolled. If the application is minimized and maximized, the display object goes out of scope. Sometimes the width and height of the inner custom components do not fit properly and overlap with each other, especially in the case of TileList controls.
Reason: Every component in Flex has a property called uid (unique identifier) that is used to assign a unique identifier to each instance of the component. The Flex data controls identify their child elements based on this UID. For non-custom item renderers, the data controls assign the UID value themselves, so the problem described above does not occur. But this is not the case with custom components.
Work-Around Provided:
In order to overcome this situation, we need to do the following:
- Implement the interface IUID that belongs to mx.core package in our custom AS3 component that we would like to render inside those data controls.
- Make the renderer class Bindable.
- Override the set and get methods of the uid property from the IUID interface.
Code Snippet:
package com
{
    import mx.containers.Canvas;
    import mx.core.IUID;

    [Bindable]
    public class MyItemHolder extends Canvas implements IUID
    {
        /* uid variable declaration */
        private var _uid:String;

        /* your other variable declarations go here */

        /* overriding the set and get methods of the IUID interface */
        public override function get uid():String
        {
            return _uid;
        }

        public override function set uid(value:String):void
        {
            this._uid = value;
        }

        /* rest of the functionality */
    }
}
Ordering and 404s (5:38) with Kenneth Love
What if we get our steps out of order? What if someone puts in a bad URL?
class Meta:
    ordering = ['field1', 'field2']
This will cause the model to be ordered by field1, then by field2 if there are any conflicts on field1 (two instances having the same field1 value). Finally, they'll be sorted by id if a conflict still exists.
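The tie-breaking rule can be sketched with plain Python sorting (illustrative only, not Django internals; the step names echo the video):

```python
steps = [
    {'id': 1, 'order': 0, 'name': "What's the deal with strings?"},
    {'id': 2, 'order': 0, 'name': 'Using the shell'},
]

# Same rule as ordering = ['order', 'id']: sort by order, break ties by id.
by_order = sorted(steps, key=lambda s: (s['order'], s['id']))
# Both orders are 0, so the lower id wins and id 1 comes out first.

steps[0]['order'] = 1   # demote the strings step, as in the video...
reordered = sorted(steps, key=lambda s: (s['order'], s['id']))
# ...and now 'Using the shell' (id 2, order 0) comes out first.
```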
get_object_or_404(Model, [selectors]) - Gets an object of Model by using whatever selection arguments have been given. For example: get_object_or_404(User, username='kennethlove') would try to get a User with a username set to "kennethlove". If that User didn't exist, a 404 error would be raised.
What's the long way? Consider this view:
from django.http import Http404
from django.shortcuts import render

from .models import Course


def course_detail(request, pk):
    try:
        course = Course.objects.get(pk=pk)
    except Course.DoesNotExist:
        raise Http404()
    else:
        return render(request, 'courses/course_detail.html', {'course': course})
It's definitely more work!
If you want, you can customize your error views.
- 0:00
We are really tearing through this project.
- 0:02
Our imaginary bosses, investors, clients, whoever, would be really proud of us.
- 0:06
But, you know, we've introduced some functionality that we haven't addressed
- 0:10
yet with our order field.
- 0:12
And we have that 404 just waiting to happen.
- 0:15
So, I'm gonna open up models.py and
- 0:19
we have this order field here and it has this default of zero.
- 0:23
Now, by default,
- 0:24
Django orders records by their ID which is automatically generated and incremented.
- 0:29
The first record created gets a one.
- 0:31
The next one gets a two, and so on.
- 0:33
Now, this is awesome until you need to reorder something.
- 0:36
Like, as sometimes happens when I'm writing a course or
- 0:39
entering into our CMS here at Treehouse.
- 0:42
What if I forget to put in step three, and put in step four first?
- 0:45
Now they'll come out out of order and
- 0:47
you all won't know which order to watch these videos in.
- 0:50
That's not good.
- 0:51
So, we're gonna fix this, with two simple steps.
- 0:54
First, we're gonna add a new section to our model.
- 0:58
Let's call class Meta.
- 1:00
Django models have an optional piece to them known as class meta.
- 1:05
And if you did either of the peewee courses,
- 1:08
you've seen something similar to this already.
- 1:10
It's a class inside of our model class that controls how that model
- 1:13
does a few things.
- 1:14
We're gonna use it to set a default ordering for our instances.
- 1:19
So, we'll say ordering equals order.
- 1:23
So, order them by order.
- 1:26
And if that fails, it'll fall back to the ID.
- 1:28
So, this tells Django to order all of our records by their order attribute.
- 1:32
If there's a conflict, like two records both have an order of two.
- 1:35
Then, like I said, Django automatically puts those in order by their ID.
- 1:39
And we can tell Django to use other fields, as well.
- 1:42
I'll put more info about this into the teacher's notes.
- 1:44
So, let's go check out our course page and make sure that nothing has changed.
- 1:49
Nothing's changed, still looks the same, okay.
- 1:53
Now, let's add a new step in our admin.
- 1:56
So, here's Django Basics.
- 1:58
What's the deal with strings?
- 2:00
And we're going to add a new step here which is, I don't know, Using the shell.
- 2:06
Learn how to use Python's shell to play with the language and get help.
- 2:17
Now, I want this to be first.
- 2:19
Let's go save and continue editing.
- 2:21
So, we'll come right back to this.
- 2:22
Okay, so we have the two steps.
- 2:25
Now, I want this Using the Shell step to be first and it's gonna be second.
- 2:32
Let's check this out.
- 2:32
It's gonna be be second because it has the higher ID.
- 2:36
They both have the same order, order zero, but one has a higher ID.
- 2:41
So, let's fix that.
- 2:42
Now, we can just mark this one as a one and leave this one as a zero.
- 2:46
Or we can mark this one as a two and put this one as a one.
- 2:50
Let's do this one as a one and this one as a zero, because otherwise, the next
- 2:55
one I make will come in first, because it'll automatically get an order of zero.
- 2:58
So, we'll save and continue editing.
- 3:02
We come over here, now Using the Shell should be first,
- 3:04
What's the deal with strings should be second.
- 3:07
And it is, check that out.
- 3:09
That is awesome.
- 3:10
Great to know that we can control something like this so easily.
- 3:15
Now, though, it's time for us to address the possible 404's.
- 3:19
Cuz, you remember, we have this views.py and what if somebody puts in a bad URL?
- 3:27
So, we don't have 18 courses yet.
- 3:33
Look at that, I got a 404.
- 3:35
It's a 404, but this is actually Django's 500 error page.
- 3:38
Now how do I know that it's a 500 page?
- 3:42
Because Django gives me a handy traceback right here.
- 3:45
Tracebacks are only generated when there's an error with your server.
- 3:49
Like a Python exception.
- 3:51
And errors on a server should always generate a 500.
- 3:53
Okay, so let's fix this view so that it gives us a 404, like it should.
- 3:59
Instead of giving us a 500 like it just did.
- 4:02
So back over here in views.py, we need to import the shortcut for
- 4:07
throwing a 404 when an object isn't found in database.
- 4:10
Now, since I said it was a shortcut, you know that there's a long way of doing it.
- 4:14
I'll cover that in the teachers notes.
- 4:17
So, we're already importing render from shortcuts,
- 4:21
let's also import get_object_or_404.
- 4:25
And now, down here where we're doing our course lookup,
- 4:29
let's use get_object_or_404.
- 4:33
So I'll do it on a second line so you can compare these really quick.
- 4:35
Get_object_or_404, the object is of a Course
- 4:41
type and our look-up is where pk equals pk.
- 4:46
So, this part is the same, and that's the get.
- 4:52
And we know it's an object, because it says object, and
- 4:55
the object type is Course.
- 4:58
If that doesn't happen, we're gonna throw a 404.
- 5:01
So, let's delete that line, we don't need that anymore.
- 5:03
We're gonna save again.
- 5:06
Now, let's go try our bad URL.
- 5:07
Remember, this is a 500 error, we've got a trace back,
- 5:12
we've got all this other information.
- 5:15
And look this is a 404, we know that because it says 404 right here.
- 5:20
That's awesome.
- 5:22
Providing good errors for our users is a must.
- 5:24
The last thing we want is to confuse or
- 5:26
lose users because we didn't explain exactly what when wrong.
- 5:29
We can provide custom views and templates for 404, 500, and other error codes,
- 5:34
too if we want.
- 5:35
As always, I'll provide more info about that in the teacher's notes. | https://teamtreehouse.com/library/django-basics/django-templates/ordering-and-404s | CC-MAIN-2016-44 | refinedweb | 1,414 | 84.88 |
?
Testing plugins, while it can be a bit of a pita with the api being mostlyasync, is definitely doable. If you are familiar with nose tests framework thecode excerpt below is a way you can use it to run unit tests on pluginreload
[pre=#0C1021]# Nosetry: import noseexcept ImportError: nose = None
try: times_module_has_been_reloaded += 1except NameError: times_module_has_been_reloaded = 0 #reloaded
RUN_TESTS = nose and times_module_has_been_reloaded
if RUN_TESTS: target = name nose.run(argv= 'sys.executable', target, '--with-doctest', '-s' ]) print '\nReloads: %s' % times_module_has_been_reloaded
You'll sometimes find that you want file fixtures loaded into views forfunctional testing. You can do this 'manually' via view.insert() which is oneway to make file loading synchronous.
[pre=#0C1021]fixtures = ]
def teardown(): while fixtures: v = fixtures.pop() v.window().focus_view(v) v.window().cmd.close()
def load_fixture(f, syntax=u'Packages/Python/Python.tmLanguage'): """
Create a View using `window.new_file` fixture and `manually` load the
fixture as window.open_file is asynchronous.
v = window.open_file(f)
assert v.is_loading()
It's impossible to do:
while v.is_loading():
time.sleep(0.01)
This would just cause S2 to block. You MUST use callbacks or co routines.
"""
view = sublime.active_window().new_file()
edit = view.begin_edit()
view.set_scratch(1)
view.set_syntax_file(syntax)
try:
with codecs.open(f, 'r', 'utf8') as fh:
view.insert(edit, 0, fh.read())
finally:
view.end_edit(edit)
fixtures.append(view)
return view[/pre]
You can actually write a scheduler using sublime.set_timeout and python generator co-routines, using some kind of method to emulate keyboard input.
On windows, for testing Sublime Text 1 plugins, I used to use the SendKeys module.
castles_made_of_sand: How did you get the nose module to be accessible from within sublime? I can't figure out how to make it accessible to the sublime interpreter. | https://forum.sublimetext.com/t/how-do-you-refactor-your-plugins/6988 | CC-MAIN-2016-36 | refinedweb | 290 | 52.97 |
JavaScript operators as functions
Operators provides the JavaScript operators as functions. It provides a standard, short,
and easy to remember interface for addition, multiplication, concatenation, and-ing, or-ing, as well as several two parameter lambdas for non-associative operators, and curried
versions of the binary operators for quick creation of the functions that you end up writing for
map and
filter all the time.
Use it with qualified imports with the yet unfinished module
import syntax or attach it to the short variable of choice. For selling points, here's how it will look with ES7 modules.
132654;// [ 6, 5 ]1234; // [ 2, 3, 4, 5 ]1234; // [ 1, 4, 9, 16 ]1232; // [ 2, 2 ]12 34 ; // [ [ 0, 1, 2 ], [ 0, 3, 4 ] ]
This modules makes is a core part the larger utility library interlude.
MIT-Licensed. See LICENSE file for details. | https://www.npmjs.com/package/operators | CC-MAIN-2016-40 | refinedweb | 141 | 56.89 |
Outputting CSV with Django¶
This document explains how to output CSV (Comma Separated Values) dynamically using Django views. To do this, you can either use the Python CSV library or the Django template system.
Using the Python CSV library¶
Python comes with a CSV library, csv. The key to using it with Django is that the csv module’s CSV-creation capability acts on file-like objects, and Django’s HttpResponse objects are file-like objects.
Here’s an example:
import csv from django.http import HttpResponse def some_view(request): # Create the HttpResponse object with the appropriate CSV header. response = HttpResponse(mimetype='text/csv') response['Content-Disposition'] = 'attachment; filename=somefilename.csv' writer = csv.writer(response) writer.writerow(['First row', 'Foo', 'Bar', 'Baz']) writer.writerow(['Second row', 'A', 'B', 'C', '"Testing"', "Here's a quote"]) return response
The code and comments should be self-explanatory, but a few things deserve a mention:
- header, which contains the name of the CSV file. This filename is arbitrary; call it whatever you want. It'll be used by browsers in the "Save as..." dialogue, etc.
- writerow() your raw strings, and it'll do the right thing.
Handling Unicode¶
Python class provided in the csv module's examples section.
- Use the python-unicodecsv module, which aims to be a drop-in replacement for csv that gracefully handles Unicode.
For more information, see the Python CSV File Reading and Writing documentation.
Using the template system¶
Alternatively, you can use the Django template system to generate CSV. This is lower-level than using the convenient Python csv module, but the solution is presented here for completeness.
The idea here is to pass a list of items to your template, and have the template output the commas in a for loop.
Here's an example, which generates the same CSV file as above:
from django.http import HttpResponse from django.template import loader, Context def some_view(request): # Create the HttpResponse object with the appropriate CSV header. response = HttpResponse(mimetype='text/csv') response['Content-Disposition'] = 'attachment; filename=somefilename.csv' # The data is hard-coded here, but you could load it from a database or # some other source. csv_data = ( ('First row', 'Foo', 'Bar', 'Baz'), ('Second row', 'A', 'B', 'C', '"Testing"', "Here's a quote"), ) t = loader.get_template('my_template_name.txt') c = Context({ 'data': csv_data, }) response.write(t.render(c)) return response
The only difference between this example and the previous example is that this one uses template loading instead of the CSV module. The rest of the code -- such as the mimetype='text/csv' -- is the same.
Then, create the template my_template_name.txt, with this template code:
{% for row in data %}"{{ row.0|addslashes }}", "{{ row.1|addslashes }}", "{{ row.2|addslashes }}", "{{ row.3|addslashes }}", "{{ row.4|addslashes }}" {% endfor %}
This template is quite basic. It just iterates over the given data and displays a line of CSV for each row. It uses the addslashes template filter to ensure there aren't any problems with quotes.. | https://docs.djangoproject.com/en/1.2/howto/outputting-csv/ | CC-MAIN-2014-15 | refinedweb | 492 | 58.28 |
javax.faces.FacesException; 23 import javax.faces.component.UIComponent; 24 import javax.faces.component.UIComponentBase; 25 import javax.faces.component.UIViewRoot; 26 import javax.faces.context.FacesContext; 27 28 /** 29 * A component which handles clearing of cached user input (submittedValue) from 30 * input components whose model has been modified by an invoked flow. 31 * <p> 32 * When a flowcall is triggered by an immediate component, data entered by the user 33 * into input components is not pushed into the model before the call; it just gets 34 * stored in the "submittedValue" property of the input components. This view tree 35 * is then cached, and on return from the flow the original tree is deliberately restored 36 * so that user-entered data is not lost. 37 * <p> 38 * However this is a problem for input components whose model value has been modified by 39 * the flowcall (ie which map to something updated by a return-value from the flow). In 40 * this case we *do* want to discard the submittedValue in the component so that the 41 * new value in the model is displayed. 42 * <p> 43 * There is no automatic way to detect which input components in a page are affected by 44 * the return parameters from a flow. It *is* possible to detect input components whose 45 * value attribute is an EL expression that is identical to an EL expression for a return 46 * value, and that might be implemented in future. However there are many cases where a 47 * dependency cannot be detected with this simple test. 48 * <p> 49 * Instances of this component can be added to the page to explicitly mark components that 50 * need to be cleared on return from a specific flowcall. 51 * <p> 52 * Clearing of the submittedValue obviously does NOT need to be done on the first render 53 * of a view; it is something that is only relevant when a view has had at least one 54 * postback cycle and is then being re-rendered. 
And it is not relevant when a flowcall 55 * is triggered by a non-immediate command-component; in that case there is no _submittedValue 56 * cached in any input component, so updated model values cannot be "hidden" by the 57 * componenent cached value. In fact in this case, the original view is not even cached at 58 * all, as it can simply be recreated. 59 * <p> 60 * This component cannot be used with input components that are within a dataTable component; 61 * in that case the cached submitted-values are actually stored secretly within the dataTable 62 * and simply cannot be accessed by this component. In that case, the only solution is to 63 * apply this component to the entire table, ie clear all user data in all input fields 64 * within the table on flow return. This is not perfect, but is the best that can be done. 65 * <p> 66 * Note that this component does not simply "reset" the component; it actually deletes the 67 * component from the tree and relies on the rendering phase to recreate a new clean instance. 68 * This is done because UIInput has a resetValue method, but UIData does not. 69 */ 70 71 public class ClearOnCommit extends UIComponentBase 72 { 73 public static final String COMPONENT_FAMILY = "javax.faces.Component"; 74 public static final String COMPONENT_TYPE = "org.apache.myfaces.orchestra.flow.components.ClearOnCommit"; 75 76 private String outcome; 77 private String target; 78 79 @Override 80 public String getFamily() 81 { 82 return COMPONENT_FAMILY; 83 } 84 85 /** 86 * The navigation outcome that causes this component to remove the target component 87 * from the component tree. 88 * <p> 89 * Static value only (EL expressions not supported). 90 */ 91 public String getOutcome() 92 { 93 return outcome; 94 } 95 96 public void setOutcome(String outcome) 97 { 98 this.outcome = outcome; 99 } 100 101 /** 102 * Return the JSF component id of the target component to clear. 103 * <p> 104 * Static value only (EL expressions not supported). 
105 */ 106 public String getTarget() 107 { 108 return target; 109 } 110 111 public void setTarget(String target) 112 { 113 this.target = target; 114 } 115 116 /** 117 * If the specified navigation-outcome matches the outcome attribute of this component 118 * then delete the associated target component. 119 * <p> 120 * If this component has no "outcome" property set, then the target component is 121 * cleared on any commit. 122 */ 123 private void clearTargetComponent(String outcome) 124 { 125 if ((this.outcome != null) && !this.outcome.equals(outcome)) 126 { 127 return; 128 } 129 130 if (this.getChildCount() > 0) 131 { 132 this.getChildren().clear(); 133 } 134 135 if (target != null) 136 { 137 UIComponent targetComponent = this.findComponent(target); 138 if (targetComponent == null) 139 { 140 throw new FacesException("Target component for clearOnCommit does not exist:" + target); 141 } 142 targetComponent.getParent().getChildren().remove(targetComponent); 143 } 144 } 145 146 // ================ State Methods ================= 147 148 @Override 149 public void restoreState(FacesContext context, Object state) 150 throws FacesException 151 { 152 Object[] states = (Object[]) state; 153 super.restoreState(context, states[0]); 154 outcome = (String) states[1]; 155 target = (String) states[2]; 156 } 157 158 @Override 159 public Object saveState(FacesContext context) 160 { 161 return new Object[] 162 { 163 super.saveState(context), 164 outcome, 165 target 166 }; 167 } 168 169 // ============ Static methods ================= 170 171 /** 172 * Execute the clearTargetComponent method of each ClearOnCommit component in the specified view tree. 173 * <p> 174 * This clears any _submittedValue property from the "target" component of each clearOnCommit component 175 * in the tree which has an outcome that matches the specified outcome value. 
176 */ 177 public static void executeAllInstances(UIViewRoot viewRoot, String outcome) 178 { 179 applyToAll(viewRoot, outcome); 180 } 181 182 private static void applyToAll(UIComponent c, String outcome) 183 { 184 if (c instanceof ClearOnCommit) 185 { 186 ((ClearOnCommit) c).clearTargetComponent(outcome); 187 } 188 189 if (c.getChildCount() == 0) 190 { 191 return; 192 } 193 194 for(UIComponent child: c.getChildren()) 195 { 196 applyToAll(child, outcome); 197 } 198 } 199 } | http://myfaces.apache.org/orchestra/myfaces-orchestra-flow/xref/org/apache/myfaces/orchestra/flow/components/ClearOnCommit.html | CC-MAIN-2016-22 | refinedweb | 977 | 50.97 |
developed what I thought was a nice integration of readline into
cmd.py. It worked perfectly and it made no assumptions about the
presence or absence of readline. =20
I started preparing a patch file. It involved a few of the java classes
and cmd.py. All was going well and then I noticed that cmd.py was NOT
in CVS. Where did it come from then? It must have come from the
installation. From the docs I see that it may have come originally from
CPython. OK, so what does this mean? Does it mean that my changes also
have to work in CPython?
I'm confused.=20
-----Original Message-----
From: Steve Cohen on behalf of Steve Cohen
Sent: Sat 1/5/2002 6:28 PM
To: Finn Bock; jython-users@...
Cc:=09
Subject: RE: [Jython-users] More fun with Readline
OK. On one level getting cmd.Cmd to use the readline is easy. Just
import org.gnu.readline.Readline and I'm on my way. It works exactly as
I want it to work.
But of course, that won't do, since org.gnu.readline is optional. What
I'd like to do is somehow grab the "interp" InteractiveConsole that is
created in jython.main() and call IT'S raw_input method. But I don't
see a way to do this. interp is a local variable to the main method.
Is there some behind the scenes way I can hook this, to take advantage
of its readline capabilities? That seems like the "right" way to do
this, but is there a way to do it short of modifying jython.java? Seems
kind of an extreme step for my first attempt to fix something in jython.
-----Original Message-----
From: Finn Bock
Sent: Sat 1/5/2002 4:11 PM
To: jython-users@...
Cc: Steve Cohen
Subject: Re: [Jython-users] More fun with Readline
[Steve Cohen]
>There is a stack trace and it's the same whether or not
>showJavaExceptions is true:
That is OK. Thanks.
>[scohen@... scohen]$ jython
>-Dpython.options.showJavaExceptions=3D3Dtrue
>Jython 2.1 on java1.3.0 (JIT: null)
>Type "copyright", "credits" or "license" for more information.
>>>> import dbexts, isql
>>>> d=3D3Ddbexts.dbexts("prod_sport")
>>>> D=3D3Disql.IsqlCmd(d)
>>>> D.use_rawinput=3D3D0
>>>> D.cmdloop()
>Traceback (innermost last):
> File "<console>", line 1, in ?
> File "/usr/local/jython/jython-2.1/Lib/cmd.py", line 79, in cmdloop
>TypeError: write(): 1st arg can't be coerced to String
Not quite the error situation that I expected, but that just goes to
show how important the context is. I have added a bug report about the
situation.
>The 1st arg is None, which is arguably a bug in isql.IsqlCmd.=20
Are you absolutely sure that self.prompt is None. AFAICT D.prompt should
be an instance of the Prompt class and that will cause the stacktrace
above. A None value will cause a NPE which is a different bug. Please
recheck the value of D.prompt.
>The code
>in cmdloop can't handle it (can't coerce it to String). I can work
>around the bug by explicitly setting D.prompt (which is the first
>argument passed in) before executing D.cmdloop() but then you are quite
>right, there still isn't support for any form of readline, either the
>java_readline that is in the interactive mode or even the Ctrl-N,
Ctrl-P
>stuff that the documentation speaks of.
>
>I suppose I could take a look at fixing this, although I'm not
extremely
>familiar with it. If you could point me to where the java_readline
>stuff is integrated into the interactive mode, I could have a go at it.
That code is located in org\python\util\ReadlineConsole.java.=20
>Or, if you'd rather have someone more familiar with the internals do
it,
>I certainly understand.
Well, I prefer that it is implemented by someone who really want it to
work. If you decide to try, feel free to ask questions and seek advice
on jython-dev.
regards,
finn
View entire thread
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/jython/mailman/message/7578810/ | CC-MAIN-2017-04 | refinedweb | 718 | 76.93 |
I have a list of objects in Python and I want to shuffle them. I thought I could use the
random.shuffle method, but this seems to fail when the list is of objects. Is there a method for shuffling object or another way around this?
import random class a: foo = "bar" a1 = a() a2 = a() b = [a1,a2] print random.shuffle(b)
This will fail.
random.shuffle should work. Here's an example, where the objects are lists:
from random import shuffle x = [[i] for i in range(10)] shuffle(x) # print x gives [[9], [2], [7], [0], [4], [5], [3], [1], [8], [6]] # of course your results will vary
Note that shuffle works in place, and returns None.
As you learned the in-place shuffling was the problem. I also have problem frequently, and often seem to forget how to copy a list, too. Using
sample(a, len(a)) is the solution, using
len(a) as the sample size. See for the Python documentation.
Here's a simple version using
random.sample() that returns the shuffled result as a new list.
import random a = range(5) b = random.sample(a, len(a)) print a, b, "two list same:", a == b # print: [0, 1, 2, 3, 4] [2, 1, 3, 4, 0] two list same: False # The function sample allows no duplicates. # Result can be smaller but not larger than the input. a = range(555) b = random.sample(a, len(a)) print "no duplicates:", a == list(set(b)) try: random.sample(a, len(a) + 1) except ValueError as e: print "Nope!", e # print: no duplicates: True # print: Nope! sample larger than population | https://pythonpedia.com/en/knowledge-base/976882/shuffling-a-list-of-objects | CC-MAIN-2020-29 | refinedweb | 273 | 85.49 |
Using Pre/Post-Generate Hooks (0.7.0+)¶
You can have Python or Shell scripts that run before and/or after your project is generated.
Put them in hooks/ like this:
cookiecutter-something/ ├── {{cookiecutter.project_slug}}/ ├── hooks │ ├── pre_gen_project.py │ └── post_gen_project.py └── cookiecutter.json
Shell scripts work similarly:
cookiecutter-something/ ├── {{cookiecutter.project_slug}}/ ├── hooks │ ├── pre_gen_project.sh │ └── post_gen_project.sh └── cookiecutter.json
It shouldn’t be too hard to extend Cookiecutter to work with other types of scripts too. Pull requests are welcome.
For portability, you should use Python scripts (with extension .py) for your hooks, as these can be run on any platform. However, if you intend for your template to only be run on a single platform, a shell script (or .bat file on Windows) can be a quicker alternative.
Writing hooks¶
Here are some details on how to write pre/post-generate hook scripts.
Exit with an appropriate status¶
Make sure your hook scripts work in a robust manner. If a hook script fails (that is, if it finishes with a nonzero exit status), the project generation will stop and the generated directory will be cleaned up.
Current working directory¶
When the hook scripts script are run, their current working directory is the root of the generated project. This makes it easy for a post-generate hook to find generated files using relative paths.
Template variables are rendered in the script¶
Just like your project template, Cookiecutter also renders Jinja template
syntax in your scripts. This lets you incorporate Jinja template variables in
your scripts. For example, this line of Python sets
module_name to the
value of the
cookiecutter.module_name template variable:
module_name = '{{ cookiecutter.module_name }}'
Example: Validating template variables¶
Here is an example of script that validates a template variable
before generating the project, to be used as
hooks/pre_gen_project.py:
import re import sys MODULE_REGEX = r'^[_a-zA-Z][_a-zA-Z0-9]+$' module_name = '{{ cookiecutter.module_name }}' if not re.match(MODULE_REGEX, module_name): print('ERROR: %s is not a valid Python module name!' % module_name) # exits with status 1 to indicate failure sys.exit(1) | https://cookiecutter.readthedocs.io/en/1.7.0/advanced/hooks.html | CC-MAIN-2020-10 | refinedweb | 342 | 51.04 |
[SOLVED] help in getting focus to the dialog
as soon as my program loads, i have a dialog displayed over top of the mainwindow. The dialog is not focused but the mainwindow is. the tab key works for the mainwindow but instead i would like it to work with the dialog. how to set focus to the dialog so that it acts at if i clicked on the title of the dialog.
I have tried all of the following without any luck in getting focus to the dialog.
@ this->clearFocus();
this->setFocus();
this->raise();
this->activateWindow();@
EDIT: this topic is solved. go to page 2 to find out how to set focus to the dialog when setModal is false.
- Eddy Moderators
Have you tried setModal on the dialog?
yes setModal to true works but i need to interact with the mainwindow when the dialog is displayed. therefore, i have setModal to false.
Where were you calling the method(s) listed above in your code from? What's triggering them?
I was calling the methods from the mainwindow.cpp file to load the dialog.cpp file. also, calling setFocus directly from within the dialog.cpp has no effect.
- Eddy Moderators
[quote author="kalster" date="1312954398"]yes setModal to true works but i need to interact with the mainwindow when the dialog is displayed. therefore, i have setModal to false. [/quote]
What kind of interaction do you want? Anything signals and slots could help with?
i am looking for tab interaction. the tab key must work when the dialog displays. currently when the dialog displays over top of the mainwindow, the tab key works for the mainwindow, I am not sure about the signal and slots. it does not look like they could work but i could be wrong
What are you wanting the tab key to do, exactly? Select the dialog? Or are you looking to be able to tab through the widgets on the main window, and then have the tab-action continue onto the dialog's widgets, too? Or something else?
when the dialog displays, there are widgets on the dialog that can be tabbed through but only those widgets on the dialog. The user would also have the option to click widgets on the mainwindow. this is why i have setmodal to false
So to make sure I understand, you want to be able to interact with the mainwindow, but you want the dialog to always keep keyboard focus?
yes. here is the code to the displaying of the dialog.
dialog.h
@#ifndef DIALOG_H
#define DIALOG_H
#include <QDialog>
namespace Ui {
class Dialog;
}
class Dialog : public QDialog
{
Q_OBJECT
public:
explicit Dialog(QWidget *parent = 0);
~Dialog();
private:
Ui::Dialog *ui;
};
#endif // DIALOG_H@
mainwindow.h
@#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
class Dialog;
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = 0);
~MainWindow();
private:
Ui::MainWindow *ui;
Dialog *tt;
};
#endif // MAINWINDOW_H@
dialog.cpp
@#include "dialog.h"
#include "ui_dialog.h"
Dialog::Dialog(QWidget *child) :
QDialog(child),
ui(new Ui::Dialog)
{
ui->setupUi(this);
}
Dialog::~Dialog()
{
delete ui;
}@
mainwindow.cpp
@#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "dialog.h"
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
Dialog *tt = new Dialog(this);
tt->show();
}
MainWindow::~MainWindow()
{
delete ui;
}@
I'm not sure offhand (it's late here and I'm tired). You may be able to play with the "focusPolicy": on your MainWindow (or it's components) and your Dialog. But that's just speculation. I'm not sure on what the details would be to implement such a thing.
ok. i will read the focusPolicy. thank you for your help Mlong. goodnight.
so if i click the pushbutton from the mainwindow, the dialog displays in focus but if i load the dialog when the mainwindow loads then the dialog does not have focus. is there a way to simulate a button being pressed? can i have some examples please?
You can't set the focus to the dialog in the constructor of the main window, as during this time, the main window has no focus.
The focus is set to the main window later on and after that time, you can move the focus, not before.
We have a similar setup in our application. We do it this way:
- in main method main() instantiate the MainWindow subclass
- call mainWindow->show()
- call mainWindow->slotShowDialog()
- call app.exec()
this topic is solved. I showed the mainwindow first and then the dialog right after it as in this code and it works great. i stumbled on this fix after i read Volker post.
comment the w.show(); in main.cpp and add this code to your mainwindow.cpp file
@this->show();
Dialog->show();@ | https://forum.qt.io/topic/8430/solved-help-in-getting-focus-to-the-dialog | CC-MAIN-2017-43 | refinedweb | 789 | 67.35 |
>
I have a model moving using root motion. Currently, it's moving in 4 directions; Forward, Backward, Right, and Left.
Now, I want to make the character move in 45 degrees. I made a chart to understand the axes.
The character has a running animation for every direction, including the diagonal directions. Here is my animator controller set up (image link because I can't have any more attatchments):
I started out with trying to move diagonally left, and tried out a few scripts.
Script #1 (The simplest):"));
myAnimator.SetFloat ("DiagSpeed", Input.GetAxis ("Vertical") + Input.GetAxis ("Horizontal"));
Script #2:
using UnityEngine;
using System.Collections;"));
if (Input.GetAxis ("Vertical") || Input.GetAxis ("Horizontal")) {
if ((Input.GetAxis ("Vertical") < 0f) && (Input.GetAxis ("Horizontal") > 0)) {
myAnimator.SetBool ("isDiagonalLeft", true);
}
} else {
myAnimator.SetBool ("TurningLeft", false);
}
}
}
Script #3 (As an image link because I forgot to save this one and can't add any more attachments):
Results:
Script 1 did not work, and caused the character to move diagonally only when the Down Arrow key was pressed.
Script 2 also did not work, giving me the error: "Operator '||' cannot be applied to operands of type 'float' and 'float'."
Script 3 gave me some bracket errors.
What can I do to get the character to move diagonally properly?
Is there any way to combine two float values in code? If so, how?
Answer by UnityCoach
·
Dec 19, 2016 at 08:07 PM
All you need is in the 2D Blend Tree in Animator.
You simply expose X and Y parameters from the Animator Controller, which you manipulate from the code, like you do.
Then, you add you diagonal animations to the Blend Tree and position them on the 2D graph.
"Expose the X and Y parameters"? Could you explain this a bit more, please? I'm a noob.
Well, I see you already set float parameters like this :
myAnimator.SetFloat ("VSpeed", Input.GetAxis ("Vertical"));
myAnimator.SetFloat ("HSpeed", Input.GetAxis ("Horizontal"));
I assume you already have those parameters "exposed" in the Animator Controller, right?
If so, all you need is to create a 2D blend tree and add all the different animations to it.
Yeah, I have those in the animator controller. Setup:
However, it still doesn't play the diagonal run animation whenever I hold two directional keys down.
I set the Diagonal Left animation to -1, 1 and the Diagonal Right animation to 1,1
Yes, there are no other layers on top. Buuuut, its' set up like this:
Is this set up causing my problem? (For some reason, I can't reply to your post)
Yes, I know, you can only reply to post I published to you. I can't see the image.
I made a forum topic so you can help me better: Also, here's the
301 People are following this question.
Animation/Movement Error
1
Answer
Hexagonal Grid Rangefinding
1
Answer
How do i limit my accelerometer movement
1
Answer
Character Movement jerking
0
Answers
How can I make the camera not overreach a limit of rotation one axis.
0
Answers | https://answers.unity.com/questions/1287860/how-do-i-get-diagonal-3d-movement.html | CC-MAIN-2019-26 | refinedweb | 509 | 59.8 |
Starting.
My personal favorite, though, is the rare person who knows about the class library, that uses the class library… to reinvent methods which exist in the class library. They’ve seen a wheel, they know what a wheel is for, and they still insist on inventing a coffee-table.
Anneke sends us one such method.
The method in question is called thus:
if output_exists("/some/path.dat"): do_something()
I want to stress, this is the only use of this method. The purpose is to check if a file containing output from a different process exists. If you’re familiar with Python, you might be thinking, “Wait, isn’t that just
os.path.exists?”
Of course not.
def output_exists(full_path): path = os.path.dirname(full_path) + "/*" filename2=full_path.split('/')[-1] filename = '%s' % filename2 files = glob.glob(path) back = [] for f in re.findall(filename, " ".join(files)): back.append(os.path.join(os.path.dirname(full_path), f)) return back
Now, in general, most of your directory-tree manipulating functions live in the
os.path package, and you can see
os.path.dirname used. That splits off the directory-only part. Then they throw a glob on it. I could, at this point, bring up the importance of
os.path.join for that sort of operation, but why bother?
They knew enough to use
os.path.dirname to get the directory portion of the path, but not
os.path.split which can pick off the file portion of the path. The “Pythonic” way of writing that line would be
(path, filename) = os.path.split(full_path). Wait, I misspoke: the “Pythonic” way would be to not write any part of this method.
'%s' % filename2 is how Python’s version of
printf and I cannot for the life of me guess why it’s being done here. A misguided attempt at doing an
strcpy-type operation?
glob.glob isn’t just the best method name in anything, it also does a filesystem search using globs, so
files contains a list of all files in that directory.
" ".join(files) is the Python idiom for joining an array, so we turn the list of files into an array and search it using
re.findall… which uses a regex for searching. Note that they’re using the filename for the regex, and they haven’t placed any guards around it, so if the input file is “foo.c”, and the directory contains “foo.cpp”, this will think that’s fine.
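The false positive is easy to demonstrate; a small sketch of the pattern the function relies on (the file names here are invented for illustration):

```python
import re

files = ["foo.cpp", "bar.txt"]        # invented directory listing
haystack = " ".join(files)

# "foo.c" is used as a raw regex: the "." matches any character and the
# match is unanchored, so it happily matches inside "foo.cpp".
print(re.findall("foo.c", haystack))  # -> ['foo.c']
```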
And then last but not least, it returns the array of matches, relying on the fact that an empty array in Python is false.
To write this code required at least some familiarity with three different major packages in the class library (os.path, glob, and re), but just one ounce more familiarity with os.path would have replaced the entire thing with a simple call to os.path.exists. Which is what Anneke did.
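For contrast, a minimal sketch of what the whole helper collapses to once the standard-library call is used:

```python
import os.path

def output_exists(full_path):
    # The entire nine-line helper, replaced by the call it was reinventing.
    return os.path.exists(full_path)
```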
I'm using PyVISA on Python 2.7 to read a 34465A DMM over TCP. I am able to execute an "*IDN?" query and it returns the correct value; however, the "READ?" query simply returns u'\n'. I've tried several variations, including using '' vs "", having a preceding ':', and using "MEAS:..."; no luck. Here is an example of my code:
import visa
import time
rm = visa.ResourceManager()
resources = rm.list_resources()
print "Available resources:"
print resources
print "Getting instrument..."
my_inst = rm.open_resource(resources[1]) # The device is always at [1]
start = time.time()
worked = True
command = "MEAS:VOLT:DC?" # Doesn't work
#command = "READ?" # Doesn't work
try:
    print "Got instrument, getting *IDN?..."
    print my_inst.query('*IDN?') # Prints correct value
    print "Got *IDN?, READ?ing..."
    print my_inst.query(command).split()
    print "READ? complete attempting, write/read."
    print my_inst.write(command)
    print "Wrote READ?, reading from inst..."
    print my_inst.read().split()
except Exception as e:
    worked = False
    print e
    print "Had an... exceptional time...."
finally:
    my_inst.close()

print "Elapsed time:"
print time.time() - start
print "Did it work: {}".format(worked)
I do not use Python. The READ? and MEAS:VOLT:DC? do work and return an ASCII real,64 floating point number. Following is an example when using the default DCV function with the input open:
-> READ?
<- -1.67223359E-05
-> MEAS:VOLT:DC?
<- -4.97292228E-05
These are just noise values. It is your method of implementing this with PyVISA that is the problem. Sorry I can't help with that.
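As a side note, consuming such an ASCII reply in Python is a one-line conversion; a minimal sketch (the reply value is copied from the exchange above, and the trailing newline is an assumption about the instrument's termination character):

```python
# The instrument answers READ? with an ASCII float, e.g. "-1.67223359E-05".
reply = "-1.67223359E-05\n"      # example value from the exchange above
reading = float(reply.strip())   # strip the terminator, parse the ASCII float
```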
Can I somehow move all field values from one object to another without using reflection? So, what I want to do is something like this:
public class BetterThing extends Thing implements IBetterObject {
    public BetterThing(Thing t) {
        super();
        t.evolve(this);
    }
}
So, the evolve method would evolve one class to another. An argument would be <? extends <? extends T>>, where T is the class of the object you're calling evolve on.
I know I can do this with reflection, but reflection hurts performance. In this case, the Thing class is in an external API, and there's no method that would copy all the required fields from it to another object.
P.S. Sorry for my English.
ondelta 0.3.0
A mixin that allows models to register methods that are notified when their values change, or register a method that is notified of all changes. Basically, OnDeltaMixin implements the observer pattern.
A django model mixin that makes it easy to react to field value changes on models. Supports an API similar to the model clean method.
Quick Start
Given that I have the model
class MyModel(models.Model):
    mai_field = models.CharField()
    other_field = models.BooleanField()
And I want to be notified when mai_field’s value is changed and persisted I would simply need to modify my model to include a ondelta_mai_field method.
from ondelta.models import OnDeltaMixin

class MyModel(OnDeltaMixin):
    mai_field = models.CharField()
    other_field = models.BooleanField()

    def ondelta_mai_field(self, old_value, new_value):
        print "mai field had the value of", old_value
        print "but by the time we called save it had the value of", new_value
This is the easiest method to watch a single field for changes but what about if we want to perform an action that has an aggregate view of all of the fields that were changed? OnDeltaMixin provides an ondelta_all method for these cases which is only called once for each save.
from ondelta.models import OnDeltaMixin

class MyModel(OnDeltaMixin):
    mai_field = models.CharField()
    other_field = models.BooleanField()

    def ondelta_all(self, fields_changed):
        if fields_changed['other_field']['old'] == True:
            print "other field was true and is now", fields_changed['other_field']['new']
            print "We also have access to", fields_changed['mai_field']['old']
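Stripped of Django, the mechanism the mixin describes is just snapshot-and-diff: remember field values, compare them at save time, and fire the hooks. A rough pure-Python sketch of the idea (all names here are illustrative, not ondelta's actual implementation):

```python
class DeltaWatcher(object):
    """Sketch of delta detection: snapshot field values, diff on save(),
    fire per-field ondelta_* hooks plus an aggregate ondelta_all hook."""
    fields = ("mai_field", "other_field")

    def __init__(self, **kwargs):
        for f in self.fields:
            setattr(self, f, kwargs.get(f))
        self._snapshot = dict((f, getattr(self, f)) for f in self.fields)

    def save(self):
        changed = {}
        for f in self.fields:
            old, new = self._snapshot[f], getattr(self, f)
            if old != new:
                changed[f] = {"old": old, "new": new}
                hook = getattr(self, "ondelta_" + f, None)
                if hook is not None:
                    hook(old, new)   # per-field notification
        if changed:
            self.ondelta_all(changed)  # one aggregate notification per save
        self._snapshot = dict((f, getattr(self, f)) for f in self.fields)

    def ondelta_all(self, fields_changed):
        pass
```

Note that the snapshot is refreshed after each save, so an unchanged save produces no notifications, matching the "do not over-notify" design goal above.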
Design Considerations
Every effort should be made not to over-notify on changes. Any comparison problems should fail in a way that does NOT duplicate notifications. Field comparisons should also not cause other problems in the model (for example, causing a child to be unable to persist).
I like to help people as much as possible who are using my libraries, the easiest way to get my attention is to tweet @adamhaney or open an issue. As long as I’m able I’ll help with any issues you have.
- Author: Adam Haney
- Keywords: Django,observer
- License: LGPL
- Categories
- Package Index Owner: adamhaney
- DOAP record: ondelta-0.3.0.xml
Software may be eating the world, but JavaScript may eat the software world. Duktape takes JavaScript beyond the confines of the browser or server with a full ECMAScript 5 compliant engine that can be embedded into any C/C++ project.
[Duktape's] small code base and simple build system make it the embedders dream. It's essentially just like the Lua project technically, but runs JavaScript which has a much bigger ecosystem and set of developers. — Tim Caswell
While Lua is not much older than JavaScript, the latter has garnered significant mind share with its ubiquity in the browser and on the server with node.js. For projects that want to embed a scripting language, Duktape provides access to a very popular language and its ecosystem. "The original motivation was to have a Lua-like implementation for JavaScript," says Sami Vaarala, the creator of Duktape.
Working with Duktape is as simple as adding
duktape.c and
duktape.h to a project. The bindings between JavaScript and C are bidirectional, so either can call the other. The 'Hello World!' example is:
#include "duktape.h" int main(int argc, char *argv[]) { duk_context *ctx = duk_create_heap_default(); duk_eval_string(ctx, "print('Hello world!');"); duk_destroy_heap(ctx); return 0; }
Tim Caswell has taken the core of Duktape and extended it with Dukluv to create a minimal "node.js-like environment for tiny devices." Caswell says that his project adds libuv bindings to Duktape, giving it
access to the operating system making it a fully general programming environment with non-blocking I/O, timers, sub-process support and loads of useful utility functions as provided in libuv.
The need to embed a scripting language is not new. Video games have done it for years, such as World of Warcraft which uses Lua for interface customization. According to a popular answer on Stack Overflow, Lua is often used because
it's small, portable, hackable ANSI C code base; easy to embed, extend, and -- most importantly for game developers -- it has a minimal runtime footprint
Browser engines like SpiderMonkey and V8 could be embedded, but their size makes them unusable for small applications. They are "far too heavyweight for simple tasks or lower-powered machines," Caswell says. The Espruino project also provides JavaScript for microcontrollers, but its ECMAScript compliance is only around 95% whereas Duktape is fully compliant.
Duktape comes with a MIT License and the source code is available on GitHub. Developers who have used it have praised its extensive documentation
Community comments
javascript
by Mark N
Is also eating all my time, because I have to constantly Google for solutions and examples, and because simple issues take time to resolve when I can't just see a compiler error.
A dynamic-link library (DLL) is a module that contains functions and data that can be used by another module (application or DLL). In Linux/UNIX, the same concept is implemented in shared object (.so) files. From now on, I use the term shared libraries to refer to DLL and SO files.
Advantages of using shared libraries are:
This article will address the following topics:
Creating, Linking and Compiling the DLL or SO:
Accessing the DLL or SO from a Calling Process:
There are many differences in the way shared libraries are created, exported and used.
But in the case of Linux/Unix, no special export statement needs to be added to the code to indicate exportable symbols, since all symbols are available to an interrogating process (the process which loads the SO/DLL).
Task                         Linux/UNIX                                      Windows
Export symbols in src file   No export symbol required.                      __declspec(dllexport)
Header file                  #include <dlfcn.h>                              #include <windows.h>
Loading the shared library   void* dlopen(const char *pathname, int mode);   HINSTANCE LoadLibrary(LPCTSTR lpLibFileName);
Runtime access of functions  void* dlsym(void* handle, const char *name);    GetProcAddress(HMODULE hModule, LPCSTR lpProcName);
Closing the shared library   int dlclose(void *handle);                      BOOL FreeLibrary(HMODULE hLibModule);
Read further for more information on the above functions.
All UNIX object files are candidates for inclusion into a shared object library. No special export statements need to be added to the code to indicate exportable symbols, since all symbols are available to an interrogating process (the process which loads the SO/DLL).
In Windows NT, however, only the specified symbols will be exported (i.e., available to an interrogating process). Exportable objects are indicated by including the keyword '__declspec(dllexport)'. The following examples demonstrate how to export variables and functions.
__declspec( dllexport ) void MyExportFunction(); /* exporting function MyExportFunction */
__declspec (dllexport) int MyExportVariable; /* exporting variable MyExportVariable */
Both DLL and SO files are linked from compiled object files.
In Windows, most of the IDEs automatically help you compile and link the DLL.
CC = g++

add.so : add.o
	$(CC) add.o -shared -o add.so

add.o : add.cpp
	$(CC) $(CFLAGS) -c -fPIC add.cpp
Under UNIX, the linking of object code into a shared library can be accomplished using the '-shared' option of the linker. For example, the following command line can be used to create the SO file add.so from the compiled object file add.o:

g++ add.o -shared -o add.so
To use the shared objects in UNIX, the include directive '#include <dlfcn.h>' must be used. Under Windows, the include directive '#include <windows.h>' must be used.
In Unix, loading the SO file can be accomplished from the function dlopen(). The function prototype is:
void* dlopen( const char *pathname, int mode )
The argument pathname is either the absolute or relative (from the current directory) path and filename of the .SO file to load. The argument mode is either the symbol RTLD_LAZY or RTLD_NOW. RTLD_LAZY will locate symbols in the file given by pathname as they are referenced, while RTLD_NOW will locate all symbols before returning. The function dlopen() returns a handle to the opened library, or NULL if there is an error.
#define RTLD_LAZY 1
#define RTLD_NOW 2
Under Windows, the function to load a library is given by:
HINSTANCE LoadLibrary( LPCTSTR lpLibFileName );
In this case, lpLibFileName carries the filename of an executable module. This function returns a handle to the DLL (of type HINSTANCE), or NULL if there is an error.
Under UNIX, the shared object will be searched for in the following places:

- the directories specified at link time through the linker's -rpath option (see ld(1))
- the directories listed in the LD_LIBRARY_PATH environment variable
- on systems that support multiple ABIs, the directories listed in LD_LIBRARY64_PATH or LD_LIBRARYN32_PATH
Under Windows, the shared object will be searched for in the following places:

- the system directory, as returned by GetSystemDirectory
- the Windows directory, as returned by GetWindowsDirectory
- the directories listed in the PATH environment variable
Under Unix, symbols can be referenced from an SO once the library is loaded using dlopen(). The function dlsym() will return a pointer to a symbol in the library.
void* dlsym( void* handle, const char *name);
The handle argument is the handle to the library returned by dlopen(). The name argument is a string containing the name of the symbol. The function returns a pointer to the symbol if it is found, and NULL if it is not found or if there is an error.
Under Windows, the functions can be accessed with a call to GetProcAddress():

FARPROC GetProcAddress( HMODULE hModule, LPCSTR lpProcName);
The argument hModule is the handle to the module returned from LoadLibrary(). The argument lpProcName is the string containing the name of the function. This procedure returns the function pointer to the procedure if successful, else it returns NULL.
Closing the library is accomplished in Unix using the function dlclose, and in Windows using the function FreeLibrary. Note that both functions return either zero or a non-zero value, but with opposite meanings: Windows returns 0 if there is an error, while Unix returns 0 if successful.
In Unix, the library is closed with a call to dlclose.
int dlclose( void *handle );
The argument handle is the handle to the opened SO file (the handle returned by dlopen). This function returns 0 if successful, a non-zero value if not successful.
In Windows NT, the library is closed using the function FreeLibrary:

BOOL FreeLibrary( HMODULE hLibModule );
The argument hLibModule is the handle to the loaded DLL library module. This function returns a non-zero value if the library closes successfully, and a 0 if there is an error.
Most of the big applications that we write will have many calls to API functions specific to the operating system. This makes the application platform dependent. The ideal situation, shown in the figure below, is source code that compiles on all platforms without any modification. This can be achieved by routing the operating-system-specific calls through a common function, which in turn calls the operating-system-specific API based on the operating system.
One solution to create platform-independent code is to create a header file which handles all platform-dependent calls. Based on the compiler (or operating system), the same code will generate applications for different platforms.
Main functions which differ between Windows and Linux are:
LoadLibrary
………
The sample given below demonstrates such a header file, which handles platform-specific calls. In this example, the compiler is checked to differentiate between platforms.
//Boby Thomas pazheparampil - march 2006
#ifndef os_call_h
#define os_call_h

#include <string>

#if defined(_MSC_VER)    // Microsoft compiler
    #include <windows.h>
#elif defined(__GNUC__)  // GNU compiler
    #include <dlfcn.h>
#else
    #error define your compiler
#endif

/*
#define RTLD_LAZY 1
#define RTLD_NOW 2
#define RTLD_GLOBAL 4
*/

void* LoadSharedLibrary(char *pcDllname, int iMode = 2)
{
    std::string sDllName = pcDllname;
#if defined(_MSC_VER)    // Microsoft compiler
    sDllName += ".dll";
    return (void*)LoadLibrary(sDllName.c_str()); // load the name with the extension appended
#elif defined(__GNUC__)  // GNU compiler
    sDllName += ".so";
    return dlopen(sDllName.c_str(), iMode);
#endif
}

void *GetFunction(void *Lib, char *Fnname)
{
#if defined(_MSC_VER)    // Microsoft compiler
    return (void*)GetProcAddress((HINSTANCE)Lib, Fnname);
#elif defined(__GNUC__)  // GNU compiler
    return dlsym(Lib, Fnname);
#endif
}

bool FreeSharedLibrary(void *hDLL)
{
#if defined(_MSC_VER)    // Microsoft compiler
    return FreeLibrary((HINSTANCE)hDLL);
#elif defined(__GNUC__)  // GNU compiler
    return dlclose(hDLL) == 0; // dlclose returns 0 on success
#endif
}

#endif //os_call_h
//Boby Thomas Pazheparampil - march 2006
// plat_ind.cpp :
#include "os_call.h"
#include <iostream>
using namespace std;

typedef int (*AddFnPtr)(int, int);

int main(int argc, char* argv[])
{
    AddFnPtr AddFn;
    void *hDLL;

    // Do not add an extension; it is handled by LoadSharedLibrary.
    hDLL = LoadSharedLibrary("add");
    if (hDLL == 0)
        return 1;

    AddFn = (AddFnPtr)GetFunction(hDLL, "fnAdd");
    int iTmp = AddFn(8, 5);
    cout << "8 + 5 = " << iTmp;

    FreeSharedLibrary(hDLL);
    return 0;
}
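As a side illustration for readers following along in Python (not part of the article's C++ code), the same "load library, look up symbol, call it" dance is exactly what ctypes does under the hood. The math library stands in for add.so/add.dll here, and the "libm.so.6" fallback soname assumes a glibc-based Linux system:

```python
import ctypes
import ctypes.util

# find_library abstracts the .so/.dll/.dylib naming differences, much like
# the LoadSharedLibrary wrapper above does for C++.
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)          # dlopen()/LoadLibrary() under the hood

libm.cos.restype = ctypes.c_double     # declare the signature of the symbol
libm.cos.argtypes = [ctypes.c_double]  # fetched via dlsym()/GetProcAddress()

print(libm.cos(0.0))  # -> 1.0
```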
Another problem we face when writing code that targets multiple platforms is the so-called "Big Endian/Little Endian" issue. This problem arises from the different byte orderings used for information storage. Some machines store an object in memory ordered from least significant byte to most, while other machines store it from most to least. The former convention, where the least significant byte comes first, is referred to as little endian; most machines follow this convention. The latter convention, where the most significant byte comes first, is referred to as big endian; this convention is followed by most machines from IBM, Motorola, and Sun Microsystems. I will take a simple example to elaborate. Say you want to store a 4-byte integer 0x12345678 starting at memory address 0x5000 (through 0x5003) on a big-endian machine. The data arrangement in memory will be as below.
0x5000  0x5001  0x5002  0x5003
------------------------------
  12      34      56      78
------------------------------
This issue will become critical if we store data in external binary files. If two different platforms use the same binary data file, the data retrieved from the file will be completely different. Keep this point in mind when you are targeting many platforms.
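The two orderings are easy to reproduce with Python's struct module (a side illustration, not from the original article):

```python
import struct

value = 0x12345678
big    = struct.pack(">I", value)  # most significant byte first: 12 34 56 78
little = struct.pack("<I", value)  # least significant byte first: 78 56 34 12

# This is why a binary file written on a big-endian machine reads back
# scrambled on a little-endian one unless a fixed byte order is agreed on.
```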
With the increasing usage of Linux, it is good to always target multiple platforms when we write code. If we can keep the same source code and just compile for a different platform after coding, we can reduce the time spent on porting from one platform to another.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
C:\MY_APP
|   my_app.c
+---include
|       os_call.h
+---pc
|   \---src
|           os_call.c
\---unix
    \---src
            os_call.c
I'm going through the Udacity "Intro to Computer Science" course coding with Python, and in Lesson 2 Problem Set (Optional 2) I encountered the following problem:
# Write a Python procedure fix_machine to take 2 string inputs
# and returns the 2nd input string as the output if all of its
# characters can be found in the 1st input string and "Give me
# something that's not useless next time." if it's impossible.
# Letters that are present in the 1st input string may be used
# as many times as necessary to create the 2nd string (you
# don't need to keep track of repeat usage).
def fix_machine(debris, product):
    i = 0
    while i <= len(product)-1:
        if debris.find(product[i]) == -1:
            return "Give me something that's not useless next time."
        elif i == len(product)-1:
            return product
        else:
            i = i + 1
# BONUS: #
# 5***** # If you've graduated from CS101,
# Gold # try solving this in one line.
# Stars! #
def fix_machine(a, b): return set(a) >= set(b) and b or "Give me something that's not useless next time."
special thanks to @ajcr
PS: as @user2357112 mentioned, it will fail with empty strings.
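A quick sketch of the failure @user2357112 points out, using the and/or version above (the test strings are invented):

```python
def fix_machine(a, b):
    return set(a) >= set(b) and b or "Give me something that's not useless next time."

# With an empty second string, set(a) >= set("") is True, but `True and ""`
# evaluates to "" (falsy), so the `or` branch fires and the failure message
# comes back even though an empty product is trivially buildable.
print(fix_machine("udacity", "city"))  # -> 'city'
print(fix_machine("udacity", ""))      # -> the failure message, incorrectly
```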
def fix_machine(a, b): return b if set(a) >= set(b) else "Give me something that's not useless next time."
#include <pcre2.h>

void pcre2_match_data_free(pcre2_match_data *match_data);
If match_data is NULL, this function does nothing. Otherwise, match_data must point to a match data block, which this function frees, using the memory freeing function from the general context or compiled pattern with which it was created, or free() if that was not set.
If the PCRE2_COPY_MATCHED_SUBJECT option was used for a successful match using this match data block, the copy of the subject that was remembered with the block is also freed.
There is a complete description of the PCRE2 native API in the pcre2api(3) page and a description of the POSIX API in the pcre2posix(3) page.