Using Generic Methods

So much attention gets paid to the new generic classes introduced with .NET generics that I often find developers overlooking what can be achieved through generic methods. In fact, in many cases, one of the first opportunities to leverage generics is to introduce generic methods into your non-generic types. This brings the flavor and power of generics to a class without requiring the class itself to be parameterized.

The Basics

To illustrate the fundamental value of generic methods, let's start with the simplest of examples. Suppose you have a Max() function that accepts two double values, compares them, and returns the greater of the two. This function might appear as follows:

[VB code]
Public Function Max(ByVal val1 As Double, ByVal val2 As Double) As Double
    Return IIf(val2 < val1, val1, val2)
End Function

[C# code]
public double Max(double val1, double val2) {
    return (val2 < val1) ? val1 : val2;
}

This method is handy for number-crunching applications. However, once you decide you want to apply this same Max() function to additional data types, you have a problem: the method can only be applied to double values. You have only a few real, type-safe options for resolving this. One approach would be to create a specific version of the method for each data type. However, doing that would force you to bloat your namespace with MaxString, MaxInt, and MaxLong methods. Not good. To get around the bloat issue, you might consider going back to an object-based interface and tossing all type safety to the wind. Your last option would be to provide several overloaded versions of Max() that accept different types. That might represent some measure of improvement, but it's still not ideal. This discussion of taking on bloat or compromising type safety is probably starting to sound like a broken record at this point.
You see the same patterns over and over again in your code. You start out with a nice, general-purpose class or method only to find, as you attempt to broaden its applicability, that the tools offer you few good options for extrapolating that generality to additional data types. That's right in the sweet spot of generics. So, let's look at how generics can be applied to the Max() method. The following code represents the generic version of Max():

[VB code]
Public Function Max(Of T As IComparable)(ByVal val1 As T, ByVal val2 As T) As T
    Dim retVal As T = val2
    If (val2.CompareTo(val1) < 0) Then
        retVal = val1
    End If
    Return retVal
End Function

[C# code]
public T Max<T>(T val1, T val2) where T : IComparable {
    T retVal = val2;
    if (val2.CompareTo(val1) < 0)
        retVal = val1;
    return retVal;
}

The syntax and concepts here are right in line with what you've already seen with generic classes. The Max() method becomes a parameterized method, accepting one or more type parameters as part of its signature. Once you've outfitted your method with a type parameter, you can then reference that type parameter throughout the scope of the function. Method parameters, return types, and types appearing in the body of the method may all reference the type parameters supplied to your generic method. For this example to work properly, I was required to apply a constraint to my type parameter, indicating that each T must implement IComparable. Constraints are addressed in detail in the book Professional .NET 2.0 Generics, Chapter 7, "Generic Constraints." All that remains at this stage is to start making some calls to this new, generic Max() method.
Let's take a quick look at how clients would invoke the Max() method with a few different type arguments:

[VB code]
Dim doubleMax As Double = Max(Of Double)(3939.99, 39999.99)
Dim intMax As Int32 = Max(Of Int32)(339, 23)
Dim stringMax As String = Max(Of String)("AAAA", "BBBBBB")

[C# code]
double doubleMax = Max<double>(3939.99, 39999.99);
int intMax = Max<int>(339, 23);
string stringMax = Max<string>("AAAA", "BBBBBB");

Calling a generic method, as you can see, is not all that different from calling a non-generic method. The only new wrinkle here is the introduction of a type argument immediately following the name of the method.
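As a side note, the C# compiler can usually infer the type arguments from the method arguments, so the explicit type argument can often be omitted entirely. The following sketch shows this; the MaxDemo wrapper class and Main method are illustrative scaffolding, not part of the original article:

```csharp
using System;

static class MaxDemo
{
    // Same generic Max() as in the article.
    public static T Max<T>(T val1, T val2) where T : IComparable
    {
        return (val2.CompareTo(val1) < 0) ? val1 : val2;
    }

    static void Main()
    {
        // Explicit type argument...
        int a = Max<int>(339, 23);
        // ...and the same call with the type argument inferred
        // from the method arguments.
        int b = Max(339, 23);
        Console.WriteLine(a == b); // both calls return 339
    }
}
```

Inference only works when the compiler can deduce a single consistent type from the arguments; mixing, say, an int and a double still requires an explicit type argument.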
http://www.codeguru.com/csharp/.net/net_asp/miscellaneous/article.php/c12321/Using-Generic-Methods.htm
Introduction

So, you learned how to control LEDs, use push buttons, and read data from temperature sensors on your Arduino; now what? Well, let's do something with the data you are reading! Let's create an Arduino project that reads from a temperature sensor and writes the information to the serial bus. Then, we will create a VB.NET project to read from the serial bus and display the temperature data on the screen. This is a simple project to get started with communicating between your Arduino and a Windows program. Once you are able to get the data streaming to VB.NET, the possibilities are endless.

Arduino Circuitry

The circuitry for this temperature sensor is very simple. All you will need is the following items:

- 1 Arduino
- 1 Breadboard
- 1 USB Cable
- 5 Jumper Wires
- 1 4.7K Resistor
- 1 DS18B20 Digital thermometer sensor

Look at Figure 1 for making all the connections. The Arduino should have jumper wires connected to 5V, GND, and digital pin 8. Once all the connections are made, connect the Arduino to your computer using the USB cable.

Figure 1: Wiring the Arduino

Arduino Code

Programming the Arduino to read from the temperature sensor is simple. The code might look a little daunting once it gets down to the getTemperature() function, but that code is provided with the DallasTemperature OneWire library. We will break down the entire file and explain what is happening.

Install OneWire

First things first. You will need to download the OneWire library for Arduino from the Arduino Playground. Place the downloaded files into the Arduino library folder (typically found in Documents/Arduino/library). To confirm that the library was installed correctly, open the Arduino IDE, click the "File" menu, hover over the "Examples" menu, and confirm that the "OneWire" menu is showing. Congratulations; you installed your first Arduino library!
I added comments throughout the script to explain everything that is happening.

Setup

The first thing we have to do is include the OneWire library into our script. Next, there are a few variables that need to be reserved and set aside for use later in the script.

#include <OneWire.h>

// OneWire Variables
int tempSensor = 8;          // Pin number
byte i;                      // Used in getTemperature()
byte present = 0;            // Used in getTemperature()
byte type_s;                 // Used in getTemperature()
byte data[12];               // Used in getTemperature()
byte addr[8];                // Used in getTemperature()
float celsius, fahrenheit;   // Used in getTemperature()

OneWire ds(tempSensor);      // Set up the OneWire class

void setup() {
  // Start Serial Connection
  Serial.begin(9600);
}

Loop Function

void loop() {
  // Get The Temperature
  float temp = getTemperature();
  // Print Temperature to Serial
  Serial.println(temp);
  // Wait 500 milliseconds before continuing
  delay(500);
}

getTemperature() Function

This function may look a little confusing, but reading the comments should help. Parts of this section were provided by the Arduino Playground.

// Get Temperature Function
float getTemperature() {
  // Search for a OneWire Device
  if ( !ds.search(addr)) {
    ds.reset_search();
    delay(250);
  }
  ds.reset();
  ds.select(addr);
  ds.write(0x44, 1);
  delay(500); // Wait 500 milliseconds...
  // Read from OneWire, expecting 9 bytes
  present = ds.reset();
  ds.select(addr);
  ds.write(0xBE);
  for ( i = 0; i < 9; i++) {
    data[i] = ds.read();
  }

  // Convert the data to real temperature
  int16_t raw = (data[1] << 8) | data[0];
  if (type_s) {
    raw = raw << 3;
    if (data[7] == 0x10) {
      raw = (raw & 0xFFF0) + 12 - data[6];
    }
  } else {
    byte cfg = (data[4] & 0x60);
    if (cfg == 0x00) raw = raw & ~7;
    else if (cfg == 0x20) raw = raw & ~3;
    else if (cfg == 0x40) raw = raw & ~1;
  }

  // We are left with celsius
  celsius = (float)raw / 16.0;
  // Convert celsius to fahrenheit
  fahrenheit = celsius * 1.8 + 32.0;
  // Switch "fahrenheit" to "celsius" if you prefer
  return fahrenheit;
}

Upload and Test the Arduino Code

It's now time to upload and test the code! Make sure the right board is selected under Tools menu > Board (see Figure 2) and make sure the correct serial port is selected under Tools menu > Serial Port.

Figure 2: Selecting the correct serial port and board

After confirming the correct port and board, click the arrow button to upload your script to the Arduino. Once the script is uploaded, click the magnifying glass near the top right of the IDE. This will open the serial monitor. Every time the Arduino gets the temperature, it will print to this monitor window.

Figure 3: Printing to the monitor window

If you see something similar to what's shown in Figure 3, great job! If you are seeing a constant reading of 32.00, there is something wrong with your circuitry and the sensor is not reading properly. Let's move on to the Visual Basic part of the project now.

Visual Basic Program

The Visual Basic program is going to connect to the selected serial port and process the incoming serial information. In our case, it will only be a decimal temperature reading coming in about every second.
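Before looking at the VB.NET side, it's worth sanity-checking the raw-to-temperature arithmetic from getTemperature(), and seeing why a constant 32.00 indicates a wiring fault. The following Python sketch (not part of the Arduino project; the scratchpad bytes are made-up sample values) mirrors the default 12-bit conversion path above:

```python
def ds18b20_to_fahrenheit(data):
    """Mirror the Arduino conversion for the default 12-bit DS18B20 mode."""
    # data[0] is the temperature LSB, data[1] the MSB of the scratchpad.
    raw = (data[1] << 8) | data[0]
    # Interpret as a signed 16-bit value (the Arduino's int16_t does this for free).
    if raw & 0x8000:
        raw -= 1 << 16
    celsius = raw / 16.0          # 12-bit mode: 1/16 of a degree per count
    return celsius * 1.8 + 32.0   # same Celsius-to-Fahrenheit step as the sketch

# A scratchpad reading of 0x0191 (= 401) is 401/16 = 25.0625 C, i.e. 77.1125 F.
print(ds18b20_to_fahrenheit([0x91, 0x01]))

# If the sensor never answers, the data buffer stays all zeros,
# so raw = 0, celsius = 0.0, and the output is a constant 32.0 F.
print(ds18b20_to_fahrenheit([0x00, 0x00]))
```

That second case is exactly the "constant reading of 32.00" failure mode mentioned above: 0 degrees Celsius converted to Fahrenheit, produced by a bus that returns nothing but zero bytes.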
The form is pretty generic and includes the following elements:

- Serial Port Selection (combobox)
- Baud Rate (textbox)
- Connect Button (button)
- Disconnect Button (button)
- Temperature Display (textbox)
- History List (listbox)
- High / Average / Low Temperatures (textbox)
- Serial Port (serialport)

I have attached all the project files to this article for you to use.

Figure 4: The completed temperature display

The VB.NET Code

I will explain the major parts of the Visual Basic code; however, I won't explain everything. I have commented every part of the code in the project file to make it very easy to understand what is happening.

Function DoConnect()

DoConnect() makes the initial connection to the selected serial port.

Public Sub DoConnect()
    'Set up the serial port connection
    With SerialPort1
        'Selected Port
        .PortName = cmbPorts.Text
        'Baud Rate. 9600 is default.
        .BaudRate = CInt(txtBaudRate.Text)
        .Parity = IO.Ports.Parity.None
        .DataBits = 8
        .StopBits = IO.Ports.StopBits.One
        .Handshake = IO.Ports.Handshake.None
        .RtsEnable = False
        .ReceivedBytesThreshold = 1
        .NewLine = vbCr
        .ReadTimeout = 10000
    End With

    'Try to open the selected port...
    Try
        SerialPort1.Open()
        comOpen = SerialPort1.IsOpen
    Catch ex As Exception
        'Couldn't open it... show error
        comOpen = False
        MsgBox("Error Open: " & ex.Message)
    End Try

    btnDisconnect.Enabled = True
    btnConnect.Enabled = False
    txtBaudRate.Enabled = False
    cmbPorts.Enabled = False
End Sub

Function DoDisconnect()

DoDisconnect() is called by the disconnect button as well as the form close event. It closes the serial connection and enables the form elements.

Public Sub DoDisconnect()
    'Graceful disconnect if port is open
    If comOpen Then
        SerialPort1.DiscardInBuffer()
        SerialPort1.Close()
        'Reset our flag and controls
        comOpen = False
        btnDisconnect.Enabled = False
        btnConnect.Enabled = True
        txtBaudRate.Enabled = True
        cmbPorts.Enabled = True
    End If
End Sub

Serial Port Received Event

This is where the magic happens.
Okay, it's not really magic, but this is where the program receives the data from the Arduino. The data is received and sent to a new thread to handle processing.

Private Sub SerialPort1_DataReceived(ByVal sender As System.Object, _
        ByVal e As System.IO.Ports.SerialDataReceivedEventArgs) _
        Handles SerialPort1.DataReceived
    If comOpen Then
        Try
            'Send data to a new thread to update the
            'temperature display
            readbuffer = SerialPort1.ReadLine()
            Me.Invoke(New EventHandler(AddressOf updateTemp))
        Catch ex As Exception
            'Otherwise show error. Will display when
            'disconnecting.
            'MsgBox(ex.Message)
        End Try
    End If
End Sub

Function updateTemp()

This function is what actually handles processing the data. The temperature display is updated, the new temperature is added to the history list, the low and high temperatures are compared, and the average temperature is calculated and updated.

Public Sub updateTemp(ByVal sender As Object, ByVal e As System.EventArgs)
    'Update temperature display as it comes in
    Dim read As Decimal
    read = readbuffer.Replace(vbCr, "").Replace(vbLf, "")
    txtTemp.Text = read
    lstHistory.Items.Insert(0, read)

    'Check Highest Temp
    If txtHigh.Text < read Then
        txtHigh.Text = read
    End If

    'Check Lowest Temp
    If txtLow.Text > read Then
        txtLow.Text = read
    End If

    'Calculate Average
    Dim total As Decimal
    Dim count As Integer
    For Each temperature In lstHistory.Items
        total += temperature
        count = count + 1
    Next
    txtAverage.Text = total / count

    'Running count of temperature reads.
    GroupBox2.Text = "History [" & count & "]"
End Sub

Running the Program

Open up the project files in Visual Studio and start the program. Everything should work right out of the box. If you get the error "There are no com ports available!", make sure you have connected the Arduino to the computer. Select the correct com port from the combobox. There is no need to change the baud rate unless you have altered it in the Arduino source as well.
Click Connect and you will start seeing the Arduino temperature data displayed in the temperature display textbox. The program will also keep a running log of all temperatures received in the history listbox to the right. To disconnect from the Arduino, press the Disconnect button or close the form. Congratulations! You got your Arduino talking to a VB.NET program!

Make the Program Work for You!

So great, you can see the temperature in your room or office... Now what? The data is processed in the updateTemp() function, and you can process the information any way you want from there. Perhaps you want the program to alert you every time the temperature rises above 90 degrees; you would do something like this in that function:

If read > 90 Then
    MsgBox("Alert! High temperature in room!")
End If

Sure, this is a simple example, but the possibilities are endless with this project! Learn how to interface with your air conditioning unit from another Arduino board and have your program turn the AC unit on and off depending on the temperature. Maybe you are trying to save money: create an Arduino to control misters on the outside of your home that turn on and off when the temperature gets too hot.

Although this article covers the use of a temperature sensor, the code can be altered to accommodate almost any type of sensor available! Alter it to detect light with a light-dependent resistor (LDR), sound with a microphone, humidity, distance with an infrared proximity sensor, or any other physical element with the vast array of sensors available.

Conclusion

Arduino boards are fun and exciting on their own, but pairing them with a powerful language such as Visual Basic .NET can take a project to the next level. The Arduino only has so much processing power on its own; once you pair it with something like VB.NET, you can take your project to bigger and better places.
https://mobile.codeguru.com/vb/displaying-sensor-data-in-an-app.html
Artificial Neural Network Implementation using NumPy and Image Classification

This tutorial builds an artificial neural network in Python using NumPy from scratch in order to build an image classification application for the Fruits360 dataset. Everything (i.e., images and source code) used in this tutorial, other than the color Fruits360 images, is the exclusive right of my book, cited as "Ahmed Fawzy Gad, 'Practical Computer Vision Applications Using Deep Learning with CNNs'. Dec. 2018, Apress, 978-1-4842-4167-7". The book is available from Springer, and the source code used in this tutorial is available on my GitHub page.

The example used in the book is the classification of the Fruits360 image dataset using an artificial neural network (ANN). The example does not assume that the reader has either extracted the features or implemented the ANN, as it discusses what a suitable set of features for this task is and also how to implement the ANN in NumPy from scratch. The Fruits360 dataset has 60 classes of fruits such as apple, guava, avocado, banana, cherry, dates, kiwi, peach, and more. To make things simpler, this tutorial works on just 4 selected classes: apple Braeburn, lemon Meyer, mango, and raspberry. Each class has around 491 images for training and another 162 for testing. The image size is 100x100 pixels.

Feature Extraction

The book starts by selecting a suitable set of features in order to achieve the highest classification accuracy. Based on the sample images from the 4 selected classes shown below, it seems that their colors differ. This is why color features are suitable for this task. The RGB color space does not isolate color information from other types of information such as illumination.
Thus, if RGB is used for representing the images, all 3 channels will be involved in the calculations. For this reason, it is better to use a color space that isolates the color information into a single channel, such as HSV. The color channel in this case is the hue channel (H). The next figure shows the hue channel of the 4 samples presented previously. We can notice how the hue values differ from one image to another.

The hue channel size is still 100x100. If the entire channel is fed to the ANN, then the input layer will have 10,000 neurons; the network is still huge. In order to reduce the amount of data being used, we can use a histogram to represent the hue channel. The histogram will have 360 bins, reflecting the number of possible values for the hue. Here are the histograms for the 4 sample images. Using a 360-bin histogram for the hue channel, it seems that every fruit votes for some specific bins of the histogram. There is less overlap among the different classes compared to using any channel from the RGB color space. For example, the bins in the apple histogram range from 0 to 10, compared to mango, whose bins range from 90 to 110. The margin between the classes reduces the ambiguity in classification and thus increases the prediction accuracy. Here is the code that calculates the hue channel histogram for the 4 images.
import numpy
import skimage.io, skimage.color
import matplotlib.pyplot

raspberry = skimage.io.imread(fname="raspberry.jpg", as_grey=False)
apple = skimage.io.imread(fname="apple.jpg", as_grey=False)
mango = skimage.io.imread(fname="mango.jpg", as_grey=False)
lemon = skimage.io.imread(fname="lemon.jpg", as_grey=False)

apple_hsv = skimage.color.rgb2hsv(rgb=apple)
mango_hsv = skimage.color.rgb2hsv(rgb=mango)
raspberry_hsv = skimage.color.rgb2hsv(rgb=raspberry)
lemon_hsv = skimage.color.rgb2hsv(rgb=lemon)

fruits = ["apple", "raspberry", "mango", "lemon"]
hsv_fruits_data = [apple_hsv, raspberry_hsv, mango_hsv, lemon_hsv]

idx = 0
for hsv_fruit_data in hsv_fruits_data:
    fruit = fruits[idx]
    hist = numpy.histogram(a=hsv_fruit_data[:, :, 0], bins=360)
    matplotlib.pyplot.bar(left=numpy.arange(360), height=hist[0])
    matplotlib.pyplot.savefig(fruit+"-hue-histogram.jpg", bbox_inches="tight")
    matplotlib.pyplot.close("all")
    idx = idx + 1

By looping through all images in the 4 classes, we can extract the features from all of them. The next code does this. According to the number of images in the 4 classes (1,962) and the feature vector length extracted from each image (360), a NumPy array of zeros is created and saved in the dataset_features variable. In order to store the class label for each image, another NumPy array named outputs is created. The class label for apple is 0, lemon is 1, mango is 2, and raspberry is 3. The code expects to run from a root directory containing 4 folders named according to the fruit names listed in the fruits list. It loops through all images in all folders, extracts the hue histogram from each of them, assigns each image a class label, and finally saves the extracted features and the class labels using the pickle library. You can also use NumPy for saving the resultant arrays rather than pickle.
import numpy
import skimage.io, skimage.color, skimage.feature
import os
import pickle

fruits = ["apple", "lemon", "mango", "raspberry"]
# 492+490+490+490 = 1,962 images in total
dataset_features = numpy.zeros(shape=(1962, 360))
outputs = numpy.zeros(shape=(1962))

idx = 0
class_label = 0
for fruit_dir in fruits:
    curr_dir = os.path.join(os.getcwd(), fruit_dir)
    all_imgs = os.listdir(curr_dir)
    for img_file in all_imgs:
        fruit_data = skimage.io.imread(fname=os.path.join(curr_dir, img_file), as_grey=False)
        fruit_data_hsv = skimage.color.rgb2hsv(rgb=fruit_data)
        hist = numpy.histogram(a=fruit_data_hsv[:, :, 0], bins=360)
        dataset_features[idx, :] = hist[0]
        outputs[idx] = class_label
        idx = idx + 1
    class_label = class_label + 1

with open("dataset_features.pkl", "wb") as f:
    pickle.dump(dataset_features, f)
with open("outputs.pkl", "wb") as f:
    pickle.dump(outputs, f)

Currently, each image is represented by a feature vector of 360 elements. These elements are then filtered in order to keep just the most relevant elements for differentiating the 4 classes. The reduced feature vector length is 102 rather than 360; using fewer elements makes training faster than before. The dataset_features variable shape will then be 1962x102. You can read more about reducing the feature vector length in the book. Up to this point, the training data (features and class labels) are ready. Next is implementing the ANN using NumPy.
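As a preview of that next step, the forward pass of such a network can be sketched in a few lines of NumPy. This is a minimal illustration with made-up layer sizes (102 inputs, one hidden layer of 150 neurons, 4 outputs) and random weights in place of trained ones; it is not the book's actual implementation:

```python
import numpy

def sigmoid(x):
    # Standard logistic activation.
    return 1.0 / (1.0 + numpy.exp(-x))

def forward_pass(features, w_hidden, w_output):
    # One hidden layer followed by the output layer.
    hidden = sigmoid(numpy.matmul(features, w_hidden))
    scores = sigmoid(numpy.matmul(hidden, w_output))
    # The predicted class is the output neuron with the highest score.
    return numpy.argmax(scores)

numpy.random.seed(0)
w_hidden = numpy.random.uniform(low=-0.1, high=0.1, size=(102, 150))
w_output = numpy.random.uniform(low=-0.1, high=0.1, size=(150, 4))

sample = numpy.random.rand(102)  # stand-in for one reduced feature vector
print(forward_pass(sample, w_hidden, w_output))  # an integer label in 0..3
```

Training then consists of adjusting w_hidden and w_output so that forward_pass maps each feature vector to its stored class label; that learning step is what the rest of the tutorial implements.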
https://www.kdnuggets.com/2019/02/artificial-neural-network-implementation-using-numpy-and-image-classification.html
In other programming languages such as C, C++, and PHP, arrays are similar: an array is a special kind of variable that can contain many similar data items. In this tutorial, we are going to learn about arrays and the types of arrays in C# programming.

An array is a collection of similar data items. C# supports various types of arrays. In C# programming, an array is an object with the base type System.Array. If you have already learned about arrays in another programming language, this will be easy to understand. Array indexes start from 0: 0, 1, 2, 3, 4, and so on. C# arrays store a fixed number of elements. The C# array is known as a data structure for storing data.

Why use arrays? This question occurs to every beginner. In C# programming, arrays help to remove complexity and optimize code: they reduce the length of code, we can access array data randomly, traversing the data is easy, and so is manipulating it. There are a lot of reasons to use arrays in C# programming.

As in other programming languages, an array is declared in a similar way, and we can initialize an array at the time of declaration. Let's discuss C# array declaration with an example.

int[] record = new int[7] {5,15,20,25,30,35,40};

We can also leave the size of the array blank, like this:

int[] record = new int[] {5,15,20,25,30,35,40};

In the simplest form, we do not need to use the new operator at all. We can omit it like this:

int[] record = {5,15,20,25,30,35,40};

We have declared, initialized, and assigned values to a C# array. The process of accessing data from the array is very easy: we can access data using the array name and the index number. Array index numbers start from zero. Access the array elements:

record[0]
record[1]
record[2]
record[3]
record[4]
record[5]

You specify the index number to get the array element. Let's create an example of a C# array.
using System;

namespace DowhileLoop
{
    public class Program
    {
        public static void Main(string[] args)
        {
            int[] data = new int[6] {67,89,23,21,56,32};
            Console.WriteLine(data[0]);
            Console.WriteLine(data[1]);
            Console.WriteLine(data[2]);
            Console.WriteLine(data[3]);
            Console.WriteLine(data[4]);
            Console.WriteLine(data[5]);
        }
    }
}

Output -
67
89
23
21
56
32

In the above example, we declared and initialized the array with values, then printed the array values using the array name and the index numbers. This is a simple example of a C# array. A programmer never tires of trying new code and thinking of something innovative, so let's create an example of the array using a for loop. We have already discussed the C# for loop. In this example, we will traverse the array.

using System;

namespace ArrayApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            int[] record = {5,10,25,20,25,30}; /* declaring and initializing the array */
            /* traversing the array */
            for(int a = 0; record.Length > a; a++)
            {
                Console.WriteLine(record[a]);
            }
        }
    }
}

Output -
5
10
25
20
25
30

In the above C# example, we declared and initialized the array at the same time. We omitted the new operator in this example and used a for loop to traverse the array. The main benefit of traversing an array this way is that we do not need to specify the index numbers one by one. Let's create another example of a C# array with a foreach loop.

In the above example, we traversed the array using a for loop. In this example, we are going to traverse the array using a foreach loop in C#. Let's have a look.

using System;

namespace ArrayApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            int[] record = {5,10,25,20,25,30}; /* declaring and initializing the array */
            /* traversing the array using a foreach loop */
            foreach(int a in record)
            {
                Console.WriteLine(a);
            }
        }
    }
}

Output -
5
10
25
20
25
30

There are three types of arrays in the C# language. We discussed the single-dimensional array above. The single-dimensional array is also known as a linear array; in a single-dimensional array, we use one pair of brackets after the type.
The multidimensional array has rows and columns; C# supports multidimensional arrays. Learn C# multidimensional arrays.

A jagged array is an array of arrays. Learn the complete jagged array in C# programming: Learn C# jagged arrays.
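For a quick taste of those two remaining array types before following the links, here is a brief sketch of their declaration syntax. The values and class name here are illustrative, not taken from the tutorial:

```csharp
using System;

public class ArrayKinds
{
    public static void Main(string[] args)
    {
        // Multidimensional (rectangular) array: 2 rows, 3 columns.
        int[,] grid = new int[2, 3] { {1, 2, 3}, {4, 5, 6} };
        Console.WriteLine(grid[1, 2]); // prints 6

        // Jagged array: an array of arrays, where each row has its own length.
        int[][] jagged = new int[2][];
        jagged[0] = new int[] {1, 2};
        jagged[1] = new int[] {3, 4, 5};
        Console.WriteLine(jagged[1][2]); // prints 5
    }
}
```

Note the syntax difference: a rectangular array uses one pair of brackets with a comma (int[,]), while a jagged array uses two pairs of brackets (int[][]) and each inner array is allocated separately.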
https://technosmarter.com/csharp/arrays
ViewEventArgs Class

Definition

Provides event data that is used as a parameter object in the AxisViewChanged and AxisViewChanging events of the root Chart object.

public ref class ViewEventArgs : EventArgs
public class ViewEventArgs : EventArgs
type ViewEventArgs = class
    inherit EventArgs
Public Class ViewEventArgs
Inherits EventArgs

Inheritance: Object → EventArgs → ViewEventArgs

Remarks

This class is exposed as the e parameter in the AxisViewChanged and AxisViewChanging events. The AxisViewChanging event is raised just before a new view is displayed, as a result of the end user clicking and dragging in a chart area. The AxisViewChanged event is raised just after the new view is created. It is important to note that the position and size of a view cannot be set in the AxisViewChanged event because in this case the view is already created.

The ViewEventArgs class contains the following properties:

- The ChartArea property, which is used to get the ChartArea object in which the view is being displayed. The Name property gets the name of the chart area. Other chart area properties can also be set.
- The Axis property, which is used to get the Axis object with which the view is associated. To determine the type of axis, which can be X, Y, X2 or Y2, use the AxisName property. Other axis properties can also be set.
- The NewPosition property, which represents the new position of a view.
- The NewSize property, which represents the new size of a view.
- The NewSizeType property, which represents the unit of measurement for the size of a view.
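To illustrate how these members are typically used together, here is a hypothetical handler sketch. The ChartZoomLimiter class, the wiring comment, and the minimum-width value are all made up for illustration; only the ViewEventArgs members used (Axis, AxisName, NewSize) come from the documentation above. Since position and size cannot be changed once the view exists, the adjustment is done in AxisViewChanging:

```csharp
using System;
using System.Windows.Forms.DataVisualization.Charting;

public class ChartZoomLimiter
{
    // Hypothetical wiring in form setup:
    // chart1.AxisViewChanging += OnAxisViewChanging;
    private void OnAxisViewChanging(object sender, ViewEventArgs e)
    {
        // Only constrain views on the primary X axis.
        if (e.Axis.AxisName == AxisName.X)
        {
            // Illustrative rule: never let a zoomed view get narrower
            // than 5 axis units. This assignment is only valid here,
            // before the new view has been created.
            if (e.NewSize < 5)
                e.NewSize = 5;
        }
    }
}
```

The same pattern applies to AxisViewChanged, except there the handler can only read NewPosition and NewSize, not set them.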
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.datavisualization.charting.vieweventargs?view=netframework-4.8
Collapse Issue

- Ezequiel Guillermo Martínez Quiroga

I'm having an issue when I try to collapse a class in Java. It is not working as expected. I have the following class:

public class ClassificationAndApproversCalculation {
    // Inside this class there are 3000 lines, more or less.
}

When I try to collapse the class, it collapses at line 1500. This is not right. When I try other editors like UltraEdit, it works as expected, collapsing the class at its real end (line 3000). This issue started 5 days ago, and I couldn't find any solution. Could anybody help with this issue? Thanks in advance. Kind Regards, Ezequiel.

- Jacob Currie

What does it see on line 1500? I've noticed that the language settings for finding collapsible sections are a bit odd sometimes. You may want to look for a commented-out bracket on the line it ends on. I know it's not much to go on, but I have this issue when including multiple languages in one file.

- Chip Cooper

I've seen this recently also, using .lua (built-in). It almost always does the folding correctly, but I've seen it at least twice now myself. I've also examined the code, but wasn't sure what might be causing it, so I was just looking for something obvious in the code... no, that wasn't it. Whatever it is, it seems to be an exception that's not being handled, as it is rare. I'll try to remember to post the code next time and bring it up, as mine was only about 10 to 15 lines or so. It seems to me that anything in a comment should be ignored completely. Mine had comments, and they appeared to be ignored; no brackets, though, except comment brackets --[[ ]]
https://notepad-plus-plus.org/community/topic/14982/collapse-issue
package jpeg;
public abstract class Image {
    public void write(File file) throws ImageException {...}
    public static Image read(File file) throws ImageException {...}
}

The Image class resides in the jpeg library, so there's no need to put "jpeg" in the class or method names. The writeToFile and readFromFile methods reside in the Image class, so there's no need to put "image" in the method names. The File parameters are identified by their type, so there's no need to add "file" to the method names. Each name makes sense in its local context. A name should not try to repeat all of its context; that's too much baggage for a name to carry around. The Thelop names seem to carry their entire context around with them. If I implement a Thelop program in an OOP language, do the names get trimmed now that the packages and objects can provide context for the names, or do the names remain encumbered? -- WayneConrad

Wayne, I hope that this will not lead us into discussing subtleties of minor importance. But: although your suggestion seems to be at least equivalent at first sight, IMO it won't work well in either a Java (OO) or a Thelop context. First, the class and package names don't fit. We do not define an image object; we just fill an image from a file in a particular image format. We will surely have many image formats within a general library, and others within different projects that have to work together and that are potential targets for CodeHarvesting.
So in reality we might better have:

/* Jpeg.java */
package com.xcorp.lib.graphics.jpeg;
public abstract class Jpeg {
    public void write(Image im, File file) throws ImageException {...}
    public static Image read(File file) throws ImageException {...}
}

/* Png.java */
package com.xcorp.yproject.png;
public abstract class Png {
    public void write(Image im, File file) throws ImageException {...}
    public static Image read(File file) throws ImageException {...}
}

Of course this has nothing to do with Thelop, only with the proper organization of the package and class namespaces, although there are many other possibilities, like

package com.xcorp.graphlib.imageformats.jpeg;
class ImageFormatJpeg

or

package com.xcorp.imageformats;
class Jpeg

but these variations are not important. One could also combine packages with Thelop names:

/* Jpeg.java */
package com.xcorp.lib.graphics.jpeg;
public abstract class Jpeg {
    public void ImageWriteFileJpeg(Image im, File file) throws ImageException {...}
    public static Image ImageReadFileJpeg(File file) throws ImageException {...}
}

/* Png.java */
package com.xcorp.yproject.png;
public abstract class Png {
    public void ImageWriteFilePng(Image im, File file) throws ImageException {...}
    public static Image ImageReadFilePng(File file) throws ImageException {...}
}

Now let's imagine a situation where one is interested in code use/reuse and CodeHarvesting in the context of a multi-library, multi-project situation (that's what LOP and Thelop are working for). The questions are: What classes and functions exist for, e.g., Png? Where are they defined? Where and how are they used? How many source changes are needed to change an API or move a module, including the update of all projects using it? The answers will be quite different depending on whether you use namespaces or not, and whether you use Thelop or not. Without going into detail, it should be clear that namespaces increase the work for CodeHarvesting.
It should also be clear that a name like "write" will not help someone to find the definition and calls of a special "write". You may have tools that help you but they are often restricted to a project or to a special programming language IDE. Even a good tool will not search for "anything that has to do with Jpeg" as will a simple text search utility if you use Thelop names. In a way Thelop replaces explicit namespaces by "semantic namespaces". Any Thelop name (e.g. ImageWriteFileJpeg) may be thought to belong to a number of different semantic namespaces built from any subset of ThelopWords it contains (e.g. [Image,Write] [Jpeg,File], [Write,File], [Jpeg]...). Often there is neither possibility nor need to have a file, class or package structure to hierarchically organize all these semantic namespaces at the same time. Just like VirtualClasses these semantic namespaces may be distributed among a number of modules or even projects. Name collisions using Thelop are rare, because they can only happen when there is a semantic collision (e.g. two modules doing the same conversion of a Jpeg file to an Image object). Within a project a semantic collision is unacceptable and must be resolved immediately. Within a multi library, multi project source pool a semantic collision could occur during CodeHarvesting and should be resolved. In other (rare) collision situations using Thelop at least doesn't worsen the situation (use explicit namespaces if the programming language supports them). In short: Thelop needs additional work to think about and use a consistent dictionary of words. Thelop pays back by reduced need for documentation, less context dependence in using function names, an easier and tool-independent way to "query the source pool" and easier CodeHarvesting. Is it worth the effort? For me the answer is "yes", but I agree that for many developers the answer will be "no". It just depends on the situation and the priorities.
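The claim that a plain text search can act as a query over these "semantic namespaces" is easy to try out. This is a hypothetical throwaway demo (invented file names and function bodies, not code from the wiki):

```shell
# Two image-format modules using Thelop-style names (invented for the demo).
demo=$(mktemp -d)
printf 'void ImageWriteFileJpeg(void) {}\nvoid ImageReadFileJpeg(void) {}\n' > "$demo/jpeg.c"
printf 'void ImageWriteFilePng(void) {}\n' > "$demo/png.c"

# "Anything that has to do with Jpeg" -- one semantic namespace:
grep -rl "Jpeg" "$demo"

# Every file writer, regardless of format -- another semantic namespace:
grep -rho "ImageWriteFile[A-Za-z]*" "$demo" | sort
```

The same two greps would keep working unchanged as modules are harvested into other projects, which is the point being made above.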
http://c2.com/cgi-bin/wiki?ThelopLanguage
This is a project I set up to show the use of the application factory pattern in Flask. I set it up because I got stuck while refactoring one of my projects to use the factory pattern. The code would run normally, but my test cases were failing with the following two errors:

RuntimeError: application not registered on db instance and no application bound to current context

RuntimeError: working outside of application context

Most Flask examples don't use the factory pattern, so I spent a lot of time searching around to solve the problem. So I thought I would work it out and share it. Hopefully it saves someone else time.

The problem

Once your project starts to grow, code organization is everything. Flask provides a number of mechanisms for code organization. One of these mechanisms is blueprints. Combined with the factory pattern, blueprints provide a nice way to structure and organise code. Another problem that the factory pattern helps solve is circular dependencies. Getting the factory pattern to work isn't hard. Getting it to work correctly, it turned out, was a little harder. The problem I had was caused in the testing code. In the following section I will briefly explain how to set up and use the factory pattern correctly.

Lessons learned

I have added more code than strictly necessary to show the concept of the factory pattern working as a realistic example. The structure and contents of this example project are:

src
│   .gitignore
│   readme.md
│   manage.py
│   requirements.txt
├── instance
│       sensitive.cfg
├── test_app_factory
│   │   __init__.py
│   │   application.py
│   │   config.py
│   │   extensions.py
│   │   models.py
│   ├── helpers
│   │       __init__.py
│   │       misc.py
│   ├── module
│   │       __init__.py
│   │       views.py
│   ├── static
│   │       favicon-16x16.png
│   │       favicon-32x32.png
│   └── templates
│           index.html
└── tests
        test_basics.py

Okay, what's important to point out here? The core of the factory pattern is set up in application.py and extensions.py. All extensions are initialized in extensions.py.
If you add additional extensions, make sure to add them to the import statement in test_app_factory/__init__.py. This is a convenient way to shorten import statements. The actual heavy lifting is done in application.py. Each part of the application initialization is a separate function; these are called by the main function app_factory. This function takes a string which specifies the environment the configuration should be loaded for. The configuration is defined in config.py. The factory pattern in application.py looks like this:

def app_factory(config, name):
    app = Flask(...)
    ...
    return app

The function calls a number of functions that load the configuration settings, extensions, blueprints, etc. Using the factory is really easy, just use the following call:

app = app_factory('TST')

To access the app object in modules after the application has been initialized, use the proxy provided by Flask:

from flask import current_app as app

Now for the part that was driving me crazy: the testing code. I still do not fully understand why it is the only place in my code that was causing a problem; it probably has to do with the way unittest works. Anyway, to get the factory pattern to work you need to wrap specific statements in an app_context. Here is an example.

class TestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.app = app_factory('TST')

    def setUp(self):
        with self.app.app_context():
            self.client = self.app.test_client()
            db.create_all()

    def tearDown(self):
        with self.app.app_context():
            db.session.remove()
            db.drop_all()

    def test_add_user(self):
        with self.app.app_context():
            db.session.add(User(name='test user', email='[email protected]'))
            db.session.commit()

Conclusion

Finding good examples isn't always easy. The factory pattern can really help to organize the code and make it more readable and maintainable. Any suggestions how I can further improve the code? I would love to hear from you!
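Since most of application.py is left out above, here is a deliberately framework-free sketch of the same factory shape. The App class, CONFIGS dict, and helper names are invented for illustration; they stand in for Flask's API rather than using it:

```python
# Hypothetical, framework-free sketch of the factory pattern described above.
class App:
    """Stand-in for flask.Flask, just enough to show the wiring."""
    def __init__(self, name):
        self.name = name
        self.config = {}
        self.blueprints = []

# One configuration per environment, keyed by the string passed to the factory.
CONFIGS = {
    "TST": {"TESTING": True, "DB_URI": "sqlite://"},
    "DEV": {"TESTING": False, "DB_URI": "sqlite:///dev.db"},
}

def load_config(app, env):
    app.config.update(CONFIGS[env])

def register_blueprints(app):
    app.blueprints.append("module")  # one blueprint per feature package

def app_factory(env, name="test_app_factory"):
    # Each initialization step is its own function, called in order.
    app = App(name)
    load_config(app, env)
    register_blueprints(app)
    return app

app = app_factory("TST")
print(app.config["TESTING"])
```

The payoff of this shape is that tests can build as many independently configured app objects as they like, which is exactly what the setUpClass call above relies on.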
http://nidkil.me/category/python/
SQL LocalDB Wrapper is a .NET library providing interop with the Microsoft SQL Server LocalDB Instance API from managed code using .NET APIs. The library targets netstandard2.0 and net6.0.

This library exposes types that wrap the native SQL LocalDB Instance API to perform operations on SQL LocalDB, such as managing instances (create, delete, start, stop) and obtaining SQL connection strings for existing instances.

Microsoft SQL Server LocalDB 2012 and later is supported for both x86 and x64 on Microsoft Windows. While the library can be compiled and referenced in .NET applications on non-Windows Operating Systems, SQL LocalDB is only supported on Windows. Non-Windows Operating Systems can query to determine that the SQL LocalDB Instance API is not installed, but other usage will cause a PlatformNotSupportedException to be thrown.

To install the library from NuGet using the .NET SDK, run:

dotnet add package MartinCostello.SqlLocalDb

// using MartinCostello.SqlLocalDb;
using var localDB = new SqlLocalDbApi();

ISqlLocalDbInstanceInfo instance = localDB.GetOrCreateInstance("MyInstance");
ISqlLocalDbInstanceManager manager = instance.Manage();

if (!instance.IsRunning)
{
    manager.Start();
}

using SqlConnection connection = instance.CreateConnection();
connection.Open();

// Use the SQL connection...

manager.Stop();

Further examples of using the library can be found by following the links below:

Version 1.x.x of this library was previously published as System.Data.SqlLocalDb. The current version (3.x.x) has been renamed and is a breaking change from the previous version, with various changes to namespaces and types.

Version 2.x.x of this library uses SQL types from the System.Data.SqlClient namespace. The current version (3.x.x) uses the new Microsoft.Data.SqlClient NuGet package, where the same types (such as SqlConnection) are now in the Microsoft.Data.SqlClient namespace.
To migrate a project from using the previous 2.x release, change usage of the System.Data.SqlClient namespace to Microsoft.Data.SqlClient and recompile your project.

Any feedback or issues can be added to the issues for this project in GitHub.

The repository is hosted in GitHub:

This project is licensed under the Apache 2.0 license.

Compiling the library yourself requires Git and the .NET SDK to be installed (version 6.0.100 or later). For all of the tests to be functional, you must also have at least one version of SQL LocalDB installed.

To build and test the library locally from a terminal/command-line, run the following set of commands:

Windows

git clone
cd sqllocaldb
./build.ps1

Note: To run all the tests successfully, you must run either build.ps1 or Visual Studio with administrative privileges. This is because the SQL LocalDB APIs for sharing LocalDB instances can only be used with administrative privileges. Not running the tests with administrative privileges will cause all tests that exercise such functionality to be skipped.

Note: Several tests are skipped on non-Windows Operating Systems, as SQL LocalDB itself is only supported on Windows.

This library is copyright () Martin Costello 2012-2022.

Microsoft SQL Server is a trademark and copyright of the Microsoft Corporation.
https://awesomeopensource.com/project/martincostello/sqllocaldb
Louis Rilling <Louis.Rilling@kerlabs.com> writes:

> On 09/07/10 8:58 -0700, Eric W. Biederman wrote:
>>
>> Having proc reference the pid_namespace and the pid_namespace
>> reference proc is a serious reference counting problem, which has
>> resulted in both leaks and use after free problems. Mount already
>> knows how to go from a pid_namespace to a mount of proc, so we don't
>> need to cache the proc mount.
>>
>> To do this I introduce get_proc_mnt and replace pid_ns->proc_mnt users
>> with it. Additionally I remove pid_ns_(prepare|release)_proc as they
>> are now unneeded.
>>
>> This is slightly less efficient but it is much easier to avoid the
>> races. If efficiency winds up being a problem we can revisit our data
>> structures.
>
> IIUC, the difference between this solution and the first one I proposed is that
> instead of pinning proc_mnt with mntget() at copy_process()-time, proc_mnt is
> looked for and, if possible, mntget() at release_task()-time.
>
> Could you elaborate on the trade-off, that is accessing proc_mnt at
> copy_process()-time vs looking up proc_mnt at release_task()-time?

A little code simplicity. But Serge was right, there is a noticeable cost. About 5%-7% more on lat_proc from lmbench.

The real benefit was simplicity.

Eric
http://lkml.org/lkml/2010/7/11/59
Introduction

After a long period of silence, I decided to write my next blog post. I know that this is a very interesting area and lots of new experiments are going on. So here I'm going to show how to integrate text-to-speech and voice-recognition capabilities into your application. This will help you take your first step in this area. This time I selected Visual C# as the language, because it provides a very easy way to implement these features. It is also essential to have .NET Framework 3.0 or a later version. With those, implementing a speaking/listening application is only a matter of a few lines of code.

How to start

First open Visual C# and create a new project as a Console Application. For our purpose we have to add the Speech API to our References (it is not added by default). For that, go to Solution Explorer, right-click on References and click Add References. From the Add Reference window go to the .NET tab, click System.Speech and click OK. Now the environment is set up. It's time to code.

Text-to-speech application

First you need to add the following using statement.

using System.Speech.Synthesis;

Then add the following code where you want to implement the text-to-speech capability.

SpeechSynthesizer synth = new SpeechSynthesizer();
synth.Speak("Hello from Student Guru!");

First it creates a SpeechSynthesizer object and then calls its Speak method. The text you want to read should be passed as the argument. But Speak() blocks the program until it finishes, which won't make the program interactive. So instead of Speak() you can use SpeakAsync(), which allows your program to keep executing while the speaking is going on. In addition, SpeechSynthesizer allows you to change the voice using the SelectVoice() method and save the output to a .wav file using the SetOutputToWaveFile() method.
Speech recognition application

First you need to add

using System.Speech.Recognition;

Then add the following code to the method in which you wish to do speech recognition:

SpeechRecognitionEngine recognitionEngine = new SpeechRecognitionEngine();
recognitionEngine.SetInputToDefaultAudioDevice();
recognitionEngine.LoadGrammar(new DictationGrammar());
RecognitionResult result = recognitionEngine.Recognize(new TimeSpan(0, 0, 20));
foreach (RecognizedWordUnit word in result.Words)
{
    Console.Write("{0} ", word.Text);
}

After executing the program, you have to speak to the computer and it will try to detect your voice within the first 20 seconds. If it fails to recognize any words, it will show an error. Note that this won't give 100% accurate output, but this is for newcomers. Here I used a DictationGrammar object with LoadGrammar(), which is provided by the Windows desktop speech technology. For special tasks you can build a new Grammar object using GrammarBuilder. The recognition engine also allows you to read from a wave file and do the same.

References:
- Dr. Dobb's web site:
- StudentGuru:
- Microsoft Speech SDK:
https://buddhimawijeweera.wordpress.com/2011/10/28/speaking-and-listening-applications/
synchronized methods or code blocks (or thread-safe classes like AtomicInteger or ArrayBlockingQueue). However, there is a pitfall for the unwary. As with most user interface APIs, you can't update the user interface from threads you've created yourself. Well, as every Java undergraduate knows, you often can, but you shouldn't. If you do this, sometimes your program will work and other times it won't. You can get around this problem by using the specialised SwingWorker class.

In this article, I'll show you how you can get your programs working even if you're using the Thread class, and then we'll go on to look at the SwingWorker solution.

For demonstration purposes, I've created a little Swing program. As you can see, it consists of two labels and a start button. At the moment, clicking the start button invokes a handler method which does nothing. Here's the Java code:

import java.awt.Font;
import java.awt.GridBagConstraints;
import java.awt.GridBagLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.List;
import java.util.concurrent.ExecutionException;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import javax.swing.SwingWorker;

public class MainFrame extends JFrame {

    private JLabel countLabel1 = new JLabel("0");
    private JLabel statusLabel = new JLabel("Task not completed.");
    private JButton startButton = new JButton("Start");

    public MainFrame(String title) {
        super(title);

        setLayout(new GridBagLayout());

        countLabel1.setFont(new Font("serif", Font.BOLD, 28));

        GridBagConstraints gc = new GridBagConstraints();
        gc.fill = GridBagConstraints.NONE;

        gc.gridx = 0;
        gc.gridy = 0;
        gc.weightx = 1;
        gc.weighty = 1;
        add(countLabel1, gc);

        gc.gridx = 0;
        gc.gridy = 1;
        gc.weightx = 1;
        gc.weighty = 1;
        add(statusLabel, gc);

        gc.gridx = 0;
        gc.gridy = 2;
        gc.weightx = 1;
        gc.weighty = 1;
        add(startButton, gc);

        startButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent arg0) {
                start();
            }
        });

        setSize(200, 400);
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        setVisible(true);
    }

    private void start() {
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            @Override
            public void run() {
                new MainFrame("SwingWorker Demo");
            }
        });
    }
}

We're going to add some code into the start() method which is called in response to the start button being clicked. First let's try a normal thread.

private void start() {
    Thread worker = new Thread() {
        public void run() {
            // Simulate doing something useful.
            for (int i = 0; i <= 10; i++) {
                // Bad practice
                countLabel1.setText(Integer.toString(i));

                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                }
            }

            // Bad practice
            statusLabel.setText("Completed.");
        }
    };

    worker.start();
}

As a matter of fact, this code seems to work (at least for me anyway). The program ends up looking like this:

This isn't recommended practice, however. We're updating the GUI from our own thread, and under some circumstances that will certainly cause exceptions to be thrown. If we want to update the GUI from another thread, we should use SwingUtilities to schedule our update code to run on the event dispatch thread. The following code is fine, but ugly as the devil himself.

private void start() {
    Thread worker = new Thread() {
        public void run() {
            // Simulate doing something useful.
            for (int i = 0; i <= 10; i++) {
                final int count = i;

                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        countLabel1.setText(Integer.toString(count));
                    }
                });

                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                }
            }

            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    statusLabel.setText("Completed.");
                }
            });
        }
    };

    worker.start();
}

Surely there must be something we can do to make our code more elegant?

The SwingWorker Class

SwingWorker is an alternative to using the Thread class, specifically designed for Swing.
It’s an abstract class and it takes two template parameters, which make it look highly ferocious and puts most people off using it. But in fact it’s not as complex as it seems. Let’s take a look at some code that just runs a background thread. For this first example, we won’t be using either of the template parameters, so we’ll set them both to Void, Java’s class equivalent of the primitive void type (with a lower-case ‘v’). Running a Background Task We can run a task in the background by implementing the doInBackground method and calling execute to run our code. SwingWorker<Void, Void> worker = new SwingWorker<Void, Void>() { @Override protected Void doInBackground() throws Exception { // Simulate doing something useful. for (int i = 0; i <= 10; i++) { Thread.sleep(1000); System.out.println('Running ' + i); } return null; } }; worker.execute(); Note that SwingWorker is a one-shot affair, so if we want to run the code again, we’d need to create another SwingWorker; you can’t restart the same one. Pretty simple, hey? But what if we want to update the GUI with some kind of status after running our code? You cannot update the GUI from doInBackground, because it’s not running in the main event dispatch thread. But there is a solution. We need to make use of the first template parameter. Updating the GUI After the Thread Completes We can update the GUI by returning a value from doInBackground() and then over-riding done(), which can safely update the GUI. We use the get() method to retrieve the value returned from doInBackground() So the first template parameter determines the return type of both doInBackground() and get(). SwingWorker<Boolean, Void> worker = new SwingWorker<Boolean, Void>() { @Override protected Boolean doInBackground() throws Exception { // Simulate doing something useful. for (int i = 0; i <= 10; i++) { Thread.sleep(1000); System.out.println('Running ' +. } } }; worker.execute(); What if we want to update the GUI as we’re going along? 
That’s what the second template parameter is for. Updating the GUI from a Running Thread To update the GUI from a running thread, we use the second template parameter. We call the publish() method to ‘publish’ the values with which we want to update the user interface (which can be of whatever type the second template parameter specifies). Then we override the process() method, which receives the values that we publish. Actually process() receives lists of published values, because several values may get published before process() is actually called. In this example we just publish the latest value to the user interface. SwingWorker<Boolean, Integer> worker = new SwingWorker<Boolean, Integer>() { @Override protected Boolean doInBackground() throws Exception { // Simulate doing something useful. for (int i = 0; i <= 10; i++) { Thread.sleep(1000); // The type we pass to publish() is determined // by the second template parameter. publish. } } @Override // Can safely update the GUI from this method. protected void process(List<Integer> chunks) { // Here we receive the values that we publish(). // They may come grouped in chunks. int mostRecentValue = chunks.get(chunks.size()-1); countLabel1.setText(Integer.toString(mostRecentValue)); } }; worker.execute(); More …. ? You Want More …. ? I hope you enjoyed this introduction to the highly-useful SwingWorker class. You can find more tutorials, including a complete free video course on multi-threading and courses on Swing, Android and Servlets, on my site Cave of Programming. Reference: Multi-threading in Java Swing with SwingWorker from our JCG partner John Purcell at the Java Advent Calendar blog. Very clear explanation! This is exactly what I was looking for, will try to use new swing knowledge :) hallo sir, sir, in your code: …. for (int i = 0; i <= 10; i++) … in a real program what is a number `0` and `10` means … ? Sire, think of 0 meaning you did not eat. think of 10 means you ate 10 eggs. Now it is clear to me. 
Thread.sleep(1000); why the thread need to sleep?

… there is no need. thread just adds your code latency. for instance:

for (int i = 0; 1 == 1; i++)
    System.out.println(i);

in this code, your pc prints i values as fast as it can. but if you use thread sleep, i will be increased per second and i values will be printed per second like clock. 1000 means 1000 milliseconds, which means 1 sec

for (int i = 0; 1 == 1; i++) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println(i);
}

thank you aras.

I found this easier to understand than Oracle's own SwingWorker tutorial, cheers!

Very lucid and to the point explanation. I had seen quite a few tutorials before this one but found this to be the most useful one. Thanks a lot for taking out time and efforts for this good work. Cheers, Ameya.

Hi! Thanks for sharing your knowledge. This is a very good article about SwingWorkers.

Superb tutorial thank you.

nice simplicity at its best :)

Great explanation! Thanks a lot!

How much data can be stored in chunks? If there is infinite loop in doInBackground(), can chunks get all the data. Namely, is chunks reliable to be buffer? Thanks, James

Usually not much … it will only save up the amount of data that has been published since the last GUI update ran, which will never be very much.
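The buffering behaviour raised in that last question can be sketched without Swing at all. The class below uses hypothetical stand-ins for publish() and process() (invented names, not SwingWorker's real internals) to show how several published values arrive as one chunk:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    private static final List<Integer> buffer = new ArrayList<>();

    // Stand-in for SwingWorker.publish(): just buffers the value.
    static void publish(int value) {
        buffer.add(value);
    }

    // Stand-in for process(): drains everything published since the last run.
    static List<Integer> process() {
        List<Integer> chunks = new ArrayList<>(buffer);
        buffer.clear();
        return chunks;
    }

    public static void main(String[] args) {
        publish(1);
        publish(2);
        publish(3);                       // three publishes before process() runs...
        List<Integer> chunks = process(); // ...arrive together as one chunk
        int mostRecent = chunks.get(chunks.size() - 1);
        System.out.println(chunks.size() + " " + mostRecent); // prints "3 3"
    }
}
```

This also shows why the buffer stays small: each process() run empties it, so it only ever holds what was published since the previous GUI update.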
http://www.javacodegeeks.com/2012/12/multi-threading-in-java-swing-with-swingworker.html/comment-page-1/
I'm wondering if anyone with a better understanding of python and gae can help me with this. I am uploading a csv file from a form to the gae datastore.

class CSVImport(webapp.RequestHandler):
    def post(self):
        csv_file = self.request.get('csv_import')
        fileReader = csv.reader(csv_file)
        for row in fileReader:
            self.response.out.write(row)

The call self.request.get('csv') returns a String. When you iterate over a string, you iterate over the characters, not the lines. You can see the difference here:

class ProcessUpload(webapp.RequestHandler):
    def post(self):
        self.response.out.write(self.request.get('csv'))
        file = open(os.path.join(os.path.dirname(__file__), 'sample.csv'))
        self.response.out.write(file)

        # Iterating over a file
        fileReader = csv.reader(file)
        for row in fileReader:
            self.response.out.write(row)

        # Iterating over a string
        fileReader = csv.reader(self.request.get('csv'))
        for row in fileReader:
            self.response.out.write(row)

Short answer, try this:

fileReader = csv.reader(csv_file.split("\n"))

Long answer, consider the following:

for thing in stuff:
    print thing.strip().split(",")

If stuff is a file pointer, each thing is a line. If stuff is a list, each thing is an item. If stuff is a string, each thing is a character. Iterating over the object returned by csv.reader is going to give you behavior similar to iterating over the object passed in, only with each item CSV-parsed. If you iterate over a string, you'll get a CSV-parsed version of each character.
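The character-vs-line difference is easy to see with a minimal, standard-library-only snippet (no GAE required):

```python
import csv

data = "a,b\nc,d"

# Iterating over the string feeds csv.reader one *character* per "line".
rows_from_string = list(csv.reader(data))

# Splitting into lines first feeds it one line at a time, as intended.
rows_from_lines = list(csv.reader(data.split("\n")))

print(rows_from_string[0])  # ['a']  -- just the first character
print(rows_from_lines)      # [['a', 'b'], ['c', 'd']]
```

This is exactly why the `csv_file.split("\n")` fix works: csv.reader accepts any iterable that yields lines, and a list of lines satisfies that where a raw string does not.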
https://codedump.io/share/AJAkbkOEEvet/1/upload-and-parse-csv-file-with-google-app-engine
ThreadCtl(), ThreadCtl_r()

Control a thread

Synopsis:

#include <sys/neutrino.h>

int ThreadCtl( int cmd, void * data );

int ThreadCtl_r( int cmd, void * data );

Since: BlackBerry 10.0.0

Arguments:

- cmd
- The command you want to execute; one of the following:
  - _NTO_TCTL_ALIGN_FAULT
  - _NTO_TCTL_IO
  - _NTO_TCTL_IO_PRIV (Since: BlackBerry 10.1.0)
  - _NTO_TCTL_NAME
  - _NTO_TCTL_ONE_THREAD_CONT
  - _NTO_TCTL_ONE_THREAD_HOLD
  - _NTO_TCTL_RCM_GET_AND_SET
  - _NTO_TCTL_RUNMASK
  - _NTO_TCTL_RUNMASK_GET_AND_SET
  - _NTO_TCTL_RUNMASK_GET_AND_SET_INHERIT
  - _NTO_TCTL_THREADS_CONT
  - _NTO_TCTL_THREADS_HOLD

  For more information, see below.
- data
- A pointer to data associated with the specific command; see below.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

These kernel calls allow you to make OS-specific changes to a thread. The ThreadCtl() and ThreadCtl_r() functions are identical except in the way they indicate errors. See the Returns section for details. The sections that follow describe the possible commands.

_NTO_TCTL_ALIGN_FAULT

ThreadCtl(_NTO_TCTL_ALIGN_FAULT, data)

This command controls the response to a misaligned access. The data argument must be a pointer to an int whose value indicates how you want to respond:

- greater than 0 — make a misaligned access fault with a SIGBUS, if the architecture permits it.
- less than 0 — make the kernel attempt to emulate an instruction with a misaligned access. If the attempt fails, it also faults with a SIGBUS.
- 0 — don't change the alignment-fault handling for the thread.

The function sets data to a positive or negative number, indicating the previous state of the alignment-fault handling.

- Threads created by the calling thread inherit the _NTO_TCTL_ALIGN_FAULT status.
- On ARMv6 and ARMv7 targets, the _NTO_TCTL_ALIGN_FAULT command is ignored; you can't control the alignment-fault behavior on a per-thread basis on these targets.
You can set the global behavior by specifying the -ad or -ae option to procnto*:

- -ad
- Causes alignment faults for all threads (the default).
- -ae
- Performs hardware misaligned accesses and causes alignment faults only for certain misaligned accesses that can't be performed by hardware.

_NTO_TCTL_IO

ThreadCtl(_NTO_TCTL_IO, 0)

Superlock the process's memory and request I/O privileges; let the thread execute the in, ins, out, outs, cli, and sti I/O opcodes on architectures where it has the appropriate privilege, and let it attach IRQ handlers. If a thread attempts to use these opcodes without successfully executing this call, the thread faults with a SIGSEGV when the opcode is attempted. A lack of I/O privileges may also cause a SIGILL signal.

- In order to use this command, your process must have the PROCMGR_AID_IO ability enabled. For more information, see procmgr_ability().
- Threads created by the calling thread inherit the _NTO_TCTL_IO status.

_NTO_TCTL_IO_PRIV

ThreadCtl(_NTO_TCTL_IO_PRIV, 0)

This command is similar to _NTO_TCTL_IO, but _NTO_TCTL_IO_PRIV also requests that the thread be put into System (privileged) execution mode on ARM targets.

- In order to use this command, your process must have the PROCMGR_AID_IO ability enabled. For more information, see procmgr_ability().
- Threads created by the calling thread inherit the _NTO_TCTL_IO_PRIV status.

_NTO_TCTL_NAME

ThreadCtl(_NTO_TCTL_NAME, data)

Set or retrieve the name of the current thread. The data argument must be a pointer to a _thread_name structure, which is defined as follows:

struct _thread_name {
    int new_name_len;
    int name_buf_len;
    char name_buf[1];
};

The name_buf member is a contiguous buffer that extends the structure; name_buf_len is the size of this buffer. If you're setting or deleting the thread's name, the old name is copied as a NULL-terminated string into name_buf, up to the number of bytes specified by name_buf_len.
- Currently, the thread names are limited to _NTO_THREAD_NAME_MAX.
- You can also use the pthread_getname_np() and pthread_setname_np() functions instead of calling ThreadCtl() directly.

Here's an example:

#include <stdio.h>
#include <string.h>
#include <sys/neutrino.h>
#include <stdlib.h>

int main()
{
    struct _thread_name *tname;
    int size;

    size = sizeof(*tname) * 2 + _NTO_THREAD_NAME_MAX * sizeof(char);
    tname = malloc(size);
    if (tname == NULL) {
        perror("malloc");
        return EXIT_FAILURE;
    } else {
        memset(tname, 0x00, size);
        tname->name_buf_len = _NTO_THREAD_NAME_MAX;

        /* To change the name, put the name into name_buf and set
           new_name_len to the length of the new name. */
        strcpy(tname->name_buf, "Hello!");
        tname->new_name_len = strlen(tname->name_buf);
        if (ThreadCtl(_NTO_TCTL_NAME, tname) == -1) {
            perror("ThreadCtl()");
            return EXIT_FAILURE;
        } else {
            printf("The old name was: '%s'.\n", tname->name_buf);
        }

        /* To get the current name, set new_name_len to -1. */
        tname->new_name_len = -1;
        if (ThreadCtl(_NTO_TCTL_NAME, tname) == -1) {
            perror("ThreadCtl()");
            return EXIT_FAILURE;
        } else {
            printf("The current name is: '%s'.\n", tname->name_buf);
        }

        /* To delete the name, set new_name_len to 0. */
        tname->new_name_len = 0;
        if (ThreadCtl(_NTO_TCTL_NAME, tname) == -1) {
            perror("ThreadCtl()");
            return EXIT_FAILURE;
        } else {
            printf("The old name was: '%s'.\n", tname->name_buf);
        }

        free(tname);
    }

    return EXIT_SUCCESS;
}

_NTO_TCTL_ONE_THREAD_CONT

ThreadCtl(_NTO_TCTL_ONE_THREAD_CONT, data)

Unfreeze the thread with the given thread ID, which was frozen by an earlier _NTO_TCTL_ONE_THREAD_HOLD command. The data is the thread ID, cast to be a pointer (i.e. (void *) tid). This command returns an error of ESRCH if there's no thread with an ID of tid.

_NTO_TCTL_ONE_THREAD_HOLD

ThreadCtl(_NTO_TCTL_ONE_THREAD_HOLD, data)

BlackBerry 10 OS Programmer's Guide.

_NTO_TCTL_RUNMASK

ThreadCtl(_NTO_TCTL_RUNMASK, data)

Set the processor affinity for the calling thread in a multiprocessor system.
The data is the runmask, cast to be a pointer (i.e. (void *) runmask). Each set bit in runmask represents a processor that the thread can run on. By default, a thread's runmask is set to all ones, which allows it to run on any available processor. A value of 0x01 would, for example, force the thread to run only on the first processor.

You can use _NTO_TCTL_RUNMASK to optimize the runtime performance of your system by, for example, relegating nonrealtime threads to a specific processor. In general, this shouldn't be necessary, since the BlackBerry 10 OS realtime scheduler always preempts a lower-priority thread immediately when a higher priority thread becomes ready. The main effect of processor locking is the effectiveness of the CPU cache, since threads can be prevented from migrating.

Threads created by the calling thread don't inherit the specified runmask.

_NTO_TCTL_RUNMASK_GET_AND_SET

Thread.

_NTO_TCTL_RUNMASK_GET_AND_SET_INHERIT

ThreadCtl(_NTO_TCTL_RUNMASK_GET_AND_SET_INHERIT, data)

Manipulate the calling thread's runmask and inherit mask. The data argument must be a pointer to a struct _thread_runmask. Conceptually, this structure consists of these members:

- size
- runmask
- inherit_mask

However, the size of the masks (and hence the size of the structure) depends on the number of processors on your system. We've defined the following macros to make it easier for you to work with this structure:

- RMSK_SIZE(num_cpu)
- Determine the size of the masks. You can find the number of CPUs on your system in _syspage_ptr->num_cpu.
- RMSK_SET(cpu, p)
- Set the bit for cpu (where cpu is zero-based) in the mask p.
- RMSK_CLR(cpu, p)
- Clear the bit for cpu (where cpu is zero-based) in the mask p.
- RMSK_ISSET(cpu, p)
- Determine the value of the bit for cpu in the mask p.

The _NTO_TCTL_RUNMASK_GET_AND_SET_INHERIT command saves the values for both masks at the time of the call in their respective members of this structure.
If you pass 0 for the masks, the masks are left unaltered; otherwise they're set to the specified value(s). Here's an example:

    /* ... the current
     * values without alteration. */
    if (ThreadCtl(_NTO_TCTL_RUNMASK_GET_AND_SET_INHERIT, rsizep) == -1) {
        perror("_NTO_TCTL_RUNMASK_GET_AND_SET_INHERIT");
        free(freep);
        return 1;
    }

    /*
     * Restrict our inherit mask to the last cpu; leave the ...
     */

Threads created by the calling thread aren't frozen.

Blocking states:
These calls don't block.

Returns:
The only difference between these functions is the way they indicate errors.

Errors:
- E2BIG - The name is larger than the accepted size.
- EINVAL - The name buffer length is invalid or smaller than the new name length, or the specified runmask is invalid.
- EPERM - The calling process doesn't have the required permission; if you're using _NTO_TCTL_IO, see procmgr_ability().
- ESRCH - There's no thread with an ID of tid.

Classification:

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/t/threadctl.html
Closed Bug 961506 Opened 7 years ago Closed 7 years ago

Add a margin for .menulist-label on menulist .folderMenuItem

Categories (Thunderbird :: Theme, defect)
Tracking (Not tracked)
Thunderbird 29.0
People (Reporter: Paenglab, Assigned: Paenglab)
Details
Attachments (2 files, 1 obsolete file)

In several folder-list menulists, like in message filters, the label is directly attached to the icon. #locationFolders already had a fix for this. Moving this to a global place like messenger.css would allow us to fix this on all menulists with class folderMenuItem. This patch moves the rule to messenger.css and makes it more global by using the selector menulist.folderMenuItem. The second rule makes the folder-list menuitems without an icon as tall as the ones with icons.

Assignee: nobody → richard.marti
Status: NEW → ASSIGNED
Attachment #8362258 - Flags: ui-review?(josiah)
Attachment #8362258 - Flags: review?(josiah)

Could you give me a screenshot or something for the ui-review? Thanks.

Comparison between no patch (left) and with patch (right). In the upper half, the menuitem without an icon (now as tall as the ones with icons). At the bottom, the gap between icon and text on the menulist button.

I need to say this will have full effect on all folder menulists when bug 878805 is also checked in.

Comment on attachment 8362258 [details] [diff] [review]
patch

Review of attachment 8362258 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good to me. ui-r+ and r+ with one comment.

::: mail/themes/windows/mail/messenger.css
@@ +158,5 @@
> +%ifndef WINDOWS_AERO
> +  padding-top: 2px;
> +  padding-bottom: 2px;
> +%endif
> +%ifdef WINDOWS_AERO

You should be able to shorten this to:

%ifndef WINDOWS_AERO
stuff
%else
stuff
%endif

But double check that it works. We only use the %else in IM theme code, so I can't be sure.
Attachment #8362258 - Flags: ui-review?(josiah) Attachment #8362258 - Flags: ui-review+ Attachment #8362258 - Flags: review?(josiah) Attachment #8362258 - Flags: review+ (In reply to Josiah Bruner [:JosiahOne] from comment #4) > You should be able to shorten this to: > > %ifndef WINDOWS_AERO > stuff > %else > stuff > %endif Yeah, this is a lot cleaner. Attachment #8362258 - Attachment is obsolete: true Attachment #8363137 - Flags: ui-review+ Attachment #8363137 - Flags: review+ Status: ASSIGNED → RESOLVED Closed: 7 years ago Resolution: --- → FIXED Target Milestone: --- → Thunderbird 29.0
https://bugzilla.mozilla.org/show_bug.cgi?id=961506
If you haven't checked out FastAPI as an alternative to Flask, take a look at it, and you'll be pleasantly surprised by how capable, modern, and cool it is. I'm not going to talk about FastAPI itself here, but I'll explain how to get a simple "Hello World" application running on Google's App Engine.

For this example, I'm going to be using App Engine's Python 3 Standard Environment. Deploying to the Flexible Environment should be very similar.

You'll need to create three files:

requirements.txt - Here, you'll list your required libraries so App Engine can prepare the environment to run your application. Here's what's needed for this file:

    fastapi
    uvicorn
    gunicorn

Whether or not you pin the version of each library (e.g. gunicorn==20.0.4) is not relevant now. Either way works.

Then you need an app.yaml file. This is the configuration of your application. Here we need to specify the runtime we are going to be using, and the entry point for App Engine to provision a new instance:

    runtime: python37
    entrypoint: gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker

The uvicorn worker is the one that will allow us to run a FastAPI application. Also, notice I'm specifying that 4 workers (-w 4) should be serving the app. This number of workers should match the instance size of your App Engine deployment, as explained in Entrypoint best practices.

Finally, we need a main.py file containing a FastAPI object called app:

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/")
    async def index():
        return "Hello World!"

To deploy the application, and assuming you have the Cloud SDK installed and initialized, you can run the following command:

    gcloud app deploy app.yaml

This command should deploy the application to App Engine. From there, you can visit the URL configured for your project, and you should get the "Hello World!" text back.
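As an aside, the reason gunicorn's UvicornWorker can serve a FastAPI app is that a FastAPI application is just an ASGI callable. The following stdlib-only sketch imitates what the server does when it drives such a callable — note that the hand-rolled `app` below is an illustration of the ASGI contract, not FastAPI code:

```python
import asyncio

# An ASGI application: an async callable taking (scope, receive, send).
# FastAPI builds one of these for you; this hand-rolled version just
# answers every HTTP request with "Hello World!".
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello World!"})

async def drive_app():
    """Call `app` the way a server would, collecting what it sends."""
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(drive_app())
print(messages[1]["body"])  # b'Hello World!'
```

In production you never write this driving code yourself — that is exactly the job of the uvicorn workers named in the app.yaml entrypoint.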
https://www.svpino.com/running-a-python-fastapi-application-in-app-engine
According to the standard:

    The values of the members of the execution character set are implementation-defined. (ISO/IEC 9899:1999 5.2.1/1)

    ...the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous. (ISO/IEC 9899:1999 5.2.1/3)

In other words, the standard only pins down the ordering of the decimal digits; orderings such as 'a' < 'b', or whether 'A' < 'a' or 'a' < 'A', depend on the execution character set. So given char c1; char c2;, comparing c1 < c2 is not a portable way to order them alphabetically — even 'a' < 'b' is not guaranteed by the standard.

For A-Z, a-z in a case-insensitive manner (and using compound literals):

    char ch = foo();
    az_rank = strtol((char []){ch, 0}, NULL, 36);

For two char values that are known to be A-Z, a-z but may be ASCII or EBCDIC:

    int compare2alpha(char c1, char c2)
    {
        int mask = 'A' ^ 'a';  // Only 1 bit is different between upper/lower
        return (c1 | mask) - (c2 | mask);
    }

Alternatively, if limited to 256 different char values, you could use a look-up table that maps each char to its rank. Of course, the table is platform dependent.
https://codedump.io/share/grtiou73PcTa/1/is-there-a-simple-portable-way-to-determine-the-ordering-of-two-characters-in-c
Accessing ALL points of a contour

I made the exterior contour using the following:

    import cv2
    import numpy as np

    def contoursConvexHull(contours):
        pts = []
        for i in range(0, len(contours)):
            for j in range(0, len(contours[i])):
                pts.append(contours[i][j])
        pts = np.array(pts)
        result = cv2.convexHull(pts)
        return result

    image = cv2.imread(args["image"])
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY)[1]
    imageCanny = cv2.Canny(blurred, 100, 200, 3)

    img, contours, hierarchy = cv2.findContours(imageCanny, cv2.RETR_EXTERNAL,
                                                cv2.CHAIN_APPROX_SIMPLE)

    ConvexHullPoints = contoursConvexHull(contours)
    cv2.polylines(image, [ConvexHullPoints], True, (0, 255, 255), 2)

and accessed the center of the contour using the following:

    c = max(contours, key=cv2.contourArea)
    M = cv2.moments(c)
    if M["m00"] != 0:
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])
        print(cX, cY)
    else:
        cX, cY = 0, 0
        print(cX, cY)

My questions:

1. Since my image is not continuous, am I right in using convexHull and polylines() to draw the contour?
2. If I am right in using the above functions, how can I access ALL the points on the convexHull contour? Currently, if I use

    cpts = []
    cpts.append(ConvexHullPoints)
    cpts = np.array(cpts)
    print(cpts)

to see the points, I only get an array of 35 points.

Your convex hull just has a few points (the "corners", if you will). Looking at it, I'd even expect fewer than 35, more like 12 points. Why do you think there should be more? Again, it's the hull, not a contour.

Thanks for the comment! What confuses me is that, since all points on the image are not connected, how can I make a contour surrounding just the image? (Similar to what the hull gives, but a contour.)

Sorry, I don't understand what you're trying to achieve here.

I am trying to see all the black doublets in a straight line. I have tried using toPolar() but it gives me a distorted image like this.
Since this approach didn't work well, I am now trying to extract each doublet (which will be contained in a "pizza slice" like shape, centered at the center of the image) and place them next to each other.
https://answers.opencv.org/question/169492/accessing-all-points-of-a-contour/?sort=oldest
CodeGuru Forums > Visual C++ & C++ Programming > Managed C++ and C++/CLI (archive, page 11 of 17): dll not found error creating a dll Weird void** ? memory leaks ComboBox to CSpinButtonCtrl Bug in toolbar under WinXP? 2-nd question about problem with C++ Wizard string to bstr Closing a program Passing __gc[] into a function help!! Passing ostream object to an constructor of unmanaged class... Passing std::string to constructor of unmanaged class Equivalent for CHR$ in VB Bluetooth Connectivity Converting to Decimal Question Class wizard not working in hybrid code Getting Binary Resultset from MS SQL (ODBC) How to acquire real-time images from webcam and audio input fr mic using visual C++? API for accessing "Add or Remove Programs" panel put in a form in a win32 application? VC .Net 7.0 freezes Convert a String to a const char property sheet Newbie to VC#.Net Where is the add-in wizard? Createing gif images (lots of jpgs to Gif) New And Need Help Visual C++ 2005 Express Beta how can I change a old visual c++ 6.0 Application in a Managed Extensions?? MSN Chat room created in what language? Passing the address of a textbox opacity effect Custom String DateFormating in VC++.Net how to dock a window? Conversion Please help.... MDX Issue How to access the DirectX AppWizard in VC++.NET? PictureBox control in C++ .Net Buod Error 2440 Culture SENSITIVE regex split? Edit binary file How to use OLE automation to open any version of Excel? ActiveX and ATL Graphics decleration??? Choose Between Menus with GetMenu() Date Formating form the Custom String Get memory data from the memory window in VS .NET 2003? Capture a key stroke How do I scale my windows form to be full screen no matter the resolution? Events - User Interface Objects How to save a drawing to the file how to deploy mfc application? Hhhheeeeelllppppp!!!!!
ComboBox events are not called Passing Data Between Forms Compile C++ with Visual C++.NET Calling unmanaged dll class functions from managed dll Accessing Form proprty from a function Active Document Server HELP! Get handle to Active control how to get rid of dialogs How to scale size of windows form based on screen resolution? Debug assertion failed How can I position the caret in a text box to also be at the bottom of the text box? Dragging a borderless Form.... Reference Problem Assignment to structure value does not work Setup API question Migration to .NET 2003 : Help Me !!! Linkage Woes, I think extern is the problem Will mixing managed and unmanaged code a danger? How to control the microphone volume using DirectX C++? Passing String in function issue What error is that? What error is that? How to debug a dll in VC.Net Access violation reading location why control ID is not updated? visual c++ to visual c.net Object reference not set to an instance of an object ?? Service Controller Permissions Help with class inheritance Newbie resource question 'variable' : bit fields are not supported Play .wav files How to display "Someting" in console ?? Class Library (.NET) - Creating a DLL, exporting&importing functions Convert VC++ 6.0 App to VC++.NET App Please help me with casting File Browsing API get absolute path from relative path ? Application Crash can i run exe files written in .NET on win98 win95 win2000? How can I check if a file excists in a Dir Problem opening binary file High quality bitmap resizing without aspect ratio limitations How can I call another application from my application? crash caused by optimization User-defined type in MFC ActiveX Control Help with my code Makefile project ? 
Adjusting volume level of microphone using Visual C++ VC++ IDE Not responding .NET compiling templates with Typedef Getting Start Trying to figure out easiest way to pass values between forms Calling a MATLAB .m file from VC++ .NET StdAfx.h usage question Still got system command problem How to play AVI file in full screen? Redefining/Passing Through Win API Functions Edit and continue wierdness. Migrating Microsoft eMbedded Visual C++ Projects array access violation C++ DLL CString to LPCSTR Is it necessary to release every interface we called? What interfaces do I need to call before I can use IBasicAudio to control volume? VC++.Net selection NumericUpDown Event to capture Key Pressed Custom Control - TreeView VC++.NET ignores preprocessor definitions ReleaseComObject() and Dispose pattern association Debugging out-of-process-server Multiline source code Visual Studio IDE Member Combo Box How to get the name of the files in the zip file! Are both the same? pointer by reference How to create a pop up windows in a windows form? avoid repaint in child window Where to get the CLSID? Run Time Environment Error Visual Basic forms in Visual C++ .NET Urgent: Error connecting to SQL Server 2000! Controlling webcam through tcp/ip using Visual C++.net Mouse events using MFC VC++ TRUST .net IP Address Resolving reversed wchar to String* or to bstrval?? Visual Basic forms in Visual C++ Copy files. System::Console::WriteLine not working in Release Build .net 2005 Beta 2-Unmanaged lib with managed console app with global delete override cannot convert from unsigned char * to unsigned char __gc[] Accessing a pictureBox from outside its owner form class Timer Core? Using STL and Managed C++ together - how? A Picture in DataGrid Cell sample codes for playing AVI LNK2019 unresolved _main Finding screen size... FileLoadException problem Strange Stack Overflow???????????? 
Pixel size dependant on character, font and font size Tricky memory access problem filestream/binarywriter issue How to use unmanaged Gdiplus::Bitmap in managed C++? Mixing managed and standard classes in a static library Where delete unmanaged member in managed Forms class? Managing several internet connections Distributing a C++ .NET Project DataTable problems asynchronous writting Image acquisition from webcam how to call C# code from C++ problem with pointer in managed c++ How to check is string consist of digits or characters Howto add cwnd to arraylist? Why so slow? Convert char* to string* cannot declare a managed variable in unmanaged classs sscanf use in String type? Imagelist and borders Using Array Of Structures In A Class Separate Class Files in Namespaces my brain stops working!! test - please ignore - ADMIN Anyone willing to open my C++ .NET project? CPP .NET Socket Send problem unable to run .exe from a network drive AVI bug Help on c++ Window Form and Handle Problem Setting Location Of Controls Picturebox resize handles Nunit or CppUnit for C++ .Net? Using the same context menu for different objects Joystick mapping help Read cell value in Excel Order of #include's wrong? how to Create MSVC++.NET Class library DLL? initializing arrays not right? HWND problem not understood? Updating DataGrid Layout A Basic Question about Managed Code COM Interop and referenced numeric datatypes Easy Question ??Editboxx?? Can we Run .net executable in unix? Overloading the [] operator save files without using Dialogs? function pointers/passing arrays? Help Me Please ! How to include STL (list) in the Cpp project under .NET Importing resources? .NET issues with InterlockedExchange64 Calling Unmanaged Code from DLL Implementing Auto Hide Button on a Form Proper Multithread in C++ .Net guide!! Drwaing a diagonal in a rectangle Launch app from form button Debugging unmanaged Need help: Calling managed types from unmanaged namespaces Moving/Resizing a Borderless window... 
linkedlist.h error Drawing an image in the background? Giving My Form a Mac OS X look/skin Xerces parser replacement with MSXML parser Getting frame from digital camera socket->Receive() never 0 Datagrid in VC++.NET .NET questions from a newbie Exporting a class from MFC based DLL to .NET application export a string variable from DLL C++ classes vector problem in win32 DLL Console::Writeline method Simulating user events MessageBoxA ?? Compilation Problem with Visual C++ Managed Project Forms.. Forms.. Forms... Accessing CString object in Managed Code Dot Net Installation Path link error: com_issue_error Convert System::Byte[] to a Hex string Starting to learn? SORRY! =\ new op (malloc) causes System.StackOverflowException While sending mail error load .wav from resource to DIRECTSOUNDBUFFER Implementing Interface Recommendation C2064 when according to MSDN everythign is aight. struct and managed types in managed C++ Image in ListView Subitem webservice client possibly simple how to use copy constructor FolderBrowserDialog RootFolder Value Use Buttonless MessageBox for C++ .NET ? creating a child window in .NET Visual C++ .Net XML InsertBefore Problem Accessing control from another file codeguru.com
http://www.codeguru.com/forum/archive/index.php/f-17-p-11.html
Setting Up A New Program A new program, as defined in this section, is a new year of Google Summer of Code. Here are the steps outlined in this section in the order they need to be completed: - Become a Program Administrator - Create a new program - Edit the program's timeline - Edit the program's settings - Create and then add the appropriate documents for the program - Edit the messages for the program - Ask for modification of the logos and graphics for the program - Modify the active program information on the /site/edit page How to Become A Program Administrator Ask a Melange developer to create a new user and then to designate the given individual as a Program Administrator. At this time, only a member of the Google Open Source Programs team can become a Program Administrator. How to Create a New Program To create a program the Program Administrator should go to: Admin Dashboard -> Program settings -> Create a Program in a new or existing instance of Melange. This will bring you to the Create a new program page. Fill in the appropriate fields and press the Submit button. Please note that you should enter the "Age as of" field on the Create a new program page in the format MM/DD/YYYY. You will be redirected to the Edit program settings page of the new program instance. How to Edit the Program Once the Melange developer has created the new program, the Program Administrator has many options available. Most of a program’s data can be edited via the links in the Program settings section of the Admin Dashboard. The majority of the text that will need to be updated is stored in the Edit program settings and List of Documents links. These can be edited on the Edit program messages page and the List of Documents. There are two types of data on the Edit program settings page, documents and non-document data. 
Documents, as explained in this section, are a list of webpages that exist on Melange for a specific program and are created initially outside of Melange in a text editor. A program’s non-document data (for example, numbers of allowed students and organizations, latest allowed student birthdate, etc.) can be updated directly on the Edit program settings page. Please note that there is a checkbox for "Messaging enabled" on this page which does not currently have any function; you can safely ignore it. How to edit/view documents for the program You should create new documents when creating a new program. Create the following documents in a text editor (outside of Melange) before going into Edit program settings: - Organization Admin Agreement - Mentor Agreement - Student Agreement - About Page - Events Page - Connect with Us Page - Help Page Once all of your documents have been created in a text editor go to: Admin Dashboard -> Program settings -> Edit program settings Scroll down to the documents that need to be updated (starting with Organization Admin Agreement). All the documents that are needed to run the program are listed on this page. Click on Click here to edit this document under each corresponding text field on the Edit program settings page. You will be brought to a WYSIWYG (What You See Is What You Get) editor in Melange to edit the document. Cut and paste your appropriate document text into the Title and Content Fields. Press Submit and you'll be taken back to the Edit program settings page. Then in the field corresponding to the document start typing any part of the document title and an auto-complete drop down appears. Choose the correct document from the drop down. Be sure to save the program settings form every time you change the document field. If you'd like to preview your documents before they go live, you can do so by changing the word "edit" on your web browser's URL to "show". 
For example:

The documents are always viewable to anyone with the document's URL. On the document edit page you are also able to change the locations where the document will appear. If you tick any of the boxes, the document will also be specifically available on that user's dashboard in the Important documents section, as well as still viewable to anyone with the document's URL. Before hitting Submit, you must make sure all the text fields have been populated.

When the program is underway you can easily review the documents in a list view:

- Go to Admin Dashboard
- Click on List of Documents under Program settings
- Click on the name of the document you wish to view under 'Title'

Once you have successfully added the URLs for the new documents, continue to fill out the rest of the Edit program settings (Max slots per organization/for the program).

How to add new documents for the program

In the event that a Program Administrator wants to create a document that is not linked from the Edit program settings page, she would visit the following URL:(current year id)/<name of new document to create>

This will automatically create the document with the name listed above. Pressing Submit will save the document. It will thereafter appear in the List of documents from the Admin Dashboard.

How to Change the Program Timeline

The Program Timeline defines when a specific program begins/ends, student work periods, etc.

- Go to Admin Dashboard
- Program Settings -> Edit program settings -> Edit Timeline
- Update the fields and Submit.

Times are all in Coordinated Universal Time (UTC).

How to edit messages for the program

After you have set up the documents and settings for the program, you should edit the messages for the program. Go to the Admin Dashboard -> Program settings -> Edit messages section.
You will find text editing fields for the emails that will be sent to:

- Accepted organizations
- Rejected organizations
- Accepted Mentors
- Accepted Students
- Rejected Students

We recommend that you use the following text for your template form in order to send a custom email message to each accepted organization:

Accepted organizations message

Your Organization Application for "{{ org }}" in {{ program_name }} has been accepted. Please click {{ url }} to fill in the necessary information and create your Organization. Best regards, {{ sender_name }}

"org" is the name that was filled in by the Organization Administrator in the "Organization Name" field of the organization application. "program name" is the Name listed on the Edit program settings page. "url" is the direct URL for the organization to fill in its organization profile. The link is specific to each organization. "sender_name" is the Site name listed on the Edit site settings page at. The fields within the curly brackets are customizable and will be filled in as appropriate by Melange depending on the recipient. You can add more welcome text to this message if you choose.

Rejected organizations message

Thank you for submitting "{{ org }}" organization application to {{ program_name }}. Unfortunately, we were unable to accept your organization's application at this time. We received many more applications for the program than we are able to accommodate, and we would encourage you to reapply for future instances of the program.

Accepted mentors message

You can include information about participating as a Mentor generally to help Mentors get started with the program.

Dear {{ to_name }}, Congratulations! You have been accepted as a Mentor for {{ org_entity.name }}. With best regards, The Google Summer of Code Program Administration Team

The fields within the curly brackets are customizable and will be filled in as appropriate by Melange depending on the recipient.
Please see the notes above for definitions of each of the types of fields. You can add more welcome text to this message if you choose.

Accepted students message

Dear {{ to_name }}, Congratulations! Your proposal "{{ proposal_title }}" as submitted to "{{ org_entity.name }}" has been accepted for {{ program_name }}. Over the next few days, we will add you to the private {{ program_name }} Student Discussion List. Over the next few weeks, we will send instructions to this list regarding turning in proof of enrollment, tax forms, etc. Now that you've been accepted, please take the opportunity to speak with your mentors about plans for the Community Bonding Period: what documentation should you be reading, what version control system will you need to set up, etc., before the start of coding begins on xxxx Date. {% if org_entity.accepted_student_msg %}The organization has added the following message: {{ org_entity.accepted_student_msg|safe }}{% endif %} Welcome to {{ program_name }}! We look forward to having you with us. With best regards, The Google Summer of Code Program Administration Team

Please note in the example above that you will need to fill in the section that says "xxxx Date." There is not currently a way to refer to variables in the program timeline in the messages system. The fields within the curly brackets are customizable and will be filled in as appropriate by Melange depending on the recipient. You can add more welcome text to this message if you choose.

Rejected students message

Dear {{ to_name }}, Thank you for applying to {{ program_name }}.
Your proposal "{{ proposal_title }}" submitted to "{{ org_entity.name }}" was not selected for the program this year. We annually receive many more proposals than we are able to accept, and we would like to encourage you to apply again next year. {% if org_entity.rejected_student_msg %}The organization has added the following message: {{ org_entity.rejected_student_msg|safe }}{% endif %} With best regards, The Google Summer of Code Program Administration Team

The fields within the curly brackets are customizable and will be filled in as appropriate by Melange depending on the recipient. You can add more rejection text to this message if you choose. A student will receive one rejection email for each rejected proposal.

Custom content

It is possible to put your own messages rather than the example contents which are shown above. Program Administrators can put dynamically generated variables in the double curly brackets. Here is a short description of the fields which are available.

Default context which can be used in any program message:
- {{ sender_name }}: the official name of the site, as defined in Site settings
- {{ program_name }}: the full name of the program

Context specific to "Accepted organizations message":
- {{ org }}: Organization ID which was entered in the organization application
- {{ url }}: URL to create the Organization's profile

Context specific to "Rejected organizations message":
- {{ org }}: Organization ID which was entered in the organization application

Context specific to "Accepted students message":
- {{ to_name }}: given name of the student
- {{ proposal_title }}: name of the proposal
- {{ org_entity.name }}: name of the organization for the accepted proposal
- {{ org_entity.accepted_student_message }}: Mentoring Organization's message that is sent out to accepted Students

Context specific to "Rejected students message":
- {{ to_name }}: given name of the student
- {{ proposal_title }}: name of the proposal
- {{ org_entity.name }}: name of the
organization for the rejected proposal
- {{ org_entity.rejected_student_message }}: Mentoring Organization's message that is sent out to Students with rejected proposals

Not all of these variables must actually be defined. For instance, an Organization may decide not to set a custom message to rejected students. If a variable only has a value for some users, wrap uses of it within a conditional guard as follows:

{% if variable_name %} Additional information provided by your organization about your proposal is: {{ org_entity.rejected_student_message }} {% endif %}

Changing Logos and Program Graphics

Ask a Melange developer - program graphics and layout within Melange are not customizable and generally require code and behavioral changes to update. Program banners are the easiest to update.

How do I modify the landing page for visitors to Melange?

By default, there is no active program when you visit the main URL for Melange. In order to make your program appear on the landing page for Melange, you will need to set your new program as the "active program" in the Edit site settings page. Go to the site edit page:. Find the field titled Latest gsoc, and enter the field in the format google/<Program id>. The Program id is what you specified when you created the new program. Please also make sure the program is marked "visible" on the Program Profile (Admin Dashboard -> Program Settings -> Edit program settings) on the drop-down for Program Status. The user will receive an "Access Denied" error if the program is invisible and they try to visit the homepage of the program. Finally, please note that the timeline for your program will determine when the program begins and when the landing page images display the "Currently Active!" banner.
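Conceptually, Melange fills each {{ variable }} in the message templates above from a per-recipient context. The following Python sketch imitates just that substitution step; it is illustrative only — `render` is a hypothetical helper, and Melange's real engine is Django templates, which also handle {% if %} guards and filters like |safe:

```python
import re

def render(template, context):
    """Replace each {{ name }} with context[name]; unknown names become empty."""
    def substitute(match):
        return str(context.get(match.group(1), ""))
    # Matches {{ to_name }}, {{ org_entity.name }}, and so on.
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)

template = 'Dear {{ to_name }}, your proposal "{{ proposal_title }}" was accepted.'
print(render(template, {"to_name": "Ada", "proposal_title": "Zip Filters"}))
# Dear Ada, your proposal "Zip Filters" was accepted.
```

This also shows why a variable with no value simply disappears from the rendered message — which is exactly the situation the conditional-guard advice above is meant to handle.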
http://archive.flossmanuals.net/melange/setting-a-new-program
As I mentioned in an earlier post we've been parsing XML documents with the Clojure zip-filter API and the next thing we needed to do was create a new XML document containing elements which needed to be inside a namespace. We wanted to end up with a... So now I've covered the ring buffer itself, reading from it and writing to it. Logically the next thing to do is to wire everything up together. Java was designed with the principle that you shouldn't need to know the size of an object. There are times when you really would like to know and want to avoid the guesswork... Creating a custom plugin for the Sonar platform is very easy. If you are not satisfied with the several built-in plugins or you need something special you can easily create and use your own. This tutorial will walk you through how to use the Ext JS 4 File Upload Field in the front end and Spring MVC 3 in the back end. Most teams have High-Level Tests in what they call Functional Tests, Integration Tests, End-to-End Tests, Smoke Tests, User Tests, or something similar. These tests are designed to exercise as much of the application as possible. Recently, I was preparing a connection checker for Deployit's powerful remote execution framework Overthere. To make the checker as compact as possible, I put together a jar-with-deps for distribution. AspectJ is the most powerful AOP framework in the Java space; Spring is the most powerful enterprise development framework in the Java space. It's no surprise that combining the two should lead to wonderful things... In this article I'm going to show a... On the internet, Java interview questions and answers get copied from one web site to another. This can mean that an incorrect or out of date answer might never be corrected. Here are some questions and answers which are not quite correct or are now out...
The comments of course have been first rate, with opinions on the wish-list ranging from outright agreement to threats of violence for even having such boneheaded ideas. It's all good... For some time I have been working on developing a Java web app using Spring MVC & Hibernate, and as many will have discovered, this throws up lots of questions with unit testing. To increase my coverage (and general test confidence) I decided to implement...
http://java.dzone.com/frontpage?page=804
CC-MAIN-2013-48
refinedweb
440
71.95
# How To Implement JavaScript Utility Functions Using Reduce?

![](https://habrastorage.org/r/w780q1/webt/nt/wx/am/ntwxamceo9jxqsz_sswodcrfmuo.jpeg)

When it comes to code in JavaScript, developers find the reduce function one of the toughest concepts to crack. According to Wikipedia, Reduce goes by multiple names: Accumulate, Fold, Compress and Aggregate. These names clearly indicate the meaning and working of the reduce function. The idea behind it is to break down a structure into a single value. Hence, Reduce can be defined as a function which converts a list into any data type.

**For example, you can reduce the array [5,4,3,2,1] into the value 15 by just adding the elements.**

The reduce function keeps developers from having to write a loop in order to fold a list into a single value. In this blog, you will learn ways to implement well-known functions using reduce, as developers in a [top software development company](https://www.valuecoders.com/) already do.

**I have listed out 10 JavaScript utility functions recreated using the reduce function. So, check out these functions below:**

* Map
-----

### Parameters used

array (to transform list of items), transformFunction (a function to run on each element)

### Working

By using the given transformFunction, each element in the given array gets transformed, and a new array of items is returned.

### How to implement?

```
const map = (transformFunction, array1) =>
  array1.reduce((newArray1, xyz) => {
    newArray1.push(transformFunction(xyz));
    return newArray1;
  }, []);
```

### Use case:

```
const double = (x) => x * 2;
const reverseString = (string) =>
  string
    .split('')
    .reverse()
    .join('');

map(double, [200, 300, 400]);
// [400, 600, 800]

map(reverseString, ['Hello Alka', 'I love cooking']);
// ['alkA olleH', 'gnikooc evol I']
```

* Reject
--------

### Parameters used

array (list of items/values to filter), predicate (function returning true or false)

### Working

Reject behaves like filter, but with the predicate logic inverted.
If the predicate returns false, the item is added to the new array; otherwise the item is excluded from the new array.

### How to implement?

```
const reject = (predicate, arr3) =>
  arr3.reduce((newArray, val3) => {
    if (predicate(val3) === false) {
      newArray.push(val3);
    }
    return newArray;
  }, []);
```

### Use case:

```
const isEven = (z) => z % 2 === 0;
const equals4 = (z) => z === 4; // added: the original omitted this definition

reject(isEven, [1, 6, 4, 3]); // [1, 3]
reject(equals4, [4, 2, 4, 3]); // [2, 3]
```

* Scan
------

### Parameters used

array (list of items), reducer (a function which receives two parameters, i.e. the accumulator and the current element from the array)

### Working

It works like reduce, but instead of returning a single value as the result, it returns the list of every intermediate reduced value leading to that single output.

### How to implement?

```
const scan = (reducer, initialVal, array) => {
  const reducedValues = [];
  array.reduce((acc, currentval) => {
    const newAcc = reducer(acc, currentval);
    reducedValues.push(newAcc);
    return newAcc;
  }, initialVal);
  return reducedValues;
};
```

### Use case:

```
const add = (y, z) => y + z;
const multiply = (y, z) => y * z;

scan(add, 0, [1, 2, 3, 4]); // [1, 3, 6, 10]
scan(multiply, 1, [1, 2, 3, 4]); // [1, 2, 6, 24]
```

* Filter
--------

### Parameters used

array (to filter list of items), predicate (a function returning true or false)

### Working

Here, you will get a new array as the output. If the predicate function returns true then the item is added to the new array; if it returns false the item is excluded from the new array.

### How to implement?
```
const filter = (predicate, arr1) =>
  arr1.reduce((newArray, val) => {
    if (predicate(val) === true) {
      newArray.push(val);
    }
    return newArray;
  }, []);
```

### Use case:

```
const isEven = (y) => y % 2 === 0;
const equals3 = (y) => y === 3; // added: the original omitted this definition

filter(isEven, [3, 2, 5]); // [2]
filter(equals3, [7, 1, 3, 6, 3]); // [3, 3]
```

* None
------

### Parameters used

array (list of items to test), predicate (function returning true or false)

### Working

None returns true if the predicate returns false for every item, and returns false as soon as the predicate returns true for any item.

### How to implement?

```
// Fixed: the original accumulated with (!acc && !predicate(val)) starting
// from false, which gives wrong results; start from true and AND the
// negated predicate instead.
const none = (predicate, array) =>
  array.reduce((acc1, val1) => acc1 && !predicate(val1), true);
```

### Use case:

```
const isEven2 = (x) => x % 2 === 0;
const sequl3 = (x) => x === 3; // added: the original omitted this definition

none(isEven2, [1, 3, 5]); // true
none(isEven2, [1, 3, 4]); // false
none(sequl3, [1, 2, 4]); // true
none(sequl3, [1, 2, 3]); // false
```

* Partition
-----------

### Parameters used

array (contains a list of items), predicate (function returning true or false)

### Working

It splits an array into two based on the predicate. If the predicate returns true for an item, the item goes to list1; otherwise it goes to list2. Splitting an array this way is a common pattern. Let's take a look at the implementation:

### How to implement?
```
// Fixed: the original destructured from an undefined name (result);
// the accumulator parameter is result3.
const partition = (predicate, array) =>
  array.reduce(
    (result3, item) => {
      const [list1, list2] = result3;
      if (predicate(item) === true) {
        list1.push(item);
      } else {
        list2.push(item);
      }
      return result3;
    },
    [[], []]
  );
```

### Use case:

```
const isEven = (z) => z % 2 === 0;
const equals3 = (z) => z === 3; // added: the original omitted this definition

partition(isEven, [1, 2, 3]); // [[2], [1, 3]]
partition(isEven, [1, 3, 5]); // [[], [1, 3, 5]]
partition(equals3, [1, 2, 3, 4, 3]); // [[3, 3], [1, 2, 4]]
partition(equals3, [1, 2, 4]); // [[], [1, 2, 4]]
```

* All
-----

### Parameters used

array (to test the list of items), predicate (a function returning true or false)

### Working

All returns true only if the predicate returns true for every input value; otherwise it returns false.

### How to implement?

```
const all = (predicate, array) =>
  array.reduce((arr, val) => arr && predicate(val), true);
```

### Use case:

```
const sequl3 = (x) => x === 3;

all(sequl3, [3]); // true
all(sequl3, [3, 3, 3]); // true
all(sequl3, [1, 2, 3]); // false
all(sequl3, [3, 2, 3]); // false
```

* Some
------

### Parameters used

array (to test the list of items), predicate (a function returning true or false)

### Working

For any input value, if the predicate returns true, then some will return true. Otherwise, it returns false.

### How to implement?

**Let’s take an example for it:**

```
const some = (predicate, array) =>
  array.reduce((arc, val) => arc || predicate(val), false);
```

### Use case:

```
const aqua3 = (x) => x === 3;

some(aqua3, [3]); // true
some(aqua3, [3, 3, 3]); // true
some(aqua3, [1, 2, 3]); // true
some(aqua3, [2]); // false
```

* Pluck
-------

### Parameters used

array (list of objects), key (the key name to pluck from each object)

### Working

It plucks the given key off each item in the array and returns a new array of the corresponding values.

### How to implement?
```
// Fixed: the original pushed onto an undefined name (values);
// the accumulator parameter is values3.
const pluck = (key3, array) =>
  array.reduce((values3, current) => {
    values3.push(current[key3]);
    return values3;
  }, []);
```

### Use case:

```
pluck('name', [{ name: 'Soman' }, { name: 'Rovin' }, { name: 'Jojo' }]);
// ['Soman', 'Rovin', 'Jojo']

pluck(0, [[1, 2, 3], [4, 5, 6], [7, 8, 9]]);
// [1, 4, 7]
```

* Find
------

### Parameters used

array (list of items to search), predicate (function returning true or false)

### Working

It returns the first element which matches the given predicate; if no match is found, undefined is returned.

### How to implement?

```
const find = (predicate, array) =>
  array.reduce((output, item) => {
    if (output !== undefined) {
      return output;
    }
    if (predicate(item) === true) {
      return item;
    }
    return undefined;
  }, undefined);
```

### Use case:

```
const isEven = (a) => a % 2 === 0;
const equals3 = (a) => a === 3; // added: the original omitted this definition

find(isEven, []); // undefined
find(isEven, [1, 2, 5]); // 2
find(isEven, [5, 3, 7]); // undefined
find(equals3, [5, 2, 3, 4, 3]); // 3
find(equals3, [7, 2, 4]); // undefined
```

Final Note:
-----------

This is how you can implement JavaScript utility functions using reduce in less time. This will definitely help software developers save time as well as coding effort. In case you need support for your coding queries, you can contact an expert software development company for your project needs.
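In the same spirit as the ten functions above (though not part of the original list), pipe and compose can also be written with reduce and reduceRight; this is my own sketch:

```javascript
// Two more reduce-based utilities in the article's style:
// pipe applies functions left to right, compose right to left.
const pipe = (...fns) => (input) =>
  fns.reduce((acc, fn) => fn(acc), input);

const compose = (...fns) => (input) =>
  fns.reduceRight((acc, fn) => fn(acc), input);

const double = (x) => x * 2;
const increment = (x) => x + 1;

console.log(pipe(double, increment)(5));    // (5 * 2) + 1 = 11
console.log(compose(double, increment)(5)); // (5 + 1) * 2 = 12
```

The only difference between the two is the direction of folding, which reduce and reduceRight give you for free.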
https://habr.com/ru/post/476830/
null
null
1,267
52.9
import urllib2
from BeautifulSoup import BeautifulSoup

event_url = ''
soup = BeautifulSoup(urllib2.urlopen(event_url))
event_info = soup.findAll("dl", { "class" : "clearfix" })
s = BeautifulSoup(str(event_info))
l = s.findAll("dt")
for dt in l:
    print dt

<dt>Where:</dt>
<dt> </dt>
<dt>When:</dt>
<dt> </dt>
<dt>Website:</dt>
<dt>Contact:</dt>

for dt in l:
    m = str(dt).replace('<dt>', '')
    clear_dt = m.replace('</dt>', '')
    print clear_dt  # print as string without <dt> & </dt>

Statistics: Posted by yuyb0y — Sat Apr 18, 2015 11:34 am

>>> import re
>>> re.sub(r'(\t\t[0-9]{1,3}\t\t)', r'TEXT\1', s)
'TEXT\t\t24\t\tblah blah blahTEXT\t\t56\t\t'

Statistics: Posted by stranac — Sat Apr 18, 2015 10:12 am
Statistics: Posted by stranac — Sat Apr 18, 2015 10:02 am
Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:58 am
Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:53 am

pritesh wrote: Skaperen - You've mentioned that I should learn Python 3. But all the modules I need are still not ported to Python 3, hence my choice of Python 2. Please let me know if it's otherwise.

Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:45 am

Kebap wrote: Seems legit, if you do not want to convert any scripts for now. Still, valuable information may be in documents written for this purpose. These documents also list the differences you search, reasons for the version switch, etc. Just ignore the other parts..
Source: Statistics: Posted by Skaperen — Sat Apr 18, 2015 9:35 am

import urllib2
from bs4 import BeautifulSoup

event_url = ''
soup = BeautifulSoup(urllib2.urlopen(event_url))
event_info = soup.find('dl', class_='clearfix')  # keyword is class_, not _class
if event_info:
    dt_list = [dt.text.strip() for dt in event_info.find_all('dt')]
    print dt_list

Statistics: Posted by buran — Sat Apr 18, 2015 6:55 am
Statistics: Posted by pritesh — Sat Apr 18, 2015 6:23 am

Abbeville TX Bakerhill TX Abernant GA Bangor GA Alabaster AL Berry AL Alabaster AL Berry AL Abernant GA Bangor GA Abbeville TX Bakerhill TX

Statistics: Posted by farook — Sat Apr 18, 2015 5:56 am
Statistics: Posted by blackystrat — Sat Apr 18, 2015 5:04 am

from __future__ import division

def wave1(lamb, conv):
    v = (299792458)/lamb
    E = (6.62606957E-34) * v
    if conv == "joules":
        print "The frequency is",v,"in Hz, and the energy is",E,"in joules."
    elif conv == "eV":
        print "The frequency is",v,"in Hz and the energy is", E/(1.602E-19),"in eV."
    elif conv == "e_mass":
        print "The frequency is",v,"in Hz and the energy is",(E/(1.602E-19)) * 5.11E6,"in electron mass."
    else:
        print "Might want to check your spelling."

Statistics: Posted by Fred Barclay — Fri Apr 17, 2015 11:36 pm
Statistics: Posted by Fred Barclay — Fri Apr 17, 2015 11:30 pm
Statistics: Posted by Jonty — Fri Apr 17, 2015 11:26 pm

Heading 1, Heading 2, Heading 3, Average, Statistical analysis
1, 4, 7, Average of this row, statistical analysis
2, 5, 8, Average of this row, statistical analysis
3, 6, 9, Average of this row, statistical analysis

Statistics: Posted by pynew — Fri Apr 17, 2015 11:21 pm
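The last post above appears to be asking how to append a per-row average to CSV-style data. A minimal sketch of one way to do that (mine, in Python 3, whereas the threads above use Python 2):

```python
# Sketch for the last question above: append each row's average to
# CSV-style data. Python 3 (the forum posts themselves are Python 2).
import csv
import io

data = "Heading 1,Heading 2,Heading 3\n1,4,7\n2,5,8\n3,6,9\n"

reader = csv.reader(io.StringIO(data))
header = next(reader) + ["Average"]
rows = []
for row in reader:
    values = [float(v) for v in row]
    rows.append(row + [sum(values) / len(values)])

print(header)   # ['Heading 1', 'Heading 2', 'Heading 3', 'Average']
print(rows[0])  # ['1', '4', '7', 4.0]
```

From there, csv.writer can write the augmented rows back out, and any per-row statistics can be appended the same way as the average.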
http://www.python-forum.org/feed.php
CC-MAIN-2015-18
refinedweb
529
62.07
CodePlex Project Hosting for Open Source Software

I have published a working copy to my host and I am getting the following error. Does anybody have any ideas?

Do you have an old version of NuGet in the GAC?

I have the following NuGet assembly in the GAC: Version = 1.0.11220.104, Runtime = 4.0.30319

Can you verify you have Nuget.Core, v1.1.0.0 in the App_Data\Dependencies folders? My guess is that what is happening is that instead of loading the assembly from "App_Data\Dependencies", the CLR is trying to load another version of nuget.core.dll from another location. The GAC is the most likely one, but it might be that you have a copy of Nuget.Core.dll in the "Bin" folder of the app, or that there is an assembly rebinding in web.config, or a publisher policy redirecting Nuget.Core 1.1 to some other version. Following the instructions there might help diagnose the problem:

HTH, Renaud

Thanks. I corrected the problem by adding a reference to NuGet from the App_Data\Dependencies folder. I am running into another problem. I published the site to my host and everything is working fine. The issue that I am having is when I try to compile it in Visual Studio 2010. It errors out when it compiles modules, saying "The type or namespace name 'AmazonCheckout' could not be found (are you missing a using directive or an assembly reference?) c:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\root\4560eb12\3590e43e\App_Web_svn5akli.0.cs"

The AmazonCheckout module happens to be my first module. Please advise.

You are probably missing the assembly reference in the project, as the error message says.

There are lots of modules in the module folder but none of them have a bin directory. I don't see how to set the reference. I don't know where the dlls for these modules are. Please advise. Thanks.

References are not set by dropping a dll into bin (you must be confusing this with web site projects). It is done by adding it to the project file for the module.
From VS, right-click References and add a reference.

I understand that, but here is the problem. When I right-click and add a reference, I need to have a dll to add to the project, and I could not find any of the dlls in the module folder — in this case something like 'AmazonCheckout.dll'.

You might want to contact the owners of that module:

Is there something wrong with a setting or configuration?

This time I have created a new project from scratch using WebMatrix and Orchard CMS. After I installed Orchard CMS from WebMatrix, I clicked the Visual Studio 2010 button on top of WebMatrix and just tried to compile. First it gave an error saying a reference to 'Orchard.Blogs' was missing. I added the reference, then it started complaining about the following:

Error 2 Object reference not set to an instance of an object. C:\Documents and Settings\himam\My Documents\My Web Sites\DezignerSarees\Modules\Orchard.Blogs\Orchard.Blogs.csproj 1

I have not modified anything or added anything — just trying to compile out of the box. Very frustrating. Any help will be really appreciated.

You will have to choose between running the WebPI/WebMatrix version and letting the application compile modules dynamically on its own, or using the full source code and compiling from Visual Studio.

I want to use Visual Studio. Here is what I did: I opened Visual Studio, opened the project as a web site (since there is no solution file), and then tried to compile; I got the error mentioned above. What am I doing wrong? Is there any setting I have to worry about? Thanks for your help. I like the product and I want to use it to develop an e-commerce web site for my business. I don't know if it matters, but my day job is software development; I have been doing it for the last 10 years. Thanks again.

Right, if you are going to use Visual Studio, please use the full source code and open the solution file.

But there is no solution file. Do I have to convert it to have a solution file?
There is no solution file because you did not download the full source code.

Thanks. Now I understand how it works.
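Renaud's list above mentions "an assembly rebinding in web.config" as one possible culprit. For reference, a binding redirect of that kind looks roughly like the sketch below; the publicKeyToken is a placeholder, and the version numbers merely illustrate redirecting older NuGet.Core builds to the 1.1.0.0 copy in App_Data\Dependencies.

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Illustrative values only: substitute the real public key
             token reported by a tool such as "gacutil /l NuGet.Core". -->
        <assemblyIdentity name="NuGet.Core"
                          publicKeyToken="PUBLIC_KEY_TOKEN_HERE"
                          culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-1.1.0.0"
                         newVersion="1.1.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

A stray redirect like this (or a publisher policy doing the same thing) would explain the CLR picking a different nuget.core.dll than the one shipped with the app.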
https://orchard.codeplex.com/discussions/265610
CC-MAIN-2017-04
refinedweb
771
66.74
SClib 1.0.0: wrapper for C functions

SClib
=====

A simple hack that allows easy and straightforward evaluation of C functions within Python code, boosting flexibility for a better trade-off between computation power and feature availability, such as visualization and existing computation routines in SciPy.

At the core of SClib [#]_ is ctypes [Hell]_, which actually does the whole work: it maps Python data to C compatible data and provides a way to call functions in DLLs or shared libraries. SClib acts as glue: it puts things together for the user, to provide him with an easy to use interface.

.. [#] The code for SClib and example use are available at <https: github.

The requirements for SClib are very simple: call a function on an array of numbers of arbitrary type and size and return the output of the function, again of arbitrary type and size. The resulting interface is also very simple: a library is initialized on the Python side with the path to the DLL (or shared library) and a list with the names of the functions to be called:

.. code-block:: python

    In [1]: import SClib as sc
    In [2]: lib = sc.Clib('test.so', ['fun'])

The functions are then available as members of the library and can be called with the appropriate number of arguments, which are one-dimensional arrays of numbers. The function returns a list containing the output arrays of the function:

.. code-block:: python

    In [3]: out, = lib.fun([0])

In the C counterpart, the function declaration must be accompanied by specifications of the input and output lengths and types. This is accomplished with the helper macros defined in sclib.h:

.. code-block:: c

    #include <sclib.h>

    PYO(fun, 1, 1);
    PYO_TYPES(fun, 1, INT);
    PYI(fun, 1, 1);
    PYI_TYPES(fun, 1, INT);

    void fun(int * out, int * in) {
        *out = 42;
    }

An arbitrary number of inputs or outputs can be specified, for example:

.. code-block:: c

    #include <math.h>
    #include <sclib.h>

    PYO(fun, 2, 1, 2);
    PYO_TYPES(fun, 2, INT, FLOAT);
    PYI(fun, 2, 1, 2);
    PYI_TYPES(fun, 2, INT, FLOAT);

    void fun(int * out0, float * out1, int * in0, float * in1) {
        *out0 = 42*in0[0];
        out1[0] = in1[0]*in1[1];
        out1[1] = powf(in1[0], in1[1]);
    }

In the function declaration, all the outputs must precede the inputs and must be placed in the same order as in the PY macros. These specifications are processed at compilation time, but only the number of inputs and outputs is static; the lengths of each component can be overridden at run time:

.. code-block:: python

    In [4]: lib.INPUT_LEN['fun'] = [10, 1]
    In [5]: lib.retype()

In these use cases the length of the arguments should be given to the function through an extra integer argument. In the function body, both inputs and outputs should be treated as one-dimensional arrays.

- Downloads (All Versions):
  - 10 downloads in the last day
  - 50 downloads in the last week
  - 59 downloads in the last month
- Author: drestebon
- Keywords: C libraries
- License: GPLv2
- Categories
- Package Index Owner: estebon
- DOAP record: SClib-1.0.0.xml
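As the description says, ctypes does the actual work of mapping data and calling into the shared library. For comparison, this is roughly what a raw ctypes call looks like without SClib's glue; it uses the C library's abs() as a stand-in for a user-compiled shared object, since the test.so from the example is not available here.

```python
# A raw-ctypes sketch of what SClib automates: declare a C function's
# argument/return types, then call it. Uses libc's abs() as a stand-in
# for a user-compiled shared library.
import ctypes

# On POSIX systems, loading None gives access to symbols already linked
# into the process, which includes the C library.
libc = ctypes.CDLL(None)
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42

# Passing a one-dimensional array of numbers, as SClib's interface does:
arr = (ctypes.c_int * 3)(1, 2, 3)
print(list(arr))  # [1, 2, 3]
```

SClib's contribution on top of this is generating the argtypes/restype declarations from the PYI/PYO macros so the user never writes them by hand.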
https://pypi.python.org/pypi/SClib
CC-MAIN-2015-11
refinedweb
517
59.13
SETBUF(3V)                                                          SETBUF(3V)

NAME
     setbuf, setbuffer, setlinebuf, setvbuf - assign buffering to a stream

SYNOPSIS
     #include <stdio.h>

     void setbuf(stream, buf)
     FILE *stream;
     char *buf;

     void setbuffer(stream, buf, size)
     FILE *stream;
     char *buf;
     int size;

     int setlinebuf(stream)
     FILE *stream;

     int setvbuf(stream, buf, type, size)
     FILE *stream;
     char *buf;
     int type, size;

DESCRIPTION
     The three types of buffering available are unbuffered, block buffered,
     and line buffered.  When an output stream is unbuffered, information
     appears on the destination file or terminal as soon as written; when it
     is block buffered, many characters are saved up and written as a block;
     when it is line buffered, characters are saved up until a NEWLINE is
     encountered or input is read from stdin.  fflush() (see fclose(3V)) may
     be used to force the block out early.  A buffer is obtained from
     malloc(3V) upon the first getc(3V) or putc(3S) on the file.  By default,
     output to a terminal is line buffered, except for output to the standard
     stream stderr which is unbuffered.  All other input/output is fully
     buffered.

     setbuf() can be used after a stream has been opened but before it is
     read or written.  It causes the array pointed to by buf to be used
     instead of an automatically allocated buffer.  If buf is the NULL
     pointer, input/output will be completely unbuffered.  A manifest
     constant BUFSIZ, defined in the <stdio.h> header file, tells how big an
     array is needed:

          char buf[BUFSIZ];

     setbuffer(), an alternate form of setbuf(), can be used after a stream
     has been opened but before it is read or written.  It uses the
     character array buf whose size is determined by the size argument
     instead of an automatically allocated buffer.  If buf is the NULL
     pointer, input/output will be completely unbuffered.

     setvbuf() can be used after a stream has been opened but before it is
     read or written.  type determines how stream will be buffered.  Legal
     values for type (defined in <stdio.h>) are:

     _IOFBF  fully buffers the input/output.

     _IOLBF  line buffers the output; the buffer will be flushed when a
             NEWLINE is written, the buffer is full, or input is requested.

     _IONBF  completely unbuffers the input/output.

     If buf is not the NULL pointer, the array it points to will be used for
     buffering, instead of an automatically allocated buffer.  size
     specifies the size of the buffer to be used.
     setlinebuf() is used to change the buffering on a stream from block
     buffered or unbuffered to line buffered.  Unlike setbuf(), setbuffer(),
     and setvbuf(), it can be used at any time that the file descriptor is
     active.

     A file can be changed from unbuffered or line buffered to block
     buffered by using freopen() (see fopen(3V)).  A file can be changed
     from block buffered or line buffered to unbuffered by using freopen()
     followed by setbuf() with a buffer argument of NULL.

SYSTEM V DESCRIPTION
     If buf is not NULL and stream refers to a terminal device, setbuf()
     sets stream for line buffered input/output.

RETURN VALUES
     setlinebuf() returns no useful value.

     setvbuf() returns 0 on success.  If an illegal value for type or size
     is provided, setvbuf() returns a non-zero value.

SEE ALSO
     fclose(3V), fopen(3V), fread(3S), getc(3V), malloc(3V), printf(3V),
     putc(3S), puts(3S)

NOTES
     A common source of error is allocating buffer space as an "automatic"
     variable in a code block, and then failing to close the stream in the
     same block.

21 January 1990                                                     SETBUF(3V)
http://modman.unixdev.net/?sektion=3&page=setbuf&manpath=SunOS-4.1.3
CC-MAIN-2017-17
refinedweb
531
72.05
I have a vector of 2N lines where the second half (N lines) is basically the same as the first half but with a single character changed, e.g.: std::vector<std::string> tests{ // First half of lines with '=' as separator between key and value "key=value", … // Second half of lines with ' ' as separator .. Category : uniform-initialization If I'm not wrong, the (1,2,3,4,5) expression will evaluate to 5 (the order of evaluation is from left to right), so why doesn't int m{(1,2,3,4,5)} compile fine? I'm new to C++, please correct my understanding; I just know that an expression with the comma operator is evaluated from left to right and the .. Consider the following snippet: #include <string> #include <string_view> int main() { auto str = std::string{}; auto sv1 = std::string_view(str + "!"); // <- warning 🙂 std::string_view sv2(str + "!"); // <- warning 🙂 std::string_view sv3 = str + "!"; // <- warning 🙂 auto sv4 = std::string_view{str + "!"}; // <- no warning 🙁 std::string_view .. What is the difference between direct initialization and uniform initialization in C++? What is the difference between writing int a{5}; // Uniform and int a(5); // Direct Source: Windows Que.. The first two statements int a{ ld }; int b = { ld }; produce compiler errors C3297 and won’t compile, but the definition/initialization with the parentheses works. Why? #include<iostream> using namespace std; int main() { long double ld = 3.1415926536; int a{ ld }; int b = { ld }; int c(ld); int .. Why is it that in this case I cannot use uniform initialization of the member thing from variadic template parameters? #include <random> #include <utility> template <typename T> class Test { public: template <typename... Args> Test(Args... args) : thing { std::forward<Args>(args)... } {} private: T thing; }; int main() { std::mt19937 mt { 42 }; //fine .. I am working through a C++ book, to teach myself. The book I am working through talks about narrowing through type conversions.
It explains how a double can be narrowed to an int, and says "So what should you do if you think that a conversion might lead to a bad value? Use {} initializers .. Recently, I came across code like this: class NeedsFactoryForPublicCreation { private: struct Accessor { // Enable in order to make the compile fail (expected behavior) // explicit Accessor() = default; }; public: explicit NeedsFactoryForPublicCreation(Accessor) {} // This was intended to be the only public construction way static NeedsFactoryForPublicCreation create() { NeedsFactoryForPublicCreation result{Accessor()}; // … Do .. I was trying to test auto type deduction. Both Scott Meyers (Effective Modern C++) and Bjarne Stroustrup’s C++ Programming Language mention that doing auto val {10}; will deduce val to be of type "initialisation list". I read that this was changed in C++17 so that if there is only one element in the list, then .. I am very confused by C++ ways of initializing variables. What is the difference, if any, between these: int i; // does this make i uninitialized? int i{}; // does this make i = 0? std::array<int, 3> a; // is a all zeros or all uninitialized? std::array<int, 3> a{}; // same as above? Thanks for ..
https://windowsquestions.com/category/uniform-initialization/
CC-MAIN-2022-05
refinedweb
507
54.83
In Selenium, if elements are not found with locators like name, id, class, linkText, or partialLinkText, then XPath is used to find an element on the web page.

Table of Contents

What is XPath?

The basic syntax for XPath is explained below.

Syntax

The basic syntax of an XPath is:

//tag[@attributeName='attributeValues']

- //: It selects matching nodes anywhere in the document, starting from the current node.
- tagname: It is the tag name of a particular node.
- @: It is used to select an attribute.
- Attribute: It is the attribute name of the node.
- Value: It is the value of the attribute.

Types of XPath Expressions

There are two types of XPath expressions.

- Absolute XPath
- Relative XPath

Absolute XPath

It is the direct way to find the element on the web page. The main disadvantage of absolute XPath is that if developers make any change in the path of the element, then our written XPath will no longer work. The advantage of using absolute XPath is that it identifies the element very fast. Its main characteristic is that XPath expressions created using absolute XPath begin with a single forward slash (/), which means the selection begins from the root node.

Example:

/html/head/body/table/tbody/tr/th

If a tag is added between body and table as below:

/html/head/body/form/table/tbody/tr/th

the first path will no longer work, as a "form" tag has been added in between.

Relative XPath

A relative XPath is one where the path starts from the middle of the HTML DOM structure of your choice. It doesn't need to start from the root node, which means it can search for the element anywhere on the webpage.

Example

//input[@id='ap_email']

Now, let's understand with an example. Here we will launch Google Chrome, navigate to google.com, and try to locate the search bar using XPath. On inspecting the web element (search bar) you can see it has an input tag and attributes like id and class.
Now, we use the tag name and attributes to generate an XPath which in turn will locate the search bar.

Inspect Google Search XPath

Here, click the Elements tab and press Ctrl + F to open a search box in Chrome's developer tools. Next, you can write an XPath or string selector, and it will try to search for the element based on that criteria. As you can see in the above image, it has an input tag. Now I will start with //input. Here //input implies the tag name. Next, I will use the name attribute and pass 'q' in single quotes as its value. This gives the XPath expression below:

//input[@name='q']

Searchbox XPath Name

As you can see from the above image, the element is highlighted on writing the XPath, which implies that the particular element was located using XPath.

How to generate XPath

Usually, we generate XPath in two ways: manually, and by using inbuilt utilities. In the manual case, sometimes the HTML file is quite big or complex, and writing the XPath of each and every element manually would be quite a difficult task. In that case, there are certain utilities which can help us.

- Chrome Browser: It has an inbuilt utility to inspect and generate the XPath.

Example: In the below example, we open the Chrome browser and log in to the Facebook application by entering Email, Password and clicking the Log In button.

Facebook Login Page

To inspect the Email or phone web element, right-click on the Email or phone input box and select Inspect.

Inspect Email

Once you inspect the Email or phone web element, it will open the HTML DOM structure like below. To get the XPath of the Email or phone web element, right-click on the HTML structure, select Copy and click on Copy XPath. In this case, the XPath is:

//*[@id="email"]

To inspect the Password web element, right-click on the Password input box and select Inspect.

Inspect Password

Once you inspect the Password web element, it will open the HTML DOM structure like below.
Password HTML Structure

To get the XPath of the Password web element, right-click on the HTML structure, select Copy and click on Copy XPath.

Password XPath

In this case, the XPath is:

//*[@id="pass"]

To inspect the Log In button, right-click on the Log In button and select Inspect.

Inspect LogIn

Once you inspect the Log In button, it will open the HTML DOM structure like below.

LogIn HTML Structure

To get the XPath of the Log In button, right-click on the HTML structure, select Copy and click on Copy XPath.

Login XPath

In this case, the XPath is:

//*[@id="u_0_a"]

Java Selenium XPath Example

Here is the Java class for logging into Facebook using Selenium. We will use the earlier identified XPath expressions to send the values for login.

package com.journaldev.selenium.xpath;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RelativeXPath {

    public static void main(String[] args) {
        System.setProperty("webdriver.chrome.driver", "D:\\Drivers\\chromedriver.exe");
        WebDriver driver = new ChromeDriver();
        driver.get("");
        driver.manage().window().maximize();
        // XPath for Email field
        driver.findElement(By.xpath("//*[@id='email']")).sendKeys("xxx@gmail.com");
        // XPath for Password field
        driver.findElement(By.xpath("//*[@id='pass']")).sendKeys("xxxxxxx");
        driver.findElement(By.xpath("//*[@id=\"u_0_a\"]")).click();
    }
}

XPath Functions

Automation using Selenium is a great technology that provides many ways to identify an element on a web page. But sometimes we face problems in identifying elements on a page which have the same attributes. Some of the cases can be: elements having the same attributes and names, or more than one button with the same id and name. In those cases it's challenging for Selenium to identify a particular object on a web page, and this is where XPath functions come into the picture.

Types of XPath Functions

Selenium supports various XPath functions.
Below are the three functions which are widely used.

- contains()
- text()
- starts-with()

1. contains()

contains() is one of the functions used in XPath expressions. This method is used when the value of an attribute changes dynamically, for example login information. It can locate a web element from the available partial text. Following are examples of the contains() method.

- Xpath=.//*[contains(@name, 'button')]
- Xpath=//*[contains(@id, 'login')]
- Xpath=//*[contains(text(), 'testing')]
- Xpath=//*[contains(@href, '')]
- Xpath=//*[contains(@type, 'sub-type')]

2. starts-with()

starts-with() is the function used to find a web element whose attribute value changes on refresh or on any other dynamic operation on the web page. In this function, we match the starting text of the attribute to locate an element whose attribute changes dynamically — for example, if the id of a particular element changes dynamically on the web page, such as 'id1', 'id2', 'id3' and so on, but the leading text remains the same. Following is an example of a starts-with() expression.

- Xpath=//label[starts-with(@id, 'message')]

3. text()

The text() function is used to locate an element with the exact text. Below is an example of the text function.

- Xpath=//td

Conclusion

XPath is required to find an element on the web page in order to perform an operation on that particular element. XPath expressions select a node or list of nodes on the basis of attributes like ID, class name, name, etc. from an XML document.
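As a quick way to sanity-check the basic //tag[@attribute='value'] form outside a browser, Python's standard-library ElementTree supports this subset of XPath (though not the contains(), starts-with() or text() functions covered above). The example and its markup are mine, not from the article.

```python
# Sanity-check the //tag[@attribute='value'] locator form using only the
# Python standard library. ElementTree implements a limited XPath subset:
# attribute predicates work, but contains()/starts-with() do not.
import xml.etree.ElementTree as ET

page = """
<html>
  <body>
    <form>
      <input name="q" type="text" />
      <input name="btn" type="submit" />
    </form>
  </body>
</html>
"""

root = ET.fromstring(page)
matches = root.findall(".//input[@name='q']")
print(len(matches))            # 1
print(matches[0].get("type"))  # text
```

The `.//` prefix plays the same role as `//` in the article's relative XPath examples: search anywhere below the current node rather than from the root.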
Qml Canvas Dashed/Dotted Lines

Please help: I'm trying to add custom dashed lines to a QML Canvas item. Unfortunately, I'm really struggling with this process. According to the Qt 5.5 documentation, the W3C Canvas 2D Context API standard should be covered; the Context2D API implements the same W3C Canvas 2D Context API standard with some enhanced features.

In the example below, I am trying to draw a dashed black rectangle on top of the white rectangle. Ultimately, I want to be able to draw any path with a custom dash/dot effect. Here is a simplified version of my code:

import QtQuick 2.5

Item {
    id: mainView
    anchors.fill: parent

    Canvas {
        id: canvas
        anchors.fill: parent
        antialiasing: true
        smooth: true

        onPaint: {
            var ctx = canvas.getContext('2d')
            ctx.save()
            ctx.clearRect(0, 0, canvas.width, canvas.height)
            ctx.globalCompositeOperation = "source-over"
            ctx.lineWidth = 2
            ctx.lineWidth /= canvas.scale
            ctx.strokeStyle = "#ffffffff"
            ctx.strokeRect(10, 10, 100, 100)
            ctx.strokeStyle = "#ff000000"
            /*ctx.createPattern("#ff000000", Qt.Dense5Pattern)*/
            ctx.setLineDash([5, 15])
            ctx.strokeRect(10, 10, 100, 100)
        }
    }
}

I get the following error:

TypeError: Property 'setLineDash' of object [object Object] is not a function

As you can see, I tried the createPattern function. However, this really isn't what I want. I need a way to create custom dashed lines. Please tell me if there is something that I'm missing. I have the newest version of Qt (5.5), using Visual Studio 2013 as my compiler. I'm willing to create classes that inherit Qt classes such as QQuickItem or anything like that, but I need some direction right now. Thank you in advance to anyone who helps out.

I am curious about the response to this, as I had a similar need a few months back. I ended up doing a brute-force drawing of the dashed line with the following Component, but would prefer to use patterns as the OP was trying to do.
import QtQuick 2.4

Canvas {
    id: canvas
    anchors.fill: parent

    property real start_x: 0
    property real start_y: 0
    property real end_x: width
    property real end_y: height
    property bool dashed: true
    property real dash_length: 10
    property real dash_space: 8
    property real line_width: 2
    property real stipple_length: (dash_length + dash_space) > 0 ? (dash_length + dash_space) : 16
    property color draw_color: "white"

    onPaint: {
        // Get the drawing context
        var ctx = canvas.getContext('2d')
        // set line color
        ctx.strokeStyle = draw_color;
        ctx.lineWidth = line_width;
        ctx.beginPath();
        if (!dashed) {
            ctx.moveTo(start_x, start_y);
            ctx.lineTo(end_x, end_y);
        } else {
            var dashLen = stipple_length;
            var dX = end_x - start_x;
            var dY = end_y - start_y;
            var dashes = Math.floor(Math.sqrt(dX * dX + dY * dY) / dashLen);
            if (dashes == 0) {
                dashes = 1;
            }
            var dash_to_length = dash_length / dashLen
            var space_to_length = 1 - dash_to_length
            var dashX = dX / dashes;
            var dashY = dY / dashes;
            var x1 = start_x;
            var y1 = start_y;
            ctx.moveTo(x1, y1);
            var q = 0;
            while (q++ < dashes) {
                x1 += dashX * dash_to_length;
                y1 += dashY * dash_to_length;
                ctx.lineTo(x1, y1);
                x1 += dashX * space_to_length;
                y1 += dashY * space_to_length;
                ctx.moveTo(x1, y1);
            }
        }
        ctx.stroke();
    }
}
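The segment arithmetic in the brute-force component above is language-agnostic. Here it is extracted into plain Java so the endpoints it produces can be checked in isolation; the class and method names are mine, not from the Qt code:

```java
import java.util.ArrayList;
import java.util.List;

public class DashedLine {

    // One drawn dash segment: (x1, y1) -> (x2, y2).
    public record Segment(double x1, double y1, double x2, double y2) {}

    // Mirror of the QML onPaint loop: split the line (sx,sy)-(ex,ey)
    // into alternating drawn dashes and gaps.
    static List<Segment> dashes(double sx, double sy, double ex, double ey,
                                double dashLength, double dashSpace) {
        double stipple = dashLength + dashSpace;        // one dash+gap period
        double dX = ex - sx, dY = ey - sy;
        int n = (int) Math.floor(Math.hypot(dX, dY) / stipple);
        if (n == 0) n = 1;
        double dashFrac = dashLength / stipple;         // drawn fraction of a period
        double stepX = dX / n, stepY = dY / n;          // one full period as a vector
        List<Segment> out = new ArrayList<>();
        double x = sx, y = sy;
        for (int q = 0; q < n; q++) {
            double x2 = x + stepX * dashFrac, y2 = y + stepY * dashFrac;
            out.add(new Segment(x, y, x2, y2));         // drawn part
            x = x2 + stepX * (1 - dashFrac);            // skip over the gap
            y = y2 + stepY * (1 - dashFrac);
        }
        return out;
    }

    public static void main(String[] args) {
        // A 100-px horizontal line with 10-px dashes and 10-px gaps -> 5 dashes.
        for (Segment s : dashes(0, 0, 100, 0, 10, 10)) {
            System.out.println(s);
        }
    }
}
```

Note that, like the QML original, this distributes a whole number of dash+gap periods over the line, so the effective dash length is stretched slightly rather than leaving a partial dash at the end.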
Well House Consultants Samples

Notes from Well House Consultants

These notes are written by Well House Consultants and distributed under their Open Training Notes License. If a copy of this license is not supplied at the end of these notes, please visit for details.

Java Programming for the Web

Hello Java World

In this module we write the simplest of programs - a program which displays the words "Hello World". This shows you how to use the editor, compiler and runtime environment that you'll be using in subsequent modules, and gives you a minimal framework on which to expand.

A first program explained
A further program

2.1 A first program explained

Methods and classes

You write all your executable Java code in "methods", and all methods must be grouped into "classes". Optionally, you can group classes into "packages", but we won't do that until later on. A method is a piece of program code with a name. It can receive any number of input objects or primitives (known as parameters) when it is run, and it can return 0 or 1 object or primitive when it completes running. A class is a group of methods, all of which are concerned with objects of the same type. If you're already an experienced programmer in some other language, you may think that a method sounds very much like a function, proc, sub, subroutine, macro or command, and you would be right, although there is a little more to it. Java is an Object Oriented language, where you run a method on an object.
Let's see a first Java class and method in a form that we can run as if it were a stand-alone program:

// Tiniest of programs to check compile / interpret tools
public class Hello {
    public static void main(String[] args) {
        System.out.println("A program to exercise the Java tools");
    }
}

Blocks and statement structure

Within a Java source file, we group program elements into blocks within curly braces ( {} ), and blocks are frequently nested within blocks. Our simplest program has two nested blocks. The outer block defines the class (it's called "Hello") and the inner block defines a method called "main". We could add other methods if we wanted; they would go inside the outer block and outside the inner block. There is no practical limit to the number of methods we can define in a class, nor to the length of a block.

Within methods, our executable program code will consist of a series of statements, each of which ends with a ; character. Unless it specifies otherwise, each statement is performed in turn. The ; is mandatory, and you can put as much (or as little) white space as you like into a statement.

Declaring classes and methods

If you're declaring a class or a method, you have to tell Java the name that you want to associate with the class or method, and how accessible it is to be from elsewhere in your application.
Our class is declared in this example as

public class Hello

which means:
• It's available to any other class (public)
• It's a class (class)
• It's called Hello (Hello)

Our method is declared as

public static void main(String[] args)

which means:
• It's available to run from any other class (public)
• It's not dependent on any particular object (static)
• It doesn't pass anything back to the code that calls it (void)
• It's called main (main)
• It takes one parameter, an array of Strings that it will know as "args"

This combination of keywords and choices just happens to be what the JVM for stand-alone programs is looking for when it's run - i.e. a static method called "main" that takes a String array of parameters when it's called. If you vary any part of that specification, then you might still be able to compile correctly, but your code won't be able to run as a stand-alone program.

Within a statement

Each Java executable statement comprises a series of operators and operands:

System.out.println("A program to exercise the Java tools");

println and "." are operators. The string written in double quotes is an operand, as is System.out. When a statement is run, each of the operators operates on the operands before it, after it, or on both sides of it (depending on what the operator is). The result may form an operand which will be acted on further by subsequent operators. Our first sample statement tells Java to print out a copy of the String that's in the brackets to the System.out channel. println is just one of the thousands of methods available as standard in Java 1.4. We suggest you get yourself a good reference book that lists them all in some sort of logical order, as there's no way you'll remember them.

Reserved words

In our example, words like "public" and "class", "static" and "void" are understood by the Java Virtual Machine.
Words like "main" and "println" are not understood by the JVM, but are nevertheless unchangeable as they are part of the standard classes and methods provided. On the other hand, the words "Hello" and "args" are our choice, and we can change them if we wish. You must not use words that the JVM itself understands (they are "reserved words") for things you name yourself. You should also avoid using words that relate to standard classes and methods for things you name.

Commenting your source

The final element of our first program is the very first line. It appears to break the rules that we've given so far: it's not in a block, it doesn't seem to have operators, and it doesn't end in a semicolon. It's a comment. If the Java compiler comes across two slashes when it's "tokenizing" the source code, it ignores subsequent text up to the end of the line. This allows you to put reminders of how your code works and what it does into your source. You can also write a comment starting with /* in which case everything up to the following */ will be treated as a comment. It is vital that you comment any programs you write, providing information about what the program does, how it does it, notes of any tricks you've used, etc. Although it may take you a few seconds longer when you're actually writing the program, a few well-placed comments can save you and your colleagues hours of headache later when you come to maintain or enhance the code.
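Putting the last few points together, here is a small variation on Hello - this extension is mine, not from the course notes - showing a second method alongside main and both comment styles in use:

```java
/* HelloTwice - a variation on Hello with a second method.
   Everything here uses only the features introduced so far. */
public class HelloTwice {

    // A second method: it lives inside the class block, outside main's block.
    public static String greeting() {
        return "A program to exercise the Java tools";
    }

    public static void main(String[] args) {
        System.out.println(greeting());  // print the String returned by greeting()
        System.out.println(greeting());  // ...and print it a second time
    }
}
```

Note that greeting() is also declared public and static, so - like main - it can be called without creating any object first.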
The code in operation

Having explained the code in depth, let's see the whole program again, and then let's compile and run it:

bash-2.04$ cat Hello.java
// Tiniest of programs to check compile / interpret tools
public class Hello {
    public static void main(String[] args) {
        System.out.println("A program to exercise the Java tools");
    }
}
bash-2.04$ javac Hello.java
bash-2.04$ java Hello
A program to exercise the Java tools
bash-2.04$

2.2 A further program

You've seen a lot of structure to make up the simplest "hello world" program in Java. Fortunately we can now extend that to provide further functionality at very little cost, as in a Java program that uses just the first few features of the language covered so far to print out a series of lines of text:

Figure 1 Running public class Two
bash-2.04$ java Two
A program to exercise the Java tools
This is my starter
Still in my starter
I am here
Message from main
I am here
bash-2.04$

bash-2.04$ java Who
Well House Consultants
Graham Ellis
bash-2.04$

License

These notes are distributed under the Well House Consultants Open Training Notes License. Basically, if you distribute it and use it for free, we'll let you have it for free. If you charge for its distribution or use, we'll charge. Well House Consultants, Ltd., Melksham, Wiltshire, UK, SN12 6QL - phone number +44 (0) 1225 708225. Email contact - Graham Ellis (graham@wellho.net).
Other Aliases

smiInit, smiExit, smiSetErrorLevel, smiGetFlags, smiSetFlags, smiLoadModule, smiSetPath, smiReadConfig

SYNOPSIS

#include <smi.h>

int smiInit(const char *tag);
int smiExit();
void smiSetErrorLevel(int level);
int smiGetFlags();
void smiSetFlags(int userflags);
char *smiLoadModule(char *module);
int smiIsLoaded(char *module);
char *smiGetPath();
int smiSetPath(char *path);
int smiSetSeverity(char *pattern, int severity);
int smiReadConfig(char *filename, const char *tag);
void smiSetErrorHandler(SmiErrorHandler *smiErrorHandler);

typedef void (SmiErrorHandler) (char *path, int line, int severity, char *msg, char *tag);

DESCRIPTION

These functions provide some initialization and adjustment operations for the SMI library. The smiInit() function should be the first SMI function called in an application. It initializes the library's internal structures. If tag is not NULL, the global configuration file and (on UNIX systems) a user configuration file are read implicitly, if existent. All global statements and those statements with a tag (a ``tag: '' prefix) that matches the tag argument are executed (see also CONFIGURATION FILES below). smiInit() returns zero on success, or otherwise a negative value. The smiInit() function can also be used to support multiple sets of MIB data. In this case, the tag argument may be prepended by a colon and a name to differentiate the data sets. Any library function call subsequent to an smiInit("tag:dataset") call is using the specified data set. The smiExit() function should be called when the application no longer needs any SMI information, to release any allocated SMI resources. The smiSetErrorLevel() function sets the pedantic level (0-9) of the SMI parsers of the SMI library, currently SMIv1/v2 and SMIng. The higher the level, the louder it complains. Values up to 3 should be regarded as errors; higher levels can be interpreted as warnings. But note that this classification is largely a matter of taste.
The default level is 0, since usually only MIB checkers want to tune a higher level. The smiGetFlags() and smiSetFlags() functions allow the application to fetch, modify, and set some user flags that control the SMI library's behaviour. If SMI_FLAG_ERRORS is not set, no error messages are printed at all, keeping the SMI library totally quiet, which might be mandatory for some applications. If SMI_FLAG_STATS is set, the library prints some module statistics. If SMI_FLAG_RECURSIVE is set, the library also complains about errors in modules that are read due to import statements. If SMI_FLAG_NODESCR is set, no description and reference strings are stored in memory. This may save a huge amount of memory for applications that do not need this information. The smiSetSeverity() function sets the severity of all errors whose names are prefixed by pattern to the value severity. The smiLoadModule() function specifies an additional MIB module that the application claims to know, or an additional file path to read. Only after a module has been made known through this function will the iterating retrieval functions, and the retrieval functions without fully qualified identifiers, return results from this module. smiLoadModule() returns the name of the loaded module, or NULL if it could not be loaded. The smiIsLoaded() function returns a positive value if the module named module is already loaded, or zero otherwise. The smiGetPath() and smiSetPath() functions allow the application to fetch, modify, and set the path that is used to search for MIB modules. smiGetPath() returns a copy of the current search path in the form "DIR1:DIR2:...", or NULL if no path is set. The application should free this string if it is no longer needed. smiSetPath() sets the search path to path.
The smiSetErrorHandler() function allows the application to set a callback function that is called by the MIB parsers in place of the built-in default error handler, which prints error messages to stderr. The error handler has to comply with the SmiErrorHandler function type. The path, line, severity, msg, and tag arguments carry the module's pathname, the line number within the module, the error severity level, a textual error message, and a short error name of the error being reported.

MODULE LOCATIONS

The SMI library may retrieve MIB modules from different kinds of resources. Currently, SMIv1/v2 and SMIng module files are supported. If in an smiLoadModule() function call a module is specified by a path name (identified by containing at least one dot or slash character), this is assumed to be the exact file to read. Otherwise, if a module is identified by its plain module name, the corresponding file (either SMIv1/2 or SMIng) is searched along a path. This path is initialized with /usr/share/mibs/ietf:/usr/share/mibs/iana:/usr/share/mibs/irtf:/usr/share/mibs/site:/usr/share/mibs/tubs:/usr/share/pibs/ietf:/usr/share/pibs/site:/usr/share/pibs/tubs. Afterwards the optional global and user configuration files are parsed for `path' commands, and finally the optional SMIPATH environment variable is evaluated. The `path' command argument and the environment variable either start with a path separator character (`:' on UNIX-like systems, `;' on MS-Windows systems) to append to the path, or end with a path separator character to prepend to the path, or otherwise completely replace the path. The path can also be controlled by the smiGetPath() and smiSetPath() functions (see above). When files are searched by a given module name, they might have no extension or one of the extensions `.my', `.smiv2', `.sming', `.mib', or `.txt'. However, the MIB module language is identified by the file's content, not by its file name extension.
CONFIGURATION FILES

SMI library configuration files, read at initialization and on demand by smiReadConfig(), have a simple line-oriented syntax. Empty lines and those starting with `#' are ignored. Other lines start with an optional tag (followed by a colon), then a command and options dependent on the command. Tags are used to limit the scope of a command to those applications that are using this tag. The load command is used to preload a given MIB module. If multiple modules shall be preloaded, multiple load commands must be used. The path command allows to prepend or append components to the MIB module search path or to modify it completely (see also MODULE LOCATIONS above). The cache command allows to add an additional directory for MIB module lookup as a last resort. The first argument specifies the directory, and the rest of the line starting from the second argument specifies the caching method, which is invoked with the MIB module name appended if the module is found neither in one of the regular directories nor in the cache directory beforehand. The level command sets the error level. The hide command allows to tune the list of errors that are reported. It raises all errors with names prefixed by the given pattern to severity level 9. [Currently, there is no way to list the error names. RTFS: error.c.]
Example configuration:

#
# $HOME/.smirc
#

# add a private directory
path :/usr/home/strauss/lib/mibs

# don't show any errors by default
level 0

# preload some basic modules
load SNMPv2-SMI
load SNMPv2-TC
load SNMPv2-CONF

# want to make smilint shout
smilint: level 8

# but please don't complain about
# any names longer than 32 chars
smilint: hide namelength-32

tcpdump: load DISMAN-SCRIPT-MIB

smiquery: load IF-MIB
smiquery: load DISMAN-SCRIPT-MIB

FILES

/etc/smi.conf            global configuration file
$HOME/.smirc             user configuration file
${prefix}/include/smi.h  SMI library header file
/usr/share/mibs/         SMI module repository directory

AUTHOR

(C) 1999-2001 Frank Strauss, TU Braunschweig, Germany <[email protected]>
Today's article tells the story and process of tracking down a bug. I have always believed that where something behaves strangely, there is a demon behind it - and so it is with bugs in a program. I hope that through this bug-hunting story we can not only pick up a series of knowledge points, but also learn something about how to analyse problems and how to work more professionally. The approach and the way of thinking matter more than the technology itself. Let's go!

The cause of the story

Not long after I took over a new project in a new team, a colleague kindly reminded me during a release: when releasing the xx system, comment out one line of code in the test environment, and restore it when releasing to production.

Hearing this friendly reminder, I was surprised: what kind of black magic is this?! In my experience, no system should need to be handled this way. I secretly resolved to get to the bottom of it. When I finally found the time, I spent more than half a day on it on Friday without success, kept thinking about it at the weekend, and ended up working overtime to crack the problem.

Where the bug lives and runs

The project is JSP-based, without front-end/back-end separation. A shared head.jsp is included in every JSP page, and it contains this line of code and comment:

<!-- Solve the online HTTPS browser spinning problem. The test environment should comment out the following line -->
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests" />

What my colleague was kindly reminding me about was the operation on this comment: in the test environment the line is commented out (otherwise pages cannot be accessed), and in production it has to be uncommented, otherwise pages cannot be accessed either (they just keep spinning). According to the comment, it is probably there to solve some HTTPS-related problem. So what is the real reason? Is there a simpler way? "This is just how we do it" - nobody was looking for the root of the problem, and nobody could give an answer.
I could only look for it myself.

HTTP requests inside HTTPS

Let's first look at what this META element is for. The http-equiv value "Content-Security-Policy" specifies a web page security policy, abbreviated CSP, which is often used to prevent XSS attacks. The usual way to use it is to define it in HTML through a meta tag:

<meta http-equiv="Content-Security-Policy" content="policy">

Various security restriction policies can be specified in content. The upgrade-insecure-requests directive used in the project is one of them. Its effect is to automatically upgrade every HTTP link used to load external resources on the page to the HTTPS protocol.

Now I understood a little: this line of code was written to forcibly convert HTTP requests into HTTPS requests. But normally, as long as HTTP-to-HTTPS redirection is configured in Nginx or on the SLB, such problems should not occur - and the system did have that configuration. So I ran an experiment on another online service and commented the line out. Some pages really did spin forever; my colleague had not been exaggerating!

Why HTTP requests are not allowed inside HTTPS

Viewing the requests in the browser showed that the spinning was caused by the following error:

Mixed Content: The page at ' was loaded over HTTPS, but requested an insecure stylesheet ' This request has been blocked; the content must be served over HTTPS.

Mixed Content is exactly what it sounds like. It usually occurs in the following situation: the initial HTML is loaded over HTTPS, but other resources (CSS styles, JS, images, and so on) are loaded over insecure HTTP requests. The same page then mixes HTTP and HTTPS content, and the HTTP parts lower the security of the whole page. Modern browsers therefore warn about HTTP requests inside HTTPS pages, block the requests, and report the error above.
Now the cause of the problem was basically clear: HTTP requests were appearing inside HTTPS pages. There are several ways to solve this:

- Scheme 1: add a meta tag in the HTML to force HTTP requests to be upgraded to HTTPS. This is the approach described above, and its drawback is obvious: in a test environment without HTTPS it has to be commented out by hand, or the pages cannot be accessed.
- Scheme 2: convert HTTP requests into HTTPS requests through the Nginx or SLB configuration.
- Scheme 3: the clumsiest method - find every HTTP request in the project and fix them one by one.

First round of changes, partially effective

Scheme 1, the one currently in use, obviously did not meet the requirements, while scheme 2 was already configured - yet some pages still failed. So, was there another explanation?

After a lot of investigation, it turned out that the culprit was the project's heavy use of redirect jumps:

@RequestMapping(value = "delete")
public String delete(RedirectAttributes redirectAttributes) {
    // .. do something
    addMessage(redirectAttributes, "delete xxx success");
    return "redirect:" + Global.getAdminPath() + "/list";
}

In an HTTPS environment, this kind of redirect falls back to the HTTP protocol, making the page inaccessible. Too bad - and no wonder: HTTP-to-HTTPS conversion was configured, yet some pages still failed.

The root cause of this problem is Spring ViewResolver's compatibility with the HTTP 1.0 protocol, and it can be solved by switching that compatibility off. There are two concrete ways to do so.

Scheme 1: replace the "redirect:" prefix with an explicit RedirectView:

modelAndView.setView(new RedirectView(Global.getAdminPath() + "/list", true, false));

The last parameter of RedirectView is set to false, which turns off the HTTP 1.0 compatibility switch.
Scheme 2: configure the redirectHttp10Compatible property of Spring's ViewResolver, which turns the behaviour off globally:

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
    <property name="prefix" value="/" />
    <property name="suffix" value=".jsp" />
    <property name="redirectHttp10Compatible" value="false" />
</bean>

Because redirects are used everywhere in the project, the second scheme was adopted. After the change, most of the problems were gone. To make sure nothing was missed, I clicked through more pages - and sure enough, a few fish had slipped through the net!

The Shiro interceptor strikes again

The problem caused by redirection was solved, and I thought everything was fine. Then similar problems appeared on pages redirected by Shiro. The reason is simple: permission verification for some pages goes through Shiro, and Shiro converts the HTTPS request into an HTTP request during its own redirect. So why doesn't setting redirectHttp10Compatible to false at the view layer help here?
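Why does the compatibility flag matter at all? Roughly speaking, in HTTP 1.0-compatible mode the redirect goes through the container's sendRedirect(), which expands the target into an absolute URL using the scheme of the *incoming* request - and behind a TLS-terminating proxy or SLB, that scheme is plain http. With compatibility off, a relative Location is sent as-is, so the browser resolves it against its own (https) origin. The sketch below only imitates these two behaviours; the class and helper names are mine, not Spring's:

```java
public class RedirectLocation {

    // Imitation of a container's sendRedirect(): resolve the target into an
    // absolute URL from the incoming request's scheme and host.
    static String http10CompatibleLocation(String scheme, String host, String target) {
        return scheme + "://" + host + target;
    }

    // Imitation of redirectHttp10Compatible=false: send the location as given;
    // the browser resolves a relative location against the page's own scheme.
    static String http11Location(String target) {
        return target;
    }

    public static void main(String[] args) {
        // Behind a TLS-terminating proxy the application sees scheme "http",
        // even though the browser is actually on https.
        System.out.println(http10CompatibleLocation("http", "example.com", "/a/list"));
        // -> http://example.com/a/list  (browser follows it over plain HTTP)
        System.out.println(http11Location("/a/list"));
        // -> /a/list  (browser keeps its current https scheme)
    }
}
```

This is why the absolute, scheme-carrying form breaks pages behind HTTPS offloading, while the relative form does not.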
Tracing the code in the Shiro interceptor shows that Shiro hard-codes http10Compatible to true inside the interceptor - another pit. Looking at the source, Shiro's login filter FormAuthenticationFilter calls the saveRequestAndRedirectToLogin method:

protected void saveRequestAndRedirectToLogin(ServletRequest request, ServletResponse response) throws IOException {
    saveRequest(request);
    redirectToLogin(request, response);
}

// which calls the redirectToLogin method
protected void redirectToLogin(ServletRequest request, ServletResponse response) throws IOException {
    String loginUrl = getLoginUrl();
    WebUtils.issueRedirect(request, response, loginUrl);
}

// which delegates to WebUtils.issueRedirect
public static void issueRedirect(ServletRequest request, ServletResponse response, String url) throws IOException {
    issueRedirect(request, response, url, (Map) null, true, true);
}

// which ends in the overloaded WebUtils.issueRedirect
public static void issueRedirect(ServletRequest request, ServletResponse response, String url,
        Map queryParams, boolean contextRelative, boolean http10Compatible) throws IOException {
    RedirectView view = new RedirectView(url, contextRelative, http10Compatible);
    view.renderMergedOutputModel(queryParams, toHttp(request), toHttp(response));
}

The trace shows that issueRedirect ends up being called with the http10Compatible parameter fixed to true. Once the root of the problem is found, it is easy to solve.
Rewrite the FormAuthenticationFilter interceptor:

public class CustomFormAuthenticationFilter extends FormAuthenticationFilter {

    @Override
    protected boolean onAccessDenied(ServletRequest request, ServletResponse response) throws Exception {
        if (isLoginRequest(request, response)) {
            if (isLoginSubmission(request, response)) {
                return executeLogin(request, response);
            } else {
                return true;
            }
        } else {
            saveRequestAndRedirectToLogin(request, response);
            return false;
        }
    }

    protected void saveRequestAndRedirectToLogin(ServletRequest request, ServletResponse response) throws IOException {
        saveRequest(request);
        redirectToLogin(request, response);
    }

    protected void redirectToLogin(ServletRequest request, ServletResponse response) throws IOException {
        String loginUrl = getLoginUrl();
        WebUtils.issueRedirect(request, response, loginUrl, null, true, false);
    }
}

In this version, the WebUtils.issueRedirect call reached from onAccessDenied passes false for http10Compatible. The above is only an example: in practice the success and failure pages are affected too, so the corresponding methods need to be re-implemented as well.

Finally, configure the custom interceptor in shiroFilter:

<!-- Custom login filter -->
<bean id="customFilter" class="com.senzhuang.shiro.CustomFormAuthenticationFilter" />

<bean id="shiroFilter" class="org.apache.shiro.spring.web.ShiroFilterFactoryBean">
    <property name="securityManager" ref="securityManager" />
    <property name="loginUrl" value="/login.html"></property>
    <property name="unauthorizedUrl" value="/refuse.html"></property>
    <property name="filters">
        <map>
            <entry key="authc" value-ref="customFilter" />
        </map>
    </property>
</bean>

After this change, the HTTP-in-HTTPS problem seemed solved. To prevent omissions I again clicked through the pages one by one - and hit trouble again! How many more pits are waiting for me?

The LayUI pit

I thought that after solving the problems above, everything was done and I could have a barbecue to celebrate.
Instead, a similar error appeared on a front-end page - and this time the error came from a request to the login page path:

Strange: login had already succeeded, so why would a business page request the login page again? Moreover, the jump was again an HTTP request, not HTTPS.

Check the login request results:

After checking the relevant business code, it was clear that no further login request is issued once login completes. So why was login being requested again? Was access to some resource restricted, causing a redirect to the login page?

So, check the "Initiator" of the call in the browser:

It turned out that the login request was triggered when LayUI requested its layer.css resource. The first thought was that Shiro had not whitelisted the static resources, so the layui path was released in Shiro's interception rules - but the problem remained. The next check showed that the layer.css file was not explicitly included on the page, so it was included explicitly - but the problem still remained.

With no choice left, I read layui.js to see why the request was being made, and noticed the word "undefined" in the request path. Anyone who has used JS knows that undefined is the default value of an uninitialized variable in JS, similar to null in Java.

Searching for "css/" in layui.js turns up exactly this code:

return layui.link(o.dir + "css/" + e, t, n)

Here the value of o.dir is undefined, which becomes the string "undefined" when concatenated with the "css/" that follows. That path does not exist, and no permission is configured for it in Shiro, so by default the request is sent to the login page. It is an internal asynchronous redirect that is never rendered on the page; it can only be found in the browser's error information. Once the cause of the problem is found, the fix is simple.
Modify the layui.link call in layui.js:

// Comment out
// return layui.link(o.dir + "css/" + e, t, n)

// Change to
return layui.link((o.dir ? o.dir : "/static/sc_layui/") + "css/" + e, t, n)

The basic idea of the change: if o.dir has a value (truthy in JavaScript), use the value of o.dir; if o.dir is undefined, fall back to a specified default. Here "/static/sc_layui/" is the path where the layui components are stored in this project. Since layui.js is usually shipped minified, you can locate the code by searching for "css/" or "layui.link".

Restart the project, clear the browser cache, and visit the pages again. The problem is completely solved.

You can eat kebabs at ease

It took another half day of the weekend, but the problem is finally solved. Now I can have that kebab to celebrate. Finally, let's review the process and what came out of it:

- The problem: the code had to be changed manually between environments (HTTP and HTTPS);
- Finding the cause: for security reasons, HTTP requests are not allowed inside an HTTPS page;
- Solving it: the issue can be handled in two ways;
- The Shiro side: Shiro's default Filter behavior triggers the issue, so the Filter is rewritten to change it;
- The LayUI side: a bug in the LayUI code causes an HTTP (login) request to be initiated; fix that bug.

In this process, if you are content with the status quo and just "follow the rules", it not only costs time and effort; you also never learn why you have to do it every time you go online.
But if you follow it all the way through, as the author did, you will pick up a whole series of knowledge:

- The CSP upgrade-insecure-requests directive for HTTP requests;
- Why an HTTP request cannot be initiated from an HTTPS page;
- Configuration of the Spring view resolver;
- Disadvantages of returning views via redirect;
- How to convert HTTP requests into HTTPS requests in Nginx;
- The Mixed Content concept and the errors it produces for HTTP requests;
- Differences between the HTTP 1.0, HTTP 1.1 and HTTP 2.0 protocols;
- Writing a custom Filter for the Shiro interceptor;
- Filtering access to specific URLs with the Shiro interceptor;
- Shiro interceptor configuration and part of its source code implementation;
- A bug in LayUI;
- Other techniques used or learned while troubleshooting the problem.

Have you learned these techniques? Have you picked up the ideas and ways of solving problems? If even a little of this article inspires you, I won't hesitate to share it, and don't be stingy with a like.

About the blogger: author of the technical book "SpringBoot Technology Insider", who loves to study technology and write practical technical articles. Official account: "New Horizon of Programs"; you are welcome to follow. For technical exchange, contact the blogger on WeChat: zhuan2quan
https://www.fatalerrors.org/a/19t11zs.html
Agenda: We will learn how to create an Advanced Data Store Object (ADSO) through Eclipse-based modeling.

The next generation of Data Store Objects simplifies data modeling in SAP BW (Eclipse-based modeling experience):

- Consolidates Data Store Objects and InfoCubes
- Combines InfoObject- and field-based modeling
- New, intuitive and modern modeling UI

Optimized Data Store services:

- High-frequency data loads, based on optimized request management
- Change of usage scenario without deletion of data
- Up to 120 key fields
- Optional SID entries for query performance optimization

Step by Step

1) You are in the BW modeling tools. In the context menu for your BW project, choose New > Data Store Object (advanced).

2) In the dialog box for creating Data Store Objects (advanced), assign an InfoArea to your advanced Data Store Object and give it a name and description. The name can be between three and nine characters long. If you have a namespace for the Data Store Object (advanced), the name can only be eight characters long. You can select an object template or a template for the data warehousing layer. More information: Templates for Modeling the Data Warehousing Layers.

3) Select a template (DSO) for the data warehousing layer.

4) On the Overview tab, the modeling properties let you control how you use your ADSO.

5) On the Details tab you can add InfoObjects or fields.

6) The Partition tab is for performance optimization. This is an expert setting; usually there is no need to change it, and the system default should be sufficient. You can create partitions for key fields.

7) The Index tab is for creating an index on the active table. If no active table is found, the index is created on the inbound table.

8) Activate the ADSO from the BW Modeling Tools (Eclipse-based).

9) After creating and configuring the ADSO, it also becomes visible in the workbench. Check it under the BW InfoArea in transaction RSA11.
10) Create a transformation in BW; here I have selected a classic DSO as the source. After activating the transformation, execute the DTP.

11) Check the total records loaded into the ADSO.

12) Go to Manage on the ADSO and execute.

Usage recommendation for the advanced DSO:

- Model the persistency of new end-to-end scenarios (staging and reporting) with the advanced DSO.
- Do not massively convert existing scenarios; conversion tools and support are planned to be shipped with later SPs.

Comments:

Hi Promod, one quick question: are there any limitations on the number of fields/InfoObjects for key fields and data fields?
Regards, Krishna Chaitanya

120 key fields is a major improvement in the ADSO compared to a classic DSO, which had a maximum of 16 key fields.
Cheers, DR

Hi Daniel, is there any length limitation on the key? Does selective deletion work smoothly? 🙂 I am also wondering about the ADSO and SIDs. Generally SIDs improve reporting performance; nothing changed here, right?
Regards, Dominik
https://blogs.sap.com/2015/05/07/step-by-step-procedure-for-advanced-dso-creation/
Bug Description

The HTML5 spec [1] now contains a proposal for SVG-in-HTML (i.e. text/html serialization). This includes things like:

* unquoted attributes
* case-insensitive element and attribute names
* missing tags are implied (but are syntax errors)
* no requirement for namespace declarations

For example, the following document would work (but is not valid) for SVG-in-HTML:

<html><body>
<p>Hello, World!
<Svg>
<CIRCLE cx=50 cy=50 r=30 fill=blue />
<rEcT x="50" y=50.0 width=10.0E+1 height=50. fill=#0f0>
<p>Goodbye, cruel World!
</body></html>

It should be possible to copy from "<Svg" to the end of the rect element ("#0f0>"), paste it into a text document, and bring that file up in Inkscape. Of course, the DOM serialization, and saving the SVG document in Inkscape, would produce the following SVG XML document:

<svg xmlns="http://
  <circle cx="50" cy="50" r="30" fill="blue" />
  <rect x="50" y="50.0" width="10.0E+1" height="50." fill="#0f0" />
</svg>

Here is an HTML5 parser library: http://

[1] HTML spec: http://
https://bugs.launchpad.net/inkscape/+bug/367997
Subclassed QPlainTextEdit, how can I call it in another class?

This is my main that creates the RepairDevices window:

#include "repairdevices.h"
#include <QtWidgets>

int main(int argc, char *argv[])
{
    Q_INIT_RESOURCE(books);
    QApplication app(argc, argv);
    RepairDevices win;
    win.show();
    return app.exec();
}

RepairDevices.cpp has the slot setDirty:

public slots:
    void setDirty(QString text);

I have subclassed QPlainTextEdit into a class named notifierText. I have promoted a QPlainTextEdit named sn_txt, which is in my form created with Qt Designer, to this class. This is my class' source:

notifierText::notifierText(QWidget *parent) : QPlainTextEdit(parent)
{
    QString thistext = this->toPlainText();
    connect(this, SIGNAL(textChanged(QString)), parent, SLOT(setDirty(thistext)));
}

Obviously, the 'parent, SLOT(...' part is wrong. How can I call the setDirty function in RepairDevices.cpp?

Reply: It's RepairDevices that has to connect to notifierText, not the other way around.

I did it like this in the RepairDevices constructor:

connect(ui.sn_txt, SIGNAL(textChanged(QString)), this, SLOT(setDirty(QString)));

But I get a warning:

QObject::connect: No such signal QPlainTextEdit::textChanged(QString)

And it's not working. What am I doing wrong?

Hi mrjj. I demoted it back to QPlainTextEdit; I don't use the notifierText class anymore.

@Panoss Hi :) OK, so sn_txt is now a 100% normal QPlainTextEdit and not your class? Did you clean and rebuild all after changing it back?

Yes, it's a pure QPlainTextEdit. I cleaned and rebuilt, but the same thing happens. This is the connection in RepairDevices' constructor:

connect(ui.sn_txt, SIGNAL(textChanged(QString)), this, SLOT(setDirty(QString)));

This is the declaration of the function:

public slots:
    void setDirty(QString txt);

This is the function:

void RepairDevices::setDirty(QString txt)
{
    qDebug() << "text=" << txt;
}

When I change the connection to textChanged(), it works. But then I don't have the text of sn_txt.
And I see here: void QPlainTextEdit::textChanged()

- mrjj, Qt Champions 2016

@Panoss said in "Subclassed QPlainTextEdit, how can call in other class?": textChanged. I don't think it has a version where it emits the whole text. QLineEdit has one. So you must use QPlainTextEdit::textChanged() and:

void RepairDevices::setDirty()
{
    qDebug() << "text=" << ui->plaintext->toPlainText();
}

The reason I subclassed it was that I wanted the class itself to connect to RepairDevices's slot (that is, for the connect call to live in the class). This is not possible, so it's cancelled. Thank you for your help.

Well, the normal signal does not carry a QString, but in a subclass you could easily add a new signal that does, emit it in the subclass, and it would work the way you wanted.
https://forum.qt.io/topic/76310/subclassed-qplaintextedit-how-can-call-in-other-class
This class represents a window of the GuiSys. More...

#include "Window.hpp"

This class represents a window of the GuiSys. A WindowT is the most basic element of a GUI, and all other windows are derived and/or combined from it.

WindowT instances can be created in C++ code or in Lua scripts, using the gui:new() function. They can be passed from C++ code to Lua and from Lua to C++ code at will. In C++ code, all WindowT instances are kept in IntrusivePtrT's. Their lifetime is properly managed: a window is deleted automatically when it is no longer used in Lua and in C++. That is, code like

Example 1:
    w=gui:new("WindowT");
    gui:SetRootWindow(w);
    w=nil;

Example 2:
    w=gui:new("WindowT");
    w:AddChild(gui:new("WindowT"));

works as expected. See the cf::ScriptBinderT class for technical and implementation details.

The normal constructor.

The copy constructor. Copies a window (optionally with all of its children recursively). The parent of the copy is always NULL, and it is up to the caller to put the copy into a window hierarchy.

The virtual destructor. Deletes this window and all its children.

Adds the given window to the children of this window, and sets this window as the parent of the child. This method also makes sure that the name of the Child is unique among its siblings, modifying it as necessary. See SetName() for more details.

Adds the given component to this window. Returns true on success, false on failure (if Comp is part of a window already).

Calls the Lua method with name MethodName of this window. This method is analogous to GuiI::CallLuaFunc(); see there for more details.

The virtual copy constructor. Callers can use this method to create a copy of this window without knowing its concrete type. Overrides in derived classes use a covariant return type to facilitate use when the concrete type is known.

Deletes the component at the given index from this window.
Finds the topmost window that contains the point Pos in the hierarchy tree of this window (with Pos being in (absolute) screen coordinates, not relative to this window). Use GetRoot()->Find(Pos) in order to search the entire GUI for the window containing the point Pos.

Returns the position of the upper left corner of this window in absolute (vs. relative to the parent) coordinates.

Returns the "Basics" component of this window. The "Basics" component defines the name and the "show" flag of the window.

Returns the immediate children of this window. This is analogous to calling GetChildren(Chld, false) with an initially empty Chld array.

Returns the children of this window.

Returns the (n-th) component of the given (type) name. Covers the "custom" components as well as the application components, "Basics" and "Transform". That is, GetComponent("Basics") == GetBasics() and GetComponent("Transform") == GetTransform().

Returns the components that this window is composed of. Only the "custom" components are returned; this does not include the application component, "Basics" or "Transform".

Returns the parent window of this window.

Returns the top-most parent of this window, that is, the root of the hierarchy this window is in.

Returns the "Transform" component of this window. The "Transform" component defines the position, size and orientation of the window.

The clock-tick event handler.

Keyboard input event handler.

Mouse input event handler.

Removes the given window from the children of this window.

Renders this window. Note that this method does not set up any of the MatSys's model, view or projection matrices: it's up to the caller to do that!

List of methods registered with Lua.
https://api.cafu.de/c++/classcf_1_1GuiSys_1_1WindowT.html
Building Custom Content Types with ArgoUML and ArchGenXML and Permitting Anonymous Content Submission

A howto on creating a custom content type with ArgoUML and ArchGenXML, and allowing anonymous submission of that content.

Overview and Prerequisites

Overview of what we're trying to accomplish, and prerequisites for development.

Preamble

While I've found good documentation on many different aspects of Plone customization and development, I found it difficult to bring the information from those sources together into something I could use to accomplish a customization goal. This documentation is based on my first pass at a custom content type and my first major Plone customization. As such, I'm sure it's not done in an optimal manner, and probably won't work for you at all. In fact, I have no idea what I'm doing whatsoever. So, consult this information at your own risk. Also, if you know a better approach to the things I'm doing, please post a comment so that I can learn from that as well.

Overview

Up until this point, my organization had simply used Plone for storing uploaded files on the intranet: PDFs, Office documents, etc. The greatest selling points for us on this tool were its ease of use and fantastic search capabilities. Users had broad access to upload and modify files, and workflow was largely ignored.

We needed a business suggestion submittal system that would take anonymous submissions from intranet users and allow those submissions to go through an approval/rejection process that was visible to the end users. We wanted to be able to add custom content via a form, and have that data change permissions as it traveled through a proper workflow. We also wanted to allow anonymous intranet users to make these submissions, and to have visibility into the documents as they progressed through the workflow.

I started by building an Archetype by hand, which worked.
But, as I was/am new to Plone/Zope, it was a little hard to get started, and I didn't feel like I was making good progress. Then in my research, I found a better way. I could design my data form and my desired workflow in UML and have that converted to the base Plone product. Once that was customized and the correct permissions defined, I could finish my project in a very short period of time.

Prerequisites

Most of the tutorials I found were not targeted at the complete novice, so I'll try to explain in as much detail as possible. That being said, setup and installation are beyond the scope of this document. You will need the following software, though:

- ArgoUML - Be sure to install the ArchGenXML profile listed at the bottom of the ArgoUML+ArchGenXML tutorial.
- ArchGenXML
- Plone 2.1+
- Python

My development environment is Plone 2.5 running on Windows XP, though this process should work regardless of your environment. The code I developed in this environment was deployed to a Plone 2.1 instance running on Red Hat Enterprise Linux 3.

Creating a Class and Workflow with ArgoUML and ArchGenXML

Describes creating an Archetype class and a custom Plone workflow with ArgoUML.

ArgoUML

ArgoUML is a diagramming tool similar to Visio, but better because it's Open Source and cross-platform. It's a Java thick-client application, so you'll need a Java Runtime Environment before you can run it. Again, I won't cover installation, but once you get it all installed and the ArchGenXML profile configured, fire it up.

Creating the Class

The class is the business object that will hold all the information that we'll be collecting from the users. We define what kind of data we want it to hold, put in a few configuration parameters, and ArchGenXML and the Archetypes framework will create the forms for us. Hurray for Archetypes!

- First we're gonna make a package to store all of our classes in.
In the example, it doesn't make much of a difference, but it's probably a good habit to have, so click on the New Package icon at the top of the main diagram area and then click in the diagram area.
- Stretch the corner out so it takes up most of the screen. At the bottom of the screen, in the Properties tab, give your package a name like "ProcessImprovement".
- Now click on the New Class icon at the top, next to the New Package icon, and click inside of your package. You've just created a class. Give it a name like "ImprovementSuggestion".
- Now we'll start defining some data types. ArchGenXML supports a whole series of Archetypes predefined data types and their associated widgets (quick reference). We're going to keep this real simple and use just three fields:
  - Name (String): To hold the user's name. Since users need not be logged in in our scenario, this field will allow them to manually enter their name.
  - Suggestion (Text): This will be a basic text area for them to enter their suggestion. If you wanted to get fancy you could use the Rich type and allow HTML input via the fancy Kupu editor.
  - ImprovementArea (Selection): This will be a selection list of possible choices. It will allow the users to select the area the improvement will impact from a list.
- Next we need to put some additional information into the class diagram so that ArchGenXML knows how to display the fields, plus the additional options we have available to us. We do that by giving each field tagged values. Click on the Name field once to select it, and then click on the "Tagged Values" tab in the bottom pane. This is where we're going to put our widget and field configuration. Enter the following Tagged Values in the Name data element:
  - widget:label = Your Name
    This is what the form field will be labeled. If it is not specified, the label will default to the name of that field.
  - widget:description = Enter your name, or 'Anonymous'
- widget:description = Enter your name, or 'Anonymous' Suggestion field and enter the following Tagged Values: - widget:label = Your Suggestion This is what the form field will be labeled. If it is not specified, the label will default to the name of that field. - widget:description = Enter your improvement suggestion with as much detail as possible. ImprovementArea field and enter the following Tagged Values: - widget:label = Improvement Area This is what the form field will be labeled. If it is not specified, the label will default to the name of that field. - widget:description = Enter the area that your suggestion will affect.. - vocabulary = python:["Sales", "Help Desk", "Factory Floow", "Other (please specify above)"] vocabulary take in a python list, and will make a select list out of it. It will automatically determine whether it should be a drop down list or a series of radio buttons, depending on how many entries you have. - One last thing before we generate our initial Zope Product. In the navigation tree in the left hand pane, click on "untitledModel", and give it a name in the Properties pane on the bottom. Name it "ProcessImprovement". - ArchGenXML 1.4.0-beta2 (c) 2003-2005 BlueDynamics KEG, under GNU General Public License 2.0 or later INFO Parsing... WARNING Empty package: '.:0000000000001DE2'. WARNING Empty package: '.:0000000000001DE1DE2'. INFO Generating package 'ProcessImprovement'. INFO Generating class 'ImprovementSuggestion'. This will have generated a complete Plone Product called "ProcessImprovement". - Copy the entire "ProcessImprovement" directory tree into your Plone Products directory. Restart your Plone service, log in as a user with administrative priviliges, go to the site preferences and click on "Add/Remove Products". Check the checkbox next to the new Product and click "install". Congratulation! You've just built your first custom Plone product! 
- You should now be able to navigate to any folder in the Plone website and add an "improvementsuggestion" content type. It should look something like this:
- Feel free to go back and add additional content types. There are some really cool ones like "File". Also toy with the available options for each. Be sure to poke around the generated code. We'll be digging into that later on, so get familiar with it now.

Creating the Workflow

The workflow is the path through which the class we just defined travels. In our case there will be two paths, one of approval and one of rejection. First we'll draw out the states, then we'll define how one gets from one to the next, and finally we'll fill in some detail on the process. We'll be working with a statechart diagram. If you're unfamiliar with what one of those is, there are some good basics on statechart diagrams here.

- Go back to ArgoUML and the class that we just completed. Click on the ImprovementSuggestion class, and then select Create->New Statechart Diagram from the top menu. Click on "(anon StateMachine)" in the left nav tree and give the workflow a name in the Properties tab at the bottom. Name it "ImprovementSuggestionWorkflow".
- In the main chart area, add a new Initial state. It looks like a solid black dot. Drop it on the left-hand side of the workspace, and give it a name (in the Properties tab) of "Creating".
- Next add a new Simple State (it looks like a rounded box with a horizontal line in it) to the right of the Initial. Name this one "Submitted".
- Above and to the right of Submitted, add another Simple State called "Approved".
- To the right of "Approved", add a Final State (a circle with a black dot inside of it) called "Complete".
- Below "Approved", add a Final State called "Rejected".
- Now click on the Initial and drag a connecting line from it to "Submitted". Connect Submitted to both "Approved" and "Rejected". And finally connect "Approved" to "Complete".
The completed diagram should look something like this:

Business types LOVE this kinda stuff. So take a screenshot of this workflow, throw some labels on it and send it off to them right away. CC your boss, and then follow up and ask for a raise. You deserve it!

- Now we're going to go back and name our state transitions (the connecting lines). Click on each transition and set its name in the Properties tab at the bottom of the screen. The transition going into Submitted should be named "Submit", the transition going into Approved should be called "Approve", the transition going into Rejected should be called "Reject", and the transition going into Completed should be called "Complete".
- We need to define what kinds of users can trigger the different state transitions. We want anonymous users to be able to Create (an initial state) and Submit (a state transition) requests, but we do not want them approving. We want Managers and Reviewers to be able to approve/reject/complete the Suggestions. But we want anonymous users to be able to view the requests all the way through the process. To accomplish this, we'll start by setting the permissions on all the states. Click on the Submitted state, and go to the Tagged Values tab. We're going to add a tagged value for each permission on that state and the groups that should have that access. Add the following tags:
  - view = Anonymous
  - list = Anonymous
  - access = Anonymous
  - modify = Manager, Reviewer
- Go through and give the same Tagged Values to the Approved, Complete and Rejected states.
- Next we'll add "guards" to the state transitions, which define who can, and cannot, trigger those state changes. The first state transition (Submit) should be available to everyone, so we won't put a guard there, but we will on the rest. Click on the "Approve" state transition, and find the "Guard" form field in the Properties tab. Right-click in it and select "New" (your only option).
Give the new Guard a name of "Approve", and in the expression field (on the right) enter "guard_roles:Reviewer;Manager". This limits that transition's availability to those two groups.
- Do the same for the Reject and Complete state transitions. Permissions are now done.
- TODO: Define worklist, and provide clarification about how it functions and what it does.
- Your workflow is now complete! Run the same command line as in the generation step of the "Creating the Class" section above.

  C:\Sandbox\Plone\Tutorial>ArchGenXML.py ProcessImprovement.zargo
  ArchGenXML 1.4.0-beta2
  (c) 2003-2005 BlueDynamics KEG, under GNU General Public License 2.0 or later
  INFO Parsing...
  WARNING Empty package: '.:00000000000021E3'.
  WARNING Empty package: '.:00000000000021E21E3'.
  INFO Generating package 'ProcessImprovement'.
  INFO Generating class 'ImprovementSuggestion'.
  INFO Generating workflow 'ImprovementSuggestionWorkflow'.
  INFO Generating workflow script(s).

- Copy your directory over to your Plone Products directory again, restart Plone and go back to the Add/Remove Products section. The product is still installed, but will now need to be upgraded. Click the link next to your product to upgrade it.

Anonymous Content Submission

Allowing anonymous users to submit content while preventing them from later editing theirs or other users' submissions.

The Requirement

So far we've created a business object and defined the workflow through which it will operate. We could be done now, if all of our users had member accounts. But for the purposes of our business, we want anonymous users to be able to submit these suggestions. This is good because:

- Since this is a suggestion box, we want staff to be able to submit a suggestion under the guise of anonymous submission. In reality, I could track them down by IP address and such, but they don't know that.
- I don't want to have to maintain user accounts for 500 people. Yes, I know I could integrate into our MS AD, or some such nonsense, but this is easier.
Anonymous Content Addition

We don't want users to be able to add content everywhere, though. Just in one folder, so we'll start by creating that.

- Go into your Plone instance and create a folder off of the root called "Suggestions". And in that folder, create another folder named "Archive". We're going to be creating the actual improvementsuggestions in the "Archive" folder, and putting smart folders in the "Suggestions" folder, as you'll see later on.
- Now, go into the ZMI. Yes, I know it's scary. Yes, you've been told to stay away and set permissions in a different manner, but this is the way we're gonna do it.
- Browse to the /Suggestions/Archive folder and click on the Properties tab. You will see a huge grid of permissions. Do not be afraid. Scroll down until you see "Add portal content". Uncheck the "Acquire" box on the left, and check the "Anonymous" and "Authenticated" boxes on the right.
- Scroll down until you see "Modify portal content". Make sure "Acquire" is unchecked and check "Anonymous" and "Authenticated".
- Scroll down to the very bottom of the page and click "Save Changes".
- Now close your web browser completely (or pull up a different one. I use Firefox for dev, so I just pull up IE for this testing so I can keep my ZMI on screen), and go to your website. Do not log in. Browse to the /Suggestions/Archive folder. You should be able to add an "improvementsuggestion" type object.
- TODO: Define a custom permission for the ImprovementSuggestion type so that the containing folder can be controlled separately from the objects inside of it. Right now, with this configuration, the folder is editable by anonymous users, and this is undesirable.
- Great! Anonymous users can now add content, but there's a problem.

Preventing Anonymous Users from Editing

Now that anyone can add content, it's important to prevent those same anyones from being able to delete or modify that submitted content.
The way we will be handling this is to add a script that is executed automatically when the new content is saved, and that progresses it in its workflow. We will move from the "Creating" state to the "Submitted" state. If you remember (or click back in your browser), the Submitted state only allows modification of content by Managers and Reviewers, and not by anonymous users. We will be using a technique that I took from the PloneJobBoard project.

We're gonna get into the nitty gritty here. We'll be editing files with a text editor. One thing to know is that the changes we're making will be overwritten if you regenerate the project with ArchGenXML. So make sure you have your project checked into your CVS/Subversion system so you can diff the changes back in. Or, just don't regenerate after this point...

- Go into your /Products/ProcessImprovement/skins/ProcessImprovement folder and create the following two files. They comprise a script that will be executed automatically and propel the workflow.
/Products/ProcessImprovement/skins/ProcessImprovement/improvementsuggestion_post.cpy

## Script (Python) "improvementsuggestion_post"
##title=Post ImprovementSuggestion after validation
##bind container=container
##bind context=context
##bind namespace=
##bind script=script
##bind state=state
##bind subpath=traverse_subpath
##parameters=
##
from Products.CMFCore.utils import getToolByName
workflow = getToolByName(context, 'portal_workflow')
workflow.doActionFor(context, 'Submit')
return state.set(status = 'success', portal_status_message = 'Thank you.')

/Products/ProcessImprovement/skins/ProcessImprovement/improvementsuggestion_post.cpy.metadata

[default]
title = Submit a suggestion

[validators]
validators =

[actions]
action.success = redirect_to:string:../
action.failure = traverse_to:string:content_edit

- Open up the /Products/ProcessImprovement/Extensions/Install.py file and do a search for the following:

print >>out, 'no workflow install'

Immediately following that line, add the following:

controller = getToolByName(self, 'portal_form_controller')
addFormControllerAction(self, out, controller,
                        template = 'validate_integrity',
                        status = 'success',
                        contentType = 'ImprovementSuggestion',
                        button = '',
                        actionType = 'traverse_to',
                        action = 'string:improvementsuggestion_post')

And add this to the bottom of the file:

def addFormControllerAction(self, out, controller, template, status,
                            contentType, button, actionType, action):
    """Add the given action to the portalFormController"""
    controller.addFormAction(template, status, contentType, button, actionType, action)
    print >> out, "Added action %s to %s" % (action, template)

- Now try anonymously creating an ImprovementSuggestion document. You should be able to view it after creation, but not edit it, and its state should be "Submitted" instead of the default of "Creating". Log in as a Manager or Reviewer and you should be able to progress it down its workflow.

Conclusion and Summation

Final notes and a summation of this exercise.
http://plone.org/documentation/tutorial/anonymously-adding-custom-content-types-with-argouml-and-archgenxml/tutorial-all-pages
Enhanced For Loop

We will cover the usage of the enhanced for-loop in this section. It is also referred to as the for-each loop. The following code shows how to iterate over the elements in a Collection using the traditional Iterator interface.

ForEachTest.java

package foreach;

import java.util.*;

public class ForEachTest {

    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<Integer>();
        numbers.add(1);
        numbers.add(2);
        numbers.add(3);

        for (Iterator<Integer> numbersIterator = numbers.iterator(); numbersIterator.hasNext(); ) {
            System.out.println(numbersIterator.next());
        }
    }
}

Though the goal of the above program is simply to traverse the elements in the list, we are depending on the Iterator API to achieve it. A replacement for the Iterator is the enhanced for-loop, with which it is possible to write short, neat code like this:

for (Integer number : numbers) {
    System.out.println(number);
}

Let us see how the above syntax is used to traverse the collection. The expression before the ':' symbol is a declaration statement which declares an Integer variable called number for holding the individual element from the collection. The expression that follows the symbol must be a collection type whose elements can be iterated. Precisely, this type must implement the interface java.util.Iterable, which has a single method called iterator(). At compile time, the compiler translates the above code into something very similar to the following:

for (Iterator<Integer> intIterator = numbers.iterator(); intIterator.hasNext(); ) {
    System.out.println(intIterator.next());
}

It is even possible to apply this new syntax to arrays. So, the following code is perfectly legal and compiles fine.
int[] numbers = {10, 20, 30, 40, 50};
for (int num : numbers) {
    System.out.println(num);
}

The enhanced for-loop is just a convenience from the programmer's point of view for iterating over a Collection. But the usage of this loop has some disadvantages. The first of the two is that the step value cannot be changed: for example, it is not possible to traverse the first element and then the third element and so on. The second disadvantage is that backward traversal is not possible.

Variable Arguments

Among the new features added in Java 5.0, Variable Arguments (var-args) is one of the coolest. Before getting into this, let us see the reason for introducing such a feature. The following code snippet shows the addition of two numbers being done in the addTest() method.

VariableArgumentsTest.java

package varargs;

public class VariableArgumentsTest {

    static void addTest(String message, int a, int b) {
        int result = a + b;
        System.out.println(message + result);
    }

    public static void main(String[] args) {
        addTest("Addition Result is ", 10, 20);
    }
}

Consider that we want the addTest() method to add three numbers. We could introduce a third parameter called c in the method definition like this,

static void addTest(String message, int a, int b, int c) {
    int result = a + b + c;
    ...
}

or we could add an overloaded method defined with three parameters and keep the original implementation. If the number of parameter values to be passed to the addTest() method keeps increasing, then each change requires modifying the existing code, or possibly adding new overloaded methods. The way to get over this is to use variable arguments. Internally, a variable argument is maintained as an array that can hold zero, one or more arguments of the same type. The following code snippet shows the declaration of variable arguments.

static void addTest(String message, int... numbers) {
    int result = 0;
    for (int num : numbers) {
        result = result + num;
    }
    System.out.println(message + result);
}

Note that the type (int) is followed by three dots (often called an ellipsis) and then the variable argument name. With such a declaration in hand, it is now possible to invoke the method as shown below.

addTest("Addition Result", 10, 20);     -> addTest("Addition Result", new int[]{10, 20});
addTest("Addition Result", 10, 20, 30); -> addTest("Addition Result", new int[]{10, 20, 30});
addTest("Addition Result");             -> addTest("Addition Result", new int[]{});

All the above styles are allowed, as they are internally represented as arrays. There are two major constraints on the usage of variable arguments. The first is that a method is allowed to carry only one variable argument type. The other constraint is that a variable argument type can appear only as the last parameter in a method definition. The following shows legal and illegal usages of variable argument types.

static void test(int... a, String... b)  -> Invalid, more than one variable argument type.
static void test(int... a, int... b)     -> Invalid, more than one variable argument type.
static void test(int... a, int b, int c) -> Invalid, the variable argument type must be the last parameter.
static void test(int a, int... b, int c) -> Invalid, the variable argument type must be the last parameter.
static void test(int a, int... b)        -> This is legal.
static void test(int... a)               -> This is also legal.

Static Imports

Static imports are all about importing static entities – static classes, static methods and static fields – and accessing them directly in code.
For example, let us consider that you have a class like this,

StaticTest.java

package staticimport;

public class StaticTest {

    public static final Object STATIC_OBJECT = new Object();

    public static Object getStaticObject() {
        return STATIC_OBJECT;
    }

    public static class StaticClass {
    }
}

The above class is in a package called staticimport and it has a public static field called STATIC_OBJECT, a static method called getStaticObject() and a static inner class called StaticClass. Now let us see how to make use of static imports for accessing these static members directly. Consider the following client program,

StaticTestClient.java

package staticimport.client;

import static staticimport.StaticTest.StaticClass;
import static staticimport.StaticTest.getStaticObject;
import static staticimport.StaticTest.STATIC_OBJECT;

public class StaticTestClient {

    public static void main(String[] args) {
        System.out.println("Usage of a static class");
        StaticClass staticClassObject = new StaticClass();

        System.out.println("Usage of a static method");
        Object fromAStaticMethod = getStaticObject();

        System.out.println("Usage of a static object");
        Object someStaticObject = STATIC_OBJECT;
    }
}

If you note the syntax of static imports, the static keyword follows the import keyword. Only static classes, static methods and static fields can be imported in this manner. If you try to import a type that is not static, the compiler will complain that "The import cannot be resolved", though a more descriptive message telling you that the type you are trying to import is not static would have been better. The following example makes use of the APIs in the java packages. More specifically, it imports all the static members (fields, methods and static classes) defined in the System and Math classes.
StaticTest.java

package staticimport.client;

import static java.lang.Math.*;
import static java.lang.System.*;

public class StaticTest {

    public static void main(String[] args) {
        gc();
        out.println("Testing static object");
        out.println("max of 10 and 20 is " + max(10, 20));
    }
}

5) Enumerations

Let us conclude this article by having a look at Enumerations (or Enums). In Java, an Enum is a type, like a class or an interface. If classes are used to represent objects and interfaces to represent behavior, then Enumerations are used to define range-based constants. For example, consider the following class that will calculate the population of a country when given the country name.

PopulationCalculator.java

package enums;

public class PopulationCalculator {

    public static double calculatePopulation(String countryName) {
        double population = 0.00;
        if (countryName.equals("India")) {
            // Set Indian population here.
        } else if (countryName.equals("Australia")) {
            // Set population for Aussies here.
        } else if (countryName.equals("England")) {
            // Set population value for England here.
        }
        return population;
    }
}

The problem in the above code is with the input parameter countryName. Since the country name is represented as a string, care should be taken in the very first line to ascertain that such a country exists. What if the client passes something like "ABCDEF", an empty string ("") or even null for the country name? In all these cases, we end up with some kind of unexpected result at run-time. It would be better if the same error were caught at compile-time itself. Here comes the concept of an Enum, which is a type just like Object, String or anything else. Following is the declaration of an Enum called Country which takes the possible values India, Australia and England.

Country.java

package enums;

public enum Country {
    INDIA, AUSTRALIA, ENGLAND,
}

The keyword enum precedes the name of the Enum, in this case Country.
Note the declaration of the various country constants – INDIA, AUSTRALIA and ENGLAND – separated by commas. Internally, an Enum is represented by a class, and the constants within it are treated as public, static and final. So the above Country enum will internally look roughly like this,

Country.java

package enums;

public class Country extends java.lang.Enum {
    public static final Country INDIA = new Country();
    public static final Country AUSTRALIA = new Country();
    public static final Country ENGLAND = new Country();
}

Note that by default all Enum classes extend the java.lang.Enum class. The restriction is that they cannot be extended further, so Enums are implicitly treated as final classes, even though they can implement interfaces. Now, let us modify the above PopulationCalculator class to make use of the Country Enum instead of a string. Following is the code for the same.

PopulationCalculator.java

package enums;

public class PopulationCalculator {

    public static double calculatePopulation(Country country) {
        double population = 0.00;
        if (country.equals(Country.INDIA)) {
            // Set Indian population here.
        } else if (country.equals(Country.AUSTRALIA)) {
            // Set population for Aussies here.
        } else if (country.equals(Country.ENGLAND)) {
            // Set population value for England here.
        }
        return population;
    }
}

Now, since the calculatePopulation() method accepts the Enum type Country, the client can pass only one of the constants defined in the Enum, thereby ensuring type safety. Since all custom Enum classes extend the java.lang.Enum class, some useful methods like values() and ordinal() are automatically available to the Country Enum class. Have a look at the following sample class.
EnumTest.java

package enums;

public class EnumTest {

    public static void main(String[] args) {
        Country[] countries = Country.values();
        for (Country country : countries) {
            System.out.println(country.toString() + ": Ordinal Value -> " + country.ordinal());
        }
    }
}

The call to Country.values() will return an array of the Country constants defined within the Country Enum. The ordinal() method will return an index that tells the position of the enum constant.

Conclusion

This article covered some of the new features available in Java 5.0 – the enhanced for loop, variable arguments, static imports and enumerations – along with plenty of samples. Each of these new features finds its appropriate usage based on the requirements. Applications can make use of the new features in an optimal way and thereby ensure that the code is simple and efficient.
http://javabeat.net/new-features-in-java-5-0/
Created on 2013-03-17 16:08 by Ronny.Pfannschmidt, last changed 2019-10-28 17:25 by blueyed. examples that are found on a property dont detect the line number class example(object): @property def me(self): """ >>> 1/0 """ pass @Ronny can you provide a patch for this? this is my first contribution to Python core so I really have no idea how to do this, but I have found a solution (works in Py3.4, 2.7): in doctest.py after line 1087 ("lineno = getattr(obj, 'co_firstlineno', None)-1") add these lines: if lineno is None and isinstance(obj, property) and \ obj.fget is not None: obj = obj.fget.__code__ lineno = getattr(obj, 'co_firstlineno', None) # no need for -1 because of decorator. I can try to make a patch file for this, but just want to be sure I'm on the right track for contributing first. I know how to do a Git pull, but not hg/patch. (p.s., I think the current code "lineno = getattr(obj, 'co_firstlineno', None)-1" has an error; if the getattr does not find 'co_firstlineno', it will return None and then subtract 1 from None which is a TypeError). I took the ideas from @Michael.Cuthbert and wrote a proper test. It's my first patch so I hope everything's fine with it. If not I'm happy for feedback :) The test looks great to me. Does anyone on nosy know the proper way to request a patch review? Left a comment on Rietveld. I don't have time right now to check the test, but I suspect you tested it before submitting the patch, so it should probably be fine. Yes, I've tested it. looks like we're stuck on a style change (backslash to parens; ironic: I chose backslash to match surrounding code; I never use them myself). tuxtimo - could you fix this? (or I'll do once the move to github is done). Thanks! Here's a rather obscure bug that I was able to catch before we put this into action: doctests inside the __doc__ for namedtuples (and perhaps all namedtuples?) are instances of property, have .fget, but do not have .fget.__code__. 
Thus one more check is needed: if (lineno is None and isinstance(obj, property) and obj.fget is not None and hasattr(obj.fget, '__code__')): obj = obj.fget.__code__ lineno = getattr(obj, 'co_firstlineno', None) just poking to see if this patch is worth trying to get into 3.7 A pull request has been in for about a month -- is it possible to review or merge or comment? Thanks! The PR appears to need a better test according to.
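The whole thread hinges on the fact that a property object carries no co_firstlineno of its own, while the getter it wraps does. A minimal standalone check (independent of doctest internals) illustrating both that and the namedtuple-style guards the patch needs:

```python
class Example:
    @property
    def me(self):
        """
        >>> 1/0
        """
        pass

# The property object itself carries no line-number information...
prop = Example.__dict__['me']
print(getattr(prop, 'co_firstlineno', None))      # None

# ...but the wrapped getter's code object does, which is the fallback
# the proposed patch uses — guarded for fget being None and, per the
# namedtuple caveat above, for fget lacking a __code__ attribute.
if prop.fget is not None and hasattr(prop.fget, '__code__'):
    print(prop.fget.__code__.co_firstlineno > 0)  # True
```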
https://bugs.python.org/issue17446
When it comes to advanced topics in React, you must have heard about the HOC (Higher Order Component). An HOC is not that complex to learn, but avoid learning it from an overly complex first example. I'll try to make it as simple as possible.

First of all, what kind of problem does an HOC solve? Sometimes we have two different components which implement the same logic — for example, a likes component and a comments component that both implement a counter.

Now let's see the definition of an HOC as given on reactjs.org:

A higher-order component (HOC) is an advanced technique in React for reusing component logic. HOCs are not part of the React API, per se. They are a pattern that emerges from React's compositional nature.

As mentioned above, we can implement a component's logic once in a single HOC and then use it in the components that need it. Let's see how an HOC is a pattern that emerges from React's compositional nature and not a part of the React API.

import React, { Component } from "react";

const HOC = (Component, data) => {
  // You can modify the data here, then pass it to the component
  return class extends React.Component {
    render() {
      return <Component />;
    }
  };
};

export default HOC;

As you can see, this is one pattern for an HOC: it takes two arguments, the component to which we want to add logic and the data. We can modify this data and then pass it to the component. The HOC returns a React component — an enhanced version of the component passed in.

Let's try it with our likes and comments components; both of them use the same counter logic. Create a new file called Hoc.js and write the code below, which implements the counter logic.

Line no 3: we can pass the component and the data.
Line no 6: returns a React component.
Lines no 7 to 19: these lines implement the same logic we used for the counter.
Line no 25: here we pass the state of the counter.
Line no 26: here we pass a function to increment the counter state.

Now let's see how we can use this HOC. Below is the likes component.

Line no 8: displays the number of likes.
Line no 9: a button to increment the likes.
Line no 15: here we use the HOC. We pass our Likes component and the number 5. Why 5? Let's assume there are already 5 likes, so the counter logic starts from 5.
Line no 17: here we export the new enhanced likes component returned by the HOC.

In simple terms, the HOC took the LikesComponent and the data, then returned an enhanced LikesComponent by implementing the counter logic in it. We can do the same for the CommentsComponent.

Line 15: here we send different data (10 instead of 5).

Don't forget to use the enhanced version of the component that you exported in your component file, just like this:

import React from "react";
import "./App.css";
import EnhancedLikes from "./components/HOC/LikesCount";
import EnhancedComments from "./components/HOC/CommentsCount";

function App() {
  return (
    <div className="App">
      <EnhancedLikes />
      <EnhancedComments />
    </div>
  );
}

export default App;

After implementing this, you will see that we don't have to write the same logic in multiple components. There are many uses for HOCs: for example, if a user has already logged in and you want to check the user's login status in more than one component, or pass the user's data around, you can put that logic in an HOC and wrap it around those components.

You can find the full-code repository here.

Thanks for reading and supporting. 😄
Feel free to visit my YouTube channel: @CodeBucks
Follow me on Instagram where I'm sharing lots of useful resources!
https://dev.to/codebucks/what-is-higher-order-component-hoc-in-react-2e1p
Answered

Currently, PyCharm uses a # noinspection comment to suppress an inspection for the following line. For example:

# noinspection PyUnresolvedReferences
from base import *

In our team, not all developers use PyCharm as their IDE, and adding # noinspection comments to our code base is not allowed. It would be great if PyCharm understood # noqa as proposed and used by PEP 8 and Flake8. That's what the BDFL uses too, after all. Something like this:

from .base import *  # noqa: F403,F401

Does anybody know of a way to make PyCharm use and understand # noqa?

Please vote for it to increase its priority and be notified about updates.
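For reference, the two suppression styles sit in different places: flake8's noqa is a trailing comment on the offending line, while PyCharm's noinspection goes on the line above. A runnable sketch (using a stdlib star import purely as a stand-in for the poster's `base` module):

```python
# PyCharm-only suppression: the comment goes on the preceding line.
# noinspection PyUnresolvedReferences
from os.path import *

# flake8/pycodestyle suppression: trailing comment on the same line.
from os.path import *  # noqa: F403,F401

# Both imports are real; the star import still works normally.
print(join("a", "b"))
```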
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360000147090-PyCharm-PEP8-style-check-does-not-understand-noqa
jEdit version number: 5.0.0
BufferTabs plugin version number: 1.2.3
platform: Windows (Win 7)
Java version: 1.6.0_29

Currently the tabs are reordered without any reason (or is it a Java feature?).

Steps to reproduce:
1. Open files to fill one line of BufferTabs at the top of the screen.
2. Open one more file.

Expected result:
- the older files' tabs should remain in their places
- only the new file's tab should appear on the new tab line

Alan Ezust 2013-11-16
Under Global Options - View, do you have the option 'Sort buffer sets' un-checked? What bufferset scope do you use? Did you have multiple views or editpanes open?

Hi, maybe the title of the bug is not correct. There is no real reordering; the new tab (and some older existing tabs) are just moved to the next row. It does not matter if 'Sort buffer sets' is checked or un-checked, and the scope is also irrelevant. I did some experiments, and now I am sure that this is the default Java behavior (but why???). Try the code below. Start it and resize the window (decrease the width). When there is not enough space for all the tabs, not only the last tab will be moved to the next row, but more. In forums I have found some hints on how to solve this, but it is beyond my Java knowledge. I hope now it is clear what I mean. (Anyway, this is still really annoying.
When opening/closing tabs and this happens, I usually get lost and have to start finding the wanted tabs again.)

import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JTabbedPane;

public class TestTabbedPane extends JTabbedPane {

    public static void main(String[] args) {
        JFrame test = new JFrame("Tab test");
        test.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        test.setSize(600, 400);
        TestTabbedPane tabs = new TestTabbedPane();
        tabs.addTab("One 11111111", new JButton("One"));
        tabs.addTab("Two 2222", new JButton("Two"));
        tabs.addTab("Three 33333", new JButton("Three"));
        tabs.addTab("Four 4444444444", new JButton("Four"));
        tabs.addTab("Five 555", new JButton("Five"));
        test.add(tabs);
        test.setVisible(true);
    }
}

Matthieu Casanova 2013-11-18
It is a feature of tabs in Java; we have the choice to wrap tabs onto a new line or add scrolling when there are too many tabs.

Alan Ezust 2014-12-20
Closing, since this seems to be a complaint about how Java tabbed panes work in certain look and feels, rather than something that can be fixed in jEdit.
http://sourceforge.net/p/jedit/plugin-bugs/1728/
40+ Controls for Silverlight 1.1, available for free. It is called GOA WinForms, a free implementation of the System.Windows.Forms .NET library for Silverlight! Enjoy.

wow, very cool! -- is there a blog that tells the story behind these? -th

If this answered your question, please be sure to click the 'mark as answered' feature; otherwise please feel free to post follow-up questions that are related.

i am truly impressed. this is awesome stuff. do you also provide a layouting scheme, or is everything still based on absolute positioning?

coolio
No more secrets with Silverlight Spy!

heuertk: wow, very cool! -- is there a blog that tells the story behind these?
The framework was originally written for Flash and ported to Silverlight. We will write something that tells the full story.

bob, is there a dependency on the .net framework? it doesn't work on my mac -- and i can't see ways to interact with other root canvas objects...

I played with it a bit. I'm not sure what to make of it yet. It's a clever wrapper that emulates the System.Windows.Forms namespace. On the one hand, it wouldn't be hard to code a Windows Forms app and then port it to Silverlight. On the other hand, the additional overhead needed for emulation makes apps feel a bit sluggish when compared to "native" XAML controls. As far as XAML goes, the only XAML utilized by GOA ties a Silverlight Canvas to a Form object. It's neat and fits together, but for now, I'm going to wait and see what else comes along. After all, Silverlight is still just alpha. I'm sure there will be plenty of SDKs from which to choose.
P.S. The designer example (available online) is quite nifty - definitely a sign of Silverlight's potential.
Make love not war.

Of course there is no dependency on any external .net assemblies but those coming with Silverlight, plus Goa.Windows.Forms.dll.
Furthermore, since it is still alpha running on alpha, it has not been extensively tested on the Mac.

Forci, the problem resides in the way Silverlight handles and sends mouse/keyboard events. Please remember it is alpha running on alpha! Further versions will fix this problem, in my opinion.

I'm not sure if it's a step forward or backward going back to the WinForms API again, but I know one thing: for now, this framework is miles ahead of anything else in the Silverlight space. As mentioned, the real trick is getting these controls to interact with other root canvas objects. Could you make a comment on this, bobrob?

Billy, you are absolutely right; the WinForms API mainly targets existing .NET WinForms users. We have already made some tests regarding the interaction between WinForms controls and native XAML controls. It still needs some adjustments, but it works fine! We will publish some sample code very soon. Stay tuned!

How can we prepare a multi-page application in GOA WinForms? Any idea about redirecting the user to another page?

You can still use the "standard" Silverlight features. For instance, have a look at the end of the following QuickStart: . It explains how to call the HtmlPage.Navigate method.
Jeff

Cool - will these remain free?
Michael Foord

geetanjali: How can we prepare a multi-page application in GOA WinForms? Any idea about redirecting the user to another page?
I did not use GOA; however, the way I did that in the Carbon Calculator was to build user controls for each "page". I created a base class that inherited from Control, and added Show and Hide overridable/virtual methods. Those methods looked for two standard-named storyboards in the XAML. If they weren't present, they would simply set the Visibility to Visible (Show) or Collapsed (Hide) when called. Later, when I'm more satisfied with some of the animation performance and capabilities, I can go back and easily change how those animations work.
The base class also contained some other useful things, like a templated version of FindName that cleaned up the syntax a little. I'll have a blog post coming soon on how all this was done. It's basically a port of an idea I used in some conferences back in '98 about using VB User Controls as "forms" in an explorer-type application. Funny how this stuff comes back around :)
Pete
Silverlight.net Moderator, MVP: Silverlight, Silverlight Insider
POKE 53280,0 - My Blog at irritatedVowel.com

fuzzyman: Cool - will these remain free? Michael Foord
Yes. The standard version will remain free.
Jeff
http://silverlight.net/forums/t/2280.aspx
How to Create a Test Article on wikiHow

Steps

1. Read the Writer's Guide. All articles must be written in a certain format and the Writer's Guide gives full details.
2. Try out wikiHow's Sandbox page. This page is set up just for test edits, so nobody will mind if you type things in there just to see how it works. This is a great place to play with formatting and try editing for the first time.
3. Use the Preview button on any page with any edit you make; no one will ever see your edits until they have been fully published. At the bottom of every article you open to edit on the wikiHow site is a button that will show you exactly what your page will look like, so that you can adjust whatever you want before you save it.
4. Create an account. You don't have to use your real name, but it's best if you have a user name before you do the next step.
5. In the upper right-hand corner, below your user name and the search box, click "Create a Page".
6. Create your test page as a sub-page of your user page. For the article title, put in a line like this, being sure to insert your own user name. Don't create your own personal Sandbox pages on wikiHow - they take additional time to patrol and, besides, all editing you do on your Sandbox page can be done within the Sandbox page itself. Use names like "test page" or the like to keep them from appearing to be Sandbox pages.
   User:YourUserName/test page
7. Edit your test page, but do not publish your edit - use the Preview button on this page only!
8. Ask for help if, after previewing, you can't seem to get the image to be correct (this is the only time you can fully publish your edit to this test page). Ask an Administrator, New Article Booster or other veteran user on their talk page. Most will be happy to answer your question if they can.
9. Post your content on a relevant article's Discussion page. When you're satisfied with your work and it is ready to go, copy and paste it into a relevant article.
Skip this step if it was all just test edits.
10. Ask an administrator to delete any test page you are finished using. Be sure to state that it is your page and that you are through with it.

Tips

- If you have an article in mind to write, try writing it directly under the article title. See the Writer's Guide for more information. It's OK if the article isn't perfect the first time. You can always edit the article again if something isn't right yet.
- Ask for help. If you have trouble making what you want on a test page, ask an administrator or other veteran user on their talk page. Most will be happy to assist if they can.
- If you are through with a test page and wish to have it deleted, ask an administrator, and be sure to state why you want it deleted.

Warnings

- Please do not create test pages in the main namespace. Always make them a sub-page of a user page.
- While test pages are generally allowed to exist for experimental purposes, you may not post content in violation of wikiHow policy. Inappropriate content will still be deleted, even if it is on a test page.
https://www.wikihow.com/Create-a-Test-Article-on-wikiHow
NAME
     i386_get_ldt, i386_set_ldt -- manage i386 per-process Local Descriptor Table entries

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <machine/segments.h>
     #include <machine/sysarch.h>

     int i386_get_ldt(int start_sel, union descriptor *descs, int num_sels);
     int i386_set_ldt(int start_sel, union descriptor *descs, int num_sels);

DESCRIPTION
     The i386_get_ldt() system call returns a list of the i386 descriptors that the process has in its LDT, and i386_set_ldt() sets them. If start_sel is LDT_AUTO_ALLOC, num_sels is 1 and the descriptor pointed to by descs is legal, then i386_set_ldt() will allocate a descriptor and return its selector number. If num_descs is 1, start_sels is valid, and descs is NULL, then i386_set_ldt() will free that descriptor (making it available to be reallocated again later). If num_descs is 0, start_sels is 0 and descs is NULL, then, as a special case, i386_set_ldt() will free all descriptors.

RETURN VALUES
     Upon successful completion, i386_get_ldt() returns the number of descriptors currently in the LDT. The i386_set_ldt() system call returns the first selector set on success. If the kernel allocated a descriptor in the LDT, the allocated index is returned. Otherwise, a value of -1 is returned and the global variable errno is set to indicate the error.

ERRORS
     The i386_get_ldt() and i386_set_ldt() system calls will fail if:

     [EINVAL]  An inappropriate value was used for start_sel or num_sels.

     [EACCES]  The caller attempted to use a descriptor that would circumvent protection or cause a failure.

SEE ALSO
     i386 Microprocessor Programmer's Reference Manual, Intel

WARNING
     You can really hose your process using this.
http://manpages.ubuntu.com/manpages/precise/man2/i386_get_ldt.2freebsd.html
OcrRegion class

Platform: Appium
Language: Java

An object of this type is used to define where OCR text extraction should be done.

This feature is experimental. Please note that the functionality and/or API may change.

You can extract text from an application window using OCR by passing one or more OcrRegion objects to the method Eyes.extractText. Each such object defines a region in the application window. In addition, use the hint method to specify literal text or a regular-expression-like pattern that should match the text found. The pattern passed as a hint helps overcome ambiguities that arise when using OCR. It can be used, for example, to distinguish between the digit 0 (zero) and the letter O. For more information see Eyes OCR support.

Import statement

import com.applitools.eyes.locators.OcrRegion;

Example

Example not yet available.
https://applitools.com/tutorials/reference/sdk-api/appium/java/ocrregion
Devel::REPL::Profile - Code to execute when re.pl starts

version 1.003028

    package Devel::REPL::Profile::MyProject;

    use Moose;
    use namespace::autoclean;

    with 'Devel::REPL::Profile';

    sub apply_profile {
        my ($self, $repl) = @_;
        # do something here
    }

    1;

For particular projects you might well end up running the same commands each time the REPL shell starts up - loading Perl modules, setting configuration, and so on. A mechanism called profiles exists to let you package and distribute these start-up scripts, as Perl modules.

Quite simply, follow the "SYNOPSIS" section above to create a boilerplate profile module. Within the apply_profile method, the $repl variable can be used to run any commands as the user would, within the context of their running Devel::REPL shell instance. For example, to load a module, you might have something like this:

    sub apply_profile {
        my ($self, $repl) = @_;
        $repl->eval('use Carp');
    }

As you can see, the eval method is used to run any code. The user won't see any output from that, and the code can "safely" die without destroying the REPL shell. The return value of eval will be the return value of the code you gave, or else, if it died, a Devel::REPL::Error object is returned.

If you want to load a Devel::REPL plugin, then use the following method:

    $repl->load_plugin('Timing');

The load_plugin and eval methods should cover most of what you would want to do before the user has access to the shell. Remember that plugin features are immediately available, so you can load for example the LexEnv plugin, and then declare my variables which the user will have access to.

To run the shell with a particular profile, use the following command:

    system$ re.pl --profile MyProject

Alternatively, you can set the environment variable DEVEL_REPL_PROFILE to MyProject. When the profile name is unqualified, as in the above example, the profile is assumed to be in the Devel::REPL::Profile:: namespace.
Otherwise, if you pass something which contains the :: character sequence, it will be loaded as-is.
http://search.cpan.org/dist/Devel-REPL/lib/Devel/REPL/Profile.pm
CC-MAIN-2017-13
refinedweb
340
50.06
SQLite is a relational database management system contained in a small (~350 KB) C programming library. In contrast to other database management systems, SQLite is not a separate process that is accessed from the client application, but an integral part of it. To create a SQLite database, use CreateSQLiteDatabase():

    import arcpy

    # Set local variables
    sqlite_database_path = 'C:/Data/Counties.sqlite'

    # Execute CreateSQLiteDatabase
    arcpy.gp.CreateSQLiteDatabase(sqlite_database_path, "ST_GEOMETRY")

It's not too hard. All file paths must end in the '.sqlite' extension. The database supports two spatial types: "ST_GEOMETRY" or "SPATIALITE". ST_GEOMETRY is Esri's storage type, whereas SPATIALITE is the SpatiaLite geometry type. ST_GEOMETRY is the default geometry storage type.

This database can be fully used with the Python sqlite3 module that comes as a default with any Python installation. This means you can create a database using ArcPy, then perform all updates through native SQL instead of update/insert cursors.

Some notes:
- ":memory:" stored databases are not supported; they must be written to disk
- SQLite version 3.16.2

Enjoy
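Since the post's point is that you can drive the resulting file with the standard sqlite3 module instead of arcpy cursors, here is a minimal sketch of that workflow. The file name and the table/column names are illustrative, not from the original post:

```python
import sqlite3

# Open the SQLite database created with ArcPy (the file name and the
# table/column names below are illustrative, not from the original post).
conn = sqlite3.connect("counties.sqlite")
cur = conn.cursor()

# Plain SQL instead of arcpy update/insert cursors.
cur.execute("CREATE TABLE IF NOT EXISTS county_stats (name TEXT, pop INTEGER)")
cur.execute("DELETE FROM county_stats")  # keep the example repeatable
cur.execute("INSERT INTO county_stats (name, pop) VALUES (?, ?)", ("Adams", 18728))
cur.execute("UPDATE county_stats SET pop = pop + 100 WHERE name = ?", ("Adams",))
conn.commit()

pop = cur.execute(
    "SELECT pop FROM county_stats WHERE name = ?", ("Adams",)
).fetchone()[0]
conn.close()
```

Parameterized queries (the `?` placeholders) keep the SQL safe from injection and quoting bugs, which is one more reason to prefer them over string-built statements.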
http://anothergisblog.blogspot.com/2013/08/creating-sqlite-database-via-102-arcpy.html
CC-MAIN-2017-22
refinedweb
169
50.73
Good day,

I have a GDB with features with almost the same name, only the last postfix is an underscore and a number (Test_area, Test_area_1). I am working on a script that will merge the features with the same name. This is what I have so far. It works, but the feature list is about 50 features long and, more importantly, I would like to learn how I can improve this script.

    import arcpy
    from arcpy import env
    import os

    # Set the workspace for the ListFeatureClasses function
    arcpy.env.workspace = r"D:\GIS\Test.gdb"

    outputA = r"D:\GIS\Output.gdb\TestA_area"
    outputB = r"D:\GIS\Output.gdb\TestB_area"
    outputC = r"D:\GIS\Output.gdb\TestC_area"
    outputD = r"D:\GIS\Output.gdb\TestD_area"

    # Use the ListFeatureClasses function to return a list of all FCs.
    fclistA = arcpy.ListFeatureClasses("TestA_area*", "ALL")
    fclistB = arcpy.ListFeatureClasses("TestB_area*", "ALL")
    fclistC = arcpy.ListFeatureClasses("TestC_area*", "ALL")
    fclistD = arcpy.ListFeatureClasses("TestD_area*", "ALL")

    arcpy.Merge_management(fclistA, outputA)
    arcpy.Merge_management(fclistB, outputB)
    arcpy.Merge_management(fclistC, outputC)
    arcpy.Merge_management(fclistD, outputD)

Greetings,
Peter

Does something like this get you what you need? This assumes you do not have any feature classes in a feature dataset. If you do, the code will be slightly different.
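One way to generalize the repeated blocks, assuming the names always differ only by a trailing `_<number>`: group the feature class names by their base name and run one merge per group. The grouping logic below is plain Python (the arcpy calls are shown only as comments, so this sketch stands alone):

```python
import re
from collections import defaultdict

def group_by_base_name(fc_names):
    """Group names like 'TestA_area', 'TestA_area_1' under 'TestA_area'."""
    groups = defaultdict(list)
    for name in fc_names:
        # Strip a trailing _<digits> suffix, if present, to get the base name.
        base = re.sub(r"_\d+$", "", name)
        groups[base].append(name)
    return dict(groups)

# Sample input; in the real script this would come from
# arcpy.ListFeatureClasses() against the workspace.
fcs = ["TestA_area", "TestA_area_1", "TestB_area", "TestB_area_1", "TestB_area_2"]
groups = group_by_base_name(fcs)

# One merge per group, e.g.:
# for base, members in groups.items():
#     arcpy.Merge_management(members, os.path.join(output_gdb, base))
```

With this approach, adding a fifth or fiftieth feature class name needs no new code; only the listing pattern and output geodatabase path stay configurable.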
https://community.esri.com/thread/121114-merge-multiple-features
CC-MAIN-2019-13
refinedweb
203
53.17
Hi, thanks for the excellent tutorial, but I had problems with the implementation of this API, because I don't understand how the frontend works, and whether I need to add a token in the request header for Auth0. Can you show the frontend code of this application?

Developing a Secure API with NestJS

Thank you for joining the Auth0 community and for your kind feedback. Sorry for the late reply. I missed the notification. It's definitely in the works to release the client application with a tutorial. I've been making some tweaks, as the current app uses a demo launcher to bootstrap the Auth0 values. Right now that client is built with React and RxJS. Is that an architecture that interests you? My goal is to provide it in both React and Angular. I think that for React it may be more mainstream to use the Context API instead of RxJS.

Adan, I am currently working on this to simplify the client application and its corresponding tutorial.

@zacksinclair, @mob, @jajaperson and anyone else who may know: Is there a best practice or recommended way to initiate a module such as MongooseModule using environment variables? I essentially need to use a secret in the connection string to MongoDB Cloud, but I have not found an easy way to make this work with the existing ConfigModule and ConfigService. The goal is to have something like this:

    @Module({
      imports: [
        ConfigModule.forRoot(),
        MenusModule,
        LocationsModule,
        ItemsModule,
        MongooseModule.forRoot(
          `mongodb+srv://<USERNAME>:<PASSWORD>@someclustername.mongodb.net/test?retryWrites=true&w=majority`,
        ),
      ],
      controllers: [AppController],
      providers: [AppService],
    })
    export class AppModule {}

But I'd like USERNAME and PASSWORD to come from an .env file. Thank you for any insight you may provide me on this.

I think this may be the best way.

Dan, I responded to the email!
Yes, the user guide works, except it's designed to be used with mongoose and I wanted to use MongoDB, so this is what I did. In the app.module I wrote:

    //app.module
    import { Module } from '@nestjs/common';
    import { ItemsModule } from './items/items.module';
    import { TypeOrmModule } from '@nestjs/typeorm';
    import { Item } from './items/item.entity';
    import * as dotenv from 'dotenv';

    dotenv.config();

    @Module({
      imports: [ItemsModule, TypeOrmModule.forRoot({
        type: 'mongodb',
        url: process.env.URL,
        database: process.env.DATABASE,
        entities: [Item],
        synchronize: true
      })],
      controllers: [],
      providers: [],
    })
    export class AppModule { }

Then I wrote the service like this:

    import { Injectable } from '@nestjs/common';
    import { InjectRepository } from '@nestjs/typeorm';
    import { getMongoRepository } from 'typeorm';
    import { Item } from './item.entity';

    @Injectable()
    export class ItemsService {
      manager = getMongoRepository(Item);

      async create(newItem: Item): Promise<void> {
        try {
          await this.manager.insert(newItem);
        } catch (err) {
          return err;
        }
      }
    }

I used the MongoDB TypeORM definition. As far as dotenv, I really did not have problems. Hope this helps, Adan

Really great tutorial. Suggest to change the example so that the audience variable ends with a "/". I just spent over an hour trying to figure out what's wrong with my code; eventually it was the missing "/". This is missing in your example as well.

Howdy, welcome to the Auth0 Community. Thank you for your feedback and I am glad that you enjoyed the tutorial. When you created the identifier, did you perhaps add the / at the end? The identifier can be anything you want. The tutorial suggests using but you can use any other string value: The value of AUTH0_AUDIENCE in the .env file just needs to match whatever the value of the Identifier is
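The underlying pattern in this thread — load secrets from a .env file into the process environment before building the connection string — is language-agnostic. A minimal Python sketch of the same idea, with purely illustrative variable names and cluster host:

```python
def build_mongo_url(env):
    """Substitute credentials from an environment mapping into a
    mongodb+srv connection string (the cluster host is illustrative)."""
    return (
        "mongodb+srv://{user}:{password}@someclustername.mongodb.net/"
        "test?retryWrites=true&w=majority"
    ).format(user=env["DB_USERNAME"], password=env["DB_PASSWORD"])

# In a real app these values would come from a .env file loaded into
# the process environment (e.g. os.environ) before module setup runs.
url = build_mongo_url({"DB_USERNAME": "appuser", "DB_PASSWORD": "s3cret"})
```

The key ordering constraint, which is what trips people up in NestJS too, is that the .env file must be loaded before any module reads the variables — hence `dotenv.config()` running at the top of app.module in the answer above.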
https://community.auth0.com/t/developing-a-secure-api-with-nestjs/33026?page=3
CC-MAIN-2020-16
refinedweb
575
57.27
I know that the library is being loaded, and I am fairly certain that the linkage is correct because the class performs as expected when invoked interactively, e.g. java Hello. I failed to mention earlier that this works on NT, but not HP-UX. I have also read postings that indicate this is an issue on other versions of UNIX as well. Has anyone accomplished this on UNIX? If so, were there any special configuration issues? Eric.

-----Original Message-----
From: Russell Freeman [mailto:Russell.Freeman@sagemaker.com]
Sent: Tuesday, December 05, 2000 10:48 AM
To: 'tomcat-user@jakarta.apache.org'
Subject: RE: Tomcat, JNI and HP-UX

I have some tips from experience with both Win32 and AIX platforms. I'm assuming that HP-UX C dynamic link libraries are built similar to AIX ones...?

- JDK 1.1.8 uses JNI version 1.0 bindings by default. Use the -jni option with javah to create JNI 1.1 style C headers. These are easier to link with.
- Make sure you use the generated header for the basis of the function implementation (rather than copying by hand). This will ensure you get the function header exactly right.
- The UnsatisfiedLinkError exception can occur for many reasons: the library is not found (not on the LIBPATH, for instance), or the method wasn't resolved (not exported or wrong signature in C).

Given the following java class with native methods:

    package a.b;
    public class c {
        public native void doSomething();
        public native void doSomethingElse();
    }

You will need an export file with your so/shared library/DLL containing the mangled function prototypes. For example, in foo.exp (the export file for the library foo.so) I have:

    Java_a_b_c_doSomething
    Java_a_b_c_doSomethingElse

This is the JNI 1.1 C export for the above java code.
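The mangled export names above follow a fixed scheme: `Java_`, then the package with dots replaced by underscores, then the class name, then the method name. A small Python helper (illustrative only; it deliberately ignores JNI's escape sequences for underscores or Unicode in names, and overload suffixes) shows the scheme:

```python
def jni_export_name(package, cls, method):
    """Build the simple JNI export name for a native method.

    Note: this covers only the plain case. Real JNI mangling also
    escapes characters (e.g. '_' in a name becomes '_1') and appends
    an argument-signature suffix for overloaded natives.
    """
    parts = package.split(".") + [cls, method]
    return "Java_" + "_".join(parts)

print(jni_export_name("a.b", "c", "doSomething"))
# Java_a_b_c_doSomething
```

Checking the symbol your compiler actually exported (e.g. with `nm` on the shared library) against this expected name is a quick way to diagnose an UnsatisfiedLinkError.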
-----Original Message-----
From: Eric Lee (LYN) (EUS) [mailto:EUSERIL@am1.ericsson.se]
Sent: 05 December 2000 14:45
To: 'tomcat-user@jakarta.apache.org'
Subject: Tomcat, JNI and HP-UX

I am using Tomcat 3.1 on HP-UX 10.20 with Java version ("HP-UX Java C.01.18.03 05/12/2000 16:31:36 hkhd02"). I am attempting to use a servlet that instantiates a class that contains JNI methods, and I keep getting java.lang.UnsatisfiedLinkError exceptions when these native methods are accessed. The JNI methods execute fine from the command line with the Java interpreter; I also know that the servlet is loading the shared library. There are similar postings in the mailing list archives but they are without response. If anyone has any comments, I sure would appreciate them.

Many Thanks,
Eric Lee
http://mail-archives.apache.org/mod_mbox/tomcat-users/200012.mbox/%3C9DBEF0AB0E49D311AD7D005004D4FB5004750AAF@eamlynt751.ena-east.ericsson.se%3E
CC-MAIN-2016-30
refinedweb
437
59.09
EmberConf 2017: State of the Union

Ember.js (or should we say Amber.js) turned five years old last December. In some ways, five years is a short amount of time. But when measured in web framework years, it feels like a downright eternity.

As Yehuda and I were getting ready for our keynote presentation at this year's EmberConf, we tried to remember what developing web apps was really like in 2011. We knew that the web had changed for the better since then, but I think we both had repressed our memories of how truly awful it was.

The Web in 2011

The most popular browser in 2011, by a wide margin, was IE8. Today, for most people, IE8 is a distant, half-remembered nightmare.

Today, we freely use new language features like async functions, destructuring assignment, classes, and arrow functions. We even get to use not-quite-standardized features like decorators ahead of time thanks to transpilers like Babel and TypeScript. In 2011, however, everyone was writing ES3. ES5 was considered too "cutting edge" for most people to adopt.

DOM and CSS features we've come to take for granted weren't available, like Flexbox and even querySelectorAll. Things were so primitive that, hard as it is to believe now, no one even questioned whether you might not need jQuery.

Ember in 2011

Ember was still finding its sea legs, too. There was no Ember App Kit yet, let alone Ember CLI. There was no router. npm 1.0 wasn't released until halfway through 2011. Ember apps used a global namespace and many people included their Handlebars templates in inline script tags.

    <html>
      <head>
        <script src="/public/ember.js"></script>
      </head>
      <body>
        <script type="text/x-handlebars">
          Hello, <strong>{{firstName}} {{lastName}}</strong>!
        </script>
        <script type="text/x-handlebars" data-template-name="my-cool-control">
          <div class="my-cool-control">{{name}}</div>
        </script>
        <script>
          App.ApplicationController = Ember.Controller.extend({
            firstName: "Trek",
            lastName: "Glowacki"
          });
        </script>
      </body>
    </html>

As antiquated as this feels today, this was more or less how most JavaScript apps were written.

Some parts of Ember are truly embarrassing to look back on. Because IE was so dominant, our rendering engine was optimized for its performance quirks. DOM APIs were extremely slow, so our templates were string-based: render everything as a string of HTML, and then insert it with a single innerHTML operation. (Modern rendering engines like React, Angular, and Glimmer all create their own DOM instead of asking the browser to parse HTML.)

Unfortunately, letting the browser create our DOM elements for us led to some… interesting approaches to go back and find them later. For one thing, we had to use the awkward {{bindAttr}} helper just to bind an element's attributes.

    <div id="logo">
      <img {{bindAttr src=logoUrl}}>
    </div>

Even worse was the Eldritch horror awaiting anyone who looked at the DOM:

    [screenshot of the rendered DOM omitted]

All of that just to render what today looks like this:

    <div>
      <h2>Welcome to Ember.js</h2>
      <ul>
        <li>red</li>
        <li>yellow</li>
        <li>blue</li>
      </ul>
    </div>

As bad as some of the early stuff was, we also have to credit Ember with being ahead of the curve. In many ways, Ember has continued to push the state of the art of client-side JavaScript forward.

Ember was the first to declare that build tools were critical to any frontend stack, making Ember CLI a first class part of the framework. Having opinionated build tools meant that we were able to be the first framework to embrace next-generation features of ES6, like Promises and modules, to name a few. While other frameworks have only recently landed Ahead of Time (AOT) compiled templates, we've had them for years — and have now moved on to an even more efficient compiled bytecode format.
Indeed, the fact that we've compatibly moved from string-based rendering to DOM-based rendering to our new VM-based architecture with Glimmer has been one of the keys to Ember's longevity.

Perhaps the biggest impact Ember has had is not the what but the how. Major changes to the framework go through an RFC process that solicits community feedback early and often. By requiring new features to go through a rigorous design process, even seasoned contributors must articulate rationales and the context driving different tradeoffs. I often hear from developers who don't even use Ember that they've adopted our RFC process for their own teams at work.

Ember was also the first major framework to adopt Chrome's six week release cycle. By putting all new work behind feature flags, a big feature taking longer than expected doesn't block getting other important improvements into your hands. Stable, beta and canary release channels let you decide for yourself the balance between riding the cutting edge or preferring battle-tested stability.

Ember's 2.0 release was also novel: it was the first framework to release a major new version without any breaking changes. An Ember app running on 1.13 could upgrade seamlessly to Ember 2.0, so long as it had no deprecation warnings. While the transition was bumpier than we would have liked for many people, this experiment showed how valuable focusing on upgrade paths is. Compared to the previous status quo of releasing new major versions that require you to effectively rewrite your app, we believe Ember 2.0 was an important bellwether that showed that JavaScript frameworks can make progress without breaking their ecosystem.

Of course, I'd be remiss if I didn't mention the Ember router. Routers that map URLs on to application code exist in every server-side framework, such as Rails and Django. Stateful UI architecture has also been around forever.
Ember's architecture borrows a lot from Cocoa, but the MVC idea has been around since at least Smalltalk-76. Ember's contribution was to stumble on to the idea that, in single-page apps, URLs and app architecture are intrinsically linked. By tying the models and components that appear on screen to the URL, keeping the two in sync becomes the framework's job.

Circa 2011 and before, it was common to hear people lament that JavaScript had become the new Flash. Websites that heavily relied on JavaScript "felt broken" in sometimes hard-to-articulate ways. Refreshing the page left you looking at a different thing. Sharing links took people to the wrong place. Bookmarks didn't work. Command-clicking to open in a new tab didn't work.

In 2017, people use JavaScript-driven apps all the time and rarely notice. By making the URL the cornerstone of how you organize your application, for the first time, Ember helped you build JavaScript applications that were no longer broken by default. Today there are fantastic routers available for React, Angular and other libraries, and all of them can trace a lineage back to Ember's router. The turning point for the wider acceptance of single-page apps happened when, as a community, we started to embrace the URL. Ember's router led that charge.

What's Next?

Five years is a good run for JavaScript frameworks. We've done much, much better than average: most frameworks die young. But what should our next move be? Many worthy competitors have come along, without all of the backwards compatibility baggage. Almost all of Ember's standout features, like build tools, AOT template compilation, a first-class router, and server-side rendering, are available for competing libraries. We could decide to put Ember into maintenance mode, cede the future to the newcomers, and focus on catering to the existing user base for many years to come. But I don't think that's what we should do.
I think it's possible to stay cutting edge without breaking the apps people have spent years investing in, and I think we have just the formula for doing it.

What Didn't Work?

With the benefit of hindsight, we can examine the improvements we've tried to make to Ember in the last year or two, and figure out what worked and what didn't. It's a little bit embarrassing to have to write this, since it's something I knew intellectually beforehand. But, in short, what didn't work for us was anything requiring big design upfront.

We wanted to make Ember easier to learn, so we wanted to eliminate controllers from the programming model. To do that, we wanted to introduce the idea of "routable components" — components that are managed by the router. But we also wanted to make Ember more approachable by introducing components that used <angle-bracket> syntax, so they work just like the HTML elements people are already familiar with. And if we were introducing routable components, they should obviously use the new component syntax — we shouldn't introduce a new API that people immediately felt like they had to rewrite.

We were also embarrassed that the design of the "pods" filesystem layout was left in a half-completed state, and we considered it to be a dead end for other features we wanted to introduce. But filesystem layout touches nearly everything, so the Module Unification RFC became another design that invisibly delayed other important features.

All of this work felt high-stakes because it touched such a fundamental part of Ember: the component API. Ember contributors felt like this was their one shot to get in that feature they'd always wanted. And if you had one shot, one opportunity, to seize everything you ever wanted in one moment, would you capture it, or just let it slip? Creating this series of dependencies meant that one disagreement on a particular RFC could delay work on another that, from the outside, seemed unrelated.
It also became near impossible for any one person to keep the state of all of the proposals in their head, so we did a very bad job of communicating status updates to the community. It's no surprise that many people perceive Ember as having slowed down over the last year.

What Did Work?

Despite these missteps, we actually did ship some pretty cool stuff in 2016 that people were able to use right away. FastBoot is an addon that people can drop in to their app to get server-side rendering with minimal setup. Engines allow big teams to split their app into smaller apps that can be worked on (and loaded) independently.

In both cases, we focused on adding small primitives into the framework that exposed some missing capability. For example, for FastBoot, we added the visit() method to Ember.Application. This method takes a URL and allows you to programmatically route an Ember app (instead of having to change the browser's window.location directly). FastBoot uses this API to render Ember applications in Node.js. While we figure out the best way to deploy production-ready server-side rendered JavaScript apps, we can move that experimentation out of Ember and into the ember-fastboot addon. Engines worked similarly: an RFC proposed a small set of primitives, and then the addon could build on these to add features we were less certain of.

And, of course, there's Glimmer 2. Shipping in Ember 2.10, this ground-up rewrite of our rendering engine was a huge success. We dramatically reduced compiled template size, with many apps seeing 30–50% reductions in total payload (after gzip!). Initial rendering performance was also improved. For example, the "Render Complex List" scenario in ember-performance ran 2x faster in Ember 2.10 than 2.9. Incredibly, these results were achieved as a drop-in upgrade to Ember.

    I can't think of a release of a library/framework that reduced my app's size AND significantly improved perf — Ember 2.10 is super rare.
    — Robin Ward (@eviltrout) December 13, 2016

Ground-up rewrites are usually fraught with compatibility peril. In this case, the secret was to invest upfront in infrastructure that allowed us to keep both the old and new rendering engine on master at the same time. Rendering tests were run twice, once on each engine, so we always had a snapshot of how far along we were. And by making compatibility with the existing API the goal from the start, there was no temptation to start from a "pure" re-implementation and figure out compatibility later.

Our New Modus Operandi: Unlocked Experimentation, In-Place Upgrades

Going forward, we will prioritize adding missing capabilities and primitives to the core Ember framework. No one should feel like they need core team approval to experiment with new ways of building applications. In some places, we're already good at this. For example, ember-redux and ember-concurrency are two examples that push the state of the art by building on top of Ember's already well-rationalized object model. Other areas, like our router and components, have been less open for experimentation (at least when using public API).

If we do decide that an existing feature needs a rethink, we will follow the Glimmer model: keep both the old and new running at once, and hold off merging until tests (and your apps!) work without changes. This is another example of something that should have been more obvious to us ahead of time. We're big fans of the Extensible Web Manifesto, and this bears an uncanny resemblance to that.

Glimmer's Performance Sweet Spot

Last year, we talked about Glimmer's VM architecture and promised many performance benefits to come. We delivered Glimmer in Ember 2.10 and this year we're continuing to reap the performance rewards of its modular VM architecture. Benchmarks are essential to measuring our performance improvements, but benchmarks are also dangerous.
Focusing on the wrong benchmark, or just one kind of benchmark, can cause you to miss important context. V8's Benedikt Meurer has a fantastic blog post about their new Ignition + TurboFan architecture, and how years of benchmark competition had caused them to be "over-focused on the peak performance case" while "baseline performance was a blind spot."

JavaScript libraries can fall into the same trap too. Community discussion often ends up focused around one measurement, which libraries then feel obligated to optimize for. For example, a few years ago it was updating performance and the infamous dbmon demo. Now the focus has turned to initial render times, as people (rightfully) focus on improving the experience of users on lower-end mobile devices and networks.

But there is a point at which you hit diminishing returns optimizing for the initial render while sacrificing update performance. Fundamentally, this is a tradeoff about bookkeeping. Do more bookkeeping upfront during initial render and subsequent renders can be better optimized. Do less bookkeeping and initial renders will be faster, but updating gets close to being a full re-render. There are other considerations like file size, eager vs. lazy parsing, optimizing for the JIT compiler, etc., but this accounts for most of the algorithmic performance differences.

Due to the drop-in nature of the Glimmer upgrade, we knew we couldn't regress on Ember's world-class update performance, even as we worked to improve initial render performance. This required us to find an architecture that would strike the optimal balance between the two. If you're interested in more of the details, and in particular how the Glimmer VM maintains better performance by default compared to Virtual DOM libraries as your UI scales, I highly recommend Yehuda's blog post explaining the design decisions that helped us hit our performance targets.

All of this is to say, Glimmer offers a novel approach to rendering component-based web UIs.
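The bookkeeping tradeoff described above can be sketched in a few lines of Python. This is a toy model, not Glimmer's actual implementation: one renderer pays an upfront cost to remember where each binding landed so an update touches only that slot, while the other keeps no records and rebuilds everything on update.

```python
class TrackingRenderer:
    """More upfront bookkeeping: remember where each binding landed."""
    def initial_render(self, data):
        self.slots = list(data.keys())  # bookkeeping paid at first render
        self.output = {k: f"<span>{v}</span>" for k, v in data.items()}
        return self.output

    def update(self, key, value):
        # Only the one recorded slot is touched.
        self.output[key] = f"<span>{value}</span>"
        return self.output


class RerenderRenderer:
    """Less bookkeeping: cheap first render, but updates redo everything."""
    def initial_render(self, data):
        self.data = dict(data)
        return {k: f"<span>{v}</span>" for k, v in self.data.items()}

    def update(self, key, value):
        self.data[key] = value
        return self.initial_render(self.data)  # close to a full re-render


t, r = TrackingRenderer(), RerenderRenderer()
assert t.initial_render({"title": "Hi", "count": 1}) == r.initial_render({"title": "Hi", "count": 1})
# Both strategies produce identical output; they differ only in how much
# work initial_render and update each have to do.
assert t.update("count", 2) == r.update("count", 2)
```

Both endpoints of the spectrum give the same rendered result; the engineering question is where on the spectrum the cost should sit, which is exactly the balance the post says Glimmer had to strike.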
It's great that Ember users get to take advantage of it. But what about everyone else?

Ember Adoption

One of my favorite pastimes is watching videos of old Steve Jobs presentations. One I like in particular is his 1998 Macworld keynote, when he had only been back at Apple for a year. Apple was on the brink of failure, low on money and with warehouses full of unwanted computers. The press, mainstream and tech journalists alike, all used one word to describe Apple: beleaguered.

When Steve showed up at Apple, he rapidly turned things around. The confusing product lineup was replaced with a simple-to-understand consumer/pro laptop/desktop matrix. They delivered the original, Bondi blue iMac, showing they still had the ability to deliver innovative new products. Despite this, it's hard to turn around a narrative. The press would give a reason why Apple was doomed to fail, and when Apple would fix that problem, they would come up with a new reason why Apple was doomed to fail. Borrowing from Maslow's Hierarchy of Needs, Steve introduced the Apple Hierarchy of Skepticism:

    "When I came to Apple a year ago, all I heard was 'Apple is dying, Apple can't survive.' It turns out that every time we convince people we've accomplished something at one level, they come up with something new. And I used to think this was a bad thing. I thought, 'Oh Jesus, when are they ever gonna believe that we're gonna be able to turn this thing around?' But actually now I think it's great! Because what it means is we've now convinced them that we've taken care of last month's question. And they're on to the next one! So I thought, let's get ahead of the game, let's figure out what all of the questions are gonna be, and map out where we are."

Without being overly dramatic, I think there are some obvious parallels between the 90's era Mac and Ember. While we have a fantastic community and high-profile, successful apps, it can feel like the momentum is somewhere else.
And I know Ember users who have told me they feel beleaguered by this common reaction: "You use Ember? I thought React was the new thing?" I've even gotten it from my Lyft driver.

    .@tomdale: "…if it makes you feel better, my lyft driver just asked 'isn't ember dead and react is the new thing?'"
    — Lady Zahra (@ZeeJab) March 30, 2017

When I think about the reasons people give for not using Ember, there are some that used to be common that I never hear anymore. Those ugly <script> tags in the DOM and lack of documentation were two major knocks against Ember, but we've since eliminated the DOM noise and invested heavily in guides and API documentation. We convinced people that these weren't a barrier anymore! But there are still lots of reasons people don't want to take another look at Ember. Let's introduce our own Ember Hierarchy of Skepticism.

By far, the three most common remaining reasons I hear for not using Ember are:

- It's monolithic and hard to adopt incrementally.
- It's too big out of the box, particularly for mobile apps.
- The custom object model is scary. I want to write JavaScript, not whatever that is.

Starting today, we can focus on overcoming these last three barriers to Ember's growth.

Introducing Glimmer.js

With Glimmer.js, we've extracted the rendering engine that powers Ember and made it available to everyone. Glimmer is just the component layer, so it's up to you to decide if you need routing, a data layer, etc. If you want to drop Glimmer components into an existing app, it's as simple as adding a Web Component.

For a quick, five minute tour of what building a Glimmer app is like, check out the video from Ember Map. Or visit glimmerjs.com to get started and read the documentation.

While extracting Glimmer to be used standalone from Ember, we also took the opportunity to clean up some of the API that people found most confusing when using Ember components.

Goodbye tagName, attributeBindings, etc.
Tired of remembering all of the magic properties needed to configure a component's root element?

    import Ember from 'ember';

    export default Ember.Component.extend({
      tagName: 'input',
      attributeBindings: ['disabled', 'type:kind'],
      disabled: true,
      kind: 'range'
    });

In Glimmer, the component's root element is defined in the template, so all of that goes away. (You can think of the component template now being "outer HTML" instead of "inner HTML".) Here's the same component in Glimmer, with just a template:

    <input disabled type="range" />

ES6 Classes

This gets even nicer once you introduce dynamic data from the component into it. Here's the Ember component:

    import Ember from 'ember';

    export default Ember.Component.extend({
      tagName: 'input',
      attributeBindings: ['disabled', 'type:kind'],
      disabled: false,
      kind: 'range',
      classNameBindings: 'type',
      type: 'primary'
    });

Now in a Glimmer component, using ES6 class syntax to provide dynamic data:

    <input disabled type="range" class={{type}} />

    import Component from '@glimmer/component';

    export default class extends Component {
      type = 'primary'
    }

TypeScript

Because Glimmer is written in TypeScript, it has great autocomplete and type definitions out of the box. And every new Glimmer app is configured to use TypeScript automatically. JavaScript is still the primary way to write Glimmer apps. Because it's extracted from a JavaScript framework, Glimmer's API has been designed to be used with JavaScript from the start. TypeScript is just an extra tool in your toolbelt — if you want it.

    import Component from '@glimmer/component';

    export default class extends Component {
      firstName: string;
      lastName: string;
    }

Computed Properties

Ember users love computed properties, but getting used to their syntax can be a challenge.
Because Glimmer uses ES6 classes, you can use standard getters and setters:

    import Component from '@glimmer/component';

    export default class extends Component {
      firstName = "Katie";
      lastName = "Gengler";

      get fullName() {
        return `${this.firstName} ${this.lastName}`;
      }
    }

Decorators

Glimmer uses decorators (a Stage 2 TC39 proposal) to augment a class's properties and methods. For example, to mark a component property as "tracked" (so changes to it are updated in the DOM), use the @tracked decorator:

    import Component, { tracked } from '@glimmer/component';

    export default class extends Component {
      @tracked firstName;
      @tracked lastName;

      @tracked('firstName', 'lastName')
      get fullName() {
        return `${this.firstName} ${this.lastName}`;
      }
    }

Actions

Actions in Glimmer are just functions, with optional argument currying. Use the {{action}} helper to bind the function to the component context:

    import Component, { tracked } from '@glimmer/component';

    export default class extends Component {
      @tracked name: string;

      setName(name: string) {
        this.name = name;
      }
    }

    <button onclick={{action setName "Zahra"}}>
      Change Name
    </button>

No .get() / .set()

In the above examples, you probably noticed that we never have to use the .get() method to retrieve a component property, or .set() to set one. This requirement frequently trips up new Ember users until they develop the right muscle memory. In Glimmer, we rely on ES5 getters and setters to intercept properties, so you never need to learn .get() and .set() at all.

File Size

Web developers are rightfully sensitive to file size. Not only do your app's dependencies need to be downloaded, JavaScript must be parsed and evaluated. Particularly on lower-end mobile devices, that can add up quickly. Ember has historically been larger in file size than its competitors. Our line of reasoning was: for the kinds of apps people build with Ember, that's all code that you'll eventually need to pull in anyways.
Today, a hello-world Ember app starts off with about 200KB of JavaScript. In my experience, most production Angular, Ember, and React apps hover between 400KB and 700KB of JavaScript, sometimes more. (Sometimes a lot more.)

While this is true of many apps, it's not universally true. Sometimes people have hard file-size requirements that disqualify Ember out of the gate. And when people are starting out on a greenfield app, it's hard for them to take it on faith that they will eventually need everything Ember offers. What if they don't? It feels safer to start small and bring things in piecemeal.

Shane Osbourne recently compared the file size of a "hello world" app generated by each of the major frameworks' CLI tools. While Ember is the largest, a Glimmer app is tiny: at 34KB, it's smaller than React, Angular, and Vue. Only Preact comes in smaller. Best of all, we haven't yet begun to focus on bundle optimization, so you can expect this size to decrease even more in the future.

So that's Glimmer.js. It's tiny, it's fast, and it can be adopted incrementally. And you can start playing with it today. But… where does that leave Ember?

Back to Ember

We believe that the key to balancing stability and progress in Ember is to make it easy to do experimentation outside of the framework. The only way to truly get a sense of something is to be able to use it. Glimmer components are the future of components in Ember. We want to let you (and everyone) get a chance to use Glimmer components before we make them an official part of Ember. But we're not leaving Ember users out in the cold until that happens.

A few weeks before EmberConf, Godfrey Chan submitted the "Custom Component API" RFC. This RFC is the key to bringing Glimmer components to Ember apps. Because the Glimmer VM is really a "library for writing component libraries," we can let addons specify their own custom component API.
Notably, this means we're working on making it possible to use the Glimmer components you've seen above in your existing Ember apps, just by installing an addon.

Glimmer apps also use the Module Unification filesystem layout. This is the link between the Ember and Glimmer worlds: if you decide you actually do need all of the functionality Ember offers, you will be able to drag and drop your Glimmer components into an Ember app.

One last thing. If you take a peek under the hood of a new Glimmer app, you'll see that it's made up of a few different npm packages, like @glimmer/application, @glimmer/di, etc. We spent time making sure these packages follow modern best practices for distributing JavaScript in 2017. Much of the secret sauce of a Glimmer app is in the ahead-of-time compilation we do with Rollup, so I recommend most people use the default Ember CLI flow documented on the website. That said, there's no stopping an enterprising developer from using these packages in other environments. Let experimentation reign!

Ember in 2017

While we're excited about Glimmer, work on Ember is not slowing down. If anything, the focus on exposing capabilities means that the pace of community experimentation should noticeably tick upwards.

Module Unification for Ember apps is under active development. We're applying the lessons we learned and are working to expose the primitives needed to implement the Module Unification filesystem layout in an addon. Development is happening on the master branch of ember-resolver behind a feature flag.

As we upstream Glimmer.js code into Ember, it gives us a great excuse to clean up older tests so that we can easily run them against both the old and new implementations, as we did with the rendering tests and Glimmer VM integration. We've also begun to implement a routing service that gives applications and addons imperative control over the router.
This is exciting because, previously, routing-related features like the built-in {{link-to}} helper relied on private API. With the routing service, developers will have the tools to build their own {{link-to}} helper if they wish.

Long term, our goal is to break Ember apart into a series of small modules. Each piece of Ember should be an npm package that you can remove if you don't need it. (Unlike most small-modules approaches, of course, things will "just work" together if you do need them. We remain strongly opposed to forcing integration work onto application developers.)

It should also work in reverse: if you start with Glimmer and realize you actually do need a router, services, a data layer, etc., you should be able to incrementally npm install your way to Ember.

This is the future we've always dreamed of for Ember: a complete, cohesive front-end stack for those who want it, with the ability to quickly pare it down if the need arises. We're not there quite yet, but it's an exciting goal to build toward, and I think we've shown tangible progress already with Glimmer.

I hope you are as excited about Ember and Glimmer as we are. We can't wait to see all of the cool stuff you build with them!

Originally published at emberjs.com on April 5, 2017, and authored by Tom Dale (with Yehuda Katz and Godfrey Chan).
Indian Constitution: historical underpinnings, evolution, features, amendments, significant provisions and basic structure.

Written v unwritten

| Written | Unwritten |
| --- | --- |
| Provisions have been codified into a single legal doc | Not so the case |
| Enacted on a particular date | Evolves with time |
| Constitution is supreme | Parliament is supreme, i.e. laws made by parliament supersede all previous laws |
| Judiciary enjoys wide powers | Judiciary enjoys limited powers, i.e. it can review actions of E, but not of L |
| Can be rigid, flexible or a combination of both, i.e. can be amended but more than 1 body involved | Flexible |
| Clear distinction b/w constitutional & other laws | No such distinction |
| Can be unitary or federal | Necessarily unitary |

Historical background

The Company Rule (1773-1858)

Regulating Act, 1773
1. The Governor of Bengal was designated GG of Bengal. An executive council of 4 was created to assist him. The 1st such GG was Warren Hastings. The Governors of Bombay and Madras were made subordinate to the GG of Bengal.
2. It provided for a SC at Calcutta, which was established in 1774.
3. Servants of the company were prohibited from engaging in any private trade.
4. The Court of Directors was now to report on the company's revenue, civil and military affairs in India.

Pitt's India Act, 1784
1. The CoD could manage commercial affairs, but political affairs were to be controlled by the BoC (6 commissioners for the affairs of India, including 2 cabinet ministers). Thus it established a double govt. The BoC could supervise and control all operations of the military and civil govt and the revenues of the British possessions in India (a term used here for the 1st time).
2. Govt of India was placed in the hands of the Governor-General (G-G) and a council of 3. The G-G could have his way by getting the support of even 1 member. Later, in 1786, the G-G was given the authority to overrule the council in important matters.
3. Bombay and Madras presidencies were subordinated to Bengal in all questions of war, diplomacy and revenues.
4. The company was allowed to retain its monopoly of Indian and Chinese trade.
And its directors retained the profitable right of appointing and dismissing officials in India. Moreover, the GoI was to be carried out through them.

Significance
1. It brought the company's affairs and administration under the supreme control of the govt. A new phase of the conquest of India began. India was made to serve all sections of the ruling classes of Britain.

Charter Act, 1833
1. The GG of Bengal was made GG of India (William Bentinck), and was given all civil and military powers. Thus for the 1st time a govt having authority over the entire territory possessed by the British in India was created.
2. The GG of India was given exclusive legislative powers. The Bombay and Madras governors were deprived of their legislative powers. Laws made now were called Acts (earlier, regulations).
3. The EIC ended as a commercial body and became a purely administrative body. Territories in India were held by the company "in trust for His Majesty".
4. It ended the company's monopoly of the tea trade and of trade with China. The debt of the company was taken over by the govt of India. The govt continued to be run by the company under the strict control of the BoC.
5. Introduced a system of open competition for civil servants. But this was negated.
6. Later (not by the act), the supreme authority was delegated to the Governor-General-in-Council. The G-G, having the veto power, became the de-facto ruler of India.

Charter Act, 1853
1. Legislative and executive functions of the GG's council were separated for the 1st time. A separate GG's legislative council, later known as the Indian (Central) Legislative Council, was established. Also, local representation was introduced for the 1st time: 6 new members were added, 4 of whom were appointed from the provinces of Madras, Bombay, Bengal and Agra.
2. Open competition for civil services.
3. It extended the company's rule to retain the Indian territories in trust for the crown, but did not specify any time limit.

The Crown Rule (1858-1947)

GoI Act 1858

Provisions
1. It transferred the power to govern India from the EIC to the crown.
Earlier the directors of the EIC and the BoC had the power; now power was given to the Secretary of State (SoS) aided by a council. The SoS was a member of the cabinet.
2. Govt was to be carried on as before by the GG. It changed the designation of GG of India to Viceroy of India. Lord Canning was the 1st viceroy. The viceroy would have an Executive Council; members would head different departments and act as advisors. Decisions would be taken by majority vote, but the viceroy could overrule on important matters.
3. Thus it ended the double govt by abolishing the BoC and the CoD.

Significance
1. With time the viceroy was subordinated, and the SoS controlled even the minutest details. Further, the SoS was responsible to the parliament. So, being controlled directly from London, Indian opinion had even less impact on govt policy.
2. Industrialists, merchants and bankers increased their influence over the govt. Thus now even the pretence of liberalism was given up.

Indian Councils Act 1861

Provisions
1. The GG's council was enlarged for making laws. In this capacity it was known as the Imperial Legislative Council. However, it possessed no real powers and was merely an advisory body. Any important measure required the prior approval of the govt before it could be discussed. It could not discuss financial or administrative matters, and had no control over the budget. A bill had to be approved by the GG, and could be vetoed by the SoS.
2. It made a beginning of representative institutions by associating Indians with the law-making process. The viceroy was authorised to add 6-12 members to his council, at least half of whom had to be non-officials. Thus it allowed the viceroy to add some non-official Indian members.
3. It initiated the process of decentralisation by restoring legislative powers to Bombay and Madras. This policy led to almost complete provincial autonomy by 1937.
4. New LCs were established for Bengal, NWFP and Punjab.
5. The portfolio system of Lord Canning was recognised.
6.
It empowered the viceroy to issue ordinances without the concurrence of the legislative council.

Significance
1. With no real powers, the Imperial Legislative Council was to do official work only and to give the appearance of important matters having been passed by a legislative body.
2. Non-official Indian members were few in number, nominated by the GG, mostly comprising princes, zamindars and merchants, and were unrepresentative of the Indian people.

Indian Councils Act / Lord Cross' Act, 1892
1. The number of additional (non-official) members in the Imperial Legislative Council (ILC) and the Provincial Legislative Councils (PLCs) was increased, but the official majority was maintained.
2. The budget could be discussed and questions could be asked now.
3. The act provided for indirect election for non-official seats for the 1st time; however, the word "election" was not used. It provided for nomination of some non-official members of (a) the ILC by the viceroy on the recommendation of the PLCs and the Bengal Chamber of Commerce, and (b) the PLCs by the governor on the recommendation of local bodies.

Indian Councils Act / Morley-Minto Reforms, 1909

Provisions
1. The number of elected members was increased in the ILC and PLCs. It retained the official majority in the ILC, but allowed the PLCs to have a non-official majority.
2. Most of the elected members were elected indirectly: by the PLCs in the case of the ILC, and by municipal committees and district boards in the case of the PLCs. Some elected seats were reserved for landlords and capitalists. The reformed council was still an advisory body.
3. Separate electorates were introduced, in which Muslims were grouped together in separate constituencies from which Muslims alone could be elected. This became a potent factor in the rise of communalism.
4. Powers of the legislatures were enlarged. Now they could pass resolutions, ask questions and vote on separate items in the budget.
5. 1 Indian was to be appointed to the viceroy's EC (Satyendra Sinha was the 1st, in 1909).

GoI Act / Montagu-Chelmsford Reforms, 1919

Provisions at the provincial level
1. Executive
a.
Dyarchy: it was rule by 2, i.e. the EC of the governor and the ministers. Some subjects, like finance and L&O, were called "reserved" and remained under the direct control of the governor and his EC. Others, such as education, health and LSG, were called "transferred" and were controlled by ministers responsible to the legislature. However, in case of failure of constitutional machinery, the governor could take control of the transferred subjects. The SoS and GG could interfere in reserved subjects, but only restrictively in transferred ones.
2. Legislature
a. Provincial Legislative Councils were enlarged and a majority of the members were to be elected.
b. LCs could initiate legislation.
c. Women got the right to vote.

Provisions at the central level
1. Executive
a. The GG was to be the chief executive authority. He retained full control over reserved subjects.
b. In the Viceroy's EC of 8, 3 were to be Indians.
2. Legislature
a. 2 houses of legislature (CLA, CoS) were established at the centre. However, they had no control over the G-G and his EC.

Significance
1. The central govt had unrestricted control over the provincial govts, and the right to vote was severely restricted.
2. The INC had moved beyond such halting concessions. At its Bombay session in 1918, under Hasan Imam, it condemned the reforms.
3. Some veteran leaders led by Surendranath Banerjee were in favour of accepting the reforms; they left the Congress and founded the Indian Liberal Federation. Later they were known as Liberals and played a minor role.

Simon commission report, 1930
1. It recommended abolition of dyarchy, extension of responsible govt to the provinces, establishment of a federation of British India and the princely states, continuation of communal electorates, etc.
2. To consider the recommendations, 3 RTCs were held.

GoI Act, 1935

Making of the Constitution

Background
1. In 1934, the idea of a Constituent Assembly (CA) was put forward by MN Roy. In 1935 the INC officially demanded a CA to frame the Constitution of India. In 1938 JLN declared that the Constitution of free India must be framed by a CA. The demand was finally accepted by the govt in the August Offer of 1940.
Cripps came to India with a draft proposal in 1942. Finally, the Cabinet Mission's proposal for the composition of the CA was accepted.

Composition of CA
1. Seats were to be allotted to the provinces and princely states in proportion to population.
2. Seats of the provinces were to be divided among Muslims, Sikhs and General. Representatives of each community were to be indirectly elected by members of that community in the Provincial LA, by proportional representation with single transferable vote (PR by STV).
3. Representatives of the princely states were to be nominated by their heads.
4. Elections were held in July-Aug 1946.

Working of CA
1. The CA held its 1st meeting in Dec 1946. The ML boycotted it. Dr Sachchidananda Sinha was elected as the temporary president; later Dr Rajendra Prasad and HC Mukherjee were elected as the president and vice-president.
2. JLN moved the Objectives Resolution in the Assembly. It laid down the fundamentals and philosophy of the constitutional structure. The Preamble of the present Constitution is a modified version of it.

Independence Act
1. It made the CA fully sovereign, able to repeal or alter any law made by the parliament. The CA was also given an (ordinary) legislative role, under GV Mavlankar. ML members withdrew from the CA of India.
2. In addition, the CA ratified India's membership of the Commonwealth, and adopted the national flag, anthem and song. It elected Rajendra Prasad as the 1st president on Jan 24, 1950.

Enactment and Commencement
1. The final draft (395 Articles, 8 schedules) was adopted by the CA on 26 Nov 1949. Some provisions, pertaining to citizenship, elections, the provisional parliament, temporary provisions and the short title, came into effect immediately.
2. The remaining major part came into force on Jan 26, 1950, referred to as the date of commencement of the Constitution. The date was chosen because of the historical importance of Poorna Swaraj day (Jan 26, 1930), observed following the resolution of the Lahore session (1929).

Salient Features

Sources
1. Govt.
of India Act 1935 - Administrative details, federal system, powers of the federal judiciary, emergency powers, Public Service Commissions, Governor post
2. United Kingdom - Parliamentary form of govt, citizenship, law-making procedure, bicameral legislature, rule of law, writs, CAG office
3. USA - Preamble, FRs, impeachment of SC and HC judges, independent judiciary, functions of VP, JR
4. Ireland - DPSP, nomination of RS members, method of presidential election
5. Canada - Federation with strong centre, residuary powers with centre, appointment of governors (by centre), advisory jurisdiction of the Supreme Court
6. Australia - Concurrent list, freedom of trade, joint sitting
7. Germany - Suspension of FRs during emergency
8. South Africa - Procedure for amendment, election of members of RS
9. France - Republic; "fraternity" in the preamble
10. Russia (USSR) - Fundamental Duties; "justice" in the preamble
11. Japan - Procedure established by law

PART I: UNION & ITS TERRITORY

Art 1
1. It describes India, i.e. Bharat, as a Union of States.
2. It also classifies the territory of India into states, UTs and acquired territories.

Union of states
1. It signifies that, unlike a federation, India is not the result of an agreement between states, and no state can secede from it.
2. "Territory of India" is a wider term than "Union of India", as the Union includes only the states. Thus the states are members of the federal system; UTs & acquired territories are administered directly by the centre.

Art 2
1. It empowers the parliament to admit existing states or establish new states into the UoI, on such terms and conditions as it thinks fit. The admission and establishment of states here relate to those states which are not a part of the UoI.
2. Article 3, on the other hand, relates to the formation of, or changes in, the existing states (and UTs) of the UoI. In other words it deals with internal readjustment of the constituent states of the UoI.

Art 3
1. It empowers the parliament to form new states or UTs by all possible combinations of states, UTs, or both.
A part of a state/UT can also be used.
2. It also empowers the parliament to alter the area, boundary or name of any state.
3. A bill for the purpose of the above changes shall be introduced only on the recommendation of the president, which is given after the bill has been referred to the concerned state legislature for its views. The views are not binding. No reference is needed in the case of UTs.
4. Thus India is an indestructible union of destructible states, unlike America, which is an indestructible union of indestructible states.

Art 4
1. Laws made u/a 2 & 3 will not be considered as CAAs.

Evolution of states & UTs

Integration of princely states
1. After the integration of Hyderabad, Junagarh and Kashmir, the Constitution contained a 4-fold classification of states: Part A (9 erstwhile governors' provinces of British India), Part B (9 erstwhile princely states with legislatures), Part C (erstwhile chief commissioners' provinces of British India & some princely states), Part D (A&N islands).

Dhar commission & JVP committee
1. Dec 1948 - The linguistic provinces commission under SK Dhar recommended administrative convenience, and rejected language, as the basis of reorganisation of states.
2. Apr 1949 - The JVP committee, consisting of Nehru, Patel and Pattabhi Sitaramayya, again rejected language as the basis of reorganisation.
3. Oct 1953 - The govt was forced to create the 1st linguistic state, known as Andhra State.

Fazl Ali commission
1. The creation of Andhra State intensified the demand from other regions, and thus the govt created another States Reorganisation Commission.
2. In its report in September 1955, it broadly accepted language as the basis of reorganisation, but rejected the theory of "one language, one state". Its view was that the unity of India should be given prime importance in reorganisation.
3. By the States Reorganisation Act, 1956 and the 7th CAA, 14 states & 6 UTs were created.

PART II: CITIZENSHIP

2 kinds of people
1. There are 2 kinds of people: citizens & aliens.
2.
Enemy aliens, unlike friendly aliens, do not enjoy protection against detention & arrest u/a 22.
3. The following rights are given to citizens but denied to aliens: Art 15, 16, 19, 29, 30; the right to vote in elections of the LS and SLA; the right to become an MP/MLA/MLC; eligibility to hold the offices of president, VP, judge of the SC and HC, attorney general and advocate general. Citizens also have certain duties. In India a naturalised citizen is eligible for the office of president, whereas in the USA he is not.

Constitutional provisions
1. The Constitution provides citizenship rights for people born before 26th Jan 1950, & it confers unfettered power on the parliament to bring out legislation governing the citizenship rights of those born on or after 26th Jan 1950.

Types of citizenship (Citizenship Act, 1955)
1. By birth - If one is born in India and 1 parent is Indian.
2. By descent - It covers persons born outside India, provided 1 or both parents are Indians at the time of their birth. The birth has to be registered in an embassy of India. Citizenship by descent is as of right.
3. By registration - Even if the parents are not Indians, one can become an Indian citizen by registration, provided the person has lived in India for 7 yrs prior to registration. Also he should be one of the following: (a) PIO (b) persons married to Indian citizens (c) minor children of Indian parents (who became Indian citizens after the child's birth) (d) persons whose parents are registered as citizens (e) persons whose either parent was a citizen. OCIs can register if they have 5 years' standing with 1 year's residence in India before applying.
4. By naturalization - If a person has resided in India for 12 years before applying.
5. By incorporation of a territory - If India acquires some new territory, then those people will be given a choice.

Loss of citizenship (Citizenship Act, 1955)
1. By renunciation - If a person acquires citizenship of another country & renounces his Indian citizenship. In that case every minor child also loses citizenship.
2.
By termination - If a person voluntarily acquires citizenship of another country & does not renounce his Indian citizenship, then the GoI can terminate it.
3. By deprivation - The central govt can terminate citizenship on certain grounds, such as misrepresentation or concealment of facts. This is applicable only to naturalized citizens (i.e. citizens by registration or by naturalization).

Overseas Citizens of India
1. The GoI appointed the LM Singhvi committee in 2000 to inquire into the matter of granting citizenship to PIOs. As per its recommendations, OCI was created in 2003 by amending the Citizenship Act, 1955.
2. Under the scheme, PIOs living in any part of the world except Pakistan & Bangladesh are eligible to apply for OCI. But OCI is also available to people other than PIOs.

Benefits
1. Can travel to India without a visa; will get a travel document similar to a passport.
2. Parity with NRIs in economic, financial and educational fields.
3. Property and investment in India will get domestic treatment.

Limitations
1. Can't hold a constitutional office, vote, or contest any election.
2. Doesn't get equality of treatment under Art 16 for public offices.

| PIO | OCI |
| --- | --- |
| Criteria: (a) they held an Indian passport at some time, or (b) their lineage can be traced up to 3 generations (either of which was born in India & was a permanent resident according to the GoI Act, 1935). But they should not be citizens of Pakistan, Bangladesh, Afghanistan, Bhutan, Nepal, China or SL | Criteria: (a) they were eligible to become citizens on 26th Jan 1950, or they were citizens on or after this date, or (b) children & grandchildren of such persons |
| No registration is required if stay < 180 days | No registration required at all |
| PIO card issued for 15 yrs (visa-free travel) | Lifelong visa-free travel |
| Parity with NRIs in all economic, financial & educational fields except for acquiring agricultural property | Same |

PART III: FRs

General significance & criticism

Theory of separation of powers
1. It was proposed by the French philosopher Montesquieu.
It proposed that there be 3 departments of govt (E, L, J) which will be separated from each other. Such separation will be watertight so that (a) there is no concentration of power within 1 organ, and (b) individual liberties are safeguarded.
2. The US Constitution was the 1st written constitution to adopt TSP. Since the US Constitution provides for a presidential system, TSP was incorporated fully. However, the Constitution provided for the doctrine of checks & balances, because of which the separation was not watertight.
3. The Indian Constitution incorporates TSP explicitly u/a 50 & implicitly under Parts V & VI. To implement Art 50, the govt enacted the CrPC 1973, which separated the E from the J by taking away the judicial powers of the district magistrate. However, since the Indian Constitution provides for a parliamentary system, TSP has been incorporated only partially, in the sense that E & L are not separated. Further, the incorporation has been modified in the sense that the doctrine of checks & balances is also there in the Indian Constitution.
4. Underscoring the importance of TSP in a democratic setup, the SC ruled that TSP is a part of the basic structure of the Constitution.

Doctrine of checks & balances
1. The concept originated in the US Constitution. It means that 1 organ of the govt can exercise control over the other 2 organs to limit their power within their constitutional authority.
2. The SC, in Kanadasan vs State of TN, 1996, ruled that the Indian Constitution incorporates in itself this doctrine.

Art 12 - Definition of state

Judicial review
1. JR is the power of the higher courts (SC and HCs) to declare a law unconstitutional and void if it is inconsistent with any of the provisions of the Constitution, to the extent of such inconsistency.
2. The courts, while declaring a law invalid, do not suggest improvements or alternatives; it is left to the state to take necessary steps. JR is available against both L and E. It is applied against the state's (and not individuals') actions.
3. As a concept it originated under the US Constitution. JR w.r.t. FRs is conferred explicitly in Art 13(2). For other constitutional provisions, JR is found implicitly under the writ jurisdiction of the SC and HCs.
4.
Conditions while applying JR: (a) If a law is capable of 2 interpretations, one which validates the law and a second which invalidates it, then the court will give effect to the 1st one and uphold the validity of the law. (b) However, if there is only 1 interpretation, and it clashes with the Constitution, then the court will declare the law unconstitutional and void. (c) Ordinarily, the court will not pronounce on the validity of a law whose enforcement is not in question. (d) Ordinarily the court shall not apply JR suo moto.
5. The SC in 1973 held JR to be a part of the basic structure of the Constitution.

Amendability of FRs

Art 13(2)
1. It says the state shall make no law that takes away or abridges one or more FRs. If done so, the law will be declared unconstitutional.

Shankari Prasad v UoI, 1951
1. The court ruled that the legislature enjoys 2 types of law-making powers: (a) ordinary legislative power, under which the legislation made is "law" & comes under the scope of Art 13(2); (b) constituent legislative power, under which the legislation is a CAA & outside the scope of Art 13(2). Thus parliament can amend any part of the Constitution, including FRs, by way of a CAA.
2. The court maintained this interpretation until the Sajjan Singh case (1965).

Golaknath v State of Punjab, 1967
1. The court overruled its earlier decisions & ruled that Art 368 only provides the procedure, and not the power, to amend. Thus parliament enjoys only ordinary legislative power & cannot amend FRs.
2. Moreover, FRs have been given a transcendental position by the Constitution, which no authority functioning under the Constitution can amend.

24th CAA, 1971
1. The govt amended Art 13 & 368, and gave itself the power to amend the Constitution u/a 368, with the provision that nothing under Art 13(2) shall be applicable to an amendment made u/a 368.

Kesavananda Bharati v State of Kerala, 1973
1. The court upheld the validity of the 24th CAA & stated that parliament can amend any part of the Constitution, including FRs. However, the power is limited to the extent of not destroying the basic structure of the Constitution.
The basic structure can be defined as those parts of the Constitution without which the Constitution would lose its basic character.
2. The SC did not define the basic structure, but it has indicated it in a number of cases since 1973. It includes the sovereign nature of the state, secularism, balance of powers, TSP, free and fair elections, RoL, JR, etc.

42nd CAA, 1976
1. The govt responded by inserting clauses 4 & 5 in Art 368, which said that a CAA cannot be challenged in any court & that the amending powers of the parliament are unlimited.

Minerva Mills v UoI, 1980
1. The court held clauses 4 & 5 to be unconstitutional & void on the grounds that they took away the power of JR & disturbed the balance among the organs of govt, which are parts of the basic structure of the Constitution.
2. So the present position is that parliament can amend any part of the Constitution without disturbing its basic structure. This will continue unless the court overrules its decision in the Kesavananda Bharati case by a bench larger than the 13-judge bench that decided it.

Right to equality (Art 14-18)

Art 14
1. The state shall not deny to any person equality before law or equal protection of laws within the territory of India.

Equality before law
1. The concept originated in England. It is a negative concept in the sense that no special privilege is given to anyone in the eyes of law.
2. It includes in itself the concept of rule of law, which has the following 3 elements:
a. Absence of arbitrary power, that is, no man can be punished except for a breach of law.
b. Equality before law, i.e., equal subjection of all citizens (rich/poor, high/low, official/unofficial) to the ordinary law of the land administered by the ordinary law courts.
c. The primacy of the rights of the individual, i.e., the constitution is the result of the ordinary law of the country. However, this stands modified in India, where the Constitution is the supreme law of the land.
d. The SC held that rule of law as embodied in Art 14 is a basic feature of the Constitution.
3. Exceptions
a. Immunities conferred on the President / Governor.
b.
No MP/MLA/MLC shall be liable to proceedings in any court for anything said or any vote given in the legislature or any committee thereof (Art 105 & 194).
c. Art 31-C
d. Diplomatic immunity

Equal protection of laws
1. It is a positive concept which says that people in equal circumstances should be treated equally, i.e. like should be treated alike.
2. However, where equals and unequals are treated differently, Art 14 does not apply. While it forbids class legislation, it permits reasonable classification of persons, objects and transactions by law. That means the classification should be proportionate, scientific, rational (i.e. people in the group satisfy the property and those not in the group don't satisfy it) and directly linked to the objective.

Art 15 - No discrimination
15(1) The state shall not discriminate on grounds only of rrscb (religion, race, sex, caste, place of birth).
15(2) No state or private discrimination on grounds only of rrscb with regard to access or use of public places.
15(3) The state can make special provisions for women and children.
15(4) The state can make special provisions for socially & educationally backward classes.
15(5) Reservations in educational institutions, including private ones, whether aided or unaided (except minority unaided), for socially & educationally backward classes.
1. It is available only to citizens and not to non-citizens.
2. All reservations for women are justified on the basis of Art 15(3).
3. Art 15(4) was added by the 1st CAA, 1951.
4. Art 15(5) was added by the 93rd CAA, 2005. The centre enacted a law in 2006 to provide a 27% quota to OBCs. The SC in 2008 directed the centre to exclude the creamy layer of OBCs.

Art 16 - Equality of opportunity in public employment
16(1) Equality of opportunity in public employment.
16(2) No discrimination on grounds only of rrsb, caste, descent, residence or any of these.
16(3) Residence is a valid ground of discrimination in certain categories of public employment.
16(4) Reservation in favour of backward classes if not adequately represented
16(4A) Reservation in promotions for SCs and STs which, in the view of the state, are not adequately represented in service
16(4B) Carry forward rule valid for SCs & STs even if it violates the 50% principle.
16(5) A law can say that the office holder of a particular religious or denominational body, or a member of its governing body, should belong to that particular religion
1. It is available only to citizens and not to non-citizens.
2. Reservation u/a 16(3) is only for a temporary period
3. The 50% rule states that reservation for BCs shall not exceed 50% under any circumstances
4. Art 16(4B) was added by the 81st CAA, 2000. It allowed the unfilled seats reserved for SCs and STs to be carried forward to the next year, even if the total reservations exceeded 50%.
Indra Sawhney case (Mandal commission case), 1992
1. Court clarified that 16(4) is an enabling clause & does not confer a FR on a person to demand reservation
2. The court laid down requirements for reservation: social & economic backwardness, adequate representation not given in the view of the state, 50% rule, creamy layer exclusion, and that the efficiency of administration should not be affected
3. Court also held that economic backwardness in upper classes is not a ground for reservation
4. Court held that u/a 16(4) reservation was only allowed at the entry level, thus reservations in promotions are unconstitutional. Parliament responded by enacting the 77th CAA, 1995 which inserted 16(4A). In Nagaraj case (2006) the court upheld the CAA but laid down the requirements of social & economic backwardness, adequate representation not given, and efficiency of administration. The court insisted that the state shall provide quantifiable data to support the 3 requirements.
Horizontal & vertical reservation
1. Vertical reservation is the 50% rule, excluding reservations for women, war widows and the physically challenged
2.
Horizontal reservation is the subdivision of a section of backward classes, provided the classification is reasonable. SC in Muralidhar Rao case held the reservation of 4% of seats (out of 27%) in favour of socially & educationally backward Muslims to be valid, as it was rational and not based on religion.
Analysis of Art 16(4)
1. SC in the past has raised doubts that reservation on the basis of caste might perpetuate the caste system itself. With the ever expanding scheme of reservations we have digressed from our original objective of furthering the equality of opportunity. Why is the creamy layer doctrine only for BCs & not for SCs or STs? Unfortunately neither the SC nor the parliament is interested in discussing that. Do we want equality of opportunity or do we want to get rid of the caste system? If we want both, we had better show that the reservation policy is helping in doing this.
Art 17 Abolition of untouchability
1. It abolishes untouchability & forbids its practice in any form
2. SC has held that the right is available against individuals & it is a constitutional obligation of the state to take necessary measures to protect this right
Art 18 Abolition of titles
18(1) State cannot confer titles on any individual
18(2) No citizen can receive titles from a foreign state. However he can receive awards.
18(3) No foreigner in the service of the state can receive any title w/o the permission of the president
18(4) No citizen or foreigner in the service of the state can receive any present, emolument or office from a foreign state w/o the permission of the president
1. The state can recognise academic or military distinctions. Such distinctions are awards and not titles.
2. Art 18(3) is to ensure loyalty to the state.
3. Art 18 is declaratory in nature, as it neither declares violation of Art 18 to be a punishable offence, nor has the parliament enacted any law for this
Right to freedom (Art 19-22)
Art 19 The six freedoms
1(a) Freedom of speech and expression
1.
According to SC it is a composite right & contains in itself other inferred rights. It confers the right to give one's opinion openly & w/o fear of the state or any individual. A citizen can choose any means of communication to express his opinion. It also confers the right to express the opinion of others, i.e. the freedom of the press. It also confers the right to information, as information is required to correctly express the opinion of others & make informed choices
2. Subject to sovereignty and territorial integrity of India, public order, defamation, contempt of court, morality or decency, security of state, friendly relations with foreign states, incitement to an offence.
1(b) Freedom of assembling peacefully and without arms
1. Subject to sovereignty and territorial integrity of India, public order.
1(c) Freedom of forming associations or unions or cooperative societies
1. Subject to sovereignty and territorial integrity of India, public order and morality
1(d) Freedom to move freely throughout the territory of India
1. Its purpose is to promote national feeling. It guarantees only internal freedom. External freedom is guaranteed u/a 21
2. Subject to public interest and rights of STs.
1(e) Freedom to reside and settle in any part of the country
1. Residing is temporary, whereas settling is permanent. This freedom is intended to remove internal barriers within the country. It is regarded as complementary to the previous right.
2. Subject to public interest and rights of STs.
1(g) Freedom to practice any profession/occupation/trade/business
1. Subject to public interest, public sector monopoly and fulfilment of technical qualifications.
Art 20 Protection in respect of conviction for offences
20(1) No ex-post facto laws
1. It prohibits the legislature from enacting retrospective criminal legislation. Punishment given will be according to the law in force at the time of the offence only. However civil legislation can be given retrospective effect.
20(2) No double jeopardy
1.
No person can be prosecuted & punished more than once for the same offence. This applies only in case of judicial decisions. A person, after being convicted, can still be punished for the same offence by a non-judicial body. Also non-judicial bodies can punish more than once for the same offence.
20(3) No self-incriminating evidence
1. Evidence can include medical tests, handwriting, signature etc. But tests such as brain mapping, polygraph and narco-analysis done on the accused & witnesses against their consent are unconstitutional & void because of violation of Art 20(3) and Art 21 (right to privacy).
Limitations
1. Protections are only available in criminal proceedings
2. A formal accusation has to be made before the person can claim such immunity.
Art 21 Protection of life & personal liberty
Nature & character of Art 21
1. It says that the state shall not deprive any individual of his life & personal liberty except according to procedure established by law. It has undergone the greatest changes due to the liberal interpretation provided by SC. It is the right to live life with dignity & includes all such things, like a clean environment, health, privacy etc., required for a dignified human existence.
2. Since the purpose of all other FRs is to extend quality of life, Art 21 implicitly contains all of them & is rightly regarded as fundamental to all FRs. It is the backbone of parts III & IV of the Constitution.
Evolution
1. In AK Gopalan v State of Madras, 1950, SC gave a narrow interpretation to liberty & said that liberty means personal liberty (bodily liberty) & not full liberty. Further it said that Art 21 incorporated the procedure established by law, i.e. the state can deprive life and liberty by means of law.
2. In Maneka Gandhi case, 1978 SC overruled its earlier decision & said that liberty can't be further qualified. Therefore there is no difference b/w personal liberty & liberty.
Further, Art 21 incorporates PNJs, thus there is no difference b/w procedure established by law & due process of law.
Procedure established by law
1. This doctrine originated in the English Constitution. It means according to the usage & practice as laid down in statute.
2. When an executive action is challenged, the court will apply 3 tests: (a) Whether there is a law authorising the executive to deprive the individual of life & liberty (b) Whether the law is passed by a competent legislature (c) Whether the legislature followed the procedure while enacting the law
3. Thus the court does not look into the motive behind the law & cannot rule it to be unconstitutional for being oppressive. Thus this doctrine depends on the good sense of the legislature & the strength of public opinion in the country.
Due process of law
1. This doctrine originated in the US Constitution. In case an executive action is challenged, apart from applying the above 3 tests, the court can look into the motive behind the law & declare it unconstitutional for being unfair, unjust or oppressive.
Principles of natural justice
1. They are: (a) No man shall be punished w/o being heard (b) No man shall be judge of his own case (c) An authority shall act bona fide, i.e. in good faith.
2. They are not mentioned in any official document, but are born out of the human ability to think & reason. They are universal principles which seek to restrict arbitrary decision making, & to humanise & rationalise it.
3. According to SC these are inherent principles of the Constitution.
Euthanasia - Aruna Shanbaug case, 2011
1. SC in 2011 recognised for the 1st time the right to die with dignity. It said that if a person is brain dead or in a permanent vegetative state, and the doctors have lost hope of reviving him even with the most advanced medical equipment, life support systems can be withdrawn after an order from the HC, which will be given after bona fide & informed consent is given by the relatives.
2. However the court still held active euthanasia to be illegal, in which lethal drugs are injected to take the life of the patient.
It recommended that the parliament enact a law regarding euthanasia
3. SC held sec 309 of IPC (which criminalises attempt to suicide) to be an anachronistic law & recommended the legislature to delete it. A person attempting suicide is in need of help, not punishment.
Santhara
1. It is a religious practice among Jains. It is a spiritual decision to abandon the body when the person feels that life has served its purpose.
2. It has often been argued that the practice is nothing but an exercise in committing suicide. However its supporters argue that unlike suicide, which is a decision taken in haste & emotion, it is done with the full knowledge of the person. Further it can be supported by Art 25 & 26 as well as u/a 29. So far, no law has declared it unconstitutional.
Art 21A Right to education
1. The state shall provide free & compulsory education to all children in the age group of 6-14 yrs in a manner prescribed by law.
Evolution
1. Originally the Constitution provided for free & compulsory education for all children b/w 0-14 yrs u/a 45
2. SC in Unni Krishnan case, 1993 ruled that the right to primary education is a FR u/a 21
3. The 86th CAA, 2002 added Art 21A & 51A(k) to the Constitution. The latter provided the 11th FD, for all parents to extend opportunities for education to their children aged b/w 6-14 yrs.
4. The Right of Children to Free & Compulsory Education Act, 2009 was enacted to implement Art 21A
RTE act, 2009 - provisions & problems
a) Provisions
1. 8 yrs of elementary education to be given in an appropriate classroom in the neighbourhood
2. Admission cannot be denied for want of documents
3. State shall also ensure attendance & completion of 8 years of schooling
4. Private schools shall have 25% children from weaker sections & disadvantaged communities, whose expense will be borne by the state
b) Problems in implementation
1. There is a lack of necessary infrastructure as well as trained teachers
2.
States are against the current resource sharing formula (68:32) & are demanding full commitment from the centre
3. Without providing any economic incentive, it is extremely difficult to sustain the children in school and help them complete 8 years of schooling
4. It is said that the act is all about ensuring the right to enrolment.
Art 22 Protection against arrest & detention
If a person is arrested then he has the following rights:
1. Right to be informed about the grounds of arrest
2. Right to consult & be defended by a legal practitioner
3. Right to be produced before the nearest judicial magistrate within 24 hours, excluding time of journey
4. Right to be released within 24 hours unless the magistrate extends the detention
These rights are not available to enemy aliens & persons under preventive detention.
Preventive detention
1. It means detention w/o trial, merely on the ground of suspicion that, if not prevented, the person will commit a crime that will affect the interest of the country at large. Generally legislation regarding this is in the form of sunset legislation
(a) Safeguards
1. A person can be detained under preventive detention only for up to 3 months. After that his case has to be placed before an advisory board, which shall decide if the detention is justified.
2. The person has to be informed of the grounds of his detention as soon as may be, by the detaining authority, except when it is against public interest.
3. The person must have the earliest opportunity to make his case against detention
(b) Position of preventive detention in the Constitution
1. Parliament has exclusive authority to make a law of preventive detention related to defence, foreign affairs & security of India. All other matters (like public order) are in the concurrent list.
2. However no other democratic country has made preventive detention an integral part of its constitution as has been done by India
Right against exploitation (Art 23 & 24)
Art 23 Ban on human trafficking, begar, and similar forms of forced labour
1.
Violation of the Article is a punishable offence & parliament has enacted laws for this.
2. It is available to both citizens & non-citizens
3. However the state can compel both citizens and non-citizens (conscription only in case of citizens) to provide a service in public interest. While doing so the state shall not discriminate on the basis of race, religion, class or caste.
Art 24 Prohibition of employment of children < 14 yrs in hazardous employment
1. The Child Labour (Prohibition & Regulation) Act, 1986 declared 14 industries to be hazardous, like mining, chemicals, slate, firecrackers, matchsticks. It also regulates the employment of children in non-hazardous industries.
2. In 2009 the state banned employment of children < 14 yrs in some of the unorganised sectors like restaurants, hotels, household industry etc.
3. In 2012 the govt proposed an amendment to the 1986 act under which employment of children < 14 yrs would be prohibited in all industries & of those b/w 14-18 yrs in hazardous industries. RTE act, 2009 is also expected to help solve the problem of child labour.
Right to freedom of religion (Art 25-28)
Art 25 - Freedom of conscience and freedom to profess, practice and propagate a religion
Freedom of conscience
1. It is the absolute inner freedom to mould one's religious beliefs w/o any external intervention.
2. It also includes freedom from religion, i.e. the right to be an agnostic or an atheist
Freedom to profess
1. Declaration of one's religious beliefs openly & freely
Freedom to practice
1. Performance of religious worship, rituals, ceremonies & exhibition of beliefs & ideas in the form of symbols, colours etc.
Freedom to propagate
1. It includes spreading of religious beliefs & explaining the basic tenets of one's religion
2. However it does not include the right to forcibly convert another person to one's religion, as this impinges on the freedom of conscience
Limitations
1.
The state cannot interfere with religious belief or faith, but can impose restrictions on religious conduct & practice on grounds of public order, morality, health & other provisions of part III
2. Regulations can be made by the state on any secular activity which may be associated with a religious practice
3. It is available to both citizens & non-citizens
Art 26 Freedom to manage religious affairs
It protects the collective freedom of religion. Subject to public order, morality and health, religious denominations have the:
1. Right to establish & maintain institutions for religious & charitable purposes
2. Right to manage their own affairs in matters of religion
3. Right to own & acquire movable or immovable property & administer it according to law
A religious denomination, according to SC, is
1. a collection of individuals having a system of beliefs (doctrine), a common organisation & a distinctive name
Art 27 - State shall not use public funds for the promotion and maintenance of a particular religion
1. The state is prohibited from patronising any one religion. But it can patronise all religions without any discrimination
2. It is said that Art 27 truly reflects the secular character of the Indian state
Art 28 Religious education in schools
1. No religious education can be given in schools owned & maintained wholly by the state
2. Religious education can be imparted on a voluntary basis in schools that either receive aid or are recognised by the state
3. Compulsory religious education can be imparted in schools established by religious endowments or charitable trusts, but administered by the state.
Cultural & educational rights (Art 29 & 30)
Art 29 - Protection of interests of minorities
1. Any section of citizens inside India having a distinct language, script or culture of its own shall have the right to conserve the same. This provision underlines the importance of unity in diversity
2.
No citizen shall be denied admission into any educational institution maintained by the state or receiving aid out of state funds on grounds only of religion, race, caste, language or any of them
Art 30
1. All minorities (religious or linguistic) have the right to establish & administer educational institutions of their own choice. Thus by giving this right the Constitution recognises that establishing educational institutions is necessary to safeguard the minority character of a community.
2. Art 30(1A) - The compensation given by the state while acquiring their property should be such that it does not restrict or abrogate the above right
3. While granting aid the state shall not discriminate against any educational institution on the ground that it is managed by a minority community
A minority community is recognised by the state if:
1. It is a non-dominant community in the general population
2. It has a distinct, well established identity of its own that is clearly recognised by society
3. It has the willingness to maintain a separate identity
Rights enjoyed exclusively by minority educational institutions
1. Art 30(1A)
2. Reservations made u/a 15(5) in favour of BCs in educational institutions do not apply to unaided MEIs.
3. They have the right to reserve 50% of seats for students coming from their own community.
4. MEIs which neither seek recognition nor aid from the state are free to follow their own admission procedure, provided it is merit based & transparent
5. However if an institution ignores the interests of its community it can be stripped of the MEI status
Saving of certain laws (Art 31A, 31B, 31C)
Art 31: Right to property
Evolution
1. Initially the right to property was a FR, subject only to (a) reasonable restrictions to serve emergencies of public welfare (b) reasonable restrictions to serve welfare of STs. No person could be deprived of his property except according to law, and such an acquisition could be made only (a) for a public purpose and (b) after paying compensation.
2.
The court held the word 'compensation' to mean full compensation (market value), which necessitated the 4th CAA, 1955 which clearly specified that the adequacy of such compensation shall not be challengeable before a court. The govt thought it would be unviable to undertake development if every piece of land nationalised had to be paid for at market value. But the court continued to maintain an adverse position.
3. By the 25th CAA, 1971 the word 'compensation' was replaced by the word 'amount'. But in Keshavananda Bharati case the court again held that such an amount can't be illusory and must be determined by a principle which is relevant to the acquisition.
4. By successive amendments, changes were introduced in Art 31A-D by the government to exclude the obligation of paying compensation & push its agrarian reform agenda.
a. Art 31A was amended to state that a law made for land acquisition or temporary takeover shall be valid even if it abridges Art 14 and 19.
b. Art 31B provides blanket immunity to enactments placed in Schedule 9. (In 2007 SC in IR Coelho v State of TN held that laws under schedule 9 are open to JR if they violate Art 14, 15, 19, 21 or the basic structure. Originally schedule 9 contained 13 acts, while in 2013 it had 282 acts.)
c. Art 31C provides immunity for any laws made to implement the DPSPs in Art 39(b) & 39(c) even if they contravene Art 14 and 19. SC crippled it, saying that it took away powers of JR in cases when legislation was in contravention of Art 14 & 19. In response to this the govt enacted the 42nd CAA, 1976 which sought to give blanket immunity to all laws passed to implement any DPSP. But in Minerva Mills case, SC struck it down.
d. Art 31D was repealed by the 43rd CAA, 1977.
5. The 44th CAA, 1978 took the right to property out of Part III altogether and placed it under Art 300A.
Exceptions
1. If the property belongs to a minority educational institution.
2. If the property is personally cultivated and doesn't exceed the statutory ceiling.
(Art 31A) In both cases above, full compensation shall be paid.
Right to constitutional remedies (Art 32-35)
Art 32 Remedies for enforcement of FRs
1. A person can go directly to SC for the enforcement of FRs
2. Further, SC has the power to issue writs to enforce FRs
3. Parliament can give the above powers to any other court w/o prejudicing the powers of SC
4. The right to move the SC shall not be suspended except as otherwise provided by the Constitution, which has provided for suspension of FRs during national emergency
Ambedkar on Art 32
1. He referred to this Article as fundamental to all FRs, as without it the other FRs would be without a remedy & would be a nullity
Writ jurisdiction of SC v HC
1. SC has the duty to enforce FRs u/a 32, since the remedy itself is a FR, while HC has no such duty u/a 226
2. SC can issue writs to enforce only FRs, while HC can do this for other legal rights also
3. Thus the writ jurisdiction of HC is wider in this sense, but narrower in territorial extent
Different types of writs
1. Habeas corpus - It literally means 'to have the body of' & is issued to protect the liberty of a person who is wrongfully (by a private org) or illegally (by a public org) detained. However, physical presence of the detained is not needed if all material facts relating to the case are made available to the court. Locus standi is not applicable, i.e. it can be filed by any individual/org & not necessarily by the aggrieved party
2. Mandamus - It is issued when a public official or public authority has failed to discharge their duties, resulting in the violation of legal rights of the petitioner. It literally means 'we command' & orders the official or authority to do or not to do something. It cannot be issued against the president or a governor. Locus standi is applicable
3. Prohibition - It literally means 'to forbid' & is issued when a judicial or quasi-judicial body has taken up a case in excess or absence of jurisdiction; it prohibits them from proceeding further. Locus standi is applicable.
4.
Certiorari - Similar to prohibition except that it is issued after the judicial/quasi-judicial body has given its judgement. The purpose is to quash the judgement if given in excess or absence of jurisdiction
5. Quo warranto - It is issued to ensure that a person holding a public office is qualified to hold it. It has the immediate effect of removing the person. Locus standi is not applicable & thus it can be filed by any individual
Art 33 Armed forces & FRs
1. It empowers the parliament to restrict or abrogate the FRs of members of armed forces, para-military forces, police forces & persons employed in intelligence agencies & in communication systems for these forces, to ensure proper discharge of their duties & maintenance of discipline among them
2. Also it empowers the parliament to exclude court martials from the writ jurisdiction of SC & HC in relation to FRs
3. No law under this Article can be questioned in any court on grounds of contravention of FRs
Art 34 Restriction of FRs while martial law is in force
1. Parliament can indemnify any govt servant or any other person for any act done by him in relation to maintenance or restoration of order in any area where martial law was in force. An act of indemnity cannot be questioned on grounds of contravention of FRs
2. Also the parliament can validate any sentence passed, punishment inflicted, forfeiture ordered or other act done under martial law in such an area
Martial law
1. It means military rule, i.e. a situation when the civil administration is run by military authorities
2. It is implicitly contained in this Article & has no specific provision in the Constitution
3. It suspends the FRs, govt and ordinary law courts. However SC has ruled that it does not ipso facto lead to suspension of habeas corpus
4. It can be applied to a specific area & not the whole country
Art 35 Effecting certain FRs
1. It gives the power to give effect to certain FRs to the parliament. State legislatures are not given this power, to maintain uniformity w.r.t. those FRs
2.
Parliament shall have the power to make laws regarding (a) Prescribing residence as a condition u/a 16 (b) Empowering other courts to issue writs u/a 32 (c) Art 33 & 34
3. Parliament shall have the power to, & shall, make laws prescribing punishment for those acts declared to be offences under FRs. These are (a) Art 17 (b) Art 23
4. It should be noted that this Article gives the parliament power over certain items in the state list
PART IV DPSPs (Art 36-51)
Features
1. These have been borrowed from the Irish Constitution, which in turn borrowed them from the Spanish Constitution. Along with the FRs, they contain the philosophy of the Constitution
2. They seek to establish a welfare state rather than a regulatory state & are in the form of general directions or instructions given to the state
3. Art 37 - DPSPs are non-justiciable, but they are nevertheless fundamental in the governance of the country and it shall be the duty of the state to apply these principles in making laws
4. They constitute a very comprehensive social, economic & administrative programme for a modern democratic state
Socialistic principles
Art 38 To promote welfare by providing social, economic and political justice & minimising inequalities
Art 39 State shall direct its policy towards securing (a) adequate means of livelihood for all (b) equitable distribution of material resources of the community for the common good (c) prevention of concentration of wealth & means of production (d) equal pay for equal work (e) preservation of the health & strength of workers & children against abuse (f) opportunities for healthy development of children
1. 39(b) & (c) have mainly been implemented through land reforms
2. 39(d) has been implemented by the Equal Remuneration Act, 1976
3.
39(e) & (f) (a) National Commission for Protection of Child Rights Act (b) Measures taken u/a 23 & 24 (c) Guidelines in MC Mehta v State of TN, 1997 case: setting up of a child labour rehabilitation fund, a fine of Rs 20,000, and the govt must find a suitable job for 1 adult member of the child's family, failing which the state should deposit an additional Rs 50,000 per child
Art 39A To provide equal justice & free legal aid to the poor
1. Legal Services Authorities Act, 1987 to provide free legal aid to the poor
2. National legal literacy mission, 2005
Art 41 To secure the right to work, to education and to public assistance in cases of unemployment, old age, sickness and disablement
1. The right to work has been secured by programmes like NREGA and a number of rozgar & employment programmes like NREP, IRDP etc.
2. Currently there are 141 centrally sponsored schemes, out of which 8 are flagship schemes. They are on state subjects, but partially or fully funded by the centre
Art 42 To make provisions for just & humane conditions of work & maternity relief
Art 43 To secure a living wage, a decent standard of life & social & cultural opportunities for all workers
Art 43A To take steps to ensure participation of workers in the management of industries
Art 47 To raise the level of nutrition & standard of living & to improve public health
Gandhian principles
Art 40 - Organisation of village panchayats (73rd CAA, 1992)
Art 43 To promote cottage industries on an individual or cooperative basis in rural areas
1. A number of boards have been set up, e.g. All India Handloom Board, All India Handicrafts Board, Silk Board etc.
Art 46 To promote the educational & economic interests of SCs, STs & other weaker sections & protect them from social injustice & exploitation
Art 47 To prohibit the consumption of intoxicating drinks & drugs injurious to health
Art 48 To prohibit slaughter of cows, calves & other milch & draught cattle & to improve their breeds
Liberal intellectual principles
Art 44 Uniform civil code
Art 45 - Provision of early childhood care & education of children < 6 yrs
1. Integrated Child Development Scheme (ICDS)
Art 48 To organise agriculture & animal husbandry on modern scientific lines
Art 48A To protect & improve the environment & safeguard forests & wildlife
1. Has been implemented via various acts related to wildlife protection & the environment
Art 49 To protect monuments, places & objects of artistic & historic interest
1. Ancient and Historical Monuments and Archaeological Sites and Remains (Declaration of National Importance) Act, 1951
2. Setting up of ASI
Art 50 To separate the judiciary from the executive
1. In 1973 CrPC was amended to take criminal cases out of the district magistrate's jurisdiction. However he can still adjudicate civil cases
Art 51 To promote international peace & security & maintain just & honourable relations b/w nations; to foster respect for international law & treaties; and to encourage settlement of international disputes by arbitration
DPSP vs FR
1. The current position is that DPSPs and FRs have to be read harmoniously by the court, instead of giving any general preference to DPSPs.
2. Until the In re Kerala Education Bill, 1958 case, SC held that DPSPs are inferior to FRs and no law implementing a DPSP but violating a FR shall be constitutional. But in this case it gave the doctrine of harmonious reading, i.e. FRs and DPSPs complement each other and there is no inherent conflict between them. So, as far as possible, a harmonious reading of both should be carried out, and if 2 interpretations of a law are possible where one is harmonious and the other conflicting, the harmonious one should be given effect.
But if eventually there is a conflict, FRs shall prevail.
3. Then came the 25th CAA, 1971 which introduced Art 31C, saying (a) if there is any law enacted to give effect to the DPSPs in Art 39(b) and (c), and in the process it violates Art 14, 19 or 31, then it shall be valid (b) any law declaring that it is to give effect to Art 39(b) & (c) can't be questioned in a court.
4. SC in the Keshavananda case upheld the first part but struck down the second.
5. Then came the 42nd CAA, 1976 which amended Art 31C to say that if the state gives effect to any DPSP by enacting a law and such a law violates Art 14, 19 or 31, then it shall be valid. In Minerva Mills case, this was struck down as it disturbed the balance between Parts III and IV of the Constitution, which is a basic feature.
Directives outside part IV
1. Claims of SCs and STs to services (Art 335, Part XVI) - The claims of members of SCs and STs shall be taken into consideration, consistent with the maintenance of efficiency of administration, in the making of appointments to services and posts in connection with the affairs of the Union or a state
2. Instruction in mother tongue (Art 350A, Part XVII) - It shall be the endeavour of every state and authority within it to provide adequate facilities for instruction in the mother tongue at the primary stage of education to children belonging to linguistic minority groups
3. Development of the Hindi language (Art 351, Part XVII) - It shall be the duty of the union to promote the spread of the Hindi language and to develop it so that it may serve as a medium of expression for all the elements of the composite culture of India.
PART IVA FDs
Duties in the Constitution
It shall be the duty of every citizen of India
1. to abide by the Constitution & respect its ideals & institutions, the national flag & the national anthem
2. to cherish & follow the noble ideals which inspired our national struggle for freedom
3. to uphold & protect the sovereignty, unity & integrity of India
4. to defend the country & render national service when called upon
5. to promote harmony & the spirit of common brotherhood amongst all the people of India & to renounce practices derogatory to the dignity of women
6. to value & preserve the rich heritage of our composite culture
7. to protect & improve the natural environment & to have compassion for living creatures
8. to develop the scientific temper, humanism & the spirit of inquiry & reform
9. to safeguard public property & to abjure violence
10. to strive towards excellence in all spheres of individual & collective activity
11. who is a parent or guardian, to provide opportunities for education to his child between the age of 6-14 years
Evolution
1. They are inspired by the Constitution of the erstwhile USSR. Currently no major democratic constitution, except Japan's, specifically contains a list of duties of citizens
2.
On the recommendations of the Swaran Singh Committee, the 42nd CAA incorporated Part IVA & Art 51A in the Constitution
Verma committee on FDs (1999) observations
It identified the existence of legal provisions for the implementation of some of the FDs
1. Prevention of Insults to National Honour Act (1971) prevents disrespect to the Constitution, the National Flag and the National Anthem
2. Criminal laws punish encouraging enmity between different sections on grounds of religion, race, caste, place of birth etc.
3. IPC protects national integration
4. Protection of Civil Rights Act (1955) provides punishment for offences related to caste and religion
5. UAPA, 1967 declared communal organisations as unlawful
6. RPA, 51 disqualifies MPs/MLAs if they get votes on the ground of religion or promote enmity
7. Wildlife Protection Act (1972) and Forest Conservation Act (1980)
PART V THE UNION
The Executive
Chapter I President
Elections
1. Elected indirectly by elected members of parliament, state legislative assemblies & the assemblies of the UTs of Delhi & Puducherry
2. To provide uniformity in the scale of representation of different states & parity b/w the states as a whole & the union, the Constitution fixes vote values.
a. Vote value of an MLA = (total population of the state) / (total no of elected MLAs x 1000)
b. Vote value of an MP = (total value of votes of all MLAs) / (total no of elected MPs)
c. To win, the candidate needs = (total no of valid votes polled) / (1+1) + 1, i.e. more than half the valid votes
3. The election is held by a system of proportional representation by means of the single transferable vote. PR is said to be a misnomer here as only 1 person has to be elected.
4. All disputes regarding the election of P & VP are inquired into & decided by the SC whose decision is final
5. Takes oath in the presence of the CJI
Qualifications
1. Citizen + 35 yrs + eligible to be a member of LS + no office of profit
2. A sitting P/VP/Governor/Minister is not deemed to hold any office of profit & hence is eligible
Conditions of president's office
1. When elected, he is deemed to have vacated his seat (if any) in parliament or the state legislature
2. No office of profit
3.
Entitled to use official residence w/o rent
4. Emoluments & allowances decided by parliament & cannot be diminished during the term
Immunity
1. He is not legally liable for his official acts
2. Criminal proceedings (even for personal acts) cannot be initiated during his term. He can neither be arrested nor imprisoned.
3. Civil proceedings (for personal acts) can be initiated after giving a 2 months notice
Vacancy
1. Election shall be held within 6 months after the vacancy arises
2. Order of succession: P → VP → CJI → seniormost judge of SC
3. He submits his resignation to the VP
Why is there a vacancy? / Who acts as president?
1. Expiry of term - either the newly elected president, or the incumbent continues
2. Death, resignation, removal - VP acts as president
3. Absence, illness - VP discharges the functions
4. Election declared void - VP
Impeachment
1. The president can only be impeached for 'violation of the Constitution', a term which the Constitution does not define
2. Impeachment is a quasi-judicial process. Either house may initiate the charge (supported by 25% of members + a 14 day notice). If passed by 2/3rd of the total strength (the toughest majority), it goes to the 2nd house which investigates the charges. The president has the right to defend himself
3. If the 2nd house also passes the resolution, the president stands impeached
Powers
Executive powers
1. U/a 53 the executive power of the union is vested in the president
2. Makes rules for more convenient transaction of govt business
3. Appoints PM, CoM, Attorney General who hold office during his pleasure
4. Appoints CAG, UPSC & Finance Commission members & chairman, CEC, governors etc.
5. Appoints commissions on SCs, STs, BCs & the inter-state council
6. Administers UTs
7. Can declare any area as a scheduled area
8. U/a 78 he has a personal right to be informed about the affairs of the union
Legislative powers
1. Nominates members to LS (2) & RS (12)
2. Ordinance making power u/a 123
3. Summons, prorogues, and dissolves the parliament. Can summon a joint sitting also
4. Address to the parliament, messages to the houses
5. Appoints presiding officers in certain cases
6.
Prior recommendation & assent to bills of parliament & in some cases of the state legislature
7. Power to issue regulations in UTs
Veto powers
When a bill is presented to the president for assent, he has 3 alternatives u/a 111
1. Give assent
2. Absolute veto - Reject the bill. Can be applied to ordinary/money/financial bills. Usually it is exercised in the case of a private member bill or when the cabinet resigns after the passage of the bill & the new cabinet advises withholding assent.
3. Suspensive veto - Return the bill for reconsideration; he shall give assent if it is passed again by the parliament. Can be applied to ordinary bills or financial bills u/a 117. Not to money bills
4. Pocket veto - The Constitution prescribes no time limit for the president to give assent. So he may keep the bill pending for an indefinite period
Presidential veto over state legislation
When the governor reserves a bill for the consideration of the president (Art 200), the president has the following options u/a 201
1. Give or withhold assent
2. Use suspensive veto. But he is not bound to assent even if the legislature passes the bill again
3. Use pocket veto
4. In the case of a money bill the president can only give or withhold his assent. Nothing else.
Ordinance making powers
1. The provision of ordinances was borrowed from the GoI Act, 1935 to deal with emergent situations
2. U/a 123 the president can promulgate an ordinance, when 1 or both the houses are not in session, if he is satisfied about its necessity due to the circumstances
3. An ordinance has the same effect as an act of parliament, except that it is temporary. It has to be approved within 6 weeks after both houses reassemble
4. According to the rules of LS, a bill to replace an ordinance should be accompanied by a statement explaining the circumstances
Cooper vs UoI, 1970
1. Satisfaction of the president is subject to limited JR & can be challenged on the ground of malafide.
2.
If the petitioner can prima facie show the non-existence of circumstances, the court will shift the burden of proof onto the president.
3. Further, decisions taken under the ordinance will remain valid in any case.
38th CAA, 1975
1. Art 123 was amended to place the ordinance making power beyond the scope of JR
2. However the 44th CAA, 1978 restored the status
DC Wadhwa vs State of Bihar, 1987
1. B/w 1967-81 the governor of Bihar promulgated 256 ordinances & many of them were kept alive for 1-14 yrs by re-promulgation
2. SC said that re-promulgation w/o referring to the assembly is a fraud on the Constitution
Why undemocratic?
1. Most modern democratic constitutions do not mention such a power. India is perhaps the only one.
2. Also it still has scope to be misused. The food security ordinance was promulgated just before a parliament session
Governor's ordinance making power
He has to take instructions from the president only in the following 3 cases
1. If the bill would have required the previous sanction of the president for its introduction in the SL
2. If the governor would have deemed it necessary to reserve the bill for the consideration of the president
3. If an act of the SL containing the same provisions would have been invalid without the president's assent
Pardoning power
1. Art 72 empowers the president to grant pardons to persons who have been tried and convicted in cases where - (a) the punishment or sentence is for an offence against a union law or, (b) it is by a court martial (military court) or, (c) it is a death sentence
2. The objective of granting this power is (a) To keep the door open for correcting any judicial errors (b) To afford relief from a sentence which the president considers unduly harsh
3.
It includes (a) Pardon - completely absolves the convict of all sentences, punishments and disqualifications (b) Commutation - substitutes the punishment with a lighter form (c) Remission - reduces the period of sentence without changing its character (d) Respite - awarding a lesser punishment in place of the one originally awarded due to disability, pregnancy etc. (e) Reprieve - implies a stay of the execution of a sentence (especially that of death) to enable the convict to seek pardon or commutation
4. U/a 161 the pardoning power of the governor differs from the president's in the sense that (a) He cannot pardon court martials (b) He cannot pardon death sentences
5. The power is not subject to JR except when the decision of the president/governor is arbitrary, irrational, mala fide, or discriminatory.
Financial powers
1. Prior sanction to money bills
2. Contingency fund
3. Sets up the Finance Commission every 5 years & can do so even before 5 yrs
4. Presents the annual financial statement before the parliament
5. No demand for grant can be made w/o his recommendation
Judicial powers
1. Appointment & transfer of judges
2. Advisory reference u/a 143
3. Pardoning power u/a 72
Diplomatic powers
1. All international affairs & treaties are concluded on his behalf
2. He sends & receives diplomats
Military powers
1. He is the supreme commander of the defence forces
2. Appoints the chiefs of the 3 forces
3. Can declare war, conclude peace
Emergency powers
Discretionary powers
1. Art 74 - President can refer the advice of the CoM for reconsideration
2. Art 78 - The PM has the constitutional duty to inform the president about the affairs of the country. Correspondingly the president has the right to be informed
3. Veto - Suspensive and pocket
4. Sends messages to the houses
5. Art 85 - Summons the houses if not advised to do so by the CoM within a 6 month gap
6. When no party or coalition enjoys a majority in LS, the president can appoint the leader of that party who in his opinion will be able to provide a stable govt
7.
The president is bound to act on the advice of the CoM only when they enjoy the majority of LS. Thus he can keep a check on the decisions of a caretaker govt
Vice-president
Election
1. Indirectly elected by all members of parliament
2. PR by STV
Qualification
1. Citizen + 35 yrs + eligible to be a member of RS + no office of profit
Conditions of office
1. When elected, he is deemed to have vacated his seat (if any) in parliament or the state legislature
2. No office of profit
Removal
1. No grounds are mentioned
2. A resolution passed by RS by effective majority & agreed to by LS. Requires a 14 day notice.
Powers
1. Ex-officio chairman of RS
2. Acting as or discharging the duties of president
Position
1. Modelled on the lines of the American VP, but doesn't succeed the president for the unexpired term in any case, as the American VP does
2. Thus the office was created to maintain political continuity
Prime Minister and CoM
1. Art 74
2. Art 75 - collective responsibility + individual responsibility + 91st CAA
3. A minister who is a member of one house has the right to take part in the proceedings of the other house
4. No legal responsibility
Art 75
1. 75(1) The Prime Minister shall be appointed by the President and the other Ministers shall be appointed by the President on the advice of the Prime Minister.
2. [(1A) The total number of Ministers, including the Prime Minister, in the Council of Ministers shall not exceed fifteen per cent. of the total number of members of the House of the People.
3. (1B) A MP belonging to any PP.]
4. (2) The Ministers shall hold office during the pleasure of the President.
5. (3) The Council of Ministers shall be collectively responsible to the House of the People.
6. (4) Before a Minister enters upon his office, the President shall administer to him the oaths of office and of secrecy according to the forms set out for the purpose in the Third Schedule.
7.
(5) A Minister who for any period of six consecutive months is not a member of either House of Parliament shall at the expiration of that period cease to be a Minister.
8. (6) The salaries and allowances of Ministers shall be such as Parliament may from time to time by law determine and, until Parliament so determines, shall be as specified in the Second Schedule.
Art 77
1. All executive action of the GoI shall be expressed to be taken in the name of the President.
Art 78
It shall be the duty of the Prime Minister
1. to communicate to the President all decisions of the CoM relating to the administration of the affairs of the Union & proposals for legislation
2. to furnish such information relating to the administration of the affairs of the Union & proposals for legislation as the President may call for
3. if the President so requires, to submit for the consideration of the CoM any matter on which a decision has been taken by a Minister but which has not been considered by the Council
Attorney general - Refer notes on constitutional bodies
Parliament Chapter II
Types of majority
1. Simple majority - More than 50% of the members present & voting. Generally used in the case of bills & motions
2. Absolute majority - More than 50% of the total strength of the house. Legally not important, but politically important.
3. Effective majority - More than 50% of the effective strength (total strength excluding vacancies) of the house. Used in the removal of VP, deputy chairman of RS, speaker, deputy speaker
4. Special majority - It is of 3 types (a) As u/a 249 & Art 312 (b) As u/a 368 (c) As u/a 61
Composition
1. Parliament consists of the president, the council of states & the house of the people
Composition of CoS
1. States' reps are elected by state MLAs. Representation varies with population.
2. Among UTs only Delhi & Puducherry have representation. Others are too small. As prescribed by parliament, they have the same procedure of election
3. Strength = 238 elected + 12 nominated (by the president from the fields of science, art, literature & social service; for state legislative councils the governor has 1 extra option of the cooperative movement also)
4. Elections are based on PR by means of STV
Relevance of CoS
1. It gives representation to the states
2. It is not inferior to HoP & acts as a revisory chamber, which can give a fair & balanced opinion
3.
It has certain non-federal characters: UTs are represented, nominations are allowed, and states are unequally represented
Composition of HoP
1. 530 members from states + 20 members from UTs + 2 Anglo-Indian members.
2. There shall be no reservation for minority communities.
3. Members from UTs may be chosen as prescribed by parliament, which has prescribed direct election
4. The voting age was reduced from 21 to 18 yrs by the 61st CAA, 1988
Comparison b/w HoP & CoS
HoP > CoS
1. Money bills, no confidence motion & motion to discontinue a national emergency can be introduced only in LS
HoP = CoS
1. Constitutional amendment bill, election & impeachment of the president, approval of national & state emergency
HoP < CoS
1. For parliament to legislate on a state subject, or for the creation of an all India service, CoS has to pass a resolution
2. Impeachment of VP can be initiated only in CoS
System of election to LS
Territorial constituencies
1. Each state is allotted seats such that (population of the state) / (seats allotted to the state) is the same for all the states. This provision applies only to states with a population of at least 60 lakh
2. Each state is divided into constituencies such that (population of a constituency) / (seats allotted to the constituency) is the same throughout the state
Delimitation
1. Art 82 allows the parliament by law to provide for delimitation acts. Accordingly parliament has enacted acts in 1952, 62, 72, 2002
2. Delimitation is an exercise to redraw the boundaries of LS & SLA constituencies to maintain an equitable distribution of seats across constituencies. It is a very sensitive issue as boundaries can be drawn keeping support bases in mind & thus it is generally handled by a SC judge.
3. During the 1970s the southern states had largely implemented family planning measures & were having good HDI indicators. But the northern states failed in these aspects & their population was increasing.
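The two seat-allotment ratios described under "Territorial constituencies" can be sketched numerically. This is an illustrative sketch only: the function names and all figures are hypothetical, not census data or the statutory rounding rules.

```python
# Illustrative sketch of the Art 81 seat-allotment ratios (hypothetical numbers).

def seats_for_state(state_population, national_ratio):
    """Allot seats so that (population of state) / (seats allotted) is roughly
    the same ratio for every state; rounded to the nearest whole seat."""
    return round(state_population / national_ratio)

def constituency_population(state_population, seats):
    """Within a state, each constituency should hold roughly
    (population of state) / (seats allotted) people."""
    return state_population / seats

# e.g. a state of 8 crore people at a national ratio of 10 lakh people per seat
seats = seats_for_state(80_000_000, 1_000_000)   # 80 seats
per_constituency = constituency_population(80_000_000, seats)
```

The same population/seats logic is why the delimitation freeze mattered: states whose population grew faster would otherwise have gained seats at the expense of the others.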
Therefore to prevent the marginalisation of the southern states, the 42nd CAA, 1976 froze delimitation, the strength of LS & SLAs & the representation of states in LS (as per the 1971 census), and the 84th CAA, 2001 extended this freeze till 2026. It was expected that the population would stabilize by 2026. Further the 87th CAA, 2003 provided for delimitation based on 2001 census figures. However this is to be done without altering the seats allotted to each state in LS & SLAs.
Reservation for SCs & STs
1. Seats are reserved & they are elected by all voters in a constituency
Territorial representation
1. It means that every MP represents a constituency
2. We did not adopt PR because it is difficult for voters to understand & leads to instability in govt
Duration
1. CoS is a continuous house which never dissolves. 1/3rd of members retire every 2 yrs.
2. HoP has a tenure of 5 yrs. It can be prematurely dissolved. It can be extended during a national emergency (1 yr at a time for any no of yrs)
3. A state legislature can be extended for 1 yr at a time for a maximum of 3 yrs. The J&K assembly has a tenure of 6 yrs.
Membership of parliament
Qualifications
1. Citizen + subscribe to oath + 30 yrs (CoS)/25 yrs (HoP)
2. Additionally parliament requires the person to be a registered elector. Till 2003, domicile of the state from where the RS elections were contested was required. This was amended in 2003, allowing the candidate to be in the electoral rolls of any constituency. The Punchhi commission recommended restoring the previous position as after all the RS member represents his state only.
Disqualifications
1. Art 102(1) For being chosen as or for being a MP
a. Office of profit. But parliament by law may specify which offices of profit are exempted.
b. Unsound mind, undischarged insolvent, voluntarily acquired citizenship of another country.
2. Art 102(2) Anti defection law under schedule 10
3. Also for corrupt practices mentioned in RPA, 51
4. On disputes over disqualifications, the president's decision in concurrence with the EC will be final.
(Art 103)
Vacating of seats
1. Double membership under RPA, 51
(a) A person elected to both RS & LS has to intimate his choice within 10 days, otherwise his seat in RS becomes vacant
(b) If a sitting member of 1 house gets elected to the other house, his seat in the 1st house becomes vacant
(c) If a person elected to 2 seats in a house does not choose 1, both seats become vacant
(d) If a person elected to both parliament & SL does not resign from SL within 14 days, his seat in parliament becomes vacant.
2. Disqualification, resignation, absence for 60 days w/o permission
3. Election declared void, or expelled by the house, or elected as P, VP, governor
Oath
1. Takes oath before the president or a person appointed by him
Presiding officers
1. Speaker/deputy speaker/deputy chairman are elected from among their respective houses. They vacate the office if they
(a) cease to be a member of the house
(b) resign (the speaker writes to the deputy & vice versa; the deputy chairman writes to the chairman)
(c) are removed by a resolution passed by an effective majority of the respective house, with a 14 day notice before moving the resolution
2. They all cannot vote in the 1st instance while presiding, but exercise a casting vote.
3. Deputies are not subordinate to the speaker or chairman, but are directly responsible to the house. They are like any other member of the house when the speaker/chairman presides.
4. When a resolution for the removal of a presiding officer is under consideration, he cannot preside over the house, but can be a part of the proceedings. Only the speaker can vote in such cases & that too in the 1st instance
5. Their salaries & allowances are fixed by parliament & charged on CFI
6. The deputy speaker acts as speaker when the speaker's office is vacant or he is absent from the sitting of the house. In addition to these cases, the deputy chairman also acts as chairman when the VP acts as or discharges the functions of president.
7.
The deputy speaker has a special privilege of automatically becoming the chairman of any parliamentary committee to which he is appointed
8. Panel of chairpersons of LS - not more than 10 members are nominated by the speaker. One of them can preside if both the speaker & deputy are absent. However if both these offices are vacant, the president decides. Similar is the case with the panel of vice chairpersons of RS that is nominated by the chairman
Speaker
1. Maintains decorum, final interpreter of the rules, adjourns or suspends the house during lack of quorum, casting vote, presides over joint sittings, can allow a secret sitting, certifies money bills, decides under the anti-defection law, ex-officio chairman of the Indian parliamentary group and the conference of presiding officers of legislative bodies in the country, and appoints chairmen of all parliamentary committees of LS. He himself is chairman of the Business Advisory Committee, Rules Committee, and General Purposes Committee.
Leaders in parliament
1. Leader of the House - In LS it is the PM or a minister member nominated by him. In RS it is a minister member nominated by the PM
2. Leader of Opposition - He is the leader of the largest opposition party (with minimum 1/10th seats) in the house
3. Whip - It is based on convention. He ensures attendance and secures votes of party members on a particular issue. Members have to follow his directives.
Sessions of parliament
Summoning
1. Minimum quorum required is 1/10th of the total membership
2. The president has the constitutional duty to summon the houses at a max gap of 6 months
3. Conventionally there are 3 sessions - budget (feb-may), monsoon (july-sep), winter (nov-dec)
Adjournment, prorogation, dissolution
1. Adjournment suspends the work of parliament for a specified time
2. Adjournment sine die suspends the work for an indefinite period
3. Prorogation terminates the whole session. It doesn't affect pending bills (Art 107) and only affects notices, resolutions, motions etc.
4. Dissolution ends the very life of the house. Bills pending in LS, and bills passed by LS but pending in RS, lapse.
Bills pending in RS but not passed by LS, bills pending in a joint sitting notified before dissolution, and bills returned by the president for reconsideration do not lapse.
5. While prorogation and dissolution are done by the president, adjournment is done by the speaker
Rights of ministers & attorney general
1. They have the right to speak & take part in the proceedings of any house, any joint sitting, and any committee of which they are a member. But they are not entitled to vote.
Devices of parliamentary proceedings
Question hour
The 1st hour of every parliamentary sitting is slotted for this. Usually members ask & ministers answer.
1. Starred question (*) requires an oral answer & hence supplementary questions can follow
2. Un-starred question requires a written answer & hence supplementary questions cannot follow
3. Short notice question is asked by giving a notice of less than 10 days. It is answered orally
Zero hour
1. It is an informal device to take up issues without any prior notice
2. It fills the gap b/w question hour & the agenda
Motions
Motions can be used (by any MP) to discuss issues of general public importance, with the prior consent of the presiding officer. These fall into 3 categories
1. Substantive motion - self-contained & independent motion dealing with an important matter
2. Substitute motion - it seeks to replace the substantive motion. If adopted, it supersedes the original motion
3. Subsidiary motion - it is a motion that has meaning only in reference to an original motion or a proceeding of the house. It may seek to modify or substitute a part of the original motion.
Closure motion
1. When adopted, debate is stopped & the matter is put to vote. It can be moved by any member.
2.
4 kinds
(a) Simple closure - when a member moves that the matter, having been sufficiently debated, be put to vote
(b) Closure by compartments - when clauses of a bill or a lengthy resolution are grouped into parts, debated by parts, and each entire part is put to vote
(c) Kangaroo closure - when only important clauses are taken up for debate and voting and the others are skipped and taken as passed
(d) Guillotine closure - when undiscussed clauses of a bill or resolution are put to vote along with the discussed ones because the allotted time is over
Privilege motion
1. It is moved by a member when he feels that a minister has breached the privileges of the house by withholding or giving wrong facts of a case. Its purpose is to censure the minister.
Calling attention motion
1. It can be moved by any member to call the attention of a minister towards a matter of urgent public importance & seek an authoritative statement from him. Like zero hour, it is also an Indian innovation
Adjournment motion and point of order
1. Adjournment motion and point of order are extraordinary devices because they seek to interrupt the normal business of the house.
2. The former carries an element of censure against the CoM regarding a matter of public importance, so it cannot be introduced in the CoS and is introduced only in the LS. Point of order relates to questions regarding the business of the house, usually raised by the opposition. It can be introduced in any house.
Censure motion vs no-confidence motion
1. While a censure motion needs to state the reason for its adoption, in the case of a no-confidence motion there is no need to state the reason.
2. A censure motion can be moved against a minister, a group of ministers or the entire council of ministers, while a no-confidence motion can be moved against the entire council of ministers only.
3. If a censure motion is passed in Lok Sabha, the council of ministers need not resign from office. But if a no-confidence motion is passed, the council of ministers must resign.
Resolutions
1.
Members can move resolutions to draw the attention of the house or the govt towards a matter of general public importance.
2. All resolutions are substantive motions as they are self-contained, but they are not necessarily put to vote, while all motions are.
Legislative procedure
Private member bill
1. It can be introduced by any MP other than a minister, with a month's notice
2. It does not require the previous recommendation of the president
Ordinary Bills
1. Introduction: A bill other than a money bill or financial bill may be introduced in any house. For a private member's bill, the member has to give notice of his intention to introduce the bill and ask for the leave of the house. If the bill has been published in the official gazette before the introduction, no leave of the house is needed. Otherwise it may be published in the gazette after the introduction.
2. After introduction: After the introduction a discussion may take place where only the principles and the general provisions of the bill may be discussed. Amendments and clause by clause discussion don't take place at this stage. Following this the sponsoring member may propose a motion that -
(a) The bill be taken up for consideration.
(b) The bill be referred to a select committee of the house.
(c) The bill be referred to a joint parliamentary committee.
(d) The bill be circulated for eliciting public opinion.
3. Select committee report: The committee considers the provisions of the bill in detail (but not the principles and the general provisions, for they have already been debated in the house). Finally it submits a report to the house.
4. If the motion to take the bill up for consideration is accepted, clause by clause discussion of the bill and further amendments take place.
5. Passage in the originating house: After the amendments and debates are over, the sponsoring member may move a motion that the bill be put to vote. This is analogous to the 3rd reading in the House of Commons.
If the bill is passed it goes to the other house for the same procedure.
6. Passage in the other house: If the other house rejects the bill, or doesn't take any action for 6 months (from the reception of the bill), or the 1st house rejects its amendments, then the president may call for a joint sitting (Art 108), which is resolved by a simple majority
Money Bills (Art 110)
1. A bill is a money bill if it deals with provisions only among the following: 1. Taxes. 2. Government borrowing. 3. Custody of CFI or the contingency fund of India. 4. Appropriation of money out of CFI. 5. Declaration of or amendment to any expenditure to be charged on CFI. 6. Receipt of money into CFI or the public account of India
2. A bill shall not be a money bill only because it imposes fines, fees etc. or changes the tax structure of local bodies.
3. A money bill needs the president's prior recommendation to be introduced and can be introduced only in HoP.
4. Any question regarding a money bill is decided by the speaker & he endorses that the bill is a money bill
5. When RS receives the money bill it can accept it, reject it (deemed to be passed), take no action for 14 days (deemed to be passed), or send it back with amendments (deemed to be passed in the form in which LS passes it the 2nd time). There is no provision for a deadlock.
Financial Bills (Art 117)
1. Financial bills deal with fiscal matters i.e. those concerning revenue & expenditure. Money bills are a type of financial bill.
2. Financial bills - first class: They involve one of the above 6 issues but are not confined solely to them. Such a bill is introduced similar to a money bill & its further passage is similar to that of an ordinary bill, except that an amendment to the part involving taxation matters requires presidential recommendation. A joint sitting may resolve a deadlock.
3. Financial bills - second class: It is an ordinary bill which contains provisions of spending from CFI, but does not contain any matters mentioned in Art 110.
It is similar to an ordinary bill except that it requires presidential recommendation at the consideration stage. A joint sitting may resolve a deadlock.
Joint Sittings (Art 108)
1. If the bill had been rejected by the other house or no action taken for 6 months, then the bill appearing before the joint sitting will be the original bill + other amendments as made necessary by the delay.
2. If the bill had been amended by the other house and some of such amendments were not accepted by the first house, then the bill presented before the joint sitting will be the bill with the amendments on which both houses disagree + other amendments as made necessary by the delay.
Budget
Expenditure charged on CFI (to provide financial autonomy)
1. Allowances of the president, speaker, deputy speaker, deputy chairman of CoS, judges of HC (only pensions) and SC, CAG, UPSC chairman & members
2. Admin expenses of CAG, UPSC, SC including allowances of serving persons
3. Debt charges for which GoI is liable
4. Any sums required to satisfy any judgement of any court of India.
Budget Process
1. Annual financial statement: It is laid before both houses of parliament on the recommendation of the president. It shows separately - (a) the sums required by the Constitution to be charged directly on CFI, and (b) the sums required to meet other expenditure of the government from the CFI. When the budget is presented only a general discussion on the policies and principles may take place in both houses of the parliament. No motion or voting is allowed at this stage.
2. Items which are required by the Constitution to be directly charged upon CFI shall not be put to vote but can be discussed by any house. After this stage, the role of CoS is over.
3. Demand for grants: Items which are required to meet other expenditure are grouped together in the form of demands for grants, receive presidential recommendation, are submitted to the HoP and then put to debate and vote.
Vote on account is a grant in advance for the estimated departmental expenditure for the year, before complete sanction has been given to that expenditure.
4. Appropriation bill: No money can be withdrawn from CFI except by appropriation acts. Once demands for grants have been accepted by the vote of HoP, an appropriation bill is drafted consisting of all demands for grants and the money to be charged directly on CFI. This bill has to be passed or rejected by the HoP and no amendment varying any amount can be made.
5. Annual finance bill: The taxation proposals of the budget are put together in the form of an annual finance bill and put to vote.
6. Cut motions: A disapproval of policy cut motion may be moved on a demand for grants, that the amount of the demand be reduced to Re 1/-, representing disapproval of the policy underlying the demand. An economy cut motion may be moved to reduce the amount of the demand by a specified amount. A token cut motion is that the amount of the demand be reduced by Rs 100/- in order to raise a specific grievance
Committees
Privileges
Parliamentary committees
Parliamentary forums
Legislative powers of president (Chapter III) - Ordinance making power u/a 123
The Union Judiciary (Chapter IV)
Supreme Court
Most powerful judicial institution of the world
1. Unified judiciary with SC at the apex
2. Judges appoint themselves
3. Art 142 - power to pass any order, untrammelled by any statutory restriction, to do complete justice
4. Art 144 - all civil/judicial authorities to act in aid of SC
5. Keshavananda Bharti case, 1973
6. IR Coelho vs State of TN, 2007 case
Organisation of SC
Judges
1. Sanctioned & current strength is CJI + 30 judges
2. CJI can appoint judges of HC as ad-hoc judges of SC for a temporary period, on lack of quorum of permanent judges
3. CJI can request retired judges of SC/HC to sit as SC judges for a temporary period
Qualifications
1. Citizen
2. He should be a distinguished jurist, or a judge of a HC for 5 years, or an advocate of a HC for 10 years.
His retirement age is 65 years.
Appointment
1. Constitutional provisions
a. For an SC judge, the president may consult any SC or HC judge
b. For an SC judge (other than the CJI), the president shall consult the CJI
2. Current procedure
a. The CJI is the sole authority to initiate the process of appointment of judges to the SC. The president appoints the judge on the advice of the CJI. The CJI's opinion represents the collective opinion of the judiciary.
b. The CJI forms a collegium of CJI + 4 other senior-most judges of the SC (+ the successor to the CJI, if not already among the 4). The views of each member of the collegium are obtained in writing and presented to the president.
c. The views of the senior-most judge of the SC hailing from the HC from which the person recommended comes (if not already a member of the collegium) also have to be obtained in writing and presented to the president.
d. The substance of the views of any other persons consulted by the CJI should also be presented to the president in writing.
e. Decision is by consensus. So if 2 or more judges of the collegium disagree, the recommendation should not be made & if made shall not be binding.
f. If the government rejects the recommendation, it must place all material and facts before the collegium. The collegium considers all the facts and if it unanimously reiterates the recommendation, it has to be accepted. Such facts may be presented to the person recommended and his defence be heard.
g. Merit is given predominance, followed by seniority and representation of HCs in the SC
3. Evolution of current procedure
SP Gupta v UoI, 1981 - It was decided that consultation of the CJI was not binding. However, it should be done completely. The rationale was that the executive was accountable to the parliament, and the judiciary wasn't.
SC Advocates-on-Record Association v UoI, 1993 - It was decided that consultation is binding. The president can send back the advice of the CJI, but after reconsideration the advice will be binding. Further, the opinion of the CJI is the collective opinion of the judiciary (CJI + 2 senior-most judges of SC) & shall have primacy.
The rationale was that the executive is not accountable to the parliament in this matter, since under the constitution the conduct of SC & HC judges cannot be discussed on the floor of the legislature. Moreover, only the judiciary can judge the competency of judges.
Re: Appointment of judges, 1998 - It was a presidential reference. Current practice is this only.
Removal
1. The Judges Inquiry Act regulates the procedure for impeachment of an SC/HC judge.
2. A removal motion signed by 100 members (LS) or 50 (RS) is given to the speaker/chairman. If admitted, the speaker/chairman constitutes a 3-member committee (CJI/SC judge + CJ of a HC + jurist) to investigate the charges. If the committee finds the judge guilty on grounds of proved misbehaviour or incapacity, a motion is taken up in each house. If the motion is passed by each house by a special majority (50% of total membership + 2/3rd of those present and voting), an address is presented to the president, who then orders the removal.
3. No judge has been impeached so far
Seat of SC
1. The constitution declares Delhi as the seat of the SC
2. It also authorises the CJI to appoint other places as the seat, after approval from the president
Procedure (Art 145)
1. SC can make rules with the permission of the president
2. Constitutional cases & presidential references shall be heard by a full bench comprising 5 judges
3. Other cases can be decided by a division bench of 2 judges
Jurisdiction of SC
Original J - power to hear disputes in the first instance & not by way of appeal
1. Writ J - it is original but not exclusive. Parliament can confer power on the SC to issue writs for purposes other than FRs also
2. Original suits - to decide union-state disputes & inter-state disputes, except
a. Pre-constitutional treaties & arrangements
b. River water disputes u/a 262
c. Matters referred to the finance commission u/a 280
d. Some expenditure settlements u/a 290
Political questions are excluded & decided u/a 263. It is original & exclusive jurisdiction.
3. Transfer J - u/a 139A it has the power to transfer a case pending before 1 HC to another.
Further, a question of law can be authoritatively settled if pending before 2 or more HCs, or the SC & 1 or more HCs.
4. Election J - original & exclusive jurisdiction to hear disputes regarding the election of the P & VP
Appellate J
1. Appeals as a matter of right
a. Constitutional matters - if the HC certifies that the case involves a substantial question regarding interpretation of the constitution which could be decided by the SC
b. Civil matters - if the HC certifies that the case raises a substantial question of law of public importance which could be decided by the SC
c. Criminal matters - if the HC (a) on appeal reverses an acquittal & sentences to death/10 yrs, (b) takes a case from a subordinate court & sentences to death/10 yrs, (c) certifies that the case is fit to be appealed to the SC
2. Appeals as a matter of discretion/SLP - u/a 136, the SC can take up an appeal regarding any law/order passed by any court/tribunal, except in the case of laws concerning the armed forces
3. Statutory appeals - if a law provides for appeal to the SC
Advisory J
1. U/a 143 the president can ask for the advice of the SC on a question of fact/law of public importance which may have arisen or might arise in the future. The SC may or may not give its opinion & the president is not bound by the opinion. Till now 15 advisory references have been made; the 15th was regarding the 2G scam.
2. If the matter pertains to the exceptions in original jurisdiction, then the SC shall give its opinion.
Review J
1. U/a 137, it is the power to review its own decisions. SC rules made u/a 145 require the petition to be filed within 30 days.
2. Curative petition - devised by the SC in Rupa Ashok Hurra vs Ashok Hurra, 2002, it is a remedy of last resort, which can be filed after dismissal of a review petition, on the grounds of
a. Violation of the principles of natural justice
b. The judge who decided the case having some personal interest in the case
It needs to be certified by a senior advocate. Exemplary costs are imposed if found frivolous.
Court of record
1. Only the SC and HCs (and not subordinate courts) are courts of record.
It means its judicial records are admissible as evidence in all courts of the country (also enjoyed by subordinate courts) and it has the power to punish for contempt of itself (not enjoyed by subordinate courts). Contempt of court is meant to protect justice, not the judges. But its use has become debatable due to inconsistencies in its application, intolerance of the judiciary towards public criticism, and because it is against the principles of natural justice.
2. Civil contempt vs criminal contempt: Civil contempt is wilful disobedience of any order/direction of the court. Criminal contempt is an act which lowers the authority of the court or tends to interfere with its procedure and delivery of justice.
Advocates of SC
1. Senior advocates - designated as such by the SC/HC because of their ability and expertise. They cannot appear in the SC without an A-o-R
2. Advocates-on-record - only they can file any matter before the SC. They can also appear/argue for a party
3. Other advocates - listed in a state bar council. Can argue/appear for a party in the SC, but cannot file any matter/document
Comptroller & Auditor-General of India Chapter V
Refer constitutional bodies
PART VI THE STATES
General Chapter I
State does not include J&K unless stated otherwise
The Executive Chapter II
Governor
Appointment
1. He is appointed by the president u/a 155
2. But as clarified by the SC, it is an independent constitutional office & not subordinate to the central govt
3. After the 7th CAA, 1956 the president can hand over charge of more than 1 state to a governor
Term of office
1. Holds office generally for 5 yrs, during the pleasure of the president
2. Can resign to the president
Qualification
1. Citizen + 35 years
Powers
Executive
1. u/a 154 executive power of the state is vested in the governor
2. Appoints CM, CoM, advocate general, who hold office during his pleasure
3. Appoints SPSC members, who can be removed only by the president
4. Appoints state EC, who can be removed in a manner similar to a judge of HC
5. Makes rules for more convenient transaction of govt business
6. U/a 167 - his personal right to seek information from the CM
7. U/a 356 - recommends president's rule
8. Acts as chancellor (de jure) of state universities & appoints vice-chancellors (de facto)
Legislative
1. Similar to president, except that he nominates 1 member to the SLA from the Anglo-Indian community & 1/6th of the SLC
Financial
1. Similar to president
Judicial
1. Pardoning power
2. He is consulted by the president while appointing judges of the HC
3. He appoints judges to subordinate courts in consultation with the HC
4. He appoints persons to the judicial service in consultation with the HC & SPSC
Discretionary powers
1. Explicit discretion (those mentioned in the constitution)
a. Art 163(1) - CoM to aid & advise the governor, except so far as under the constitution he is required to act in his discretion
b. Art 163(2) - Further, on any question whether a matter is or is not within his discretion under the constitution, the decision of the governor shall be final
c. Art 166(3) - power to make rules for convenient transaction of the business of the govt of the state
d. Art 200 - right to reserve any bill for the consideration of the president
e. Art 356
f. Under convention, while sending the fortnightly report regarding administration of the state
g. Special powers u/a 371 - subject to the directions given by the president, he is to act in his discretion
2. Implicit (situational) discretion
Special powers (Art 371)
1. Governors of Maharashtra and Gujarat have to pay special attention to the development of Vidarbha and Saurashtra.
2. Governors of Nagaland & Arunachal Pradesh have special responsibility for law and order in the state.
3. Governor of Manipur has special responsibility to ensure proper functioning of the committee of the LA comprising members from the hill areas of the state.
4. Governor of Sikkim has special responsibility for social and economic advancement of the people of the state.
5. Governor of Assam has special responsibility w.r.t. administration of ST areas
6. Governor of Karnataka has special responsibility to develop the Hyderabad-Karnataka region (98th CAA)
7. Governor of Telangana
The state legislature chapter III
Legislative power of the governor chapter IV
High courts chapter V
High Court
Organisation
Judges
1. CJ of HC can appoint additional judges for a temporary period if there is a temporary increase in business or arrears of work in the HC
2. CJ of HC can request a retired judge of any HC to act as a judge of the HC for a temporary period
3. President can appoint an acting judge in place of an existing judge, if the existing judge is acting as CJ or is unable to perform his duties
Qualification
1. Citizen
2. (a) 10 years of judicial office, or (b) 10 years as an advocate in a HC.
3. Retirement age is 62 years.
Appointment of Judges
1. According to the constitution, for a HC judge the president shall consult the CJI + governor + CJ of the HC (for judges other than the CJ)
2. Current procedure
a. The collegium comprises the CJI + 2 senior-most SC judges. The CJI has a veto. The collegium takes into account the opinion of the CJ of the HC concerned (which is entitled to the greatest weight), the views of other judges of the HC and SC (who are conversant with the affairs of the HC) as deemed necessary + the views of SC judges who had served in the HC concerned.
b. If the president doesn't accept the recommendation, he has to give back reasons. If the collegium reiterates, such a recommendation has to be accepted.
Transfer of Judges
1. For transfer of a HC judge, the president shall consult the CJI, according to the constitution
2. The collegium consists of CJI + 4 senior-most judges of the SC. They must obtain the views of the CJs of the 2 HCs concerned and the views of SC judges who are in a position to provide material information in the matter. Rest all same.
3. If the CJ of a HC is to be transferred, then the views of the CJs need not be taken.
Jurisdiction
Original Jurisdiction
1. Criminal cases: CrPC, 1973 took away all original criminal jurisdiction of HCs.
2. Civil cases: Original jurisdiction only in certain cases like marriage, contempt, disputes regarding elections, FRs etc.
Writ jurisdiction
1. Original in case of FRs
2. It can issue a writ outside its territory, if the cause of action arises inside the territory
Appellate Jurisdiction - hears appeals against judgements of subordinate courts
1. Civil Cases
a. First appeal: Appeal from district/subordinate courts on questions of both law & fact (in cases of high value).
b. Second appeal: Lies only on questions of law and procedure, and not of fact.
c. Calcutta, Bombay and Madras HCs have provision for intra-court appeals - a case decided by a single judge can be appealed to a division bench
d. Appeals from decisions of tribunals
2. Criminal Cases
a. An appeal from a sessions judge where the sentence is > 7 years, or in certain specific matters. A death sentence already requires HC approval before execution
b. Appeals from a judicial magistrate / metropolitan magistrate in certain specified cases
Supervisory jurisdiction (Art 227)
1. For any tribunal/court lying within its territory (except military tribunals) & subject to its appellate jurisdiction, the HC has the power of suo moto administrative & judicial superintendence
Control over subordinate courts
In addition to the appellate & supervisory jurisdiction, the HC has the following control over subordinate courts:
1. HC is consulted by the governor in personnel matters of district judges & appointment of persons to the judicial service
2. It deals with personnel matters of persons appointed to the judicial service (other than district judges)
3. Its law is binding on all subordinate courts in its territory
4. It can withdraw important cases from subordinate courts
Subordinate courts chapter VI
Appointment
1. District judges are appointed by the governor in consultation with the HC. Qualifications are (a) not already in the service of the centre or state, (b) advocate/pleader for 7 yrs, (c) recommended by the HC for appointment
2. Other judges are appointed by the governor in consultation with the SPSC & HC
Judicial service
1. It includes district judges & other civil judicial posts.
2. President can make the provisions relating to the judicial service apply to any class of magistrates in a state
Structure & jurisdiction
1. District judge is the highest judicial authority in the district. He possesses original & appellate jurisdiction in civil & criminal matters. He also has supervisory power over all subordinate courts. When he deals with criminal matters he is known as the sessions judge.
2. Chief judicial magistrate tries criminal cases punishable upto 7 yrs. Subordinate courts have unlimited jurisdiction over civil suits irrespective of the amount involved
3. Munsiff courts hear civil cases involving small values. Judicial magistrate tries criminal cases punishable upto 3 yrs
4. Panchayat courts are the lowest courts, having both civil as well as criminal jurisdiction (only for petty cases, in some states). Their nomenclature in different states is Nyaya Panchayat, Panchayat Adalat, Gram Kutchery.
5. In some metropolitan areas there are civil courts (chief judges) on the civil side & metropolitan magistrates on the criminal side
PART VIII UTs
1. The States Reorganisation Act, 1956 abolished the 4-fold classification of states and introduced a new 2-fold classification of states and UTs
2. Unlike other federal countries, India has a large number of UTs, for various reasons:
Lakshadweep, A&N islands - Strategic reasons
Daman & Diu, Dadra and Nagar Haveli - Cultural reasons; both were Portuguese colonies
NCT of Delhi and Chandigarh - Political and administrative considerations
Puducherry - Under the Indo-French agreement the people were allowed to retain their identity at their discretion
Administration of UTs
Executive
1. Executive head - President is the administrative head of UTs. He is represented by officers designated by him. Delhi, Puducherry and the A&N islands have a Lt Governor. The rest have administrators.
2. Regulations - President has the power to issue regulations (equivalent to an act of parliament) for the peace, progress and good government of all UTs except Delhi and Chandigarh. In the case of Puducherry, regulations can be issued only when the assembly is either suspended or dissolved.
3. Advisory committee - President can appoint an advisory committee for the UTs without a legislature. It is headed by the home minister and includes MPs from the state, administrators, members from local bodies and eminent persons. It discusses general issues relating to social and economic development of the UTs, the relevance of legislative proposals on state list items to the UTs, matters relating to the budget etc.
Legislature
1. Delhi and Puducherry have a legislative assembly and a CoM headed by a CM. The remaining 5 do not have such popular political institutions. However, parliament can provide for them by law.
2. The power of parliament to legislate on any of the 3 lists for UTs is supreme. The LAs of Delhi and Puducherry can legislate on the state and concurrent lists. Delhi cannot legislate on public order, police and land in the state list.
Judiciary
1. UTs can have HCs of their own or can be brought under the jurisdiction of nearby HCs. Presently only Delhi has a HC of its own. Each UT has a subordinate judiciary of its own.
Special provisions for Delhi
1. Delhi was given the special status of NCT in 1991. A LA and a CoM were also established.
2. CoM are appointed by the president (and not the Lt Governor) and their strength should be 10% of that of the LA
3. CoM aid and advise the Lt Governor. In case of a difference of opinion, the Lt Governor is to refer the matter to the president and act accordingly. Pending such decision, he can act as he deems necessary, if in his opinion the matter is urgent and it is necessary to take immediate action.
4. Provisions in case of failure of constitutional machinery - If the president, on report of the Lt Governor or otherwise, is satisfied that the administration of the NCT cannot be carried on according to the above provisions, and that for proper administration it is necessary and expedient to do so, he may suspend some or all of the above provisions, and make such incidental and consequential provisions as he deems necessary
5. Lt Governor can promulgate and withdraw ordinances, with the prior permission of the president. This can be done when the LA is in recess, and not when it is dissolved or suspended.
PART IX PANCHAYATS
Evolution
1. The 73rd CAA, 1992 added part IX to the constitution & also inserted the eleventh schedule. Schedule 11 contains 29 functional items of panchayats. The act has given a practical shape to Art 40.
Constitutional provisions
1. 243(A) gram sabha - It provides for the gram sabha, whose functions will be determined by the state legislature. Every person 18 yrs of age registered in the electoral rolls of the village is a member of the gram sabha
2. 243(B) constitution - It provides for a 3-tier structure of PRIs - district, intermediate and village. However, a state having a population of less than 20 lac may not set up the intermediate level
3. 243(C) composition - It provides that all members of the panchayats will be elected by direct election. However, the chairperson at the district or intermediate level shall be elected indirectly by the elected members, and the chairperson at the village level shall be elected in a manner decided by the SL.
4. 243(D) reservation - It provides for reservation of 33% of seats for women (members & chairpersons). The state shall provide reservation for SC/STs (also chairpersons) in proportion to their population. Further, it can provide reservations for other backward classes also.
5. 243(E) duration - Unless dissolved, a panchayat shall continue for 5 yrs. A new panchayat elected after dissolution shall continue only for the remainder of the term.
6. 243(F) disqualification - A person shall be disqualified if so disqualified under any law made by the SL, or under any law for the time being in force for elections to the SL. The SL has defined grounds similar to the disqualification of an MLA/MP. Also, no person shall be disqualified on the ground of being less than 25 yrs, if he has attained 21 yrs. The authority to resolve disputes regarding disqualification shall be decided by the SL.
7. 243(G) - It authorises the SL to endow powers, functions & responsibilities on the panchayats to let them function as institutions of self-govt. Such a law may provide for the preparation of plans or implementation of schemes regarding social justice & economic development, including the matters of schedule 11
8. 243(H) - The SL by law may authorise panchayats to levy or appropriate taxes, duties, tolls, fees; or assign taxes, duties, tolls, fees collected by the SL to panchayats; or provide for grants-in-aid from the CF of the state; or provide for the constitution of funds for crediting & withdrawal of money received.
9. 243(I) - The governor of every state shall constitute a state finance commission every 5 yrs, which will make recommendations regarding (1) distribution or assignment of taxes & provision of grants-in-aid, (2) measures needed to improve the financial situation of panchayats, (3) any matter referred by the governor. The governor shall place the report & action-taken report before the SL. The Central FC shall also suggest measures to augment the CF of the state to supplement the resources of the panchayats (on the recommendations of the SFC)
10. 243(J) - SL may by law provide for maintenance & auditing of accounts by panchayats
11. 243(K) - It provides for a state election commission to supervise & conduct panchayat elections. The state election commissioner shall be appointed by the governor & be removed in a manner similar to that of a judge of a HC.
12. 243(L) - The president may direct the application of the provisions of this part to UTs, partly or wholly.
13. 243(M) - The act does not apply to J&K, Nagaland, Mizoram, Meghalaya, the hill areas of Manipur, and the Darjeeling district of WB. Also it does not apply to scheduled & tribal areas, but can be extended to them. For this the parliament has enacted the PESA act, 1996
14. 243(N) - All states to enact new laws within 1 year of the amendment act
15. 243(O) - It bars the interference of courts in electoral matters of panchayats. Matters regarding delimitation or allotment of seats to a constituency cannot be questioned in any court. Further, elections can be questioned only by an election petition, in a manner & by an authority decided by the SL.
Voluntary provisions
All the above provisions are compulsory, except
1. Giving representation to MPs or MLAs or MLCs
2. Reservation of seats for BCs
3. 243(G)
4. 243(H)
PESA act
PART IXA MUNICIPALITIES
PART IXB THE CO-OPERATIVE SOCIETIES
What is a cooperative society (COSO)?
1. A co-operative society is an autonomous association of persons united voluntarily to meet their common economic, social and cultural needs and aspirations through a jointly-owned and democratically-controlled enterprise.
2. A co-operative society is another means of forming a legal entity to conduct business, besides forming a company. It pools together human resources in the spirit of self and mutual help with the object of providing services and support to members.
Constitutional provisions - 97th CAA, 2011
1. It made the right to form COSOs a FR u/a 19
2. It included a new DPSP u/a 43-B for promotion of COSOs
3. It added part IX-B in the constitution, entitled "The Co-operative Societies", u/a 243-ZH to 243-ZT
Reasons for 97th CAA
Provisions of part IX-B
1. Incorporation of COSOs - SL can incorporate, regulate and wind up COSOs based on certain principles
2. Number and term of members of board and its office bearers - Number will be provided by the SL. Maximum directors: 21. SL shall reserve 1 seat for SC or ST and 2 for women on the board of a COSO. Term: 5 years.
Co-opted persons: 2, without the right to vote, can be included.
3. Election of members of board - Elections shall be conducted before the expiry of the term, by a body decided by the SL
4. Supersession and suspension of board and interim management - The board may be superseded or suspended in case of persistent default, negligence of its duties, acts prejudicial to members or contrary to constitutional provisions, or when elections are not held according to the provisions. However, this shall not be done if there is no govt shareholding, loan or financial assistance. Otherwise also it can be done for a max period of 6 months. Elections have to be held within 6 months in case of supersession.
5. Audit of accounts of COSOs - SL can make provisions for maintenance and auditing of accounts once a financial year. Further, it can lay down minimum qualifications for auditors. The auditor shall be appointed by the general body of the COSO from within a panel approved by the state govt. The audit shall be done within 6 months of the close of the financial year. The report shall be laid before the SL.
6. Convening general body meeting - SL may provide for convening within 6 months of the close of the financial year.
7. Right of a member to get information - SL may provide for members' access to books, information and accounts; participation in management; co-operative training and management
8. Returns - Every COSO shall file returns within 6 months of the close of every financial year
9. Offences and penalties - SL shall decide on them. Certain acts of corruption and wilful defaulting shall be included in such a law.
10. Application to multi-state COSOs - the provisions apply with the references changed to parliament, central acts and the central govt.
11. Application to UTs - Provisions of this part shall apply to UTs, but the president may make them inapplicable
12. Continuance of existing laws - Previous inconsistent laws continue until repealed/amended, or till 1 year
PART X SCHEDULED AND TRIBAL AREAS
1. Art 244
2. Art 244A provides for the formation of an autonomous state comprising certain tribal areas in Assam and the creation of a local legislature and/or CoM.
Administration of scheduled areas
1. Art 244(1) states that the provisions of the 5th schedule shall apply to the administration and control of scheduled areas and STs in any state other than Assam, Meghalaya, Tripura and Mizoram.
2. Special provisions are provided as these areas are inhabited by aboriginals, who are socially and economically backward, and special effort is needed to improve their condition
Provisions of the 5th schedule
1. Declaration of scheduled areas - President is empowered to declare them. He can alter the area or boundary, cancel the designation or make fresh orders in consultation with the governor of the state
2. Executive power of state and centre - Executive power of the state extends to these areas, and that of the centre extends to giving directions to the state regarding their administration. The governor has to submit a report regarding the administration of these areas annually or whenever required by the president.
3. Tribes advisory council - Each state having scheduled areas has to establish a TAC for the welfare and advancement of STs. It is to be of 20 members, 3/4th of whom will be representatives of the STs in the SLA. A state having STs but no scheduled areas can establish a TAC, if the president so directs.
4. Law applicable to scheduled areas - Governor can direct that any act of parliament or the SL applies with or without modifications, or does not apply. He can make regulations for peace and good govt after consulting the TAC. Regulations repealing or amending any act of parliament or the SL require the president's assent
5. The constitution required the president to appoint a commission to report on the administration of scheduled areas and the welfare of STs within 10 years of the commencement of the constitution. The 1st commission was appointed in 1960 under UN Dhebar. The 2nd was appointed in 2002 under Dilip Singh Bhuria.
Administration of tribal areas
1. Art 244(2) states that the provisions of the 6th schedule shall apply to the administration of tribal areas in Assam, Meghalaya, Tripura and Mizoram
2. The tribes in these states, as compared to other states, have not assimilated much with the life and ways of the other people in these states. They still retain their culture, customs and civilization. Thus these areas are treated differently, and a sizeable amount of autonomy has been given to them by the constitution
Provisions of the 6th schedule
1. The tribal areas in the 4 states have been constituted as autonomous districts. But they fall inside the executive authority of the state concerned. If there are different tribes in an autonomous district, the governor can divide it into several autonomous regions. The governor is also empowered to organise and reorganise these districts. That means he can alter their areas, boundaries, names etc.
2. Each such district has a district council (DC) comprising 30 members. 4 are nominated by the governor and 26 are elected by adult franchise. The former hold office during the pleasure of the governor and the latter for 5 years (unless the council is dissolved earlier). Each autonomous region has a separate regional council (RC).
3. DCs and RCs administer the areas under their jurisdiction. They can make laws regarding land, forests, marriage, social customs etc. All such laws require the assent of the governor.
4. DCs and RCs can constitute village councils and courts for the trial of cases between the tribes. The jurisdiction of the HC over these cases and suits is specified by the governor.
5. DCs and RCs are empowered to assess and collect land revenue and to impose certain specified taxes
6. DCs can establish, construct or manage primary schools, ferries, dispensaries, fisheries, roads etc. They can also make regulations to control money lending and trading by non-tribals. Such regulations require the governor's assent
7. The acts of parliament and the SL either do not apply to these districts and regions, or apply with modifications
8. Governor can appoint a commission to report on any matter relating to the administration of these districts and regions. He may dissolve a DC or RC on its recommendation.
PART XI UNION STATE RELATIONS
Legislative relations chapter I
Territorial extent
1. The laws made by parliament are applicable throughout the territory of India. Those of a state are confined only to the territory of the state. Parliament has the power of extra-territorial legislation, i.e. for citizens who are staying outside India.
2. However, laws made by parliament can be made inapplicable in certain UTs and in scheduled and tribal areas.
Distribution of legislative subjects
1. Schedule 7 provides for the union, state and concurrent lists. Both centre and state can legislate on concurrent list items, but in case of conflict the central law prevails. A state law can prevail only when it has received the assent of the president; even then, parliament can override it at a later stage.
2. In cases of overlapping, the union list prevails. Between the concurrent and state lists, the concurrent list prevails
3. The power to make laws w.r.t. residuary subjects rests with parliament.
Parliamentary legislation in the state field
1. Art 249 - When the RS passes a resolution, supported by 2/3rd of the members present and voting, that it is necessary in the national interest, parliament acquires jurisdiction to legislate on state subjects. The resolution remains in force for 1 year and can be extended any number of times, by 1 year at a time. The state can continue to legislate, but in case of conflict the central law prevails. Central laws cease 6 months after the expiration of the resolution.
2. Art 250 - During a national emergency
3. Art 252 - When 2 or more states pass resolutions authorising parliament to legislate on a state list item, then parliament (only) can legislate on that subject for the concerned states. Any other state can join later. States cannot withdraw; only parliament can repeal. E.g.
Wildlife (Protection) Act, 1972
4. Art 253 - To implement international agreements, parliament can legislate on a state list item. E.g. Geneva Convention Act, 1962
5. Art 356 - During state emergency
Centre's control over state legislation
1. Art 31A - State can provide for agrarian reforms, even if they violate FRs. But all such bills have to be reserved for the president.
2. Art 31B - It provides for Schedule 9. No state legislation can be placed under it without the acceptance of parliament
3. Art 200 - Governor can reserve any bill for the consideration of the president, who enjoys an absolute veto over them
4. Art 288(2) - SL can impose a tax on water/electricity generated, sold or distributed by an authority established by an act of parliament, but all such bills require the approval of the president.
5. Bills on certain matters in the state list can be introduced in the state legislature only with the previous sanction of the president. E.g. Art 304(b) - SL can impose reasonable restrictions on trade, commerce and intercourse within the state, but the bill requires the president's prior recommendation.
6. Art 360 - During a financial emergency, the president can direct states to reserve money bills and other financial bills for his consideration.
Doctrines
Administrative relations chapter II
Executive power
All-India services
Public service commissions
Integrated judicial system
Relations during emergencies
Other provisions
Cooperative federalism
PART XII FINANCE, PROPERTY, CONTRACTS & SUITS
Allocation of taxing powers
1. Parliament (SL) has exclusive power for the union list (state list). Both have powers for the concurrent list. Residuary power is with parliament.
Restrictions on states
1. Taxes on professions, trades and employments should be <= Rs 2500 per annum
2. Sales tax should not be imposed on sales outside the state, on imports/exports, or on inter-state trade. A good declared by parliament to be of special importance is subject to its restrictions
No tax on electricity consumed by or sold to the centre, or consumed in the construction/operation of railways Distribution of tax revenues Distribution of Non-tax revenues 1. Receipts to centre Post, telegraphs, railways, banking, broadcasting 2. Receipts to states Irrigation, forests, fisheries Grants-in-Aid to states 1. Statutory grants For states in need and not for all states. Charged on the CFI. Given on the recommendation of the FC. Apart from this general provision, the Constitution provides for specific grants for STs and administration of scheduled areas including Assam 2. Discretionary grants Can be made for any public purpose. On the recommendation of the PC. PART XIII TRADE, COMMERCE & INTERCOURSE Freedom of trade, commerce and intercourse 1. Trade refers to the sale and purchase of commodities (goods and services). Commerce refers to transactions by air, water or land with the objective of supplying commodities. Intercourse refers to the free movement of commodities, with or without commercial motive. 2. Art 301 states that, subject to the other provisions of this part, TCI shall be free throughout the territory of India. The provision is based on the Australian constitution, which provided for inter-state freedom. However the Indian constitution recognises both inter-state and intra-state freedom. 3. Art 307 empowers parliament to appoint an authority and vest it with the necessary powers and duties to carry out the provisions of this part. But no such authority has been appointed so far. Restrictions 1. Parliament - Parliament can impose restrictions on the freedom in public interest. But it cannot discriminate between or give preference to one state over another, except in case of scarcity of goods in any part of India. 2. State - A state can impose reasonable restrictions on the freedom in public interest. But it cannot discriminate between or give preference to one state over another. Further the SL can impose on imported goods such tax as is imposed on similar goods manufactured or produced in the state.
This is to prevent discrimination between local and imported goods of similar nature. 3. Public monopolies - The freedom is subject to laws providing for monopolies in favour of the centre or a state. That is, citizens can be completely or partially excluded/restricted in such cases. 4. Restrictions should be reasonable and are judicially reviewable. Rule of direct and immediate test 1. SC in 1961-62 came up with a rule to test the reasonability of restrictions. 2. It declared that any restriction which imposed a direct and immediate impediment on the freedom u/a 301 is void. However regulatory rules and compensatory taxes are not regarded as restrictions that place a direct impediment. Rather, in the view of the SC, they promote T&C. Art 19(g) and Art 301 1. Art 19(g) provides a freedom from the standpoint of citizens to practise any profession, trade, business or occupation throughout the territory of India. 2. Art 301 provides for freedom from the standpoint of movement and passage of commodities, irrespective of the individuals engaged. Thus the state also enjoys this right. Also it facilitates trade, commerce and intercourse by removing inter-state and intra-state barriers. 3. Art 301 complements Art 19(g) in the sense that a citizen's freedom to practise might be hampered if the inter-state and intra-state freedom is restricted. SC has stated that on restriction of trade and commerce, an aggrieved individual can approach the SC directly by establishing the violation of Art 19(g). Ordinarily this happens. PART XIV SERVICES UNDER THE UNION & STATES CHAPTER I SERVICES Classification of services 1. All-India services These are common to both centre and states. These include the IAS, IPS and IFS (forest). Sardar Vallabhbhai Patel is known as the Father of AIS. 2. Central services The personnel work under the exclusive jurisdiction of the centre. They are divided into Groups A, B, C and D. Groups A & B are gazetted officers.
Groups C & D are non-gazetted and involve clerical and manual work respectively. IFS (foreign) is the highest central service. 3. State services The personnel work under the exclusive jurisdiction of the state. They are also divided into 4 classes or groups. Generally classes I & II come under the gazetted class. Their names are published in the government gazette for appointment, promotion, transfer etc. Further they enjoy special privileges over the non-gazetted class. The former are officers and the latter are employees. Constitutional provisions 1. Following the Chinese practice, civil servants (CS) in India are recruited through an open competitive exam. Doctrine of pleasure (Art 310) 1. All CSs belonging to the AIS and central services hold office during the pleasure of the president. State CSs hold office during the pleasure of the governor. 2. However there is an exception to the general rule of pleasure. The president or the governor (in order to secure the services of a person having special qualifications) has to provide for compensation in 2 cases - if the post is abolished before the contractual period, or the CS is required to vacate the post, but not for any misconduct on his part. Safeguards to civil servants 1. Art 311 provides 2 safeguards (a) CSs cannot be dismissed/removed/reduced in rank by an authority subordinate to the one which appointed them (b) A CS can be dismissed/removed/reduced in rank only after an inquiry and after giving him a reasonable opportunity of being heard. 2. These safeguards are available to members of the AIS, CSs of the centre and states, and persons holding civil posts under the centre or a state, and not to members of defence services or persons holding military posts. 3.
The 2nd safeguard is not available in 3 cases (a) When a CS is convicted or sentenced for a criminal offence (b) If the inquiring authority records in writing that it is not reasonably practicable to hold an inquiry (c) When the president or governor is satisfied that in the interest of the security of the state it is not expedient to hold such an inquiry 4. Prior to the 42nd CAA, the right to be heard was available at 2 stages - at the inquiry stage and at the time of implementation of the punishment. The 42nd CAA removed the right at the later stage. 5. Other protections (a) UPSC/SPSC shall be consulted before a CS is dismissed/removed/reduced in rank (b) CSs are governed by the Civil Services Conduct rules, 1964 (c) CSs are protected by the rule of law. They enjoy the protection of courts against arbitrary dismissal. All-India Services (Art 312) 1. Parliament can create a new AIS (42nd CAA included AIJS) if RS passes a resolution (supported by 2/3rd present and voting) declaring that it is necessary or expedient in the national interest to do so. Creation of the AIJS will not be deemed to be an amendment u/a 368. However the AIJS has not been created till now. 2. Recruitment and conditions of service of the AIS are regulated by the AIS act, 1951 CHAPTER II PUBLIC SERVICE COMMISSIONS PART XIVA TRIBUNALS 1. This part was added by the 42nd CAA, 1976 Art 323A Administrative Tribunals 1. It authorises parliament (and not SL) to establish tribunals for disputes relating to service matters in public authorities. Parliament can take these particular disputes out of civil courts and HCs and place them before ATs. 2. 3-fold objective (a) Relieve the workload of regular courts (b) Speedy justice on service issues (c) Better safeguards to the civil service by including administrative elements in the case 3. In pursuance of Art 323A, parliament enacted the Administrative Tribunals act, 1985.
It provided for the establishment of 1 central administrative tribunal (CAT), state administrative tribunals (SATs) and joint administrative tribunals (for 2 or more states) CAT and SAT 1. CAT was set up in 1985. There are SATs in 9 states as of 2013. 2. Composition and appointment ATs are composed of a chairperson and such number of vice-chairpersons as the government may fix. They are appointed by the president in consultation with the CJI (CAT) or the governor (SAT). Members are appointed from judicial or administrative backgrounds. The chairman should be a serving or retired judge of a HC, or should have served as a VC for at least 2 years. 3. Tenure Chairman and VCs are appointed for 5 years or till 65 years of age. Members are appointed for 5 years or till 62 years of age. 4. Procedure followed They can adopt their own rules of procedure and are not bound by the technicalities of the CPC and the rules of the Evidence act, though they are guided by the principles of natural justice (PNJ). Their procedure is regarded as judicial in nature. Moreover they have the powers of a civil court in certain matters like issuing of summons. 5. Jurisdiction They have original jurisdiction in service matters of central (central services, AIS, persons in civil posts serving the union) and state government employees. However members of defence forces, officers and servants of the SC and HCs, and secretarial staff of parliament and state and UT legislatures are not covered. Art 323B Tribunals for other matters 1. It authorises parliament and SLs to provide for tribunals for other matters such as taxation, land reforms, import-export, elections etc. 2. Under Art 323B a hierarchy of tribunals can be established. There is no such hierarchy in tribunals established u/a 323A. Chandra Kumar case (1997) 1. Both the Articles provided that the decision of ATs cannot be challenged before a HC and can be taken up for appeal in the SC only. 2. Judgement The SC held that these two particular clauses are void as they excluded the appellate jurisdiction of HCs over tribunal decisions.
Moreover (a) JR is a part of the basic structure and cannot be taken away even by an amendment (b) ATs are inferior bodies as they are statutory and cannot be compared to constitutional bodies like HCs. They cannot be substitutes; their role is supplementary in nature. 3. Impact (a) Now ATs might not lead to a reduction of cases in regular courts (b) The objective of providing speedy justice has been nullified to a large extent (c) ATs appear to have lost their relevance as states like TN, HP and MP wound up their ATs recently. However SATs were recently set up in HP and Assam. PART XV ELECTIONS PART XVI SPECIAL PROVISIONS RELATING TO CERTAIN CLASSES (Art 330-342) PART XVII OFFICIAL LANGUAGE (Art 343-351) 1. It contains provisions regarding the language of the union, regional languages, the language of the Judiciary and special directives to protect linguistic minorities and promote the development of Hindi PART XVIII EMERGENCY PROVISIONS National emergency Instances when invoked 1. 1962-68 On grounds of the 1962 Indo-China war. Was the longest. 2. 1971-77 On grounds of external aggression 3. 1975-77 On grounds of internal disturbance Provisions 1. U/a 352 if the president is satisfied that there exists, or may exist in future, a situation of emergency due to war, external aggression or armed rebellion, then he can proclaim emergency after receiving the written advice of the cabinet 2. The president can proclaim the emergency for the whole or a part of India. He can also vary the proclamation afterwards & issue another proclamation irrespective of whether there is a proclamation already in force or not 3. The resolution approving the emergency shall be approved by both houses within 1 month by a special majority as u/a 368. If LS stands dissolved at that time, then the reconstituted LS shall approve it within 1 month. 4. After ratification the emergency continues for a period of 6 months. Parliament can extend the emergency by 6 months at a time, any no of times. Revocation 1.
If 1/10th of the members of LS give in writing, to the speaker (if LS is in session) or the president (if LS is not in session), their intent to discontinue the national emergency, then the speaker or president, as the case may be, convenes a special session of LS. 2. In this session if the LS passes a resolution to discontinue the emergency by a simple majority, the president shall revoke the emergency Effects 1. Effect on executive state govts are not suspended, but are brought under the effective control of the central govt. The central govt can issue binding directions to them, even in relation to items in the state list. 2. Effect on legislature the distribution of powers stands suspended. Parliament assumes concurrent legislative powers on state list items. SLs continue to legislate, but in case of conflict the central law prevails. However union laws on state list items expire 6 months after the emergency has ceased to operate. Also parliament can extend the life of the LS & SLAs by 1 yr at a time any no of times, which can continue till 6 months after the emergency has ceased to operate. 3. Effect on financial relations the distribution of financial resources stands suspended & the centre can make use of financial resources in any part of the country to address the grounds of the emergency. Also the president can modify the constitutional distribution of revenue b/w centre & states. Such an order has to be laid before both houses of parliament & the modification continues till the end of the financial yr. 4. Effect on FRs (a) earlier u/a 358, the freedoms u/a 19 were automatically suspended for the whole emergency period throughout the territory, & laws made & executive actions taken during the emergency had no remedy after the emergency ceased. The 44th CAA provided that Art 19 will be suspended only in case of an external emergency & that only laws relating to the emergency are protected from being challenged.
(b) earlier u/a 359 all FRs could be suspended for a specified time in the whole or a part of India by a presidential order, & laws made & executive actions taken during the emergency had no remedy after the emergency ceased. The 44th CAA provided that Arts 20 & 21 cannot be suspended u/a 359 & that only laws relating to the emergency are protected from being challenged 5. Judicial review the proclamation can be challenged on the ground of malafide, or that the declaration was based on wholly irrelevant facts or is absurd Related constitutional amendments 1. 38th CAA, 1975 - Issue different proclamations on different grounds irrespective of any existing proclamation 2. 42nd CAA, 1976 - Proclaim to the whole or any part of India; Proclamation immune from JR 3. 44th CAA, 1978 - The vague term internal disturbance was replaced by armed rebellion (an armed uprising to overthrow a lawfully elected govt); Changed oral advice to written advice, vary the proclamation, special majority, continued parliamentary control, special session of LS, effect on FRs, JR State emergency or President's rule Provisions 1. U/a 355 it is the duty of the union to ensure that the govt of every state is carried on in accordance with the provisions of the Constitution. U/a 365 if the state govt fails to comply with or give effect to any direction given by the centre, then it can be considered a breakdown of constitutional machinery 2. U/a 356 if the president is satisfied, on the report of the governor or otherwise, that the administration of a state cannot be carried on in accordance with the provisions of the Constitution, then he can take over the administration of the state & notify that parliament shall exercise jurisdiction over state subjects. However he cannot take over the powers of the HC. 3. Every such proclamation has to be approved by both houses of parliament within 2 months by a simple majority. If LS is dissolved at that time, the reconstituted LS has to approve it within 1 month. 4.
Such a proclamation will continue for 6 months unless approved by parliament for another 6 months. It can be extended beyond 1 yr only if (a) a NE is in operation in the whole or any part of the state, and (b) the ECI certifies that it is not possible to conduct elections in the state in the present situation. It cannot be extended beyond 3 years 5. It can be revoked by the president when the situation in the state is normal Effects 1. Effect on executive the state govt is dismissed & the centre takes over 2. Effect on legislature the SLA is suspended or dissolved. Its function is assumed by parliament, which can delegate the function to the president, who generally makes laws in consultation with MPs from that state (such delegation is not allowed in case of national emergency). Laws remain effective & amendable by the SL after the emergency ceases 3. Effect on financial relations no effect Related constitutional amendments 1. 42nd CAA it changed the approval period of parliament to 1 yr 2. 44th CAA - it restored the approval period to 6 months; divided the 3 yr period into 1 yr of ordinary & 2 yrs of extraordinary extension SR Bommai case (1994) 1. Propositions laid down 1. Power of the president is subject to JR. Although the court cannot inquire into the advice tendered by the CoM, it can apply JR on the following grounds (a) whether the decision was taken on any material (b) whether the material was relevant (c) whether the exercise of power by the president was malafide; in this case the court can restore the status quo ante 2. Before approval from parliament within 2 months, the president should not take any irreversible action like dissolving the assembly 3. A state govt pursuing anti-secular policies is liable to action u/a 356 4. As far as possible, the union must issue a warning to the erring state and give it sufficient time to recover 5. The governor should desist from any subjective conclusion & a floor test (chance to the incumbent CM to prove his majority in a reasonable time) should be held 6.
Failure to provide good governance and failure of law and order amount to a breakdown of administrative machinery and not constitutional machinery, and hence are not a valid ground for invoking Art 356 2. Cases of proper use 1. Art 365 2. If the state govt fails to fulfil its obligations as expressed in the preamble 3. When no stable govt is possible in the state 3. Cases of improper use 1. On grounds of maladministration/corruption/financial problems 2. When no prior warning is given to the state 3. When, after the resignation of the govt, the governor recommends w/o exploring the possibility of an alternative govt 4. When no floor test is held & the governor recommends on his subjective assessment Financial emergency 1. U/a 360 the president has the power to proclaim financial emergency if he is satisfied that there exists a grave situation wherein the financial stability & credit of India or any part of its territory is threatened. It has to be approved by both houses of parliament by a simple majority within 2 months & after that it continues till the president revokes it. 2. It is an Indian innovation & has never been used so far Effects 1. President can suspend the distribution of financial resources with the states 2. President can issue directions to follow canons (principles) of financial propriety (sound financial conduct) 3. President can direct the governor to reserve all financial & money bills for the consideration of the president 4. President can direct the state govts to reduce salaries & allowances of civil servants & other constitutional functionaries PART XIX MISCELLANEOUS PART XX AMENDMENT OF THE CONSTITUTION - Art 368 1. The procedure laid down is neither as easy as in Britain nor as difficult as in the USA 2. U/a 368 parliament can amend any part of the Constitution w/o affecting its basic structure Procedure 1. A bill to amend the Constitution can be introduced in either house w/o the prior recommendation of the president. 2.
It has to be passed by a special majority (majority of the total membership of the house + 2/3rd of members present & voting) by both houses, & there is no provision of joint sitting in case of disagreement since the requirement of a special majority would be rendered useless if a joint sitting were provided. 3. In case the bill seeks to amend the federal features of the Constitution, it has to be ratified by half of the SLs by a simple majority, with no time limit prescribed. Such features include - (a) Election of the president & its manner (b) Extent of the executive power of the union & states (c) SC & HCs (d) Distribution of legislative powers (e) S7 (f) Representation of states in parliament (g) Art 368 itself 4. Then the bill is presented to the president, who shall give assent to the bill Amendments outside the scope of Art 368 1. The Constitution can also be amended by a simple majority. These are not considered to be amendments u/a 368 2. These provisions include (a) Art 2 & 3 (b) S2, S5, S6 (c) Citizenship acquisition & termination (d) Elections to parliament & SLs & delimitation (e) Provisions relating to UTs PART XXI TEMPORARY, TRANSITIONAL & SPECIAL PROVISIONS (Art 369-392) Art 370 Temporary provisions w.r.t the state of J&K Evolution of Art 370 1. In Aug 1947 J&K also became independent and its ruler Maharaja Hari Singh decided to remain independent. In Oct 1947 Azad Kashmir Forces supported by the Pakistan army attacked the frontiers of the state. Under these extraordinary circumstances the state decided to accede to India. 2. The Instrument of Accession was signed in Oct 1947 and the state surrendered 4 matters defence, external affairs, communication, and ancillary matters to the Dominion of India. India also made a commitment to let the state determine its own constitution and the extent of India's jurisdiction over it. It was decided that this arrangement would govern the interim period until the decision of the constituent assembly of the state. Accordingly Art 370 was incorporated, which gives only temporary provisions. Provisions 1.
Parliament can make laws on those matters in the Union and Concurrent lists corresponding to the matters in the Instrument of Accession. These, and any other matters in these lists, have to be declared by the president in consultation with the state govt 2. Provisions of Art 1 and 370 are applicable to J&K. Other provisions of the Constitution can be applied as specified by the president in consultation with the state govt 3. President can declare that Art 370 ceases to operate, or operates with exceptions and modifications, only on the recommendation of the constituent assembly of the state Present relation 1. Presidential orders of 1950 and 1954 2. J&K autonomy resolution, 2000 rejected by the union cabinet 3. Group of interlocutors was appointed by the central govt in 2010 under Dileep Padgaonkar Special provisions for some states 1. These are meant to meet the aspirations of people of backward regions of states, or protect the interests of tribals or local people, or deal with disturbed L&O. Originally the Constitution did not provide these special provisions Art 371 Special provisions for Maharashtra and Gujarat 1. Responsibilities of the governors Art 371A Special provisions for Nagaland 1. Acts of parliament shall not apply in certain matters unless approved by the SLA 2. Responsibilities of the governor Art 371B Special provisions for Assam 1. President is empowered to create a committee of the Assam LA consisting of members elected from tribal areas and other members which he may specify Art 371C Special provisions for Manipur 1. President is empowered to create a committee of the Manipur LA consisting of members elected from the Hill areas, and can give responsibility for it to the governor. 2. Governor should submit an annual report to the president regarding administration of the hill areas 3. Centre can give directions to the state regarding hill administration Art 371D Special provisions for Andhra Pradesh 1. President is empowered to provide people in different parts of the state equal opportunities in matters of public employment and education 2.
Art 371E Establishment of a Central University in Andhra Pradesh Art 371F Special provisions for Sikkim 1. SLA 30 members, 1 seat in LS and Sikkim is a single constituency 2. Parliament can reserve seats in the SLA to protect the interests of weaker sections 3. Special responsibility of the governor 4. Union laws can be extended by the president Art 371G Special provisions for Mizoram 1. SLA 40 members; acts of parliament not to apply in certain customary cases unless approved by the SLA Art 371H Special provisions for Arunachal Pradesh 1. SLA 30 members; special responsibility of the governor ceases when the president directs so Art 371I Special provisions for Goa 1. SLA 30 members Art 371J Special provisions for Karnataka 1. Special provision for the Hyderabad-Karnataka region meant to meet its development needs PART XXII (Art 393-395) 1. Short title 2. Commencement 3. Authoritative text in Hindi (Art 394A, added by the 58th CAA, 1987) 4. Repeals SCHEDULES Schedule 1 1. Names of states and their territorial jurisdiction 2. Names of UTs and their extent Schedule 2 Salaries and emoluments of 1. President, Governor 2. Speaker and Deputy, Chairman and Deputy 3. SC and HC judges 4. CAG Schedule 3 Oaths and Affirmations Oath of office and oath of secrecy for union and state ministers; oath of office for 1. MPs and MLAs 2. Election candidates 3. SC and HC judges 4. CAG Oaths of the P, VP and governor are given in the Constitution itself Schedule 4 Distribution of seats in RS Schedule 5 Provisions relating to administration and control of scheduled areas and STs Schedule 6 Provisions relating to administration of tribal areas in Assam, Meghalaya, Tripura and Mizoram Schedule 7 Union, State and Concurrent lists Explanation of Art 246 Schedule 8 1. Scheduled languages. All states choose their language from here. Nagaland has not chosen any. 2. Initially 14. Now 22. Schedule 9 1. Acts and regulations of SLs dealing with land reforms and abolition of the zamindari system, and of parliament dealing with other matters. Was added by the 1st CAA, 1951 to protect the laws included from JR.
2. Art 31B is retrospective in nature. When a statute is declared unconstitutional but is later included in S9, it is considered to be in S9 since its inception. In other words, the judicial decision is nullified. However this protection is not available to amendments made after the date of inclusion. 3. But in 2007 the SC held that laws included after 24th Apr, 1973 are open to JR. Schedule 10 Anti-defection law Provisions 1. The law was enacted in 1985 by the 52nd CAA. It was enacted to check unethical political defections which destabilized govts. It also ensures that candidates elected with party support & on the basis of the party manifesto remain loyal to party policies & do not betray the trust of the people. 2. Original provisions the law applied to all members of central & state legislatures. It sought to disqualify them (a) if an independent member joins a PP (b) if a nominated member joins a PP after 6 months (c) if a legislative party member resigns from his PP & joins another PP, or (d) if he defies the party whip (directions issued by the PP w.r.t. voting in the house) & his action is not condoned by the party within 15 days. 3. Exceptions (a) it allowed wholesale defection i.e. if 2/3rd of the legislative party members break away & join another PP (b) the speaker at the central or state level was allowed to resign from his PP on taking up the office, provided he does not join any other PP & rejoins the same party after completion of his tenure 4. Under para 7 the authority to decide on disqualification is the speaker, whose decision is final & binding. SC judgement, 1992 1. It held that the decision of the presiding officer is not final & is subject to JR 2. Conformity with the directions issued by the party whip in all cases disregards the opinion of the representative member, reducing him to a person merely following directions from the party high command.
Thus the whip is binding only in cases of (a) money bills (b) confidence & no-confidence motions (c) vote of thanks to the president's or governor's address 91st CAA, 2003 1. It inserted Art 75(1A) & 164(1A) & restricted the strength of the CoM to 15% of the total strength of the house. It sought to discourage the ruling party from encouraging defections. 2. It inserted Art 75(1B) & 164(1B), whereby an MP or MLA disqualified under the law is ineligible to hold any ministerial post Schedule 11 Suggestive list of items for panchayats 29 items Schedule 12 Suggestive list of items for ULBs 18 items. Constitutional bodies Appointment to various Constitutional posts, powers, functions and responsibilities of various Constitutional Bodies. Election commission 1. The Election commission is a permanent and independent body established u/a 324. It handles elections to parliament, SLs, and the offices of president and VP. It must be noted that elections to local bodies are handled by a separate body known as the state election commission. 2. It is assisted by deputy ECs drawn from the civil service. At the state level, it is assisted by the chief electoral officer appointed by the CEC in consultation with the state govt. At the district level the collector acts as the district returning officer. He appoints a returning officer for every constituency and a presiding officer for every polling booth. Composition 1. It consists of a Chief election commissioner (CEC) and such number of election commissioners (ECs) as the president may decide. Appointment, conditions of service and tenure are determined by the president 2. When any other EC is appointed, the CEC acts as the chairman of the commission 3. President may appoint regional commissioners (RCs) in consultation with the election commission to assist it 4. Since 1993 the commission has consisted of the CEC and 2 other ECs. They have equal powers and receive salary and allowances similar to a judge of the SC. In case of difference of opinion, the matter is decided by majority 5.
Tenure is 6 years or till 65 years of age, whichever is earlier Independence 1. CEC is provided with security of tenure. He can be removed only in a manner similar to a judge of the SC. His service conditions cannot be varied to his disadvantage after his appointment 2. Other ECs and RCs cannot be removed except on the recommendation of the CEC Issues with independence 1. The Constitution has not prescribed any qualification or term for the members of the election commission. 2. The Constitution has not debarred retiring ECs from further appointment by the govt Powers and functions 1. Administrative it conducts (date and schedule), supervises and ensures free and fair elections. It scrutinises nomination papers. It determines the code of conduct (CoC) to be followed by PPs and candidates during elections. It is responsible for delimitation. It prepares and periodically revises electoral rolls and registers all eligible voters. It registers PPs, recognises them, grants them the status of national or state PPs and allots election symbols to them. It can cancel polls in case of certain irregularities. 2. Advisory It advises the president and governor regarding disqualification of MPs and members of SLs. Also it advises the president whether elections can be held in a state under state emergency. 3. Quasi-Judicial It acts as a court for settling disputes relating to recognition of PPs and allotment of election symbols. Public service commissions (Art 315-323 Chapter II Part XIV) 1. UPSC is the central recruiting agency in India. Parallel to it at the state level is the SPSC. 2. The Constitution also provided for the establishment of a Joint SPSC (JSPSC) for 2 or more states. It is a statutory body and can be created on the request of the SLs concerned. Composition is similar to a SPSC, except that the members resign to the president. Composition of UPSC and SPSCs 1. UPSC (SPSC) consists of a chairman and other members appointed by the president (governor). The strength and conditions of service of the members and chairman are determined by the president (governor). 2.
Tenure is 6 years or till 65 years of age (62 years for SPSC) 3. Members and the chairman can resign to the president (governor). They can also be removed. 4. An acting chairman is appointed by the president (governor) temporarily if (a) the office of chairman falls vacant (b) he cannot perform his duties due to absence or other reasons Removal of chairman or member 1. The president alone can remove the chairman or a member under the following circumstances (a) If he is adjudged insolvent (bankrupt) (b) If, during his term, he engages in any paid employment outside the duties of his office (c) If he, in the opinion of the president, is unfit to continue because of infirmity of mind or body 2. Additionally the president can remove on the ground of misbehaviour. For this the president has to refer the matter to the SC for an enquiry. If the SC upholds the removal and advises so (the advice is binding), the president can remove him. During the course of the enquiry, the president (governor) can suspend the member 3. According to the Constitution, a member will be guilty of misbehaviour if (a) he is interested or concerned in any contract/agreement made by the govt (b) he participates in any way in the profit/benefit of such a contract otherwise than as a member, and in common with the other members, of an incorporated company Independence 1. Security of tenure as the grounds and manner of removal are mentioned in the Constitution 2. Conditions of service cannot be varied to the disadvantage after appointment 3. Salaries, allowances and pensions are charged on the consolidated fund of India (or the state) 4. The chairman is not eligible for further employment under the govt of India or a state. However the chairman of a SPSC can be appointed as chairman or member of UPSC, or chairman of any other SPSC. 5. Members are not allowed employment in the govt of India or a state. However members of UPSC are eligible to be chairman of UPSC or a SPSC, and members of a SPSC are eligible to be chairman or member of UPSC or chairman of that or any other SPSC 6. Chairman and members are not eligible for a second term i.e. reappointment to that office Functions 1.
It conducts examinations for appointments to the AIS, central services and public services of the centrally administered territories. It assists the states in framing and operating joint recruitment schemes.
2. It is consulted by the govt in matters relating to appointment, promotion and transfer of civil servants, disciplinary matters concerning them, and any other matter related to personnel management. However, the govt has the final say in all these matters.
3. The jurisdiction of the UPSC can be extended by an act of parliament over the personnel system of any authority, corporate body or public institution.
4. The UPSC presents its performance report annually to the president, who places it before both houses, along with a memorandum explaining the cases where the advice of the commission was not accepted and the reasons. All such cases must be approved by the ACC.

Limitations
1. The UPSC/SPSC is not consulted while making reservations for BCs, nor while taking into consideration the claims of SCs and STs in making appointments to services or posts.
2. The UPSC is not consulted with regard to the chairmen and members of commissions and tribunals, high diplomatic posts, a bulk of group C and D services, or temporary appointments (generally < 1 year).
3. The President/Governor can make regulations specifying matters where it is not necessary to consult the UPSC/SPSC with regard to the AIS and central services. Such regulations have to be placed before both houses within 14 days. Parliament/the SL can amend or repeal them.

Role
1. The UPSC/SPSC is the central/state recruiting agency, while the DoPT is the central/state personnel agency. The Constitution visualises the UPSC to be the watchdog of the merit system in India/the state.
2. The emergence of the CVC/SVC in 1964 has affected the role of the UPSC/SPSC in disciplinary matters. Now both are consulted by the govt while taking disciplinary action against a civil servant, but the UPSC/SPSC, being a constitutional body, has an edge over the CVC.

Finance commission
1.
Art 280 provides for a Finance commission as a quasi-judicial body. It is constituted by the president every 5th year, or at such earlier time as he considers necessary. Till now, 14 finance commissions have been constituted.

Composition
1. The FC consists of a chairman and 4 other members. They are appointed by the president, for a term he specifies, and are eligible for reappointment.
2. Parliament has laid down the qualifications for the chairman and members. The chairman should be a person having experience in public affairs. The other members have to be (a) a judge of an HC or qualified to be one, or (b) a person having specialised knowledge of the finance and accounts of govt, or (c) a person having wide experience in financial matters and in administration, or (d) a person having specialised knowledge of economics.

Functions
1. The FC advises the president on (a) the distribution of the net proceeds of taxes between the centre and the states, and their allocation between the states; (b) the principles governing grants to the states by the centre; (c) measures to augment the CF of a state to supplement the resources of panchayats and municipalities on the recommendations of the SFC; (d) any other matter referred to it by the president.
2. The FC submits its report to the president, who lays it before both houses along with an explanatory memorandum on the actions taken on its recommendations. Thus its recommendations are advisory and not binding.

Impact of the planning commission
1. The setting up of the PC has undermined the role of the FC in centre-state fiscal relations.

National commissions for SCs and STs

National commission for SCs
1. It has been directly established by Art 338. It came into existence in 2004 after the 89th CAA, 2003 bifurcated the combined National Commission for SCs and STs into 2 bodies. It consists of a chairperson, a vice-chairperson and 3 members appointed by the president. Their conditions of service and tenure are determined by the president.
2.
Functions
(a) To investigate, monitor and evaluate the working of all legal and constitutional safeguards for SCs
(b) To inquire into complaints
(c) To participate in and advise on the planning of the socio-economic development of SCs, and to evaluate it
(d) To recommend measures to the centre and states for the effective implementation of the provisions for SCs
(e) To discharge such other functions concerning SCs as the president may specify
3. It presents an annual report to the president (and may report at any other time), who then places it before parliament along with a memorandum on the actions taken and the reasons for non-acceptance. Any report pertaining to a state govt is forwarded by the president to the governor, who follows the same process.
4. Powers - While investigating or inquiring, it has all the powers of a civil court trying a suit. The centre and states are required to consult it on all major policy matters affecting SCs. It discharges the same functions w.r.t. OBCs and Anglo-Indians.

National Commission for STs
1. It has been established by Art 338-A. It was separated out as the problems of STs are different from those of SCs. In 1999, a new Ministry of Tribal Affairs was created.
2. It has the same composition, functions and powers as above.
3. However, in 2005 the president specified certain other functions for it regarding the welfare, development and advancement of STs, such as the effective implementation of PESA and the prevention of alienation of tribal land.

Special officer for linguistic minorities
1. The 7th CAA, 1956 inserted Art 350-B with the following provisions: (a) there should be a special officer for linguistic minorities, to be appointed by the president; (b) his duty is to investigate matters relating to the safeguards provided to them. He reports to the president at such intervals as the president may direct. The president places the reports before parliament and sends them to the state govts concerned.

Comptroller and auditor general of India (CAG)
1. Art 148 provides for the office of the CAG. He is the head of the Indian audit and accounts department. He is the guardian of the public purse at both levels, centre and state.
2.
Appointment - Appointed by the president. Takes the oath before him. Tenure is 6 years or 65 years. Can resign to the president, or can be removed in the same manner as a judge of the SC.
3. Independence - Fixed tenure. Not eligible for further office under the centre or a state. Salary and service conditions are determined by parliament; his salary is equal to that of a judge of the SC. Conditions cannot be altered to his disadvantage. The administrative expenses of the office of the CAG are charged on the CFI. The conditions of service of employees of the Indian Audit and Accounts department and the admin powers of the CAG are prescribed by the president after consultation with the CAG.
4. Art 149 authorises parliament to prescribe the duties and powers of the CAG.
5. Art 150 - The CAG advises the president on how the accounts of the centre and states shall be kept.
6. Art 151 - He submits his audit reports on the accounts of the centre to the president or governor, who in turn places them before both houses of parliament or the SL.
7. Art 279 - He certifies the net proceeds (proceeds minus cost of collection) of any tax or duty. His certificate is final.

Attorney general and Advocate general
1. Art 76 provides for the office of the attorney general. He is the highest law officer of the country.
2. Art 165 provides for the office of the advocate general. He is the highest law officer of the state.
3. He is not a full-time counsel of the govt. He does not fall in the category of govt servants and is not debarred from private legal practice.

Appointment and term
1. He is appointed by the president (governor) and holds office during his pleasure. He may also resign to the president (governor). He should be qualified to be appointed as a judge of the SC. His term is not fixed. His remuneration is determined by the president (governor).

Duties and functions
1.
According to the Constitution, the duties of the AG (advocate general) are
(a) To give advice to the GoI (govt of the state) on legal matters referred to him by the president (governor)
(b) To perform such other duties of a legal character as are assigned to him by the president (governor)
(c) To discharge the functions conferred on him by the Constitution or any other law
2. The president has assigned the following duties to the AG:
(a) To appear on behalf of the GoI in the SC in all cases concerning the GoI
(b) To represent the govt in any presidential reference u/a 143
(c) To appear in an HC (when required) in any case concerning the GoI

Rights and limitations
1. Rights
(a) Right of audience in all courts in the territory of India
(b) Right to the privileges and immunities of an MP
(c) Right to speak and take part (but not vote) in the proceedings of both houses, a joint sitting, and any parliamentary committee of which he is named a member
These rights are available to the advocate general within the state and w.r.t. the SL.
2. Limitations
(a) He should not advise or hold a brief against the GoI, or in cases where he is not asked to
(b) He should not defend a criminal accused without the permission of the GoI
(c) He should not accept a directorship of any company without the permission of the GoI

Solicitor general of India
1. In addition to the attorney general, the other law officers of the GoI are the solicitor general and the additional solicitor general. They assist the attorney general. Theirs are not constitutional offices.

Other bodies

Statutory, regulatory and various quasi-judicial bodies

Planning Commission
1. It was established in 1950 by an executive resolution of the GoI. Thus it is neither a constitutional body nor a statutory body; it is a non-constitutional or extra-constitutional body. It plans for social and economic development.
2. It is only a staff agency, an advisory body, and has no executive responsibility.
3.
Composition - PM (chairman); Deputy Chairman (appointed by the cabinet, cabinet rank, fixed tenure, invited to all cabinet meetings w/o the right to vote); some central ministers as part-time members (Fin Min and Plan Min are ex-officio members); 4-7 full-time expert members (rank of MoS); member-secy (usually IAS). State govts are not represented.
4. Internal organisation - Technical divisions, housekeeping branches, programme advisors.
5. UIDAI was constituted in Jan 2009 as an attached office under the PC. The Programme Evaluation Office was established as an independent unit of the PC in 1952. It undertakes assessment of the developmental plans in the FYPs and gives feedback to the PC. It also provides technical advice to state evaluation orgs.

National Development Council
1. The NDC was established in 1952 by an executive resolution of the GoI. It is neither a constitutional nor a statutory body.
2. Composition - PM (chairman), all union cabinet ministers, CMs of all states, administrators of all UTs, and members of the PC. The secy of the PC acts as the secy of the NDC.
3. The draft FYP prepared by the PC is first submitted to the union cabinet, then placed before the NDC, and then before parliament. With the approval of parliament it becomes the official plan. Thus the NDC is the highest body below parliament responsible for policy matters of social and economic development. However, it is listed as an advisory body to the PC and its recommendations are not binding. It makes recommendations to the centre and states and should meet at least twice every year.

National Human Rights Commission
1. It is a statutory body, established by the Protection of Human Rights Act, 1993. The act was amended in 2006. It is the watchdog of human rights in the country.
2. Composition - Chairman (retired CJI); 4 members (a serving/retired judge of the SC, a serving/retired CJ of an HC, and 2 persons having knowledge or practical experience of human rights); 4 ex-officio members (the chairmen of the National commissions for Minorities, SCs, STs and Women).
3.
Appointment - The chairman and members are appointed by the president on the recommendation of a 6-member committee (PM, home minister, speaker of the LS, deputy chairman of the RS, and the LoOs of the RS and LS). Further, a sitting judge can be appointed only after consultation with the CJI. The term is 5 years or 70 years, and they are not eligible for further appointment under the centre or a state govt.
4. Removal - if adjudged insolvent; if engaged in paid employment; infirmity of mind or body; unsound mind; conviction and sentencing to imprisonment. The president can also remove on grounds of proved misbehaviour or incapacity; for this the SC inquires and advises the president.
5. Salaries are decided by the centre, but cannot be varied to their disadvantage during the term.
6. It has the powers of a civil court, its proceedings are of a judicial character, it can utilise the services of any officer of the centre, and it has to inquire within 1 year of the occurrence of an event.
7. Its role is advisory. Its annual or special reports are laid before the respective legislatures along with a memorandum of action taken and the reasons for rejection. The action taken should be communicated by the centre to the NHRC within 1 month (3 months in a case involving the armed forces).

State Human Rights Commission
1. Established by the same act. 23 states have constituted SHRCs. An SHRC can look into violations of human rights relating to the state and concurrent lists only.
2. Composition - Chairman (retired CJ of an HC); 2 members (a serving/retired judge of an HC or a district judge with 7 years' experience, and a person with experience of human rights).
3. The governor appoints, but only the president can remove.
4. The act also provided for the setting up of Human Rights courts with the concurrence of the CJ of the HC.

Central Information Commission
1. The RTI act, 2005 provided for the CIC and SICs. They were constituted by an official gazette notification and are not constitutional bodies. The CIC is an independent body which, inter alia, looks into complaints made to it regarding offices, PSUs and financial institutions under the centre and UTs.
2. Composition - the CIC and up to 10 ICs, appointed by the president on the recommendation of a committee (PM, a cabinet minister nominated by him, and the LoO in the LS).
They should be persons of eminence, not MPs/MSLs, hold no office of profit, carry on no business or profession, and not be connected with any PP.
3. Tenure and service conditions - 5 years or 65 years. Not eligible for reappointment. Removal is similar to that of the HRCs. Salaries are similar to those of the election commissioners, but cannot be varied to their disadvantage.
4. It has the powers of a civil court. The centre places its annual report before each house.
5. The governor can remove an SIC member, and refers the matter to the SC only for inquiry.

Central Vigilance Commission
1. It was established in 1964 by an executive resolution. Thus it was neither constitutional nor statutory. In 2003 it was given statutory status. It is the apex vigilance institution, free of control from any executive authority.
2. Composition - the Central VC (chairperson) and not more than 2 VCs, appointed by the president on the recommendation of a 3-member committee (PM, home minister, and the LoO in the LS). Tenure is 4 years or 65 years. Not eligible for further employment under the centre or a state. Removal is similar to that of the HRCs; the only difference is that misbehaviour is deemed on certain specified grounds. Salary is similar to that of UPSC members, but cannot be varied to disadvantage.
3. It has the powers of a civil court. It gives an annual report to the president, who places it before parliament.

Central Bureau of Investigation
1. It was established in 1963 by a Home ministry resolution. Later it was transferred to the Ministry of Personnel, and it now enjoys the status of an attached office. The SPE, set up in 1941, was also merged with it. It is not a statutory body and derives its powers from the DSPE Act, 1946. It is the main multi-disciplinary investigating agency. It assists the CVC.
2. Composition - Director (IG), special or additional directors, a large number of joint directors, DIGs, SPs, etc. Under the CVC Act, 2003, investigations by the CBI are supervised by the central govt, except for offences under the PCA, 1988, for which the CVC supervises. The director has 2-year tenure security. He is appointed by the central govt on the recommendation of a committee with the CVC as chair, the VCs, the Secy of the MHA, and the Secy (Coordination and public grievances) in the Cabinet Sectt.
Class 31: Cookie Monsters and Semi-Secure Websites
David Evans, CS150: Computer Science, University of Virginia

Why Care about Security?
- Confidentiality - keeping secrets; protecting the user's data
- Integrity - making sure data is not modified by unauthorized parties

Serious authentication requires at least 2 kinds of evidence. A naive login does a direct password lookup and comparison:

    Login: alyssa
    Password: spot
    Failed login. Guess again.
    Login: alyssa
    Password: fido

Here Eve's terminal is untrusted; the login program is a trusted subsystem that sends <"alyssa", "fido"> for checking.

Encrypting the password table: decryptK(encryptK(P)) = P. Problem if the key K isn't so secret.

Hashing: e.g. H(char s[]) = (s[0] - 'a') mod 10 maps "dog", "neanderthal", "horse" into buckets. A cryptographic hash needs:
- One-way: given h, it is hard to find x such that H(x) = h.
- Collision resistance: given x, it is hard to find y != x such that H(y) = H(x).

A (toy) one-way function:
- Input: two 100-digit numbers, x and y
- Output: the middle 100 digits of x * y
- Given x and y, it is easy to calculate f(x, y) = select middle 100 digits of (x * y); given f(x, y), it is hard to find x and y.

Salted password hashing (Morris/Thompson 79; this is the standard UNIX password scheme):
- Salt: 12 random bits
- DES+(m, key, salt) is an encryption algorithm that encrypts in a way that depends on the salt.
- How much harder is the off-line dictionary attack?

    # We use the username as a "salt" (since they must be unique)
    encryptedpass = crypt.crypt (password, user)
    rnd = str(random.randint (0, 9999999)) + str(random.randint (0, 9999999))
    encrnd = crypt.crypt (rnd, str(random.randint (0, 99999)))
    users.userTable.createUser (user, email, firstnames, lastname, encrnd)
    sendMail.send (email, "hoorides-bot@cs.virginia.edu", "Reset Password", \
                   "Your " + constants.SiteName + \
                   " account has been created. To login use:\n user: " + \
                   user + "\n password: " + encrnd + "\n")
    ...
(The code above is from register-process.cgi.)

From users.py (cookie processing and exception code removed):

    def createUser(self, user, email, firstnames, lastname, password):
        c = self.db.cursor ()
        encpwd = crypt.crypt (password, user)
        query = "INSERT INTO users (user, email, firstnames, lastname, password) " \
                + "VALUES ('" + user + "', '" + email + "', '" \
                + firstnames + "', '" + lastname + "', '" + encpwd + "')"
        c.execute (query)
        self.db.commit ()

    def checkPassword(self, user, password):
        c = self.db.cursor ()
        query = "SELECT password FROM users WHERE user='" + user + "'"
        c.execute (query)
        pwd = c.fetchone ()[0]
        if not pwd:
            return False
        else:
            encpwd = crypt.crypt (password, user)
            return encpwd == pwd

(Note the missing "+" before the closing "')" in the original INSERT has been fixed, and that both queries are built by string concatenation, which is vulnerable to SQL injection.)
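The salted scheme described on these slides can be sketched with Python's standard library. This is not the course's actual code: `hashlib.pbkdf2_hmac` stands in for the DES-based `crypt()` used above, and the 16-byte salt and 100,000 iterations are illustrative choices rather than values from the lecture.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per user; the slides use 12 random bits,
    # modern practice uses at least 16 random bytes.
    if salt is None:
        salt = os.urandom(16)
    # A deliberately slow, salted hash: each iteration multiplies the
    # cost of an off-line dictionary attack by the same factor.
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored_digest):
    # Re-derive the digest with the stored salt and compare.
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, stored_digest)
```

Because the salt is stored alongside the digest, an attacker who steals the table must re-run the dictionary attack once per user rather than once for the whole table.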
MoMeshVector - Rendering node that displays a vector data set.

    #include <MeshVizXLM/mapping/nodes/MoMeshVector.h>

Each value of the vector data set is represented by a line plus an (optional) small arrow. The lines are located at the cell centers for per-cell data sets, or at the node positions for per-node data sets.

The vectorSetId field defines the index of the vector set to display. This is an index into the list of vector sets existing in the traversal state (see the MoVec3Setxxx nodes).

The lines can be colored using a scalar set defined in the inherited colorScalarSetId field. This is an index into the list of scalar sets existing in the traversal state (see the MoScalarSetxxx nodes). To disable coloring, set this field to -1.

See also: Turbine, CellShape, ClipLine, FenceSlice, GridPlaneSlice, InterpolatedLogicalSlice, Isoline, LogicalSlice, Outline, Streamlines, Vectors

Methods:
- Constructor.
- getClassTypeId(): Returns the type identifier for this class. Reimplemented from MoMeshRepresentation.
- getTypeId(): Returns the type identifier for this specific instance. Reimplemented from MoMeshRepresentation.

Fields:
- arrow: Do not draw arrows on vectors if set to false. Default is true.
- scaleFactor: The scale factor to apply to the vector field to resize the representation. Default is 1.0. All values are accepted. If scaleFactor is 0, the representation is empty.
- shiftFactor: The shift factor to apply to the representation of each vector. Each vector is translated along its direction by the value of shiftFactor. Default is 0.0 (no shift). All values are accepted. (The documentation's illustrations, omitted here, show the effect of shiftFactor = 0.0, -0.5 and -1.0 on a vector set using a PER_NODE binding.)
- vectorSetId: Specifies the vector set to display. During traversal of the scene graph, vector sets are accumulated in a list of vector sets; vectorSetId is an index into this list. Default is 0, meaning the first set in the list.
Lark
----

Lark is a little experiment in designing a Lisp-like language with a syntax inspired by Smalltalk. It tries to answer the question, "Can you get the benefits of code-is-data without needing s-exprs?" It looks like this:

    # Exercise 1.3 in SICP:
    def: square is: [x] => (x * x)

    def: twoLargest is: [x y z] => (
        if: x < y then: (
            if: x < z then: [y z] else: [x y]
        ) else: (
            if: y < z then: [x z] else: [x y]
        )
    )

    def: sumOfLargestSquares is: [x y z] => {
        def: a is: twoLargest [x y z]
        square a.0 + square a.1
    }

Fun features:

1. Homoiconic. Code is data. Data is code. You can build and manipulate code at runtime.
2. Infix operators and user-defined keyword syntax. No (+ 1 2) or (if happy "clap hands" "sad face").
3. Macros. Define functions that don't evaluate their arguments.

Lark Syntax
===========

Core syntax
-----------

Internally, Lark has a very simple syntax. It's more complex than Scheme's, but just barely. It has three core expression types:

- Atoms are things like names, numbers and other literals.
- Lists are an ordered set of other expressions.
- Calls are a pair of expressions: the function and the argument.

That's it. Scheme has the first two, and Lark just adds one more. Every expression in Lark can be desugared to these core elements. The rest of this section explains the syntactic sugar added on top of that, but that's handled only by the parser. To the interpreter, the above is the only syntax. This *should* mean that it'll be as easy to make powerful macros in Lark as it is in Scheme.

Names
-----

The simplest syntactic elements in Lark are names: identifiers. There are three kinds of identifiers. Regular ones start with a letter and don't contain any colons.
Like: a, abs, do-something, what?!, abc123

They are used for regular prefix function calls and don't trigger any special parsing, so you can just stick them anywhere.

Operators start with a punctuation character, like: +, -, !can-have-letters-too!, !@#%#$$&$

An operator name will cause the parser to parse an infix expression. If you don't want that, and just want to use the name of an operator for something (for example, to pass it to a function), you can do so by surrounding it in (). This:

    (+) (1, 2)

is the same as:

    1 + 2

Keywords start with a letter and contain at least one colon. A colon by itself is also a keyword. Ex: if:, :, multiple:colons:in:one:

Like operators, keywords will trigger special parsing, so they must be put in parentheses if you just want to treat them like names.

Prefix functions
----------------

The basic Lark syntax looks much like C. A regular name followed by an expression will call that named function and pass it the given argument. Ex:

    square 123

Dotted functions
----------------

Lark also allows functions to be called using the dot operator. It's functionally equivalent to a regular function call, but has two minor differences: the function and argument are swapped, and the precedence is higher. For example:

    a.b c.d

is desugared to:

    (b(a))(d(c))

Swapping the argument and function may sound a bit arbitrary, but it allows you to write in an "object.method" style familiar to many OOP programmers. For example, instead of:

    length list

you can use:

    list.length

Operators
---------

Operators are parsed like Smalltalk's. All operators have the same precedence (no Dear Aunt Sally), and are parsed from left to right. The expression:

    a + b * c @#%! d

is equivalent to:

    @#%!(*(+(a, b), c), d)

Keywords
--------

Lark has keywords that work similarly to Smalltalk's. The parser will translate a series of keywords separated by arguments into a single keyword name followed by the list of arguments.
This:

    if: a then: b else: c

is parsed as:

    if:then:else:(a, b, c)

Lists
-----

A list can be created by separating expressions with commas, like so:

    1, 2, 3

Parentheses are not necessary, but are often useful since commas have low precedence. For example:

    this:               means:
    difference 1, 2     (difference(1), 2)
    difference (1, 2)   difference(1, 2)

Braces
------

A series of expressions separated by semicolons and surrounded by curly braces is syntactic sugar for a "do" function call. This gives us a simple syntax for expressing a sequence of things that are executed in order (which is what the "do" special form is for). So this:

    { print 1, 2; print 3, 4 }

is parsed as:

    do (print (1, 2), print (3, 4))

Note that the semicolons are *separators*, not *terminators*. The last expression does not have one after it.

Precedence
----------

The precedence rules follow Smalltalk. From highest to lowest: prefix functions, operators, keywords, then ",". So, the following:

    if: a < b then: c, d

is parsed as:

    (if:then:(<(a, b), c), d)

That's the basic syntax. The Lark parser reads that and immediately translates it to the much simpler core syntax.

Running Lark
============

Lark is a Java application stored in a single jar. You can run it by doing either:

    $ java -jar lark.jar

Or, if you have bash installed, just:

    $ ./lark

This starts up the Lark REPL, like this:

    lark v0.0.0
    -----------
    Type 'q' and press Enter to quit.
    >

From there you can enter expressions, which the interpreter will evaluate and print the result of. You can also tell Lark to load and interpret a script stored in a separate file by passing the path to the file to lark:

    $ ./lark path/to/my/file.lark

Built-in functions
==================

So what can you actually do with it? Not much, yet. Only a few built-in functions are implemented:

' <expr>
    The quote function simply returns its argument in unevaluated form.
    Ex:
    > ' print 123
    : print 123      (note: does not call the print function)

do <expr>
    If the expression is anything but a list, it simply evaluates it and returns. If it's a list, it evaluates each item in the list, in order, and then returns the evaluated value of the last item. In other words, it's Lark's version of a { } block in C.
    Ex:
    > do (print 1, print 2, print 3)
    1
    2
    3
    : ()

print <expr>
    Simply evaluates the argument and prints it to the console.

<params> => <body>
    Creates an anonymous function (a lambda). <params> should be either a single name or a list of parameter names. <body> is an expression that forms the body of the function.
    Ex:
    > (a => do (print a, print a)) 123
    123
    123
    : ()

def: <name> is: <expr>
    Binds a value to a variable in the current scope. <name> should be a single name. <expr> will be evaluated and the result bound to the name.
    Ex:
    > def: a is: 123
    : ()
    > a
    : 123

if: <condition> then: <expr>
if: <condition> then: <expr> else: <else>
    Evaluates the condition. If it evaluates to true, then it evaluates and returns <expr>. If it evaluates to false, it will either return () or, if an else: clause is given, evaluate and return that.
    Ex:
    > if: true then: print 1 else: print 2
    1
    : ()

bool? <expr>
int? <expr>
list? <expr>
name? <expr>
unit? <expr>
    Evaluates <expr>, then returns true if it is the given type. Note that ().list? returns *true* because unit is the empty list.
    Ex:
    > 123.int?
    : true
    > ('foo).name?
    : true
    > 123.list?
    : false

<int> <expr>
    Evaluates <expr>, expecting it to be a list. Returns the item at the <int> position in the list (zero-based). Basically, this means an int is a function you can pass a list to, to get an item from it.
    Ex:
    > 0 (1, 2, 3)
    : 1
    > (4, 5, 6, 7).3
    : 7

count <expr>
    Evaluates <expr>, expecting it to be a list. Returns the number of items in the list.
    Ex:
    > (1, 2, 3).count
    : 3

func <expr>
    Evaluates <expr>, expecting it to be a call. Returns the function being called.
    Ex:
    > (' square 123).func
    : square

arg <expr>
    Evaluates <expr>, expecting it to be a call. Returns the argument being passed to the function.
    Ex:
    > (' square 123).arg
    : 123
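The two desugaring rules described above (uniform-precedence infix operators and Smalltalk-style keyword collection) can be mimicked in a few lines. This is an illustrative Python sketch, not the actual Java parser; the function names and string representation are made up for the example.

```python
def fold_infix(first, rest):
    # All operators share one precedence and associate left to right,
    # so "a + b * c @#%! d" folds into "@#%!(*(+(a, b), c), d)".
    expr = first
    for op, operand in rest:
        expr = f"{op}({expr}, {operand})"
    return expr

def desugar_keywords(pairs):
    # A run of keyword/argument pairs collapses into one keyword name
    # followed by the argument list: "if: a then: b else: c"
    # becomes the name "if:then:else:" applied to (a, b, c).
    name = ''.join(keyword for keyword, _ in pairs)
    args = [argument for _, argument in pairs]
    return name, args
```

For instance, `fold_infix('a', [('+', 'b'), ('*', 'c'), ('@#%!', 'd')])` reproduces the Operators section's example, and `desugar_keywords([('if:', 'a'), ('then:', 'b'), ('else:', 'c')])` reproduces the Keywords one.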
SimFin - Simple financial data for Python

SimFin makes it easy to obtain and use financial and stock-market data in Python. It automatically downloads share-prices and fundamental data from the SimFin server, saves the data to disk for future use, and loads the data into Pandas DataFrames.

Example

Once the simfin package has been installed (see below), the following Python program will automatically download all Income Statements for US companies, and print the Revenue and Net Income for Microsoft.

    import simfin as sf
    from simfin.names import *

    # Set your API-key for downloading data.
    # If the API-key is 'free' then you will get the free data,
    # otherwise you will get the data you have paid for.
    # See for what data is free and how to buy more.
    sf.set_api_key('free')

    # Set the local directory where data-files are stored.
    # The dir will be created if it does not already exist.
    sf.set_data_dir('~/simfin_data/')

    # Load the annual Income Statements for all companies in USA.
    # The data is automatically downloaded if you don't have it already.
    df = sf.load_income(variant='annual', market='us')

    # Print all Revenue and Net Income for Microsoft (ticker MSFT).
    print(df.loc['MSFT', [REVENUE, NET_INCOME]])

This produces the following output:

                     Revenue   Net Income
    Report Date
    2008-06-30  6.042000e+10  17681000000
    2009-06-30  5.843700e+10  14569000000
    2010-06-30  6.248400e+10  18760000000
    2011-06-30  6.994300e+10  23150000000
    2012-06-30  7.372300e+10  16978000000
    2013-06-30  7.784900e+10  21863000000
    2014-06-30  8.683300e+10  22074000000
    2015-06-30  9.358000e+10  12193000000
    2016-06-30  9.115400e+10  20539000000
    2017-06-30  9.657100e+10  25489000000
    2018-06-30  1.103600e+11  16571000000
    2019-06-30  1.258430e+11  39240000000

We can also load the daily share-prices and plot the closing share-price for Microsoft (ticker MSFT):

    # Load daily share-prices for all companies in USA.
    # The data is automatically downloaded if you don't have it already.
    df_prices = sf.load_shareprices(market='us', variant='daily')

    # Plot the closing share-prices for ticker MSFT.
    df_prices.loc['MSFT', CLOSE].plot(grid=True, figsize=(20,10), title='MSFT Close')

This produces the following image: (plot of the MSFT closing share-price, omitted here)

Documentation

Installation

The best way to install simfin and use it in your own project is to use a virtual environment. You write the following in a Linux terminal:

    virtualenv simfin-env

You can also use Anaconda instead of a virtualenv:

    conda create --name simfin-env python=3

Then you can install the simfin package inside that virtual environment:

    source activate simfin-env
    pip install simfin

If the last command fails, or if you want to install the latest development version from this GitHub repository, then you can run the following instead:

    pip install git+

Now try and put the above example in a file called test.py and run:

    python test.py

When you are done working on the project you can deactivate the virtualenv:

    source deactivate

Development

If you want to modify your own version of the simfin package, then you should clone the GitHub repository to your local disk, using this command in a terminal:

    git clone

This will create a directory named simfin on your disk. Then you need to create a new virtual environment, where you install your local copy of the simfin package using these commands:

    conda create --name simfin-dev python=3
    source activate simfin-dev
    cd simfin
    pip install --editable .

You should now be able to edit the files inside the simfin directory and use them whenever you have a Python module that imports the simfin package, while you have the virtual environment simfin-dev active.

Testing

Two kinds of tests are provided with the simfin package:

Unit Tests

Unit-tests ensure the various functions of the simfin package can run without raising exceptions. The unit-tests generally do not test whether the data is valid.
These tests are mainly used by developers when they make changes to the simfin package. The unit-tests are run with the following commands from the root directory of the simfin package:

    source activate simfin-env
    pytest

Data Tests

Data-tests ensure the bulk-data downloaded from the SimFin servers is valid. These tests are mainly used by SimFin's database admin to ensure the data is always valid, but the end-user may also run these tests to ensure the downloaded data is valid.

First you need to install nbval, which enables support for Jupyter Notebooks in the pytest framework. This is not automatically installed with the simfin package, so as to keep the number of dependencies minimal for normal users of simfin. To install nbval, run the following commands:

    source activate simfin-env
    pip install nbval

Then you can run the following commands from the root directory of the simfin package to execute both the unit-tests and data-tests:

    pytest --nbval-lax

The following command only runs the data-tests:

    pytest --nbval-lax -v tests/test_bulk_data.ipynb

More Tests

The tutorials provide more realistic use-cases of the simfin package, and they can also be run and tested automatically using pytest. See the tutorials' README for details.

Credits

The database is created by SimFin. The Python API and download system was originally designed and implemented by Hvass Labs. Further development of the Python API by SimFin and the community.

License (MIT)

This is published under the MIT License, which allows very broad use for both academic and commercial purposes. You are very welcome to modify and use this source-code in your own project. Please keep a link to the original repository.
https://libraries.io/pypi/simfin
So, I have some code that queries a data source, and that data source sends me back an XML message. I have to parse the XML message so I can store information from it into a relational database. So, let’s say my XML response looks like this: <xml> <response> <results=2> <result> <fname>Brian</fname> <lname>Jones</lname> <gender>M</gender> <office_phone_ext>777</office_phone_ext> <mobile_phone>201-555-1212</mobile_phone> </result> <result> <fname>Molly</fname> <lname>Jones</lname> <home_phone>201-555-1234</home_phone> </result> </results> </xml> So, as you can see, the attributes for each result returned for a query can differ, and if a result doesn’t have a value for some attribute, the corresponding xml element isn’t included at all for that result. If it were just 2 or 3 attributes, I could easily enough get around it by doing something like this: def __init__(self, xmlresult): self.xmlresult = xmlresult if self.xmlresult.xpath('fname') is not None: self.fname = self.xmlresult.xpath('fname') if self.xmlresult.xpath('lname') is not None: self.lname = self.xmlresult.xpath('lname') Like I said, if it were just a few things I needed to check for, I’d do it this way and be done with it. It’s not just a few though — it’s like 50 attributes. Now what? I decided lxml.objectify would be a great way to go. It would allow me to access these things as object attributes, which should mean I can do something like this: self.fname = getattr(self.xmlresult, 'fname', None) self.lname = getattr(self.xmlresult, 'lname', None) ... So, you *can* do this, technically speaking. Trouble is, you’re asking for an attribute of an ObjectifiedElement object, and when you do that, it returns an object that is not a native Python datatype, which I did not realize when I first started using lxml.objectify. So, in the above, ‘self.fname’ will not be a Python string — it’ll be an lxml.objectify.StringElement object. 
Of course, my database driver, my ‘join()’ operations, and everything else in my code that relies on native Python datatypes is now broken. What I actually need to do is get the ‘.pyval’ attribute of self.xmlresult.fname, if that attribute exists at all. So, something that does what I mean, which is “self.fname = getattr(self.xmlresult, ‘fname.pyval’, None). And, of course, doing ‘getattr(self.xmlresult, ‘fname’, None).pyval’ doesn’t work because None has no attribute ‘pyval’. I’ve tried a couple of other hacks too, but I’ve learned enough Python to know that if it feels like a hack, there’s probably a better way. But I can’t find that better way. Ideas?
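For what it's worth, one direction (a sketch, not from the original post; the element names come from the example XML): the stdlib's xml.etree.ElementTree already returns native strings, with a default for missing elements, and for lxml.objectify the same "do what I mean" behavior can be wrapped in a tiny helper that unwraps .pyval only when the attribute actually exists.

```python
import xml.etree.ElementTree as ET

XML = """<response>
  <result>
    <fname>Brian</fname>
    <lname>Jones</lname>
  </result>
</response>"""

result = ET.fromstring(XML).find('result')

# findtext() returns a native str, or the default when the element is absent.
fname = result.findtext('fname')            # 'Brian'
home_phone = result.findtext('home_phone')  # None

# For lxml.objectify, the same idea as a helper: fetch the element if it
# exists, then unwrap its .pyval into a native Python value; otherwise
# return the default instead of an ObjectifiedElement.
def native(obj, name, default=None):
    element = getattr(obj, name, None)
    if element is None:
        return default
    return getattr(element, 'pyval', element)
```

With the helper, `self.fname = native(self.xmlresult, 'fname')` yields either a native value or None, so downstream `join()` calls and database drivers see only built-in types.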
http://protocolostomy.com/2010/02/01/seeking-elegant-pythonic-solution/
Hi, I am currently using Altium Designer at work and Eagle at home. Is it possible to export the Altium *.PcbDoc files to use them with Eagle somehow? It is possible to export *.brd files with Altium, but they can't be opened with Eagle?!? Thanks in advance ^^ Literally using them is not possible. But importing them as a static graphic element via gerber files seems feasible. Anyway, Altium offers a license model that allows you to use an office license at home, on condition that both copies are not running at the very same time. I can't put my finger on the exact name, but I'm positive I read about it at some point. Hello, you can use a copy of AD at home WHEN this licence is not in use at work. But please check the EULA. Jens There is a forum at Altium. forums.altium.com The problem is still unsolved. But if you HAVE Altium somewhere, you can export your designs to ACCEL ASCII format. Having them in ASCII, you can import them into EAGLE via the import-accel.ulp ULP script. lcd
https://embdev.net/topic/178883
requests 1.1.5 http/ftp client library, inspired by python-requests To use this package, run the following command in your project's root directory: dlang-requests HTTP client library, inspired by python-requests, with these goals: - small memory footprint - performance - simple, high level API - native D implementation Table of contents - Library configurations (std.socket and vibe sockets) - Levels of API - Quick start - Requests with parameters - Posting data - Posting url-encoded - Posting multipart - Posting raw data - Properties of Request structure - Streaming response - Modifying request (headers, etc.) - Interceptors - SocketFactory - SSL - FTP - Request pool Library configurations This library can use either the standard std.socket library or vibe.d for network IO. By default this library uses the standard std.socket configuration called std. To build the vibed variant, use the vibed configuration: "dependencies": { "requests": "~>1" }, "subConfigurations": { "requests": "vibed" } Two levels of API At the highest API level you are interested only in retrieving or posting document content. Use it when you don't need to add headers, set timeouts, or change any other defaults, and when you are not interested in result codes or any other details of the request and/or response. This level provides the following calls: getContent, postContent, putContent and patchContent. What you receive is a Buffer, which you can use as a range, but you can easily convert it to ubyte[] using the .data property. These calls also have ByLine counterparts which will lazily receive the response from the server, split it on \n and convert it into an InputRange of ubyte[] (so that something like getContentByLine("").map!"cast(string)a".filter!(a => a.canFind("\"id\": 28")) should work). At the next level we have the Request structure, which encapsulates all details and settings required for http(s)/ftp transfer. Operating on a Request instance you can change many aspects of the interaction with the http/ftp server.
The most important API calls are Request.get(), Request.post or Request.exec!"method" and so on (you will find examples below). You will receive a Response with all available details - document body, status code, headers, timings, etc. Windows ssl notes In case requests can't find the OpenSSL library on Windows, here are several steps that can help: - From the slproweb site, download the latest Win32OpenSSL_Light installer binaries for Windows. - Install it. Important: allow the installer to install the libraries in system folders. See step-by-step instructions here. Make a simple request Making HTTP/HTTPS/FTP requests with dlang-requests is simple. First of all, install and import the requests module: import requests; If you only need the content of some webpage, you can use getContent(): auto content = getContent(""); getContent() will fetch the complete document into a buffer and return this buffer to the caller. content can be converted to a string, or can be used as a range. For example, if you need to count lines in content, you can directly apply splitter() and count: writeln(content.splitter('\n').count); Count non-empty lines: writeln(content.splitter('\n').filter!"a!=``".count); Actually, the buffer is a ForwardRange with length and random access, so you can apply many algorithms directly to it. Or you can extract the data in the form of ubyte[], using the data property: ubyte[] data = content.data; Request with parameters dlang-requests provides a simple way to make a request with parameters. For example, suppose you have to simulate a search query for a person: name - person name, age - person age, and so on...
You can pass all parameters to get using queryParams() helper: auto content = getContent("", queryParams("name", "any name", "age", 42)); If you check httpbin response, you will see that server recognized all parameters: { "args": { "age": "42", "name": "any name" }, "headers": { "Accept-Encoding": "gzip, deflate", "Host": "httpbin.org", "User-Agent": "dlang-requests" }, "origin": "xxx.xxx.xxx.xxx", "url": " name&age=42" } Or, you can pass dictionary: auto content = getContent("", ["name": "any name", "age": "42"]); Which gives you the same response. If getContent() fails getContent() (and any other API call) can throw the following exceptions: ConnectErrorwhen it can't connect to document origin for some reason (can't resolve name, connection refused, ...) TimeoutExceptionwhen any single operation (connect, receive, send) timed out. ErrnoExceptionwhen received ErrnoExceptionfrom any underlying call. RequestExceptionin some other cases. Posting data to server The easy way to post with dlang-requests is postContent(). There are several ways to post data to server: - Post to web-form using application/x-www-form-urlencoded- for posting short data. - Post to web-form using multipart/form-data- for large data and file uploads. - Post data to server without forms. 
Form-urlencode Call postContent() in the same way as getContent() with parameters: import std.stdio; import requests; void main() { auto content = postContent("", queryParams("name", "any name", "age", 42)); writeln(content); } Output: { "args": {}, "data": "", "files": {}, "form": { "age": "42", "name": "any name" }, "headers": { "Accept-Encoding": "gzip, deflate", "Content-Length": "22", "Content-Type": "application/x-www-form-urlencoded", "Host": "httpbin.org", "User-Agent": "dlang-requests" }, "json": null, "origin": "xxx.xxx.xxx.xxx", "url": "" } Multipart form Posting multipart forms requires MultipartForm structure to be prepared: import std.stdio; import std.conv; import std.string; import requests; void main() { MultipartForm form; form.add(formData("name", "any name")); form.add(formData("age", to!string(42))); form.add(formData("raw data", "some bytes".dup.representation)); auto content = postContent("", form); writeln("Output:"); writeln(content); } Output: { "args": {}, "data": "", "files": {}, "form": { "age": "42", "name": "any name", "raw data": "some bytes" }, "headers": { "Accept-Encoding": "gzip, deflate", "Content-Length": "332", "Content-Type": "multipart/form-data; boundary=e3beab0d-d240-4ec1-91bb-d47b08af5999", "Host": "httpbin.org", "User-Agent": "dlang-requests" }, "json": null, "origin": "xxx.xxx.xxx.xxx", "url": "" } Here is an example of posting a file: import std.stdio; import std.conv; import std.string; import requests; void main() { MultipartForm form; form.add(formData("file", File("test.txt", "rb"), ["filename":"test.txt", "Content-Type": "text/plain"])); form.add(formData("age", "42")); auto content = postContent("", form); writeln("Output:"); writeln(content); } Output: { "args": {}, "data": "", "files": { "file": "this is test file\n" }, "form": { "age": "42" }, "headers": { "Accept-Encoding": "gzip, deflate", "Content-Length": "282", "Content-Type": "multipart/form-data; boundary=3fd7317f-7082-4d63-82e2-16cfeaa416b4", "Host": 
"httpbin.org", "User-Agent": "dlang-requests" }, "json": null, "origin": "xxx.xxx.xxx.xxx", "url": "" } Posting raw data without forms postContent() can post from InputRanges. For example, to post file content: import std.stdio; import requests; void main() { auto f = File("test.txt", "rb"); auto content = postContent("", f.byChunk(5), "application/binary"); writeln("Output:"); writeln(content); } Output: { "args": {}, "data": "this is test file\n", "files": {}, "form": {}, "headers": { "Accept-Encoding": "gzip, deflate", "Content-Length": "18", "Content-Type": "application/binary", "Host": "httpbin.org", "User-Agent": "dlang-requests" }, "json": null, "origin": "xxx.xxx.xxx.xxx", "url": "" } Or, if you keep your data in memory, you can use something like this: auto content = postContent("", "ABCDEFGH", "application/binary"); Those are all details about simple API with default request parameters. The next section will describe a lower-level interface through Request structure. Request structure When you need to configure request details (like timeouts and other limits, keep-alive, ssl properties), or response details (code, headers), you have to use Request and Response structures: Request rq = Request(); Response rs = rq.get(""); assert(rs.code==200); By default Keep-Alive requests are used, so you can reuse the connection: import std.stdio; import requests; void main() { auto rq = Request(); rq.verbosity = 2; auto rs = rq.get(""); writeln(rs.responseBody.length); rs = rq.get(""); writeln(rs.responseBody.length); } In the latter case rq.get() will reuse previous connection to server. Request will automatically reopen connection when host, protocol or port change (so it is safe to send different requests through single instance of Request). It also recovers when server prematurely closes keep-alive connection. 
You can turn keepAlive off when needed: rq.keepAlive = false; For anything other than the defaults, you can configure the Request structure for keep-alive and redirect handling, add/remove headers, and set the IO buffer size and the maximum size of the response headers and body. For example, to authorize with basic authentication, use the following code (works both for HTTP and FTP URLs): rq = Request(); rq.authenticator = new BasicAuthentication("user", "passwd"); rs = rq.get(""); Here is a short description of some Request options you can set: *) Throws an exception when the limit is reached. Request properties that are read-only: Redirect and connection optimisations Request keeps the results of permanent redirections in a small cache. It also keeps a map (schema,host,port) -> connection of opened connections, for subsequent use. Streaming server response When you plan to receive something really large in response (a file download), you don't want to receive gigabytes of content into the response buffer. With useStreaming, you can receive the response from the server as an input range. Elements of the range are chunks of data (of type ubyte[]).
contentLength and contentReceived can be used to monitor progress: import std.stdio; import requests; void main() { auto rq = Request(); rq.useStreaming = true; rq.verbosity = 2; auto rs = rq.get(""); auto stream = rs.receiveAsRange(); while(!stream.empty) { writefln("Received %d bytes, total received %d from document legth %d", stream.front.length, rs.contentReceived, rs.contentLength); stream.popFront; } } Output: > GET /image/jpeg HTTP/1.1 > Connection: Keep-Alive > User-Agent: dlang-requests > Accept-Encoding: gzip, deflate > Host: httpbin.org > < HTTP/1.1 200 OK < server: nginx < date: Thu, 09 Jun 2016 16:25:57 GMT < content-type: image/jpeg < content-length: 35588 < connection: keep-alive < access-control-allow-origin: * < access-control-allow-credentials: true < 1232 bytes of body received < 1448 bytes of body received Received 2680 bytes, total received 2680 from document legth 35588 Received 2896 bytes, total received 5576 from document legth 35588 Received 2896 bytes, total received 8472 from document legth 35588 Received 2896 bytes, total received 11368 from document legth 35588 Received 1448 bytes, total received 12816 from document legth 35588 Received 1448 bytes, total received 14264 from document legth 35588 Received 1448 bytes, total received 15712 from document legth 35588 Received 2896 bytes, total received 18608 from document legth 35588 Received 2896 bytes, total received 21504 from document legth 35588 Received 2896 bytes, total received 24400 from document legth 35588 Received 1448 bytes, total received 25848 from document legth 35588 Received 2896 bytes, total received 28744 from document legth 35588 Received 2896 bytes, total received 31640 from document legth 35588 Received 2896 bytes, total received 34536 from document legth 35588 Received 1052 bytes, total received 35588 from document legth 35588 With verbosity >= 3, you will also receive a dump of each data portion received from server: 00000 48 54 54 50 2F 31 2E 31 20 32 30 30 20 4F 4B 
0D |HTTP/1.1 200 OK.| 00010 0A 53 65 72 76 65 72 3A 20 6E 67 69 6E 78 0D 0A |.Server: nginx..| 00020 44 61 74 65 3A 20 53 75 6E 2C 20 32 36 20 4A 75 |Date: Sun, 26 Ju| 00030 6E 20 32 30 31 36 20 31 36 3A 31 36 3A 30 30 20 |n 2016 16:16:00 | 00040 47 4D 54 0D 0A 43 6F 6E 74 65 6E 74 2D 54 79 70 |GMT..Content-Typ| 00050 65 3A 20 61 70 70 6C 69 63 61 74 69 6F 6E 2F 6A |e: application/j| 00060 73 6F 6E 0D 0A 54 72 61 6E 73 66 65 72 2D 45 6E |son..Transfer-En| 00070 63 6F 64 69 6E 67 3A 20 63 68 75 6E 6B 65 64 0D |coding: chunked.| 00080 0A 43 6F 6E 6E 65 63 74 69 6F 6E 3A 20 6B 65 65 |.Connection: kee| ... Just for fun: with streaming you can forward content between servers in just two code lines. postContent will automatically receive next data portion from source and send it to destination: import requests; import std.stdio; void main() { auto rq = Request(); rq.useStreaming = true; auto stream = rq.get("").receiveAsRange(); auto content = postContent("", stream); writeln(content); } You can use dlang-requests in parallel tasks (but you can't share the same Request structure between threads): import std.stdio; import std.parallelism; import std.algorithm; import std.string; import core.atomic; import requests; immutable auto urls = [ "", "", "", "", "", "", "", ]; void main() { defaultPoolThreads(5); shared short lines; foreach(url; parallel(urls)) { atomicOp!"+="(lines, getContent(url).splitter("\n").count); } assert(lines == 287); } File download example Note: use "wb" and rawWrite with file. import requests; import std.stdio; void main() { Request rq = Request(); Response rs = rq.get(""); File f = File("123.png", "wb"); // do not forget to use both "w" and "b" modes when open file. f.rawWrite(rs.responseBody.data); f.close(); } Loading whole document to memory and then save it might be impractical or impossible. 
Use streams in this case: import requests; import std.stdio; void main() { Request rq = Request(); rq.useStreaming = true; auto rs = rq.get(""); auto stream = rs.receiveAsRange(); File file = File("123.png", "wb"); while(!stream.empty) { file.rawWrite(stream.front); stream.popFront; } file.close(); } vibe.d You can safely use dlang-requests with vibe.d. When dlang-requests is compiled with support for vibe.d sockets ( --config=vibed), each call to dlang-requests API can block only the current fiber, not the thread: import requests, vibe.d; shared static this() { void taskMain() { logInfo("Task created"); auto r1 = getContent(""); logInfo("Delay request finished"); auto r2 = getContent(""); logInfo("Google request finished"); } setLogFormat(FileLogger.Format.threadTime, FileLogger.Format.threadTime); for(size_t i = 0; i < 3; i++) runTask(&taskMain); } Output: [F7EC2FAB:F7ECD7AB 2016.07.05 16:55:54.115 INF] Task created [F7EC2FAB:F7ECD3AB 2016.07.05 16:55:54.116 INF] Task created [F7EC2FAB:F7ED6FAB 2016.07.05 16:55:54.116 INF] Task created [F7EC2FAB:F7ECD7AB 2016.07.05 16:55:57.451 INF] Delay request finished [F7EC2FAB:F7ECD3AB 2016.07.05 16:55:57.464 INF] Delay request finished [F7EC2FAB:F7ED6FAB 2016.07.05 16:55:57.474 INF] Delay request finished [F7EC2FAB:F7ECD7AB 2016.07.05 16:55:57.827 INF] Google request finished [F7EC2FAB:F7ECD3AB 2016.07.05 16:55:57.836 INF] Google request finished [F7EC2FAB:F7ED6FAB 2016.07.05 16:55:57.856 INF] Google request finished Adding/replacing request headers Use string[string] and addHeaders() method to add or replace some request headers. User-supplied headers override headers, created by library code, so you have to be careful adding common headers, like Content-Type, Content-Length, etc.. 
import requests; void main() { auto rq = Request(); rq.verbosity = 2; rq.addHeaders(["User-Agent": "test-123", "X-Header": "x-value"]); auto rs = rq.post("", `{"a":"b"}`, "application/x-www-form-urlencoded"); } Output: > POST /post HTTP/1.1 > Content-Length: 9 > Connection: Keep-Alive > User-Agent: test-123 > Accept-Encoding: gzip, deflate > Host: httpbin.org > X-Header: x-value > Content-Type: application/x-www-form-urlencoded > < HTTP/1.1 200 OK < server: nginx ... SSL settings HTTP requests can be configured for SSL options: you can enable or disable remote server certificate verification, set key and certificate to use for authorizing to remote server: - sslSetVerifyPeer(bool) - turn ssl peer verification on or off (on by default since v0.8.0) - sslSetKeyFile(string) - load client key from file - sslSetCertFile(string) - load client cert from file - sslSetCaCert(string) - load server CA cert for private or self-signed server certificates import std.stdio; import requests; import std.experimental.logger; void main() { globalLogLevel(LogLevel.trace); auto rq = Request(); rq.sslSetKeyFile("client01.key"); // set key file rq.sslSetCertFile("client01.crt"); // set cert file auto rs = rq.get(""); writeln(rs.code); writeln(rs.responseBody); } Please note that with vibe.d you have to add the following call rq.sslSetCaCert("/opt/local/etc/openssl/cert.pem"); with path to CA cert file (location may differ for different OS or openssl packaging). By default ssl peer verification turned ON. This can lead to problems in case you use server-side self-signed certificates. To fix, you have either add server ca.crt to trusted store on local side(see for example), or use sslSetCaCert to add it for single requests call( rq.sslSetCaCert("ca.crt");), or just disable peer verification with rq.sslSetVerifyPeer(false); FTP requests You can use the same structure to make ftp requests, both get and post. HTTP-specific methods do not work if request uses ftp scheme. 
Here is an example: import std.stdio; import requests; void main() { auto rq = Request(); rq.verbosity = 3; rq.authenticator = new BasicAuthentication("login", "password"); auto f = File("test.txt", "rb"); auto rs = rq.post("", f.byChunk(1024)); writeln(rs.code); rs = rq.get(""); writeln(rs.code); } The second argument for FTP posts can be anything that can be cast to ubyte[], or any InputRange with an element type like ubyte[]. If the path in the post request doesn't exist, it will try to create all the required directories. As with HTTP, you can make several FTP requests using the same Request structure - it will reuse the established connection (and authorization as well). Interceptors Interceptors provide a way to modify, log, or cache requests. They can form a chain attached to the Request structure so that each request will pass through the whole chain. Each interceptor receives a request as input, does whatever it needs, and passes the request on to the handler, which finally serves the request and returns the Response. Here is a small example of how interceptors can be used. Consider a situation where you have a main app and some module. Main code: import std.stdio; import mymodule; void main() { auto r = mymodule.doSomething(); writeln(r.length); } module: module mymodule; import requests; auto doSomething() { return getContent(""); } One day you decide that you need to log every HTTP request to external services. One solution is to add logging code to each function of mymodule where external HTTP calls are executed. This can require a lot of work and code changes, and sometimes it is not even possible. Another, more effective, solution is to use interceptors.
First we have to create a logger class: class LoggerInterceptor : Interceptor { Response opCall(Request r, RequestHandler next) { writefln("Request %s", r); auto rs = next.handle(r); writefln("Response %s", rs); return rs; } } Then we can instrument every request with this call: import std.stdio; import requests; import mymodule; class LoggerInterceptor : Interceptor { Response opCall(Request r, RequestHandler next) { writefln("Request %s", r); auto rs = next.handle(r); writefln("Response %s", rs); return rs; } } void main() { requests.addInterceptor(new LoggerInterceptor()); auto r = mymodule.doSomething(); writeln(r.length); } The only change required is the call to addInterceptor(). You may intercept a single Request structure (instead of the whole requests module) by attaching interceptors directly to this structure: Request rq; rq.addInterceptor(new LoggerInterceptor()); An interceptor can change the Request r, using Request's getters/setters, before passing it to the next handler. For example, authentication methods can be added using interceptors and header injection. You can implement some kind of cache and return a cached response immediately. To change POST data in the interceptors you can use makeAdapter (since v1.1.2). Example: import std.stdio; import std.string; import std.algorithm; import requests; class I : Interceptor { Response opCall(Request rq, RequestHandler next) { rq.postData = makeAdapter(rq.postData.map!"toUpper(cast(string)a)"); auto rs = next.handle(rq); return rs; } } void main() { auto f = File("text.txt", "rb"); Request rq; rq.verbosity = 1; rq.addInterceptor(new I); auto rs = rq.post("", f.byChunk(5)); writeln(rs.responseBody); } SocketFactory If configured, each time Request needs a new connection it will call the factory to create an instance of NetworkStream. This way you can implement (outside of this library) a lot of useful things: various proxies, unix-socket connections, etc. Response structure This structure provides details about the received response.
The most frequently needed parts of Response are: code - the HTTP or FTP response code as received from the server. responseBody - contains the complete document body when no streaming is in use. You can't use it when in streaming mode. responseHeaders - response headers in the form of string[string] (not available for FTP requests) receiveAsRange - if you set useStreaming in the Request, then receiveAsRange will provide elements (type ubyte[]) of an InputRange while receiving data from the server. Requests Pool When you have a large number of requests to execute, you can use a request pool to speed things up. A pool is a fixed set of worker threads, which receives requests in the form of Jobs and returns Results. Each Job can be configured for a URL, method, data (for POST requests) and some other parameters. pool acts as a parallel map from Job to Result - it consumes an InputRange of Jobs, and produces an InputRange of Results as fast as it can. It is important to note that pool does not preserve result order. If you need to tie jobs and results somehow, you can use the opaque field of Job. Here is an example usage: import std.algorithm; import std.datetime; import std.string; import std.range; import requests; void main() { Job[] jobs = [ Job("").addHeaders([ "X-Header": "X-Value", "Y-Header": "Y-Value" ]), Job(""), Job(""), Job("") .maxRedirects(2), Job(""), Job("") .method("POST") // change default GET to POST .data("test".representation()) // attach data for POST .opaque("id".representation), // opaque data - you will receive the same in Result Job("") .timeout(1.seconds), // set timeout to 1.seconds - this request will throw an exception and fail Job(""), ]; auto count = jobs. pool(6). filter!(r => r.code==200).
count(); assert(count == jobs.length - 2, "pool test failed"); iota(20) .map!(n => Job("") .data("%d".format(n).representation)) .pool(10) .each!(r => assert(r.code==200)); } One more example, with more features combined: import requests; import std.stdio; import std.string; void main() { Job[] jobs_array = [ Job(""), Job("").method("POST").data("test".representation()).addHeaders(["a":"b"]), Job("", Job.Method.POST, "test".representation()).opaque([1,2,3]), Job("").maxRedirects(2), ]; auto p = pool(jobs_array, 10); while(!p.empty) { auto r = p.front; p.popFront; switch(r.flags) { case Result.OK: writeln(r.code); writeln(cast(string)r.data); writeln(r.opaque); break; case Result.EXCEPTION: writefln("Exception: %s", cast(string)r.data); break; default: continue; } writeln("---"); } } Output: 2016-12-29T10:22:00.861:streams.d:connect:973 Failed to connect to 0.0.0.0:9998(0.0.0.0:9998): Unable to connect socket: Connection refused 2016-12-29T10:22:00.861:streams.d:connect:973 Failed to connect to 0.0.0.0:9998(0.0.0.0:9998): Unable to connect socket: Connection refused Exception: Can't connect to 0.0.0.0:9998 --- 200 { "args": {}, "data": "test", "files": {}, "form": {}, "headers": { "A": "b", "Accept-Encoding": "gzip, deflate", "Content-Length": "4", "Content-Type": "application/octet-stream", "Host": "httpbin.org", "User-Agent": "dlang-requests" }, "json": null, "origin": "xxx.xxx.xxx.xxx", "url": "" } [] --- 200 { "args": {}, "data": "test", "files": {}, "form": {}, "headers": { "Accept-Encoding": "gzip, deflate", "Content-Length": "4", "Content-Type": "application/octet-stream", "Host": "httpbin.org", "User-Agent": "dlang-requests" }, "json": null, "origin": "xxx.xxx.xxx.xxx", "url": "" } [1, 2, 3] --- Exception: 2 redirects reached maxRedirects 2. --- Job methods Result fields Pool limitations - Currently it doesn't work under vibe.d- use vibe.dparallelisation. - It limits you in tuning request (e.g. 
you can add authorization only through addHeaders(), you can't tune SSL parameters, etc). Job's and Result's data are immutable byte arrays (as it uses send/receive for data exchange). International Domain names dlang-requests supports IDNA through the idna package. It provides correct conversion between unicode domain names and punycode, but has limited ability to check names for standard compliance. Registered by Igor Khasilev - ikod/dlang-requests - License: BSL-1.0 - Dependencies: cachetools
https://code.dlang.org/packages/requests?tab=info
react-stickynode A performant and comprehensive React sticky component. A sticky component wraps a sticky target and keeps the target in the viewport as the user scrolls the page. Most sticky components handle the case where the sticky target is shorter than the viewport, but not the case where a sticky target is taller than the viewport. The reason is that the expected behavior and implementation is much more complicated. react-stickynode handles not only the regular case but also the long sticky target case in a natural way. In the regular case, when scrolling the page down, react-stickynode will stick to the top of the viewport. But in the case of a taller sticky target, it will scroll along with the page until its bottom reaches the bottom of the viewport. In other words, it looks like the bottom of the viewport pulls the bottom of a sticky target down when scrolling the page down. On the other hand, when scrolling the page up, the top of the viewport pulls the top of a sticky target up. This behavior gives the content in a tall sticky target more chance to be shown. This is especially good for the case where many ads are in the right rail. Another highlight is that react-stickynode can handle the case where a sticky target uses percentage as its width unit. For a responsively designed page, it is especially useful. Features - Retrieve scrollTop only once for all sticky components. - Listen to throttled scrolling to have better performance. - Use rAF to update sticky status to have better performance. - Support top offset from the top of screen. - Support bottom boundary to stop sticky status. - Support any sticky target with various width units. Usage npm install react-stickynode The sticky uses Modernizr csstransforms3d and prefixed features to detect IE8/9, so it can downgrade not to use transform3d.
import Sticky from 'react-stickynode'; <Sticky enabled={true} top={50} bottomBoundary={1200}> <YourComponent /> </Sticky>; import Sticky from 'react-stickynode'; <Sticky top="#header" bottomBoundary="#content"> <YourComponent /> </Sticky>; Props Handling State Change You can be notified when the state of the sticky component changes by passing a callback to the onStateChange prop. The callback will receive an object in the format {status: CURRENT_STATUS}, with CURRENT_STATUS being an integer representing the status: You can access the statuses as static constants to use for comparison. import Sticky from 'react-stickynode'; const handleStateChange = (status) => { if (status.status === Sticky.STATUS_FIXED) { console.log('the component is sticky'); } }; <Sticky onStateChange={handleStateChange}> <YourComponent /> </Sticky>; Also Sticky supports children functions: import Sticky from 'react-stickynode'; <Sticky> {(status) => { if (status.status === Sticky.STATUS_FIXED) { return 'the component is sticky'; } if (status.status === Sticky.STATUS_ORIGINAL) { return 'the component in the original position'; } return 'the component is released'; }} </Sticky>; Freezing You can provide a function in the shouldFreeze prop which will tell the component to temporarily stop updating during prop and state changes, as well as ignore scroll and resize events. This function should return a boolean indicating whether the component should currently be frozen. Development Linting npm run lint Unit Test npm test Functional Test npm run func:local License This software is free to use under the BSD license. See the LICENSE file for license text and copyright information.
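Outside of JSX, the mapping from status object to behavior can be sketched as a plain function. This is a hedged illustration: the numeric values below follow the library's documented static constants (STATUS_ORIGINAL = 0, STATUS_RELEASED = 1, STATUS_FIXED = 2), but verify them against your installed version rather than hard-coding them in real code — prefer the Sticky.STATUS_* constants themselves.

```javascript
// Numeric status codes mirroring react-stickynode's static constants.
const STATUS_ORIGINAL = 0; // component is in its original position
const STATUS_RELEASED = 1; // component scrolls along with the page
const STATUS_FIXED = 2;    // component is fixed to the viewport

// Takes the same shape as an onStateChange callback argument: { status: <int> }.
function describeStatus(state) {
  switch (state.status) {
    case STATUS_FIXED:
      return 'the component is sticky';
    case STATUS_ORIGINAL:
      return 'the component is in the original position';
    default:
      return 'the component is released';
  }
}

console.log(describeStatus({ status: STATUS_FIXED })); // prints "the component is sticky"
```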
https://www.npmjs.com/package/react-stickynode
CC-MAIN-2022-21
refinedweb
541
53.41
Java Classes in Code Completion

By Geertjan-Oracle on May 14, 2011

I've been looking at the Code Completion Tutorial and updating it to 7.0. While doing so, I got some tips from Tomas Zezula, the NetBeans Java editor guru, on how to change that tutorial so that Java classes could be shown in the code completion box.

That's pretty cool, right? A code completion box inside an HTML file, showing Java classes, and narrowing as you type.

To achieve the above, start with the NetBeans Code Completion Tutorial and then fill the CompletionResultSet (in the CompletionProvider class) like this:

FileObject fo = getFO(doc);
ClassPath bootCp = ClassPath.getClassPath(fo, ClassPath.BOOT);
ClassPath compileCp = ClassPath.getClassPath(fo, ClassPath.COMPILE);
ClassPath sourcePath = ClassPath.getClassPath(fo, ClassPath.SOURCE);
final ClasspathInfo info = ClasspathInfo.create(bootCp, compileCp, sourcePath);
final Set<ElementHandle<TypeElement>> result =
        info.getClassIndex().getDeclaredTypes("", ClassIndex.NameKind.PREFIX,
                EnumSet.of(ClassIndex.SearchScope.SOURCE, ClassIndex.SearchScope.DEPENDENCIES));
for (ElementHandle<TypeElement> te : result) {
    String binaryName = te.getBinaryName();
    if (!binaryName.equals("") && binaryName.startsWith(filter)) {
        resultSet.addItem(new CountriesCompletionItem(te.getBinaryName(), startOffset, caretOffset));
    }
}

The helper method:

private FileObject getFO(Document doc) {
    Object sdp = doc.getProperty(Document.StreamDescriptionProperty);
    if (sdp instanceof FileObject) {
        return (FileObject) sdp;
    }
    if (sdp instanceof DataObject) {
        DataObject dobj = (DataObject) sdp;
        return dobj.getPrimaryFile();
    }
    return null;
}

For this code to work, you need dependencies (some of these are for other parts of the tutorial) on the Classpath APIs, Datasystems API, Editor Code Completion, File System API, Java Source, Lookup API, MIME Lookup API, Nodes API, Project API, and Utilities API. Note that the HTML file needs to be in a "src" folder in a Java project.
The "src" folder is a Java compilation unit, hence you can get the classpath as shown above from the HTML file simply because it is in the compilation unit.

Posted by Ralph Lance on May 23, 2011 at 06:13 AM PDT #

Posted by Ralph Lance on May 24, 2011 at 02:21 AM PDT #

I went through the tutorials and I still can't get it to work. I open up my NetBeans module, load a sample project containing an HTML file in the src location, and press Ctrl+Space; the method query(CompletionResultSet completionResultSet, Document document, int caretOffset) is invoked, however the document variable is null, which results in an exception. Any ideas of what could be wrong? Hope for a reply.

Posted by Kicek on January 11, 2012 at 09:51 PM PST #

No idea. Remove the 'build' folder of your project. Then put your project in a ZIP file and send it to me at geertjan dot wielenga at oracle dot com and I will investigate.

Posted by Geertjan on January 11, 2012 at 11:48 PM PST #

I needed that tutorial for different functionality. I already found the solution and I won't bother you with that. Thanks for the reply.

Posted by Kicek on January 12, 2012 at 01:47 AM PST #

Thank you for this tip, Geertjan. I want to extend code completion for JavaScript files. But if I do it as described by the tutorial, I always replace the code completion. What I want is very simple: I want to register some prototypes in my script engine and want to get the completion help for them. Is there any way to extend it? Oh - and I want to highlight the prototypes too. Is there also a simple way? Thanks in advance, Axel

Posted by Axel F on March 23, 2012 at 08:23 AM PDT #

Hello guys, I'm making an editor to support my own language keywords. I want to implement the Code Completion interface in my editor. I'm using Java on the NetBeans IDE. If someone has knowledge about this, please guide me. It's urgent and important. Thanks ...

Posted by guest on June 07, 2013 at 12:35 PM PDT #
https://blogs.oracle.com/geertjan/entry/java_classes_in_code_completion
Re: how to replace a substring in a string using C?

- From: Netocrat <netocrat@xxxxxxxxxxx>
- Date: Wed, 09 Nov 2005 04:21:35 GMT

On Tue, 08 Nov 2005 05:20:04 -0800, Mark F. Haigh wrote:
> Netocrat wrote:
[...]
> There are a couple of problems here. First, [i should be size_t not int].

Well spotted - no one else caught that one (I only picked it up after Dave Thompson's critique).

> Second, this code performs poorly even though it may look at a glance to
> be somewhat optimized.

Yes; I've since posted code that's an improvement - it doesn't use subscripting in the replacement loop and it uses memcmp rather than strstr - but I realise that both you and Skarmander are correct that character-by-character operations should be avoided. As Skarmander points out, my attempt to avoid redundant iteration did not in fact do that, since memcmp is called on every character.

[...]
> To give an idea of the impact that string parsing missteps can make on
> program performance, I fed "Much Ado About Nothing" (the official play
> of comp.lang.c) through your replace function, then through mine. The
> entire file (121K) was loaded into a buffer, then fed 10 times to the
> replace function as such:
>
> /* ... */
> fread(str, 1, size, fp);
> str[size] = '\0';
> for(i = 0; i < 10; i++) {
>     newstr = replace(str, "DON PEDRO", "NETOCRAT");
>     free(newstr);
> }
> /* ... */

I've written a benchmark program, which I've made available here:

It incorporates my fixed-up function, your function below and Skarmander's suggested snippet (which has an identical algorithm to yours although the expression is slightly different, and it uses memcpy where you've used strcpy). It also incorporates an optimised version which avoids calling strlen on the entire original string where possible - I posted a similar but less readable snippet in response to Dave T's post, and as you and S have pointed out, it's a better approach. Inspired by the appropriateness of the play, I downloaded a copy.
Here are some typical results (using the same compilation options as you except with -std=c99 and gcc version 3.3.6):

Filename   : /data/doc-dl/literature/Much_Ado_About_Nothing.txt
Iterations : 1000
Old        : Leonato
New        : Haigh
Function   : replace_net
Pre-length : 139007
Post-length: 138883
Duration   : 17.280000 seconds

Filename   : /data/doc-dl/literature/Much_Ado_About_Nothing.txt
Iterations : 1000
Old        : Leonato
New        : Haigh
Function   : replace_haigh
Pre-length : 139007
Post-length: 138883
Duration   : 1.250000 seconds

Filename   : /data/doc-dl/literature/Much_Ado_About_Nothing.txt
Iterations : 1000
Old        : Leonato
New        : Haigh
Function   : replace_opt
Pre-length : 139007
Post-length: 138883
Duration   : 1.160000 seconds

The fixed-up version of my original function is still about 15 times slower, but it's a lot better than it was. The optimisation of not calling strlen on the entire string cuts about 7% off the running time.

> The somewhat naive implementation I whipped up quickly is:

I like the clarity with which you've used looping expressions.

> char *replace(const char *str, const char *old, const char *new) {
>     char *ret, *r;
>     const char *p, *q;
>     size_t len_str = strlen(str);
          ^^^^^^^^^^^^^^
This can be avoided (I know you've already qualified this as a somewhat naive implementation).

>     size_t len_old = strlen(old);
>     size_t len_new = strlen(new);
>     size_t count;
>

If len_old == len_new, there's no need to run this loop and you can simply malloc len_str + 1 bytes.

>     for(count = 0, p = str; (p = strstr(p, old)); p += len_old)
>         count++;
>
>     ret = malloc(count * (len_new - len_old) + len_str + 1);
>     if(!ret)
>         return NULL;
>
>     for(r = ret, p = str; (q = strstr(p, old)); p = q + len_old) {
>         count = q - p;

As discussed elsethread, under C99 this invokes undefined behaviour when q - p > PTRDIFF_MAX. I vaguely recall discussions on the semantics under C89, but can't recall the conclusion.
Also under C99, count would be better declared ptrdiff_t, but it appears that you're writing to C89.

>     memcpy(r, p, count);
>     r += count;
>     strcpy(r, new);

As Dave T pointed out, memcpy is more appropriate here; but as compiled under gcc on my platform strcpy is very slightly faster than using memcpy - go figure.

>     r += len_new;
>     }
>     strcpy(r, p);
>     return ret;
> }
>
> Here are the timings on my machine. Both were compiled with the same
> flags on gcc 4 (-Wall -O2 -ansi -pedantic)

[results omitted]

> Even though the somewhat naive program traverses the input string
> multiple times, it's still a 99.9% execution time decrease.

Here's the optimised C99 version of the function used for the benchmark results above. The #error seems to me to be the best way of dealing with the possibility of UB - it alerts the programmer to it at compile time and encourages him/her to examine the comments in the code whilst removing it.

#include <string.h>
#include <stdlib.h>
#include <stddef.h>
#include <stdint.h>

#if PTRDIFF_MAX < SIZE_MAX
/* If the following line causes compilation to fail,
 * comment it out after taking note of its message. */
#error The replace function might invoke undefined \
behaviour for strings with length greater than \
PTRDIFF_MAX - see comments in the function body
#endif

char *replace(const char *str, const char *old, const char *new)
{
    char *ret, *r;
    const char *p, *q;
    size_t oldlen = strlen(old);
    size_t count, retlen, newlen = strlen(new);

    if (oldlen != newlen) {
        for (count = 0, p = str; (q = strstr(p, old)) != NULL; p = q + oldlen)
            count++;
        /* this is undefined if p - str > PTRDIFF_MAX */
        retlen = p - str + strlen(p) + count * (newlen - oldlen);
    } else
        retlen = strlen(str);

    ret = malloc(retlen + 1);
    if (ret == NULL)
        return NULL;

    for (r = ret, p = str; (q = strstr(p, old)) != NULL; p = q + oldlen) {
        /* this is undefined if q - p > PTRDIFF_MAX */
        memcpy(r, p, q - p);
        r += q - p;
        memcpy(r, new, newlen);
        r += newlen;
    }
    strcpy(r, p);

    return ret;
}
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2005-11/msg00738.html
VofaDev @VofaDev

Hello! I like to experiment with multiple programming languages! I try to use Rust when I can, though. I am also getting into pen testing!

Recent comments

xxpertHacker, 1 year ago:

Hey, just a small tip: a few of the standard library's string-related types and numerical types have user-defined-literal operators for creating them. std::string is among those types; there is a std::operator""s provided for constructing them from string literals.

```c++
std::string foo = std::string("foo");
std::string bar = std::string("bar");
// vs
using std::operator""s; // unnecessary in your case, since you already included the entire {namespace std}
std::string foo = "foo"s;
std::string bar = "bar"s;
```

Seeing as you passed string literals to the std::string constructor a few times, I thought that you might be interested in cutting down the code.

mollthecoder, 1 year ago:

OpenGL and AL shouldn't be too high on the priority list.
https://replit.com/@VofaDev?tab=community
Optimized ways to Read Large CSVs in Python

Hola! 🙋 In current times, data plays a very important role in analysis and in building ML/AI models. Data can be found in various formats such as CSVs, flat files, JSON, etc., which when huge make it difficult to read into memory. This blog revolves around handling tabular data in CSV format, which are comma-separated files.

Problem: Importing (reading) a large CSV file leads to an Out of Memory error. Not enough RAM to read the entire CSV at once crashes the computer.

Here are some efficient ways of importing CSVs in Python.

Now what? Well, let's prepare a dataset that is huge in size and then compare the performance (time) of the options shown in Figure 1.

Let's start.. 🏃

Create a dataframe of 15 columns and 10 million rows with random numbers and strings. Export it to CSV format, which comes to around ~1 GB in size.

df = pd.DataFrame(data=np.random.randint(99999, 99999999, size=(10000000, 14)),
                  columns=['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9',
                           'C10', 'C11', 'C12', 'C13', 'C14'])
df['C15'] = pd.util.testing.rands_array(5, 10000000)
df.to_csv("huge_data.csv")

Let's look over the importing options now and compare the time taken to read the CSV into memory.

PANDAS

The pandas Python library provides the read_csv() function to import a CSV as a dataframe structure to compute or analyze it easily. This function provides one parameter, described in a later section, to import your gigantic file much faster.

1. pandas.read_csv()

Input: CSV file
Output: pandas dataframe

pandas.read_csv() loads the whole CSV file at once in memory, in a single dataframe.

start = time.time()
df = pd.read_csv('huge_data.csv')
end = time.time()
print("Read csv without chunks: ", (end - start), "sec")

Read csv without chunks: 26.88872528076172 sec

This may sometimes crash your system due to an OOM (Out Of Memory) error if the CSV size is more than your memory's size (RAM). The next importing option improves on this.
2. pandas.read_csv(chunksize)

Input: CSV file
Output: pandas dataframe

Instead of reading the whole CSV at once, chunks of the CSV are read into memory. The size of a chunk is specified using the chunksize parameter, which refers to the number of lines. This function returns an iterator to iterate through these chunks and then process them as desired. Since only a part of the large file is read at once, low memory is enough to fit the data. Later, these chunks can be concatenated into a single dataframe.

start = time.time()
# read data in chunks of 1 million rows at a time
chunk = pd.read_csv('huge_data.csv', chunksize=1000000)
end = time.time()
print("Read csv with chunks: ", (end - start), "sec")
pd_df = pd.concat(chunk)

Read csv with chunks: 0.013001203536987305 sec

(Note that the call with chunksize returns an iterator almost immediately; the actual reading happens as the chunks are consumed, e.g. by pd.concat above, so the printed time only covers setting up the iterator.)

This option is best to use when you have limited RAM. Alternatively, a new Python library, DASK, can also be used, described below.

DASK

Input: CSV file
Output: Dask dataframe

While reading large CSVs, you may encounter an out-of-memory error if they don't fit in your RAM; hence DASK comes into the picture.

- Dask is an open-source Python library with the features of parallelism and scalability in Python, included by default in the Anaconda distribution.
- It extends its features of scalability and parallelism by reusing existing Python libraries such as pandas, numpy or sklearn. This makes it comfortable for those who are already familiar with these Python libraries.
- How to start with it? You can install via pip or conda. I would recommend conda because installing via pip may create some issues.

pip install dask

Well, when I tried the above, it created some issue, which was resolved using a GitHub link to externally add the dask path as an environment variable. But why make a fuss when a simpler option is available?
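The chunked-reading idea above is not specific to pandas. As a hedged illustration, the same pattern can be sketched with only the standard library — the function name, file path and column names here are hypothetical, not part of any library:

```python
import csv
from itertools import islice

def chunked_sum(path, column, chunksize=1000):
    """Sum one numeric column without ever holding the whole file in memory."""
    total = 0.0
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        while True:
            # Pull the next `chunksize` rows off the reader; an empty list means EOF.
            chunk = list(islice(reader, chunksize))
            if not chunk:
                break
            total += sum(float(row[column]) for row in chunk)
    return total
```

Each iteration keeps at most chunksize rows in memory, which is the same trade-off pd.read_csv(chunksize=...) makes.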
conda install dask

- Code implementation:

from dask import dataframe as dd

start = time.time()
dask_df = dd.read_csv('huge_data.csv')
end = time.time()
print("Read csv with dask: ", (end - start), "sec")

Read csv with dask: 0.07900428771972656 sec

Dask seems to be the fastest in reading this large CSV without crashing or slowing down the computer. Wow! How good is that?!! A new Python library with modified existing ones to introduce scalability.

Why is DASK better than PANDAS?

- Pandas utilizes a single CPU core, while Dask utilizes multiple CPU cores by internally chunking the dataframe and processing the chunks in parallel. In simple words, multiple small dataframes of a large dataframe get processed at a time, whereas under pandas, operating on a single large dataframe takes a long time to run.
- DASK can handle large datasets on a single CPU by exploiting its multiple cores, or on a cluster of machines, which is distributed computing. It provides a sort of scaled pandas and numpy libraries.
- Not only dataframes: Dask also provides array and scikit-learn libraries to exploit parallelism. Some of the Dask-provided libraries are shown below.
  - Dask Arrays: parallel NumPy
  - Dask Dataframes: parallel Pandas
  - Dask ML: parallel Scikit-Learn

We will only concentrate on Dataframe, as the other two are out of scope. But, to get your hands dirty with those, this blog is best to consider.
How does Dask manage to store data which is larger than the memory (RAM)?

When we import data, it is read into our RAM, which highlights the memory constraint. Let's say you want to import 6 GB of data into your 4 GB of RAM. This can't be achieved via pandas, since the whole data in a single shot doesn't fit into memory, but Dask can. How? Dask, instead of computing first, creates a graph of tasks which describes how to perform those tasks. It believes in lazy computation, which means that Dask's task scheduler first creates the graph and then computes it when requested.

To perform any computation, compute() is invoked explicitly, which invokes the task scheduler to process data making use of all cores and, at last, combines the results into one. It would not be difficult to understand for those who are already familiar with pandas.

Couldn't hold my learning curiosity, so I am happy to publish Dask for Python and Machine Learning with a deeper study.

Conclusion

Reading a ~1 GB CSV into memory with various importing options can be assessed by the time taken to load it.

pandas.read_csv is the worst when reading a CSV larger than the RAM. pandas.read_csv(chunksize) performs better than the above and can be improved more by tweaking the chunksize. dask.dataframe proved to be the fastest, since it deals with parallel processing.

Hence, I would recommend coming out of your comfort zone of using pandas and trying dask. But just FYI, I have only tested DASK for reading large CSVs, not the computations as we do in pandas.

You can check my GitHub code to access the notebook covering the coding part of this blog.

References

- Dask latest documentation
- Book worth reading
- Other options for reading and writing CSVs which are not included in this blog: Reading and Writing CSV Files in Python - Real Python ("This tutorial has a related video course created by the Real Python team. Watch it together with the written…" realpython.com)
- To get your hands dirty with DASK, glance over the below link.
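Dask's lazy, compute-on-demand model can be illustrated in miniature (with none of Dask's actual parallelism or scheduling) using plain Python generators: building the pipeline is instant, and no work happens until a result is requested. The helper below is purely illustrative, not a Dask API:

```python
def build_pipeline(rows):
    """Describe the work lazily; nothing is computed at this point."""
    doubled = (x * 2 for x in rows)           # one "node" of the task graph
    filtered = (x for x in doubled if x > 4)  # another node, chained on top
    return filtered

pipeline = build_pipeline(range(10))  # returns instantly: just a description
result = sum(pipeline)                # like compute(): now the work happens
```

Until sum() pulls on the pipeline, no element has been doubled or filtered — the same reason dd.read_csv() returns in milliseconds while the real I/O is deferred to compute().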
https://medium.com/analytics-vidhya/optimized-ways-to-read-large-csvs-in-python-ab2b36a7914e
Announcing Python Tools for Visual Studio 1.1 Alpha

- Posted: Nov 03, 2011 at 3:57 PM
- 45,792 Views
- 13 Comments

Python Tools for Visual Studio 1.1 is now available in Alpha!

PTVS turns VS into an IDE for Python. PTVS is a free, open source plug-in for Visual Studio 2010 from Microsoft's Developer Division. Here, we chat with PTVS architect Sean Mortazavi and lead developer Dino Viehland to learn all about this release and see some really cool demos.

This is great news for Python developers! Great work! I use Python Tools a lot and it is awesome.

Any plans to infer generic types too? That way you'd have the example with the function foo that just returns its argument infer the string methods for foo("bar") and the integer methods for foo(42), even if you have both in the same file. Languages like F# already infer types that way; perhaps their algorithm could be adapted.

This.....is.....awesome

Been a programmer and a scripter, this really makes sense. I used to create tools for a charity I did Work Experience for, and wxPython was my primary GUI toolkit. Python is brill for admin scripting. Having Python for Visual Studio is brilliant; what would be great is if Visual Studio Express came with something like that. Tom

Jules.dot: To support that we'd need to fully understand and model the control flow of the program and then analyze calls based upon the Cartesian product of their arguments. Currently we're control-flow independent and we just combine all of the types being passed to each argument together. It's certainly something I'd love to do at some point, but it's currently pretty low on the priority list. There would still be the issue of what completions to show within the method, but we could continue to offer our intersection or union option there.
You can use it with the free VS Integrated Shell.

NewWorldMan is correct - see the installation instructions at:
1. Click to download the free Integrated Shell.
2. Click to download PTVS.
And you have a virtual "Visual Python 2010 Express" edition. PTVS isn't "in the box", but this enables us to release much more frequently. Cheers.

Sean, what you are doing is amazing. As I have said in my post, I'd love to get involved in this, even though I live in the UK. If you have ever used wxWidgets or PyGTK, a resource editor in Visual Studio for that type of stuff would be awesome. Give me plenty of time, and I can try working on a resource editor for wxWidgets at least, although I have no idea where to start in making toolbox-like items for a windows editor.

Tominator2005: That'd be awesome! For the toolbox you probably want to look at the IVsToolbox[2,3,4,5,6] interfaces; you can get an instance of one by calling Package.GetService(typeof(SVsToolbox)). I suspect you call AddItem w/ an IDataObject and then get that same object back when the user drops it onto your editor window.

This is a basic Python question (I'm a newbie to Python)... Do you have any links/documentation on how to call C# classes/methods from Python?

shaggygi: In general you do:

import clr
clr.AddReference('AssemblyName')
from Namespace import Type

For example:

import clr
clr.AddReference("System.Windows.Forms")
from System.Windows.Forms import Form

This requires using IronPython, or installing Python.NET if you're using CPython. From there you can do:

f = Form()
f.ShowDialog()

Or in other words, you can interact with the types as if they're normal Python types. When working w/ your own types, the biggest gotcha is to make sure you make them public.

@Dino Viehland Thanks! This will be a big help to get me started.

My background is an HND in Computing, and I only did a small amount of programming in that, which I regret. Is there any way I can contact you via email or messenger?
This is gonna be a tough job and I'll need occasional help.

This is really cool! Awesome IDE for Python! Can Python be used, or will it be available, for Metro Style Apps in Windows 8? Can it talk to WinRT?

Long live Python and the Knights who say Ni!
http://channel9.msdn.com/Blogs/Charles/Announcing-Python-Tools-for-Visual-Studio-11-Alpha?format=auto
In this article we will automate the creation of multiple components by using an external JSON file to hold our information. This will make our projects more organized because it creates a clear separation of concerns:

- The JSON contains our information.
- The components are the visual representation.

Pre-requisites

- Arrow functions: A modern way to write functions in JavaScript. They are used all across React and in the map, filter, and reduce methods.
- Map function: A JavaScript method that allows you to create a new array based on information from another array. (watch from 1:58 to 2:54)

Intended result

Figure 1: The same pet shop app but more organized behind the scenes.

Figure 2: Hierarchy diagram. The element in the middle with dashed lines is not a component, but an array holding components.

Anatomy of a map function

Before we get started, let's explain a bit about the map function.

Figure 3: A comparison between the map function with an arrow function inside vs. a traditional for loop.

As you can see, the map function is just a shortcut to write more compact code. The less code you write, the fewer bugs you will have.

Arrays of components

Figure 4: The contents of ComponentArray. As you can see, JavaScript arrays can literally hold anything, from strings and numbers to React components.

Getting started

For this exercise we will use a JSON file to create our list of components.

JSON file: Create a JSON file inside your src/ folder, preferably inside a folder named data/.

Note: Each object inside the array needs a key called id. React needs this in order to keep track of each component.
[
  { "id": 1, "title": "Puppy", "fileName": "dog.jpg" },
  { "id": 2, "title": "Whiskers", "fileName": "cat.jpg" },
  { "id": 3, "title": "Birdie", "fileName": "cat-food.jpg" }
]

App component:

import MyJSON from "./data/pets.json";
import MyComponent from "./components/MyComponent";

export default function App() {
  const ComponentArray = MyJSON.map((item) => (
    <MyComponent key={item.id} title={item.title} fileName={item.fileName} />
  ));

  return (
    <div className="App">
      <section className="grid">{ComponentArray}</section>
    </div>
  );
}

Let's analyze the code:

- import MyJSON from "./data/pets.json": We import our JSON.
- const ComponentArray: We create a variable to hold our array of components.
- MyJSON.map() creates an array of components using the map function. This is where the map function comes into play.
- <MyComponent> is a copy of MyComponent; the map function will create as many components as the JSON needs.
- Inside the JSX we put ComponentArray inside curly braces {}.
(only for the SDA students) You can take a break before moving to the articles intended for the next day, or click here to continue your studies. If want can to see the finished code, open this link and open the branch create-list. Credits: Cover picture: Photo by Curology on Unsplash Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/eduardo_alvarez_946ae8b20/creating-a-list-of-components-using-functional-programming-4215
sec_id_parse_group - Translates a global group name into cell name, cell-relative group name and UUIDs

#include <dce/secidmap.h>

void sec_id_parse_group(
    sec_rgy_handle_t context,
    sec_rgy_name_t global_name,
    sec_rgy_name_t cell_namep,
    uuid_t *cell_idp,
    sec_rgy_name_t group_namep,
    uuid_t *group_idp,
    error_status_t *status
);

Input

- context
  An opaque handle bound to a registry server. Use sec_rgy_site_open() to acquire a bound handle.
- global_name
  The global (full) name of the group in sec_rgy_name_t form (see Global PGO Names).

Input/Output

- cell_namep
  The output name of the group's home cell in sec_rgy_name_t form (see Global PGO Names).
- cell_idp
  A pointer to the UUID of the home cell of the group whose name is in question.
- group_namep
  The local (with respect to the home cell) name of the group in sec_rgy_name_t form (see Global PGO Names).
- group_idp
  A pointer to the UUID of the group whose name is in question.

Output

- status
  A pointer to the completion status. On successful completion, the function returns error_status_ok. Otherwise, it returns an error.

The sec_id_parse_group() routine translates a global group name into a cell name and a cell-relative group name. It also returns the UUIDs associated with the group and its home cell. A NULL input to any Input/Output parameter suppresses parsing of that parameter.

Files

- /usr/include/dce/secidmap.idl
  The idl file from which dce/secidmap.h was derived.

Errors

- error_status_ok
  The call was successful.
- sec_id_e_bad_cell_uuid
  The cell UUID is not valid.
- sec_id_e_name_too_long
  The name is too long for the current implementation.
- sec_rgy_object_not_found
  The registry server could not find the specified group.
- sec_rgy_server_unavailable
  The DCE Registry Server is unavailable.

See also: Functions: sec_id_gen_group(), sec_id_gen_name(), sec_id_parse_name(). Protocols: rsec_id_parse_name().
http://pubs.opengroup.org/onlinepubs/9696989899/sec_id_parse_group.htm
What happened in the process of LoRa message delivery? Can I measure or get the precise time cost of it?

- RyverSolare last edited by RyverSolare

Hi, everyone. I am applying a clock synchronization algorithm on LoPy4 modules, and I am trying to get rid of the time cost of message delivery over LoRa. Are there any documents or references describing the process of LoRa message delivery on the LoPy4? For example, how long does it take to encode and send out a message? How long does it take to receive and decode the message?

I have found that there is a tx_time_on_air attribute in lora.stats(); however, this attribute is in units of milliseconds. Can I get a more precise value in microseconds? Moreover, I have no idea what this time stands for. I have done some experimenting on it:

def recvMsg(my_lora):
    call_back = localClk.read_us()
    recv_msg = socket.recv(16)
    stats = my_lora.stats()
    if recv_msg == b's@1234':
        record_time = localClk.read_us()
        localClk.reset()
        socket.send('m@1234')
        print('send cost', localClk.read_us())
        print('on_callback', record_time)
        print('on the air', stats[7] * 1000)
        print('uncertain', record_time / 2 - stats[7] * 1000 - (record_time - call_back))
    time.sleep(5)

localClk = Timer.Chrono()
localClk.start()
result = []
result_time = []

# Set up LoRa
lora = LoRa(mode=LoRa.LORA, region=LoRa.EU868)
socket = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
lora.callback(trigger=LoRa.RX_PACKET_EVENT, handler=recvMsg)
time.sleep(10)

One master node executes the code above, and another slave node just responds to the messages it receives. The result looks like:

send cost 42302.07
on_callback 88978.02
on the air 37000
uncertain 7292.387

It seems that socket.send() takes 42 ms to execute. But the send time + time on the air + receive time is about 44 ms (half of the duration between two on_callback readings), and the tx_time_on_air attribute is 37 ms. What went wrong here?

If you have any ideas or clues about the LoRa message delivery process here, please do let me know. Thank you very much in advance.
- RyverSolare last edited by

@robert-hh Thank you many times. It helps a lot.

@RyverSolare The source code is here:. The READMEs at the various levels contain instructions on how to build it. Please follow them literally. The source code is not easy to understand, simply because it is large. And the LoRa components are distributed over many files in a few folders. For your purpose you may work with average values, but still the error of a single event will be in the ms range. A small correction: it's not pipes but queues which are used for inter-task communication.
https://forum.pycom.io/topic/6126/what-happened-in-the-process-lora-message-delivery-can-i-measure-or-get-the-precise-time-cost-of-it
The upcoming ASP.NET MVC preview includes several refactoring and design improvements that make the MVC framework more extensible and testable. In general the team has followed a philosophy where for all features you have three options.

Last month I announced that the .NET Framework source code can now be downloaded and debugged. Ultimately, once it ships, the ASP.NET MVC framework will be available via this mechanism as well (just like the rest of the ASP.NET source code).

Excellent; it's great to see some of the forum and blog community feedback and suggestions in these updates. Thanks to everyone, and looking forward to the bits! Would there be any kind of site/protocol where users could contribute patches back upstream to MS for potential inclusion?

Yes! I've been waiting for this. I started a series of ASP.NET MVC tips on my blog (weblogs.asp.net/mikebosch). Can't wait to play with the new stuff!

Scott, great decision on distributing a working solution of the MVC source code! Reminds me of some open source frameworks (although it is not open source).

Scott, I'm sure I speak for a lot of people on this one, but thanks SO MUCH for keeping us so regularly and completely informed. The increased release cycle, the willingness of the PMs to help out (often going into a lot of detail on queries) with the MVC prerelease, and the general insight into what's coming up (not to mention the support for 3rd-party frameworks) is greatly appreciated. The release of the source code and the more friendly license go a long way to increasing the confidence we are likely to have when embracing new technologies/MS implementations thereof. To you and your team, keep up the good work! Andrew

Hi Scott, this sounds like it is progressing really well. I guess it is too early for it to have a Go Live license at MIX?
:D Will we be seeing releases every quarter or on a more regular basis? Thanks, John

Where is the best place to leave MVC framework feedback? I have a few comments I'd like to send on over.

Fantastic, Scott! I love the FilterAttribute. Partial Trust support is an absolute must, so I am happy that is happening.

What about some type of partial view support? It would be nice to factor out and reuse view logic!

Great news, thanks! I am wondering about this: if the routing is 'factored out', is it now in a different namespace? Will System.Web.Mvc.dll contain the routing engine? Is there going to be Ajax support in the next release, or what do you mean by 'start talking about'?

Can we also download _just_ the .dll if that's all we need? The current download sort of includes everything but the kitchen sink, which not everyone needs.

Regarding the Authorization attribute, does it use Forms Authentication? Will there be any support in the future for a more fine-grained authorization mode (using ACLs instead of roles)? Thanks.

Great work! Number 8 ROCKS!!! (Downloadable ASP.NET MVC Framework Source that can be Built and Patched) Scott and the MVC team @ MS: this is really, really good news for both your team and the community wanting to develop with this framework.

Excellent news! It's very encouraging to see that you guys are listening to us and finding a middle ground :)

Comment: more blog posts / guides / etc. about LINQ + MVC would make us cry with joy. Thanks again Scott and the MVC team - you're kicking big goals for the ASP.NET community. We love you, keep it up. -PK; Melbourne, Australia-

A second vote for a clear solution for partials in views. And I hope the Dynamic Data improvements give some clarity on the suggested direction for validation metadata.

Hi John,

>>>>>> This sounds like it is progressing really well. I guess it is too early for it to have a Go Live license at MIX?
:D You'll be able to go live with the MIX preview (technically you can even do this today).

>>>>>> Will we be seeing releases every quarter or on a more regular basis?

You'll see pretty regular builds coming out after MIX. We did a lot of refactoring work with the MIX preview that took a while to make sure we got right (specifically the routing layer - which impacts a lot). Going forward we expect to ship more regular drops.

Hi Bill,

>>>>>> Would there be any kind of site/protocol where users could contribute patches back upstream to MS for potential inclusion?

The core MVC code will ultimately ship in the .NET Framework (which ships in Windows), which unfortunately means we need to be careful about taking patches back (for legal reasons). We are, though, planning on helping foster open source projects and extensions on top of the core - and will have Microsoft employees contributing code to those. We are definitely interested in enabling contributions and patches with those projects for MVC.

Hi Matt,

>>>>>> Where is the best place to leave MVC framework feedback? I have a few comments I'd like to send on over.

The ASP.NET MVC forum might be the best way to send them to the team. Alternatively you can shoot them to me (scottgu@microsoft.com) and I can connect you with the team.

Hi Michael,

>>>>>>> If the routing is 'factored out', is it now in a different namespace? Will System.Web.Mvc.dll contain the routing engine?

The routing engine (as well as the base HTTP intrinsic abstractions) is now factored out into a separate assembly. This will enable other projects to reference them without having to reference the entire MVC framework (the MVC framework then obviously has a dependency on them as well).

>>>>>>> Is there going to be Ajax support in the next release, or what do you mean by 'start talking about'?
We probably won't have AJAX built into the MVC core for the MIX preview, but we are planning on showing samples and shipping some extensions that demonstrate it that month. Think of this as conceptually similar to what we did with the HTML helpers with the December preview (we ship them separately to begin with while they are more in prototype-validation form, and then integrate them into the core with a future build).

>>>>>>>>> Can we also download _just_ the .dll if that's all we need? The current download sort of includes everything but the kitchen sink, which not everyone needs.

We'll have an MVC-specific download. I believe this includes both the core MVC assemblies and the VS project wizard. I think we'll also have a separate download for just the assemblies.

Cool - glad you like it. :-)

Hi Dimaz,

>>>>>>>> Regarding the Authorization attribute, does it use Forms Authentication?

I'm assuming that the above Authorization attribute could be built to support both Windows auth and Forms auth. ASP.NET today supports an authentication model where the auth mode used is abstracted from the authorization approach (this is also true for Roles auth - which works for any authentication scheme).

>>>>>>>> Will there be any support in the future for a more fine-grained authorization mode (using ACLs instead of roles)?

You could use ACLs for this - although one complication in an MVC world, where your URLs map to controller actions as opposed to files on disk, is that you don't have a physical file to ACL. You can use the ACL concept and do logical ACLing - although that requires a special store that you manage to persist the ACLs.

Hi Ryan and Andy,

>>>>>>>> What about some type of partial view support? It would be nice to factor out and reuse view logic!

You can actually use .ascx templates for partials in views today.
These can be declared and used within .aspx files, or dynamically created and rendered either within the controller or using the Html.RenderUserControl helper method. What we are looking at doing in the future is supporting the concept of nested controllers, which enables more controller encapsulation of specific UI sections of a page. This won't be in the MIX build, but the design team is discussing the concept now.

Thanks for the update! All this looks like good news and is really exciting.

Excellent news, Scott. I look forward to trying this out. I'm most interested now in how it will handle AJAX calls, since there is no postback. (Currently I'm using jQuery for most of my JavaScript needs; I'd like to see more on how I can make an Ajax call and bind that data to a repeater on the page.) Have any best practices been established regarding server-side form validation?

That's a great decision to release the code for emergency patching etc. It was something that was holding me back from committing to the ASP.NET MVC framework for my next project.

Excellent news! Awesome, thanks for the detailed update.

A question regarding the release of these extensions: will they be released as add-ons to ASP.NET or incorporated into ASP.NET via a service pack update? Also, any word on Oracle support for ASP.NET Dynamic Data?

Even the first CTP was an awesome demonstration of how we can benefit from this type of development. I didn't realize how nice MVC would be until I dug in and tried it. Hats off to everyone working on this project. Also, the members answering questions in the forums have been extremely helpful as well. Great job!

Please make Session/HttpContext/Controller etc. easier to mock. I've left posts on Phil Haack's blog regarding the specific problems encountered, so I'm sure he knows.
haacked.com/.../tell-me-your-unit-testing-pains.aspx

Awesome, simply awesome. Hard to believe all this is coming out of Microsoft! Excellent news about the source code.

Hello, can you kindly provide a method/link to remove the previous December ASP.NET MVC preview from the system? I would like to keep my system clean and tidy for the next preview coming during MIX2008. Thanks in advance.

Hi Softmind,

>>>>>>> Can you kindly provide a method/link to remove the previous December ASP.NET MVC preview from the system? I would like to keep my system clean and tidy for the next preview coming during MIX2008.

You can use Windows' Add/Remove Programs option to uninstall the December ASP.NET Extensions CTP. This will cleanly remove it from your system.

Hi PK,

The team is looking at a few options on ViewData right now. I'm not sure whether this will change for MIX - but this is an area where a few improvements are being investigated.

>>>>>> A question regarding the release of these extensions: will they be released as add-ons to ASP.NET or incorporated into ASP.NET via a service pack update?

We are still finalizing this - we'll share more details in the weeks ahead once we have the final release plans locked.

>>>>>> Also, any word on Oracle support for ASP.NET Dynamic Data?

ASP.NET Dynamic Data will support LINQ to Entities, which will support Oracle. So this should be supported soon. :-)

I have a question about integration between ASP.NET MVC and Silverlight. In the MIX build, can ASP.NET MVC render a XAML view? If not, do you have plans to support this feature in the final release? DuyHB

That's cool! Will the license for the MVC dll allow it to be ported to .NET 2.x? As much as I like this stuff and am interested, I do not see any of our clients accepting .NET 3.5 soon. Best regards, Paul.

So Silverlight apps are great, and MVC is exciting.
So why am I having so many problems serving up a Silverlight page with an MVC view? The JavaScript execution to start Silverlight does not run, I guess because it interferes with the ASPX being executed at the server. Any suggestions? Can someone explain how MVC will integrate with Silverlight? Will MIX08 tell us how these two technologies will work together? Cheers

As Scott writes, the next version of the ASP.NET MVC framework will be released at the MIX conference, which starts on March 3rd. However, you don't have to be on site to get the bits; as usual, they will be made available online.

Hi Scott, the MvcContrib implementation of filters currently supports ordering of filters as well as being able to create filters using an IoC container... will your filter implementation support these scenarios? Thanks, Jeremy

I have a concern about Ajax and third-party controls with MVC.

Is the new MVC framework in any way related to the Web Client Software Factory from the patterns & practices group? Or does the new MVC make the WCSF redundant?

Hi, I've got questions: Is the MVC framework compatible with SharePoint? Is there a way to include MVC pages in SharePoint in a simple way, with URL routing? BTW - MVC is the most brilliant stuff I've ever seen in .NET... Thanks in advance. Cheers, Ronan

Scott, please don't forget your Silverlight pals. We're sitting here lonely and starving :-) How about some Light on Silver?? DW

Any word on file upload support, Scott?

Any chance we'll see a screencast of the tutorials?

Some things that come to mind with this. I know my opinion will be unpopular, but one cannot perfect a product without looking at both sides of the coin :) #1 While it's vastly different, it's looking more and more like there's quite a Rails influence in the feedback that is pushing ASP.NET MVC along...
For example, now, while in all the other (built-in) .NET stuff the attributes must always be specified (see: the controller methods now, as opposed to an ASP.NET web service's WebMethod attribute), in this, not only are they not required, but things are all done via the names of the methods... That worries me about reusability/refactoring... renaming the methods has an effect on what happens, something that is quite inconsistent with the .NET philosophy (as opposed to having an attribute [ControllerAction("List")], for example, which would be not only refactoring-safe, but could be programmed to more easily, use constants, etc.). That should be the default, to be consistent with other things in the framework.

#2 The introduction of some "convention over configuration" things (see #1) personally worries me a bit. Not because it's bad (it's a good way to do things), but because it has both strengths and flaws... and I feel like this way you'll get an application that has BOTH the flaws of the RoR way AND the flaws of the .NET framework (nothing's perfect), thus ending up with twice the "flaws". A bit like if you make a .NET app with NHibernate and Spring.NET: while it works very well, all you end up with, in practice, is a non-cross-platform Java app... The MVC framework so far doesn't seem to be going too far down that road (though the whole controller thing throws me off personally... not because it doesn't make sense: it does, but it doesn't feel ".NET", and has certain weaknesses not covered by the strengths of .NET), but if it goes much further, it will be "just another Rails clone".

I also repeat my request: a basic MVP-pattern framework (like the Web Client Software Factory) based on WebForms, with unit test support, should be included in the framework/Visual Studio.
Right now, I'm almost inclined to start using ASP.NET MVC, not because I like it better than WebForms (I think WebForms are vastly superior), but because of all the built-in goodies that will be integrated into the base framework for MVC. Even though those things exist from Microsoft for WebForms, with MVC they will be available without much extra work (as opposed to the Web Client Software Factory), and will thus be easier to get approved by management... a weird situation indeed.

Thanks Scott. I like the idea of decoupling the URL routing engine from ASP.NET MVC. Do you have a plan for enhancing/rewriting the existing WebForms controls to work with MVC seamlessly? I was thinking of this scenario: imagine adding a WebForms control to an MVC project page. The control should automatically strip itself of all WebForms functionality and be MVC-aware. :)

Great work! I love the MVC framework! I'm really excited for the new release.

Scott, what about subcontrollers / subviews? Will they be included in the next preview; are they on schedule?

Will there be any improvements or examples on deploying MVC to IIS 6? Keep up the good work.

When will we get ASP.NET MVC and Silverlight integration? At the moment I cannot figure out how to serve up Silverlight pages/JS/XAML from the MVC views. Silverlight.js does not seem to want to execute within an .aspx page. I hope we don't have to wait another year before we can mix these two great technologies together. Jules

>>> We are still finalizing this - we'll share more details in the weeks ahead once we have the final release plans locked.

Service Pack! Please...
Roll everything up into one pack so you know that with 3.5 SP1 you can rely on everything being there on a host.

Great news on the Action attribute. Though it made sense conceptually, I have not found a single case where it was not redundant.

Why did .NET 3.5's [ScriptMethod] JSON serializer change from outputting {"__type":"MsgClass","Msg":"Hello World"} to {"d":{"__type":"MsgClass","Msg":"Hello World"}}? Where did the wrapper object come from?

You don't seem to be aware of what's possible with the Castle/NVelocity components. .ascx controls (and probably any subview/subcontroller combo based on them) don't have the capability. Templates are close but not dynamic enough to be useful. For instance, I can create a form component that has a default way of displaying all data which can be overridden one property at a time if necessary. This is the sort of functionality I need from components. I've been frustrated with ASP.NET from nearly the day I started working with it.

This is fantastic! Nice feature in .NET 3.5, and nice blog.

Hi Scott... do you know any interesting tutorials on using the MVC framework in VB instead of C#?

Pretty soon we can just drag and drop a messenger client; where's the fun in that?

Tom, you can use the Validator Toolkit for ASP.NET MVC to do the client- and server-side validation until Microsoft has released their built-in solution.
You can download the binary or source code from CodePlex:

Is there any plan to integrate MobilePage support in this framework, or at least allow for overriding of the appropriate virtual methods, etc.? It would be nice to use this framework consistently across both traditional and mobile sites. Rich

I have been excited about the Microsoft implementation of MVC for some time now. It's very exciting to see this technology and methodology getting more acceptance.

Can you please tell us the expected release date for the MVC patch? We at Merrill Lynch are interested in using the MVC pattern in our new project. Will there be validation improvements, i.e. object-level validation, in the preview?

Compilable source code for patching - quick, get the Queen to give you a knighthood.

Hi Jeremy,

>>>>>>> The MvcContrib implementation of filters currently supports ordering of filters as well as being able to create filters using an IoC container... will your filter implementation support these scenarios?

Yes - the filters feature will support ordering of filters.

Hi Courtney,

If you are productive today I would recommend sticking with the ASP.NET Web Forms and AJAX model. The MVC model is a different approach that uses a different mindset when thinking about problems. People tend to gravitate to a controls model or an MVC model - and both are perfectly valid approaches within ASP.NET. I would recommend going with whichever feels best to you.

Hi Rudimenter,

>>>>>>> Is the new MVC framework in any way related to the Web Client Software Factory from the patterns & practices group? Or does the new MVC make the WCSF redundant?
The WCSF is currently separate, but will move to support MVC as well in the future.

Hi Ronan,

>>>>>> I've got questions: Is the MVC framework compatible with SharePoint? Is there a way to include MVC pages in SharePoint in a simple way, with URL routing?

Currently MVC doesn't directly integrate with SharePoint. That is something we'll be looking at supporting in the future though.

>>>>>> BTW - MVC is the most brilliant stuff I've ever seen in .NET...

Cool - glad you like it!

Hi Ben,

>>>>>>> Scott, please don't forget your Silverlight pals. We're sitting here lonely and starving :-) How about some Light on Silver?

Lots of Silverlight love coming soon... :-)

Hi Dugald,

We went back and forth on the decision to require attributes or not. In the end it seemed like more people would prefer we didn't have them than require them, which is why we removed the requirement. Note that the new filter feature does provide enough extensibility hooks to add them back yourself if you'd like to require them in order to be more explicit.

Hi David,

>>>>>>> Any word on file upload support, Scott?

You should be able to use file upload even with the December preview. You can get access to the uploaded files using the Request.Files collection inside your action method.

Hi Mark,

>>>>>>> Any chance we'll see a screencast of the tutorials?

Yes - we'll definitely have screencasts in the future.

Hi Kurt,

Action methods are public methods on controller classes, and so you can invoke those in medium trust (public reflection is allowed).

Hi Bryan,

Yes - you can definitely create a separate project (or several projects), isolate your data layer and business object layer in them, and keep them separate from your MVC project.

I believe you'll be able to handle this scenario using the new factoring. The named route feature might also be something that helps handle this scenario.

Hi Alexander,

>>>>>>> Scott, what about subcontrollers / subviews?
Will they be included in the next preview; are they on schedule?

Sub-controllers are something we still plan to support, although we haven't finished the design in time for the MIX preview. They will show up in a future build though.

Hi Ryan,

>>>>>>> Will there be any improvements or examples on deploying MVC to IIS 6?

Yep - we'll definitely have more guidance on this in the future.

Hi Janolof,

It is hard to say. I think for many organizations the MIX preview will be good enough to start large projects with, although realistically for others it might be worth waiting a little longer. It will really depend to some degree on your risk tolerance.

>>>>>>> Hi Scott... do you know any interesting tutorials on using the MVC framework in VB instead of C#?

I am planning on doing some VB ones in the future myself!

On an unrelated-to-MVC note, congratulations on your promotion to VP, Scott! Hope you don't drop the blog as a result - it's the only one I regularly read.

Scott, now that Unity is announced, will we see either the ASP.NET MVC team or the p&p Unity team integrate the two? I'd guess this will be done in MvcContrib sooner rather than later. Gabor

>> Silverlight love << Scott, that was the word that I meant to put in my first post but forgot, and you picked up what I meant! ;-) You know "LOVE" is like two-way binding: you give us SilverLove and we give you back GoldLove :-) :-) Thanks!

Any plans to show an example of MVC with any of the dynamic languages?

Congratulations on your promotion to Vice President. I hope you will stay in touch with ASP.NET even though you've been promoted. SoftMind

Congratulations on your promotion to Corporate VP! There's no question that you deserve the bump in grade.
--rj

See oakleafblog.blogspot.com/.../linq-and-entity-framework-posts-for_11.html

Congratulations on the promotion!

Scott, congrats on the promotion! Hopefully you will still have some time left to blog to us mere mortals.

Congratulations on your new position! Keep up the good work. Will it change the focus of your posts on this blog?

Off topic... LINQ. Scott, I desperately need your help with this one! Some clients of ours asked to have some DB columns encrypted. We upgraded to SQL 2005 and used 'encryptbypassphrase' commands. The passphrase is supplied by an external assembly (outside of the database). This way, if the DB is ever 'stolen', the information in it would be unreadable outside our domain. We wish to leverage our code using LINQ in order to create a rich ORM model (API) that can be reused with other presentation tiers (platforms). But there are no guidelines on how LINQ interacts with encryption mechanisms. For instance, in a table representing a user, there's a FirstName column that is encrypted by passphrase; it in fact represents an nvarchar(50), but since it is encrypted, LINQ modeled the class with a column of type binary. All encrypted columns have the same prefix in the column name, like 'cryp_<columnName>'. I basically need a way to model the API so that the encryption / decryption by passphrase is transparent: when I query using LINQ, the real (decrypted) value is populated, but when the value is saved or updated it is encrypted. Thanks.

Thanks all! Don't worry, this won't impact my blogging. :-)

Just a little congrats from the devs here at IsaiX (Gold Partner)! The other day, we were watching the presentation that you did last June at TechEd ("Light Speed with VS 2008"). Of all the presentations that we watched over the last few months, you were ranked ***** (5-star presenter). Happy VP'ing, but don't forget us!

Well, I fell off the blog wagon this week.
I was far too busy with the project I am on. I like the new ASP.NET Model View Controller (MVC) very much; it's a good pattern for developing ASP.NET. I became very happy when I heard you were promoted to Corporate Vice President.

Congrats on the promotion, Scott :) Hope you keep writing these posts. I am personally very excited about the MVC platform and hope to use the next CTP to build a prototype site. Keep up the good work, and I look forward to the release made around MIX. Too bad I won't be there :(

Great stuff. I'm anxious to see a real-world sample in the future.

It's really good to see a recognised framework pattern being implemented. I do, however, have a couple of points to raise on the design aspect. The URL routing to a controller seems to be happening by convention rather than design. For instance, the URL "Products/Detail/3" would be picked up by the "[controller]/[action]/[id]" route, but then reaches (somehow?) the ProductsController class. Surely this is exactly where an attribute would be good design, i.e.:

    [Controller("Products")]
    public class ProductsController : Controller { ... }

This way, the class could have any name, and could be a multi-faceted controller:

    [Controller("Items")]
    public class MyCommonController : Controller {

As for losing the attribute from actions, the same applies:

    [ControllerAction("List")]
    [ControllerAction("Browse")]
    public void List(string category, int? page){

Please don't let convention replace good design.
Looks like MS is really starting to read the writing on the wall. We're using Monorail right now and really don't have any plans to switch, but I do hope that this will introduce more MS developers to the world of MVC frameworks. The standard ASPX model is/was an over-engineered and flawed approach (could've been designed by Sun). Ever since the days of VB5/6, MS has done a really good job with Windows/Forms development environments, but web app development has always been behind the rest of the industry (yup, Visual InterDev sucked). Either way... kudos to MS. I'd like to hear some thoughts about how this affects the Monorail/ActiveRecord/Windsor stack and what compelling reasons there are to switch.

You mentioned that it had been factored out (i.e. no longer just part of MVC). You then mentioned that this would enable routing for Web Forms and Dynamic Data applications. So it sounds as if the intention is to share the routing love with all flavors of ASP.NET development methods. Does this mean we will be able to mix and match MVC, Web Forms, ASP.NET, and possibly Dynamic Data in the same application by using the routing engine and its pretty URLs? It would be nice to be able to choose how development is done on a route-by-route basis, and it provides a nice migration path for existing apps. Will this type of control be possible? Eddy

Here's to hoping that the built-in HTML helpers will be improved...
The markup they currently generate is very inconsistent in XHTML compliance (missing quotes for attributes in a number of places). It's a great framework though, and the beginnings are already very promising!

I think the view name of an action should be the same as the action name by default; I don't want to write RenderView in every action :(

It seems array-type data binding is not supported yet :(

Request.Files.Count is always zero despite me having an input type="file" on my form. I presume this is a known bug? If so, are there any plans for a release with a fix soon? I've spent quite a bit of time developing a website and have suddenly hit this unpleasant obstacle :-(

The current approach of sending multiple view data objects is so old-fashioned and untyped. Consider automatic setting of the tagged (attributed) properties on the view from the parameters in the ViewData collection, or even better, from the property names of an anonymous object sent as ViewData. Paymon at geeks dot ltd dot uk

>>>> Currently MVC doesn't directly integrate with SharePoint. That is something we'll be looking at supporting in the future though.

It's great! Has anybody used ASP.NET MVC in SharePoint? Is there an article about this?

How can I map links like this: /controller/action/id1/id2?

I'm trying to do some Ajax'y type stuff with the MVC platform.
I have read nikhil's great post and a bunch of other peoples stuff, using this latest release I have found that things have changed a bit, IE viewfactory going to ViewEngine, kind of, etc. I think I have got around most stuff but I can't seem to get over the fact that I can't find IView. Can you explain what I should be doing if I wanted to change the way a view renders? Thanks heaps for all your great posts and your time. Henry Yikes. No one mentioned the ComponentController??? I blogged about it since I think its one of the most useful new features on this drop. Check it out: weblogs.asp.net/.../using-the-componentcontroller-in-asp-net-mvc.aspx Pingback from mookid on code » Nifty web site with ASP.NET MVC Thoughts on ASP.NET MVC Preview 2 And Beyond Uma das tendências atuais em arquitetura tem ainda vários nomes como RESTfull programing ou Pingback from MSDN Blog Postings » WOA - RESTfull Programming I'm slowly recovering from keynoting at MIX last week, and have been digging my way out of backlogged There has been a tremendous positive response on ASP.NET MVC Framework from the community since we released Pingback from Django vs. ASP.NET MVC | Stuff I want to log ASP.NET MVC Framework out on CodePlex Last month I blogged about our ASP.NET MVC Roadmap . Two weeks ago we shipped the ASP.NET Preview 2 Release Como ya delantó Scottgu hace unas semanas mientras actualizaba el roadmap de este nuevo producto , desde Pingback from ASP.NET MVC Source Code Now Available « .NET Framework tips Before we talk about JQuery and ASP.Net MVC let's take an overview about both of them. ... Pingback from Web Application Development with Microsoft Technologies » ASP.NET MVC Tutorials and Source Code Any thoughts on grid controls in this framework? I need a batch update grid with paging. I wouldn't mind ajax functionality either. A demo on how to bring these together would be great. 
[Note: This post was meant to be published the day after MIX08 version was out] I have upgraded Kigg VS 2008 Tool Support for Express Version!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! :) Comme prévu dans la roadmap du framework MVC ASP.NET, Scott Guthrie vient d’annoncer la publication du Pingback from Ninjamurai » ASP.Net MVC Framework Work Can you please let me know if these previews can work with Windows XP running IIS 5.1. I have tried (I think) all the suggestions out there but even a plaing new project that I created using the MVC project template does not work with IIS 5.1. Things seem to work normal with local web server. I think this is important because if this is true then we have to do development work with the local web server and then move the code to other environments (QA etc) that run IIS6 (and perform the necessary hacks for it work on IIS 6). Again, this is more of a calrification more than anything else. Appreciate everything you do through this blog! -Raj Congratulations. Great work. I'll put part of them on my blog. Is this post updated as the roadmap evolves. Will for example release of Preview 3+ be announced here? Mês passado eu escrevi a respeito de nosso plano para a ASP.NET MVC . Duas semanas atrás nós lançamos Codice sorgente di ASP.NET MVC disponibile ASP.NETMVC源码现在可以下载了(翻译) 上个月我在blog里介绍了ASP.NETMVCRoadmap,两个周以前我们发布了ASP.NETPreview2Release,ASP.... On Wednesday I presented a session on troubleshooting ASP.NET production issues at Developer Summit 2008 ASP.NET MVC Preview 2发布有很多天了。这段时间一直在研究并应用到实际的项目中。我对它的一句话感受是:的确很完美,的确很粗糙。完美的是产生的XHTML代码我可以完全使用XHTML1.1标准DTD了,粗糙的是还不够成熟,View里还有cs代码存在,控制页面的Title属性和服务器控件的数据绑定还得通过View的 codebehind代码实现。 Craig Shoemaker takes you on a tour of the best ASP.NET MVC resources available today. Listen to the I wouldn't normally consider myself an early adopter. 
In fact, I prefer the 'tried and true' Pingback from ASP.NET MVC Archived Buzz, Page 1 MVC, la nuova faccia delle pagine web con ASP.Net Since creating this blog, I have been working on all sorts of things, but none if it was bloggable for (archived from my old blog from my pre-MS days) If you want to learn a new Microsoft developer tool or Pingback from Inovation Blog » ASP.NET MVC Source Code Now Available Pingback from ASP.NET MVC Framework Tutorials « Irfan Syahputra Today at the Microsoft MIX09 conference in Las Vegas, Scott Guthrie (a Vice President in the Microsoft ASP.NET Model View Controller (MVC) Pingback from ASP.NET MVC Archived Blog Posts, Page 1
http://weblogs.asp.net/scottgu/archive/2008/02/12/asp-net-mvc-framework-road-map-update.aspx
Say you have a stream of means and standard deviations for a random variable x that you want to combine. Essentially you're combining two groups of statistics: one group with mean \mu_1 and standard deviation \sigma_1 computed from n_1 values, and a second group with mean \mu_2 and standard deviation \sigma_2 computed from n_2 values. If you have access to the random variable x's values coming in as a stream, you can collect some number n_1 of values and calculate their mean and standard deviation to form the first group, and then combine it with the mean and standard deviation of the next group, consisting of the next n_2 values of x.

The formulas for the combined mean and standard deviation are:

\mu_{1,2} = \frac{n_1 \mu_1 + n_2 \mu_2}{n_1 + n_2}

\sigma_{1,2} = \sqrt{\frac{(n_1 - 1)\sigma_1^2 + (n_2 - 1)\sigma_2^2 + n_1(\mu_{1,2} - \mu_1)^2 + n_2(\mu_{1,2} - \mu_2)^2}{n_1 + n_2 - 1}}

Note that this is the Bessel-corrected standard deviation calculation, which I found leads to a better estimate. In Python code, this is what it looks like:

import numpy as np

np.random.seed(31337)

def combine_mean_std(mean_x_1, std_x_1, n_x_1, mean_x_2, std_x_2, n_x_2):
    n_x_1_2 = n_x_1 + n_x_2
    mean_x_1_2 = (mean_x_1 * n_x_1 + mean_x_2 * n_x_2) / n_x_1_2
    std_x_1_2 = np.sqrt(((n_x_1 - 1) * (std_x_1 ** 2) + (n_x_2 - 1) * (std_x_2 ** 2)
                         + n_x_1 * ((mean_x_1_2 - mean_x_1) ** 2)
                         + n_x_2 * ((mean_x_1_2 - mean_x_2) ** 2)) / (n_x_1_2 - 1))
    return mean_x_1_2, std_x_1_2, n_x_1_2

total_mean_x = None
total_std_x = None
total_n_x = 0
all_x = None  # For getting the actual mean and std for comparison with the running estimate

for i in range(10):
    x = np.random.randint(0, 100, np.random.randint(1, 100))
    if all_x is None:
        all_x = x
    else:
        all_x = np.concatenate((all_x, x), axis=0)
    mean_x = x.mean()
    std_x = x.std()
    n_x = x.shape[0]
    if total_mean_x is None and total_std_x is None:
        total_mean_x = mean_x
        total_std_x = std_x
        total_n_x = n_x
    else:
        total_mean_x, total_std_x, total_n_x = combine_mean_std(
            total_mean_x, total_std_x, total_n_x, mean_x, std_x, n_x)

print(total_mean_x, total_std_x, total_n_x)
print(all_x.mean(), all_x.std(), all_x.shape[0])

If you run the code above and inspect the values printed at the end, you'll note that the running estimates in total_mean_x and total_std_x are almost exactly the same as the actual mean and std obtained by literally collecting all x values and calculating the two values directly (which may not be possible or feasible in your task).
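One subtlety worth noting: the post computes each group's std with numpy's default (population) standard deviation, while the combination formula applies Bessel's correction, so the running estimate is only approximate. If each group's statistics instead use the sample (Bessel-corrected) standard deviation, the combination is exact. Here is a quick stdlib-only check of that, assuming the same combine_mean_std formula as the post; the short variable names and the concrete arrays are my own:

```python
import math
import statistics

def combine_mean_std(mean1, std1, n1, mean2, std2, n2):
    # Same combination formula as in the post, written with short names.
    n = n1 + n2
    mean = (mean1 * n1 + mean2 * n2) / n
    var = ((n1 - 1) * std1 ** 2 + (n2 - 1) * std2 ** 2
           + n1 * (mean - mean1) ** 2 + n2 * (mean - mean2) ** 2) / (n - 1)
    return mean, math.sqrt(var), n

a = [1.0, 4.0, 4.0, 7.0]
b = [2.0, 2.0, 9.0]

# Per-group statistics, using the sample (Bessel-corrected) standard deviation.
m, s, n = combine_mean_std(statistics.mean(a), statistics.stdev(a), len(a),
                           statistics.mean(b), statistics.stdev(b), len(b))

both = a + b
print(m, s, n)
print(statistics.mean(both), statistics.stdev(both), len(both))
```

The two printed lines agree to floating-point precision, since nothing is lost when the per-group and combined estimators use the same correction.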
http://blog.adeel.io/2019/02/05/calculating-running-estimate-of-mean-and-standard-deviation-in-python/
I). Configuration – using the flags

The configuration for nova services – global or specific to a service – is all done with configuration files that can be overridden on the command line, taking advantage of python-gflags to make it all work nicely. I didn't know much about the flags system, so I dug around in the python-gflags project. They have the documentation for how to use gflags in the code itself.

To summarize: Python modules in the codebase can define and use flags, and there is a general nova flags file that holds the cross-service (common) configuration settings. Nova defaults to looking for its configuration in a nova.conf file in the local directory. Failing that, it looks for the nova.conf file in /etc/nova/nova.conf. Where it looks for the configuration file can be overridden (typically on the command line) with --flagfile and a location to a config file. The code that makes this happen is nova.utils.default_flagfile().

To use the configuration from within code, you typically instantiate the global flags, add any flag definitions (with default values) that you care to add, and then use 'em! Here's a code snippet example:

# import the nova wrapper around python gflags
# .. there's some interesting wrapping for taking in arguments
# .. and passing along extra values to your code
# .. and it's where the global flag definitions reside
from nova import flags

FLAGS = flags.FLAGS  # get the global instance
# .. this attempts to read /etc/nova/nova.conf for flags

# You can define an additional flag here if you need to...
flags.DEFINE_string('my_flag', 'default_value',
                    'human readable description of your flag')
# there's also flags.DEFINE_bool, flags.DEFINE_integer and more...

# And then you can use the flags
# .. the flags you defined show up as attributes on that FLAGS object
print FLAGS.my_ip

If you were writing a command-line script that took in flags and worked with them, you might do something like:

from nova import flags
from nova import utils

utils.default_flagfile()
flags.FLAGS(sys.argv)
GLOBAL_FLAGS = flags.FLAGS
# ... and on to the rest of your code

There is some good end-user documentation on how to find the flags. The gist is: if you want to know what flags are there, the easiest way is to hand in the flag --help or --shorthelp from the command line. That is how the gflags library is set up to tell you about the flags.

Update: After a little digging down a side passage, I noticed that service.py had some debugging code in it that iterated through all the set flags. You iterate directly on FLAGS (treating it as an iterable) and use FLAGS.get() to retrieve the set values:

logging.debug(_('Full set of FLAGS:'))
for flag in FLAGS:
    flag_get = FLAGS.get(flag, None)
    logging.debug('%(flag)s : %(flag_get)s' % locals())

Services

There are two types of services in Nova: system services and web services. The code to use and launch them is basically the same, and Nova has this all bundled into a general service architecture and code base. The reason that configuration is so important is that the nova service framework has a convention of knowing how to run a service based on flags from the framework. Here's a bit of example code of a service to illustrate what I'm talking about.

nova-exampleservice:

import eventlet
eventlet.monkey_patch()

import sys

from nova import flags
from nova import service
from nova import utils

if __name__ == '__main__':
    utils.default_flagfile()
    flags.FLAGS(sys.argv)
    service.serve()
    service.wait()

The convention starts off by using the name of the script invoked – in this case "nova-exampleservice". The scripts in bin/ (like nova-network) use this mechanism. This convention can be overridden, of course, but it does make things pretty straightforward once you know it. The key to this convention is that the code in nova.service looks in the configuration for a class to instantiate (expected to be a subclass of nova.manager.Manager) named after the service that was just invoked (this convention is in code under the nova.service.create() method).

For our example of nova-exampleservice, the service is going to look in the configuration for exampleservice_manager, expecting the value to be a class that it can load, that is a subclass of nova.manager.Manager, and that will be responsible for running the service. This code is invoked from service.serve() in our example above. Again, it looks for the flag "exampleservice_manager" and tries to load that class to do the work. An updated example that sets a default manager, which will attempt to load the class mymodule.exampleservice.ExampleServiceManager by default:

nova-exampleservice:

import eventlet
eventlet.monkey_patch()

import sys

from nova import flags
from nova import service
from nova import utils

if __name__ == '__main__':
    utils.default_flagfile()
    flags.FLAGS(sys.argv)
    flags.DEFINE_string('exampleservice_manager',
                        'mymodule.exampleservice.ExampleServiceManager',
                        'Default manager for the nova-exampleservice')
    service.serve()
    service.wait()

The manager class has two methods that you override to get your stuff done:

init_host
periodic_tasks

There are also some conventions around adding methods to your manager and invoking them using the service framework's RPC mechanism, which I'll dig into in another post.

Ref: Nova Developer Documentation
Ref: OpenStack Compute (Nova) Administration Manual
Ref: OpenStack Wiki: Unified Service Architecture

5 thoughts on "Spelunking Nova – flags and services"

I am a Java person and I have downloaded the source code of OpenStack and have installed it. I have played with virtualization before, so I have a fair idea; I also understand RabbitMQ, etc. The question I have for you is how to start understanding the code base. Where and how do I start? Is there an IDE to start with? If not, is there a place to start poking around in the code? Where do you recommend I start?

Hey Hari,
First, almost all of the code in the OpenStack project (and everything in the core) is in Python – so getting some base familiarity with Python is the initial step. There isn't an IDE or specific editor set up to dig around – many of the folks I know working on the project use vim or emacs. I use TextMate on my Mac to read the code, and almost all of the development is done on Ubuntu-based Linux distributions – so there are bash scripts to do "all in one" installs and such to get it working. I recommend getting it all running in an "all in one" setup and then reading and tracing code to understand what's on the inside. Well, that is what I'm doing anyway. You can run all of the nova stack, or all of the swift stack (compute and storage respectively), within a VM for the purposes of seeing what it does and light functional testing. The libraries that the code uses are the next challenge – I'm going to write more on the service framework, eventlet, and the "carrot" library, which enables the use of AMQP/RabbitMQ for communications, in another post. There are also the IRC channels (#openstack and #openstack-dev on FreeNode), Launchpad answers, the OpenStack forums, and the mailing list as avenues to ask questions. You will have much better luck in those venues with specific questions than with broader questions. Hope this helps get you started with OpenStack!

Thanks for your post; I have reproduced this blog.

Another snippet that folks may find useful when making command-line tools that interact with nova's database is:

Spoke too soon… It might be useful to add a Python search path. This chunk was also useful. This is what my template looks like now:
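For readers who just want a feel for the DEFINE_string / FLAGS pattern described above without installing python-gflags, the toy registry below mimics its shape in plain Python. To be clear, this is not the real gflags API — the class and method names here are my own — it only illustrates how module-level flag definitions and one shared FLAGS object interact:

```python
# Toy stand-in for the python-gflags pattern used by Nova (not the real library).
class _FlagValues:
    def __init__(self):
        self._defs = {}      # flag name -> (default, help text)
        self._values = {}    # flag name -> current value

    def define_string(self, name, default, help_text):
        # Register a flag and seed it with its default value.
        self._defs[name] = (default, help_text)
        self._values[name] = default

    def __getattr__(self, name):
        # Defined flags show up as attributes, like FLAGS.my_flag.
        try:
            return self.__dict__['_values'][name]
        except KeyError:
            raise AttributeError(name)

    def parse(self, argv):
        # Crude --name=value parsing, just enough to show the idea.
        rest = []
        for arg in argv:
            if arg.startswith('--') and '=' in arg:
                name, value = arg[2:].split('=', 1)
                if name in self._defs:
                    self._values[name] = value
                    continue
            rest.append(arg)
        return rest

FLAGS = _FlagValues()  # one shared registry, like flags.FLAGS

# Any module can register flags against the shared registry...
FLAGS.define_string('exampleservice_manager',
                    'mymodule.exampleservice.ExampleServiceManager',
                    'Default manager for the example service')

# ...and a script parses argv once, then reads flags as attributes.
FLAGS.parse(['nova-exampleservice', '--exampleservice_manager=other.Manager'])
print(FLAGS.exampleservice_manager)  # → other.Manager
```

The real library adds typed flags, help generation, and flagfile support, but the module-level-definition plus shared-registry shape is the part Nova's conventions lean on.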
https://rhonabwy.com/2011/06/30/spelunking-nova-flags-and-services/
The Java Tutorials have been written for JDK 8. Examples and practices described in this page don't take advantage of improvements introduced in later releases.

StringBuilder objects are like String objects, except that they can be modified; if you need to build up or change a sequence of characters, using a StringBuilder object is more efficient. The StringBuilder class, like the String class, has a length() method that returns the length of the character sequence in the builder. For example, the following code

// creates empty builder, capacity 16
StringBuilder sb = new StringBuilder();

// adds 9 character string at beginning
sb.append("Greetings");

will produce a string builder with a length of 9 and a capacity of 16.

The StringBuilder class has some methods related to length and capacity that the String class does not have. A number of operations (for example, append(), insert(), or setLength()) can increase the length of the character sequence in the string builder so that the resultant length() would be greater than the current capacity(). When this happens, the capacity is automatically increased.

The principal operations on a StringBuilder that are not available in String are the append() and insert() methods. Note: You can use any String method on a StringBuilder object by first converting the string builder to a string with the toString() method of the StringBuilder class, then converting the string back into a string builder using the StringBuilder(String str) constructor.

The StringDemo program that was listed in the section titled "Strings" is an example of a program that would be more efficient if a StringBuilder were used instead of a String. StringDemo reversed a palindrome. Here, once again, is its listing:

public class StringDemo {
    public static void main(String[] args) {
        String palindrome = "Dot saw I was Tod";
        int len = palindrome.length();
        char[] tempCharArray = new char[len];
        char[] charArray = new char[len];

        // put original string in an array of chars
        for (int i = 0; i < len; i++) {
            tempCharArray[i] = palindrome.charAt(i);
        }

        // reverse array of chars
        for (int j = 0; j < len; j++) {
            charArray[j] = tempCharArray[len - 1 - j];
        }

        String reversePalindrome = new String(charArray);
        System.out.println(reversePalindrome);
    }
}

Running the program produces this output:

doT saw I was toD

To accomplish the string reversal, the program converts the string to an array of characters (first for loop), reverses the array into a second array (second for loop), and then converts back to a string. If you convert the palindrome string to a string builder, you can use the reverse() method in the StringBuilder class. It makes the code simpler and easier to read:

public class StringBuilderDemo {
    public static void main(String[] args) {
        String palindrome = "Dot saw I was Tod";
        StringBuilder sb = new StringBuilder(palindrome);
        sb.reverse();  // reverse it
        System.out.println(sb);
    }
}

Running this program produces the same output:

doT saw I was toD

Note that println() prints a string builder, as in:

System.out.println(sb);

because sb.toString() is called implicitly, as it is with any other object in a println() invocation.

Note: There is also a StringBuffer class that is exactly the same as the StringBuilder class, except that it is thread-safe by virtue of having its methods synchronized. Threads will be discussed in the lesson on concurrency.
https://docs.oracle.com/javase/tutorial/java/data/buffers.html
#include <sys/types.h>

pid_t tcgetpgrp (fildes);
int fildes;

int tcsetpgrp (fildes, pgrp_id);
int fildes;
pid_t pgrp_id;

tcgetpgrp, tcsetpgrp - get and set the foreground process group ID

These routines identify and set the foreground process group ID when job control is defined. The tcgetpgrp function returns the value of the process group ID of the foreground process group associated with the terminal. tcgetpgrp is allowed from a process that is part of a background process group; however, this information can be changed by a process that is part of the foreground process group by means of the tcsetpgrp call.

tcsetpgrp sets the foreground process group ID associated with the terminal to the value of pgrp_id. fildes must be the file associated with the controlling terminal of the calling process. The controlling terminal must also be currently associated with the session of the calling process. The value of pgrp_id must match a process group ID of a process in the same session as the calling process. Any other value of pgrp_id causes an error.

tcsetpgrp returns a value of zero upon success. Otherwise, -1 is returned and errno is set to indicate the error. If any of the following conditions occur, tcsetpgrp returns -1 and sets errno to the corresponding value:

IEEE POSIX Std 1003.1-1990 System Application Program Interface (API) [C Language] (ISO/IEC 9945-1) and X/Open Portability Guide, Issue 3, 1989.
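On the Python side, the os module wraps these same calls as os.tcgetpgrp and os.tcsetpgrp (POSIX platforms only), which gives a quick way to poke at the semantics from a script. The helper below is my own sketch, and it guards for the case where the descriptor has no controlling terminal (e.g. redirected stdin), in which case it returns None instead of calling into the terminal:

```python
import os

def foreground_pgrp(fd=0):
    """Return the foreground process group of fd's terminal, or None."""
    if not os.isatty(fd):
        return None  # fd is not a terminal, so there is no foreground group
    # Same semantics as the C tcgetpgrp(): the process group currently
    # in the foreground on the terminal referred to by fd.
    return os.tcgetpgrp(fd)

pgrp = foreground_pgrp()
if pgrp is None:
    print("stdin is not a terminal")
else:
    # When run from an interactive shell, this is normally our own process
    # group, since the shell puts the running job in the foreground.
    print("foreground process group:", pgrp, "ours:", os.getpgrp())
```

Run interactively, the two numbers printed are usually equal; run with redirected input, the guard branch fires instead.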
http://osr507doc.xinuos.com/en/man/html.S/tcpgrp.S.html
So I am confused on why everyone measures just the echo duration and not the whole time the signal is sent out. I was wondering if anyone could elaborate this or show me what I am not understanding. Thanks for your help, and tell me if I am in the wrong place for this question. Code: Select all import RPi.GPIO as GPIO import time #________________SETUP_____________________________ GPIO.setwarnings(False) GPIO.setmode(GPIO.BCM) TRIG = 4 ECHO = 18 GPIO.setup(TRIG, GPIO.OUT) GPIO.setup(ECHO, GPIO.IN) #____________THIS_IS_INITIAL_BURST_________ # I would think this is where to start the initial timer. GPIO.output(TRIG, True) time.sleep(0.00001) GPIO.output(TRIG, False) #____________RECIEVES_INPUT_OF_BURST_______ while GPIO.input(ECHO) == False: #THIS IS WHERE I GET CONFUSED start = time.time() while GPIO.input(ECHO) == True: end = time.time() #________CALCULATES_TIME___________________ sig_time = end - start #________CALCULATES DISTANCE_______________ distance = sig_time / 0.000058 print('Distance: {} cm'.format(distance)) print(start) print(end) GPIO.cleanup()
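For what it's worth, the usual answer to the question above is that the HC-SR04 does the round-trip timing for you: after the trigger pulse, the sensor drives ECHO high at the moment it emits the ultrasonic burst and pulls it low when the reflection arrives, so the width of the ECHO pulse already is the out-and-back travel time. Timing from the trigger instead would only add the sensor's internal processing delay. A hardware-free sketch of the arithmetic (the 0.000058 s/cm constant in the script is just 2 divided by the speed of sound, ~34300 cm/s):

```python
# Convert an ECHO pulse width (seconds) into a one-way distance (cm).
SPEED_OF_SOUND_CM_S = 34300.0  # approximate, near room temperature

def distance_cm(pulse_width_s):
    # The pulse covers the round trip, so halve it for the one-way distance.
    return (pulse_width_s * SPEED_OF_SOUND_CM_S) / 2.0

# The script above divides by 0.000058 s/cm, which is the same constant:
# 2 / 34300 cm/s ≈ 0.0000583 seconds of pulse per cm of distance.
for width in (0.000583, 0.00583, 0.0583):
    print(round(distance_cm(width), 1), "cm")
```

So measuring only the ECHO pulse width is correct by design; the trigger just tells the sensor to start a measurement.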
https://www.raspberrypi.org/forums/viewtopic.php?p=1491948
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project. On Thu, Jun 9, 2016 at 12:53 AM, Jonathan Wakely <jwakely@redhat.com> wrote: > On 08/06/16 22:12 -0700, Tim Shen wrote: >> >> I just realized that <type_traits> doesn't define aliases like >> is_same_v, is_default_constructible_v, etc. Are they going to be in >> C++17? Should I add them to <type_traits> as well? > > > Yes, C++17 adds those to namespace std. > > I think Ville was talking about doing some of that work, please > co-ordinate with him so you don't duplicate effort - thanks! Hi Ville, Is there a status for your *_v type traits work? Thanks!
https://gcc.gnu.org/ml/libstdc++/2016-06/msg00010.html
I faced a problem with solving the following task: we have a list of elements (integers), and we should return a list consisting of only the non-unique elements in this list, without changing the order of the list. I think the best way is to delete or remove all unique elements. Take note that I have just started to learn Python and would like only the simplest solutions. Here is my code:

def checkio(data):
    for i in data:
        if data.count(i) == 1:  # if the element is seen in the list just once, we delete it
            ind = data.index(i)
            del data[ind]
    return data

Your function can be made to work by iterating over the list in reverse:

def checkio(data):
    for index in range(len(data) - 1, -1, -1):
        if data.count(data[index]) == 1:
            del data[index]
    return data

print(checkio([3, 3, 5, 8, 1, 4, 5, 2, 4, 4, 3, 0]))
[3, 3, 5, 4, 5, 4, 4, 3]

print(checkio([1, 2, 3, 4]))
[]

This works because it only deletes numbers in the section of the list that has already been iterated over.
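Since list.count inside a loop rescans the whole list on every iteration, another common approach is to count everything once with collections.Counter and keep only the elements that appear more than once. This preserves order, runs in linear time, and avoids mutating the list while traversing it:

```python
from collections import Counter

def checkio(data):
    # Count every element once, then keep only the non-unique ones in order.
    counts = Counter(data)
    return [x for x in data if counts[x] > 1]

print(checkio([3, 3, 5, 8, 1, 4, 5, 2, 4, 4, 3, 0]))  # [3, 3, 5, 4, 5, 4, 4, 3]
print(checkio([1, 2, 3, 4]))  # []
```

It also has the side benefit of returning a new list rather than destroying the caller's input.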
https://codedump.io/share/NLQMA1JkW420/1/delete-unique-elements-from-a-list
Created on 2008-08-09 02:26 by daishiharada, last changed 2020-01-12 20:44 by miss-islington. This issue is now closed.

I am testing python 2.6 from SVN version: 40110. I tried the following, based on the documentation and example in the ast module. I would expect the second 'compile' to succeed also, instead of throwing an exception.

Python 2.6b2+ (unknown, Aug 6 2008, 18:05:08)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> a = ast.parse('foo', mode='eval')
>>> x = compile(a, '<unknown>', mode='eval')
>>> class RewriteName(ast.NodeTransformer):
...     def visit_Name(self, node):
...         return ast.copy_location(ast.Subscript(
...             value=ast.Name(id='data', ctx=ast.Load()),
...             slice=ast.Index(value=ast.Str(s=node.id)),
...             ctx=node.ctx
...         ), node)
...
>>> a2 = RewriteName().visit(a)
>>> x2 = compile(a2, '<unknown>', mode='eval')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: required field "lineno" missing from expr

This is actually not a bug. copy_location does not work recursively. For this example it's more useful to use the "fix_missing_locations" function, which traverses the tree and copies the locations from the parent node to the child nodes:

import ast

a = ast.parse('foo', mode='eval')
x = compile(a, '<unknown>', mode='eval')

class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        return ast.Subscript(
            value=ast.Name(id='data', ctx=ast.Load()),
            slice=ast.Index(value=ast.Str(s=node.id)),
            ctx=node.ctx
        )

a2 = ast.fix_missing_locations(RewriteName().visit(a))

I am reopening this as a doc bug because RewriteName is a copy (with 'ast.' prefixes added) of a buggy example in the doc. The bug is that the new .value Name and Str attributes do not get the required 'lineno' and 'col_offset' attributes. As Armin said, copy_location is not recursive and does not fix children of the node it fixes. Also, the recursive .visit method does not recurse into children of replacement nodes (and if it did, the new Str node would still not be fixed). The fix could be to reuse the Name node and add another copy_location call; the following works:

def visit_Name(self, node):
    return ast.copy_location(
        ast.Subscript(
            value=node,
            slice=ast.Index(value=ast.copy_location(
                ast.Str(s=node.id), node)),
            ctx=node.ctx), node)

but I think this illustrates the comment in the fix_missing_locations() entry that locations are "tedious to fill in for generated nodes". So I think the doc fix should use Armin's version of RewriteName and say to call fix_missing_locations on the result of .visit if new nodes are added. (I checked that his code still works in 3.5.) The entry for NodeTransformer might mention that .visit does not recurse into replacement nodes.

The missing lineno error came up in this python-list thread:

I re-verified the problem, its presence in the doc, and the fix with 3.9.

New changeset 6680f4a9f5d15ab82b2ab6266c6f917cb78c919a by Pablo Galindo (Batuhan Taşkaya) in branch 'master': bpo-3530: Add advice on when to correctly use fix_missing_locations in the AST docs (GH-17172)

New changeset e222b4c69f99953a14ded52497a9909e34fc3893 by Miss Islington (bot) in branch '3.7': bpo-3530: Add advice on when to correctly use fix_missing_locations in the AST docs (GH-17172)

New changeset ef0af30e507a29dae03aae40459b9c44c96f260d by Miss Islington (bot) in branch '3.8': bpo-3530: Add advice on when to correctly use fix_missing_locations in the AST docs (GH-17172)
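As a self-contained illustration of the advice in this issue, here is the transformer written against the modern AST API (Python 3.9 and later), where a Subscript's slice is the expression itself and literals are ast.Constant nodes. fix_missing_locations supplies the lineno/col_offset fields that the freshly built nodes lack, after which the tree compiles and evaluates:

```python
import ast

class RewriteName(ast.NodeTransformer):
    def visit_Name(self, node):
        # Replace every name `foo` with `data['foo']`.
        return ast.Subscript(
            value=ast.Name(id='data', ctx=ast.Load()),
            slice=ast.Constant(value=node.id),
            ctx=node.ctx,
        )

tree = ast.parse('foo', mode='eval')
# Without fix_missing_locations, compile() raises
# "TypeError: required field 'lineno' missing from expr".
tree = ast.fix_missing_locations(RewriteName().visit(tree))
code = compile(tree, '<unknown>', mode='eval')
print(eval(code, {'data': {'foo': 42}}))  # 42
```

Note that NodeTransformer.visit does not descend into the replacement node, which is why the inner ast.Name(id='data') is not itself rewritten.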
https://bugs.python.org/issue3530
I want to write test scripts using Protractor, and I have to set up continuous integration for them using Jenkins. The tasks I need to perform are:

Kindly help me out with this.

I created a script, as follows:

# start selenium
./node_modules/protractor/bin/webdriver-manager start > /dev/null 2>&1 &

# wait until selenium is up
while ! curl &>/dev/null; do :; done

# run the build
grunt cibuild --force

# stop selenium
curl -s -L > /dev/null 2>&1
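The "wait until selenium is up" step polls a URL in a tight loop. The same idea can be sketched in Python with a socket-level readiness check and an overall timeout, which avoids hammering the service and gives up cleanly; the function name, the polling defaults, and the use of Selenium's conventional port 4444 below are my own choices, not anything mandated by Protractor or Jenkins:

```python
import socket
import time

def wait_for_port(host, port, timeout=10.0, interval=0.2):
    """Poll until something accepts TCP connections on host:port, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # If the connect succeeds, the service is listening; close and report.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(interval)  # not up yet, back off briefly and retry
    return False

# Example: wait for a local Selenium server before kicking off the build.
if wait_for_port("127.0.0.1", 4444, timeout=1.0):
    print("selenium is up")
else:
    print("gave up waiting for selenium")
```

In a Jenkins job, a non-zero exit from such a check is a convenient way to fail the build early with a clear message instead of letting the test run hang.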
https://www.edureka.co/community/7110/integration-of-protractor-using-jenkins?show=7117
No, Python is not a large snake and Sublime Text is not an obscure reference to the 90's ska-punk band (at least not in this context). Rather, prepare to be amazed at the wonders of plugin development and making programs work for you.

Introduction

I will do my best to outline key points in the creation process that (to me) were a bit fuzzy. (For those of you unfamiliar with ST3: it is "a sophisticated text editor for code, markup and prose," currently in beta and set to succeed ST2. Please visit the Sublime Text website for more information.)

ST3 does have documentation, referenced in the links below, but much of it is plain text and can be rather hard to dig through when starting out. Like anything, there is a vocabulary that you need to pay close attention to in order to learn the more advanced aspects. I started out with the ST2 Plugin Tutorial (which this tutorial will follow) but discovered some of the methods used were deprecated and caused errors when running the plugin. I honestly learned a lot just from reading the Python 3 documentation and referencing the API, and am hoping to pass that along to you.

Step 1 – Starting Out

All plugins start the same way: "hey, wouldn't it be nice if this wonderful application did … ." That was my thought exactly when speaking with a colleague about HTML formatting for emails using Premailer. The objective was simple: create a one-step process that sends the CSS and HTML to Premailer and returns the HTML code inlined with all the styles. This is as simple as sending a POST request and receiving the response. I was very familiar with how to do this using cURL in PHP, but wasn't so sure how Python handled those kinds of requests. The results were surprisingly simple.

One thing you will want to do when starting out is pick a unique but descriptive name. If you plan on submitting this to the ST3 plugin repository, you will want something that depicts your plugin's function but also sets it apart from everything else out there.
For my example, I used the name "cnPremailer."

Step 2 – Using ST3's Plugin Creator

To begin, open ST3, click the "tools" menu and select "New Plugin." This will open the new plugin template for editing. Prior to saving, you will want to create a folder for your plugin in the following location (where "Username" is your actual user name):

- (OSX): /Users/"Username"/Library/Application Support/Sublime Text 3/Packages
- (WIN7): C:\Users\"Username"\AppData\Roaming\Sublime Text 3\Packages

After you've created your desired folder, you can save your new plugin as a .py file (ex: cnpremailer.py) in that location. Your plugin is immediately included as active (due to the import references at the beginning of the code). ST3 will also scan and validate your plugin. If there are errors, they will show in the console. You will want to open the console at this point so you know what's going on behind the scenes. You can do this by opening the "view" menu and clicking "show console" (or using the built-in keyboard shortcut defined on the menu). You will then see the console at the bottom of the screen.

The default plugin code is as follows (at the time of writing):

import sublime, sublime_plugin

class ExampleCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.insert(edit, 0, "Hello, World!")

Its function is simple: the line "Hello, World!" is prepended to the beginning of the document you are viewing. To see it in action, you will need to open the console and type:

view.run_command('example')

This will execute your class and do the requested action. You'll notice that ST3 automatically parses the class name "ExampleCommand" as a command to execute, since it has the text "Command" appended to the end. Once ST3 recognizes this, it strips the "Command" string off of the end and takes the beginning as the actual name in lower case (ex: ExampleCommand = example). Multiple capitals will result in underscore separators between words.
In my case, cnPremailer becomes cn_premailer. Instead of having to type an underscore every time, I removed the capital and went with CnpremailerCommand as my class name instead. For my example I will quickly rename the class and then save:

import sublime, sublime_plugin

class CnpremailerCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.insert(edit, 0, "Hello, World!")

Regardless of how you decide to format your class name, you will need to make note of it for the next step.

Step 3 – Creating Menus and Key Bindings

ST3 actually makes it very easy to include shortcuts to run your plugin via menus or key bindings (e.g. cmd + shift + p). The former is a bit easier than the latter in that key bindings require OS-specific setup, but it's not overly complicated and is fairly flexible. To create a menu entry you first need to decide where you want it to be viewed. You have three choices that have corresponding file names:

- Main.sublime-menu
- Side Bar.sublime-menu
- Context.sublime-menu

"Main" is ST3's menu bar, "Side Bar" is when you right-click a document or folder on the left side of the application (not shown when opening individual files), and "Context" is the right-click menu over the text area. These all follow the same format, and to add an entry you simply need to add the following into the file and then save it with the appropriate name above. Here is a sample:

[
    {
        "id" : "cnPremailer",
        "caption" : "cnPremailer",
        "command" : "cnpremailer"
    }
]

As you can see, it's a basic JSON array with some simple values.

- "id" is a basic menu id for your entry
- “id” is a basic menu id for your entry
- “caption” is a display label (will default to the command name if not provided)
- “command” is the case-sensitive command derived from your class name (mentioned above)

You can also make submenu items by using the “children” entry like this:

[
    {
        "id": "cnPremailer",
        "caption": "cnPremailer",
        "children":
        [
            { "caption": "run", "command": "cnpremailer" }
        ]
    }
]

Keep in mind that these need to be properly formatted arrays, so if you add an extra comma (or forget to add one) the menus will cease to work and your plugin will throw errors when ST3 automatically compiles it.

The key bindings are similar but need to follow this file-naming structure in order for them to work in the matching operating systems:

- Default (Windows).sublime-keymap
- Default (OSX).sublime-keymap
- Default (Linux).sublime-keymap

If you don’t wish to make your plugin key bindings available to other OSs, or are just creating a plugin for your own personal use, you don’t need to define them all. You can simply create the file that matches the OS you are using. Then, it’s as simple as adding a JSON string to the file to map the appropriate keys you wish to use. You may want to consult the list of default key bindings to make sure yours isn’t taken. This is located on the menu bar:

- (OSX) “Sublime Text” > “Preferences” > “Key Bindings – Default”
- (Windows) “Preferences” > “Key Bindings – Default”

You will also want to check the other plugins you have installed to see what bindings they are using so you can avoid conflicts. I used the following for OSX, since the colleague I was designing this for did not use Windows or Linux:

[
    { "keys": ["super+shift+c"], "command": "cnpremailer" }
]

For OSX, “super” is used for the command key. After that I just list the command and save. You’ll also notice that ST3 automatically adds the key binding to your menus (if you’ve created any). That way plugin users can reference it should they forget.
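The warning above about malformed arrays is easy to guard against before ST3 ever compiles the file: a strict JSON parse catches a stray or missing comma immediately. This is a minimal sketch with a helper name of my own invention; ST3’s actual parser is somewhat more lenient than strict JSON, but anything that passes this check will load cleanly:

```python
import json

def check_menu_file(text):
    """Sanity-check a .sublime-menu file: parse it as strict JSON and
    confirm the top level is an array of entries, each of which has a
    'command' or a 'children' key."""
    try:
        data = json.loads(text)
    except ValueError as exc:
        return False, "JSON error: %s" % exc
    if not isinstance(data, list):
        return False, "top level must be a JSON array"
    for entry in data:
        if "command" not in entry and "children" not in entry:
            return False, "entry needs a 'command' or 'children' key"
    return True, "ok"

menu = '[ { "id": "cnPremailer", "caption": "cnPremailer", "command": "cnpremailer" } ]'
ok, msg = check_menu_file(menu)
print(ok, msg)                          # True ok
print(check_menu_file(menu + ",")[0])   # False -- a stray comma breaks the file
```

Running a check like this after every edit saves a round trip through the ST3 console error messages.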
Now that you have your basic plugin saved, menus configured and keybindings added, you are ready to start the real fun. Stay tuned for part 2 – Manipulating the text area and sending data.
https://cnpagency.com/blog/creating-sublime-text-3-plugins-part-1/
On 1/19/2011 9:11 PM, Glyph Lefkowitz wrote:
> On Jan 20, 2011, at 12:02 AM, Glenn Linderman wrote:
>> But for local code, having to think up an ASCII name for a module
>> rather than use the obvious native-language name, is just
>> brain-burden when creating the code.
>
> Is it really? You already had to type 'import', presumably if you can
> think in Python you can think in ASCII.

There is a difference between memorizing and typing keywords, and inventing new names in non-native scripts. It is hard enough to invent all the names in one's native language; being restricted to inventing them, even some of them, in a non-native script such as ASCII is just brain-burden indeed.

> (After my experiences with namespace crowding in Twisted, I'm inclined
> to suggest something more like "import
> m_07117FE4A1EBD544965DC19573183DA2 as café" - then I never need to
> worry about "café2" looking ugly or "cafe" being incompatible :).)

Now if the stuff after m_ was the hex UTF-8 of "café", that could get interesting :) But now you are talking about automating the creation of ASCII file names from the actual non-ASCII names of the modules, or something. Sadly, the module is not required to contain its name, so if it differs from the filename, some global view or non-Python annotation would be required to create/maintain the mapping. [This paragraph is only semi-serious, like yours.]
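The half-joking scheme floated above (an ASCII alias derived from the non-ASCII module name) can be sketched by hex-encoding the name's UTF-8 bytes. The helper names here are my own; the point is only that such a mapping is mechanical and reversible, which is what the "hex UTF-8 of café" remark is getting at:

```python
def ascii_alias(name):
    """Derive a reversible ASCII identifier from a non-ASCII module
    name by hex-encoding its UTF-8 bytes, in the spirit of the
    'import m_<hex> as café' joke above."""
    return "m_" + name.encode("utf-8").hex()

def original_name(alias):
    """Invert ascii_alias: strip the prefix and decode the hex bytes."""
    return bytes.fromhex(alias[len("m_"):]).decode("utf-8")

alias = ascii_alias("café")
print(alias)                 # m_636166c3a9
print(original_name(alias))  # café
```

Since the alias is a pure function of the name, no "global view or non-Python annotation" would be needed to maintain the mapping; it can always be recomputed in either direction.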
https://mail.python.org/pipermail/python-dev/2011-January/107522.html
2008/11/13 Conal Elliott <conal at conal.net>:
>.

I was wondering about that, actually. The specialized Applicative instance mentioned in the paper adds an additional storage cell to the computation graph. I would expect that for simple computations, using the applicative instance could perform worse than "naive" monadic code.

For example, given three adaptive references "ref1", "ref2", and "ref3" :: Adaptive Int

do a <- ref1
   b <- ref2
   c <- ref3
   return (a+b+c)

This computation adds three "read" nodes to the current "write" output. But this code:

do (\a b c -> a + b + c) <$> read ref1 <*> read ref2 <*> read ref3

using (<*>) = the enhanced version of "ap" from the paper. I believe this would allocate additional cells to hold the results of the function partially applied to the intermediate computations. Overall I think this would not be a win in this case. But it is also easy to construct cases where it *is* a win.

I suppose you can allow the user to choose one or the other, but my general expectation for Applicative is that (<*>) should never perform *worse* than code that uses Control.Monad.ap

Is this the right understanding, or am I totally missing something from the Adaptive paper?

-- ryan
http://www.haskell.org/pipermail/haskell-cafe/2008-November/050624.html
The gentlebirth.org website is provided courtesy of Ronnie Falcao, LM MS, a homebirth midwife in Mountain View, CA

Humiliation - If you had a birth experience that involved humiliation, please contact researcher Belinda Diamond about her research project, "Humiliation in the Medical Setting and Its Relationship to PTSD". The e-mail domain is yahoo.com, and her username is diamondbelinda. Please make sure to put the words "birth trauma" in the subject line.

Postpartum depression may be caused by a variety of factors, but nurturing the mother can only help! It's very important to realize that this does not always mean "taking care of the baby so the mom can rest". The mom's strongest instincts are going to be nurtured if she is supported in caring for her own child. Obviously, if the mother is unable to connect with the baby during some periods of time, someone else may need to be caring for the baby. But taking over the baby's care can worsen a mother's feeling of inadequacy.

A fabulous book - MOTHER NURTURE - A Mother's Guide to Health in Body, Mind, and Intimate Relationships Mother Nurture offers

One of my very wise clients said that when she started to feel any kind of postpartum blues, she would then refocus on her children and bring the family circle closer together. This seems like a very good first remedy.

This study concludes that omega-3 fatty acids are less helpful than supportive psychotherapy, but if you aren't able to get supportive psychotherapy, the omega-3 fatty acids may still be a good idea. And we know they're good for baby's brain development.

Omega-3 fatty acids and supportive psychotherapy for perinatal depression: a randomized placebo-controlled study. Freeman MP, Davis M, Sinha P, Wisner KL, Hibbeln JR, Gelenberg AJ. J Affect Disord. 2008 Sep;110(1-2):142-8. Epub 2008 Feb 21. CONCLUSIONS: There was no significant difference between omega-3 fatty acids and placebo in this study in which all participants received supportive psychotherapy.
The manualized supportive psychotherapy warrants further study. The low intake of dietary omega-3 fatty acids among participants is of concern, in consideration of the widely established health advantages in utero and in infants. A new paradigm for depression in new mothers: the central role of inflammation and how breastfeeding and anti-inflammatory treatments protect maternal mental health Kathleen Kendall-Tackett International Breastfeeding Journal 2007, 2:6 doi:10.1186/1746-4358-2-6 Breastfeeding fights depression 08 May 2007 International Breastfeeding Journal 2007; 2: 6 MedWire News: Breastfeeding can help new mothers fight depression, research shows. Kathleen Kendal-Tackett (University of New Hampshire) says that depression is common among new mothers, and affects anywhere from 10 percent to 20 percent of postpartum women. "Since depression has devastating effects on mother and baby, it's vital that it be identified and treated promptly," she adds. Kendal-Tackett says that new mothers experience an increase in inflammation due to high levels of pro-inflammatory cytokines. Common experiences associated with new motherhood such as disturbed sleep and postpartum pain can also act as stresses that cause pro-inflammatory levels to rise, she says. Breastfeeding can reduce women's stress levels so that their inflammatory response systems remain inactive. This then reduces their risk of depression. But Kendal-Tackett notes this is only true when breastfeeding is "going well." "When breastfeeding is not going well, particularly if there is pain, it becomes a trigger to depression rather than something that lessens the risk." She concludes: "Mother's mental health is yet another reason to intervene quickly when breastfeeding difficulties arise." Dietary folate and vitamins B(12), B(6), and B(2) intake and the risk of postpartum depression in Japan: The Osaka Maternal and Child Health Study. 
Miyake Y, Sasaki S, Tanaka K, Yokoyama T, Ohya Y, Fukushima W, Saito K, Ohfuji S, Kiyohara C, Hirota Y; The Osaka Maternal and Child Health Study Group. J Affect Disord. 2006 Jun 29 CONCLUSIONS: Our results suggest that moderate consumption of riboflavin may be protective against postpartum depression. Postpartum treatment key for depression: study - Jul 6/05 -- The key to preventing postpartum depression may be individual support provided after birth by a health professional and tailored to a mother's needs, says a University of Toronto researcher. ." Contact: Cindy-Lee Dennis, Faculty of Nursing, (416) 946-8608; e-mail: cindylee.dennis@utoronto.ca Psychosocial and psychological interventions for prevention of postnatal depression: systematic review. Dennis CL. BMJ. 2005 Jul 2;331(7507):15. CONCLUSIONS: Diverse psychosocial or psychological interventions do not significantly reduce the number of women who develop postnatal depression. The most promising intervention is the provision of intensive, professionally based postpartum support. Evaluation of the Edinburgh Post Natal Depression Scale using Rasch analysis. Pallant JF, Miller RL, Tennant A. BMC Psychiatry. 2006 Jun 12;6:28. Psychological interventions fail to prevent postnatal depression - " . . . intensive postpartum support provided by a healthcare professional showed a clear preventive effect." Baby Blues Mood Remedy from Cascade Depression After Delivery (Postpartum Depression) - this has an easy-to use detailed chart to help you evaluate your own postpartum depression. A new. Beck Depression Inventory - a sytem of assessing level of depression that is considered much more accurate than the Edinburgh PDS. Edinburgh Postnatal Depression Scale (EPDS) Beyond the Blues by Pec Indman EdD, MFT is a concise and up-to-date book for women needing help with prenatal (pregnancy) and postpartum depression and anxiety. 
- Handouts and Short Articles on Depression in New Mothers
- Resources For Prenatal and Postpartum Mood and Anxiety Disorders from Pec Indman EdD, MFT
- Help, information, and support for prenatal and postpartum illness
- What Is Postpartum Depression? by Pec Indman EdD, MFT
- Pregnancy and Postpartum Myths by Pec Indman EdD, MFT
- Support for Postpartum Dad by Pec Indman EdD, MFT
- Perinatal Medical Updates by Pec Indman EdD, MFT - With the right treatment, you will be well again.
- Omega 3 Fatty Acid for treatment of prenatal and postpartum depression
- Articles by Pec - helping you cope in pregnancy and postpartum
- Knowledge Path: Postpartum Depression has been compiled by the Maternal and Child Health (MCH) Library at Georgetown University.
- MedEdPPD - Comprehensive resources about postpartum depression. http:
- Postpartum Stress Center - The Center was established to provide a better understanding and comprehensive clinical intervention for any woman who suffers from the range of postpartum psychiatric disorders.
- postpartumprogress.typepad.com/ - An award-winning blog by a survivor and board member of Postpartum Support International.
- Katherine Wisner discusses some of the medical issues surrounding the correct treatment for postpartum depression. (video)
- Postpartum Distress Support from Postpartum Education for Parents in Santa Barbara.
- PRENATAL & POSTPARTUM 'BLUES', DEPRESSION, STRESS & ANXIETY from radiantmother.com
- PSI: Postpartum Support International (They used to be Depression after Delivery)
- The Postpartum Depression Center of San Antonio
- Tips for Addressing the Baby Blues by Lois V. Nightingale, Ph.D.
- Notes from a Talk on Postpartum Depression
- Sacred Window Ayurveda for Mothers and Children, where You May Discover How to Avoid Depression and Colic And How to Enjoy Profound Rejuvenation!

Women are encouraged to e-mail a letter of hope and recovery to add to this collection. Thank you!
Postpartum Depression - Overview article [12/4/07] [Medscape registration is free] PSI: Postpartum Support International. What Does Safe Motherhood Mean? - Safe Motherhood means that no woman should die or be harmed by pregnancy or birth. Emotional Recovery After Birth - a homeopathic approach Pregnancy blues worsen with age [10/12/04] - Older mothers are more anxious during their pregnancy and less likely to have the social support younger mothers enjoy, a study has found. But they are also less starry-eyed about parenthood, and will perhaps make better mothers for it. Postnatal Depression or Childbirth Trauma? - For some women, childbirth has all the characteristics we associate with trauma. Professionals should abandon the easy assumption that childbirth is not traumatic just because it is within the normal range of experience, and adjust their treatment accordingly. The Effect of Birth Experience on Postpartum Depression by Michelle A. Bland The Effect of Birth Experience on Postpartum Depression: A Follow-Up Study by Michelle A. Bland Postpartum Depression Handout Puerperal Psychosis/Postnatal Depression Tahlor Dawn's Homepage The Postpartum Stress Center in Rosemont, Pennsylvania - Clinical Director is Karen Kleiman, MSW, author of "This Isn't What I Expected: Overcoming Postpartum Depression" This brief article on Deep Tissue Vaginal Massage has some really good information for women experiencing physical or emotional pain in their genitals after birth. The Mystery of Postpartum Depression - By Pamela Gerhardt, Washington Post, Tuesday, March 14, 2000 Great set of links about birth-related psychology and emotions 1. As a Functional Medicine oriented Chiropractor, I am seeing more frequent women with pre and post pregnancy sex hormone, adrenal and thyroid hormone imbalances. This would in and of itself predispose to increase PTSD. 
I do not however attribute the correlation with higher PTSD to un-medicated childbirth, but to imbalances in measurable levels of 2-hydroxyestrones and 16-alpha-hydroxyestrones, which predispose to and are associated with depression. This is becoming more common, according to research, because of exposure to hormonal disruptors in the environment such as pesticides, BPA in plastic exposure, exposure to GMO foods, and even "hidden environmental" exposures like outgassing from new carpets and materials in new buildings.

2. I'm seeing more frequent and previously undetected Hashimoto's thyroid disorders, partially I suspect from the above environmental and personal exposures, as well as the fact that often by the time the woman sees me, they have been frustrated because they have seen their primary care physician or OB-Gyn who does not have a Functional Medicine approach, and they have had "all the tests, but they came back normal", but the doctor didn't test a total thyroid panel including TSH, totT4, totT3, freeT3, reverseT3, TPO and Thyroglobulin Antibodies. Therefore they are literally being "Miss Diagnosed": they are missing the real diagnosis! They are erroneously being told nothing's wrong, but they know something is, and then they are put on antidepressants because the doctor attributes it to being "all in their head" or to depression!

3. The fact that adrenal dysregulation problems are associated with depression and are often the underlying cause of many PTSD issues. But again many are being "Miss Diagnosed". Adrenal gland dysregulation is not even acknowledged unless the standard methods of blood testing show up abnormal, but checking adrenal function via standard blood tests misses all but the most severe disorders of adrenal hormone dysregulation. And "Adrenal Exhaustion" is still thought of as not a real condition by most conventional medical doctors (unless they have been trained in the Functional Medicine approach).
Adrenal dysregulation is becoming more prevalent for various reasons, including the hormonal disruptors noted previously. This will manifest often as depression. I talked about thoughts #2 and #3 in my free e-book available at http: 4. If the above factors are not identified before pregnancy, they will often be aggravated by the toll on a woman's body during pregnancy, and manifest post-pregnancy, especially in the first few months post-partum! Douglas Husbands, DC, CCN Holistic Health Bay Area (in Rivera Chiropractic Group) Traumatic birth as Legal Rape by sheila stubbs Not a happy birthday - Threatened, intimidated, bullied, violated: this is hospital birth as many mothers experience it. Amity Reed reports on the little-recognised crime of birth rape. See also: Post-Traumatic Stress Disorder Yahoo! group ptsdafterchildbirth for women who have gone through traumatic births. Yahoo! group BirthTraumaSupportGroup - A safe place for women who have had trauma during childbirth. "More than 25% of women may exhibit symptoms of post-traumatic stress disorder (PTSD) after childbirth, a new study reports." Overall, 25.9% of women in this study experienced some PTSD symptoms after giving birth. The authors noted that being a first-time mother did not predispose women to postpartum PTSD. A significant protective factor against PTSD was perceived social support. Although it is impossible to accurately predict which patients will experience symptoms of postpartum PTSD, the authors suggest that obstetricians can help mitigate these symptoms in patients who may be predisposed to this disorder by discussing options for analgesia during delivery early in the pregnancy. According to the authors, this is the first study to link postpartum PTSD with a discomfort with nakedness during delivery.- Sufficient social support was a protective factor against postpartum PTSD. Parasites of the Mind - A healing blog for PTSD awareness, education, treatment and self-empowered healing.. 
The Pink Kit folks offer an informational CD for women who experienced trauma from giving birth (TABS): Trauma and Birth Stress - PTSD After Childbirth PTSD After Childbirth Blog by Jodi Kluchar, who is a major activist in awareness of PTSD after childbirth. She has published a number of related articles. The Birth Trauma Association supports all women who have had a traumatic birth experience. Bad times make for more accurate memories "Pleas. " Research by Elizabeth Kensinger at Harvard. See also: Effects of Birth Practices on Breastfeeding Do you think there is a correlation between problems with being able breastfeed and traumatic birth/PTSD? For me, breastfeeding would trigger a replay of the birth experience. I dreaded and avoided breastfeeding, or at least as much as the mother and newborn can avoid a hungry infant. I was not able to establish a good supply, and I've even wondered if this is hormonal, specific to PTSD, somehow. Oxytocin is the hormone that is common to all of our reproductive functions: sexual arousal/intercourse; labor and birth; breastfeeding. A trauma in any one of these areas could result in an aversion to increasing levels of this hormone which accompany any of these functions. This could lead to an anxiety that increases adrenaline, which inhibits lactation, so . . . yes, PTSD could cause breastfeeding problems. This is similar to one of the mechanisms proposed for the relationship between inductions and autism: Post-Pregnancy Hysterectomy Childbirth and Narratives: How Do Mothers Deal with Their Child's Birth? by Paola Di Blasio and Chiara Ionio JOPPPAH 17(2), Winter, 2002, p. 143 "ABSTRACT: This. The results indicated a significant difference in the number of post traumatic stress symptoms between the two groups, underlining the positive effect of the emotional disclosure." 
How to Overcome a Disappointing Birth Experience By Kristi Patrice Carter Integrating A Difficult Birth from Karen Melton's site What if Your Homebirth Doesn't Happen at Grief within the Miracle Society for Women - This society has been created to give support and compassion to women who feel they have experienced trauma and abuse at the hands of professionals during the birth of their child or children. . . . If you feel any regret, pain, feelings of humiliation, visions of moments in birth that you cringe to remember but can't seem to forget...then you are in the right place. I have just received a copy of a book titled, "Reclaiming the Spirituality of Birth: Healing for Mothers and Babies" by Benig Mauger. It was released in USA in March/00, but was previously released in the UK in 1998 with the title "Songs of the Womb; Healing the wounded mother". I have only read the intro and 1st chapter so far, but it is quite good. "Recovery from Childbirth: An Emotional Healing" by Lynn Madsen - I give it to all my 2nd time or more moms if they even hint that they were less than totally happy with their prior birth experiences. [Ed: I'm not sure whether there was ever a book by this title, but there is a book by Lynn Madsen called Rebounding from Childbirth: Toward Emotional Recovery.] "Rebounding from Childbirth: Toward Emotional Recovery" by Lynn Madsen is a good book for those wishing to resolve issues related to their birth experiences. Topics include: Nancy Wainer is occasionally offering workshops on "Grieving and Healing After a Disappointing, Upsetting, or Traumatic Birth Experience" Contact (781) 449-2490 for more information. Psychotherapy Services for the Childbearing Year - Linda Cozzolino, M.Ed., CPM - Specializing in counseling for: Survivors of childhood abuse, now preparing for birth and parenting. Women or couples with traumatic previous birth experiences. Women or couples with high levels of fear of childbirth and/or parenting. 
BirthWorks Reading List - books to help women overcome the negative effects a difficult past can have on pregnancy and birth

See this recounting of Six Birth Stories for a true testimonial of overcoming birth trauma. The VBAC Companion by Diana Korte has a section on dealing with previous traumatic births.

Cesarean Art - for all the scarred mothers

Pregnant Feelings by Rahima Baldwin (and a friend?)

Transformation Through Birth by Claudia Panuthos

Keeping a journal of your thoughts is one thing. Writing letters (to mail or not) to everyone including yourself in which you share your feelings about the birth. A few books I like: Attend a Birth Works class if you are planning another pregnancy or if you are pregnant.

If your emotions about your birth seem to be overwhelming, consuming you nearly every moment of the day, something that can be helpful is to begin to confine those feelings and thoughts to certain parts of the day. For example, say to yourself, you can worry and think about this all day, EXCEPT from 9am-11am. That's when I am going to think about different stuff, do other things. If the thoughts start creeping in, you say, "No, I'm not going to think about that now, that's for later". Then, as time goes on, you expand your "clear" times, and shrink your grief time. This is at a pace that is right for you. It may feel good to feel that you have some control over your emotions. Eventually you can put them in a small box and deal with them as you need to.

When a bad birth haunts you by Sheila Kitzinger

Notes from Sheila Kitzinger Talk - "Crisis in the Perinatal Period".

I can't seem to help HATING myself for being "Stupid enough, at my age", to prevent/stop this from happening to me. Has anyone else tried to channel these feelings into something else? Something that may alleviate feelings like this?
I have been so angry for many years concerning my births that I want to stand at the hospital entrance and tell birthing moms that there is a better way; inside lies danger! Instead, I am a student midwife and educate everyone I can on hospital and medical interventions. I think if I could just save one woman and child from a horrid birth it would atone for mine. I always wished someone had told me how it really was BEFORE I gave birth. Now I make sure I tell whoever will listen. Don't get me wrong, I don't stand at the mall next to the Salvation Army guy with his ringing bell, stopping shoppers to tell them of the horrors of medicalized childbirth, but those who seem interested in my studies and those who are eager to talk of their own pregnancies open up a perfect window of opportunity!

Post questions or answers to the "Dear Midwife" forum at Pacific Northwest Midwives - send e-mail to: hdw4@msn.com

For a response from professionals that is particularly sensitive to the emotional issues around childbirth, check out the "Ask the Pros" section of the Association for Pre- & Perinatal Psychology & Health (APPPAH).

Midwifery Today Forums "Dear Midwife" Forum

You can ask questions from The Midwife Pro (Midge Jolly, LM, CPM) (formerly at Moms Online - oxygen.com)

The obgyn.net Women's Health Forum

There are also some midwives and doctors reading misc.kids.pregnancy who occasionally respond about specific cases. There are other references at Web Florida - Women and Their Health and T-net inc. On-Line Medical Advice - this is an extensive meta list! There are also some women's forums where you might be able to get information from other parents: Labor of Love Message Boards

We have it worked into our 2-3 week visit to ask similar questions: "Do you have any questions about what happened at your birth? How did you feel about your birth experience in general? What could we have done to improve our services?..."
This is a good time to get initial impressions and clear up any technical misunderstandings about what really happened during a birth, but I think a mom's overall feelings about a birth take much, much longer to process. If the birth has been traumatic the mother is probably working out her feelings about it long after our last official contact with her. Rather than ask the mother specific questions about the birth, I learned to do a birth review where I just guide the mother to tell me the whole story of the birth, as if I hadn't been there. It's amazing some of the feelings that come up when she has control of the story and where it goes. Some moms end up dealing with a lot more than they expected. Especially for a hospital transport, the drama doesn't help, even when the transport was warranted. Even unmedicated vaginal births can be a lot to process afterwards. Sometimes "tincture of time" is the best remedy. AND listening to her, so that she gets to tell her story (over and over is great) in order to make meaning out of it. Sometimes EMDR is helpful; I don't need to see someone more than a half-dozen or so times for that, if that is their preference. Sometimes, "counseling" sounds like it will be traumatic in itself. It really involves what you are already doing - listening with respect. I usually see the process as offering a doula to the feelings or thoughts. [Ed: The August, 2007, issue of the Birth Trauma Association newsletter has a great piece about EMDR on page 2.] can also teach "mindfulness based stress management" techniques. This can be helpful in sitting WITH the disappointment, regret, sadness, guilt, or whatever until that can be processed. Sometimes the feelings tie in with other life events that moms don't even realize are re-surfacing until they feel "into" the feelings and see what's there. Once again, it's really similar to birthing - riding the feeling as you would a contraction. 
(Since we are still paying off my student loan - I'm reluctant to say that my birth work has been at least as valuable as my masters - reluctant to say it to my husband anyway...)

You are probably already doing that for her. Letting her FEEL what she is feeling, and not pushing it away, is often the best remedy. Often then it will dissipate on its own with time. And your being patient while she does that is the gift. MA, MFTI, IBCLC

Here are some resources in the Silicon Valley area. STAR is a profoundly transformative 10-day personal growth retreat, a carefully structured program of accelerated self-healing and transformation.

Resource Directory Of Practitioners and Programs in Pre- & Perinatal Psychology & Health - Year 2001

I have heard that Phyllis Klaus offers phone counseling sessions for birth trauma. (She's one of the founders of Doulas of North America). 510-559-8000, phyllisklaus@aol.com

I was helped by EMDR - Eye Movement Desensitization and Reprocessing. Basically, by moving your eyes back and forth you can stimulate your brain to integrate right brain and left brain activity, which helps process memories. You can do EMDR while confronting highly emotional memories and it helps to release some of the emotion. Also, some counselors do EFT or Emotional Freedom Technique, which has a similar philosophy to acupuncture, except that you just apply pressure with your finger tips. The idea is that we are electrical as well as chemical and strong emotions cause electrical disturbances in the body. Applying EFT while saying affirmations or working through the trauma can release a great deal of emotion. I think massage, prayer, yoga, meditation are all wonderful -- anything that helps the nervous system relax. I think the major challenge for recovery is that it is so hard for mothers to find the time/money to take care of themselves!!!!!
Shekinah Birthing offers EMOTIONAL FREEDOM TECHNIQUES: EFT TELECLASSES FOR BIRTH TRAUMA

It can be difficult to find a mental health professional who understands the issues of birth trauma. It is essential to avoid working with someone who will deny that you were harmed by a negative birth experience. Here are some resources that might be helpful:

Center for Creative Growth

Finding a Therapist Outside the San Francisco Bay Area

DrScore is a web site that collects and displays ratings of doctors. You can contribute your ratings or read others' ratings.

RateMDs.com is changing the way the world looks at medicine by providing patients with the unique opportunity to rate and read about their doctors.

Unhappy With Your Maternity Care? File a Complaint! from Citizens for Midwifery

Privacy concerns may prevent a professional midwife or doula from filing a complaint about the quality of care at a birth, or even a hostile or punitive attitude towards a homebirth transport. One option is for the birth professional to write a letter of complaint and then have her client sign it.

In a situation where the parents are concerned about the care they were or are being provided, I suggest speaking with the Charge Nurse, hospital Ombudsman, hospital chaplain, Head of the department (OB or Peds), and also out in the wider world. The Nursing Board and the California Medical Board also have mechanisms of action to complain about care given.

How Complaints Are Handled from the Medical Board of California

Childbirth - The Rights of Childbearing Women from the Boston Women's Health Book Collective, Inc., reposted with permission from Childbirth Connection

It is so important to provide feedback to your care providers - how else will they ever learn what helped and what hindered your birth experience? I know it can be incredibly hard to look back on a difficult birth experience.
Who really wants to give another second's worth of energy to thinking about the physical trauma, the emotional trauma, the betrayal by seemingly benevolent care providers? Some women feel violated on every level - physical, mental, emotional and spiritual. Writing letters of complaint may even feel like a perpetuation of the trauma, much as victims of stranger assault can feel victimized by having to testify against their assailant. So . . . why bother? The answer is simple. You're doing it for your sisters, your cousins, your daughters, your nieces, and your eventual grand-daughters. If we don't start working now to change the system, your loved ones are likely to have the same horrible experience when they're ready to give birth. At the very least, simply write a letter expressing your disappointment that your experience was so different from what you were led to expect. And don't forget to write thank-you letters to the care providers who treated you well! I especially like to address the letters to an entire practice or to the entire nursing staff, giving honorable mention to those who deserve it. When it's addressed to multiple people, it's often posted in an employee lounge or somewhere where everyone will see it and wish they'd been nice enough to be mentioned. If you address it just to the nice people, they might be reluctant to show it about or post it publicly. If somebody had walked up to you on the street and done this, you can bet they'd be interested in pressing charges; being a doctor doesn't give somebody permission to assault you any more than a stranger on the street. In addition to writing a letter of "concern" to the relevant provider, it is essential that you send copies to public boards and medical societies. This is important because it's the only way to make sure the individual practitioner will ever have to worry about any consequences. 
And, perhaps more importantly, it will help put the fear of god/dess into every other OB who realizes that women really, really don't like having their bodies and their lives messed with against their will. Please, do use your energy to write letters and to help educate other women in the e-mail lists, Web forums and Usenet newsgroups instead of taking it out on yourself or your loved ones!

How to File a Complaint in California
How to Respond to Bad Hospital Treatment
Writing Letters to Caregivers About Your Birth Experience

These people DO investigate patient complaints. Complaints are taken from anyone who wishes to file one, and can be made without giving your name. It is best to give the patient's name and date of stay, as well as what happened that you think was wrong. Not all complaints result in a 'punishment'. However, the hospital is put through a 'wringer' every time the JC comes in, and the hospital will often 'fix' the problem before the JC comes in and destroy the evidence. This makes it harder to punish, but makes it easier for the next patient. Not all hospitals are accredited, and a simple call to the hospital administration receptionist will let you know if they are or not. Make sure you have the woman's permission before using her name in a complaint.

There is a great book by Christopher Norwood, "How To Avoid A Cesarean Section", wherein the author suggests what she believes is an effective alternative to suing: sending "detailed, written complaints to administration of the hospital, its chairman of obstetrics, the local and state health departments, and to the professional society regulating obstetricians in your area." She suggests that in the end the doctor will feel more pressure to rethink his cesarean decisions than if he/she was sued. This is a gentle, forthright book about vaginal birthing that is not outdated even though written in 1987.
Mom Writes Letter to OBs Who Did Unnecessary Cesareans

One of my favorite mild preventives or treatments for PPD is the herb Blessed Thistle. I like the non-alcoholic version from Tri-Light herbs; it's slightly sweet and can be used as a sweetener in tea if moms find it too sweet to take directly. It has the bonus of being a galactagogue, stimulating milk production. It's worked wonders for second-time moms who had PPD and difficulty breastfeeding a first baby. Motherwort may also work wonders for helping to induce a more mellow mood; I call it my "audio prozac".

Ellen Roos - Passion Flower Music - Songs that see and stir, love and forgive, lift, bless and free! Her first album is Lavender and Morning Sun.

Natural Progesterone for Post-partum Depression and PMS Psychosis - general dose is 20-60 mg/daily (applied anywhere on the body) for 2 weeks each month.

Counseling As Effective As Prozac for Postpartum Depression
Unnecessary Cesarean with General Anesthesia resulting in severe postpartum depression
Jenny's Tale - Saga of a Birth Gone Wrong or Yes, It Can Happen To You
Jenny Strikes Back - A Set of Letters and a Meeting about the Unnecessary Cesarean

I went to hear Anne Dunwald (?) speak a few years ago. She's got a book out now, available through Cascade's book catalog I believe. She's a psychologist who specializes in women's concerns and does lots of counseling for postpartum depression. From what I remember, postpartum psychosis involves major things like religious or satanic beliefs that are irrational, such as 'my baby was the devil and an angel told me to kill it' type of hallucinating, AND the woman is acting on such beliefs. Unless PPD has progressed to acting upon thoughts of harming herself or her baby, the speaker did not recommend hospitalization. I've known a mother with severe PPD. A hospital birth experience made her remember being raped many years before, and she believed that triggered her PPD.
It is true that PPD is more common in women who've had interventive births. It is also much more common in women with a history of depression. DHA, a long-chain omega-3 fatty acid, is effective in prevention of PPD according to researchers at the Children's Nutrition Research Center; however, their studies are not complete. DHA is found in fish oils, cold water fish, flaxseed oil, carrots, spinach, and some supplements. I might be able to copy the tape I have of Anne Dunwald's speech if you need it. Most all antidepressants can be safely taken by breastfeeding mothers. St. John's Wort might also be tried.

Child sexual abuse that has been buried deep in the psyche can sometimes be brought up during the powerful physical and emotional feelings in labor and birth. This also explains the "type A" personality traits of wanting to be perfect. This energy is sometimes used to keep those nasty memories at bay. Sexual abuse is so rampant that these days I'm more likely to believe it has occurred in a woman's past than not. This would be my first suspicion with a woman suffering this type of psychotic episode.

New - online support group - BirthTraumaSupport at egroups.com
Resources For Prenatal and Postpartum Mood and Anxiety Disorders from Pec Indman EdD, MFT
Help, information, and support for prenatal and postpartum illness
Postpartum Support International - Call the PSI Postpartum Depression Helpline: 1.800.944.4773
Postpartum Adjustment Support Services (PASS-CAN) Canadian Resource (905) 844-9009
The Postpartum Stress Center - Rosemont, PA Office, 610.525.7527; Voorhees, NJ Office, 856-745-8847
National Anxiety Foundation (for both Professional referral list and information resource) 1-800-755-1576
National Institute of Mental Health Hot Line about Panic (with info on support groups) 1-800-64-PANIC

For information about "safe" anti-depressants postpartum: Medical Professional Involved in Relevant Research on PPD.
MD who is known for her research on the use of Medication for PPD during breastfeeding. She is a good resource about Meds. KATHERINE WISNER - Pittsburgh Mind-Body Center - Professor of Psychiatry, OB/GYN, Epidemiology, and Women's Studies
I need to have 3 functions in addition to the main. I got a response before except it didn't have 3 functions. The 1st function asks the user how many hours were worked and the pay rate and returns this information to main. The second function calculates the regular hours and overtime hours and returns this information to main. The third function calculates the regular pay (regular hours times pay rate), overtime pay (overtime hours times 1.5 pay rate) and the total pay, and returns this information to main. I've been working on this for a couple weeks but can't figure out how to set up the 3rd function and return it to the main. Please help. I have errors in lines 65, 41, 27.

def ask_hours():
    hours = float(raw_input("How many hours did you work? "))
    rate = float(input("What is your rate of pay? "))
    return hours, rate

def findrate(hours, rate):
    if hours >= 40:
        hours1 = 40
        ot = hours - 40
    elif hours < 40:
        hours2 = hours
        ot1 = 0
    return hours1, ot, hours2, ot1

def getpaid(pay):
    if hours1 >= 40:
        pay = hours1 * rate
        ot3 = ot * 1.5
    elif hours1 < 40:
        pay = hours * rate
        ot3 = 0
    return pay, ot3

def main():
    hours, rate = ask_hours()
    findrate(hours, rate)
    getpaid(pay)
    print "Pay rate", rate
    if hours >= 40:
        print "Regular hours", hours
        print "Overtime hours", ot
    elif hours < 40:
        print "Regular hours", hours2
        print "Overtime hours", 0
    if hours >= 40:
        print "Regular pay", (40 * rate)
    elif hours < 40:
        print "Regular pay", (hours * rate)
    if hours >= 40:
        print "Overtime Pay", ot3
    elif hours < 40:
        print "Overtime Pay", 0
    if hours >= 40:
        print "Total Pay", (pay + ot3)
    elif hours < 40:
        print "Total Pay", (pay + 0)

main()
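For reference, here is one way the three-function layout can be made to work. This is a sketch, not the original poster's code: it is modernised to Python 3 (input()/print() instead of raw_input and print statements), each function passes its results back to main through return values instead of relying on variables defined in other functions, and findrate here takes only hours (the rate is not needed until the pay calculation). main() is defined but not called, so the sketch stays non-interactive.

```python
def ask_hours():
    """Function 1: ask for hours worked and pay rate; return both to main."""
    hours = float(input("How many hours did you work? "))
    rate = float(input("What is your rate of pay? "))
    return hours, rate


def findrate(hours):
    """Function 2: split total hours into regular and overtime hours."""
    if hours > 40:
        return 40.0, hours - 40.0
    return hours, 0.0


def getpaid(regular, overtime, rate):
    """Function 3: return regular pay, overtime pay (1.5x rate), and total pay."""
    regular_pay = regular * rate
    overtime_pay = overtime * rate * 1.5
    return regular_pay, overtime_pay, regular_pay + overtime_pay


def main():
    hours, rate = ask_hours()
    regular, overtime = findrate(hours)       # capture the returned values
    regular_pay, overtime_pay, total = getpaid(regular, overtime, rate)
    print("Pay rate", rate)
    print("Regular hours", regular)
    print("Overtime hours", overtime)
    print("Regular pay", regular_pay)
    print("Overtime pay", overtime_pay)
    print("Total pay", total)

# main()  # uncomment to run interactively
```

The key change is that main captures each function's return value and feeds it to the next call, rather than expecting hours1, ot, pay, and so on to exist as shared variables.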
Debugging JavaScript has traditionally been non-trivial. This is partly to do with the evented, asynchronous paradigms inherent in the programming model, and partly to do with tooling (and the difficulties in creating tooling that is well matched with JavaScript's programming model). In recent years, however, as JavaScript usage has increased exponentially in both browser and server-side development, tooling has improved and continues to improve. In this chapter, we talk about how to use fundamental debugging tools, introduce some additional useful introspection resources, and delve deeper into advanced production debugging tools and techniques, such as async tracing and postmortems.

Node 6.3.0 onwards provides us with the --inspect flag, which we use to debug the runtime with Google Chrome's Devtools.

Note: Debugging legacy Node
This recipe can be followed with older versions of Node prior to Node 6.3.0; it just requires a little more set up. To follow this recipe with a legacy version of Node, jump to Using node-inspector with older Node versions in the There's more... section of this recipe first.

In this recipe, we're going to diagnose and solve a problem in a simple Express application. We're going to debug a small web server, so let's create that real quick.
On the command line, we execute the following commands:

```
$ mkdir app
$ cd app
$ npm init -y
$ npm install --save express
$ touch index.js future.js past.js
```

Our index.js file should contain the following:

```js
const express = require('express')
const app = express()
const past = require('./past')
const future = require('./future')

app.get('/:age', (req, res) => {
  res.send(past(req.params.age, 10) + future(req.params.future, 10))
})

app.listen(3000)
```

Our past.js file should look like this:

```js
module.exports = (age, gap) => {
  return `${gap} years ago you were ${Number(age) - gap}<br>`
}
```

And our future.js file should be as follows:

```js
module.exports = (age, gap) => {
  return `In ${gap} years you will be ${Number(age) + gap}<br>`
}
```

When we run our server (which we created in the Getting ready section), and navigate our browser to , the output is as follows:

10 years ago you were 21
In 10 years you will be NaN

It looks like we have a Not a Number problem. Let's start our server in inspection mode:

```
$ node --inspect index.js
```

This will output a message that the debugger is listening. We can connect to the debugger using the Chrome browser. Let's open Chrome and navigate to chrome://inspect. Ensuring that we're in the Devices tab, we should be able to see our Node process underneath the Remote Target section, as in the following screenshot:

We should then see something like the following:

Note: The module wrapper
Notice that the Devtools Code section shows an additional outer function wrapping the code we placed into index.js. This outer function is added at runtime to each code file loaded into the process (either by directly starting the file with node or by using require to load it). This outer function is the module wrapper; it's the mechanism Node uses to supply local references like module and __filename that are unique to our module, without polluting global scope.

Now let's set a breakpoint inside the route handler, on line 7.
If we click the number 7 in the LOC column to the left of the code, an arrow shape will appear over and around the number (which will turn white). Over in the right-hand column, in the Breakpoints pane, we should also see a checkbox with index.js:7 next to it, while beneath that is the code from the line we've set a breakpoint on. In short, the Devtools GUI should now look something like the following:

Now let's open a new tab and navigate to:

This will cause the breakpoint to trigger, and Devtools will grab focus. The next thing we see should look like the following:

We can see line 7 is now highlighted, and there's a sort of tooltip showing us the values of the req and res objects on the line above. Over in the right column, the Call Stack panel is full of call frames (the functions in the stack), and there's now a blue play button in the control icons at the top of the right column. If we were to scroll the right column, we'd also see the Scope pane is populated with references. The debugger is waiting for us to allow execution to proceed, and we can choose whether to step over, in, or out of the next instruction.

Let's try stepping in. This is the down arrow pointing to a dot, the third icon from the left in the controls section:

When we press this, we step into the past function, which is in the past.js file, so Devtools will open a new tab in the center code panel and highlight the line that is about to execute (in our case, line 2):

So let's step out of the past function by pressing the arrow pointing up and away from a dot, next to the step-in icon:

The second line of the output seems to have the issue, which is our future function. Now that we've stepped out, we can see that the call to future is highlighted in a darker shade of blue:

Now let's press the step-in icon again, which will take us into the future function in the future.js file:

Okay, this is the function that generates that particular sentence with the NaN in it.
A NaN can be generated for all sorts of reasons, such as dividing zero by itself, subtracting Infinity from Infinity, or coercing a string to a number when the string does not hold a valid number, to name a few. At any rate, it's probably something to do with the values in our future function.

Let's hover over the gap variable. We should see the following:

Seems fine. Now let's hover over the age variable:

Wait... why does that say undefined? We vicariously passed 31 by navigating to. To be sure our eyes aren't deceiving us, we can double-check by collapsing the Call Stack section (by clicking the small downwards arrow next to the C of Call Stack). This will make room for us to easily see the Scope section, which reports that the age variable is indeed undefined, as in the following screenshot:

Well, Number(undefined) is NaN, and NaN + 10 is also NaN. Why is age set to undefined? Let's open up the Call Stack bar again and click the second row from the top (which says app.get). We should be back in the index.js file again (but still frozen on line 2 of future.js), like so:

Now let's hover over the value we're passing in to future:

That's undefined too. Why is it undefined? Oh. That should be req.params.age, not req.params.future. Oops.

To be absolutely sure, let's fix it while the server is running. If we hit the blue play button once, we see something like this:

Now let's click line 7 again to remove the breakpoint. We should be seeing:

Now if we click immediately after the e in req.params.future, we should get a blinking cursor. We backspace out the word future and type the word age, making our code look like this:

Finally, we can live-save those changes in our running server by pressing Cmd + S on macOS, or Ctrl + S on Windows and Linux. Finally, let's check our route again:

OK, we've definitely found the problem, and verified a solution.
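To make the fix concrete, here is a minimal standalone sketch (not part of the recipe's files) of the corrected logic: both helpers must read the same req.params.age value. The req object below is a hand-built stand-in for Express's request object, and the `<br>` tags are dropped for clarity:

```javascript
// Stand-ins for past.js and future.js
const past = (age, gap) => `${gap} years ago you were ${Number(age) - gap}`
const future = (age, gap) => `In ${gap} years you will be ${Number(age) + gap}`

// A stand-in for the Express request object produced by GET /31
const req = { params: { age: '31' } }

// The corrected handler passes req.params.age to BOTH functions
console.log(past(req.params.age, 10))   // 10 years ago you were 21
console.log(future(req.params.age, 10)) // In 10 years you will be 41
```

With the original bug, the second call received req.params.future (undefined), and Number(undefined) is NaN.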
We don't really need to know how debugging Node with Devtools is made possible in order to avail ourselves of the tool; however, for the curious, here's a high-level overview.

Debugging ability is ultimately provided by V8, the JavaScript engine used by Node. When we run node with the --inspect flag, the V8 inspector opens a port that accepts WebSocket connections. Once a connection is established, commands in the form of JSON packets are sent back and forth between the inspector and a client.

The chrome-devtools:// URI is a special protocol recognized by the Chrome browser that loads the Devtools UI (which is written in HTML, CSS, and JavaScript, so can be loaded directly into a normal browser tab). The Devtools UI is loaded in a special mode (remote mode), where a WebSocket endpoint is supplied via the URL. The WebSocket connection allows for bi-directional communication between the inspector and the client. The tiny inspector WebSocket server is written entirely in C and runs on a separate thread, so that when the process is paused, the inspector can continue to receive and send commands.

In order to achieve the level of control we're afforded in debug mode (the ability to pause, step, inspect state, view the call stack, and live edit), V8 operations are instrumented throughout with Inspector C++ functions that can control the flow and change state in place. For instance, if we've set a breakpoint, once that line is encountered, a condition will match in the C++ level that triggers a function that pauses the event loop (the JavaScript thread). The Inspector then sends a message to the client over the WebSocket connection telling it that the process is paused on a particular line, and the client updates its state. Likewise, if the user chooses to step into a function, this command is sent to the Inspector, which can briefly unpause and repause execution in the appropriate place, then sends a message back with the new position and state.
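As a rough illustration of those JSON packets, here is a sketch, not captured from a real session, though Debugger.enable and Debugger.paused are genuine Chrome Devtools Protocol method names; treat the exact payload shapes as illustrative:

```javascript
// A client command: `id` lets the client match the eventual response
const request = JSON.stringify({ id: 1, method: 'Debugger.enable' })

// The kind of event the inspector pushes when a breakpoint pauses the process
const pausedEvent = JSON.stringify({
  method: 'Debugger.paused',
  params: { reason: 'other', callFrames: [] }
})

// Each side parses incoming packets and dispatches on `method` (or `id`)
console.log(JSON.parse(request).method)            // Debugger.enable
console.log(JSON.parse(pausedEvent).params.reason) // other
```

In a live session, these strings travel over the WebSocket connection between the Devtools client and the inspector thread.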
Let's find out how to debug older versions of Node, make a process start with a paused runtime, and learn to use the built-in command-line debugging interface.

The --inspect flag and protocol were introduced in Node 6.3.0, primarily because the V8 engine had changed the debugging protocol. In Node 6.2.0 and below, there's a legacy debugging protocol enabled with the --debug flag, but this isn't compatible with the native Chrome Devtools UI. Instead, we can use the node-inspector tool as a client for the legacy protocol. The node-inspector tool essentially wraps an older version of Devtools that interfaces with the legacy debug API, and then hosts it locally.

Let's install node-inspector:

```
$ npm i -g node-inspector
```

This will add a global executable called node-debug, which we can use as shorthand to start our process in debug mode. We can run our process like so:

```
$ node-debug index.js
```

We should see output that's something like the following:

```
Node Inspector v0.12.10
Visit  to start debugging.
Debugging `index.js`
Debugger listening on [::]:5858
```

When we load the URL in our browser, we'll again see the familiar Devtools interface. By default, the node-debug command starts our process in a paused state. After pressing run (the blue play button), we should now be able to follow the main recipe in its entirety using a legacy version of Node.

In many cases, we want to debug a process from initialization, or we want to set up breakpoints before anything can happen. From Node 8 onwards, we use the following to start Node in an immediately paused state:

```
$ node --inspect-brk index.js
```

In Node 6 (at the time of writing, 6.10.0), --inspect is supported but --inspect-brk isn't.
Instead, we can use the legacy --debug-brk flag in conjunction with --inspect, like so:

```
$ node --debug-brk --inspect index.js
```

In Node v4 and lower, we'd simply use --debug-brk instead of --debug (in conjunction with another client; see Using Node Inspector with older Node versions).

There may be rare occasions when we don't have easy access to a GUI. In these scenarios, command-line abilities become paramount. Let's take a look at Node's built-in command-line debugging interface. Let's run our app from the main recipe like so:

```
$ node debug index.js
```

When we enter debug mode, we see the first three lines of our entry point (index.js). Upon entering debug mode, the process is paused on the first line of the entry point. By default, when a breakpoint occurs, the debugger shows two lines before and after the current line of code; since this is the first line, we only see two lines after. The debug mode provides several commands in the form of functions, or sometimes as magic getter/setters (we can view these commands by typing help and hitting Enter).

Let's get a little context using the list function:

```
debug> list(10)
```

This provides 10 lines after our current line (again, it would also include 10 lines before, but we're on the first line, so there are no prior lines to show). We're interested in the seventh line, because this is the code that's executed when the server receives a request. We can use the sb function (which stands for Set Breakpoint) to set a breakpoint on line 7, like so:

```
debug> sb(7)
```

Now if we use list(10) again, we should see an asterisk (*) adjacent to line 7:

```
debug> list(10)
```

Since our app began in paused mode, we need to tell the process to begin running as normal so we can send a request to it.
We use the c command to tell the process to continue, like so:

```
debug> c
```

Now let's make a request to our server. We could use a browser to do this, or if we have curl on our system, in another terminal we could run the following:

```
$ curl 
```

This will cause the process to hit our breakpoint, and the debugger console should print out break in index.js:7, along with the line our code is currently paused on, with two lines of context before and after. We can see a right caret (>) indicating the current line:

Now let's step in to the first function. To step in, we use the step command:

```
debug> step
```

This enters our past.js file, with the current break on line 2. We can print out references in the current debug scope using the exec command. Let's print out the values of the gap and age arguments:

```
debug> exec gap
debug> exec age
```

Everything seems to be in order here. Now let's step back out of the past function. We use the out command to do this, like so:

```
debug> out
```

We should now see that the future function is a different color, indicating that this is the next function to be called. Let's step into the future function:

```
debug> step
```

Now we're in our future.js file; again, we can print out the gap and age arguments using exec:

```
debug> exec gap
debug> exec age
```

Aha, we can see that age is undefined. Let's step back up into the router function using the out command:

```
debug> out
```

Let's inspect req.params.future and req.params:

```
debug> req.params.future
debug> req.params
```

It's now (again) obvious where the mistake lies. There is no req.params.future; that input should be req.params.age.

- Creating an Express web app, in Chapter 7, Working with Web Frameworks
- Writing module code, in Chapter 2, Writing Modules
- Profiling memory, in Chapter 9, Optimizing Performance
- CPU profiling with Chrome Devtools, in the There's more...
section of Finding Bottlenecks with Flamegraphs, in Chapter 9, Optimizing Performance

When a Node process experiences an error, the function where the error occurred, and the function that called that function (and so on), is written to STDERR as the final output of the application. This is called a stack trace. By default, Node's JavaScript engine (V8) retains a total of 10 frames (references to functions in a stack). However, in many cases we need far more than 10 frames to understand the context from a stack trace when performing root-cause analysis on a faulty process. On the other hand, the larger the stack trace, the more memory and CPU a process has to use to keep track of the stack. In this recipe, we're going to increase the size of the stack trace, but only in a development environment.

Let's prepare for the recipe by making a small application that causes an error, creating a long stack trace. We'll create a folder called app, initialize it as a package, install express, and create three files, index.js, routes.js, and content.js:

```
$ mkdir app
$ cd app
$ npm init -y
$ npm install express
$ touch index.js routes.js content.js
```

Our index.js file should look like this:

```js
const express = require('express')
const routes = require('./routes')
const app = express()

app.use(routes)

app.listen(3000)
```

The routes.js file should look like the following:

```js
const content = require('./content')
const {Router} = require('express')
const router = new Router()

router.get('/', (req, res) => {
  res.send(content())
})

module.exports = router
```

And the content.js file like so:

```js
function content (opts, c = 20) {
  return --c ? content(opts, c) : opts.ohoh
}

module.exports = content
```

Let's begin by starting our server:

```
$ node index.js
```

All good so far. Okay, let's send a request to the server. We can navigate a browser to or we can use curl (if installed) like so:

```
$ curl 
```

That should spit out some error HTML output containing a stack trace.
Even though an error has been thrown, the process hasn't crashed, because Express catches errors in routes to keep the server alive. The terminal window that's running our server will also have a stack trace:

We can see (in this case) that the content function is calling itself recursively (but not too many times, otherwise there would be a Maximum call stack size exceeded error). The content function looks like this:

```js
function content (opts, c = 20) {
  return --c ? content(opts, c) : opts.ohoh
}
```

The error message is Cannot read property 'ohoh' of undefined. It should be fairly clear that, for whatever reason, the opts argument is being input as undefined by a function calling the content function. But because our stack is limited to 10 frames, we can't see what originally called the first iteration of the content function.

One way to address this is to use the --stack-trace-limit flag. We can see that c defaults to 20, so if we set the limit to 21, maybe we'll see what originally called the content function:

```
$ node --stack-trace-limit=21 index.js
```

This should result in something like the following screenshot:

Now we can see that the original call is made from router.get in the routes.js file, line 6, column 12. Line 6 is as follows:

```js
res.send(content())
```

Ah... it looks like we're calling content without any inputs; of course, that means the arguments default to undefined.

The --stack-trace-limit flag instructs the V8 JavaScript engine to retain more stacks on each tick (each time around) of the event loop. When an error occurs, a stack trace is generated that traces back through the preceding function calls as far as the defined limit allows.

Can we set the stack limit in process? What if we want a different stack trace limit in production versus development environments? Can we track and trace asynchronous function calls? Is it possible to have nicer-looking stack traces?
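The effect of the limit is easy to demonstrate in isolation. The following standalone sketch (not part of the recipe's app) uses the in-process equivalent of the flag, Error.stackTraceLimit, and counts how many frames a thrown error actually retains:

```javascript
// A deliberately deep call chain
function recurse (n) {
  if (n === 0) throw new Error('boom')
  recurse(n - 1)
}

// Capture an error under a given frame limit and count the retained frames
function framesCaptured (limit) {
  Error.stackTraceLimit = limit
  try {
    recurse(30)
  } catch (err) {
    // The first line of err.stack is the message; each remaining line is a frame
    return err.stack.split('\n').length - 1
  }
}

console.log(framesCaptured(5))  // 5
console.log(framesCaptured(25)) // 25
```

Even though the call chain is over 30 frames deep, the error only keeps as many frames as the limit in force when it was constructed.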
A lot of the time in development we want as much context as we can get, but we don't want to have to type out a long flag every time we run a process. In production, however, we want to save precious resources.

Let's copy the app folder to infinite-stack-in-dev-app:

```
$ cp -fr app infinite-stack-in-dev-app
```

Now at the very top of index.js, we simply write the following:

```js
if (process.env.NODE_ENV !== 'production') {
  Error.stackTraceLimit = Infinity
}
```

Now if we run our server:

```
$ node index.js
```

Then make a request with curl (or, optionally, some other method, such as a browser):

```
$ curl 
```

Our stack trace will be limitless.

The default stack trace could definitely be more human-friendly. Enter cute-stack, a tool for creating prettified stack traces. Let's copy our app folder to pretty-stack-app and install cute-stack:

```
$ cp -fr app pretty-stack-app
$ cd app
$ npm install --save cute-stack
```

Now let's place the following at the very top of the index.js file:

```js
require('cute-stack')()
```

Now let's run our process with a larger stack trace limit (as in the main recipe):

```
$ node --stack-trace-limit=21 index.js
```

Make a request, either with a browser, or if installed, curl:

```
$ curl 
```

As a result, we should see a beautified stack trace, similar to the following screenshot:

Note: Alternative layouts
cute-stack has additional layouts, such as table, tree, and JSON, as well as a plugin system for creating your own layouts; see the cute-stack readme for more.

The cute-stack tool takes advantage of a proprietary V8 API, Error.prepareStackTrace, which can be a function that receives error and stack inputs. This function can then process the stack and return a string that becomes the stack trace output.

Note: Error.prepareStackTrace
See for more on Error.prepareStackTrace.

The nature of JavaScript affects the way a stack trace works. In JavaScript, each tick (each time the JavaScript event-loop iterates) has a new stack.
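The following standalone sketch (not from the recipe's files) shows that fresh-stack behaviour directly: a function that schedules a timeout never appears in a stack trace captured inside the timeout's callback, because the scheduling call finished on a previous tick:

```javascript
// Schedule a callback and report whether its stack mentions the scheduler
function scheduler () {
  return new Promise((resolve) => {
    setTimeout(function timed () {
      try {
        throw new Error('async boom')
      } catch (err) {
        // The stack starts at `timed`; `scheduler` ran on a previous tick
        resolve(err.stack.includes('scheduler'))
      }
    }, 10)
  })
}

scheduler().then((mentionsScheduler) => {
  console.log(mentionsScheduler) // false
})
```

This is exactly the gap that tools like longjohn (covered next) fill, by stitching the stacks of previous ticks onto the trace.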
Let's copy our app folder to async-stack-app:

```
$ cp -fr app async-stack-app
```

Now let's alter content.js like so:

```js
function content (opts, c = 20) {
  function produce (cb) {
    if (--c) setTimeout(produce, 10, cb)
    cb(null, opts.ohoh)
  }
  return produce
}

module.exports = content
```

Then let's alter routes.js in the following way:

```js
const content = require('./content')
const {Router} = require('express')
const router = new Router()

router.get('/', (req, res) => {
  content()((err, html) => {
    if (err) {
      res.send(500)
      return
    }
    res.send(html)
  })
})

module.exports = router
```

Now we start our server:

```
$ node index.js
```

And make a request:

```
$ curl 
```

We'll see only a small stack trace, descending from timeout-specific internal code, as in the following screenshot:

We can obtain asynchronous stack traces with the longjohn module. Let's install it as a development dependency:

```
$ npm install --save-dev longjohn
```

Now we can add the following to the very top of the index.js file:

```js
if (process.env.NODE_ENV !== 'production') {
  require('longjohn')
}
```

Let's run our server again:

```
$ node index.js
```

And make a request:

```
$ curl 
```

Now we should see the original stack, followed by a line of dashes, followed by the call stack of the previous tick.

More than 13,450 modules directly depend on the third-party debug module (at the time of writing). Many other modules indirectly use the debug module through those 13,450. Some highly notable libraries, such as Express, Koa, and Socket.io, also use the debug module. In many code bases, there's a wealth of often untapped tracing and debugging logs that we can use to infer and understand how our application is behaving. In this recipe, we'll discover how to enable and effectively analyze these log messages. Let's create a small Express app which we'll be debugging.
On the command line, we execute the following commands:

```
$ mkdir app
$ cd app
$ npm init -y
$ npm install --save express
$ touch index.js
```

Our index.js file should contain the following:

```js
const express = require('express')
const app = express()
const stylus = require('stylus')

app.get('/some.css', (req, res) => {
  const css = stylus(`
    body
      color:black
  `).render()
  res.send(css)
})

app.listen(3000)
```

Let's turn on all logging:

```
$ DEBUG=* node index.js
```

As soon as we start the server, we see some debug output that should be something like the following screenshot:

The first message is as follows:

```
express:application set "x-powered-by" to true +0ms
```

Let's make a mental note to add app.disable('x-powered-by'), since it's much better for security to not publicly announce the software a server is using. This debug log line has helped us to understand how our chosen framework actually behaves, and allows us to mitigate any undesired behaviour in an informed manner.

Now let's make a request to the server. If we have curl installed, we can do the following:

```
$ curl 
```

(Or otherwise, we can simply use a browser to access the same route.)

This results in more debug output, mostly a very large amount of stylus debug logs:

While it's interesting to see the Stylus parser at work, it's a little overwhelming. Let's try looking only at express log output:

```
$ DEBUG=express:* node index.js
```

And we'll make a request again (we can use curl or a browser as appropriate):

```
$ curl 
```

This time our log filtering enabled us to easily see the debug messages for an incoming request.

In our recipe, we initially set DEBUG to *, which means enable all logs. Then we wanted to zoom in explicitly on express-related log messages, so we set DEBUG to express:*, which means enable all logs that begin with express:. By convention, modules and frameworks delimit sub-namespaces with a : (colon).
At an internal level, the debug module reads the process.env.DEBUG variable, splits the string by whitespace or commas, and then converts each item into a regular expression. When a module uses the debug module, it will require debug and call it with a namespace representing that module, to create a logging function that it then uses to output messages when debug logs are enabled for that namespace.

Note: Using the debug module
For more on using the debug module in our own code, see Instrumenting code with debug in the There's more... section.

Each time a module registers itself with the debug module, the list of regular expressions (as generated from the DEBUG environment variable) is tested against the namespace provided by the registering module. If there's no match, the resulting logger function is a no-op (that is, an empty function), so the cost of the logs in production is minimal. If there is a match, the returned logging function will accept input, decorate it with ANSI codes (for terminal coloring), and create a time stamp on each call to the logger.

Let's find out how to use debug in our own code, and some practices around enabling debug logs in production scenarios. We can use the debug module in our own code to create logs that relate to the context of our application or module. Let's copy our app folder from the main recipe, call it instrumented-app, and install the debug module:

```
$ cp -fr app instrumented-app
$ cd instrumented-app
$ npm install --save debug
```

Next, we'll make index.js look like so:

```js
const express = require('express')
const app = express()
const stylus = require('stylus')
const debug = require('debug')('my-app')

app.get('/some.css', (req, res) => {
  debug('css requested')
  const css = stylus(`
    body
      color:black
  `).render()
  res.send(css)
})

app.listen(3000)
```

We've required debug, created a logging function (called debug) with the my-app namespace, and then used it in our route handler.
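That namespace matching can be sketched in a few lines. This is a simplified model of debug's internals, not the real implementation: it handles the * wildcard and comma/whitespace splitting, but ignores the - exclusion syntax the real module also supports:

```javascript
// Build a matcher from a DEBUG-style spec such as 'express:*,my-app'
function createMatcher (spec) {
  const patterns = spec
    .split(/[\s,]+/)          // split on whitespace or commas
    .filter(Boolean)
    .map((p) => new RegExp('^' + p.split('*').join('.*?') + '$'))
  return (namespace) => patterns.some((re) => re.test(namespace))
}

const enabled = createMatcher('express:*,my-app')
console.log(enabled('express:router')) // true
console.log(enabled('my-app'))         // true
console.log(enabled('stylus'))         // false
```

When a namespace fails the test, debug hands back a no-op logger, which is why disabled namespaces cost almost nothing at runtime.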
Now let's start our app and just turn on logs for the my-app namespace:

$ DEBUG=my-app node index.js

Now let's make a request, either in the browser or with curl:

$ curl

This should create the following log message:

my-app css requested +0ms

The default debug logs are not suited to production logging. The output is human-readable rather than machine-readable, and it uses colors that are enabled with terminal ANSI codes (which will essentially pollute the output when saved to a file or database). In production, if we want to turn on debug logs, we can produce more standard logging output with the following:

$ DEBUG_COLORS=no DEBUG=* node index.js

The pino-debug module passes debug messages through pino so that output is in newline-delimited JSON (a common logging format which offers a good compromise between machine and human readability).

Note: About pino. pino is a high-performance logger that's up to 8-9 times faster than other popular loggers (see the project's benchmarks and documentation for more information).

Due to the performant techniques used by pino, using pino-debug leads to a performance increase in log writing (and therefore leaves more room for other in-process activities, such as serving requests) even though there's more output per log message! Let's copy our app folder to logging-app and install pino-debug:

$ cp -fr app logging-app
$ cd logging-app
$ npm install --save pino-debug

We'll add two npm scripts, one for development and one for production.
Let's edit package.json like so:

{
  "name": "app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "dev": "node index.js",
    "prod": "node -r pino-debug index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.15.0",
    "pino-debug": "^1.0.3",
    "stylus": "^0.54.5"
  }
}

Now we run the following:

$ DEBUG=* npm run --silent prod

We should see the express logs in JSON form, where the msg field contains the log contents and the ns field contains the relevant debug namespace. Additionally, pino adds a few other useful fields, such as time, pid, hostname, level (the log level defaults to 20, which is debug level), and v (the log format version).

Note: Debug namespace to log level mapping. See the pino-debug readme for mapping namespaces to custom log levels.

It can be highly useful to see what's going on in Node's core. There's a very easy way to get this information. In this recipe, we're going to use a special environment variable to enable various debugging flags that cause Node core debug logging mechanisms to print to STDOUT. We're going to debug a small web server, so let's create that real quick. On the command line, we execute the following commands:

$ mkdir app
$ cd app
$ npm init -y
$ npm install --save express
$ touch index.js

Our index.js file should contain the following:

const express = require('express')
const app = express()

app.get('/', (req, res) => res.send('hey'))

setTimeout(function myTimeout() {
  console.log('I waited for you.')
}, 100)

app.listen(3000)

We simply have to set the NODE_DEBUG environment variable to one or more of the supported flags.
Let's start with the timer flag by running our app like so:

$ NODE_DEBUG=timer node index.js

This should show something like the following screenshot: Core timer debug output

Let's try running the process again with both timer and http flags enabled:

$ NODE_DEBUG=timer,http node index.js

Now we need to trigger some HTTP operations to get any meaningful output, so let's send a request to the HTTP server using curl (or an alternative method, such as navigating to the server's address in the browser):

$ curl

This should give output similar to the following screenshot:

The NODE_DEBUG environment variable can be set to any combination of the following flags: http, net, tls, stream, module, timer, cluster, child_process, fs.

Note: The fs debug flag. The quality of output varies for each flag. At the time of writing, the fs flag in particular doesn't actually supply any debug log output, but when enabled it will cause a useful stack trace to be added to any unhandled error events for asynchronous I/O calls.

In our recipe, we were able to enable core timer and HTTP debug logs by setting the NODE_DEBUG environment variable to timer in the first case and then timer,http in the second. We used a comma to delimit the debug flags; however, the delimiter can be any character. Each line of output consists of the namespace, the process ID (PID), and the log message. When we set NODE_DEBUG to timer, the first log indicates that it's creating a list for 100. Our code passes 100 as the second argument to setTimeout; internally, the first argument (the timeout callback) is added to a queue of callbacks that should run after 100 ms. Next, we see a message, timeout callback 100, which means the 100 ms timeout callback will now be called. The following message (the now message) indicates the current time as the internal timers module sees it, in milliseconds since the timers module was initialized.
The now message can be useful to see the time drift between timeouts and intervals, because a timeout of 10 ms will rarely (if ever) be exactly 10 ms. It will be more like 14 ms, because of 4 ms of execution time for other code in a given tick (time around the event loop). While a 4 ms drift is acceptable, a 20 ms drift would indicate potential performance problems; a simple NODE_DEBUG=timer prefix could be used for a quick check. The final debug message shows that the 100 list is now empty, meaning all callback functions set for that particular interval have now been called.

Most of the HTTP output is self-explanatory: we can see when a new connection has been made to the server, when a message has ended, and when a socket has closed. The remaining two cryptic messages are write ret = true and SERVER socketOnParserExecute 78. The write ret = true message relates to when the server attempted to write to a socket. If the value was false, it would mean the socket had closed and (again internally) the server would begin to handle that scenario. As for the socketOnParserExecute message, this has to do with Node's internal HTTP parser (written in C++). The number (78) is the string length of the headers sent from the client to the server.

Combining multiple flags can be useful. We set NODE_DEBUG to timer,http and we were able to see how the http module interacts with the internal timer module. We can see, after the SERVER new http connection message, that two timers are set (based on the timeout lists being created): one for 120,000 ms (two minutes, the default socket timeout) and one (in the example case) for 819 ms. This second interval (819) has to do with an internal caching mechanism for the HTTP Date header. Since the smallest unit in the Date header is seconds, a timeout is set for the amount of milliseconds remaining before the next second, and the Date header is provided the same string for the remainder of that second.
Let's look at the way Node Core triggers the debug log messages, and see if we can use this knowledge to gain a greater understanding of Node's internal workings. Core modules tend to use the util module's debuglog method to generate a logging function that defaults to a no-op (an empty function) but writes log messages to STDOUT when the relevant flag appears in the NODE_DEBUG environment variable. We can use util.debuglog to create our own core-like log messages. Let's take the app folder we created in the main recipe and copy it to instrumented-app:

$ cp -fr app instrumented-app

Now let's make index.js look like this:

const util = require('util')
const express = require('express')
const debug = util.debuglog('my-app')
const app = express()

app.get('/', (req, res) => {
  debug('incoming request on /', req.route)
  res.send('hey')
})

setTimeout(function myTimeout() {
  debug('timeout complete')
  console.log('I waited for you.')
}, 100)

app.listen(3000)

Now we can turn on our custom debug logs like so:

$ NODE_DEBUG=my-app node index.js

If we make a request to the server, the output of our process should look something like this:

MY-APP 30843: timeout complete
I waited for you.
MY-APP 30843: incoming request on / Route {
  path: '/',
  stack:
   [ Layer {
       handle: [Function],
       name: '<anonymous>',
       params: undefined,
       path: undefined,
       keys: [],
       regexp: /^\/?$/i,
       method: 'get' } ],
  methods: { get: true } }

Note: Prefer the debug module. In many cases, using the third-party debug module instead of util.debuglog is preferable. The debug module supports wildcards, and the output is time stamped and color-coded, while the production cost of using it is negligible. See the Enabling debug logs recipe in this chapter for more.

The core libraries that come bundled with the Node binary are written in JavaScript, which means we can debug them the same way we debug our own code. This level of introspection means we can understand internal mechanics to a fine level of detail.
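The gating behaviour described above can be sketched like this (an illustration of the idea, not Node's actual util.js code; the function name is invented):

```javascript
// Sketch of util.debuglog-style gating: the returned logger is a no-op
// unless the namespace appears in the NODE_DEBUG environment variable.
// Like core's logger, enabled output is prefixed with the upper-cased
// namespace and the process ID, and written to a standard stream.
function debuglogSketch(namespace, nodeDebugEnv) {
  const enabled = (nodeDebugEnv || '')
    .toLowerCase()
    .split(/[\s,]+/)
    .includes(namespace.toLowerCase())
  if (!enabled) return () => {}  // no-op: near-zero cost when disabled
  const prefix = namespace.toUpperCase() + ' ' + process.pid + ': '
  return msg => process.stderr.write(prefix + msg + '\n')
}

const debug = debuglogSketch('my-app', process.env.NODE_DEBUG)
debug('timeout complete')  // prints only when NODE_DEBUG includes my-app
```

Because disabled namespaces return an empty function, instrumentation left in the code costs almost nothing in production.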
Let's use Devtools to pick apart how util.debuglog works.

Note: Devtools. To understand how to use Devtools, see the first recipe in this chapter, Debugging Node with Chrome Devtools.

We'll run the code we prepared in the Getting ready section like so (Node 8+):

$ NODE_DEBUG=timer node --inspect-brk index.js

Or if we're using Node 6.3.0+, use the following:

$ NODE_DEBUG=timer node --debug-brk --inspect index.js

Now if we navigate to chrome://inspect and click the inspect link, this will open Devtools for our Node process. We should then see something like the following:

Now in the left-hand pane (the Navigation pane), we should see two drop-down trees: (no domain) and file://. The (no domain) files are files that came compiled into Node. Let's click the small right-facing triangle next to (no domain) to expand the list. Then locate the util.js file and double-click to open. At this point, we should see something like the following:

Next, we want to find the debuglog function. An easy way to do this is to press Cmd + F on macOS or Ctrl + F on Linux and Windows to bring up the small find dialog, then type debuglog. This should highlight the exported debuglog method:

If we read the exported function, we should be able to ascertain that given the right conditions (for example, if the flag is set in NODE_DEBUG), a function is created and associated to a namespace. Different Node versions could have differences in their util.js. In our case, the first line of this generated function is line 157, so we set a breakpoint on line 157 (or wherever the first line of the generated function may be):

Now if we press run, our breakpoint should be triggered almost immediately. Let's hover over the arguments object referenced in the generated function:

We should see that the second argument passed to the generated debug function is 100; this relates to the millisecond parameter we pass to setTimeout in our index.js and is part of the first debug message (no 100 list was found...).
Now let's hit the blue play button four more times until it changes to a pause button and the top-right corner shows an error count of 4, as shown in the following screenshot:

Devtools perceives each log message as an error because the debug messages are written to STDERR. This is why the error count in the top-right corner is 4. Now let's open a new browser tab and navigate to the server. Devtools should have paused again at our breakpoint. If we hover over the arguments object in the generated function, we should see that the second argument is 120000. This relates to the default two-minute timeout on sockets (as discussed in the main recipe):

If we hit the play button again and inspect the arguments object, we should see the second argument is a number that's less than 1000:

Over on the right-hand side, in the Call Stack panel, there's a frame called utcDate. Let's select that frame to view the function:

This function is in a library called _http_outgoing.js that's only for internal core use. We can see that it's currently within an if block that checks whether dateCache is falsey. If dateCache is falsey, it creates a new Date object and assigns the output of toUTCString to dateCache. Then it uses timers.enroll. This is a way of creating a setTimeout where the provided object represents the timeout reference. It sets the time to 1000 minus the millisecond unit in the date object, which effectively measures how long is left of the current second. Then it calls timers._unrefActive, which activates the timer without allowing it to keep the event loop open (which means the queued timer operation won't keep the process alive). The utcDate._onTimeout method sets dateCache to undefined, so at the end of the timeout, dateCache is cleared. If we look down the Call Stack panel, we should be able to infer that the utcDate function is called when a request is made, and has to do with HTTP header generation (specifically the Date HTTP header).
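The timing arithmetic behind that cache can be sketched as follows (a simplified illustration of the mechanism just described, not the actual code in _http_outgoing.js):

```javascript
// The cached Date header is only valid for the remainder of the current
// second, so the invalidation timeout is 1000 minus the millisecond
// component of the current time.
function msLeftInSecond(date) {
  return 1000 - date.getMilliseconds()
}

// A toy version of the cache: reuse one UTC string per second, and unref
// the timer so it cannot keep the event loop (and the process) alive.
let dateCache
function utcDateSketch(now = new Date()) {
  if (!dateCache) {
    dateCache = now.toUTCString()
    setTimeout(() => { dateCache = undefined }, msLeftInSecond(now)).unref()
  }
  return dateCache
}
```

With a current-time millisecond component of 181, the computed timeout would be 819 ms, matching the 819 ms timer we observed in the example.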
The net effect is that a process may receive, say, 10,000 requests a second, and only the first of those 10,000 has to perform the relatively expensive Date generation, while the following 9,999 requests all use the cached date. And that's the sort of thing we can discover by debugging core.
https://www.packtpub.com/product/node-cookbook-third-edition/9781785880087
Sourceware Bugzilla – Bug 12799
fflush violates POSIX on seekable input streams
Last modified: 2012-12-19 10:46:45 UTC

Per POSIX, fflush is required to discard any ungetc() bytes and set the underlying file position to the point as though the pushed-back bytes had never been read. That is, in the sequence:

{ app1; app2; } < seekable

if app1 reads one byte too many, then does ungetc() to push it back, then app2 should start reading at the byte that app1 did not want, rather than at the point that app1 reached before using ungetc(). More concretely, this program should exit with status 0, but right now it is exiting with status 7 (glibc is reading 'w' instead of ' ' after the fflush):

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>

int main (void)
{
  FILE *f;
  char buffer[10];
  int fd;
  int c;
  f = fopen ("file", "w+");
  if (f == NULL)
    return 1;
  if (fputs ("hello world", f) == EOF)
    return 2;
  rewind (f);
  fd = fileno (f);
  if (fd < 0 || fread (buffer, 1, 5, f) != 5)
    return 3;
  c = fgetc (f);
  if (c != ' ')
    return 4;
  if (ungetc ('@', f) != '@')
    return 5;
  if (fflush (f) == EOF)
    return 6;
  if (fgetc (f) != c)
    return 7;
  return 0;
}

This program returns 0 on Solaris and Cygwin.
http://sourceware.org/bugzilla/show_bug.cgi?id=12799
Here’s a puzzle I ran across today: Start a knight at a corner square of an otherwise-empty chessboard. Move the knight at random by choosing uniformly from the legal knight-moves at each step. What is the mean number of moves until the knight returns to the starting square?

There’s a slick mathematical solution that I’ll give later. You could also find the answer via simulation: write a program to carry out a knight random walk and count how many steps it takes. Repeat this many times and average your counts.

Related post: A knight’s tour magic square

38 thoughts on “A knight’s random walk”

def move_knight(start_point, step_til_now)
  if start_point == [0, 0] and step_til_now != 0 then
    return step_til_now
  end
  next_points = [
    [2, 1], [-2, 1], [2, -1], [-2, -1],
    [1, 2], [1, -2], [-1, 2], [-1, -2]
  ].map{|x| [start_point[0] + x[0], start_point[1] + x[1]]}
   .select{|x| x[0] >= 0 and x[0] < 8 and x[1] >= 0 and x[1] < 8}
  return move_knight(next_points[Random.rand next_points.length], step_til_now + 1)
end

puts (1..10000).map{|x| move_knight([0, 0], 0)}.reduce(0){|s,v| s+v}/10000.0

Seems around 170 steps. This is just a stupid simulation. I will try a probability method later.

In my simulation, the average seemed to be about 42. Looking forward to the mathematical solution…

Hi, GlennF, can you provide your source? My simulation running 10 million walks averaged out at ~121. Maybe I have an error somewhere though.
Java:

import java.awt.Point;

public class KnightWalk {
    public static void main(String[] args) {
        Simulator sim = new Simulator();
        double average = sim.start(10000000);
        System.out.println("Average number of moves: " + average);
    }
}

class Simulator {
    private int moves = 0, returns = 0;
    private Point pos = new Point(0, 0);

    public double start(int lim) {
        while (returns < lim) {
            pos = makeMove(pos);
            moves++;
            if (pos.x == 0 && pos.y == 0) {
                returns++;
            }
        }
        return moves / (double) returns;
    }

    private Point makeMove(Point pos) {
        Point newPoint;
        do {
            newPoint = (Point) pos.clone();
            int move = (int) (Math.random() * 7);
            switch (move) {
                case 0: newPoint.translate(1, 2); break;
                case 1: newPoint.translate(2, 1); break;
                case 2: newPoint.translate(2, -1); break;
                case 3: newPoint.translate(1, -2); break;
                case 4: newPoint.translate(-1, -2); break;
                case 5: newPoint.translate(-2, -1); break;
                case 6: newPoint.translate(-2, 1); break;
                case 7: newPoint.translate(-1, 2); break;
            }
        } while (newPoint.x < 0 || newPoint.x > 7 || newPoint.y < 0 || newPoint.y > 7);
        return newPoint;
    }
}

Gah! I did have an error. ……. I only used 7 of the different moves… (Up 2 and left 1 was omitted) Apologies. New result: 167.97

//int move = (int) (Math.random() * 7);
int move = (int) (Math.random() * 8);

And 167.97 is remarkably close to the number of edges in the “Knight’s Graph“.

Let’s see. On the 16 central squares the knight has 8 moves, on the 16 squares adjacent to these it has 6 moves, on the 20 squares adjacent to these it has 4 moves, on the 8 squares on the edge adjacent to the corner it has 3 moves, and on each corner it has 2 moves. This is just a random walk on the graph where each square is a vertex and two vertices are adjacent if they are separated by a knight’s move. A random walk on this graph is a reversible Markov chain, as the transition matrix is symmetric, so the probability that the knight is in a given square is just 2/(total degree) = 2/336 = 1/168. The mean recurrence time is just the inverse of this, or 168 moves.
To deinst: My intuition tells me you are right. But your solution suggests that the result is independent of the start position. Am I right?

Priezt: I think he took that into account with the 2 in “2/(total degree)”. I.e. if you were in a centre block, which has 8 moves, then it would be 1/(8/336) for the total length, which is 42… Which I just verified is correct on another 10 million walks.

deinst: you just made that so simple..

To Levi: I tried to start from an 8-move block and end in a 6-move block. The result was around 68. How did this come out of the formula?

Pirezt: I just made a test case for that and my result was 2.8… To be honest, I think that when we make the path lead from one square to another square with a different probability, then we make the search asymmetric – which breaks the Markov model (or makes it more complex, I have no idea tbh). Hopefully deinst can help us.

Does the average exist?

The answer is 168. Simulation, a little smarter:

deinst: “the transition matrix is symmetric, so the probability that the knight is in a given square is just 2/(total degree) = 2/336 = 1/168. The mean recurrence time is just the inverse of this, or 168 moves.”

1) What is “degree”? Number of edges?
2) What is “probability that the knight is in a given square”? Should it not converge to 1/64.0, by number of squares?
3) Why is the formula right, that the expectation of path length is “number of all edges / number of input edges”? I did a quick look into Wikipedia, but can’t catch the idea.

@dobokrot 1) The degree of a vertex is the number of edges incident on it. 2) No! Think of it this way: you have lots of knights the size of water molecules simultaneously moving about the chessboard, randomly choosing the next place to go. When the system reaches its steady state (it will, this is not obvious) then the flow in one direction across one edge is the same as the flow in the other.
As the flow across an edge from a vertex is proportional to the number of knights on that vertex divided by the number of edges attached to that vertex, we see that for any two squares, the relative probability between two vertices is equivalent to the relative degree. 3) From the fact that the relative probabilities are proportional to the relative degrees, you see that a solution is that the probability is proportional to the degree (the fact that this solution is unique is “obvious” from the physical intuition with the flow, but tricky to prove.)

@Levi If you break the symmetry, things get much more complex. With only 64 vertices, you can solve it with linear algebra, but for anything much bigger, simulation is a better choice, and considerably easier.

I get an average of 9 moves, a median of 3.

You seem all to forget that the board size is limited (the chessboard is 8×8 fields big). My source:

Deinst’s explanation (2) is amazing.

I think if I were going to simulate this, I would construct a sequence of state vectors S_n, each of length 64, giving the probability of being on each square after n moves. With the elements of S_n decreasing exponentially (no ???), sum(n*S_n) should converge pretty quickly to the answer (n0???).

Yes, the Markov chain solution is amazing. That’s the slick solution I had in mind. As for simulation, the accuracy of your estimate after N simulations will be O(1/sqrt(N)).

@matthias your getnewy has a sign error in case 7

@Aakash Mehendale: Thank you so much! I modified it to use the Point class Levi also used, and also got to 168, and was already wondering where my error was.

Maybe I’m just crazy, but since this is a potentially infinite random set, is it even possible to reliably determine the mean? After all, there is a possibility that the knight gets caught in an infinite loop and never returns to the original corner.

@Christopher Allen-Poole the knight returns almost surely, which is good enough for our purposes.
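The degree argument in these comments is easy to check directly. As a sketch (JavaScript, not any commenter's code), we can build the knight-move graph, confirm the total degree is 336, and recover both the 168 (corner) and 42 (center) figures:

```javascript
// All eight knight moves.
const moves = [[1, 2], [2, 1], [2, -1], [1, -2], [-1, -2], [-2, -1], [-2, 1], [-1, 2]]

// Degree of a square = number of legal knight moves from it.
function degree(x, y) {
  return moves.filter(([dx, dy]) => {
    const nx = x + dx, ny = y + dy
    return nx >= 0 && nx < 8 && ny >= 0 && ny < 8
  }).length
}

let totalDegree = 0
for (let x = 0; x < 8; x++) {
  for (let y = 0; y < 8; y++) totalDegree += degree(x, y)
}

// Mean return time for a random walk on a graph: total degree / degree(start).
const meanFromCorner = totalDegree / degree(0, 0)  // 336 / 2 = 168
const meanFromCenter = totalDegree / degree(3, 3)  // 336 / 8 = 42
```

This also confirms the 2*E/K observation made later in the thread, since the total degree is twice the number of edges (2 × 168 = 336).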
Christopher: It’s conceivable that the knight could get caught in an infinite loop, but this has probability zero. If executing a loop has probability p, the probability of executing that same loop k times in a row is p^k, so as k grows, this probability goes to zero. For every N, there are paths of length N that do not return to the starting point. But as N grows, the probabilities of these paths decrease faster than their numbers increase, so the infinite sum defining the expected time converges.

Oops… sorry to cause folks distress earlier. I found the bug in my code… essentially I had accidentally restricted the possible knight’s moves to always be +-2 steps horizontally and +-1 step vertically. With the bug fixed, I also get a mean of 168. Convergence seems fast for O(1/sqrt(n)). I get 168 to 10 decimal points with 6925 iterations.

Priezt is right, the answer depends on the start position, and is a function of the degree of the vertex, specifically 336/degree. So if you start near the center of the board, the mean to return is 336/8 = 42 moves. Also, (bishop, rook, queen, king, knight) = (40, 64, 69.33333, 140, 168). Starting from the corner, the bishop gets back soonest on average.

How many ways could a knight get caught in an infinite loop? How many ways could a knight return to its starting position? Are the two values the same? Does an infinite sequence of moves exist avoiding loops?

I tried experimenting with connected graphs having various numbers of vertices and random connections. If the graph has E edges, and you start at a vertex with degree K, the mean return time is 2*E/K. Despite deinst’s efforts, it’s still not clear to me why.

Cool problem John. It looks like a classic example where generalizing the problem makes it easier to analyze and understand.

Followup statement: For every sequence of moves returning the knight to its starting position that you hand me, I can hand you one that differs by the last step and so does not return.
Christopher Allen-Poole is correct. The calculation of the mean is nonsense. The fact that simulation matched the Markov chain model has some of you fooled. Your programs avoid infinite loops and your random number generators try to as well; the Markov model does the same thing under the hood too. So there is nothing amazing here to me about 168. The problem is not stated clearly enough to get a rigorous answer and it is contrived in more ways than one. These are my thoughts anyhow. The “real-world” question is my favorite. I guess one could have faith that the knight returns more often than not; the likelihood of this to me is almost surely absurd :-).

But surely, Jeff, that argument would apply to the geometric distribution, which _does_ have a well-defined mean? I mean, if you count how many coin tosses it takes you to toss a head, you could say “what about tail, tail, tail…?” – but it’s ok because the probability of your “different last step” series gets vanishingly small and you end up with a geometric series that sums to two. I imagine I’ve missed a subtlety in your argument, though – how would you have phrased the question differently?

I think Jeff’s point is valid. However, I also think that we can avoid the problem mathematically by limiting our event space to sequences of moves that terminate, i.e. those that do return to the origin. We can calculate the probability of each such path, and the (countably infinite and absolutely convergent) sum of their probabilities is 1.0. Whether or not this models the “real world” is a question that goes beyond math.

@SteveBrooklineMA Limiting the event space makes the problem concise, but the Markov chain solution still lacks rigor. To help you better understand my point I suggest reading a more concrete and meaningful application of Markov chains, namely on the Kruskal Count card trick. I think Lagarias co-authored a paper on it.

I tried to limit board size to 3*3 and 4*4, and the formula still worked. Cool.
I really should read more about Markov chains.

@Colin Beveridge You’ll just have to reread my followup statement above. It’s so lucid that I actually am a bit proud of myself for saying it. A test to a mathematician is to explain it away in rigorous fashion … Good luck with that.

P.S. I know how to beat casino Roulette.
http://www.johndcook.com/blog/2012/05/08/a-knights-random-walk/
Nant include task - namespace matters

We’ve been trying to include some properties into our build file from a properties file today, but no matter what we tried the properties were not being set. We eventually realised that the build file has an XML namespace set on the project element.

<project name="..." xmlns="">

It turns out that if you want to include a properties file in your build file, like so:

<include buildfile="properties.xml" />

…you need to put the namespace on the project element of that file as well, otherwise its properties don’t get picked up. Our properties file therefore needs to look like this, assuming that we have a namespace set on the build file.

<project name="properties" xmlns="">
  <property name="foo" value="bar" />
</project>

What’s a bit confusing is that on the NAnt documentation page for the include task it says that project element attributes are ignored! That’s not what we found!

About the author
Mark Needham is a Developer Relations Engineer for Neo4j, the world's leading graph database.
https://markhneedham.com/blog/2009/02/03/nant-include-task-namespace-matters/
Apps can also provide local search capabilities to help you find necessary information as quickly as possible. In this two-part tutorial series I will show you how to leverage Android’s search framework. This first part shows you the basics of Android’s search framework. In the second part I am going to cover search suggestions and global search.

Local vs. global search

There are essentially two kinds of search in Android:

- local search and
- global search

Local search enables users to search the content of the currently visible app. Local search is appropriate for nearly every type of app. A recipe app could offer users to search for words in the title of the recipes or within the list of ingredients. Local search is strictly limited to the app offering it and its results are not visible outside of the app. Global search on the other hand makes the content also accessible from within the Quick Search Box on the home screen. Android uses multiple data sources for global search and your app can be one of them. In the next screenshot, taken on a tablet, you can see the results of a Google search on the left and some results from apps or for app titles on the right.

The user experience for local and global search is a bit different since global search uses multiple data sources. By default it provides search results from Google, searches for installed apps or contacts on the device, and it also might include results from apps that enable global search. The following screenshot of a phone shows all content that is included in the list of search results. For an app to be included in this list, the user must explicitly enable it. From the perspective of the user doing the search this is a very useful restriction. That way users can limit global search to include only what they really are interested in.
For app developers that tend to think that their content is the stuff any user is most likely searching for, this of course is a big letdown 🙂 You should use global search only when your content is likely to be searched for from the Quick Search Box. That might be true for a friend on Facebook but not necessarily for a recipe of a cooking app. I will cover configuring global search in more detail in the second part of this tutorial series.

Enabling search for your app

You always need to execute at least three steps to make your app searchable. If you want to provide search suggestions you have to add a fourth step:

- You have to write an xml file with the search configuration
- You have to write a search activity
- You have to tie the former two together using the AndroidManifest.xml
- You have to add a content provider for search suggestions – if needed

I am going to cover the first three steps in the following sections. And I am going to write about the content provider for search suggestions in the next part of this tutorial.

Configuring your search
The intent used to call the activity contains the query string as an intent extra and uses the value of the static field Intent.ANDROID_SEARCH. Since you configure which activity to use for search (see next section), Android calls your activity with an explicit intent. Most often your search activity will extend ListActivity to present a list of results to the user: public class SampleSearchActivity extends ListActivity { public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); handleIntent(getIntent()); } public void onNewIntent(Intent intent) { setIntent(intent); handleIntent(intent); } public void onListItemClick(ListView l, View v, int position, long id) { // call detail activity for clicked entry } private void handleIntent(Intent intent) { if (Intent.ACTION_SEARCH.equals(intent.getAction())) { String query = intent.getStringExtra(SearchManager.QUERY); doSearch(query); } } private void doSearch(String queryStr) { // get a Cursor, prepare the ListAdapter // and set it } } So why did I include the onNewIntent() method? I did this because of Android back stack. By default Android adds every new activity on top of the activity stack. That way the functionality of the back key is supported. If a user presses the back button, Android closes the top-most activity and resumes the next activity on the stack. Now consider a typical search behavior. The user searches for a word and has a look at the results. Oftentimes she either did not find what she were looking for or she wants to search additional information. So she clicks the search key again, enters the new search phrase and selects a suggestion. Thus a new instance of your search activity would end up on top of your previous one. The activity stack would contain the search activity twice and users would have to hit the back key twice to get back to where they were before starting the search. Which probably is not what you want. 
Normally you want your search activity to be on the stack only once. That's why you should declare the activity as singleTop. Should a user start a new search from within your search activity, Android recycles the instance and calls the method onNewIntent() with the new search intent as its parameter. Since the old intent is still stored within your activity, you should always call setIntent(newIntent) within your onNewIntent() method to replace the original intent with the new one.

Tying everything together in your manifest file

To make search work you have to configure it within your project's AndroidManifest.xml file. These are the things you have to put in there:

- Your search activity
- The intent used for search
- The launch mode for your activity
- The meta-data pointing to your searchable.xml
- Further meta-data to define which activity to use for search

The following code shows a snippet of a typical configuration:

<application android:icon="@drawable/icon"
    android:label="@string/app_name" >

    <meta-data android:name="android.app.default_searchable"
        android:value=".SampleSearchActivity" />

    <activity android:name=".SampleSearchActivity"
        android:label="@string/app_name"
        android:launchMode="singleTop" >
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
        </intent-filter>
        <intent-filter>
            <action android:name="android.intent.action.SEARCH" />
        </intent-filter>
        <meta-data android:name="android.app.searchable"
            android:resource="@xml/searchable" />
    </activity>
</application>

Your search activity must include the intent used for search

Android uses the value android.intent.action.SEARCH for the action within the intent when calling your search activity. So you must include this string within your activity's intent filters. Otherwise your activity would be ignored.

The meta-data pointing to your searchable.xml

As described above you have to have an xml file to configure your search. But Android knows neither the name of this file nor its location. So Google came up with a meta-data element to point to this file.
Additional meta-data to define which activity to use for search

Finally Android needs to know which activity to call when a search query is submitted. Again you have to use a meta-data element. I have written a blog post about why I would prefer convention by default for these elements.

Handling search yourself

It's important to present your user an obvious way to start a search. Even when most devices had a dedicated search button, you were still better off adding a way to start a search manually. For one, users might not be familiar with the search button. But even if they know of the button, they simply might not know that your app is capable of search. Thus you have to make it obvious that your app is searchable.

With the introduction of Honeycomb and Ice Cream Sandwich, Android devices no longer have a search button. Instead the preferred way to present a search option is to show a search button or the search widget in the action bar. This pattern has quickly become common, and you should use an action bar with a search button even on older devices. See for example the following screenshot. It shows Gigbeat – one of my favorite apps – on an Android 2.2 device.

When the user wants to search, you have to call the search dialog from within your activity. This is done by calling onSearchRequested(). This method is part of Android's Activity class and calls startSearch(), which I will cover in a few moments. Android's search framework uses onSearchRequested() when the dialog should be started. Thus you should use it as well, even though it might appear odd to call a callback method directly.

Adding specific data to the search

The default search behavior is not always sufficient. First of all, it only starts a local search. Most often that's appropriate for your app, but sometimes a global search is what you need.
And secondly, you might want to add some specific data to the search that depends on the state your app is in, or that is specific to starting the search from this activity only. In these cases you have to override onSearchRequested() and call startSearch() yourself. The method startSearch() takes four parameters, which I explain in the table below:

- initialQuery – a query string to pre-fill the search box with
- selectInitialQuery – whether to select the pre-filled query text, so that typing replaces it
- appSearchData – a Bundle with additional app-specific data that is passed along with the search intent
- globalSearch – whether to start a global search instead of a search within your app

The call to startSearch() of the base class Activity uses the values null, false, null, false – which means it is a local search without any further data.

Enabling your app for voice search

To enable voice search you only have to add a few lines to your search configuration. As you can see in the following screenshot, enabling voice search causes Android to display a microphone next to the search dialog. As soon as you click the microphone, the well-known voice input dialog pops up.

But not all Android devices support voice search. If a user's device has no voice search capabilities, your voice search configuration will be ignored and the microphone symbol will not be displayed.

There are four attributes of the searchable element that deal with voice search, but two should suffice in most cases. The only obligatory attribute for voice search is android:voiceSearchMode. This attribute must contain showVoiceSearchButton if you want to use voice search. Separated by a pipe ("|") it needs a second value which determines the type of search: launchWebSearch or launchRecognizer. It's best to use launchRecognizer here, because the other value would start a web search – which normally wouldn't yield results appropriate for your app. So the configuration should look like this:

android:voiceSearchMode="showVoiceSearchButton|launchRecognizer"

The second important attribute is android:voiceLanguageModel. Two values are possible for this attribute as well: web_search and free_form. Normally web_search works best here – it is ideal for search-like phrases, where a user often won't use complete sentences.
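Putting both voice attributes together with the label and hint from the configuration section earlier, a voice-enabled searchable.xml might look like this sketch:

```xml
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/search_label"
    android:hint="@string/search_hint"
    android:voiceSearchMode="showVoiceSearchButton|launchRecognizer"
    android:voiceLanguageModel="web_search" >
</searchable>
```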
The value free_form is more appropriate for voice input in other apps like mail or SMS that take larger chunks of well-formed text. See the developers blog post about Google's speech input API.

One word of advice concerning voice search though: Voice search only works if the user of the app has access to the internet. If the user has no connection to the internet, Android will display an error dialog:

Wrapping up

In this part of the tutorial I have shown you how to use Android's search framework. You have to add some configuration and add a search activity. But all in all, it's not too difficult to add decent search capabilities to your app. Even adding voice search requires just two lines of configuration – though you should know its limitations. In the next part of this tutorial I will cover how to add search suggestions to your app and to the Quick Search Box on Android's homescreen. Stay tuned!

28 thoughts on "Android Tutorial: Adding Search to Your Apps"

there is a slight syntax error, android:hint="@string/search_hint" it should be

I have corrected the misplaced closing bracket. Thanks!

but it still does not work

In this example, your manifest.xml declares the search activity as such:

</activity

However, the searchable activity is declared as:

public class SampleSearchActivity extends ListActivity { // etc }

Wouldn't it be less confusing if the class name was the same as that declared in your manifest? I know cut-and-paste coding is a bad idea anyway, but it would help to emphasize the way one connects to the other and possibly avoid confusion.

Christo … it looks like my XML was stripped from the post. Not sure how to get around that, but the first quote should include the manifest.xml declaration of the activity 'android:name=".YourSearchActivity"'

thanks christo

You're absolutely right, of course. I have changed all occurrences to SampleSearchActivity. Thanks for spotting this and for letting me know!
what can i do in doSearch() method for implement search functionality. great tutorial! I am wondering how the main activity calls the search activity. do you have the source code for this tutorial anywhere? thanks! I have followed your tutorial but my SearchActivity is not starting. 🙁 So what’s the problem? “not starting” can be caused by many problems. BTW: It’s better to move such questions to SO. Of course you can always ping me with a link to the SO-question, if you think I can help. On SO there are thousands of helpful people. Here it’s only me – and I also need to actually have time. If you post this question on SO, please explain what steps you have done, add a full logcat log and add as much code as possible. In this case code of the SearchActivity, the AndroidManifest.xml and the searchable.xml are needed. Awesome Post! Thanks a lot! However, I have one query. Does the voice search has a callback which I can modify as per my need? I just want the search view to be filled with text when voice search is used. Currently, the search intent is called and the search view does not get filled with the search query. Any suggestions here? Thanks again for the post! Thumbs up! Awesome Blog! Thanks a lot! How can i prepare my listAdapter?, i already have my cursor, but i don’t know how to prepare the listAdapter Thanks for awesome post. i have read your hole post and followed it but i want to know how action bar hide edit text and opens when user click on search icon and then it appears. b’z i want to do that. It’s the standard SearchView. You can add it to your menu.xml– all you have to do is to add those two attributes: app:actionViewClass="android.support.v7.widget.SearchView"and app:. 
Now in your Activity/Fragment do something like this:

[java]
SearchView searchView = (SearchView) MenuItemCompat.getActionView(item);
SearchManager searchManager = (SearchManager) getSystemService(Context.SEARCH_SERVICE);
SearchableInfo info = searchManager.getSearchableInfo(getComponentName());
searchView.setSearchableInfo(info);
[/java]

This will tie the SearchView to your searchable.xml – thus your SearchActivity will be called with the correct values. This snippet assumes you are using AppCompat.

BTW: I think I should add this directly to the tutorial. Thanks for pointing this out. After all, that's the most common way to use search.

Hi, I can see you've put a lot of effort into this blog. However, it is not useful to me, a beginner, at all. If you had added detailed information, showing exactly how to put things together, this would have been an excellent blog. But as it stands, you assume the reader knows too much, and can understand your generalisations. A working example project would have been great.

Thanks for excellent post. I must be mixing too many things together since I can't make it work. I was hoping to put a searchview (widget) in the main layout (not in the menu). Do you have a complete working example? Thanks

My ActionBar sample project on bitbucket also contains a simple SearchView implementation. I haven't used it outside of the ActionBar yet.

I really appreciate the response and I took some time looking (and trying to install) at your example, I'm afraid it's a little above my pay grade. I know it's a lot to ask but if you had an extremely simple example, similar to Android Dev. guide (but they don't explain anything well) of the search widget and processing the results of the search, I would really appreciate it or buy you a beer! Thanks

A very well explained and easy to follow tutorial!
Cleared a few issues I was having cheers 🙂 from where I can get the next tutorial of fourth step You can find the link at the bottom of the post in the related posts list. It’s about search suggestions. Keep in mind that some stuff has changed (for example global search was removed due to some ridiculous patents) since I wrote those search related posts. I have added the search activity but I am getting error when I call onSearchRequested(); why I am getting this error can you help me ? Sorry, but if something very specific to your code fails, it’s something you should post on StackOverflow. I cannot answer that. If you post this question on StackOverflow, please add sufficient code and a complete stacktrace (logcat output), so that anyone can answer your question. If in doubt, err on the side of providing too much information! Of course you could post the link to the Stackoverflow question here. Others or I might want to answer the question, or might see this later on and find the link to your Stckoverflow question (and the answers you get) useful. Hello,the tutorial is great but need help on how send my global search(from internet search) results to the sqlite database. Please advise on the code to use plus the xml. Deborah Hello Mr.Rittmeyer, I have a query and it is kind of urgent.I’m trying to integrate the toolbar and googlemap fragment in 1 app and I want to know how to make the toolbar option which contains the searchview work for searching places in google map. I know to use a button and then code for it, but in toolbar the searchview has the magnifying glass symbol , how do i integrate it with the google map and use geocode. 
This is the code I want to implement:

private void doSearch() {
    EditText location_field = (EditText) findViewById(R.id.edtsearch);
    String location = location_field.getText().toString();
    List addressList = null;
    if (location != null || !location.equals("")) ;
    {
        Geocoder geocoder = new Geocoder(this);
        {
            try {
                addressList = geocoder.getFromLocationName(location, 1);
            } catch (IOException e) {
                e.printStackTrace();
            }
            Address address = addressList.get(0);
            LatLng latLng = new LatLng(address.getLatitude(), address.getLongitude());
            mMap.addMarker(new MarkerOptions().position(latLng).title("Marker"));
            mMap.animateCamera(CameraUpdateFactory.newLatLng(latLng));
            mMap.moveCamera(CameraUpdateFactory.newLatLng(latLng));
            mMap.getUiSettings().isCompassEnabled();
            mMap.getUiSettings().isZoomControlsEnabled();
        }

Please help me out here, as when I type the name of places there are no places being searched. Also, let me know if you need the entire code. Have a good day. Thanks in Advance. Shreyasta Samal
I'm having a problem with VC++, every time i compile a program even the smallest one i end up with a 100 KB or more !!!! any solution ?

One Man's Villain, is another man's HERO

did you alter any of the settings? like including libraries or so?

no i did'nt ulter anything i just installed and compiled !

No big issue... As in my picture... I have a tiny program, simply prints out "Hello world!" but the dependency on libraries and using namespace dumps a lot of data into the .exe... see for more references.

try this code:

Code:
#include <stdio.h>

int main(void)
{
    printf("Hello World!");
    return 0;
}

this will give a normal sized program. i don't know why it is that big with your code, i have tried altering it, but it still stays big 400+ kb. but i'm not a C++ coder, i just know C, and the above code is C, the only advantage is that you can use it in a cpp file also.

well i think this is just one of microsoft's problems ! i should use another compiler

well i think this is just one of microsoft's problems !

That's not an explanation - compiling lepricauns code I get an executable of 40kb using VC++. Adding 10 somewhat complex statements, I get an executable of 41kb. Both sizes are reasonable. It's not just one of microsoft's problems. However, I never actually tried to minimize the size of executables (I just know that option: "optimize for ... minimal size") and therefore cannot help you. sorry.

/edit: switching to another compiler might not help you much. I tried on a linux machine with gcc the above example:

-rwxr-xr-x 1 abcdefgh abcd 515K Oct 22 12:54 prog*

If the only tool you have is a hammer, you tend to see every problem as a nail. (Abraham Maslow, Psychologist, 1908-70)

ok any way i'm using now DEV_CPP it is very good and it is compiling every C & C++ Code I'm giving it !

VC++ usually compiles a Debug build that includes a lot of bloat.
Compile a Release build and it would strip a lot of the debugging info out and probably make a smaller executable. Although I do remember my VC++ making huge 200KB+ executables for Hello World. Which is why I liked DevC++ better. Cheers.

Timaxe.com -- Sports Photography
In C++ AMP, the type char is part of the subset of the C++ language restricted from use on accelerators. Fortunately, it's relatively easy to work around this restriction, and you may even get better performance to boot (though this is dependent on the code you've written and the GPU you are running on, among other factors). For example, even with support for a character type, it is a much better idea to initialize an array using integer values rather than character values. Let's take a look at this issue in more detail.

Suppose you want to work with an STL vector of char type data containing size number of elements in C++ AMP. You cannot write the following because you cannot use unsigned char in the restricted scope:

array_view<unsigned char> d_data(size, data.data()); // THIS WON'T COMPILE!!
parallel_for_each(extent<1>(size), [=] (index<1> idx) restrict(amp)
{
    // Example 1. Read each character in the vector
    ... = d_data[idx];

    // Example 2. Increment each character in the vector
    d_data[idx]++;

    // Example 3. Add a value to each character in the vector
    d_data[idx] += val;

    // Example 4. Assign a value to each character in the vector
    d_data[idx] = val;
});
d_data.synchronize();

However, if the vector is very large and the data can fit in 8 bits, then an array of unsigned char values is exactly what you may wish to use. Without support for char and unsigned char in C++ AMP, you could use integers and unsigned integers to accomplish your task, but care must be taken to ensure that writing all 32 bits does not trounce on other concurrent writes. Let's see how we can work around this restriction.

In the example above, the first step is to define an array view of unsigned integers via a cast. If you're not interested in the details, you can skip to the next section to see a set of functions that should help you treat an array, array_view, or C-style array as if the element type was char.
Since each unsigned integer can represent four characters, you’ll have to divide size by four (rounding up) and cast the vector to a vector of unsigned integers: array_view<unsigned int> d_data((size+3)/4, reinterpret_cast<unsigned int*>(data.data())); The rest of the work-around involves arithmetic, bit masks, and atomic operations. If you’re not interested in the details, you can skip to the next section to see a set of functions that should help you treat an array, array_view, or C-style array as if the element type was char. Reading the 8-bit character value (d_data[idx]) is possible with the following expression: ((arr(idx[0] >> 2) & (0xFF << ((idx[0] & 0x3) << 3)))) >> ((idx[0] & 0x3) << 3) Note we have to use idx[0] rather than idx (though we could have indexed into the array directly with idx since division is overload over the index type). Let’s break this down with a picture: These bit manipulations allow us to find the 10th character by looking at the 2nd byte in the 3rd unsigned integer. In addition to similar bit manipulations and arithmetic, writing to an 8-bit character element involves the use of atomic operations. Although it may be safe to write to the 8 bits concurrently, using integers and unsigned integers requires writing to 32 bits. There is thus the potential for a data race since the neighboring characters may be written at the same time. So even if you wouldn’t need an atomic operation to write to the 8 bits, you will most likely need one since you have to write to 32 bits! (The exception is if you know that you're not concurrently writing to any of the neighboring characters in the same 32 bits. In that case, you don't need the atomic operation.) Incrementing an 8-bit character element (d_data[idx]++) is possible with the following line: atomic_fetch_add(&arr(idx[0] >> 2), 1 << ((idx[0] & 0x3) << 3)) As with the expression to read an 8-bit character, this line involves many of the same bit operations. 
The key is to align the value 1 with the character that needs to be incremented.

Adding a value to an 8-bit character element (d_data[idx] += val) requires only a minor change to the above, and it can be computed with the following line:

atomic_fetch_add(&arr(idx[0] >> 2), val << ((idx[0] & 0x3) << 3))

Note the change from 1 to val.

Finally, writing a value to an 8-bit character element (d_data[idx] = val) can be done with two calls to atomic_fetch_xor. The trick is to change the 8 bits while leaving the others unchanged. We can thus zero out the 8 bits by xor'ing in the original value and then set the 8 bits to the desired value by xor'ing it in. Here is the code for our example:

atomic_fetch_xor(&arr(idx[0] >> 2), arr(idx[0] >> 2) & (0xFF << ((idx[0] & 0x3) << 3)));
atomic_fetch_xor(&arr(idx[0] >> 2), (val & 0xFF) << ((idx[0] & 0x3) << 3));

Note that incrementing a character and adding a value to a character are atomic, but writing a character, as written, is not atomic. It uses atomic operations so that it can be completed concurrently with a write to a neighboring character, but it cannot be completed concurrently with a write to the same character, since the whole operation is not atomic. It is possible to make the write atomic using an atomic compare-and-exchange operation. By reading the value, and then making sure that the write is completed before any other bits change, the value can be written atomically.
For completeness, let's take a look at this code as an aside:

bool done = false;
while (!done)
{
    unsigned int orig = arr[idx / 4];
    unsigned int repl = orig ^ (orig & (0xFF << ((idx[0] & 0x3) << 3)));
    repl ^= (val & 0xFF) << ((idx[0] & 0x3) << 3);
    done = atomic_compare_exchange(&arr[idx / 4], &orig, repl);
}

The following set of functions will let us rewrite the above code with minimal changes:

// Read character at index idx from array arr.
template <typename T>
unsigned int read_uchar(T& arr, int idx) restrict(amp)
{
    return (arr[idx >> 2] & (0xFF << ((idx & 0x3) << 3))) >> ((idx & 0x3) << 3);
}

// Increment character at index idx in array arr.
template<typename T>
void increment_uchar(T& arr, int idx) restrict(amp)
{
    atomic_fetch_add(&arr[idx >> 2], 1 << ((idx & 0x3) << 3));
}

// Add value val to character at index idx in array arr.
template<typename T>
void addto_uchar(T& arr, int idx, unsigned int val) restrict(amp)
{
    atomic_fetch_add(&arr[idx >> 2], (val & 0xFF) << ((idx & 0x3) << 3));
}

// Write value val to character at index idx in array arr.
template<typename T>
void write_uchar(T& arr, int idx, unsigned int val) restrict(amp)
{
    atomic_fetch_xor(&arr[idx >> 2], arr[idx >> 2] & (0xFF << ((idx & 0x3) << 3)));
    atomic_fetch_xor(&arr[idx >> 2], (val & 0xFF) << ((idx & 0x3) << 3));
}

// Helper functions to accept 1D indices of index<1> type instead of integers.
template <typename T>
unsigned int read_uchar(T& arr, index<1> idx) restrict(amp) { return read_uchar(arr, idx[0]); }

template<typename T>
void increment_uchar(T& arr, index<1> idx) restrict(amp) { increment_uchar(arr, idx[0]); }

template<typename T>
void addto_uchar(T& arr, index<1> idx, unsigned int val) restrict(amp) { addto_uchar(arr, idx[0], val); }

template<typename T>
void write_uchar(T& arr, index<1> idx, unsigned int val) restrict(amp) { write_uchar(arr, idx[0], val); }

With these functions, we can write the code we wanted to write at the beginning as follows:

array_view<unsigned int> d_data((size+3)/4, reinterpret_cast<unsigned int*>(data.data()));
parallel_for_each(extent<1>(size), [=] (index<1> idx) restrict(amp)
{
    // Example 1. Read each character in the vector
    ... = read_uchar(d_data, idx);

    // Example 2. Increment each character in the vector
    increment_uchar(d_data, idx);

    // Example 3. Add a value to each character in the vector
    addto_uchar(d_data, idx, 2);

    // Example 4. Assign a value to each character in the vector
    write_uchar(d_data, idx, 3);
});
d_data.synchronize();

These techniques and abstractions make it possible and easy to write char-based code on the GPU using C++ AMP. It's worth noting that the ideas above also apply to C-style arrays declared as tile_static variables.

Comments are welcome below and in our forum. Also if you have an interesting computation that benefits from using the char type, we'd be curious to see it. One common case where 8-bit data shows up is where you have an input device, such as a camera, that produces 8-bit data. Another case is this example of a 256-bin histogram. Do you have another computation that benefits from using 8-bit data on the GPU?
I've looked all over the place and i'm trying to find a way to print double array in my c program. I'm not trying to print 2d array but an array that is double. here is my code:

#include <stdio.h>

void printarray(double array[]){
    int i;
    for(int i=0;i<=5;i++){
        printf("%.2f\n",array[i])
    }
}

int main()
{
    double array={1.0,2.0,3.0,4.0,5.0};
}
https://codedump.io/share/UCN1BjJXhAxe/1/how-do-you-print-arrays-of-double-in-c
CC-MAIN-2016-50
refinedweb
142
78.75
Step 1: How It Works?

We need to connect the RGB LED strip to the Arduino Nano using a breadboard. To drive the strip, an external power supply connected to the strip is required. After we have attached the Arduino to the Raspberry Pi via USB, we need to upload a control code to it. This allows Ozeki 10 – a messaging system that routes messages between entities, installed on the Raspberry Pi – to control the Arduino microcontroller. We also need a microphone with a sound card attached to the Raspberry Pi. Finally, we need to compile and run a few lines of code in Robot Controller (a built-in app in Ozeki 10), and we will have a voice controlled RGB LED strip. Let's look at these steps in more detail!

Step 2: Accessories

For this project, we need exactly the following components:

- Raspberry Pi 2 or 3
- USB sound card with microphone
- Installed Ozeki 10 on Raspberry Pi
- Arduino Nano
- 3 N-channel power MOSFETs (e.g. IRF540N)
- LED strip
- Breadboard
- Jumper wires
- Power supply

Let's get started!

Step 3: RGB LED Strip Control Switch on Breadboard

LED strips are very simple, so it is easy to use them with any microcontroller such as the Arduino Nano. I suggest using PWM dimming techniques to control the strip. Each LED pin may end up requiring an amp or more to sink to ground. Because of this, we need to use power transistors. Do not try to connect the pins directly to your microcontroller, they will burn out and/or not work! Although I use an IRF540N MOSFET, you can use any power NPN transistor or N-channel MOSFET, but make sure the transistor is rated to pass as much current as you need. For example, since we draw about 0.2 amps per channel per meter, a 5 meter strip will need to pass up to 1 amp per transistor. The diagram above shows how to connect with N-channel MOSFETs, where the Gate is pin 1, the Drain is pin 2 and the Source is pin 3. To write this step, I used the content of another website.

Step 4: Format Arduino Nano

Why should we format?
Arduino microcontrollers have a built-in memory called EEPROM whose values are kept when the board is turned off. When you get a new Arduino, this storage is often filled with garbage. So we need to clear this storage before uploading a control code to the board.

For formatting, we need Arduino's own software called Arduino IDE, which you can download from the Arduino website. After you have installed it, connect the Arduino to your PC via USB. Before uploading, make sure you have properly set up the application: under the Tools menu, you have selected Arduino Nano as the Board and have set the Port that the device connects to (it is COM6 for me). The last thing in this step is to upload the formatter code below to your microcontroller. Copy it, paste it into the Arduino IDE, then click on the Upload button.

#include <EEPROM.h>

int a;

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);
    Serial.begin(115200);
    for (uint16_t i = 0; i < EEPROM.length(); ++i) {
        EEPROM.update(i, 0);
    }
    for (uint16_t i = 0; i < EEPROM.length(); ++i) {
        a += EEPROM.read(i);
    }
    if (a == 0) {
        Serial.println("EEPROM is null! The process was successful!");
    } else if (a > 0) {
        Serial.println("EEPROM is not null, please upload the code again!");
    }
}

void loop() {
    if (a == 0) {
        digitalWrite(LED_BUILTIN, HIGH); // turn the LED on (HIGH is the voltage level)
        delay(1000);                     // wait for a second
        digitalWrite(LED_BUILTIN, LOW);  // turn the LED off by making the voltage LOW
        delay(1000);
    }
}

Step 5: Install Ozeki 10

Before installing, let me say a few words about what the Ozeki 10 software is. It is a messaging system that routes messages between real world entities. We can build up communication between shared hardware such as the Arduino Nano and software resources. It has a built-in software called Robot Controller that we will use for sending messages to the Arduino. Thanks to this, we can easily implement our voice controlled RGB LED strip with a few lines of code after we have finished the configuration.
So, the guide to download and install Ozeki 10 on Raspbian OS is available online. After you have downloaded and installed it, connect with your PC to the Raspberry Pi via WiFi. I consider it important to mention that the Ozeki 10 interface is only accessible through a web browser, because Ozeki 10 runs as a service, so you need a browser in which you can log in to it and configure or manage the system. By the way, you can build this system without using a Raspberry Pi. In this case, you only need to run the Windows or Ubuntu Linux installer on your PC.

Step 6: Upload a Control Code to Arduino Nano

The Ozeki 10 system controls this microcontroller using text messages. To enable the microcontroller to understand these messages, it is necessary to upload the control code to it. Like in the previous step, copy the following code into the Arduino IDE, then upload it to the hardware.

#include <OzIDManager.h>
#include <OzRGBLedController.h>

// global pointers
OzIDManager* manager;
OzRGBLedController* rgbController;

// please select PWM pins
const int redPin = 3;
const int greenPin = 5;
const int bluePin = 6;
const int pwm = 9;

void setup() {
    // wait for serial port
    Serial.begin(115200);

    // instantiate objects
    manager = new OzIDManager;
    manager->_sendACK = true;
    manager->_checksum = true;
    OzCommunication::setIDManager(manager);

    rgbController = new OzRGBLedController(redPin, greenPin, bluePin);
    rgbController->setPWM(pwm);

    int x = 1;
    manager->sendLinkSetup();
    manager->PrintWelcomeLine(rgbController, x++, "MyRGBLed");
}

void loop() {
    OzCommunication::communicate();
}

Step 7: Login Into Ozeki 10

Connect with your PC to the Raspberry Pi via WiFi, then open a web browser and navigate to the Raspberry Pi's 9505 port. You will see the Ozeki 10 Login page. Enter the password that you specified during the installation, then click on OK. If you installed it on Windows or Ubuntu Linux and do not want to use a Raspberry Pi, open localhost:9505 in your internet browser.
Step 8: Open Robot Controller

Robot Controller is a built-in application in Ozeki 10 in which we can control and manage all the entities connected to the system. We can send messages to the entities and, of course, we can receive messages from them using the Subscribe method (which we will talk about in Step 11). The software detects all the entities connected to the system, including the Arduino Nano as an RGB LED connection.

Step 9: Configure Speech to Text

Time to configure the Speech to Text connection. After we have created it, we will be able to set words or a word set to be detected using the microphone. To do this, we need to open Connections and create a new one for the microphone as a Speech to Text type of connection.

First, navigate to Tools on the menubar, then select the Connections item. In this Connection view, you can see all the connected entities. We will create a new one for the microphone. To start this, click on the Create new Connection button. On the right side panel, you can choose what type of connection you would like to add; now we need AV (Audio or Video), then select Speech to Text. Then name it My_Speech_to_Text_1, select your microphone as the recorder device and choose English as the speech recognizer. Finally, set the accuracy level of the speech recognition to 80 percent. If you do not have an English option, you need to install the English language pack on your system.

Step 10: Set Detectable Word Set

Then we need to create a detectable word set to load the color names to be detected. To do this, in the Connection view, select the My_Speech_to_Text_1 connection's Details button, then navigate to the Detect words tab and click on the Create new Detectable word button. On the right-side panel, select Word set, then the Colors option. Finally, name it Colors and click OK. Click on the Home button to return to Robot Controller.
Step 11: Run the Code in Robot Controller

Running the code below will turn the color of the RGB LED strip to green, then red, and back again 10 times. After that, it will subscribe to the My_speech_to_Text_1 connection that we created before, using the Subscribe method. So, when we say a color name into the microphone, the Speech to Text connection will recognize it and convert it into a simple text message, which we will catch through the Receive method using its Message parameter. In this method, we can filter on the name of the connection using the FromConnection property. If the message comes from the configured microphone, we transmit it to our Arduino microcontroller using the Send method. Finally, our RGB LED strip will light up in the spoken color. Copy this code, then compile and run it. Now you can control the RGB LED strip's color with your microphone by saying color names.

using System;
using System.Threading;

namespace Ozeki
{
    public class Program
    {
        public void Start()
        {
            for (int x = 0; x < 10; x++)
            {
                Send("MyRGBLed@localhost", "green");
                Thread.Sleep(500);
                Send("MyRGBLed@localhost", "red");
                Thread.Sleep(500);
            }

            Subscribe("My_speech_to_Text_1@localhost");
        }

        public void Receive(Message msg)
        {
            if (msg.FromConnection == "My_speech_to_Text_1@localhost")
            {
                Send("MyRGBLed@localhost", msg.Text);
            }
        }
    }
}

Step 12: Testing the Solution

First, we will test the Speech to Text connection, which you can see in Test 1. After you have compiled and started the code, you will see a --- Subscribe My_speech_to_Text_1@localhost message on the console. Then say a color name into your microphone, for example: red. In this case, a message should appear on the console: --> (My_speech_to_Text_1@localhost) Red. This means that Robot Controller received a message containing Red from the Speech to Text connection. Finally, we will test the MyRGBLed connection, which you can see in Test 2.
After Robot Controller has successfully received a message from the Speech to Text connection, it should transmit the received message to the MyRGBLed connection: <-- (MyRGBLed@localhost) Red. Then the RGB LED strip should light up and change its color to red.

Step 13: Further Reading

Runner Up in the Voice Activated Challenge

3 Discussions

1 year ago
Well done and documented! You've got my vote!

Tip 1 year ago on Step 13
It works, but I used Windows 10 instead of a Raspberry Pi. First I had trouble compiling the Arduino code. I had to install Ozeki 10 on Windows first to compile the Arduino code. Other than this, it is a very neat solution.

Tip 1 year ago on Step 6
You need to install Ozeki 10 first, otherwise the code won't compile. Use a Windows 10 PC instead of a Raspberry Pi to configure the Arduino.
john price wrote: This is SUPER confusing.

john price wrote: So I have a GUI. When you push Enter, two JLabels pop up saying First Editable Number: Yourfirstenternumberhere, Second Editable Number: Yoursecondenternumberhere.

john price wrote: Please run it.

john price wrote: Now, when you click on subTract, it subtracts from the numbers you typed in, no problem. When you click on it again, it starts from 0, subtracting 1 every time.

john price wrote: Now you see my problem.

john price wrote: The problem has something to do with beck and heck, because when I set them = to another number in the Risk class, it started counting down the second time from that. I don't think heck and beck are being "updated".

Andrew Monkhouse wrote: Also, as a matter of etiquette, you should mention when you post the same problem in another forum and another forum. It is not necessarily a problem doing this, but it would be nice to know if you already have an answer somewhere else before we spend time trying to help you here. Likewise, now that you have the solution in this forum, it is considered polite to go back to the other forums and state that you have the solution here.

john price wrote: I have posted this in 5 different forums now because I have not received a sufficient answer to my question. I still do not feel as if my question has been answered. ... If you mods want to close this, I will respect that and will not post again, but I feel this should NOT be closed, because it has not been sufficiently answered.

I wrote: Anyway, the problem you have is one of scope. Take a look at lines 78 & 79 – what is the scope of those two variables? What scope do you need?

Andrew Monkhouse wrote: Hmmm, that doesn't answer why it won't compile.
Let's try it slightly differently:

public class Demo {
    private static int changeableScope = 3;

    public static void main(String args[]) {
        boolean gate = true;

        if (gate) {
            int changeableScope = 1;
            System.out.println("changeableScope = " + changeableScope);
        }

        System.out.println("changeableScope = " + changeableScope);
    }
}

What will be printed at line 9, and what will be printed at line 12? The two answers are different, and this is the core of your problem.

john price wrote: Should I compile, see, and check or just think about it?

john price wrote: I can't believe that was my mistake.

john price wrote: So it's basically saying "Since you've already defined heck and beck, you want something new for this only. We'll give it to you and then go back to the original." Right?
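For readers who would rather see the shadowing behavior isolated than compile the thread's GUI program, the same effect can be reproduced in a small runnable demo (the class and method names below are my own, not from the thread):

```java
public class ShadowDemo {
    // field in class scope, analogous to heck/beck in the original program
    static int changeableScope = 3;

    // inside the block, the local declaration shadows the field
    static int insideBlock() {
        int changeableScope = 1;
        return changeableScope; // the local wins here
    }

    // outside the block, the local is out of scope; the field is visible again
    static int outsideBlock() {
        return changeableScope;
    }

    public static void main(String[] args) {
        System.out.println("inside block:  " + insideBlock());
        System.out.println("outside block: " + outsideBlock());
    }
}
```

The `int changeableScope = 1;` declaration creates a brand-new local variable that hides the field only until its enclosing block ends, which is exactly why re-declaring `heck` and `beck` locally leaves the fields stuck at their old values.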
Customizing ListView Display in Xamarin

The usual behavior of ListView is to display items in a list with a single column. The ItemsSource property of ListView accepts the list of data to be displayed and renders it on screen. The code for a simple list displaying the students of a class would look like the following picture.

Now let's customize this ListView to display the student name along with the class and school they belong to. As in the image above, the data contains a single record per student. So, to accommodate the data in a single record, let's create a Student model class.

public class Student
{
    public string StudentName { get; set; }
    public string Class { get; set; }
    public string School { get; set; }
}

In order to customize the ListView display, we need to subclass the ViewCell class.

public class StudentCell : ViewCell
{
    public StudentCell()
    {
        Label nameCell = new Label { TextColor = Color.Blue, FontSize = 30 };
        nameCell.SetBinding(Label.TextProperty, "StudentName");

        Label classCell = new Label { TextColor = Color.Gray, FontSize = 20 };
        classCell.SetBinding(Label.TextProperty, "Class");

        Label schoolCell = new Label { TextColor = Color.Gray, FontSize = 20 };
        schoolCell.SetBinding(Label.TextProperty, "School");

        View = new StackLayout()
        {
            Children = { nameCell, classCell, schoolCell }
        };
    }
}

We are done with the model and the inherited ViewCell class, i.e. StudentCell. We will use the ItemTemplate attribute of ListView to assign the StudentCell class.
List<Student> students = new List<Student>();
students.Add(new Student() { StudentName = "Nirmal Hota", Class = "10th", School = "Bhagabati High School" });
students.Add(new Student() { StudentName = "Tadit Dash", Class = "6th", School = "Student High School" });
students.Add(new Student() { StudentName = "Suraj Sahoo", Class = "5th", School = "Khannagar High School" });
students.Add(new Student() { StudentName = "Suvendu Giri", Class = "9th", School = "Baleswar High School" });
students.Add(new Student() { StudentName = "Subhasish Pattanaik", Class = "8th", School = "Rourkela High School" });

// The root page of your application
MainPage = new ContentPage
{
    Content = new StackLayout
    {
        VerticalOptions = LayoutOptions.Center,
        Children =
        {
            new ListView()
            {
                ItemsSource = students,
                ItemTemplate = new DataTemplate(typeof(StudentCell)),
                HorizontalOptions = LayoutOptions.FillAndExpand
            }
        }
    }
};

Let's run the app to see the new ListView. Happy coding!

Published at DZone with permission of Nirmal Hota, DZone MVB. See the original article here.
DeepInfant

DeepInfant® is a neural network system designed to predict whether and why your baby is crying. DeepInfant uses artificial intelligence and machine learning algorithms to determine which acoustic features are associated with which of a baby's needs. For example, babies who are in pain produce cries with high energy, while a fussy cry may have more periods of silence. We are preparing our findings for academic review and publication, within a single well-trained model based on academic datasets.

Summary

DeepInfant is a machine learning model that uses artificial intelligence to predict your baby's needs based on sound classification of cries.

Dataset

DeepInfant was used as part of a final project in the Speech Technology course at KTH (Royal Institute of Technology, Sweden).

iOS Models

This repo is published with pre-trained Core ML models.

- DeepInfant_VGGish
- DeepInfant_AFP

iOS and iPadOS App

This repo contains an example of using the DeepInfant_VGGish model to build an iOS app that analyzes a baby's cry sound and pushes prediction results along with a tip on how to deal with each predicted result.

Building a model

The audio clips have a sample rate of 16000 Hz and a duration of about ~7 secs. This means there are about 16000 numbers per second (roughly 16000*7 per clip) representing the audio data. We take a fast Fourier transform (FFT) of a 2048-sample window, slide it by 512 samples, and repeat the process over the 7 sec clip. The resulting representation can be shown as a 2D image and is called a Short-Time Fourier Transform (STFT). Since humans perceive sound on a logarithmic scale, we'll convert the STFT to the mel scale.
The librosa library lets us load an audio file and convert it to a melspectrogram:

import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

fname = 'test-1-audio001.wav'
samples, sample_rate = librosa.load(fname)

fig = plt.figure(figsize=[4, 4])
ax = fig.add_subplot(111)
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.set_frame_on(False)

S = librosa.feature.melspectrogram(y=samples, sr=sample_rate)
librosa.display.specshow(librosa.power_to_db(S, ref=np.max))

The melspectrogram of a baby crying looks like the image below.

In order to build the spectrograms of the audio samples needed for training the model, we'll be using the fantastic audio loader module for fastai v1 built by Jason Hartquist.

n_fft = 2048   # output of fft will have shape [1024 x n_frames]
n_hop = 512    # 50% overlap between frames
n_mels = 128   # compress 2048 dimensions to 128 via mel frequency scale
sample_rate = 16000

tfms = get_frequency_batch_transforms(n_fft=n_fft,
                                      n_hop=n_hop,
                                      n_mels=n_mels,
                                      sample_rate=sample_rate)

batch_size = 64
data = (AudioItemList.from_folder(CRYING_PATH)
        .split_by_folder()
        .label_from_folder()
        .databunch(bs=batch_size, tfms=tfms, equal_lengths=False))

learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn.lr_find(start_lr=0.001, end_lr=1)
learn.recorder.plot()

Fastai's cyclical learning rate finder runs the model against a small batch of training samples to find a good learning rate. As the learning rate increases to 10e-2, you can see the model loss decrease. However, for higher learning rates, the loss begins to increase. Hence we pick 10e-2 as the learning rate for training the model. After training the model over a few epochs, we see an accuracy of 95% over the validation set.

Predicting over realtime audio samples

Now that we have a really good model, in order to use it in a real application, we need to be able to run predictions over an audio stream in real time.
We use the pyaudio library to read audio samples from the device microphone, convert the audio data into numpy arrays, and feed it to the model.

while True:
    frames = []  # a python list of chunks (numpy.ndarray)
    for _ in range(0, int(RATE / CHUNKSIZE * RECORD_SECONDS)):
        data = stream.read(CHUNKSIZE, exception_on_overflow=False)
        frames.append(np.fromstring(data, dtype=np.float32))

    npdata = np.hstack(frames)
    audio_clip = AudioClip.from_np(npdata, RATE)
    run_pred(audio_clip)

The above code reads a 7 sec audio clip from the microphone and loads it into memory. It converts it to a numpy array and runs the model on it to get a prediction. This simple piece of code is now ready to be deployed to a service or an embedded device and used in real applications!

Convention

The audio files should contain baby cry samples, with the corresponding tagging information encoded in the filenames. The samples were tagged by the contributors themselves. So here's how to parse the filenames.

iOS: 0D1AD73E-4C5E-45F3-85C4-9A3CB71E8856-1430742197-1.0-m-04-hu.caf

app instance uuid (36 chars)-unix epoch timestamp-app version-gender-age-reason

So, the above translates to: the sample was recorded with the app instance having the unique id 0D1AD73E-4C5E-45F3-85C4-9A3CB71E8856.
These ids are generated upon installation, so they identify an installed instance, not a device or a user.

- the recording was made at 1430742197 (unix time epoch), which translates to Mon, 04 May 2015 12:23:17 GMT
- version 1.0 of the mobile app was used
- the user tagged the recording to be of a boy
- the baby is 0-4 weeks old according to the user
- the suspected reason for the cry is hunger

Android: 0c8f14a9-6999-485b-97a2-913c1cbf099c-1431028888092-1.7-m-26-sc.3gp

The structure is the same, with the exception that the unix epoch timestamp is in milliseconds.

Gender
- m – male
- f – female

Age
- 04 – 0 to 4 weeks old
- 48 – 4 to 8 weeks old
- 26 – 2 to 6 months old
- 72 – 7 months to 2 years old
- 22 – more than 2 years old

Reason
- hu – hungry
- bu – needs burping
- bp – belly pain
- dc – discomfort
- ti – tired

Please feel free to contact us if you need any further assistance.

License

Apache License Version 2.0, January.
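As a quick sanity check of the naming convention above, a filename can be parsed by taking the first 36 characters as the instance uuid and splitting the remainder on dashes. This helper is my own illustration, not part of the repo:

```python
import os

# lookup table taken from the Reason codes above
REASONS = {"hu": "hungry", "bu": "needs burping", "bp": "belly pain",
           "dc": "discomfort", "ti": "tired"}

def parse_cry_filename(filename):
    """Split an iOS/Android cry-sample filename into its tagged fields."""
    stem, _ext = os.path.splitext(filename)
    uuid = stem[:36]  # the app instance uuid is always 36 characters
    timestamp, version, gender, age, reason = stem[37:].split("-")
    return {
        "uuid": uuid,
        "timestamp": int(timestamp),  # seconds (iOS) or milliseconds (Android)
        "app_version": version,
        "gender": gender,
        "age_code": age,
        "reason": REASONS.get(reason, reason),
    }

info = parse_cry_filename(
    "0D1AD73E-4C5E-45F3-85C4-9A3CB71E8856-1430742197-1.0-m-04-hu.caf")
print(info["reason"])  # hungry
```

The same function works for the Android example, since the only difference is the timestamp resolution.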
I often find myself writing or consuming APIs which require the caller to specify some sort of options. I've seen numerous ways to specify those options, but I've yet to find one that I really like. Let's work with an example throughout the rest of this post. Imagine we're working on an application which has some notion of a User. We're now about to create a function to output a User to a Stream. We want to provide two options:

- Should we include the User's email address?
- Should we include the User's phone number?

I can imagine several ways of designing this API. First, we could use a couple of bool parameters, like this: Next, we could use a flags enum to represent the options, like this: Finally, we could create a struct which contains two bools instead of the enum, like so: I don't like passing the two bools, because I find reading the call site of a method designed like this difficult: you don't know what the parameters mean anymore. Does this display the email address or the phone number? I can't remember anymore. I know that parameter help can tell you the names of the parameters, but that doesn't help when I'm doing a code review in windiff, since windiff doesn't have parameter help tool-tips yet. On my current team, we have a convention that when we call a method like this, we put the parameter name in a comment, so that we can see it, like: However, in general, I don't like to rely on conventions that force people to put comments in a certain style to make the code readable. If possible, I'd rather design the API in such a way that it has to be readable. The second option is using enum flags to represent the options. My problem with this approach is that it is hard to get right, both for the caller of the API and the implementer. This example is sort of trivial, but I can remember a time when I was dealing with a bit-field that contained 31 unique bit values that could be set and unset independently.
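The original post showed the three call styles as images that did not survive extraction. A plausible reconstruction of the call sites, based on the DisplayUserOptions names used later in the post (the exact signatures are my guess, not the author's code), might look like:

```csharp
// 1. bool parameters -- what do true and false mean here?
DisplayUser(user, Console.Out, true, false);

// ...or with the team's comment convention applied:
DisplayUser(user, Console.Out, /* displayEmail: */ true, /* displayPhoneNumber: */ false);

// 2. a [Flags] enum, combined with bitwise OR:
DisplayUser(user, Console.Out, DisplayUserOptions.Email);

// 3. an options struct, set via a C# 3.0 object initializer:
DisplayUser(user, Console.Out, new DisplayUserOptions { Email = true, PhoneNumber = false });
```

The third call is quoted verbatim further down in the post; the first two are inferred from the prose.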
Getting all of the ~ and | and &'s just right was very hard, and once it was done, it was hard to figure out what it was trying to do. A final reason that I don't like enums is that in practice I often find that there are more behaviors I want to be able to add to the options, which isn't possible with enums. For example, it might be a requirement that two of the options are mutually exclusive. It's difficult to ensure that this is enforced with an enum. Finally, we have the option of using a struct, which addresses both of the concerns above. You can write a call like: Well, that's certainly clear. It also makes it easier to understand control flow that sets or clears the options, and to understand conditions based on them. It also gives a place to add that behavior: I can add methods to the struct, make the fields into properties which have setters that do validation, etc. The problem with this approach is that in simple cases like the above, it is much more verbose than either of the other two alternatives. However, I recently realized that in C# 3.0, we can take advantage of Object Initializers to make the simple case simple again: It's almost like having named parameters in C# 🙂 What do you think? Which alternative do you prefer? Do you have another one that I haven't thought of? I tend to use the second option quite a lot myself when regularly passing around more than one value between methods, or when the parameters to a function get quite large. The third form does look rather nice when I only need to set one or two properties such as in the example, but any more than about three and I'd probably stick with the second form.
If you add additional options in a later revision, you would have to change the method signature, which would result in a breaking change to your API (internal or external, it's still undesirable). As for options 2 and 3, using an enum would be compatible with .NET 1.0, 1.1, 2.0, and 3.0, but object initializers would only work with .NET 3.5. Having said that, I think that using an object with the appropriate fields is better than an enum, since you could abstract the initialization into a function to improve readability (I'm thinking of your 31 options scenario here, which would also make the object initializers convention formatting a challenge). I pretty much agree with your take on this. The first option, bools, doesn't really have any benefits, but does have a lot of drawbacks. Enums are nice for relatively simple situations (like your example), while using a settings object (be it a class or a struct) works well for more complex scenarios. I frequently use *Options structures with nullable types representing optional parameters. With the anonymous types feature of C# 3.0, can't DisplayUser(user, Console.Out, new DisplayUserOptions { Email = true, PhoneNumber = false }); be shortened to DisplayUser(user, Console.Out, new { Email = true, PhoneNumber = false });? Seeing as the DisplayUserOptions type's only purpose is to provide named arguments to the DisplayUser function, it doesn't seem important to keep the class name there. Then again, you could argue this reduces the clarity of the function call for later programmers, but, assuming DisplayUserOptions is a sealed class, the documentation for the DisplayUser function should make it pretty clear what the type of the third argument is. Great use of object initializers. Will be even nicer when C# supports initializing readonly structures with that kind of syntax… Bruce, unfortunately, you can't omit the type name in this scenario.
The reason is that the method needs to *take* a named type, and there isn't a way to do that. Anonymous types don't unify to named types, even if they have the same set of property names. Down with premature optimization! Up with readability & obvious correctness. I've created too many bugs in my career because of bit twiddling. That was a vote for the structs of bools. I prefer the following:

DisplayUserOptions options = new DisplayUserOptions();
options.Email = true;
options.PhoneNumber = false;
DisplayUser(user, Console.Out, options);

However, I do not care for Object Initializers in C# 3. To me, the verbosity of the two is the same because they compile into the exact same code. It's nothing but syntactic sugar. There are also several advantages to not using object initializers: 1) you can more freely add comments, and 2) it's easier to debug. The boolean option is the worst in regard to both readability and (perhaps more importantly) extensibility.
Regarding adding comments, I agree that an object initializer inline in a call is not a good structure for adding comments. If I needed to, I would probably apply the "introduce explaining variable" refactoring to end up with an object initializer assigned to a local: var options = new DisplayUserOptions { // Free to add comments here. PhoneNumber = false; }; DisplayUser(user, Console.Out, options); Regarding debugging. This is a good point. It’s too bad that the VS debugger doesn’t support expression level debugging, so that you could set a breakpoint and step on each of the lines of object initializer. I hope that capability gets added in a future version of the debugger. However, for options type structs like these, I usually find that the expressions are simple enough that I don’t need to break on individual ones. Welcome to the thirty-seventh edition of Community Convergence. Visual Studio 2008 has been released Welcome to the thirty-seventh edition of Community Convergence. Visual Studio 2008 has been released I vote for structs as well 🙂 I also find it a good pattern to derive the struct name by adding ‘Options’ to whatever name the method has. I don’t like _any_ of those options. Personally, I think the clearest option would be an enum with all the possibilities, such as, say: enum DisplayUserOptions { None, PhoneNumber, } Now you don’t have to worry about passing in DisplayUserOptions.Email | DisplayUserOptions.PhoneNumber which sounds rather unclear when read aloud ("Email OR PhoneNumber", when you really mean the customer has "Email AND PhoneNumber"). Kirill, my naming convention varies. If the options are related to a class (like MessageBoxOptions in the BCL), I would rather make it a nest type named "Options". That way I have less things to rename when I refactor. In my example I was using a pseudo-visitor pattern, and so the expection is that there might be a bunch of "DisplayFoo" calls, where each Foo might want a different set of options. 
In this case, I didn't have a good name for the options type. Hi Kyralessa, I agree that having a "combination value" in the enum is probably better than the straight enum, but it still has three drawbacks in my opinion:

1. The implementation of "DisplayUser" becomes a little bit more complex, because you need to check combinations of flags. However, this is a minor issue from an API design point of view, since it affects the implementor instead of the caller.
2. It results in a combinatorial explosion of the possible values. For example, in my case of 31 possible flag values, your enum would need to have 2^31 different values specified explicitly, which can't be good 🙂
3. There is still no logical home for all of the other behaviors and validation associated with it. In my experience, that behavior is almost always _there_, it's just hard to _see_ unless you're looking for it and have a better place to put it.

I generally follow the guidance in the .NET Framework Design Guidelines – but there is a case, I think, for reviewing those given the new syntactic forms that C# 3.0 enables. I have been using bools in simpler situations and enums in slightly more complex ones. But I must admit that the situations I have had to code for never had 2^31 possibilities. I would think that anytime it gets more complex, use a class and assign it the responsibility of figuring out what's right and what's not. Someone suggested enumerating all possibilities in an enum rather than burdening the consumer with providing a|b when what you mean is a and b. The disadvantage that I see is that even though you have all the valid possibilities explicitly stated, it won't stop someone from passing yourenum.a|yourenum.b & yourenum.c or any other combination to your API. How's that going to be handled? Therefore, IMO:

- bools in academic/example situations
- enums in slightly more complex ones
- classes in more complex ones

seems to be a solution to me.
I'd like to see syntax like this:

public class Class1
{
    public bool GetSomething(int iParam, string sParam)
    {
        return true;
    }
}

Which is then called like this (analogous to the new way of calling constructors in 3.0):

Class1 c = new Class1();
bool ret = c.GetSomething(iParam = 0, sParam = "you lose");

Is this what an earlier commenter referred to as "named parameters"? Kenneth, yes, that's named parameters. I like the enum option. Regardless of what you do, when it comes to 31 different options to a single method, it's gonna get complicated. Besides, if you use the struct option, the code to set all 31 fields of a struct will be more than the code to AND and OR 31 enums. Tundey, I agree that the case of having 31 different options is a pathological case. And while it may end up being more code, I think that the code for a struct-based solution is easier to understand, and so I tend to prefer it, even though it might be a little bit more verbose. Interesting, I'd start by thinking of the consumers of my method. Is it something any programmer can use? If so, then I'd probably create more than one method, rather than try anything flash. I'd think about the likely uses of the process and create methods that have an easy set of parameters. A good API should cater for what a consumer wants to do. Rather than force the consumer to pass parameters, give the consumer a method that does what they want in most situations. E.g. DisplayUser, DisplayUserAndEmail, DisplayUserAndPhone and DisplayUserWithEmailAndPhone. Sure, you get long method names, but the use of the method is clear in its name. Plus, it gives the implementer of the methods the ability to do what they like. Separate routines for all of the public methods? Fine. Can consolidate the public methods into one routine? Again fine. If you and you alone are the consumer of your methods, then you reap what you sow. In the hypothetical example of many possible parameters: my knee-jerk response is that your object model is not right.
I've struggled to come up with an example of complexity that would need a heavily parameterised method. The only thing I can think of is something we are talking about at work: allowing users to specify the combination of columns they want to see in a report. The combinations quickly get out of hand and are next to impossible for a single method to deal with. In this case, forget an all-seeing method that can deal with the choices; break down the choices into chunks and have a different method process each chunk. Shadders, I pretty much agree with all of your points, and I frequently do use your strategy of having multiple named methods. This does suffer from the combinatorial explosion problem however. I'll admit that 31 options isn't a good example, but even if you have 6 or so independent options, that's still a lot of methods to maintain. The real-life example where I had 31 was for determining what things to include in completion lists. The options were things like: include static, include instance, include non-public, include properties/events/methods/types/namespaces, etc., but you're right that it isn't a very good way of structuring the code. Kevin, good point about using a nested type for Options! This starts looking somewhat like a Memento to me! Others: of course, it's not like one option is the right way and two others are wrong – you can totally use any approach depending on your situation. It is that thinking about picking the best approach and being explicit about your decision – that's what makes for good code. Also, that's where the Introduce Parameter Object refactoring might come in handy. Finally, here's a post about 'named arguments' in C#:
However, I think that using explicit combination values in enums are not a good idea when there are more than 3 or 4 values (2^4), because it tends to complicate things from implementer perspective, and from caller perspective (should I look for enum.option1ANDoption2ANDoption3 or option2ANDoption1ANDoption3 etc…) OK this can be solved by using a convention.. For the function name option, it suffers (in my point of view) from the unability of dissociating option declaration from function call. In fact, there is chance that it will ultimately only move the issue. if(checkBoxEmail.Checked && checkBoxName.Checked) { DisplayEmailANDMail(user); } elseif…. elseif…. elseif…. I prefer DisplayUserOptions options; if(checkBoxEmail.Checked) { options.Email = true; } if(checkBoxName.Checked) { options.Name = true; } DisplayUser(user,option) I find it easier (even more easier if we want to add an option to display the user’s phonenumber) I use techniques 1 and 2 with the same concerns. There will be no proper solution unless named parameters are made part of C# –something I’ve long wished for. So that’s my vote: "Technique 4" of a better future world: C# named parameters. I prefer using the enum with params, so the call would be something like this: DisplayUser(user, Console.Out, DisplayUserOptions.Email, DisplayUserOptions.PhoneNumber); This way the caller doesn’t have to know bitwise operations (for small list of options). You still may use bitwise operations to let the code more readble in a very large list of options. 
Following is an example of code using enums/params/extension methods. Using this extension method (which could be improved with generics):

public static class DisplayUserOptionsExtension
{
    public static Boolean In(this DisplayUserOptions option, params DisplayUserOptions[] options)
    {
        DisplayUserOptions selectedOptions = DisplayUserOptions.None;
        foreach (DisplayUserOptions selectedOption in options)
        {
            selectedOptions |= selectedOption;
        }
        return (selectedOptions & option) == option;
    }
}

we can use code like this to call the method:

DisplayUser(user, Console.Out, DisplayUserOptions.Email, DisplayUserOptions.PhoneNumber);

And the implementation of the method is not difficult either with the extension method:

public void DisplayUser(User user, TextWriter textWriter, params DisplayUserOptions[] options)
{
    if (DisplayUserOptions.Email.In(options))
    {
        // print e-mail
    }
}

I tend to agree with Ricardo. This sort of dependency injection keeps the complexity of the parameters with the parameters. This object can grow to deal with changes without affecting the calling and called function(s). I think that wherever possible, complexity and/or functionality should be encapsulated. Of course, this approach introduces complexity in terms of implementation, but as Shadders has pointed out, we reap what we sow. I often find myself in both consumer and provider roles. I am lazy, so I tend to work overly hard to minimize the work I will have to do in the future. I think of this as an investment which has the largest return in terms of extensibility and containment of possible defects. Riccardo, I hadn't considered the use of params enums. It's very interesting, I'll have to think about it more.
C++ used to use constructs like this:

    union DisplayOptions
    {
        struct
        {
            bool Email:1;
            bool PhoneNumber:1;
        };
        int BitFlags;
    };

...which, although a little ugly to define, gave performance plus readability. Is there no C# equivalent?

I can think of 1.5 alternatives. The 0.5 is to use bools, but moderating the downside of Foo(true) by supplying a couple of consts:

    const bool kShowMail = true;
    const bool kHideMail = false;

The true alternative (and the pattern I typically use) is non-bitfield enums:

    enum DisplayEmailOption { Show, Hide }
    enum DisplayPhoneOption { Show, Hide }

Type-safe, and expandable both to other things (DisplayAddressOption) and to new values:

    enum DisplayEmailOption { Show, Hide, ShowErrorOnly }

Of the three options given, I'd go for (d) None of the above. For the simple case, I'd just create two different, well-named methods instead of having the boolean parameter. For more complex cases, I create a MethodObject to capture the configuration and required execution. To make things readable, I'll often use a static method on a utility class plus some method chaining. For example:

    Display.User(user).WithEmail().On(stream);
    Display.User(user).WithPhone().WithAddress().On(stream);

Requires a little bit of setup work, but is very clean to read; plus, IntelliSense can help by showing what's valid.

Hey Bevan, thanks for the comment. Your second approach looks a lot like the Fluent Interface approach that I mention in my follow-up post. I agree that it's a great way too, since you can create an API that really flows together well.

I generally prefer enums; I have never found bitwise operations particularly complicated or burdensome, and the ease of use and readability on the consumer side is a huge plus.
The only time I would move to using an options class would be if, as you pointed out in your post, there was the possibility of complex interactions between options, which is generally not very common; and even then, this can often be overcome by factoring your options into two or three enums where the individual options address a common issue. I have often thought that the Reflection APIs could have done much better in this area.

None of the above work well with web services, where you need the caller and callee to be as independent as possible. Your client may be several versions behind the server, which is an extreme version of late binding, I guess. Also, callers will likely be using a different programming language. So I vote for ordinary bitmaps (not even enums).

@Pete: .NET enums map directly onto a bitmap for purposes of exposing the method to external callers. Why not use the enum from within .NET to make your life easier?

I prefer the enums any time. Once you get the hang of how bitmasking works, it's all very easy. What I usually do is use the enum as a method argument and then declare local boolean variables for each of the enum items and initialize them with the bitmask values. For example, if my enum has three items called READ=1, WRITE=2, and APPEND=4:

    bool read = ((byte)(enumParam & myEnum.READ)) > 0;
    bool write = ((byte)(enumParam & myEnum.WRITE)) > 0;
    bool append = ((byte)(enumParam & myEnum.APPEND)) > 0;

If there are too many enum items, or each value is only used once, then I'd probably do the bit-masking in place rather than declaring local variables. I would probably be casting to int or long or whatever base type my enum needs to accommodate all enum items.

#1 and #3 should definitely work fine in a web service. When making changes to your web service, you must make sure that your changes will not break earlier client versions if backward compatibility is required; in that respect, these techniques are no different from anything else.
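The bitmask idiom discussed above is language-agnostic; since the thread is about C#, here is the same READ/WRITE/APPEND example sketched in Python (purely for illustration, using the standard library's enum.IntFlag):

```python
from enum import IntFlag

class FileMode(IntFlag):
    NONE = 0
    READ = 1
    WRITE = 2
    APPEND = 4

def describe(mode: FileMode) -> list:
    # Unpack the combined flags into local booleans, mirroring
    # the C# snippet in the comment above.
    read = bool(mode & FileMode.READ)
    write = bool(mode & FileMode.WRITE)
    append = bool(mode & FileMode.APPEND)
    return [name for name, on in
            [("read", read), ("write", write), ("append", append)] if on]

print(describe(FileMode.READ | FileMode.APPEND))
```

The caller ORs the items together, and the callee ANDs them back out, exactly as in the C# version.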
If you use technique #1, then when you add a parameter, you will need to create a separate method with the new parameters and keep the old one intact for compatibility. If you use #3, you can add more fields to the structure without breaking older versions, unless the new field is required by your logic, which wouldn't be the case if you're trying to keep backwards compatibility. Now, the best method to use in a web service, IMO, would be the bitmasking one (#2). Of course, it won't work out of the box with web services. You would have at least two choices:

1) You could create a custom serializer/deserializer that serializes enums as their base type (an integer type), or serializes all items that are ORed together as a delimited list. I guess the latter would make more sense, since that way the client won't need the codes for each item.

2) Simply make the parameter an integer type and pass the items ORed together and then cast to int, as in "(int)(myEnum.ITEM1 | myEnum.ITEM2)". You can then AND the parameter value with each of the enum items to determine whether the item has been ORed into the parameter value.

Bevan: Functionally I don't see any problems with your approach, but I don't think it's that readable. Functionally, that's exactly the same as using properties, since a property is internally just a method, like C++ getter or setter methods. However, .NET's OOP model uses properties for this purpose, so a .NET programmer will expect this to be done with properties; doing it with methods will be rather confusing for a .NET programmer. If you must use methods for some reason, you might want to use the standard convention used in C++ and other languages for getters and setters (use a get or set prefix; the setter takes in a value and the getter returns it). A few questions:

1) How can you tell if the WithEmail() method has already been called for an object?
2) Do you have a WithoutEmail() method to clear that flag?

Whatever the answer is, properties make more sense to me.
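Circling back to the two web-service serialization choices sketched above (flags as a delimited list of names vs. flags as a raw integer), both fit in a few lines; here is a rough illustration in Python, reusing the earlier READ/WRITE/APPEND flag names (the wire formats are invented for the example):

```python
from enum import IntFlag

class Mode(IntFlag):
    READ = 1
    WRITE = 2
    APPEND = 4

combined = Mode.READ | Mode.APPEND

# Choice 1: serialize the ORed items as a delimited list of names,
# so the client does not need the numeric code for each item.
as_list = ",".join(flag.name for flag in Mode if flag & combined)

# Choice 2: serialize the base integer type and let the client AND it
# against each known item.
as_int = int(combined)

print(as_list)  # READ,APPEND
print(as_int)   # 5

# Client side: both representations round-trip back to the same flags.
from_int = Mode(as_int)
from_list = Mode(0)
for name in as_list.split(","):
    from_list |= Mode[name]
assert from_int == from_list == combined
```

The name-list format is more self-describing across versions; the integer is more compact but couples the client to the server's numeric codes.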
I just realized that here I'm discussing the use of methods (used as getters or setters) vs. using properties in .NET, which functionally is the same thing; but this post is talking about method arguments vs. method calls to set the arguments, and there is a huge functional difference in that case. #1: you cannot use methods to provide values for another method unless you make the variable global, which, unless you really need it global for another reason, is a very bad idea. #2: even if the method is in fact an instance method and you do need to save the parameter globally for the instance, it's still a bad idea to use a method call just to set a value that could be set on the next call, without the need to allocate and deallocate a stack frame for that method.

@Fernan: Why do you say that enums won't work out of the box with web services? An enum is just an integer type with predefined constants and some additional syntax checking. For purposes of exposing your method to a web service, it's the exact same thing. Enums for the win!

While no size fits all, when the initialization comes to more than a few items, I like the structs. There is nothing to say that bools (which I like) and enums (which I also like) cannot live together in these structs, depending on what is being initialized (say, one item might be a method of formatting the text). When using the structs, we have the option of using object initializers, or of pre-building the struct and filling it member by member before the call that uses it. I also do not like the idea of having a handful of functions to choose from depending on how I want to initialize. I believe that the fewer the functions in a struct/class, the easier it is to maintain and understand. I find that when I browse a struct/class that has a large function base, my eyes start to glaze over. Thank you for posting this. I find it a good thing to go back and question how we do common tasks, and I appreciate seeing how others deal with them.
Up to now, I have only used the enum approach, but structs may be a good alternative if you have many options. But then again: I think "too many options to use an enum" is a big hint to think about the design again. Mutually exclusive options fall into this category as well: why not make two methods instead? In case you go for the complex option parameter, another struct approach might be handy, letting you start with an enum first and migrate to a struct later:

    struct DisplayUserOptions
    {
        static readonly DisplayUserOptions Email = new DisplayUserOptions(1);
        static readonly DisplayUserOptions PhoneNumber = new DisplayUserOptions(2);
        private DisplayUserOptions(int bits) { ... }
        // overload &, | operators
        ...
    }

(I did not try this in real code; it's just a thought.)

It all depends on the complexity. Nothing else. The complexity determines the best suitable method. A boolean parameter would be used in conjunction with a method that is named in such a way that it is obvious what the boolean parameter does:

    MyObject.Reset(true);
    MyObject.Reset(CheckBoxClearSettings.Checked);

The next step in specifying options is the enum, naturally. When options are exclusive, the enum is best suited for the option:

    MyReport.Export(ExportTo.PDF);
    MyReport.Export(ExportTo.File, fileName);

The other situation is the example used in this topic. We're now speaking of "flags":

    DisplayUser(user, DisplayUserFlag.EMail | DisplayUserFlag.Name);

This, however, is only feasible with a limited number of flags, where there is no interdependency between flag bits. The next level is using a class. This obviously offers the most flexible way of specifying parameters:

    public class MyMethodDisplayOptions
    {
        public bool ShowEmail;
        public bool ShowAddress;
        public bool ShowCity;
    }

    enum ExportTo { PDF, Text, Printer, ... }

    public class MyMethodOptions
    {
        public MyMethodDisplayOptions DisplayOptions { get ... }
        public ExportTo ExportTo { get ... }
    }

The goal is to avoid giving a method an incredibly long parameter list.
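The options-class idea sketched above translates directly to other languages; here is a minimal illustration in Python using a dataclass (the names and fields are invented for the example):

```python
from dataclasses import dataclass
import io

@dataclass
class DisplayUserOptions:
    show_email: bool = False
    show_address: bool = False
    show_city: bool = False

def display_user(name: str, options: DisplayUserOptions, out) -> None:
    # The options object keeps the call site readable, and new options
    # can be added later without breaking existing callers.
    out.write(name + "\n")
    if options.show_email:
        out.write("email: (omitted)\n")
    if options.show_city:
        out.write("city: (omitted)\n")

buf = io.StringIO()
display_user("Ada", DisplayUserOptions(show_email=True), buf)
print(buf.getvalue())
```

Because every field has a default, callers only name the options they care about, which is roughly what named parameters would buy in C#.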
It doesn't read well; the code doesn't look smooth. It may take some effort to define the parameter support objects, but it is very readable. It also offers the possibility of embedding intelligence into the options classes, where they can adjust options when other options are specified (mutual exclusiveness can be enforced, etc.). Even exceptions can be raised by the options classes. So, in the end, there's no "best" way by itself. I use all three ways of specifying options. It's just a matter of picking the *right* way for a given situation. The thing to do is to determine how you can keep the method parameter lists short, where option names, enums, or classes are self-explanatory. Happy coding 🙂

This is another vote for method objects. Although the technique is a little obscure at the moment, it has a lot going for it in terms of discoverability, readability, and flexibility. That said, it does take a lot to set up, so unless method objects are really needed, I'd go with named parameters if they're available, although no projects I'm on can use them at the moment.

How about the "move method to another class" refactoring? Create a separate renderer or view class for each domain entity / target output combination:

    public class UserTextStreamRenderer : UserRenderer
    {
        public User UserData { get; set; }
        public bool ShowEmail { get; set; }
        public bool ShowPhone { get; set; }
        // other options...
        public TextWriter TargetStream { get; set; }
        public override void Render() { Render(TargetStream); }
        public void Render(TextWriter stream);
    }

    public abstract class UserRenderer
    {
        public abstract void Render();
    }

To clarify, I would only use the separate renderer class as I proposed if there were many rendering options. In the case of two options, I'd probably use separate methods, but they would be named based on the purpose of the rendering rather than listing the fields that get rendered in the method name.
More things to do in the code snippet I posted:

1) Move common options/properties to the base UserRenderer (out of UserTextStreamRenderer).
2) Add an IsValid() method. Depending on the application, this method may return a collection of validation messages instead of a simple bool.
3) If applicable, the Render method should throw an exception if it is called with invalid options, because the caller should have verified the options with IsValid before calling Render.

I have not used this pattern before, but only because I have not really had the need for many options to control a single method. I'd be interested to see an example where this was actually required in a real-world application.
https://blogs.msdn.microsoft.com/kevinpilchbisson/2007/11/30/coding-styles-bools-structs-or-enums/
Mine is the same cross-browser. I've tested in Firefox 1.5, MSIE 6.0, and Opera (vWhoCaresItsOpera.0). I'm surprised yours is browser-dependent.

===============================
David Worley
Senior Front End Developer
dworley at communityconnect.com
===============================

-----Original Message-----
From: mod_python-bounces at modpython.org [mailto:mod_python-bounces at modpython.org] On Behalf Of Jim Steil
Sent: Thursday, January 26, 2006 4:56 PM
To: mod_python at modpython.org
Subject: Re: [mod_python] Simple Issue, Baffling.

-Jim

Jim Steil wrote:

Jim Gallacher wrote:

David Worley wrote:

To clarify further: is only ever called from an HTML file, via a <link> tag.

I figured you knew that, but one never knows. ;)

The HTML is valid. The HTML page is NOT generated by mod_python. It's just a plain HTML file in the folder. I created it just for testing dynamically created CSS files. The separate issue of "text/html" content is, indeed, handled by another handler. The two are unrelated. When text/css content wouldn't work, I tried something simpler: plain HTML return content. Simply put, req.write(xxx) returns content that my browser does not make use of, whether CSS or HTML, despite the explicit declaration of req.content_type. The browser just thinks it's text. It may be more accurate to say that the server never returns HTML or CSS, rather than saying that the browser never renders the file returned.

Is your handler as simple as you indicated, or is there other stuff going on? Any chance you are calling req.write *before* setting req.content_type? The first call to req.write triggers the sending of the response headers, which contain the Content-Type header. Once you've started calling req.write, changing req.content_type will have no effect. (Hmm, I wonder if we should actually raise an exception...)

I find that tools like wget or netcat are helpful for this sort of thing, since you can dump the response headers as well as the page content.
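In the same spirit as the wget/netcat suggestion above, a few lines of standard-library Python can dump a response's headers and body. This sketch spins up a throwaway local server standing in for the real mod_python one, then fetches from it and prints the headers it actually sent, which is exactly how you would catch a missing or wrong Content-Type:

```python
import http.client
import http.server
import threading

# Stand-in for the real server: serves a fixed CSS body with an
# explicit Content-Type header, like the mod_python handler should.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"body { background-color: red; }"
        self.send_response(200)
        self.send_header("Content-Type", "text/css")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the resource and dump headers plus content, like `wget -S`.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/style.css")
resp = conn.getresponse()
for name, value in resp.getheaders():
    print(name + ": " + value)
body_text = resp.read().decode()
print()
print(body_text)
server.shutdown()
```

If the Content-Type line printed here says text/plain (or is missing), the browser's refusal to apply the stylesheet is explained before you ever look at the handler code.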
If you don't have ready access to these programs, I'm sure you could knock something together from the standard Python lib. Sometimes the simplest problems have the simplest solutions.

Jim

===============================
David Worley
Senior Front End Developer
dworley at communityconnect.com
===============================

-----Original Message-----
From: Jim Gallacher [mailto:jpg at jgassociates.ca]
Sent: Thursday, January 26, 2006 4:15 PM
To: David Worley
Cc: mod_python at modpython.org
Subject: Re: [mod_python] Simple Issue, Baffling

David Worley wrote:

Hello, all. I'm new to mod_python and somewhat new to server-side programming. I've read the documentation, and I can't seem to find out something relatively simple. I'm writing a CSS preprocessor. It's meant to grab a request for a .css file, process another file, and return the result to the browser. The issue I have is that I can't get the handler to return content the browser actually uses. To clarify, I'm running Apache 2.0 and Python 2.4 on Windows XP. So, with the following httpd.conf entry:

    <Directory /some/file/system/directory>
        AddHandler python-program .sss
        PythonHandler switch
        PythonDebug On
    </Directory>

And the following Python code, in switch.py:

    from mod_python import apache

    def handler(req):
        req.content_type = "text/css"
        req.write("""\
    body { background-color: red; }""")
        return apache.OK

This works. It works great. When I request, I get the body declaration above. BUT the browser doesn't use it!

Maybe I don't understand the question, but why would it? The browser only uses the stylesheet to render the page when it's specified in an html <style> tag.

I have the same problem when declaring req.content_type = "text/html": the code is returned properly, but it isn't rendered as HTML. It's just text, as far as the browser is concerned.

You got me there, unless you are saying you are getting the Python code as opposed to the HTML you send with req.write. Are you writing valid HTML?
Any chance there is a typo in req.content_type = 'text/html'? Are you using a different handler for generating the HTML? It would help if you could clarify your problem a little.

Jim

_______________________________________________
Mod_python mailing list
Mod_python at modpython.org

--
Jim Steil
IT Manager
Quality Liquid Feeds
(608) 935-2345
http://modpython.org/pipermail/mod_python/2006-January/020108.html
diagrams-haddock 0.2.2.12

Preprocessor for embedding diagrams in Haddock documentation.

Modules: Diagrams, Diagrams.Haddock

diagrams-haddock is a preprocessor which allows embedding images generated using the diagrams framework within Haddock documentation. The code to generate images is embedded directly within the source file itself (and the code can be included in the Haddock output or not, as you wish). diagrams-haddock takes care of generating SVG images and linking them into the Haddock output.

Installing

Just cabal install diagrams-haddock. If you have any trouble, ask in the #diagrams freenode IRC channel, or file a ticket on the bug tracker.

On the design of diagrams-haddock

Before getting into the details of using diagrams-haddock, it should be noted that diagrams-haddock has been carefully designed so that you only have to maintain a single copy of your source files. In particular, you do not have to maintain one copy of your source files with embedded diagrams code and another copy where the diagrams code has been replaced by images. If you find yourself scratching your head over the quirky ways that diagrams-haddock works, now you will know why.

An important caveat

diagrams-haddock modifies files in place! While we have worked hard to ensure that it cannot make catastrophic changes to your files, you would be wise to only run diagrams-haddock on files under version control so you can easily examine and (if necessary) undo the changes it makes. (Of course, being a conscientious developer, you would never work with source files not under version control, right?)

Adding diagrams to source files

Haddock supports inline links to images with the syntax <<URL>>.
To indicate an image which should be automatically generated from some diagrams code, use the special syntax <<URL#diagram=name&key1=val1&key2=val2&...>>. The URL will be automatically filled in by diagrams-haddock, so when you first create an inline image placeholder you can simply omit it (or put any arbitrary text in its place). For example, you might write

    <<#diagram=mySquare&width=200&height=300>>

indicating an image which should be generated using the definition of mySquare, with a maximum width of 200 and maximum height of 300. (Incidentally, this syntax is used because everything following the # symbol will be ignored by browsers.)

Continuing with the above example, you must also provide a definition of mySquare. You must provide it in a code block, which must be set off by bird tracks (that is, greater-than symbols followed by at least one space). For example:

    -- > mySquare = square 1 # fc blue # myTransf
    -- > myTransf = rotateBy (1/27)

In this case, mySquare has type Diagram SVG R2. Additionally, you may give identifiers of type IO (Diagram SVG R2); in that case the IO action will be run to determine the diagram to render. This can be useful, for example, when producing a diagram built from some external data or using randomness. You can choose to have the code block included in the Haddock output or not, simply by putting it in a Haddock comment or not. Note that the code block defining mySquare can be anywhere in the same file; it does not have to be right before or right after the diagram URL referencing it.

Code block dependency analysis

diagrams-haddock does a simple dependency analysis to determine which code blocks should be in scope while compiling each diagram. First, it locates a code block containing a binding for the requested diagram name. Then, it pulls in any code blocks containing bindings for identifiers referenced by this code block, and so on transitively.
(Note that this analysis is overly simplistic and does not take things like shadowing into account; this may sometimes cause additional code blocks to be included which would not be included with a more careful analysis.)

This has a few implications. First, code blocks containing irrelevant bindings will not be considered. It is common to have code blocks which are intended simply to show some example code; they may not even be valid Haskell. However, as long as such code blocks do not contain any bindings of names used by a diagram, they will be ignored. For example:

    -- The algorithm works by doing the equivalent of
    --
    -- > rep = uncurry replicate
    -- >
    -- > algo = map rep . zip [1..]
    --
    -- as illustrated below:
    --
    -- <<#diagram=algoIllustration&width=400>>
    --
    -- > algoIllustration = ...

The first code block shown above (beginning rep = ...) contains some bindings, but none of those bindings are referenced by any diagram URLs, so the code block is ignored.

Another convenient implication is that supporting code can be put in separate code blocks and even shared between diagrams. For example:

    -- > makeitblue d = d # fc blue # lc blue
    --
    -- Here is a blue circle:
    --
    -- <<#diagram=blueC&width=200>>
    --
    -- > blueC = circle 1 # makeitblue
    --
    -- And here is a blue square:
    --
    -- <<#diagram=blueS&width=200>>
    --
    -- > blueS = square 1 # makeitblue

This also means that diagrams are recompiled only when necessary. For example, if the definition of blueC is changed, only blueC will be recompiled. If the definition of makeitblue is changed, both blueC and blueS will be recompiled.

Invoking diagrams-haddock

Invoking the diagrams-haddock tool is simple: just give it a list of targets, like so:

    diagrams-haddock foo.hs baz/bar.lhs ~/src/some-cabal-directory

For file targets, diagrams-haddock simply processes the given file. Directory targets are assumed to contain Cabal packages, which themselves contain a library.
diagrams-haddock then finds and processes the source files corresponding to all modules exported by the library. (Note that diagrams-haddock does not currently run on unexported modules or on the source code for executables, but if you have a use case for either, just file a feature request; they shouldn't be too hard to add.) Also, if you simply invoke diagrams-haddock with no targets, it will process the Cabal package in the current directory.

diagrams-haddock also takes a few command-line options which can be used to customize its behavior:

-c, --cachedir: When diagrams are compiled, their source code is hashed and the output image stored in a file like 068fe.......342.svg, with the value of the hash as the name of the file. This way, if the source code for a diagram has not changed in between invocations of diagrams-haddock, it does not need to be recompiled. This option lets you specify the directory where such cached SVG files should be stored; the default is .diagrams-cache.

-o, --outputdir: This is the directory into which the final output images will be produced. The default is diagrams.

-d, --distdir: When building diagrams for a cabal package, this is the directory in which diagrams-haddock should look for the setup-config file (i.e. the output of cabal configure). An explicit value for this flag takes precedence; next, diagrams-haddock checks whether there is an active hsenv environment, and if so uses dist_<hsenv name>; otherwise, it defaults to using dist.

-i, --includedirs: diagrams-haddock does its best to process files with CPP directives, even extracting information about where to find #includes from the .cabal file, but sometimes it might need a little help. This option lets you specify additional directories in which diagrams-haddock should look when searching for #included files.

--cppdefines: likewise, this option allows you to specify additional names that should be #defined when CPP is run.
--dataURIs: embed the generated SVG images directly in the source code with data URIs (the default is to generate external SVG files and link to them). See the section below for a discussion of the tradeoffs involved.

-q, --quiet: diagrams-haddock normally prints some logging information to indicate what it is doing; this option silences the output.

Workflow and Cabal setup

There are two ways one may include generated SVG images with your documentation: as data URIs, or as external images. The two options are discussed below, along with the pros and cons of each. Note that in either case, consumers of your library (including Hackage itself) do not need to have diagrams-haddock installed in order to build your documentation.

Using data URIs

If you pass the --dataURIs option to diagrams-haddock, any generated images will be embedded directly in your source file (and hence also in the HTML ultimately produced by Haddock) as data URIs. To use this method:

- Include inline diagrams code and URLs in your source code.
- Run diagrams-haddock --dataURIs.
- Commit the resulting URL changes to your source files.

The benefit of this scheme is that there are no extra files to deal with, and no need to alter your .cabal file in any way. The downside is that it significantly bloats your source code, and may make it extremely inconvenient to edit without some sort of tool support (e.g. an editor that can "collapse" certain sections of the source file).

Using external images

By default, diagrams-haddock generates external SVG image files. This makes for much less invasive changes to your source files, but requires some work to manage the extra files. To use this method:

- Include inline diagrams code and URLs in your source code.
- Run diagrams-haddock.
- Commit the resulting URL changes to your source files, along with the produced SVG files.
- Arrange to have the SVG files installed along with your package's Haddock documentation (more on this below).
The generated SVG files need to be copied in alongside the generated Haddock documentation. There are two ways to accomplish this:

As of Cabal 1.18, the .cabal file format has acquired an extra-doc-files field, specifying files which should be copied in alongside generated Haddock documentation. So the preferred method is to add something like

    extra-source-files: README.md, CHANGES.md, diagrams/*.svg
    extra-doc-files: diagrams/*.svg

to your .cabal file. Note that you must list the generated images in both the extra-source-files field (so they will be included in your package tarball) and the extra-doc-files field (so they will be copied alongside generated Haddock documentation). Hackage is now built on Cabal 1.18, so uploading a package using the extra-doc-files field in this way works just fine.

If you need to make your documentation buildable with a pre-1.18 version of cabal-install, it is possible to take advantage of cabal's system of user hooks to manually copy the images right after the Haddock documentation is generated. Add something like

    build-type: Custom
    extra-source-files: diagrams/*.svg

to your .cabal file, and then put something like the following in your Setup.hs:

    import Data.List (isSuffixOf)
    import Distribution.Simple
    import Distribution.Simple.Setup (Flag (..), HaddockFlags, haddockDistPref)
    import Distribution.Simple.Utils (copyFiles)
    import Distribution.Text (display)
    import Distribution.Verbosity (normal)
    import System.Directory (getDirectoryContents)
    import System.FilePath ((</>))

    -- Ugly hack; logic copied from Distribution.Simple.Haddock
    haddockOutputDir :: Package pkg => HaddockFlags -> pkg -> FilePath
    haddockOutputDir flags pkg = destDir
      where
        baseDir = case haddockDistPref flags of
                    NoFlag -> "."
                    Flag x -> x
        destDir = baseDir </> "doc" </> "html" </> display (packageName pkg)

    diagramsDir = "diagrams"

    main :: IO ()
    main = defaultMainWithHooks simpleUserHooks
      { postHaddock = \args flags pkg lbi -> do
          dias <- filter ("svg" `isSuffixOf`) `fmap` getDirectoryContents diagramsDir
          copyFiles normal (haddockOutputDir flags pkg)
                    (map (\d -> ("", diagramsDir </> d)) dias)
          postHaddock simpleUserHooks args flags pkg lbi
      }

It may not be pretty, but it works!

File encodings

For now, diagrams-haddock assumes that all .hs and .lhs files are encoded using UTF-8. If you would like to use it with source files stored using some other encoding, feel free to file a feature request.

The diagrams-haddock library

For most use cases, simply using the diagrams-haddock executable should get you what you want. Note, however, that the internals are also exposed as a library, making it possible to do all sorts of crazy stuff you might dream up. Let us know what you do with it!

Reporting bugs

Please report any bugs, feature requests, etc., on the GitHub issue tracker.
https://www.stackage.org/lts-0.7/package/diagrams-haddock-0.2.2.12
mekk.nozbe 0.4.2

Nozbe interface wrapper.

mekk.nozbe wraps (noticeable parts of) the Nozbe API as Python functions. It uses both the old, officially published API () and the new (not yet officially documented) "Sync API". Neither of those APIs is fully covered (the module supports the functions which were working in early 2009 and which I needed), but I still successfully use the library to extract projects, contexts, and tasks from Nozbe, and to create new (or update existing) items.

The code currently uses the Twisted network interface (that means returning deferreds, etc.). Well, I like Twisted. I am considering providing a urllib-based synchronous API as an alternative; I just need some motivation.

nozbetool

Apart from the library, the nozbetool script is bundled. Run:

    nozbetool --help

for details. Most common usages:

    nozbetool export --csv=file.csv --user=YourNozbeUsername

(export to .csv) or:

    nozbetool export --json=file.json --user=YourNozbeUsername --completed

(export to .json; completed actions are included). Note: only .json export contains notes!

Development

Development is tracked on

Example

A simple example:

    from mekk.nozbe import NozbeApi, NozbeConnection
    from twisted.internet import reactor, defer

    # The API key serves as an authentication token.
    # Check for your own at the Nozbe extras page ().
    # Note that publishing it is equivalent to publishing the password.
    API_KEY = "grab your own from Nozbe"

    @defer.inlineCallbacks
    def make_some_calls():
        connection = NozbeConnection(API_KEY)
        nozbe_client = NozbeApi()

        print "* Some projects"
        projects = yield nozbe_client.get_projects()
        for project in projects[:3]:
            print project
        print

        print "* Some contexts"
        contexts = yield nozbe_client.get_contexts()
        for context in contexts[:3]:
            print context
        print

        print "* Some tasks"
        tasks = yield nozbe_client.get_tasks()
        for task in tasks[:3]:
            print task
        print

        print "Adding example task"
        yield nozbe_client.add_task(
            u"Example task made using script",
            project_hash = projects[0]['hash'],
            context_hash = contexts[0]['hash'],
            next = 1)

    @defer.inlineCallbacks
    def main():
        try:
            yield make_some_calls()
        finally:
            reactor.stop()

    reactor.callLater(0, main)
    reactor.run()
https://pypi.python.org/pypi/mekk.nozbe/0.4.2
CC-MAIN-2014-10
refinedweb
345
52.26